Squash the attributes into the pointer's integer representation, making
each entry a single uintptr_t that packs 2 attribute bits at the bottom
with the pointer above them. These low bits are currently unused thanks
to alignment requirements.
Configure Dynarmic to mask out these bits on pointer reads.
While we are at it, remove some unused attributes carried over from
Citra.
Read/Write and other hot functions use a two-step unpacking process that
is less readable, in order to stop MSVC from emitting an extra AND
instruction in the hot path:
mov rdi,rcx
shr rdx,0Ch
mov r8,qword ptr [rax+8]
mov rax,qword ptr [r8+rdx*8]
mov rdx,rax
-and al,3
and rdx,0FFFFFFFFFFFFFFFCh
je Core::Memory::Memory::Impl::Read<unsigned char>
mov rax,qword ptr [vaddr]
movzx eax,byte ptr [rdx+rax]
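The scheme itself is plain pointer tagging. A minimal sketch of the idea,
with illustrative names rather than the actual yuzu code, assuming host
pointers are at least 4-byte aligned:

    #include <cstdint>

    // Illustrative only: the low 2 bits of a page-table entry carry the
    // attribute, the remaining bits carry the host pointer.
    enum class PageType : std::uintptr_t { Unmapped = 0, Memory = 1, Special = 2 };

    inline std::uintptr_t Pack(std::uint8_t* pointer, PageType type) {
        return reinterpret_cast<std::uintptr_t>(pointer) | static_cast<std::uintptr_t>(type);
    }

    // Two-step unpack mirroring the codegen above: test the attribute in the
    // low byte first, then clear the low bits to recover the pointer.
    inline PageType UnpackType(std::uintptr_t raw) {
        return static_cast<PageType>(raw & 3);
    }

    inline std::uint8_t* UnpackPointer(std::uintptr_t raw) {
        return reinterpret_cast<std::uint8_t*>(raw & ~std::uintptr_t{3});
    }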
Removes all remaining usages of the global system instance. After this,
work can begin on migrating the system to being constructed and managed
entirely by the various frontends.
Unicorn has long since lost most of its use, as Dynarmic has gained
support for handling most instructions. At this point, any further issues
encountered should be used to make Dynarmic better.
This also allows us to remove our dependency on Python.
Allows some implementations to avoid completely zeroing out the internal
buffer of the optional, and instead only set the validity byte within
the structure.
This also makes how we return empty optionals consistent.
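As a hedged illustration (assuming the change concerns std::optional
returns; the function here is made up):

    #include <optional>

    // Returning std::nullopt only needs to set the optional's validity flag,
    // whereas value-initializing the optional may zero the whole internal
    // buffer on some implementations. It also reads as the consistent way to
    // express "no value".
    std::optional<int> FindValue(bool found) {
        if (!found) {
            return std::nullopt;
        }
        return 42;
    }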
This commit implements CPU interrupts, replaces cycle timing with host
timing, reworks the kernel's scheduler, introduces the Idle and Suspended
states, recreates the bootmanager, and initializes the multicore system.
We can also allow unicorn to be constructed in 32-bit mode or 64-bit
mode to satisfy the need for both interpreter instances.
Allows this code to compile successfully on non-x86-64 architectures.
The Write functions are used slightly less than the Read functions,
which makes them a bit nicer to move over.
The only adjustments we really need to make here are to Dynarmic's
exclusive monitor instance. We need to keep a reference to the currently
active memory instance to perform exclusive read/write operations.
With all of the trivial parts of the memory interface moved over, we can
get right into moving over the bits that are used.
Note that this does require GetInstance from the global system instance
to be used within hle_ipc.cpp and the gdbstub. This is
fine for the time being, as they both already rely on the global system
instance in other functions. These will be removed in a change directed
at both of these respectively.
For now, it's sufficient, as it still accomplishes the goal of
de-globalizing the memory code.
Amends a few interfaces to be able to handle the migration over to the
new Memory class by passing the class by reference as a function
parameter where necessary.
Notably, within the filesystem services, this eliminates two ReadBlock()
calls by using the helper functions of HLERequestContext to do that for
us.
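A hedged, self-contained sketch of the kind of change described; the
types here are toy stand-ins for the real HLERequestContext, whose
ReadBuffer() helper performs the guest-memory block read internally:

    #include <cstdint>
    #include <vector>

    // Toy stand-in for the request context.
    struct HLERequestContext {
        std::vector<std::uint8_t> request_data;
        std::vector<std::uint8_t> ReadBuffer() const { return request_data; }
    };

    // Previously (conceptually): size a local buffer and issue an explicit
    // ReadBlock() against the buffer descriptor's address. Now the context's
    // helper does the read for us.
    std::vector<std::uint8_t> GetWritePayload(const HLERequestContext& ctx) {
        return ctx.ReadBuffer();
    }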
This was initially necessary when AArch64 JIT emulation was in its
infancy and all memory-related instructions weren't implemented.
Given the JIT now has all of these facilities implemented, we can remove
these functions from the CPU interface.
Our initialization process is a little wonkier than one would expect when
it comes to code flow. We initialize the CPU last, whereas on hardware
the CPU obviously needs to come first, otherwise nothing else would work,
and we have code that adds checks to work around this.
For example, in the page table setting code, we check to see if the
system is turned on before we even notify the CPU instances of a page
table switch. This results in dead code (at the moment), because the
only time a page table switch will occur is when the system is *not*
running, preventing the emulated CPU instances from being notified of a
page table switch in a convenient manner (technically the code path
could be taken, but we don't emulate the process creation svc handlers
yet).
This moves the thread creation into its own member function of the core
manager and restores a little order (and predictability) to our
initialization process.
Previously, in the multi-threaded cases, we'd kick off several threads
before even the main kernel process was created and ready to execute (gross!).
Now the initialization process is like so:
Initialization:
1. Timers
2. CPU
3. Kernel
4. Filesystem stuff (kind of gross, but can be amended trivially)
5. Applet stuff (ditto in terms of being kind of gross)
6. Main process (will be moved into the loading step in a following
change)
7. Telemetry (this should be initialized last in the future).
8. Services (4 and 5 should ideally be alongside this).
9. GDB (gross. Uses namespace scope state. Needs to be refactored into a
class or booted altogether).
10. Renderer
11. GPU (will also have its threads created in a separate step in a
following change).
Which... isn't *ideal* per se; however, untangling the wonky intertwining
of CPU state initialization from this mix gets rid of most of the
footguns in our initialization process.
Adjusts the interface of the wrappers to take a system reference, which
allows accessing a system instance without using the global accessors.
This also allows getting rid of all global accessors within the
supervisor call handling code. While this does make the wrappers
themselves slightly more noisy, this will be further cleaned up in a
follow-up. This eliminates the global system accessors in the current
code while preserving the existing interface.
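A hedged, self-contained sketch of the shape of the change; the types and
register handling are toy stand-ins, not the actual SVC wrapper machinery:

    #include <array>
    #include <cstdint>

    // Toy stand-in for Core::System and the guest register file.
    struct System {
        std::array<std::uint64_t, 31> regs{};
    };

    // Hypothetical SVC implementation that now receives the system by reference.
    std::uint64_t SvcSleepThread(System& system, std::uint64_t nanoseconds) {
        (void)system;       // would consult the scheduler through `system`
        (void)nanoseconds;
        return 0;
    }

    // The wrapper also takes System&, pulls arguments out of the guest
    // registers, and writes the result back; no global accessor is required.
    template <std::uint64_t (*func)(System&, std::uint64_t)>
    void SvcWrap(System& system) {
        system.regs[0] = func(system, system.regs[0]);
    }

    // Usage: SvcWrap<SvcSleepThread>(system);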
Applies the override specifier where applicable. In the case of
destructors that are defaulted in their definition, they can
simply be removed.
This also removes the unnecessary inclusions being done in audin_u and
audrec_u, given their close proximity.
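For reference, a generic example of the pattern being applied (class
names are illustrative, not the actual service classes):

    class ServiceBase {
    public:
        virtual ~ServiceBase() = default;
        virtual void Handle() = 0;
    };

    class SomeService final : public ServiceBase {
    public:
        // `override` documents the intent and lets the compiler flag mismatches.
        void Handle() override;
        // A destructor that would merely be `= default` can simply be omitted.
    };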
Gets rid of the largest set of mutable global state within the core.
This also paves the way for eliminating usages of GetInstance() on the
System class as a follow-up.
Note that no behavioral changes have been made, and this simply extracts
the functionality into a class. This also has the benefit of making
dependencies on the core timing functionality explicit within the
relevant interfaces.
Places all of the timing-related functionality under the existing Core
namespace to keep things consistent, rather than having the timing
utilities sitting in its own completely separate namespace.
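A rough sketch of the resulting shape, with made-up member names standing
in for the actual interface:

    #include <cstdint>

    namespace Core::Timing {

    // Illustrative only: the previously global timing state becomes a class
    // owned by the System and handed to consumers by reference.
    class CoreTiming {
    public:
        void AddTicks(std::uint64_t ticks) { total_ticks += ticks; }
        std::uint64_t GetTicks() const { return total_ticks; }

    private:
        std::uint64_t total_ticks = 0;
    };

    } // namespace Core::Timing

    // Interfaces that need timing now state the dependency explicitly.
    void DoWork(Core::Timing::CoreTiming& core_timing) {
        core_timing.AddTicks(100);
    }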
Like the barrier, this is owned entirely by the System and will always
outlive the encompassing state, so shared ownership semantics aren't
necessary here.
There's no real need to use a shared pointer in these cases; it only
makes object management more fragile in terms of how easy it would be to
introduce reference cycles. Instead, just do the simple thing of using a
regular pointer. Much of this is just a holdover from Citra anyway.
It also doesn't make sense from a behavioral point of view for a
process' thread to prolong the lifetime of the process itself (the
process is supposed to own the thread, not the other way around).
Many of the member variables of the thread class aren't even used
outside of the class itself, so there's no need to make those variables
public. This change follows in the steps of the previous changes that
made other kernel types' members private.
The main motivation behind this is that the Thread class will likely
change in the future as emulation becomes more accurate, and letting
random bits of the emulator access data members of the Thread class
directly makes it a pain to shuffle around and/or modify internals.
Having all data members public like this also makes it difficult to
reason about certain bits of behavior without first verifying what parts
of the core actually use them.
Everything being public also generally follows the tendency for changes
to be introduced in completely different translation units that would
otherwise be better introduced as an addition to the Thread class'
public interface.
Makes the public interface consistent in terms of how accesses are done
on a process object. It also makes it slightly nicer to reason about the
logic of the process class, as we don't want to expose everything to
external code.
Within the kernel, the thread context also includes member variables for
the floating-point status register and TPIDR, so we should do the same
here to match it.
While we're at it, also fix up the size of the struct and add a static
assertion to ensure it always stays the correct size.
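A hedged sketch of the idea; the field layout and the asserted size below
are illustrative, not necessarily the actual structure:

    #include <array>
    #include <cstdint>

    // Illustrative AArch64 thread context: general-purpose registers plus the
    // vector registers, FPCR/FPSR, and TPIDR, with a static_assert pinning the
    // total size so accidental layout changes are caught at compile time.
    struct ThreadContext {
        std::array<std::uint64_t, 31> cpu_registers{};
        std::uint64_t sp = 0;
        std::uint64_t pc = 0;
        std::uint32_t pstate = 0;
        std::uint32_t padding = 0;
        std::array<std::array<std::uint64_t, 2>, 32> vector_registers{};
        std::uint32_t fpcr = 0;
        std::uint32_t fpsr = 0;
        std::uint64_t tpidr = 0;
    };
    static_assert(sizeof(ThreadContext) == 0x320, "ThreadContext has incorrect size.");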
Previously the second half of the value being written would overwrite
the first half. Thankfully this wasn't a bug that was being encountered,
as the function is currently unused.
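A hedged sketch of the fix, assuming a Write128-style helper that stores
the value as two 64-bit halves; the important part is that the second
half lands at offset 8 instead of overwriting the first:

    #include <cstdint>
    #include <cstring>
    #include <utility>

    using u128 = std::pair<std::uint64_t, std::uint64_t>;

    // Before (conceptually): both halves were written to the same offset, so
    // the second half overwrote the first. After: the second half goes to
    // base + 8.
    void Write128(std::uint8_t* base, const u128& value) {
        std::memcpy(base, &value.first, sizeof(value.first));
        std::memcpy(base + 8, &value.second, sizeof(value.second));
    }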
This modifies the CPU interface to more accurately match an
AArch64-supporting CPU as opposed to an ARM11 one. Two of the methods
don't even make sense to keep around for this interface, as Advanced SIMD
is used rather than VFP in the primary execution state. This is
essentially a modernization change that should have occurred from the
get-go.
A follow-up to e2457418da which replaces most of the includes in the core
header with forward declarations.
This means that if any of the headers core.h previously included changes,
the bulk of the core no longer needs to be rebuilt, since core.h is quite
a prevalent inclusion.
This should make turnaround for changes much faster for developers.
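In generic terms (the names below are illustrative, not the actual core.h
contents), the change swaps includes for forward declarations wherever
the header only deals in references or pointers:

    // Before: core.h included the full definition, e.g.
    //   #include "core/hle/kernel/process.h"
    // so any change to that header rebuilt every file that includes core.h.

    // After: a forward declaration is enough; core.cpp keeps the real include.
    namespace Kernel {
    class Process;
    }

    class System {
    public:
        Kernel::Process* CurrentProcess();

    private:
        Kernel::Process* current_process = nullptr;
    };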
550d662 load_store_exclusive: Define s == t state to be Constraint_NONE
0b69381 A64/translate: Allow for unpredictable behaviour to be defined
6d236d4 system: Implement MRS CNTFRQ_EL0
6cbb6fb A32/testenv: Add missing headers
6729328 externals: Update xbyak to v5.67
1812bd2 Squashed 'externals/xbyak/' changes from 2794cde7..671fc805
9a95802 externals: Document subtrees
714a840 A64: Implement SQ{ADD, SUB}, and UQ{ADD, SUB}'s vector variants
8cab459 A64: Implement UQADD/UQSUB's scalar variants
18a8151 ir: Add opcodes for unsigned saturating add and subtract
a5660ee x64/reg_alloc: Use type alias for array returned by GetArgumentInfo()
29489b5 ir/value: Use type alias CoprocessorInfo for std::array<u8, 8>
e23ba26 status_register_access: Add support for bits 0 and 1 of mask to MSR
55190bd fuzz_with_unicorn: Split utility functions into fuzz_util
23b049d A32/translate/load_store: Correct detection of writeback
7ec9f15 A32/translate: Add TranslateSingleInstruction
efeecb4 A32/ir_emitter: Bug fix: IREmitter::ExceptionRaised using incorrect opcode
08d1d19 A32/decoders: Split instruction list into include file
2d929cc tests: Refactor unicorn_emu to allow for A32 unicorn
f672368 microinstruction: Improve assert messages
7ebff50 emit_x64_vector: EmitVectorNarrow16: AVX512 implementation
edce230 emit_x64_vector: EmitVectorNarrow32: prefer pblendw to loading constant
We divide the number of ticks to add by the number of cores (4) to obtain a rough estimate of the actual number of ticks added. This assumes that all 4 cores are doing similar work. Previously we were adding ~4 times the number of ticks, making games think that time was passing far too fast.
This lets us bypass certain hangs in some games like Breath of the Wild.
We should modify our CoreTiming to support multiple cores (both running in a single thread, and in multiple host threads).
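A minimal sketch of the adjustment, with illustrative names for the
constant and function:

    #include <cstdint>

    constexpr std::uint64_t NUM_CPU_CORES = 4;

    // Each core reports the ticks it executed against the shared clock. Since
    // all cores are assumed to be doing similar work, only the per-core share
    // is added so emulated time doesn't advance ~4x too fast.
    void AddTicks(std::uint64_t& global_ticks, std::uint64_t ticks) {
        global_ticks += ticks / NUM_CPU_CORES;
    }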