Instead of handles / capabilities having numeric ids that are only valid
for the owning process, they now live in a global system capabilities
table. This will allow capabilities to be specified in IPC that doesn't
need to be kernel-controlled.
Processes will still need to be granted access to given capabilities,
but that can become a simpler system call than the current method of
sending them through mailbox messages (and worse, having to translate
every one into a new capability, as was the case before). To track
which handles a process has access to, a new node_set based on node_map
provides efficient storage and lookup of handles.
When node_map grew, it was not properly applying the fixup routine to
non-moved elements. This fixes the grow algorithm to:
1. Realloc the array and set all new slots to empty/invalid
2. Check each old slot and remove/reinsert the item if it exists and its
optimal slot is later in the array than its current slot
3. Reverse-iterate the original slots and call fixup() on empty slots to
keep items from being located after a more-optimal empty slot
Also fixed the fixup() function so it no longer needs to be called in a
loop, since it's only ever used one way: given an empty slot, it loops
until it hits another empty slot or an optimally-placed item.
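A minimal sketch of the grow sequence described above, assuming an
open-addressing table where every item sits at or after its hash-derived
"optimal" slot. The slot layout and names are stand-ins, not the real
node_map code, and insert()/fixup() are only declared here with their
contracts in comments:

    #include <cstddef>
    #include <cstdint>
    #include <cstdlib>
    #include <cstring>

    struct slot { uint64_t key; uint64_t val; };  // key == 0 means the slot is empty

    struct map_sketch
    {
        slot *m_slots;
        size_t m_capacity;

        // Hash-derived "optimal" slot for a key at the current capacity
        size_t optimal(uint64_t key) const;

        // Insert an item at or after its optimal slot (linear probing)
        void insert(uint64_t key, uint64_t val);

        // Given an empty slot, pull later items back over it, stopping when
        // it hits another empty slot or an item already in its optimal slot
        void fixup(size_t empty_slot);

        void grow()
        {
            size_t old_cap = m_capacity;
            m_capacity *= 2;

            // 1. Realloc the array and mark all of the new slots empty/invalid
            m_slots = static_cast<slot*>(realloc(m_slots, m_capacity * sizeof(slot)));
            memset(m_slots + old_cap, 0, (m_capacity - old_cap) * sizeof(slot));

            // 2. Any item whose optimal slot (under the new capacity) is later
            //    in the array than its current slot gets removed and reinserted
            for (size_t i = 0; i < old_cap; ++i) {
                slot s = m_slots[i];
                if (s.key != 0 && optimal(s.key) > i) {
                    m_slots[i].key = 0;
                    insert(s.key, s.val);
                }
            }

            // 3. Reverse-iterate the original slots, fixing up around each
            //    empty one so no item is left after a more-optimal empty slot
            for (size_t i = old_cap; i > 0; --i) {
                if (m_slots[i-1].key == 0)
                    fixup(i - 1);
            }
        }
    };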
The allocator is an interface for types that expose allocator functions
for use in container templates like node_map (usage coming soon).
Also added an implementation backed by the kernel heap allocator.
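Roughly the shape of the interface, sketched with assumed names; the
method names and the kernel heap entry points here are placeholders, not
the actual declarations:

    #include <cstddef>

    // Assumed kernel heap entry points, declared only for the sketch
    void * kernel_heap_alloc(size_t size);
    void kernel_heap_free(void *p);

    class allocator
    {
    public:
        virtual void * allocate(size_t size) = 0;
        virtual void free(void *p) = 0;
    };

    // Implementation backed by the kernel heap
    class heap_backed_allocator :
        public allocator
    {
    public:
        void * allocate(size_t size) override { return kernel_heap_alloc(size); }
        void free(void *p) override { kernel_heap_free(p); }
    };

Containers like node_map can then take an allocator rather than calling
the heap directly.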
The heap_allocator::get_free(order) function returns a reference to the
head pointer of the given freelist, so that it can be manipulated.
However, split_off was also taking a reference to a pointer as an out
param - passing the freelist head reference in here caused split_off to
modify the freelist itself.
I cleaned up a bunch of the places where the freelist pointers were
being touched, to make the usage more explicit.
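A contrived illustration of the aliasing hazard (not the real
heap_allocator code; the block layout and names are invented for the
example):

    #include <cassert>

    struct block { block *next; };

    block *freelists[8];                        // one head pointer per order

    block *& get_free(unsigned order) { return freelists[order]; }

    // Splits 'b' and returns the remainder through the out-param
    void split_off(block *b, block *&remainder) {
        remainder = b + 1;                      // stand-in for the real split
    }

    int main() {
        block storage[2] = {};
        freelists[3] = &storage[0];

        // Buggy pattern: the out-param aliases the freelist head, so
        // split_off silently rewrites freelists[3].
        split_off(freelists[3], get_free(3));
        assert(freelists[3] == &storage[1]);    // the head was clobbered

        // Explicit pattern: write into a local, then update the list on purpose.
        freelists[3] = &storage[0];
        block *remainder = nullptr;
        split_off(freelists[3], remainder);
        freelists[3] = remainder;
    }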
Adding pretty printers to aid in debugging:
* For the cap_table type so that `p g_cap_table` displays a neat table
* For node_sets of handles to easily see what handles a process owns
* For util::vector to include its contents in the output
Created a new util/node_map.h that implements a map that grows in-place.
This is now used for tracking blocks' size orders, instead of a header
at the start of the memory block. That lets the whole buddy block be
handed out, allowing page-aligned (or greater) blocks to be requested
from the heap.
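Conceptually the heap now keeps the order out-of-band, something like
the following (the field name and template parameters are assumptions
for the sketch):

    #include <cstddef>
    #include <cstdint>

    template <typename K, typename V> class node_map;  // util/node_map.h

    class heap_sketch
    {
        // Block start address -> buddy size order. Keeping this out of band
        // means no header occupies the front of the block, so the block's
        // own alignment (a page or larger) is preserved.
        node_map<uintptr_t, uint8_t> *m_block_orders;
    };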
The kernel log levels are now numerically reversed so that more-verbose
levels can be added at the end. Replaced 'debug' with 'verbose', and
added a new 'spam' level.
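Sketch of the intent (the actual names and values may differ):

    #include <cstdint>

    enum class log_level : uint8_t {
        fatal   = 0,
        error   = 1,
        warn    = 2,
        info    = 3,
        verbose = 4,   // replaces 'debug'
        spam    = 5,   // new: even chattier than verbose
        // more-verbose levels can now be appended without renumbering
    };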
lld started creating ELF files with OSABI set to GNU instead of SysV.
Make sure to pass the option telling lld we want plain SysV binaries.
Also, add some debug output in boot when verification fails during ELF
loading.
To more easily express constants as bitsets, add more constexpr to
util::bitset. This allows uint64_t constants to be written as bitsets in
the code, without changing the generated assembly, making the code more
readable.
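The kind of usage this enables, sketched with a stand-in type rather
than the real util::bitset interface:

    #include <cstdint>

    // Stand-in for the bitset type; only here to show the constexpr usage
    struct bits64
    {
        uint64_t value = 0;
        constexpr bits64 & set(unsigned n) { value |= (1ull << n); return *this; }
        constexpr operator uint64_t() const { return value; }
    };

    // A readable constant that still folds down to a plain immediate
    constexpr uint64_t example_flags = bits64{}.set(0).set(16).set(31);
    static_assert(example_flags == 0x80010001ull, "same value as the literal");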
Rearranged the ISR vectors for eventual TPR-based priority. Also removed
excess IRQs - if we need to support more than 64 IRQ vectors, we can add
some back in.
Also cleaned up the legacy PIC init/masking code.
Previously, when adding a new thread, we only ever added it to the
current CPU and relied on work stealing to balance the CPUs. This commit
has the scheduler schedule new tasks round-robin across CPUs in hopes of
having to steal fewer tasks.
Also adds the run_queue.prev pointer for debugging what task was just
running on the given CPU.
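Sketch of the placement policy, with hypothetical scheduler types; the
real scheduler's structures differ:

    #include <atomic>
    #include <cstddef>

    struct thread;
    struct cpu_run_queue { void enqueue(thread *t); };

    extern cpu_run_queue g_run_queues[];
    extern size_t g_cpu_count;

    // Hand each new thread to the next CPU in turn, rather than always
    // queueing it on the current CPU and waiting for work stealing.
    void place_new_thread(thread *t)
    {
        static std::atomic<size_t> next {0};
        size_t cpu = next.fetch_add(1, std::memory_order_relaxed) % g_cpu_count;
        g_run_queues[cpu].enqueue(t);
    }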
The printf library I have been using, while useful, has way more in it
than I need, and comparatively huge stack space requirements. This
change adds a new util::format() as a replacement for snprintf, with
only the features used by kernel logging.
The logger has been changed to use it, as well as the few instances of
snprintf in the interrupt handling code before calling kassert.
Also part of this change: the logger's (now vestigial) immediate output
handling code is removed, as well as the "sequence" field on log
message headers.
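The sort of call site this serves, assuming a snprintf-like shape for
util::format; the real declaration may differ:

    #include <cstddef>
    #include <cstdint>

    namespace util {
        // assumed signature, not the actual declaration
        size_t format(char *out, size_t len, const char *fmt, ...);
    }

    void log_fault(uint64_t addr, uint32_t err)
    {
        char message[128];
        util::format(message, sizeof(message),
            "page fault at %016lx (error %x)", addr, err);
        // ...hand 'message' to kassert / the logger...
    }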
The isr_prelude (and its IRQ equivalent) had been pushing RIP and RBP in
order to create a fake stack frame. This was in an effort to make GDB
display backtraces more reliably, but GDB has other reasons for being
finicky about stack frames, so this was just wasted work. This commit
gets rid of it to make looking at the stack clearer.
At the beginning of the interrupt handler, we had previously checked
whether the current handler had grabbed an IST stack from the IDT/TSS.
If it had, we saved that value and set it to 0 in the IDT, then restored
it at the end.
Now this is a single atomic operation. This is unlikely to make a
difference unless the interrupt handler is itself interrupted by an
exception before it can swap the IDT value, but that situation is now
impossible.
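Sketch of the idea with a hypothetical descriptor layout (the real IDT
entry packs the IST index into a larger field):

    #include <atomic>
    #include <cstdint>

    struct idt_entry_sketch
    {
        std::atomic<uint8_t> ist;   // IST index, 0 = use the current stack
        // ...rest of the descriptor...
    };

    void handler_body();

    void interrupt_entry(idt_entry_sketch &entry)
    {
        // Take the IST index and clear it in one atomic step, so a nested
        // exception can never observe a half-finished save/clear sequence.
        uint8_t saved = entry.ist.exchange(0);

        handler_body();

        entry.ist.store(saved);     // restore it on the way out
    }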
Previously, the CPU control registers were being set in a number of
different ways. Now, since the APs need this done in the CPU
initialization code, always do it there. This removes some of the
settings from the bootloader, and some unused ones from smp.s.
Additionally, the control registers' flags are now enums in cpu.h and
manipulated via util::bitset.
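Sketch of the approach. The bit positions are the architectural CR0
bits, but the enum names, helpers, and use of plain integers instead of
util::bitset are assumptions for the example:

    #include <cstdint>

    enum class cr0_flag : uint64_t {
        pe = 1ull << 0,    // protected mode enable
        wp = 1ull << 16,   // write-protect in ring 0
        pg = 1ull << 31,   // paging
    };

    inline uint64_t read_cr0()
    {
        uint64_t v;
        asm volatile ("mov %%cr0, %0" : "=r"(v));
        return v;
    }

    inline void write_cr0(uint64_t v)
    {
        asm volatile ("mov %0, %%cr0" :: "r"(v));
    }

    // Called from CPU init for both the BSP and the APs
    inline void init_cr0()
    {
        write_cr0(read_cr0()
            | static_cast<uint64_t>(cr0_flag::wp)
            | static_cast<uint64_t>(cr0_flag::pg));
    }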
The test_runner was potentially initializing its static array of tests
after tests had already been added, since static initialization order
across translation units is unspecified. Now, allocate the vector
dynamically on the first test addition.
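A userspace analogue of the fix, with an assumed registration function
rather than the real test_runner API:

    #include <vector>

    using test_fn = void (*)();

    static std::vector<test_fn> *s_tests = nullptr;

    void add_test(test_fn fn)
    {
        // The first registration allocates the vector; there is no static
        // vector whose constructor might run after earlier add_test calls
        // and wipe them out.
        if (!s_tests)
            s_tests = new std::vector<test_fn>;
        s_tests->push_back(fn);
    }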
In bsp_early_init(), the BSP cpu_data's rsp0 was getting initialized to
the _value_ at the idle_stack_end symbol, instead of its address. I
don't believe this was causing any actual harm, but it was a red herring
when debugging.
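The distinction, sketched with an out-parameter instead of the real
cpu_data layout:

    #include <cstdint>

    extern "C" char idle_stack_end[];   // symbol provided by the linker/asm

    void set_rsp0(uint64_t &rsp0)
    {
        // Wrong: dereferences the symbol, reading whatever bytes sit there.
        // rsp0 = *reinterpret_cast<const uint64_t *>(idle_stack_end);

        // Right: the stack top is the address of the symbol itself.
        rsp0 = reinterpret_cast<uint64_t>(&idle_stack_end);
    }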
The cpu::cpu_id class no longer looks up all known features in the
constructor, but instead provides access to the map of supported
features as a bitset from the verify() method. It also exposes the
brand_name() method instead of loading the brand name string in the
constructor and storing it as part of the object.
Add a new bitset class which allows for arbitrarily-large bit sets, with
specializations for 32- and 64-bit sets.
Eventually the enum_bitfields code should probably be reconsidered and
moved to bitsets, since it doesn't work everywhere.
This bug has been making me tear my hair out for weeks. When creating
the idle thread for each CPU, we were previously sharing stack areas
with other CPUs' idle threads in an effort to save memory. However, this
caused stack corruption that was very hard to track down. The kernel
stacks are in a vm_area_guarded to better detect this exact kind of
issue, but splitting stacks like this skirts that protection. It's not
worth saving a few KiB per CPU.
Split the functionality of outputting kernel logs out of the UART
driver, and into a new service. The UART driver now registers a console
out channel with the service locator, which the logger service
retrieves, and then enters a loop getting logs from the kernel and
printing them out to the console.
The init process now serves as a service locator for its children,
passing all children a mailbox handle on which it is serving the service
locator protocol.
This commit fixes a number of related mailbox issues:
- Add extra parameters to mailbox_respond_receive so that callers can
pass both the number of bytes/handles being sent and the size of the
byte/handle buffers.
- Don't delete mailbox messages on receipt if the caller is waiting on a
reply
- Correctly pass status messages along with a mailbox::replyer object
- Actually wake the calling thread in the mailbox::replyer dtor
- Make sure to release locks _before_ calling thread::wake() on blocked
threads (sketched below), as that may cause them to be scheduled ahead
of the current thread.
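A userspace analogue of that last point, with hypothetical names
standing in for the kernel's lock and thread types:

    #include <mutex>

    struct thread_stub { void wake(); };   // stand-in for the kernel thread API

    struct mailbox_stub
    {
        std::mutex m_lock;
        thread_stub *m_waiter = nullptr;

        void deliver_reply()
        {
            thread_stub *to_wake = nullptr;
            {
                std::lock_guard<std::mutex> guard {m_lock};
                // ...copy reply data into the waiter's buffers...
                to_wake = m_waiter;
                m_waiter = nullptr;
            }   // lock released here

            // Safe: if wake() runs the other thread immediately (or ahead of
            // us on another CPU), it no longer finds m_lock held.
            if (to_wake)
                to_wake->wake();
        }
    };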
This commit adds a new flag, j6_channel_block, and a new flags param to
the channel_receive syscall. When the block flag is specified, the
caller will block waiting for data on the channel if the channel is
empty.
Influenced by other libc implementations, I had tried to make memcpy
smarter for differently-sized ranges, but my benchmarks showed no real
change. So change memcpy back to the simple rep movsb implementation.
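The simple form, for reference; this is the shape of the implementation
described, though the real one may differ in minor details:

    #include <cstddef>

    extern "C" void * memcpy(void *dest, const void *src, size_t n)
    {
        void *ret = dest;
        asm volatile ("rep movsb"
            : "+D"(dest), "+S"(src), "+c"(n)
            :
            : "memory");
        return ret;
    }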
There was a specialization of util::hash() for uint64_t (which just
returns the integer value), but other integer sizes did not previously
have similar specializations.
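Roughly the kind of additions involved, sketched here as forwarding
overloads rather than the real template specializations:

    #include <cstdint>

    namespace util {

    // existing: identity hash for 64-bit integers (per the description above)
    inline uint64_t hash(uint64_t v) { return v; }

    // new: smaller integer widths forward to the 64-bit version
    inline uint64_t hash(uint32_t v) { return hash(static_cast<uint64_t>(v)); }
    inline uint64_t hash(uint16_t v) { return hash(static_cast<uint64_t>(v)); }
    inline uint64_t hash(uint8_t v)  { return hash(static_cast<uint64_t>(v)); }

    } // namespace util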
Also, two minor semi-related changes to util::map - skip copying empty
nodes when growing the map, and assert that the hash is non-zero when
inserting a new node.
It seems more common to want to sleep for a duration than to sleep
until a specific time. Change the implementation so the process doesn't
have to look up the current time first. (Plus, there's currently no
syscall to do so.)
When waking another thread, if that thread has a more urgent priority
than the current thread on the same CPU, send that CPU an IPI to tell it
to run its scheduler.
Related changes in this commit:
- Addition of the ipiSchedule isr (vector 0xe4) and its handler in
isr_handler().
- Change the APIC's send_ipi* functions to take an isr enum and not an
int for their vector parameter
- Thread TCBs now contain a pointer to their current CPU's cpu_data
structure
- Add the maybe_schedule() call to the scheduler (sketched below), which
sends the schedule IPI to the given thread's CPU only when that CPU is
running a less-urgent thread.
- Move the locking of a run queue lock earlier in schedule() instead of
taking the lock in steal_work() and again in schedule().
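A sketch of the maybe_schedule() decision with stand-in types; the field
names, the priority comparison direction, and the IPI wrapper are
assumptions:

    #include <cstdint>

    struct cpu_data;

    struct tcb
    {
        uint8_t priority;    // assumed: lower value = more urgent
        cpu_data *cpu;       // the CPU this thread is queued on / last ran on
    };

    struct cpu_data
    {
        unsigned id;
        tcb *current;
    };

    void send_ipi(unsigned cpu_id, uint8_t vector);   // assumed LAPIC wrapper
    constexpr uint8_t isr_ipi_schedule = 0xe4;

    // Only interrupt the target CPU if it is running something less urgent
    // than the thread that just became runnable.
    void maybe_schedule(tcb *woken)
    {
        cpu_data *cpu = woken->cpu;
        if (cpu && cpu->current && woken->priority < cpu->current->priority)
            send_ipi(cpu->id, isr_ipi_schedule);
    }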
The new logger event object for making get_entry() block when no logs
are available was consuming the event's notification even if the thread
did not need to block. This was causing excessive blocking - if multiple
logs had been added since the last call to get_entry(), only one would
be returned, and the next call would block until yet another log was
added.
Now only call event::wait() to block the calling thread if there are no
logs available.
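A userspace analogue of the corrected loop, with stand-in names for the
log buffer and event:

    #include <optional>
    #include <string>

    struct log_reader_sketch
    {
        std::optional<std::string> try_pop();   // stand-in for reading the kernel buffer
        void wait();                            // stand-in for event::wait()

        std::string get_entry()
        {
            while (true) {
                if (auto entry = try_pop())
                    return *entry;   // something was already available: don't block
                wait();              // only consume the event when the buffer is empty
            }
        }
    };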
Added profiler.h, which defines classes and macros for creating profiler
objects. Also added a gdb command, j6prof, for printing profile data,
plus the syscall_profiles profiler class and automatic wrapping of
syscalls with profile objects.
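A sketch of the general shape of such a scope profiler; the real
profiler.h classes, macro names, and counters are not shown here:

    #include <cstdint>

    inline uint64_t read_cycles()
    {
        uint32_t lo, hi;
        asm volatile ("rdtsc" : "=a"(lo), "=d"(hi));
        return (uint64_t(hi) << 32) | lo;
    }

    struct profile_data { uint64_t calls = 0; uint64_t cycles = 0; };

    struct scope_profiler
    {
        profile_data &data;
        uint64_t start;

        scope_profiler(profile_data &d) : data(d), start(read_cycles()) {}
        ~scope_profiler() { ++data.calls; data.cycles += read_cycles() - start; }
    };

    #define PROF_CAT2(a, b) a##b
    #define PROF_CAT(a, b) PROF_CAT2(a, b)
    #define PROFILE_SCOPE(d) scope_profiler PROF_CAT(prof_scope_, __LINE__) {d}

    // Usage: wrap a syscall (or any scope) with a named counter
    profile_data g_example_syscall;
    void example_syscall() { PROFILE_SCOPE(g_example_syscall); /* ... */ }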
Other changes in this commit:
- Made the gdb command `j6threads` argument for specifying a CPU
optional. Without an argument, it loops through all CPUs.
- Switched to -mcmodel=kernel for kernel code, which makes `call`
instructions easier to follow when debugging / looking at disassembly.
A few changes to the panic handler's display:
- Change rdi and rsi to match other general-purpose registers. (They
were previously blue, matching the stack/base pointer registers.)
- Change the ordering of r8-r15 to be column-major instead of row-major.
I find myself wanting to read down the columns to find the register
I'm looking for, and rax-rdx are already this way.
- Make the flags register yellow, matching the ss and cs registers
- Comment out the call to print_rip(), as it's only occasionally
helpful and can cause the panic handler to page fault.
When debugging, or in panic callstacks, the BSP idle thread used to be
reported as `_kernel_start`, because it was just the loop at the end of
that assembly function. Now, wrap that loop in a separate symbol named
`bsp_idle` to make it clearer that the CPU is in the idle thread.
The constexpr_hash.h header has fallen out of use. Since constexpr
hashing will be used for IDs in the service locator protocol, update
these hashes to be 32- and 64-bit FNV-1a, and replace the _h
user-defined literal with _id (a 64-bit hash) and _id8 (a 32-bit hash
folded down to 8 bits). These now live in the util/hash.h header along
with the runtime hash functions.
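Sketch of what the hashes and literals can look like. The FNV-1a
constants are standard, but the fold used for _id8 and the literal
return types are assumptions, and the example id name is invented:

    #include <cstddef>
    #include <cstdint>

    constexpr uint64_t fnv1a_64(const char *s, size_t len)
    {
        uint64_t hash = 0xcbf29ce484222325ull;      // FNV-1a 64-bit offset basis
        for (size_t i = 0; i < len; ++i) {
            hash ^= static_cast<uint8_t>(s[i]);
            hash *= 0x00000100000001b3ull;          // FNV-1a 64-bit prime
        }
        return hash;
    }

    constexpr uint32_t fnv1a_32(const char *s, size_t len)
    {
        uint32_t hash = 0x811c9dc5u;                // FNV-1a 32-bit offset basis
        for (size_t i = 0; i < len; ++i) {
            hash ^= static_cast<uint8_t>(s[i]);
            hash *= 0x01000193u;                    // FNV-1a 32-bit prime
        }
        return hash;
    }

    constexpr uint64_t operator""_id(const char *s, size_t len)
    {
        return fnv1a_64(s, len);
    }

    constexpr uint8_t operator""_id8(const char *s, size_t len)
    {
        uint32_t h = fnv1a_32(s, len);
        return static_cast<uint8_t>(h ^ (h >> 8) ^ (h >> 16) ^ (h >> 24));  // xor-fold
    }

    // e.g. a compile-time protocol or service id (the name is just an example)
    constexpr uint64_t example_service_id = "example.service"_id;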
Three issues that caused build breaks when regenerating the build
directory after the previous commits:
- system.def was including endpoint.def
- syscalls/vm_area.cpp was including j6/signals.h
- util/util.h was missing an include of stddef.h