There have been a number of incidents lately where I've needed to see
logs while working on init, but had broken the log output of srv.logger.
This commit adds a debug console output on io port 0x6600, if enabled at
the top of logger.cpp.
The syscall helpers.h get_handle functions should be returning
j6_err_invalid_arg if the handle they're given is j6_handle_invalid,
unless explicitly set to optional.
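Roughly the shape of the check - a sketch only, with the types simplified,
lookup_handle standing in for the real handle-table lookup, and the constant
values made up:

```cpp
#include <cstdint>

using j6_handle_t = uint64_t;
using j6_status_t = uint64_t;

// Illustrative constants - the real values come from the j6 headers
constexpr j6_handle_t j6_handle_invalid  = ~0ull;
constexpr j6_status_t j6_status_ok       = 0;
constexpr j6_status_t j6_err_invalid_arg = 1;

struct kobject;                          // stand-in for the kernel object base class
kobject * lookup_handle(j6_handle_t h);  // stand-in for the real handle-table lookup

// Reject j6_handle_invalid up front unless the caller marked the handle optional
j6_status_t get_handle(j6_handle_t h, kobject *&obj, bool optional = false)
{
    if (h == j6_handle_invalid) {
        obj = nullptr;
        return optional ? j6_status_ok : j6_err_invalid_arg;
    }

    obj = lookup_handle(h);
    return obj ? j6_status_ok : j6_err_invalid_arg;
}
```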
Make build_symbol_table.py output statistics on the symbol table it
builds, and emit warnings for zero-length symbols. Also add lengths to
several functions defined in asm that this uncovered.
Restructure paging into an object that carries its page cache with it,
making for simpler code. Program loading is also changed to not copy
the pages loaded from the file into new pages - we can impose a new
constraint that anything loaded by boot have a simple, page-aligned
layout so that we can just map the existing pages into the right
addresses. Also included are some linker script changes to help
accommodate this.
I added util::format as a replacement for other printf implementations
last year, but it sat there only being used by the kernel all this time.
Now I've templated it so that it can be used by the bootloader, and
removed printf from panic.serial as well.
Previously, to add a custom linker script, a module would need to modify
its variables after the fact to add to ldflags. Now module constructors
take a new keyword `ld_script` and handle the ldflags and dependencies
properly.
Using `-fvisibility=hidden` when building the kernel, and then
`--discard-all` when stripping it, we shave almost 100KiB off of the
resulting ELF file.
Also dropped some unused symbols from the linker script, and rearranged
the sections so that the file can be mapped directly into memory
instead of having each section copied.
Having main.cpp in the kernel and in the application being debugged is
annoying when setting breakpoints, so just like with main() vs
kernel_main(), kernel/main.cpp is now kernel/kernel_main.cpp.
In preparation for futexes, I wanted to make kobjects a bit lighter. By
storing 32 bits of object id and 8 bits of type (and no longer ending the
class in a ushort for handle count, which meant all kobjects were likely
to have a bunch of pad bytes), the kobject class data is now just one
8-byte word.
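A sketch of how that packing could look; only the 32-bit koid and 8-bit type
are stated above, so the remaining field widths here are guesses:

```cpp
#include <cstdint>

// One 8-byte word of per-object bookkeeping (field layout is illustrative)
struct kobject_data
{
    uint64_t koid         : 32;  // object id
    uint64_t type         :  8;  // kobject type tag
    uint64_t handle_count : 16;  // no longer a trailing ushort, so no pad bytes
    uint64_t reserved     :  8;
};

static_assert(sizeof(kobject_data) == 8, "kobject data should be one word");
```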
Also as part of this, logs that mention threads or processes now print
just 2 bytes of object id for each, instead of the full koid, which makes
following the logs much easier.
The status code from thread exit had too many issues (e.g., how does it
relate to the process exit code? what happens when different threads exit
with different exit codes?) and not enough value, so I'm getting rid of
it.
In the new mailbox structure, passing j6_handle_invalid with a message
would result in a permission-denied error, as the process did not have a
handle "0".
heap_allocator::reallocate relies on the allocate and free methods, so it
mostly doesn't need its own locking, but it does touch the tracking map,
and needs to protect that with a lock.
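The shape of that, as a minimal sketch - std::mutex and std::unordered_map
are stand-ins here for the kernel's own lock and tracking map:

```cpp
#include <cstddef>
#include <cstring>
#include <mutex>
#include <unordered_map>

// Simplified stand-in for heap_allocator: allocate()/free() take care of their
// own locking, so reallocate() only holds the lock while reading the size map.
class heap_allocator
{
public:
    void * allocate(size_t size);   // assumed to lock internally
    void   free(void *p);           // assumed to lock internally

    void * reallocate(void *p, size_t new_size)
    {
        size_t old_size = 0;
        {
            std::lock_guard<std::mutex> guard {m_lock};  // protect only the map
            old_size = m_sizes[p];
        }

        void *q = allocate(new_size);
        std::memcpy(q, p, old_size < new_size ? old_size : new_size);
        free(p);
        return q;
    }

private:
    std::mutex m_lock;
    std::unordered_map<void*, size_t> m_sizes;
};
```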
A number of mailbox simplifications now that the interface is much
simpler, and synchronous:
* call and respond can now only transfer one handle at a time
* mailbox objects got rid of the message queue, and just have
wait_queues of blocked threads, and a reply_to map.
* threads now have a message_data struct on them for use by mailboxes
Instead of handles / capabilities having numeric ids that are only valid
for the owning process, they are now global in a system capabilities
table. This will allow for specifying capabilities in IPC that doesn't
need to be kernel-controlled.
Processes will still need to be granted access to given capabilities,
but that can become a simpler system call than the current method of
sending them through mailbox messages (and worse, having to translate
every one into a new capability, as was the case before). In order to
track which handles a process has access to, a new node_set based on
node_map allows for efficient storage and lookup of handles.
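Conceptually it looks something like the following - std::unordered_map and
std::set are only stand-ins for node_map and node_set, and all the names are
illustrative:

```cpp
#include <cstdint>
#include <set>
#include <unordered_map>

using j6_cap_id = uint64_t;

struct capability { /* target object, rights bits, etc. */ };

// System-wide capability table: ids mean the same thing in every process
std::unordered_map<j6_cap_id, capability> g_capabilities;

// Per-process grant set, standing in for the node_set-based structure
struct process
{
    std::set<j6_cap_id> granted;

    capability * find_capability(j6_cap_id id)
    {
        if (!granted.count(id))
            return nullptr;  // this process was never granted the capability
        auto it = g_capabilities.find(id);
        return it == g_capabilities.end() ? nullptr : &it->second;
    }
};
```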
The allocator is an interface for types that expose allocator functions
for use in container templates like node_map (usage coming soon).
Also added an implementation for the kernel heap allocator.
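A sketch of the idea, using a virtual interface for brevity (the real one may
well be duck-typed), with malloc/free standing in for the kernel heap calls:

```cpp
#include <cstddef>
#include <cstdlib>

// Interface for types that expose allocation functions to container templates
struct allocator
{
    virtual void * allocate(size_t size) = 0;
    virtual void   free(void *p) = 0;
    virtual ~allocator() = default;
};

// Kernel-heap-backed implementation; malloc/free stand in for the real heap
struct heap_allocated :
    public allocator
{
    void * allocate(size_t size) override { return ::malloc(size); }
    void   free(void *p) override         { ::free(p); }
};
```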
The heap_allocator::get_free(order) function returns a reference to the
head pointer of the given freelist, so that it can be manipulated.
However, split_off was also taking a reference to a pointer for an out
param - passing the freelist pointer in here caused split_off to modify
the freelist.
I cleaned up a bunch of the places where the freelist pointers were being
touched to make the usage more explicit.
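Boiled down, the aliasing looked something like this (names and signatures
simplified from the real heap code):

```cpp
struct block { block *next; };

// get_free returns a reference to the freelist head so callers can splice
// blocks in and out of the list directly.
block *& get_free(block **freelists, unsigned order) { return freelists[order]; }

// split_off takes its out param as a reference to a pointer. Passing the
// result of get_free() here means 'out' *is* the freelist head, so every
// assignment to it silently rewrites the freelist.
void split_off(block *&out, block *source)
{
    out = source;   // before the cleanup, this clobbered the freelist head
    // ... rest of the splitting work ...
}
```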
Created a new util/node_map.h that implements a map that grows in-place.
Now this is used for tracking blocks' size orders, instead of a header
at the start of the memory block. This lets the whole buddy block be
allocated, so page-aligned (or larger) blocks can be requested from the
heap.
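The effect of the change, in miniature - std::unordered_map stands in for
node_map here, and the function names are made up:

```cpp
#include <cstdint>
#include <unordered_map>

// Before: each block started with a header recording its order, so part of
// every buddy block was unavailable and its alignment was off by the header.
// After: the order is tracked out-of-band, so the whole, naturally-aligned
// buddy block can be handed to the caller.
std::unordered_map<uintptr_t, uint8_t> g_block_orders;  // stand-in for util::node_map

void remember_order(void *block, uint8_t order)
{
    g_block_orders[reinterpret_cast<uintptr_t>(block)] = order;
}

uint8_t lookup_order(void *block)
{
    return g_block_orders.at(reinterpret_cast<uintptr_t>(block));
}
```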
The kernel log levels are now numerically reversed so that more-verbose
levels can be added to the end. Replaced 'debug' with 'verbose', and
added a new 'spam' level.
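Roughly what the reversed ordering implies - a sketch, not the actual level
list, and the exact names and numbering are my guess:

```cpp
#include <cstdint>

// More-verbose levels now have larger values, so new ones can be appended
enum class log_level : uint8_t {
    fatal   = 1,
    error   = 2,
    warn    = 3,
    info    = 4,
    verbose = 5,   // replaces 'debug'
    spam    = 6,   // new, most verbose
};
```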
Rearrange the ISR vectors for eventual TPR priority. Also remove excess
IRQs - if we need to support more than 64 IRQ vectors, we can add
some more back in.
Also clean up the legacy PIC init/masking code.
Previously, when adding a new thread, we only ever added it to the
current CPU and relied on work stealing to balance the CPUs. This commit
has the scheduler schedule new tasks round-robin across CPUs in hopes of
having to steal fewer tasks.
Also adds the run_queue.prev pointer for debugging what task was just
running on the given CPU.
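The round-robin part is conceptually just an incrementing counter over the
CPU list; a sketch (not the real scheduler code - the queue type and method
names are stand-ins):

```cpp
#include <atomic>
#include <cstddef>

struct thread;
struct run_queue { void push(thread *t); };   // stand-in for the per-CPU queue

class scheduler
{
public:
    scheduler(run_queue *queues, size_t count) :
        m_queues {queues}, m_count {count} {}

    // New threads now go to CPUs round-robin instead of always to the current CPU
    void add_thread(thread *t)
    {
        size_t cpu = m_next.fetch_add(1, std::memory_order_relaxed) % m_count;
        m_queues[cpu].push(t);
    }

private:
    run_queue *m_queues;
    size_t m_count;
    std::atomic<size_t> m_next {0};
};
```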
The printf library I have been using, while useful, has way more in it
than I need, and comparatively huge stack space requirements. This
change adds a new util::format() which is a replacement for snprintf,
but with only the features used by kernel logging.
The logger has been changed to use it, as well as the few instances of
snprintf in the interrupt handling code before calling kassert.
Also part of this change: the logger's (now vestigial) immediate output
handling code is removed, as well as the "sequence" field on log
message headers.
The isr_prelude (and its IRQ equivalent) had been pushing RIP and RBP in
order to create a fake stack frame. This was an effort to make GDB
display backtraces more reliably, but GDB has other reasons for being
finicky about stack frames, so it was just wasted work. This commit gets
rid of it to make looking at the stack clearer.
At the beginning of the interrupt handler, we had previously checked
whether the current handler had grabbed an IST stack from the IDT/TSS. If
it had, it saved that value and set it to 0 in the IDT, then restored it
at the end.
Now this save-and-clear is a single atomic action. It's unlikely to make
a difference unless the interrupt handler is itself interrupted by an
exception before being able to swap the IDT value, but such a situation
is now impossible.
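Conceptually, the save-and-clear collapses into one exchange; a sketch using
std::atomic as a stand-in for however the IDT entry is actually accessed:

```cpp
#include <atomic>
#include <cstdint>

// Stand-in for the IST-selector byte of the relevant IDT entry
std::atomic<uint8_t> ist_entry;

void interrupt_enter(uint8_t &saved_ist)
{
    // Save the current IST index and clear it in one indivisible step, so an
    // exception taken right here can no longer see a half-updated entry.
    saved_ist = ist_entry.exchange(0);
}

void interrupt_exit(uint8_t saved_ist)
{
    ist_entry.store(saved_ist);
}
```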
Previously, the CPU control registers were being set in a number of
different ways. Now, since the APs need this to be set in the CPU
initialization code, always do it there. This removes some of the
settings from the bootloader, and some unused ones from smp.s.
Additionally, the control registers' flags are now enums in cpu.h and
manipulated via util::bitset.
In bsp_early_init(), the BSP cpu_data's rsp0 was getting initialized to
the _value_ at the idle_stack_end symbol, instead of its address. I
don't believe this was causing any actual harm, but it was a red herring
when debugging.
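For the record, the distinction boiled down to this (set_rsp0 is just an
illustration here, not the real init code):

```cpp
#include <cstdint>

extern "C" char idle_stack_end[];   // symbol defined in asm / the linker script

void set_rsp0(uint64_t &rsp0)
{
    // Wrong - reads the bytes *at* the symbol:
    //   rsp0 = *reinterpret_cast<uint64_t*>(idle_stack_end);

    // Right - the symbol's address is the value we want:
    rsp0 = reinterpret_cast<uint64_t>(idle_stack_end);
}
```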
The cpu::cpu_id class no longer looks up all known features in the
constructor, but instead provides access to the map of supported
features as a bitset from the verify() method. It also exposes the
brand_name() method instead of loading the brand name string in the
constructor and storing it as part of the object.
This bug has been making me tear my hair out for weeks. When creating
the idle thread for each CPU, we were previously sharing stack areas
with other CPUs' idle threads in an effort to save memory. However, this
caused stack corruption that was very hard to track down. The kernel
stacks are in a vm_area_guarded to better detect this exact kind of
issue, but sharing stacks like this skirts that protection. It's not
worth saving a few KiB per CPU.
This commit contains fixes for a number of related mailbox issues:
- Add extra parameters to mailbox_respond_receive so that both the
number of bytes/handles provided and the size of the byte/handle
buffers can be passed in.
- Don't delete mailbox messages on receipt if the caller is waiting on a
reply
- Correctly pass status messages along with a mailbox::replyer object
- Actually wake the calling thread in the mailbox::replyer dtor
- Make sure to release locks _before_ calling thread::wake() on blocked
threads, as that may cause them to be scheduled ahead of the current
thread.
This commit adds a new flag, j6_channel_block, and a new flags param to
the channel_receive syscall. When the block flag is specified, the
caller will block waiting for data on the channel if the channel is
empty.
It seems more common to want to sleep for a duration than to sleep to a
specific time. Change the implementation to not make the process look up
the current time first (plus, there's currently no syscall to do so).
When waking another thread, if that thread has a more urgent priority
than the current thread on the same CPU, send that CPU an IPI to tell it
to run its scheduler.
Related changes in this commit:
- Addition of the ipiSchedule isr (vector 0xe4) and its handler in
isr_handler().
- Change the APIC's send_ipi* functions to take an isr enum and not an
int for their vector parameter
- Thread TCBs now contain a pointer to their current CPU's cpu_data
structure
- Add the maybe_schedule() call to the scheduler, which sends the
schedule IPI to the given thread's CPU only when that CPU is running a
less-urgent thread (sketched after this list).
- Move the locking of a run queue lock earlier in schedule() instead of
taking the lock in steal_work() and again in schedule().
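The gist of maybe_schedule(), as a sketch - the struct layouts, the direction
of the priority comparison, and the send_ipi signature are all stand-ins:

```cpp
#include <cstdint>

enum class isr : uint8_t { ipiSchedule = 0xe4 };

struct cpu_data;
struct thread   { uint8_t priority; cpu_data *cpu; };
struct cpu_data { uint16_t id; thread *current; };

void send_ipi(uint16_t cpu_id, isr vector);   // stand-in for the APIC call

// Only interrupt the target CPU if it's running something less urgent than
// the thread we just woke. (Assuming a lower value means more urgent here.)
void maybe_schedule(thread *woken)
{
    cpu_data *cpu = woken->cpu;
    if (cpu->current && woken->priority < cpu->current->priority)
        send_ipi(cpu->id, isr::ipiSchedule);
}
```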
The new logger event object for making get_entry() block when no logs
are available was consuming the event's notification even if the thread
did not need to block. This was causing excessive blocking - if multiple
logs had been added since the last call to get_entry(), only one would
be returned, and the next call would block until yet another log was
added.
Now only call event::wait() to block the calling thread if there are no
logs available.
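In sketch form, with the logger and event types reduced to stand-ins:

```cpp
// Simplified shape of the fix: only consume the event's notification (and
// possibly block) when the buffer is actually empty.
struct log_buffer { bool has_entry() const; };
struct event      { void wait(); };            // blocks until signaled

void get_entry(log_buffer &logs, event &ev)
{
    while (!logs.has_entry())
        ev.wait();            // block only when there's nothing to return
    // ... copy the next entry out to the caller ...
}
```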
Added profiler.h, which provides classes and macros for defining
profiler objects. Also added a gdb command, j6prof, for printing profile
data, plus the syscall_profiles profiler class and automatic wrapping of
syscalls with profile objects.
Other changes in this commit:
- Made the CPU argument to the gdb command `j6threads` optional. Without
an argument, it loops through all CPUs.
- Switched to -mcmodel=kernel for kernel code, which makes `call`
instructions easier to follow when debugging / looking at disassembly.
A few changes to the panic handler's display:
- Change rdi and rsi to match other general-purpose registers. (They
were previously blue, matching the stack/base pointer registers.)
- Change the ordering of r8-r15 to be column-major instead of row-major.
I find myself wanting to read down the columns to find the register
I'm looking for, and rax-rdx are already this way.
- Make the flags register yellow, matching the ss and cs registers
- Comment out the call to print_rip(), as it's only occasionally
helpful and can cause the panic handler to page fault.
When debugging, or in panic callstacks, the BSP idle thread used to be
reported as `_kernel_start`, because it was just the loop at the end of
that assembly function. Now, wrap that loop in a separate symbol called
`bsp_idle` to make it clearer that the CPU is in the idle thread.