2MiB large pages were being used for any large page mapping, but the
page manager doesn't correctly handle them everywhere yet. Now only
allow them for offset pointers (e.g. MMIO space) that will never be
unmapped.
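A minimal sketch of that policy, assuming a flag marks permanent offset
mappings; map_flags and page_size_for are illustrative names, not the
project's actual API:

    #include <cstddef>
    #include <cstdint>

    // x86_64 page sizes
    constexpr size_t page_4k = 0x1000;
    constexpr size_t page_2m = 0x200000;

    // Hypothetical flag marking a permanent offset mapping (e.g. MMIO)
    enum class map_flags : uint32_t { none = 0, offset = 1 << 0 };

    inline bool has_flag(map_flags f, map_flags bit) {
        return (static_cast<uint32_t>(f) & static_cast<uint32_t>(bit)) != 0;
    }

    // Only hand out 2MiB pages for never-unmapped offset mappings where the
    // addresses and length are all 2MiB-aligned; everything else falls back
    // to 4KiB pages, which the page manager handles correctly everywhere.
    inline size_t page_size_for(uintptr_t virt, uintptr_t phys, size_t len,
                                map_flags f) {
        const bool aligned = (virt % page_2m) == 0 &&
                             (phys % page_2m) == 0 &&
                             (len % page_2m) == 0;
        return (has_flag(f, map_flags::offset) && aligned) ? page_2m : page_4k;
    }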
* Regions that weren't block-size-aligned could fail to be found. Have
the bootloader load them aligned.
* Consolidating used frame blocks in the bootstrap meant they would have
been impossible to free as address space later
* mark_permanent wasn't actually removing blocks from the free list
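A rough sketch of the mark_permanent fix, using std::list stand-ins
rather than the kernel's real block structures; the point is that
flagging a block permanent must also unlink it from the free list:

    #include <cstddef>
    #include <cstdint>
    #include <list>

    struct frame_block {
        uintptr_t base;   // first physical frame in the block
        size_t    count;  // number of frames
        uint32_t  flags;
    };

    constexpr uint32_t block_permanent = 1 << 0;

    void mark_permanent(std::list<frame_block> &free_list,
                        std::list<frame_block>::iterator block,
                        std::list<frame_block> &used_list)
    {
        block->flags |= block_permanent;
        // The bug: the block was only flagged and stayed on the free list,
        // so it could still be handed out later. The fix: unlink it.
        used_list.splice(used_list.end(), free_list, block);
    }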
Removed the frame allocation logic from page_manager and replaced it
with an instance of frame_allocator. This had several major ripple
effects:
- memory_initialize() had to change to support this new world
- Where to map used blocks is now passed as a flag, since blocks don't
track their virtual address anymore (see the sketch after this list)
- Instead of the complicated "find N contiguous pages that can be
mapped in with one page table", we now just have the bootloader give
us some (currently 64) pages to use both for tables and scratch
space.
- frame_allocator initialization was split into two steps to allow
mapping used blocks before std::move()ing them over
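A hedged interface sketch of the new arrangement; all of the names below
(frame_allocator, page_manager, map_location, add_free, finalize) are
illustrative and may not match the real headers:

    #include <cstddef>
    #include <cstdint>
    #include <utility>

    // Blocks no longer track their virtual address, so the caller now says
    // where used blocks should be mapped.
    enum class map_location { offset_area, heap_area };

    class frame_allocator {
    public:
        // Step 1: record free physical ranges handed over by the bootloader.
        void add_free(uintptr_t base, size_t count) {
            if (m_count < max_ranges) m_ranges[m_count++] = {base, count};
        }
        // Step 2: called once the used blocks have been mapped, just before
        // the allocator is moved into its final owner.
        void finalize() { m_ready = true; }

    private:
        struct range { uintptr_t base; size_t count; };
        static constexpr size_t max_ranges = 64;
        range m_ranges[max_ranges] = {};
        size_t m_count = 0;
        bool m_ready = false;
    };

    class page_manager {
    public:
        // page_manager no longer does its own frame accounting; it takes
        // ownership of an allocator that was populated during bootstrap.
        page_manager(frame_allocator &&frames, map_location used_blocks_at)
            : m_frames(std::move(frames)), m_used_at(used_blocks_at) {}

    private:
        frame_allocator m_frames;
        map_location m_used_at;
    };

The bootstrap would then populate the allocator, map the used blocks,
call finalize(), and std::move() the allocator into the page_manager.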
* Heap manager can now manage non-contiguous blocks of memory (currently
all blocks are at the maximum block size)
* Fix a bug where the heap manager would try to buddy-merge max-sized
blocks
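A minimal, self-contained illustration of that guard (std::set stands in
for the heap's real free lists; max_order and min_block are assumed
values):

    #include <cstdint>
    #include <set>
    #include <utility>

    constexpr unsigned  max_order = 16;      // assumed maximum block order
    constexpr uintptr_t min_block = 0x1000;  // assumed minimum block size

    using free_set = std::set<std::pair<uintptr_t, unsigned>>; // (address, order)

    void free_and_merge(free_set &free_blocks, uintptr_t addr, unsigned order) {
        // The fix: stop once a block reaches max_order. A max-sized block has
        // no buddy, so trying to merge it computed a bogus neighbor address.
        while (order < max_order) {
            uintptr_t buddy = addr ^ (min_block << order);
            auto it = free_blocks.find({buddy, order});
            if (it == free_blocks.end())
                break;                        // buddy not free, nothing to merge
            free_blocks.erase(it);            // absorb the buddy
            addr = addr < buddy ? addr : buddy;
            ++order;
        }
        free_blocks.insert({addr, order});
    }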
modules.yaml now has an optional defines: list per module that adds
preprocessor definitions to the build scripts. Also added a --debug flag
to qemu.sh to run QEMU's debugger host.
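For illustration only, a guess at how a module entry with a defines:
list might look; the actual modules.yaml schema may differ:

    modules:
      - name: kernel      # hypothetical module entry
        defines:          # passed to the build scripts as preprocessor definitions
          - ENABLE_DEBUG_LOG
          - MAX_CPUS=16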
Previously CPU state was passed on the stack, but the compiler is
allowed to clobber values passed to it on the stack in the SysV x86 ABI.
So now the state is left on the stack, but a pointer to it is passed
into the ISR functions.
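A hedged sketch of the change; the struct layout and names below are
illustrative, not the project's actual CPU state type:

    #include <cstdint>

    struct cpu_state {
        uint64_t r15, r14, r13, r12, r11, r10, r9, r8;
        uint64_t rdi, rsi, rbp, rbx, rdx, rcx, rax;
        uint64_t interrupt, errorcode;
        uint64_t rip, cs, rflags, rsp, ss;   // pushed by the CPU on entry
    };

    // Before, the assembly stub effectively passed this whole struct by value
    // on the stack, which the SysV ABI lets the callee clobber. Now the stub
    // leaves the saved registers in place and passes their address:
    extern "C" void isr_handler(cpu_state *state) {
        // All reads and writes go through the pointer, so the frame the stub
        // restores registers from stays intact.
        (void)state;
    }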
Under KVM we were hitting what look like out-of-order write issues
during initialization when writing to the page tables and then
immediately writing to the mapped memory. Adding a memory barrier and
an io_wait() in memory_bootstrap.cpp fixed it.
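An illustrative version of the workaround; the real code in
memory_bootstrap.cpp may differ, and mfence plus a port-0x80 write are
assumptions about how the barrier and io_wait() are implemented:

    #include <cstdint>

    // Compiler barrier plus a CPU store fence.
    inline void memory_barrier() { asm volatile ("mfence" ::: "memory"); }

    // Classic short delay: write to unused port 0x80.
    inline void io_wait() { asm volatile ("outb %%al, $0x80" : : "a"((uint8_t)0)); }

    // Install a page table entry, then make sure the store has completed
    // before the first access through the new mapping.
    inline void map_and_touch(volatile uint64_t *pte, uint64_t entry,
                              volatile uint8_t *mapped) {
        *pte = entry;        // write the new page table entry
        memory_barrier();    // force the PTE store to be globally visible
        io_wait();           // extra settle time that was needed under KVM
        *mapped = 0;         // first write through the new mapping
    }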