Remove ELF and initrd loading from the kernel. The bootloader now loads
the initial programs, as it does with the kernel. Other files that were
in the initrd are now on the ESP, and non-program files are just passed
as modules.
When loading ELF program headers (as opposed to sections), the
`file_size` of the data in the file may be smaller than the `mem_size`
of the segment to be loaded in memory. Don't blindly copy `mem_size`
bytes from the ELF file; copy only `file_size` bytes, then zero the rest.
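A minimal sketch of the corrected copy, with a hypothetical
`program_header` struct standing in for the loader's actual types:

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical subset of an ELF64 program header; field names are
// illustrative, not the loader's actual definitions.
struct program_header {
    uint64_t offset;     // offset of the segment's data within the ELF file
    uint64_t file_size;  // bytes actually present in the file
    uint64_t mem_size;   // bytes the segment occupies in memory
};

// Copy only the file_size bytes that exist in the image, then zero the
// remainder of the segment (mem_size >= file_size, e.g. for .bss).
void load_segment(const void *elf_data, const program_header &ph, void *dest)
{
    const uint8_t *src = static_cast<const uint8_t *>(elf_data) + ph.offset;
    memcpy(dest, src, ph.file_size);
    memset(static_cast<uint8_t *>(dest) + ph.file_size, 0,
           ph.mem_size - ph.file_size);
}
```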
Tags: elf loader
When `page_entry_iterator` became a template, the static shifts that
translate a virtual address into table indices were turned into a for
loop, and that loop was computing the indices backwards (i.e., the
PML4E index was really the PTE index, and so on).
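For reference, a sketch of the intended per-level index computation
(illustrative only; this is not the project's actual iterator code):

```cpp
#include <cstdint>

// In a 4-level x86_64 page walk the shift shrinks as the level gets
// deeper, so a loop running from the PML4 downwards must start with the
// largest shift: level 0 = PML4E (bits 47:39), level 1 = PDPTE,
// level 2 = PDE, level 3 = PTE (bits 20:12).
inline uint64_t table_index(uint64_t virt, unsigned level)
{
    unsigned shift = 39 - 9 * level;
    return (virt >> shift) & 0x1ff;
}
```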
Tags: paging
* When using the non-allocating version of `get_uefi_mappings`, the
  length was not getting set. Reworked this function.
* Calling `build_kernel_mem_map` from `bootloader_main_uefi` caused it to
  get an out-of-date map key. Moved this call into `efi_main`, right
  before exiting boot services (see the sketch below).
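The ordering constraint, sketched with standard EDK2-style UEFI names
(the bootloader's own wrappers and helpers may differ):

```cpp
// Assumes EDK2-style UEFI declarations (e.g. from <Uefi.h>). The map key
// returned by GetMemoryMap is invalidated by any subsequent allocation,
// so the final GetMemoryMap must happen immediately before
// ExitBootServices.
EFI_STATUS exit_boot_services(EFI_HANDLE image, EFI_BOOT_SERVICES *bs)
{
    UINTN map_size = 0, map_key = 0, desc_size = 0;
    UINT32 desc_version = 0;
    EFI_MEMORY_DESCRIPTOR *map = nullptr;

    // First call just reports the required buffer size.
    bs->GetMemoryMap(&map_size, map, &map_key, &desc_size, &desc_version);

    // The allocation below changes the map, so pad the buffer a little.
    map_size += 2 * desc_size;
    bs->AllocatePool(EfiLoaderData, map_size,
                     reinterpret_cast<void **>(&map));

    // Re-read the map for a fresh key, then exit immediately.
    EFI_STATUS status = bs->GetMemoryMap(&map_size, map, &map_key,
                                         &desc_size, &desc_version);
    if (EFI_ERROR(status))
        return status;

    return bs->ExitBootServices(image, map_key);
}
```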
The page table code had been copied mostly verbatim from the kernel, and
was a dense mess. I abstracted the `page_table_indices` class and the old
loop behavior of `map_in` into a new `page_entry_iterator` class, making
both `map_pages` and the initial offset mapping code much cleaner.
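To illustrate the shape this takes (not the project's actual class), a
per-level index iterator can decode the starting address once and then
just increment with carry:

```cpp
#include <cstdint>

// Illustrative sketch only; the real page_entry_iterator's interface is
// not reproduced here. Decode the starting address into one index per
// level, then advance by incrementing the deepest index and carrying
// into the parent levels, instead of redoing the shift math on every
// iteration of a mapping loop.
template <unsigned Depth = 4>
class entry_index_iterator
{
public:
    explicit entry_index_iterator(uint64_t virt) {
        for (unsigned i = 0; i < Depth; ++i)
            m_index[i] = (virt >> (39 - 9 * i)) & 0x1ff;
    }

    // Step to the next leaf entry, carrying into parent tables as needed.
    void increment() {
        for (int i = static_cast<int>(Depth) - 1; i >= 0; --i) {
            if (++m_index[i] < 512) return;
            m_index[i] = 0;
        }
    }

    uint16_t operator[](unsigned level) const { return m_index[level]; }

private:
    uint16_t m_index[Depth];
};
```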
Tags: vmem paging
Set up initial page tables for both the offset-mapped area and the
loaded kernel code and data.
* Got rid of the `loaded_elf` struct - the loader now runs after the
initial PML4 is created and maps the ELF sections itself.
* Copied in `page_table` and `page_table_indices` from the kernel; this
  still needs to be cleaned up and extracted into shared code.
* Added `page_table_cache` to the kernel args to pass along free pages
  that can be used for initial page tables (one possible shape is
  sketched below).
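A hypothetical shape for that handoff (the real kernel args struct isn't
shown here):

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical sketch: the bootloader pre-allocates a handful of
// physical pages and hands them to the kernel so it can build its first
// page tables before any frame allocator exists. Field names are
// illustrative.
struct page_table_cache
{
    uintptr_t first_page;  // physical address of the first free page
    size_t count;          // number of contiguous 4 KiB pages provided
};
```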
Tags: paging
* Regions not aligned to the block size could fail to be found. The
  bootloader now loads them aligned.
* Consolidating used frame blocks in the bootstrap meant these would
  have been impossible to free as address space later.
* `mark_permanent` wasn't actually removing blocks from the free list.
Removed the frame allocation logic from `page_manager` and replaced it
with an instance of `frame_allocator`. This had several major ripple
effects:
- `memory_initalize()` had to change to support this new world
- Where to map used blocks is now passed as a flag, since blocks don't
  track their virtual address anymore
- Instead of the complicated "find N contiguous pages that can be
  mapped in with one page table" search, the bootloader now just gives
  us some pages (currently 64) to use both for tables and scratch space
- `frame_allocator` initialization was split into two steps so that used
  blocks can be mapped before being `std::move()`d over (see the sketch
  below)
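Purely as an illustration of that two-step pattern (the kernel's real
`frame_allocator` interface isn't reproduced here, and `adopt` is a
made-up name):

```cpp
#include <utility>
#include <vector>
#include <cstdint>
#include <cstddef>

// Illustrative only. Step 1: the bootstrap code still owns the block
// lists and can map the used blocks. Step 2: ownership is moved into
// the long-lived allocator.
struct frame_block { uintptr_t base; size_t count; };

class frame_allocator_sketch
{
public:
    // Step 2: adopt the block lists the bootstrap code prepared and
    // already mapped.
    void adopt(std::vector<frame_block> free_blocks,
               std::vector<frame_block> used_blocks)
    {
        m_free = std::move(free_blocks);
        m_used = std::move(used_blocks);
    }

private:
    std::vector<frame_block> m_free;
    std::vector<frame_block> m_used;
};
```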