Overhaul memory allocation model

This commit makes several fundamental changes to memory handling:

- the frame allocator is now only an allocator for free frames, and does
  not track used frames.
- the frame allocator now stores its free list inside the free frames
  themselves, as a hybrid stack/span model.
  - This has the implication that all frames must currently fit within
    the offset area.
- kutil has a new allocator interface, which is the only allowed way for
  any code outside of src/kernel to allocate. Code under src/kernel
  _may_ use new/delete, but should prefer the allocator interface.
- the heap manager has become heap_allocator, which is merely an
  implementation of kutil::allocator which doles out sections of a given
  address range.
- the heap manager now only writes block headers when necessary,
  avoiding page faults until they're actually needed.
- page_manager now has a page fault handler, which checks with the
  address_manager to see if the faulting address is known, and provides
  a frame mapping if it is, allowing the heap allocator to work with its
  entire address range (currently 32GiB) from the start.
Author: Justin C. Miller
Date:   2019-04-16 01:13:09 -07:00
Parent: fd1adc0262
Commit: 6302e8b73a
33 changed files with 782 additions and 1010 deletions

```diff
@@ -11,7 +11,6 @@ using memory::kernel_offset;
 using memory::page_offset;
 using memory::page_mappable;
-extern kutil::frame_allocator g_frame_allocator;
 extern kutil::address_manager g_kernel_address_manager;
 page_manager g_page_manager(
 	g_frame_allocator,
```
```diff
@@ -40,7 +39,7 @@ struct free_page_header
 page_manager::page_manager(
-	kutil::frame_allocator &frames,
+	frame_allocator &frames,
 	kutil::address_manager &addrs) :
 	m_page_cache(nullptr),
 	m_frames(frames),
```
```diff
@@ -341,13 +340,31 @@ page_manager::unmap_table(page_table *table, page_table::level lvl, bool free, p
 void
 page_manager::unmap_pages(void* address, size_t count, page_table *pml4)
 {
-	if (!pml4) pml4 = get_pml4();
-	page_out(pml4, reinterpret_cast<uintptr_t>(address), count, true);
-	if (address >= kernel_offset) {
-		m_addrs.free(address, count);
+	if (!pml4)
+		pml4 = get_pml4();
+
+	uintptr_t iaddr = reinterpret_cast<uintptr_t>(address);
+	page_out(pml4, iaddr, count, true);
+	if (iaddr >= kernel_offset) {
+		// TODO
+		// m_addrs.free(address, count);
 	}
 }
 
+bool
+page_manager::fault_handler(uintptr_t addr)
+{
+	if (!m_addrs.contains(addr))
+		return false;
+
+	uintptr_t page = addr & ~0xfffull;
+	bool user = addr < kernel_offset;
+	map_pages(page, 1, user);
+	return true;
+}
+
 void
 page_manager::check_needs_page(page_table *table, unsigned index, bool user)
 {
```