Added the kutil::map collection, an open-addressing Robin Hood hash map
with backward-shift deletion. Also added hash.h with templated
implementations of the 64-bit FNV-1a hash function, and pulled the log2
function out of the heap_allocator code into the new util.h.
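For reference, a minimal standalone sketch of the 64-bit FNV-1a hash,
using the standard offset basis and prime (the templated kutil version's
exact interface may differ):

```cpp
#include <cstddef>
#include <cstdint>

// FNV-1a, 64-bit: start from the offset basis, then for each byte xor it in
// and multiply by the FNV prime.
inline uint64_t fnv1a_64(const void *data, size_t len)
{
    const uint8_t *p = static_cast<const uint8_t *>(data);
    uint64_t hash = 0xcbf29ce484222325ull;   // FNV-1a 64-bit offset basis
    while (len--) {
        hash ^= *p++;                        // xor in the next byte
        hash *= 0x00000100000001b3ull;       // multiply by the 64-bit FNV prime
    }
    return hash;
}
```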
The tests clearly haven't even been built in a while. I've added a
helper script to the project root. Also added a kassert() handler that
allows tests to either catch asserts or fail on them.
Multiple changes regarding channels. Mainly, channels are now
stream-based and can handle partial reads and writes. Channels now use
the kernel buffers area via the related buffer_cache. Added a fake
stdout stream channel and a kernel task that reads its contents to the
screen, in preparation for handing channels to processes as
stdin/stdout.
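A rough sketch of what stream-style, partial-write handling looks like
from the caller's side; the channel type and write function here are
illustrative stand-ins, not the kernel's actual interface:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Illustrative stand-in for a stream channel: a write may accept fewer bytes
// than offered when the underlying buffer is nearly full.
struct channel { uint8_t buf[256]; size_t used = 0; };

size_t channel_write(channel &c, const uint8_t *data, size_t len)
{
    size_t n = std::min(len, sizeof(c.buf) - c.used);   // possibly partial
    std::memcpy(c.buf + c.used, data, n);
    c.used += n;
    return n;
}

// Callers treat the channel as a byte stream and loop until all data is sent.
void write_all(channel &c, const uint8_t *data, size_t len)
{
    size_t sent = 0;
    while (sent < len)
        sent += channel_write(c, data + sent, len - sent);
}
```

A real caller would block or reschedule when a write accepts zero bytes
rather than spinning.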
When committing an area of vmem by splitting it off of a larger block,
the returned block was being set to the unknown state, while the
leading block was incorrectly set to the desired state instead.
Also remove an extra unused thread ctor.
Added unreserve and uncommit functionality to vm_space. Also
implemented the documented-but-missing ability to pass a 0 address to
get back any address region of the given size.
We were previously allocating kernel stacks as large objects on the
heap. Now track which regions of the kernel stack area are in use and
allocate stacks from there. Doing so also required actually
implementing vm_space::commit(). This still needs more work.
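A minimal sketch of the tracking idea, assuming a fixed-size slot
bitmap over a dedicated stack region; the base address, stack size, and
slot count are placeholders, not the kernel's actual values:

```cpp
#include <cstddef>
#include <cstdint>

constexpr uintptr_t stacks_base = 0xffffc00000000000ull; // placeholder area base
constexpr size_t    stack_size  = 0x4000;                // placeholder: 16 KiB per stack
constexpr size_t    max_stacks  = 64;

static uint64_t used_slots = 0;  // bit i set means slot i is in use

// Hand out the next unused slot in the kernel stack area, or 0 if full.
uintptr_t allocate_stack()
{
    for (size_t i = 0; i < max_stacks; ++i) {
        if (!(used_slots & (1ull << i))) {
            used_slots |= (1ull << i);
            // A real implementation would also commit the range via vm_space.
            return stacks_base + i * stack_size;
        }
    }
    return 0;
}

void free_stack(uintptr_t addr)
{
    used_slots &= ~(1ull << ((addr - stacks_base) / stack_size));
}
```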
When moving the heap allocation order calculation to use __builtin_clz,
we based the calculation on the requested length alone, not the total
length including the memory header size.
Previously we used a less efficient loop to find the appropriate block
size order; now we use __builtin_clz, which should use the CPU's
count-leading-zeros instruction when one is available.
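As a sketch of the approach, including the header-size fix above (the
header layout and minimum order are assumptions):

```cpp
#include <cstddef>
#include <cstdint>

struct mem_header { uint64_t prev, next; };   // assumed header layout
constexpr unsigned min_order = 6;             // assumed smallest block: 64 bytes

inline unsigned order_for(size_t length)
{
    // Include the header in the size we round up, per the fix above.
    size_t total = length + sizeof(mem_header);

    // Round up to the next power of two: for total >= 2, the order is
    // 64 minus the number of leading zero bits in (total - 1).
    unsigned order = 64 - __builtin_clzll(total - 1);
    return order < min_order ? min_order : order;
}
```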
The heap allocator is a buddy allocator that deals in power-of-two
block sizes. Previously it referred to both a byte count and a
power-of-two order as a 'size'. Rename the functions and variables that
actually refer to orders to use 'order'.
Tags: pedantry
The bip buffer implementation was not initializing its write-state
member, which caused logging to always fail when the logger was not in
immediate mode.
Removing the `allocator.h` file defining the `kutil::allocator`
interface, now that explicit allocators are not being passed around.
Also removed the unused `frame_allocator::raw_allocator` class and
`kutil::invalid_allocator` object.
Tags: memory
Before global constructors were working, I had added a hack to zero out
the static vector member of `slab_allocated`. Now that they are
working, this can be removed.
Look up the list of global constructors that the linker outputs, and
run them all. This required creating the `kutil::no_construct` template
for objects that are constructed before the global constructors run.
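A minimal sketch of the constructor walk, using the common ELF
.init_array symbol names (the kernel's linker script may name these
differently):

```cpp
// Call every global constructor the compiler emitted. The symbol names below
// follow the usual ELF .init_array convention and are assumptions here.
using ctor_fn = void (*)();
extern "C" ctor_fn __init_array_start[];
extern "C" ctor_fn __init_array_end[];

void run_global_constructors()
{
    for (ctor_fn *ctor = __init_array_start; ctor != __init_array_end; ++ctor)
        (*ctor)();
}
```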
Also split the `memory_initialize` function into two: one for the
objects that need to exist before the global ctors run, and one for
everything after.
Tags: memory c++
Many kernel objects had to hold references to allocators in order to
pass them on down the call chain. Remove those explicit references in
favor of `operator new` and `operator delete`, and define new `kalloc`
and `kfree` functions.
Also remove `slab_allocator` and replace it with a new mixin for slab
allocation, `slab_allocated`, that overrides `operator new` and
`operator delete` for its subclass.
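A rough sketch of the mixin's shape; the kalloc/kfree stand-ins are
included only so the example is self-contained:

```cpp
#include <cstddef>
#include <cstdlib>

// Stand-ins so the sketch compiles; in the kernel these would be the real
// kalloc/kfree backed by the heap allocator.
inline void * kalloc(std::size_t n) { return std::malloc(n); }
inline void   kfree(void *p)        { std::free(p); }

// CRTP mixin: deriving as `class thread : public slab_allocated<thread>`
// routes the subclass's allocation through the slab machinery.
template <typename T>
struct slab_allocated
{
    static void * operator new(std::size_t size) { return kalloc(size); }
    static void   operator delete(void *p)       { kfree(p); }
};
```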
Remove some related headers that are no longer used: `buddy_allocator.h`
and `address_manager.h`.
Tags: memory
`kutil::vector` was calling `operator delete []` on memory that had not
been allocated with `operator new []`, and so ended up freeing the
wrong pointer.
Tags: bug memory allocator
This commit makes several fundamental changes to memory handling:
- the frame allocator is now only an allocator for free frames, and does
not track used frames.
- the frame allocator now stores its free list inside the free frames
  themselves, as a hybrid stack/span model; see the sketch after this
  list.
  - This implies that all frames must currently fit within the offset
    area.
- kutil has a new allocator interface, which is the only allowed way for
any code outside of src/kernel to allocate. Code under src/kernel
_may_ use new/delete, but should prefer the allocator interface.
- the heap manager has become heap_allocator, which is merely an
  implementation of kutil::allocator that doles out sections of a given
  address range.
- the heap manager now only writes block headers when necessary,
  avoiding page faults until the relevant pages are actually needed.
- page_manager now has a page fault handler, which checks with the
  address_manager to see whether the address is known and, if so,
  provides a frame mapping, letting the heap manager work with its
  entire address range (currently 32 GiB) from the start.
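As a rough illustration of the free-list-in-frames idea mentioned above
(the layout and names here are assumptions, not the actual frame_block
structure):

```cpp
#include <cstdint>

constexpr uint64_t frame_size = 0x1000;

// Each contiguous run of free frames keeps a small header in its first frame
// (reachable through the offset area); the runs form a singly linked list.
struct free_span
{
    uintptr_t  base;   // physical address of the run's first frame
    uint64_t   count;  // free frames remaining in the run
    free_span *next;   // header of the next run, also inside a free frame
};

// Pop one frame from the head run; the frame holding the header goes out last,
// at which point the head advances to the next run.
inline uintptr_t pop_frame(free_span *&head)
{
    uintptr_t frame = head->base + (--head->count) * frame_size;
    if (head->count == 0)
        head = head->next;
    return frame;
}
```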
Additionally, several bug fixes were needed to allow this:
- frame_allocator was allocating its frame_blocks from the heap, causing
  a circular dependency. Now it takes a page for itself directly when
  needed.
- frame_allocator::free was putting any trailing pages of a block back
  into the list immediately after the current block, so they would be
  the next block iterated over.
- frame_allocator::free was updating the address it was looking for
  after freeing some pages, but not the remaining count, so it would
  eventually free every page after the initial address.
* Regions not aligned to the block size could fail to be found; have
  the bootloader load them aligned.
* Consolidating used frame blocks in the bootstrap meant these would
  have been impossible to free as address space.
* mark_permanent wasn't actually removing blocks from the free list.
Removed the frame allocation logic from page_manager and replaced it
with an instance of frame_allocator. This had several major ripple
effects:
- memory_initialize() had to change to support this new world.
- Where to map used blocks is now passed as a flag, since blocks no
  longer track their virtual address.
- Instead of the complicated "find N contiguous pages that can be
  mapped in with one page table" search, we now just have the
  bootloader give us some (currently 64) pages to use both for page
  tables and scratch space.
- frame_allocator initialization was split into two steps, to allow
  mapping used blocks before std::move()ing them over.
* The heap manager can now manage non-contiguous blocks of memory
  (currently all at the max block size only).
* Fix a bug where the heap manager would try to buddy-merge max-sized
  blocks (see the sketch below).
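A minimal sketch of the guard that fix implies (max_order is a
placeholder, not the allocator's actual limit):

```cpp
#include <cstdint>

constexpr unsigned max_order = 22;   // placeholder for the allocator's largest order

// A block already at max_order has no buddy, so freeing one must skip the
// merge attempt entirely; any smaller block's buddy differs from it only in
// the address bit corresponding to its order.
inline bool should_try_merge(unsigned order) { return order < max_order; }
inline uintptr_t buddy_of(uintptr_t addr, unsigned order) { return addr ^ (1ull << order); }
```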