[kernel] Fix some page allocation bugs

Fixing two page allocation issues I came across while debugging:

- Added a spinlock to the page_table static page cache, to keep
  multiple CPUs from grabbing the same page (a rough sketch of the
  guarded cache follows the diff below). This cache should probably
  just be made into per-CPU caches.
- Fixed a bitwise math bug ("1" instead of "1ull" when working with
  64-bit values): a 32-bit "1" shifted by 32 or more bits overflows,
  so pages were never marked as allocated when allocating 32 or more
  at once (see the sketch after this list).
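
A minimal sketch of the second fix, assuming the allocator builds a bitmask
covering the requested page count; the function names and the exact mask
shape here are illustrative, not the kernel's actual code:

    #include <cstdint>

    // Sketch of the bug: building a bitmask covering `count` pages. The
    // literal "1" is a 32-bit int, so shifting it by 32 or more bits is
    // undefined; on x86 the shift count wraps and the mask comes out wrong
    // (zero for a count of exactly 32), so the requested pages never get
    // marked as allocated.
    inline uint64_t alloc_mask_buggy(unsigned count) { return (1 << count) - 1; }

    // Fix: "1ull" makes the whole expression 64-bit, valid for counts up to 63.
    inline uint64_t alloc_mask_fixed(unsigned count) { return (1ull << count) - 1; }
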
Author: Justin C. Miller
Date:   2021-12-31 20:35:11 -08:00
parent  348b64ebb0
commit  99de8454cd
3 changed files with 23 additions and 10 deletions


@@ -5,6 +5,7 @@
 #include <stdint.h>
 #include "enum_bitfields.h"
 #include "kernel_memory.h"
+#include "kutil/spinlock.h"
 
 struct free_page_header;
@@ -141,6 +142,7 @@ struct page_table
 static free_page_header *s_page_cache; ///< Cache of free pages to use for tables
 static size_t s_cache_count; ///< Number of pages in s_page_cache
+static kutil::spinlock s_lock; ///< Lock for shared page cache
 
 /// Get an entry in the page table as a page_table pointer
 /// \arg i Index of the entry in this page table
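
How the new lock is used isn't shown in this hunk; presumably it serializes
pops and pushes on the shared cache, roughly as in the sketch below. The
pop_cached_page name, the acquire()/release() interface, and the free-list
link are assumptions, not taken from the actual kutil::spinlock or page_table
code:

    // Hypothetical sketch (not the kernel's actual code) of how a pop from
    // the shared cache might now be guarded by s_lock.
    free_page_header * page_table::pop_cached_page()
    {
        s_lock.acquire();                  // assumed kutil::spinlock interface
        free_page_header *page = s_page_cache;
        if (page) {
            s_page_cache = page->next;     // assumes an intrusive free-list link
            --s_cache_count;
        }
        s_lock.release();
        return page;
    }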