[Kos-cvs] kos/modules/pmm Makefile, 1.7, 1.8 _pmm.h, 1.14, 1.15 _pmm_get_at_addr.c, 1.4, 1.5 _pmm_get_page.c, 1.2, 1.3 _pmm_init.c, 1.10, 1.11 _pmm_put_page.c, 1.5, 1.6 pmm.c, 1.12, 1.13 pmm.h, 1.12, 1.13

thomas at kos.enix.org
Tue Dec 28 19:44:47 CET 2004


Update of /var/cvs/kos/kos/modules/pmm
In directory the-doors:/tmp/cvs-serv10813/modules/pmm

Modified Files:
	Makefile _pmm.h _pmm_get_at_addr.c _pmm_get_page.c _pmm_init.c 
	_pmm_put_page.c pmm.c pmm.h 
Log Message:
2004-12-28  Thomas Petazzoni  <thomas at crazy.kos.nx>

	* modules/x86/task/task.c: Try to restrict access to exported
	symbols.

	* modules/x86/task/_thread_cpu_context.c: Move to the new PMM
	system.

	* modules/x86/task/Makefile (all): arch_task.ro instead of
	arch-task.ro.

	* modules/x86/mm/_team_mm_context.c: More information.

	* modules/x86/mm/_mm.h, modules/x86/mm/mm.c, modules/x86/mm/_rmap.c,
	modules/x86/mm/_vmap.c: The new VMAP/RMAP system. We also make
	sure access to all exported functions is restricted to the VMM
	module. 

	* modules/x86/mm/Makefile (all): arch_mm.ro instead of
	arch-mm.ro. 

	* modules/x86/lib/Makefile (all): arch_lib.ro instead of
	arch-lib.ro.

	* modules/x86/internals.h: More definitions for the address space
	configuration.

	* modules/vmm/vmm.h (struct address_space): Add a mutex and a
	spinlock to protect address space.

	* modules/vmm/vmm.c: Restrict access to some exported
	functions. More work has to be done in this area.

	* modules/vmm/_vmm_map.c: Part of the new vmap system.

	* modules/vmm/_vmm_as.c: Take and release the address space mutex
	where appropriate. This is only a first attempt; more thought is
	needed here.

	* modules/task/task.h: Make sure DOXYGEN doesn't try to analyze
	the #if stuff, because it doesn't like it.

	* modules/task/_task_utils.c (show_all_thread_info): If team is
	NULL, display the threads of all teams.

	* modules/scheduler/synchq.h: Avoid inclusion of task.h.

	* modules/pmm/pmm.c: New PMM system.

	* modules/pmm/_pmm_put_page.c: New PMM system.

	* modules/pmm/_pmm_init.c: New PMM system.

	* modules/pmm/_pmm_get_page.c: New PMM system.

	* modules/pmm/_pmm_get_at_addr.c: New PMM system.

	* modules/pmm/_pmm.h: struct gpfme is now private.

	* modules/pmm/pmm.h: struct gpfme is now private (migrated to
	_pmm.h). 

	* modules/pmm/Makefile (OBJS): New PMM system, with reduced
	functionality.

	* modules/kos/spinlock.h: New type spinlock_flags_t, which should
	be used instead of k_ui32_t for spinlock flags.
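
	As an illustration of the intended usage (a sketch only, mirroring
	the calls visible in the pmm diffs below; the lock object and the
	protected data are assumptions):

	  spinlock_flags_t flags;

	  write_spin_lock(some_lock, flags);   /* `flags' receives the saved state */
	  /* ... access the data protected by some_lock ... */
	  write_spin_unlock(some_lock, flags); /* restores the saved state */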

	* modules/kmem/_kvmem_utils.c: Migration to the new PMM
	system and various cleanups.

	* modules/kmem/_kvmem_init.c: Migration to the new PMM
	system and various cleanups.

	* modules/kmem/_kslab_cache_grow.c: Migration to the new PMM
	system and various cleanups.

	* modules/kmem/_kslab_cache_free.c: Migration to the new PMM
	system, and various cleanups.

	* modules/kitc/_kmutex.c: Add DEBUG_PRINT3 calls to trace mutex
	lock/unlock/trylock.

	* modules/init/_init_modules.c (init_modules): Display a message
	when initializing modules.

	* modules/ide/_ide.c: Various cleanups.

	* modules/fs/fat/_fat.c: Various cleanups.

	* modules/fs/devfs/devfs.c: Various cleanups, including whitespace
	cleanup.

	* modules/debug/debug.h: Add the DEBUG_PRINT1, DEBUG_PRINT2 and
	DEBUG_PRINT3 macros. Maybe there's a cleaner way to do this. David?
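
	As a rough idea of the intent (a hypothetical sketch, not the
	actual debug.h code; the printing routine name is an assumption):

	  #if DEBUG_LEVEL >= 1
	  #  define DEBUG_PRINT1(fmt, args...) debug_printf(fmt, ##args)
	  #else
	  #  define DEBUG_PRINT1(fmt, args...) do { } while (0)
	  #endif
	  /* DEBUG_PRINT2 and DEBUG_PRINT3 would follow the same pattern,
	     gated on DEBUG_LEVEL >= 2 and DEBUG_LEVEL >= 3. */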

	* modules/debug/debug.c (init_module_level0): Initialize the
	backtracing stuff a little later, so that debugging messages are
	available during this initialization.

	* modules/debug/bt.c (_init_backtracing_stuff): bt_next is no
	longer a valid candidate for determining whether
	-fomit-frame-pointer was selected, because of gcc optimizations.
	Use bt_init instead.

	* modules/Makefile (doc): Add a target that generates the doxygen
	documentation. 

	* loader/mod.h (EXPORT_FUNCTION_RESTRICTED): Change the symbol
	names generated by the macros so that they include the name of the
	target module (the one allowed to import the exported symbol).
	This is needed in order to export the same symbol to multiple
	modules: previously, the RESTRICTED system generated identical
	symbol names for a given symbol exported to multiple modules.
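
	To illustrate the naming scheme only (a hypothetical sketch, not
	the actual loader/mod.h macro):

	  #define EXPORT_FUNCTION_RESTRICTED(sym, target) \
	          extern char __export_ ## sym ## _to_ ## target;

	  /* Exporting the same function to two modules now yields two
	     distinct symbols, e.g. __export_foo_to_vmm and
	     __export_foo_to_kmem, instead of a single colliding name. */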

	* doc/testingfr.tex: A big update to this documentation; not
	finished yet. The English version should also be updated.

	* TODO: Some new things to do.

	* MkVars (CFLAGS): Pass the DEBUG_LEVEL Makefile variable to the C
	files. In each modules/.../Makefile, we can set a
	DEBUG_LEVEL=value that will set the level of verbosity of the
	module. Macros named DEBUG_PRINT1, DEBUG_PRINT2, DEBUG_PRINT3 have
	been added.
	(MODULES): Change all '-' to '_', because of the new
	EXPORT_FUNCTION_RESTRICTED system. This system creates symbols
	that contain the name of a module (the one allowed to import the
	exported symbol), and the '-' character is not allowed inside C
	identifiers, so we use '_' instead.
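
	The underlying constraint is simply the C identifier syntax (the
	symbol names below are illustrative only):

	  /* '-' cannot appear in a C identifier: a generated name like
	   *   __export_foo_to_arch-mm
	   * would be parsed as the expression "__export_foo_to_arch - mm",
	   * hence the renaming. With '_' the generated symbol is valid: */
	  extern char __export_foo_to_arch_mm;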

	* CREDITS: Add Fabrice Bellard to the CREDITS, for his Qemu
	emulator.



Index: pmm.c
===================================================================
RCS file: /var/cvs/kos/kos/modules/pmm/pmm.c,v
retrieving revision 1.12
retrieving revision 1.13
diff -u -d -r1.12 -r1.13
--- pmm.c	18 Aug 2003 17:05:33 -0000	1.12
+++ pmm.c	28 Dec 2004 18:44:45 -0000	1.13
@@ -16,11 +16,16 @@
 
 DECLARE_INIT_SYMBOL(init_module_level1, INIT_LEVEL1);
 
-EXPORT_FUNCTION(get_gpfme_at_phys_addr);
-EXPORT_FUNCTION(get_gpfme_at_virt_addr);
-EXPORT_FUNCTION(put_physical_page);
-EXPORT_FUNCTION(get_physical_page);
-EXPORT_FUNCTION(gpfme_unlock);
-EXPORT_FUNCTION(change_gpfme_swap_status);
+EXPORT_FUNCTION(physmem_get_page);
+EXPORT_FUNCTION(physmem_put_page);
+EXPORT_FUNCTION(physmem_inc_use_cnt);
+EXPORT_FUNCTION(physmem_dec_use_cnt);
+EXPORT_FUNCTION(physmem_set_use_cnt);
+EXPORT_FUNCTION(physmem_get_ref_cnt);
+//EXPORT_FUNCTION(physmem_add_rmapping);
+//EXPORT_FUNCTION(physmem_del_rmapping);
+EXPORT_FUNCTION(physmem_get_rmapping_list);
+EXPORT_FUNCTION(physmem_commit_rmapping_list);
+EXPORT_FUNCTION(physmem_set_slab);
+EXPORT_FUNCTION(physmem_get_slab);
 EXPORT_FUNCTION(_get_gpfm_ram_map_size);
-EXPORT_FUNCTION(_gpfm_visit_list_unsafe);

Index: _pmm_init.c
===================================================================
RCS file: /var/cvs/kos/kos/modules/pmm/_pmm_init.c,v
retrieving revision 1.10
retrieving revision 1.11
diff -u -d -r1.10 -r1.11
--- _pmm_init.c	18 Aug 2003 17:05:33 -0000	1.10
+++ _pmm_init.c	28 Dec 2004 18:44:45 -0000	1.11
@@ -99,7 +99,7 @@
 __init_text int init_gpfm(kernel_parameter_t *kp)
 {
   unsigned int i;
-  paddr_t      ppage;
+  paddr_t      paddr;
   vaddr_t      gpfme_page;
 
   main_memory_size = kp->total_mem_size;
@@ -126,12 +126,11 @@
 
   /* Allocate first GPFM page */
   gpfme_page = CORE_KERNEL_VIRTUAL_ADDR - gpfm.ram_map_size;
-  ppage      = get_physical_page(PHYS_PAGE_KERNEL, PHYS_PAGE_NON_SWAPPABLE);
-  ASSERT_FATAL(ppage != 0);
-  ASSERT_FATAL(arch_map_virtual_page(NULL,
-				     gpfme_page, ppage,
-				     VM_ACCESS_WRITE | VM_ACCESS_READ)
-	       == 0);
+  paddr      = physmem_get_page(PHYS_PAGE_KERNEL, PHYS_PAGE_NON_SWAPPABLE);
+  ASSERT_FATAL(paddr != 0);
+
+  map_virtual_page(NULL, gpfme_page, paddr, VM_ACCESS_WRITE | VM_ACCESS_READ);
+
   memset((void*)gpfme_page, 0x0, PAGE_SIZE);
 
   /* Ok, got our first page => let's init its first 3 entries */
@@ -165,11 +164,12 @@
       if (cur_gpfme_end_page != gpfme_page)
 	{
 	  /* Map another page for the GPFM */
-	  ppage = get_physical_page(PHYS_PAGE_KERNEL, PHYS_PAGE_NON_SWAPPABLE);
-	  ASSERT_FATAL(ppage != 0);
-	  ASSERT_FATAL(arch_map_virtual_page(NULL, cur_gpfme_end_page,
-					     ppage, VM_ACCESS_WRITE)
-		       == 0);
+	  paddr = physmem_get_page(PHYS_PAGE_KERNEL, PHYS_PAGE_NON_SWAPPABLE);
+	  ASSERT_FATAL(paddr != 0);
+
+	  map_virtual_page(NULL, cur_gpfme_end_page, paddr,
+			   VM_ACCESS_READ | VM_ACCESS_WRITE);
+
 	  memset((void*)cur_gpfme_end_page, 0x0, PAGE_SIZE);
 	  gpfme_page = cur_gpfme_end_page;
 	}
@@ -178,5 +178,5 @@
       init_ram_gpfme(kp, i);
     }
 
-  return 0;  
+  return 0;
 }

Index: _pmm.h
===================================================================
RCS file: /var/cvs/kos/kos/modules/pmm/_pmm.h,v
retrieving revision 1.14
retrieving revision 1.15
diff -u -d -r1.14 -r1.15
--- _pmm.h	18 Aug 2003 17:05:33 -0000	1.14
+++ _pmm.h	28 Dec 2004 18:44:45 -0000	1.15
@@ -35,6 +35,72 @@
  * place.
  */
 
+// GPFM (Global Page Frame Map) Entry = GPFME
+// One GPFME corresponds to one physical page
+typedef struct gpfme gpfme_t;
+struct gpfme
+{
+  paddr_t address;              // 4 bytes (address of the page)
+
+  struct gpfme_flags_s {
+    k_ui32_t page_type :3;
+
+    /* For PHYS_PAGE_{KERNEL,USER} pages ONLY */
+    k_ui32_t swap_status :1;
+
+    /* For PHYS_PAGE_HW_MAPPING pages ONLY */
+    /** Reclaimable: upon cancellation of the hw_mapping, the page is
+     * inserted into the free list for further get_physical_page().\
+     * Unreclaimable: upon hw_mapping cancellation, the gpfme is
+     * destroyed.
+     */
+    k_ui32_t hw_mapping_reclaiming_status :1;
+
+  } flags; // 4 bytes
+
+  union {
+    /* Pointers used for linking in the free pages list */
+    struct {
+      gpfme_t *next;               // 4 bytes
+      gpfme_t *prev;               // 4 bytes
+    } free;
+
+    /* Pointers used for linking in the swappable pages lists */
+    struct {
+      gpfme_t *next;
+      gpfme_t *prev;
+    } swappable;
+
+    /* Pointers used for linking in the non swappable pages lists */
+    struct {
+      gpfme_t *next;
+      gpfme_t *prev;
+    } non_swappable;
+
+    /* Pointers used for linking in the hardware mapping pages list */
+    struct {
+      gpfme_t *next;               // 4 bytes
+      gpfme_t *prev;               // 4 bytes
+    } hw_mapping;
+  } u; // 8 bytes
+
+  // Bitmap for kernel memory allocation
+  struct kslab_slab *slab; //4B
+
+  struct rmap  *mapping_list;
+
+  /** The number of references to this page, which is equal to the
+      number of virtual mappings. When this counter reaches 0, then
+      the page can be freed. */
+  count_t ref_cnt;
+
+  /** The number of uses of the page. This counter is only used to
+      count how many entries are used inside a PT. When this counter
+      reaches 0 and the page is not a shared PT (ref_cnt==1),
+      then the PT can be freed */
+  count_t use_cnt;
+};
+
 /* Global Page Frame lists */
 struct _gpfm_lists_s {
   gpfme_t* ram_map;       // Array
@@ -67,5 +133,6 @@
   list_delete_named(gpfm.listname,item,u.listname.prev,u.listname.next)
 
 int init_gpfm(kernel_parameter_t *kp);
+struct gpfme *_physmem_get_gpfme_at_phys_addr(paddr_t paddr);
 
 #endif /* __pmm_h__ */

Index: Makefile
===================================================================
RCS file: /var/cvs/kos/kos/modules/pmm/Makefile,v
retrieving revision 1.7
retrieving revision 1.8
diff -u -d -r1.7 -r1.8
--- Makefile	8 Jul 2002 07:50:55 -0000	1.7
+++ Makefile	28 Dec 2004 18:44:45 -0000	1.8
@@ -1,4 +1,6 @@
-OBJS= _pmm_init.o _pmm_visit.o _pmm_put_page.o _pmm_get_at_addr.o _pmm_hw_mapping.o _pmm_additional.o _pmm.o _pmm_get_page.o pmm.o
+OBJS= _pmm_init.o  _pmm_put_page.o    _pmm.o _pmm_get_page.o _pmm_rmap.o pmm.o
+
+OLDOBJS=_pmm_additional.o _pmm_hw_mapping.o _pmm_visit.o _pmm_get_at_addr.o
 
 all: pmm.ro
 

Index: pmm.h
===================================================================
RCS file: /var/cvs/kos/kos/modules/pmm/pmm.h,v
retrieving revision 1.12
retrieving revision 1.13
diff -u -d -r1.12 -r1.13
--- pmm.h	8 Jun 2002 15:10:23 -0000	1.12
+++ pmm.h	28 Dec 2004 18:44:45 -0000	1.13
@@ -3,122 +3,48 @@
 
 #include <kos/system.h>
 #include <arch/mm/mm.h>
+#include <kmem/kmem.h>
 
-struct kslab_slab;
-
-// GPFM (Global Page Frame Map) Entry = GPFME
-// One GPFME correspond to one physical page
-typedef struct gpfme gpfme_t;
-struct gpfme 
-{
-  paddr_t address;              // 4 bytes (address of the page)
-
-  struct gpfme_flags_s {
+/* Page types */
 #define PHYS_PAGE_FREE       1
 #define PHYS_PAGE_KERNEL     2
 #define PHYS_PAGE_USER       3
 #define PHYS_PAGE_HW_MAPPING 4
-    k_ui32_t page_type :3;
 
-    /* For PHYS_PAGE_{KERNEL,USER} pages ONLY */
+/* Swap status */
 #define PHYS_PAGE_NON_SWAPPABLE 0
 #define PHYS_PAGE_SWAPPABLE     1
-    k_ui32_t swap_status :1;
-    
-    /* For PHYS_PAGE_HW_MAPPING pages ONLY */
+
+/* Reclaim status */
 #define PHYS_PAGE_HW_MAPPING_NON_RECLAIMABLE 0
 #define PHYS_PAGE_HW_MAPPING_RECLAIMABLE     1
-    /** Reclaimable: upon cancellation of the hw_mapping, the page is
-     * inserted into the free list for further get_physical_page().\ 
-     * Unreclaimable: upon hw_mapping cancellation, the gpfme is
-     * destroyed.
-     */
-    k_ui32_t hw_mapping_reclaiming_status :1;
 
-  } flags; // 4 bytes
-
-  union {
-    /* Pointers used for linking in the free pages list */
-    struct {
-      gpfme_t *next;               // 4 bytes
-      gpfme_t *prev;               // 4 bytes      
-    } free;
 
-    /* Pointers used for linking in the swappable pages lists */
-    struct {
-      gpfme_t *next;
-      gpfme_t *prev;
-    } swappable;
-
-    /* Pointers used for linking in the non swappable pages lists */
-    struct {
-      gpfme_t *next;
-      gpfme_t *prev;
-    } non_swappable;
-
-    /* Pointers used for linking in the hardware mapping pages list */
-    struct {
-      gpfme_t *next;               // 4 bytes
-      gpfme_t *prev;               // 4 bytes
-    } hw_mapping;
-  } u; // 8 bytes
-
-  // Bitmap for kernel memory allocation
-  struct kslab_slab *slab; //4B
+struct gpfme;
+struct rmap;
 
-  struct mapping_s  *mapping_list;
-  k_ui32_t ref_cnt; //4B -- Number of virtual mappings
-};
+paddr_t physmem_get_page(int page_type, int swap_status);
+result_t physmem_put_page(paddr_t paddr);
+result_t physmem_inc_use_cnt(paddr_t paddr);
+result_t physmem_dec_use_cnt(paddr_t paddr, count_t *use_cnt);
+result_t physmem_set_use_cnt(paddr_t paddr, count_t use_cnt);
+result_t physmem_get_ref_cnt(paddr_t paddr, count_t *ref_cnt);
+result_t physmem_get_rmapping_list(paddr_t paddr, struct rmap **list,
+				   spinlock_flags_t *flags);
+result_t physmem_commit_rmapping_list(paddr_t paddr, struct rmap *list,
+				      spinlock_flags_t flags, int count);
+result_t physmem_set_slab(paddr_t paddr, struct kslab_slab *slab);
+result_t physmem_get_slab(paddr_t paddr, struct kslab_slab **slab);
 
+#ifdef __OLD_KOS__
 /** Get the gpfme at a physical address.
- * @return gpfme or NULL (+locked)
- * @note Always call gpfme_unlock, even if gpfme is NULL !!!
  */
-gpfme_t *get_gpfme_at_phys_addr(paddr_t phys, k_ui32_t* gpfm_lock_flags);
+struct gpfme *get_gpfme_at_phys_addr(paddr_t phys);
 
 /** Get the gpfme at a virtual address (in current address space).
- * @return gpfme or NULL (+locked)
- * @note Always call gpfme_unlock, even if gpfme is NULL !!!
- */
-gpfme_t *get_gpfme_at_virt_addr(vaddr_t virt, k_ui32_t* gpfm_lock_flags);
-
-/**
- * Unlock the gpfm
- * @param gpfme may be NULL...
- * @param gpfm_lock_flags never NULL !
- * @return 0 (always)
- */
-int gpfme_unlock(gpfme_t* gpfme, k_ui32_t* gpfm_lock_flags);
-
-/** For any User or Kernel page, change its swap status.
- * @note assumes gpfm is locked. The gpfm is kept locked.
- * @return -1 on error, old swap status when Ok
  */
-int change_gpfme_swap_status(/*in*/gpfme_t *gpfme,
-			     int new_swap_status);
-
-/** Get a new free physical page.
- * @param page_type Either PHYS_PAGE_KERNEL, PHYS_PAGE_USER or
- *                  PHYS_PAGE_HW_MAPPING
- * @param swap_status Either PHYS_PAGE_SWAPPABLE or PHYS_PAGE_NON_SWAPPABLE
- * @return The physical address of the new page, or 0 on error.
- * @note BEWARE: ref_cnt of the new page is set to 0 !
- * @note SAFE (lock Ok).
- * @see put_physical_page()
- */
-paddr_t get_physical_page(int page_type, int swap_status);
+struct gpfme *get_gpfme_at_virt_addr(vaddr_t virt);
 
-/** Release a previously allocated or hardware mapped page. If the
- * page is a hardware mapping and if it has status
- * PHYS_PAGE_HW_MAPPING_RECLAIMABLE, then the corresponding page is
- * inserted into the free list for later get_physical_page().
- *
- * @param paddr the physical address of the page to release.
- * @return 0 when Ok. -1 when page not physically present and used.
- * @see get_physical_page() and @see get_hw_mapping_page()
- * @note SAFE (lock Ok).
- */
-int put_physical_page(paddr_t paddr);
 
 /** Add a new gpfme to the free pages list.
  * @param paddr Is checked agains existing physical addresses already
@@ -152,19 +78,13 @@
  */
 int declare_hw_mapping_page(paddr_t paddr);
 
-
-/**
- * For kvmem_init only, in order to declare the kernel range for the GPFM
- */
-size_t _get_gpfm_ram_map_size(void);
-
 /**
  * Callback called on each gpfme. Should return 0 if want to see next
  * gpfme, or != if must stop.
  * @note locks held
  * @see visit_gpfm_list()
  */
-typedef int (*pmm_list_visitor_t)(gpfme_t* gpfme, void* custom_param);
+typedef int (*pmm_list_visitor_t)(struct gpfme* gpfme, void* custom_param);
 /**
  * Visit the lists according to page type
  * @param page_type either PHYS_PAGE_FREE, PHYS_PAGE_KERNEL, PHYS_PAGE_USER, PHYS_PAGE_HW_MAPPING
@@ -176,3 +96,11 @@
 int _gpfm_visit_list_unsafe(int page_type, pmm_list_visitor_t visitor, void* custom_param);
 
 #endif
+
+/**
+ * For kvmem_init only, in order to declare the kernel range for the GPFM
+ */
+size_t _get_gpfm_ram_map_size(void);
+
+
+#endif
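
A minimal caller of the new interface might look like the following (a
sketch assembled from the prototypes above and the calls visible in the
_pmm_init.c hunk; the -ENOMEM error value and the surrounding function
are assumptions, not taken from the diffs):

  /* Assumes the public pmm header shown above and the arch mm header
     (for map_virtual_page and VM_ACCESS_*) are included. */
  static result_t example_grab_kernel_page(vaddr_t vaddr)
  {
    paddr_t paddr;

    /* Allocate a non-swappable kernel page... */
    paddr = physmem_get_page(PHYS_PAGE_KERNEL, PHYS_PAGE_NON_SWAPPABLE);
    if (paddr == 0)
      return -ENOMEM;                      /* error value assumed */

    /* ...and map it at the given kernel virtual address, as done in
       init_gpfm(). */
    map_virtual_page(NULL, vaddr, paddr, VM_ACCESS_READ | VM_ACCESS_WRITE);

    /* The page is later released with physmem_put_page(paddr), which
       returns -EBUSY as long as its ref_cnt is non-zero. */
    return ESUCCESS;
  }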

Index: _pmm_get_at_addr.c
===================================================================
RCS file: /var/cvs/kos/kos/modules/pmm/_pmm_get_at_addr.c,v
retrieving revision 1.4
retrieving revision 1.5
diff -u -d -r1.4 -r1.5
--- _pmm_get_at_addr.c	19 Aug 2003 00:13:33 -0000	1.4
+++ _pmm_get_at_addr.c	28 Dec 2004 18:44:45 -0000	1.5
@@ -5,13 +5,11 @@
 
 #include "_pmm.h"
 
-gpfme_t *get_gpfme_at_phys_addr(paddr_t paddr, k_ui32_t* flags)
+struct gpfme *_physmem_get_gpfme_at_phys_addr(paddr_t paddr)
 {
   int nb;
   gpfme_t  *gpfme, *result;
 
-  write_spin_lock(gpfm.lock, *flags);
-
   /* If the requested address is in main memory (or mapped in a main
      memory area), gpfme are direct-mapped */
   if (paddr < main_memory_size)

Index: _pmm_put_page.c
===================================================================
RCS file: /var/cvs/kos/kos/modules/pmm/_pmm_put_page.c,v
retrieving revision 1.5
retrieving revision 1.6
diff -u -d -r1.5 -r1.6
--- _pmm_put_page.c	11 Dec 2003 17:01:27 -0000	1.5
+++ _pmm_put_page.c	28 Dec 2004 18:44:45 -0000	1.6
@@ -5,71 +5,78 @@
 
 #include "_pmm.h"
 
-/* gpfme AND gpfm MUST be locked */
-/* Return TRUE when gpfme must be kfree() */
-static inline bool_t _release_gpfme_unsafe(gpfme_t* gpfme)
+/** Free a physical page
+ *
+ * @param paddr The physical address of the page to be freed
+ *
+ * @result ESUCCESS on success, -EBUSY if physical page is still in
+ * use
+ */
+result_t physmem_put_page(paddr_t paddr)
 {
-  /* Make sure nobody shares the page ! */
-  RETURN_VAL_IF_FAIL_VERBOSE(gpfme->ref_cnt == 0, FALSE);
+  struct gpfme *to_be_free = NULL;
+  struct gpfme *gpfme;
+  spinlock_flags_t flags_gpfm;
 
-  /* If this is a "normal" page (user or kernel data), first remove it
-     from its list */
-  if ((gpfme->flags.page_type == PHYS_PAGE_KERNEL)
-      || (gpfme->flags.page_type == PHYS_PAGE_USER))
+  gpfme = _physmem_get_gpfme_at_phys_addr(paddr);
+  if(gpfme == NULL)
     {
-      if (gpfme->flags.swap_status == PHYS_PAGE_SWAPPABLE)
-	{
-	  GPFM_LIST_DEL(swappable,gpfme);
-	}
-      else
-	{
-	  GPFM_LIST_DEL(non_swappable,gpfme);      
-	}
+      return -EINVAL;
     }
-  /* if this is a hardware mapping, remove it from the hw_mapping */
-  else if (gpfme->flags.page_type == PHYS_PAGE_HW_MAPPING)
+
+  DEBUG_PRINT2("[physmem_put_page] Freeing page @ 0x%x (gpfme=0x%x)\n",
+	       gpfme->address, (unsigned) gpfme);
+
+  write_spin_lock (gpfm.lock, flags_gpfm);
+
+  if (gpfme->ref_cnt != 0)
     {
-      GPFM_LIST_DEL(hw_mapping,gpfme);
+      return -EBUSY;
+    }
+
+  /* Page is not free */
+  ASSERT_FATAL(gpfme->flags.page_type != PHYS_PAGE_FREE);
+
+  switch (gpfme->flags.page_type)
+    {
+
+      /* If this is a "normal" page (user or kernel data), first remove it
+	 from its list */
+    case PHYS_PAGE_KERNEL:
+    case PHYS_PAGE_USER:
+      if (gpfme->flags.swap_status)
+	GPFM_LIST_DEL (swappable, gpfme);
+      else
+	GPFM_LIST_DEL (non_swappable, gpfme);
+
+      break;
+
+      /* if this is a hardware mapping, remove it from the hw_mapping */
+    case PHYS_PAGE_HW_MAPPING:
+      GPFM_LIST_DEL (hw_mapping, gpfme);
       /* Move it to the free list only if the hw_mapping is
          reclaimable. Otherwise, the gpfme element is released. */
       if (gpfme->flags.hw_mapping_reclaiming_status
 	  == PHYS_PAGE_HW_MAPPING_NON_RECLAIMABLE)
 	{
-	  return TRUE; /* kfree */
+	  to_be_free = gpfme;
 	}
+
+      break;
+
+    default:
+      FAILED_VERBOSE ("Invalid page type");
     }
-  else if (gpfme->flags.page_type == PHYS_PAGE_FREE)
-    return FALSE; // Somebody already freed the page
-  else
-    FAILED_VERBOSE("Invalid page type\n");
 
   /* Move the gpfme to the free pages' list */
   gpfme->flags.page_type   = PHYS_PAGE_FREE;
   gpfme->flags.swap_status = PHYS_PAGE_NON_SWAPPABLE;
-  GPFM_LIST_ADD(free, gpfme);
-
-  return FALSE;
-}
-
-int put_physical_page(paddr_t paddr)
-{
-  k_ui32_t flags;
-  bool_t to_be_freed;
-  gpfme_t* gpfme;
+  GPFM_LIST_ADD (free, gpfme);
 
-  gpfme = get_gpfme_at_phys_addr(paddr, & flags);
-  if (! gpfme)
-    {
-      write_spin_unlock(gpfm.lock,flags);
-      return -1;
-    }
+  write_spin_unlock(gpfm.lock, flags_gpfm);
 
-  to_be_freed = _release_gpfme_unsafe(gpfme);
-  write_spin_unlock(gpfm.lock,flags);
+  if(to_be_free != NULL)
+    kfree (to_be_free);
 
-  if (to_be_freed)
-    kfree(gpfme);
-  
-  return 0;
+  return ESUCCESS;
 }
-

Index: _pmm_get_page.c
===================================================================
RCS file: /var/cvs/kos/kos/modules/pmm/_pmm_get_page.c,v
retrieving revision 1.2
retrieving revision 1.3
diff -u -d -r1.2 -r1.3
--- _pmm_get_page.c	23 Mar 2002 15:39:13 -0000	1.2
+++ _pmm_get_page.c	28 Dec 2004 18:44:45 -0000	1.3
@@ -2,11 +2,37 @@
 #include <debug/debug.h>
 #include "_pmm.h"
 
-/* page_type MUST be either PHYS_PAGE_USER or PHYS_PAGE_KERNEL (NO
-   check) */
-paddr_t get_physical_page(int page_type, int swap_status)
+struct gpfme *_physmem_get_gpfme_at_phys_addr(paddr_t paddr)
 {
-  k_ui32_t flags;
+  gpfme_t *result;
+
+  /* If the requested address is in main memory (or mapped in a main
+     memory area), gpfme are direct-mapped */
+  if (paddr < main_memory_size)
+    {
+      result = gpfm.ram_map + (paddr >> PAGE_SIZE_SHIFT);
+      return result;
+    }
+
+  return 0;
+}
+
+/** Allocate a new physical page
+ *
+ * @param page_type The type of the page, either PHYS_PAGE_USER for a
+ * user page, or PHYS_PAGE_KERNEL for a kernel page. No check is made
+ * concerning the real use of the page.
+ *
+ * @param swap_status Tells whether the physical page is swappable or
+ * not.
+ *
+ * @result The physical address of the page
+ *
+ * @note The reference counter of the page is set to 0.
+ */
+paddr_t physmem_get_page(int page_type, int swap_status)
+{
+  spinlock_flags_t flags;
   gpfme_t* gpfme;
 
   RETURN_VAL_IF_FAIL_VERBOSE((page_type == PHYS_PAGE_KERNEL)
@@ -17,9 +43,14 @@
 
   gpfme = list_get_head_named(gpfm.free, u.free.prev, u.free.next);
   GPFM_LIST_DEL(free, gpfme);
-  INIT_GPFME(gpfme);
-  gpfme->flags.page_type   = page_type;
-  gpfme->flags.swap_status = swap_status;
+
+  gpfme->slab               = NULL;
+  gpfme->mapping_list       = NULL;
+  gpfme->ref_cnt            = 0;
+  gpfme->use_cnt            = 0;
+  gpfme->flags.page_type    = page_type;
+  gpfme->flags.swap_status  = swap_status;
+
   if (swap_status == PHYS_PAGE_SWAPPABLE)
     GPFM_LIST_ADD(swappable, gpfme);
   else
@@ -27,5 +58,176 @@
 
   write_spin_unlock(gpfm.lock, flags);
 
+  DEBUG_PRINT2("[physmem_get_page] Allocated page @ 0x%x (gpfme=0x%x)\n",
+	       gpfme->address, (unsigned) gpfme);
+
   return gpfme->address;
 }
+
+/** Increment the use count of a physical page
+ *
+ * @param paddr The physical address of the page for which the use
+ * count has to be incremented
+ *
+ * @return ESUCCESS or error code
+ */
+result_t physmem_inc_use_cnt(paddr_t paddr)
+{
+  struct gpfme *gpfme;
+  spinlock_flags_t flags_gpfm;
+
+  write_spin_lock (gpfm.lock, flags_gpfm);
+
+  gpfme = _physmem_get_gpfme_at_phys_addr(paddr);
+  if(gpfme == NULL)
+    {
+      write_spin_unlock (gpfm.lock, flags_gpfm);
+      return -EINVAL;
+    }
+
+  gpfme->use_cnt ++;
+
+  write_spin_unlock (gpfm.lock, flags_gpfm);
+
+  return ESUCCESS;
+}
+
+/** Decrement and return the use count of a physical page
+ *
+ * @param paddr The physical address of the page for which the use
+ * count has to be decremented
+ *
+ * @param use_cnt The address at which the new use counter value is
+ * returned
+ *
+ * @return ESUCCESS or error code
+ */
+result_t physmem_dec_use_cnt(paddr_t paddr, count_t *use_cnt)
+{
+  struct gpfme *gpfme;
+  spinlock_flags_t flags_gpfm;
+
+  write_spin_lock (gpfm.lock, flags_gpfm);
+
+  gpfme = _physmem_get_gpfme_at_phys_addr(paddr);
+  if(gpfme == NULL)
+    {
+      write_spin_unlock (gpfm.lock, flags_gpfm);
+      return -EINVAL;
+    }
+
+  gpfme->use_cnt --;
+  *use_cnt = gpfme->use_cnt;
+
+  write_spin_unlock (gpfm.lock, flags_gpfm);
+
+  return ESUCCESS;
+}
+
+result_t physmem_set_use_cnt(paddr_t paddr, count_t use_cnt)
+{
+  struct gpfme *gpfme;
+  spinlock_flags_t flags_gpfm;
+
+  write_spin_lock (gpfm.lock, flags_gpfm);
+
+  gpfme = _physmem_get_gpfme_at_phys_addr(paddr);
+  if(gpfme == NULL)
+    {
+      write_spin_unlock (gpfm.lock, flags_gpfm);
+      return -EINVAL;
+    }
+
+  gpfme->use_cnt = use_cnt;
+
+  write_spin_unlock (gpfm.lock, flags_gpfm);
+
+  return ESUCCESS;
+}
+
+/** Get current reference count
+ *
+ * @param paddr The physical address of the page for which we want the
+ * reference count.
+ *
+ * @param ref_cnt The address at which the reference counter will be
+ * returned
+ *
+ * @return ESUCCESS or error code
+ */
+result_t physmem_get_ref_cnt(paddr_t paddr, count_t *ref_cnt)
+{
+  struct gpfme *gpfme;
+  spinlock_flags_t flags_gpfm;
+
+  write_spin_lock (gpfm.lock, flags_gpfm);
+
+  gpfme = _physmem_get_gpfme_at_phys_addr(paddr);
+  if(gpfme == NULL)
+    {
+      write_spin_unlock (gpfm.lock, flags_gpfm);
+      return -EINVAL;
+    }
+
+  *ref_cnt = gpfme->ref_cnt;
+
+  write_spin_unlock (gpfm.lock, flags_gpfm);
+
+  return ESUCCESS;
+}
+
+/** Set slab for a physical page
+ *
+ * @param paddr Physical address of the page
+ * @param slab  The slab
+ *
+ * @return Error code
+ */
+result_t physmem_set_slab(paddr_t paddr, struct kslab_slab *slab)
+{
+  struct gpfme *gpfme;
+  spinlock_flags_t flags_gpfm;
+
+  write_spin_lock (gpfm.lock, flags_gpfm);
+
+  gpfme = _physmem_get_gpfme_at_phys_addr(paddr);
+  if(gpfme == NULL)
+    {
+      write_spin_unlock (gpfm.lock, flags_gpfm);
+      return -EINVAL;
+    }
+
+  gpfme->slab = slab;
+
+  write_spin_unlock (gpfm.lock, flags_gpfm);
+
+  return ESUCCESS;
+}
+
+/** Get slab for a physical page
+ *
+ * @param paddr Physical address of the page
+ * @param slab  Where to return the slab address
+ *
+ * @return Error code
+ */
+result_t physmem_get_slab(paddr_t paddr, struct kslab_slab **slab)
+{
+  struct gpfme *gpfme;
+  spinlock_flags_t flags_gpfm;
+
+  write_spin_lock (gpfm.lock, flags_gpfm);
+
+  gpfme = _physmem_get_gpfme_at_phys_addr(paddr);
+  if(gpfme == NULL)
+    {
+      write_spin_unlock (gpfm.lock, flags_gpfm);
+      return -EINVAL;
+    }
+
+  *slab = gpfme->slab;
+
+  write_spin_unlock (gpfm.lock, flags_gpfm);
+
+  return ESUCCESS;
+}


