[Kos-cvs] kos/modules/kmem _kslab_cache_alloc.c, 1.14, 1.15 _kslab_cache_create.c, 1.12, 1.13 _kslab_cache_free.c, 1.16, 1.17 _kslab_cache_grow.c, 1.13, 1.14 _kslab_init.c, 1.11, 1.12 _kvmem_alloc.c, 1.12, 1.13 _kvmem_init.c, 1.18, 1.19 _kvmem_utils.c, 1.18, 1.19 kmem.c, 1.16, 1.17 kmem.h, 1.14, 1.15

thomas at kos.enix.org
Tue Dec 28 19:44:41 CET 2004


Update of /var/cvs/kos/kos/modules/kmem
In directory the-doors:/tmp/cvs-serv10813/modules/kmem

Modified Files:
	_kslab_cache_alloc.c _kslab_cache_create.c _kslab_cache_free.c 
	_kslab_cache_grow.c _kslab_init.c _kvmem_alloc.c _kvmem_init.c 
	_kvmem_utils.c kmem.c kmem.h 
Log Message:
2004-12-28  Thomas Petazzoni  <thomas at crazy.kos.nx>

	* modules/x86/task/task.c: Try to restrict access to exported
	symbols.

	* modules/x86/task/_thread_cpu_context.c: Move to the new PMM
	system.

	* modules/x86/task/Makefile (all): arch_task.ro instead of
	arch-task.ro.

	* modules/x86/mm/_team_mm_context.c: More information.

	* modules/x86/mm/_mm.h, modules/x86/mm/mm.c, modules/x86/mm/_rmap.c,
	modules/x86/mm/_vmap.c: The new VMAP/RMAP system. We also make
	sure access to all exported functions is restricted to the VMM
	module.

	* modules/x86/mm/Makefile (all): arch_mm.ro instead of
	arch-mm.ro. 

	* modules/x86/lib/Makefile (all): Rename to arch_lib.ro instead of
	arch-lib.ro. 

	* modules/x86/internals.h: More definitions on the address space
	configuration. 

	* modules/vmm/vmm.h (struct address_space): Add a mutex and a
	spinlock to protect the address space.
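
	A minimal sketch of the result (field and type names here are
	assumed; only the addition of the two locks is from this
	change):

	  struct address_space {
	    struct kmutex mutex;  /* long-lived operations (may sleep) */
	    spinlock_t    lock;   /* short, non-sleeping critical sections */
	    /* ... existing mapping state ... */
	  };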

	* modules/vmm/vmm.c: Restrict access to some exported
	functions. More work has to be done in this area.

	* modules/vmm/_vmm_map.c: Part of the new vmap system.

	* modules/vmm/_vmm_as.c: Do the appropriate locking/unlocking of
	the address space mutex. It's just a first attempt; more thought
	is needed here.

	* modules/task/task.h: Make sure DOXYGEN doesn't try to analyze
	the #if stuff, because it doesn't like it.

	* modules/task/_task_utils.c (show_all_thread_info): If team is
	NULL, it means that we want to display the threads of all teams.
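
	A sketch of that dispatch (helper and list names are assumed,
	not taken from the actual code):

	  void show_all_thread_info(struct team *team)
	  {
	    if(team == NULL)
	      {
	        /* NULL team => walk every team and show its threads */
	        for(team = team_list; team != NULL; team = team->next)
	          __show_team_threads(team);
	      }
	    else
	      __show_team_threads(team);
	  }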

	* modules/scheduler/synchq.h: Avoid inclusion of task.h.

	* modules/pmm/pmm.c: New PMM system.

	* modules/pmm/_pmm_put_page.c: New PMM system.

	* modules/pmm/_pmm_init.c: New PMM system.

	* modules/pmm/_pmm_get_page.c: New PMM system.

	* modules/pmm/_pmm_get_at_addr.c: New PMM system.

	* modules/pmm/_pmm.h: struct gpfme is now private.

	* modules/pmm/pmm.h: struct gpfme is now private (migrated to
	_pmm.h). 

	* modules/pmm/Makefile (OBJS): New PMM system, with reduced
	functionality.

	* modules/kos/spinlock.h: New type spinlock_flags_t, which
	should be used instead of k_ui32_t for spinlock flags.
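
	Sketched usage (the typedef is assumed to wrap the previous
	integer type; write_spin_lock/write_spin_unlock are the
	existing macros seen in the kmem diffs below):

	  typedef k_ui32_t spinlock_flags_t;  /* saved interrupt flags */

	  spinlock_flags_t flags;
	  write_spin_lock(some_lock, flags);
	  /* ... critical section ... */
	  write_spin_unlock(some_lock, flags);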

	* modules/kmem/_kvmem_utils.c: Migration to the new PMM
	system and various cleanups.

	* modules/kmem/_kvmem_init.c: Migration to the new PMM
	system and various cleanups.

	* modules/kmem/_kslab_cache_grow.c: Migration to the new PMM
	system and various cleanups.

	* modules/kmem/_kslab_cache_free.c: Migration to the new PMM
	system, and various cleanups.
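
	The recurring pattern in these kmem files (taken from the new
	code in the _kslab_cache_grow.c diff below): physical pages are
	no longer touched through a locked gpfme_t; they are named by
	paddr_t and updated through the PMM accessors:

	  paddr_t paddr;
	  result_t result;

	  result = get_paddr_at_vaddr(NULL, virt_addr, & paddr);
	  CONCEPTION_ASSERT(result == ESUCCESS);

	  result = physmem_set_slab(paddr, new_slab);
	  CONCEPTION_ASSERT(result == ESUCCESS);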

	* modules/kitc/_kmutex.c: Add DEBUG_PRINT3 calls to show mutex
	lock/unlock/trylock.

	* modules/init/_init_modules.c (init_modules): A message is
	displayed when initializing modules.

	* modules/ide/_ide.c: Various cleanups.

	* modules/fs/fat/_fat.c: Various cleanups.

	* modules/fs/devfs/devfs.c: Various cleanups, including
	whitespace cleanup.

	* modules/debug/debug.h: Add the DEBUG_PRINT1, DEBUG_PRINT2,
	DEBUG_PRINT3 macros. Maybe there's a cleaner way to do it. David?
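
	One plausible shape for these macros, gated on the per-module
	DEBUG_LEVEL described under MkVars below (the actual macro
	bodies are not shown in this message):

	  #if DEBUG_LEVEL >= 1
	  #  define DEBUG_PRINT1(fmt, args...) __dbg_printk(fmt , ##args)
	  #else
	  #  define DEBUG_PRINT1(fmt, args...) do { } while(0)
	  #endif
	  /* DEBUG_PRINT2 and DEBUG_PRINT3 likewise, at levels 2 and 3 */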

	* modules/debug/debug.c (init_module_level0): Init the
	backtracing stuff a little later, so that we get debugging
	messages during its initialization.

	* modules/debug/bt.c (_init_backtracing_stuff): bt_next is no
	longer a valid candidate for determining whether
	-fomit-frame-pointer was selected, because of gcc
	optimizations. We use bt_init instead.

	* modules/Makefile (doc): Add a target that generates the doxygen
	documentation. 

	* loader/mod.h (EXPORT_FUNCTION_RESTRICTED): Change the symbol
	names generated by the macros so that they include the name of
	the target module (the one allowed to import the exported
	symbol). This is needed in order to export the same symbol to
	multiple modules: previously, the RESTRICTED system generated
	identical symbol names for a given symbol exported to multiple
	modules, so they collided.
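
	Illustration of the scheme (the exact macro body is assumed):

	  /* The target module name is pasted into the marker symbol,
	     so the same function can be exported to several modules
	     without the symbols colliding. */
	  #define EXPORT_FUNCTION_RESTRICTED(sym, mod) \
	    void *__export_##sym##_to_##mod = (void *) &sym;

	  /* e.g. EXPORT_FUNCTION_RESTRICTED(kvalloc, vmm) defines
	     __export_kvalloc_to_vmm. Since 'mod' ends up inside a C
	     identifier, it may not contain '-'; hence the '-' to '_'
	     module renaming in MkVars below. */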

	* doc/testingfr.tex: A big update to this documentation. Not
	finished. The English version should also be updated.

	* TODO: Some new things to do.

	* MkVars (CFLAGS): Pass the DEBUG_LEVEL Makefile variable to the C
	files. In each modules/.../Makefile, we can set a
	DEBUG_LEVEL=value that will set the level of verbosity of the
	module. Macros named DEBUG_PRINT1, DEBUG_PRINT2, DEBUG_PRINT3 have
	been added.
	(MODULES): Change all '-' to '_', because of the new
	EXPORT_FUNCTION_RESTRICTED system. This system creates symbols
	that contain the name of a module (the one allowed to import the
	exported symbol), but the '-' character is not allowed inside C
	identifiers. So we use '_' instead.

	* CREDITS: Add Fabrice Bellard to the CREDITS, for his Qemu
	emulator.



Index: kmem.h
===================================================================
RCS file: /var/cvs/kos/kos/modules/kmem/kmem.h,v
retrieving revision 1.14
retrieving revision 1.15
diff -u -d -r1.14 -r1.15
--- kmem.h	19 Aug 2003 00:13:32 -0000	1.14
+++ kmem.h	28 Dec 2004 18:44:38 -0000	1.15
@@ -4,6 +4,7 @@
 #include <kos/types.h>
 
 struct kslab_cache;
+struct kslab_slab;
 
 #define KSLAB_FLAG_BASE        20
 #define KSLAB_FLAG_MASK        (~((1<<KSLAB_FLAG_BASE)-1))

Index: _kslab_init.c
===================================================================
RCS file: /var/cvs/kos/kos/modules/kmem/_kslab_init.c,v
retrieving revision 1.11
retrieving revision 1.12
diff -u -d -r1.11 -r1.12
--- _kslab_init.c	18 Aug 2003 17:05:29 -0000	1.11
+++ _kslab_init.c	28 Dec 2004 18:44:38 -0000	1.12
@@ -33,7 +33,7 @@
   cache_of_kslab_cache.growth_in_progress = FALSE;
   spinlock_init(cache_of_kslab_cache.lock);
   cache_of_kslab_cache.nb_pages_per_slab = 1;
-  cache_of_kslab_cache.nb_elts_per_slab = 
+  cache_of_kslab_cache.nb_elts_per_slab =
     (PAGE_SIZE - sizeof(kslab_slab_t)) / sizeof(kslab_cache_t);
 
   kslab_cache_list = NULL;
@@ -42,10 +42,10 @@
   cache_of_kslab_slab = kslab_cache_create("kslab_slab cache",
 					   sizeof(kslab_slab_t),
 					   0, 0, 0);
-  
+
   if(! cache_of_kslab_slab)
     return -1;
-  
+
   if((cache_of_kslab_slab->flags & (1<<SLAB_POS)) == 0)
     FAILED_VERBOSE("Could not correctly create cache of kslab\n");
 

Index: _kslab_cache_create.c
===================================================================
RCS file: /var/cvs/kos/kos/modules/kmem/_kslab_cache_create.c,v
retrieving revision 1.12
retrieving revision 1.13
diff -u -d -r1.12 -r1.13
--- _kslab_cache_create.c	18 Aug 2003 17:05:29 -0000	1.12
+++ _kslab_cache_create.c	28 Dec 2004 18:44:38 -0000	1.13
@@ -3,7 +3,7 @@
  * http://kos.enix.org
  *
  * Create a cache in the kslab allocator
- * Cache line optimization, and different computations inspired by Linux 
+ * Cache line optimization, and different computations inspired by Linux
  * source code
  *
  * @(#) $Id$
@@ -14,10 +14,8 @@
 #include <kos/spinlock.h>
 #include <lib/std/string.h>
 
-kslab_cache_t *kslab_cache_create(char *name,
-				  size_t size,
-				  int align,
-				  int flags,
+kslab_cache_t *kslab_cache_create(char *name, size_t size,
+				  int align, int flags,
 				  k_ui32_t grow_threshold)
 {
   kslab_cache_t *new_cache;
@@ -30,7 +28,7 @@
   /* Take 'align' param into account */
   if(align != 0)
     size = ALIGN_SUP(size, align);
-  
+
   new_cache = kslab_cache_alloc(&cache_of_kslab_cache);
 
   ASSERT_FATAL(new_cache != NULL);
@@ -41,7 +39,7 @@
   new_cache->flags = flags & KSLAB_FLAG_MASK;
   spinlock_init(new_cache->lock);
   new_cache->original_size = original_size;
-  
+
   /* Small size objets => using on slab */
   if(size < (PAGE_SIZE>>7))
       new_cache->flags |= ON_SLAB;
@@ -83,21 +81,21 @@
   do
     {
       new_cache->nb_pages_per_slab += 1;
-      
+
       if(new_cache->nb_pages_per_slab >= MAX_PAGES_PER_SLAB)
 	break;
 
       waste = new_cache->nb_pages_per_slab*PAGE_SIZE;
       if(new_cache->flags & ON_SLAB)
 	waste -= sizeof(kslab_slab_t);
-      
+
       new_cache->nb_elts_per_slab = waste / size;
       waste -= new_cache->nb_elts_per_slab*size;
 
       if(new_cache->nb_elts_per_slab < MIN_ELTS_PER_SLAB)
 	continue;
-      
-      if(waste < ALIGN_SUP(sizeof(kslab_slab_t), 
+
+      if(waste < ALIGN_SUP(sizeof(kslab_slab_t),
 			   L1_CACHE_SIZE))
 	break;
 
@@ -108,10 +106,10 @@
 
   new_cache->size = size;
 
-  if(new_cache->nb_pages_per_slab*PAGE_SIZE - 
+  if(new_cache->nb_pages_per_slab*PAGE_SIZE -
      new_cache->nb_elts_per_slab*new_cache->size >= sizeof(kslab_slab_t))
     flags |= ON_SLAB;
-  
+
 #ifdef KSLAB_DEBUG
   __dbg_printk(_B_BLUE "[kslab] cache create size %d, nb_pages_per_slab %d, nb_elts_per_slab %d, lost space %d/10000, waste %d \n" _B_NORM,
 	       new_cache->size,
@@ -139,7 +137,7 @@
   /* Pre-grow the new cache in order to avoid lazy allocation, in
      order not to delay the first allocations in the new cache */
   if (__kslab_cache_grow(new_cache,   /* Allocate space for the new slab */
-			 kvalloc(new_cache->nb_pages_per_slab, 
+			 kvalloc(new_cache->nb_pages_per_slab,
 				 (new_cache->flags & SLAB_IS_SWAPPABLE),
 				 TRUE)) < 0) {
     kslab_cache_free(&cache_of_kslab_cache, new_cache);
@@ -149,6 +147,6 @@
   write_spin_lock(kernel_kslab_lock, flags);
   list_add_tail(kslab_cache_list, new_cache);
   write_spin_unlock(kernel_kslab_lock, flags);
-  
+
   return new_cache;
 }

Index: _kvmem_alloc.c
===================================================================
RCS file: /var/cvs/kos/kos/modules/kmem/_kvmem_alloc.c,v
retrieving revision 1.12
retrieving revision 1.13
diff -u -d -r1.12 -r1.13
--- _kvmem_alloc.c	29 Dec 2003 20:50:41 -0000	1.12
+++ _kvmem_alloc.c	28 Dec 2004 18:44:38 -0000	1.13
@@ -39,7 +39,7 @@
   }
 
   /* If range has exactly the same size, just move it to used list */
-  if(current_range->nb_pages == nb_pages) 
+  if(current_range->nb_pages == nb_pages)
     {
       __kvmem_remove_range_from_free_list(current_range);
       __kvmem_add_range_to_used_list(current_range);
@@ -62,15 +62,15 @@
 	RETURN(0);
       new_range->start    = current_range->start;
       new_range->nb_pages = nb_pages;
-      
+
       /* Shrink the range we found above */
       current_range->start    += nb_pages * PAGE_SIZE;
       current_range->nb_pages -= nb_pages;
-      
+
       __kvmem_add_range_to_used_list(new_range);
       virt_base_addr =  new_range->start;
     }
-  
+
   /* If we're close to the MINIMAL_SURVIVAL_RANGE_NB, allocate
              one more page, for more ranges */
   if(total_range_nb == MINIMAL_SURVIVAL_RANGE_NB) {

Index: _kvmem_init.c
===================================================================
RCS file: /var/cvs/kos/kos/modules/kmem/_kvmem_init.c,v
retrieving revision 1.18
retrieving revision 1.19
diff -u -d -r1.18 -r1.19
--- _kvmem_init.c	29 Dec 2003 20:50:41 -0000	1.18
+++ _kvmem_init.c	28 Dec 2004 18:44:38 -0000	1.19
@@ -34,11 +34,11 @@
   used_page_range_list = NULL;
   free_page_range_list = NULL;
 
-  first_page_of_range = 
+  first_page_of_range =
     (page_of_range_t *) PAGE_ALIGN_SUP(kp->allocated_memory_top_virt_addr);
-  paddr = get_physical_page(PHYS_PAGE_KERNEL, PHYS_PAGE_NON_SWAPPABLE);
+  paddr = physmem_get_page(PHYS_PAGE_KERNEL, PHYS_PAGE_NON_SWAPPABLE);
   RETURN_VAL_IF_FAIL(paddr, -1);
-  
+
   map_virtual_page(NULL,
 		   (vaddr_t)first_page_of_range,
 		   paddr,
@@ -129,7 +129,7 @@
 
 #ifdef KVMEM_DEBUG
   __kvmem_show_all_ranges();
-#endif  
+#endif
 
   return 0;
 }

Index: kmem.c
===================================================================
RCS file: /var/cvs/kos/kos/modules/kmem/kmem.c,v
retrieving revision 1.16
retrieving revision 1.17
diff -u -d -r1.16 -r1.17
--- kmem.c	6 Jun 2002 19:34:37 -0000	1.16
+++ kmem.c	28 Dec 2004 18:44:38 -0000	1.17
@@ -17,7 +17,7 @@
   if(__kvmem_init(kp) < 0)
     return -1;
   printk("] ");
-  
+
   printk("[kslab");
   if(__kslab_init() < 0)
     return -1;
@@ -68,7 +68,7 @@
 EXPORT_FUNCTION(kslab_cache_alloc);
 EXPORT_FUNCTION(kslab_cache_free);
 
-EXPORT_FUNCTION(__kvmem_get_used_page_range_list);
+// EXPORT_FUNCTION(__kvmem_get_used_page_range_list);
 
 SPINLOCK(kernel_kvalloc_lock);
 EXPORT_SPINLOCK(kernel_kvalloc_lock);

Index: _kslab_cache_grow.c
===================================================================
RCS file: /var/cvs/kos/kos/modules/kmem/_kslab_cache_grow.c,v
retrieving revision 1.13
retrieving revision 1.14
diff -u -d -r1.13 -r1.14
--- _kslab_cache_grow.c	19 Aug 2003 00:13:32 -0000	1.13
+++ _kslab_cache_grow.c	28 Dec 2004 18:44:38 -0000	1.14
@@ -19,10 +19,10 @@
 
   if(!new_space)
     return -1;
-  
+
   /* Create the linked list of free blocks in the new slab */
-  for(ptr = new_space; 
-      ptr < ((cache->nb_elts_per_slab *  cache->size) + new_space); 
+  for(ptr = new_space;
+      ptr < ((cache->nb_elts_per_slab *  cache->size) + new_space);
       ptr += (cache->size))
     {
       free_blk = (kslab_free_blk_t *) ptr;
@@ -31,10 +31,10 @@
 
   /* Make sure that the next pointers point to NULL for the last
      element of the slab */
-  free_blk = (kslab_free_blk_t *) (new_space + ((cache->nb_elts_per_slab-1) 
+  free_blk = (kslab_free_blk_t *) (new_space + ((cache->nb_elts_per_slab-1)
 					       * cache->size));
   free_blk->next = NULL;
-  
+
   /* If the kslab_slab_t have to be on the slab, its address is at the
      end of this slab */
   if(cache->flags & ON_SLAB)
@@ -51,18 +51,20 @@
       if(! new_slab)
 	return -1;
     }
-  
-  /* Update 'slab' pointers of all gpfme_t concerned by this cache grow */
+
+  /* Update 'slab' pointers of all physical pages concerned by this cache grow */
   for(virt_addr = new_space;
       virt_addr < new_space + cache->nb_pages_per_slab*PAGE_SIZE;
       virt_addr += PAGE_SIZE)
     {
-      k_ui32_t gpfm_lock_flags;
-      gpfme_t *gpfme = get_gpfme_at_virt_addr(virt_addr, & gpfm_lock_flags);
-      CONCEPTION_ASSERT(gpfme != NULL);
+      paddr_t paddr;
+      result_t result;
 
-      gpfme->slab = new_slab;
-      gpfme_unlock(gpfme, & gpfm_lock_flags);
+      result = get_paddr_at_vaddr(NULL, virt_addr, & paddr);
+      CONCEPTION_ASSERT(result == ESUCCESS);
+
+      result = physmem_set_slab(paddr, new_slab);
+      CONCEPTION_ASSERT(result == ESUCCESS);
     }
 
   /* No spinlock needed because the slab is known to be available only
@@ -74,7 +76,7 @@
   new_slab->cache = cache;
   list_add_head(cache->free, new_slab);
   cache->nb_available += cache->nb_elts_per_slab;
-  
+
   return 0;
 }
 

Index: _kslab_cache_free.c
===================================================================
RCS file: /var/cvs/kos/kos/modules/kmem/_kslab_cache_free.c,v
retrieving revision 1.16
retrieving revision 1.17
diff -u -d -r1.16 -r1.17
--- _kslab_cache_free.c	19 Aug 2003 00:13:32 -0000	1.16
+++ _kslab_cache_free.c	28 Dec 2004 18:44:38 -0000	1.17
@@ -26,7 +26,7 @@
 
   slab->nb_free++;
   slab->cache->nb_available++;
-  
+
   /* If number of elements is egal to the total number of elements per
      slab, the slab is free. It is then moved from the semi_full list
      to the free list. We are sure that the page is in the semi_full
@@ -39,7 +39,7 @@
     }
 
   write_spin_unlock(slab->cache->lock, flags);
-  
+
   return 0;
 }
 
@@ -47,19 +47,17 @@
    block */
 kslab_slab_t *__kslab_get_slab(void *block)
 {
-  gpfme_t *gpfme;
-  k_ui32_t flags;
+  paddr_t paddr;
   kslab_slab_t *slab;
+  result_t result;
 
-  gpfme = get_gpfme_at_virt_addr((vaddr_t)block, & flags);
-  if(!gpfme)
+  result = get_paddr_at_vaddr(NULL, PAGE_ALIGN_INF((vaddr_t) block), & paddr);
+  if(result < 0)
     {
-      gpfme_unlock(gpfme, & flags);
       return NULL;
     }
 
-  slab = gpfme->slab;
-  gpfme_unlock(gpfme, & flags);
+  physmem_get_slab(paddr, & slab);
 
   return slab;
 }
@@ -68,7 +66,7 @@
 int kslab_cache_free(kslab_cache_t *cache,
 		     void *block)
 {
-  
+
   kslab_slab_t *slab;
 
   /* Get the slab containing the block */
@@ -76,11 +74,11 @@
 
   if(!slab)
     return -1;
-  
+
   /* Make sure the cache corresponding to the found slab match the
      given cache */
   CONCEPTION_ASSERT(cache == slab->cache);
-  
+
   /* Free the block */
   return __kslab_cache_free_by_slab(slab, block);
 }

Index: _kvmem_utils.c
===================================================================
RCS file: /var/cvs/kos/kos/modules/kmem/_kvmem_utils.c,v
retrieving revision 1.18
retrieving revision 1.19
diff -u -d -r1.18 -r1.19
--- _kvmem_utils.c	29 Dec 2003 20:50:41 -0000	1.18
+++ _kvmem_utils.c	28 Dec 2004 18:44:38 -0000	1.19
@@ -69,17 +69,18 @@
 }
 
 
-int __kvmem_map_range(vaddr_t virt_base_addr, int nb_pages, 
+int __kvmem_map_range(vaddr_t virt_base_addr, int nb_pages,
 		      bool_t is_swappable)
 {
   int i;
   for (i = 0 ; i < nb_pages ; i++, virt_base_addr += PAGE_SIZE) {
-    paddr_t page_addr = get_physical_page(PHYS_PAGE_KERNEL,
+    paddr_t page_addr = physmem_get_page(PHYS_PAGE_KERNEL,
 					 (is_swappable)?PHYS_PAGE_SWAPPABLE:PHYS_PAGE_NON_SWAPPABLE);
 
     if (! page_addr)
       return -1;
-    
+
+    __dbg_printk("Mapping phys page 0x%x to virt 0x%x\n", page_addr, virt_base_addr);
     map_virtual_page(NULL, virt_base_addr, page_addr,
 		     VM_ACCESS_WRITE | VM_ACCESS_READ);
   }

Index: _kslab_cache_alloc.c
===================================================================
RCS file: /var/cvs/kos/kos/modules/kmem/_kslab_cache_alloc.c,v
retrieving revision 1.14
retrieving revision 1.15
diff -u -d -r1.14 -r1.15
--- _kslab_cache_alloc.c	19 Aug 2003 00:13:32 -0000	1.14
+++ _kslab_cache_alloc.c	28 Dec 2004 18:44:38 -0000	1.15
@@ -14,7 +14,7 @@
         do { write_spin_unlock((cache)->lock, lock_flags);  } while(0)
 #define LOCK_CACHE(cache) \
         do { write_spin_lock((cache)->lock, lock_flags); } while(0)
-  
+
 void *kslab_cache_alloc(kslab_cache_t *cache)
 {
   k_ui32_t lock_flags;
@@ -48,6 +48,7 @@
 	new_space = kvalloc(cache->nb_pages_per_slab, 
 			    (cache->flags & SLAB_IS_SWAPPABLE),
 			    TRUE);
+
 	if(__kslab_cache_grow(cache, new_space) < 0)
 	  {
 	    UNLOCK_CACHE(cache);


