[Kos-cvs] kos/modules/vmm Makefile, 1.20, 1.21 _vmm.h, 1.18, 1.19 _vmm_as.c, 1.24, 1.25 _vmm_map.c, 1.5, 1.6 vmm.c, 1.18, 1.19 vmm.h, 1.26, 1.27

thomas at kos.enix.org
Tue Dec 28 19:44:58 CET 2004


Update of /var/cvs/kos/kos/modules/vmm
In directory the-doors:/tmp/cvs-serv10813/modules/vmm

Modified Files:
	Makefile _vmm.h _vmm_as.c _vmm_map.c vmm.c vmm.h 
Log Message:
2004-12-28  Thomas Petazzoni  <thomas at crazy.kos.nx>

	* modules/x86/task/task.c: Try to restrict access to exported
	symbols.

	* modules/x86/task/_thread_cpu_context.c: Move to the new PMM
	system.

	* modules/x86/task/Makefile (all): arch_task.ro instead of
	arch-task.ro.

	* modules/x86/mm/_team_mm_context.c: More information.

	* modules/x86/mm/_mm.h, modules/x86/mm/mm.c, modules/x86/mm/_rmap.c,
	modules/x86/mm/_vmap.c: The new VMAP/RMAP system. We also make
	sure that access to all exported functions is restricted to the
	VMM module. 

	* modules/x86/mm/Makefile (all): arch_mm.ro instead of
	arch-mm.ro. 

	* modules/x86/lib/Makefile (all): Rename to arch_lib.ro instead of
	arch-lib.ro. 

	* modules/x86/internals.h: More definitions on the address space
	configuration. 

	* modules/vmm/vmm.h (struct address_space): Add a mutex and a
	spinlock to protect the address space.

	* modules/vmm/vmm.c: Restrict access to some exported
	functions. More work has to be done in this area.

	* modules/vmm/_vmm_map.c: Part of the new vmap system.

	* modules/vmm/_vmm_as.c: Take and release the address space
	mutex where appropriate. This is only a first attempt; more
	thought is needed here.
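	A minimal sketch of the intended pattern, assuming the obvious
	kmutex_lock() counterpart to the kmutex_unlock() calls visible in
	the _vmm_as.c diff below (as_resize_heap and _do_resize are
	hypothetical names):

	  result_t as_resize_heap(struct address_space *as)
	  {
	    result_t result;

	    kmutex_lock(& as->mutex);        /* protect the VR tree */
	    result = _do_resize(as);         /* hypothetical helper */
	    if (result < 0)
	      {
	        kmutex_unlock(& as->mutex);  /* unlock on error paths too */
	        return result;
	      }

	    kmutex_unlock(& as->mutex);
	    return ESUCCESS;
	  }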

	* modules/task/task.h: Make sure Doxygen doesn't try to analyze
	the #if blocks, because it doesn't handle them well.

	* modules/task/_task_utils.c (show_all_thread_info): If team is
	NULL, display the threads of all teams.
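	A hypothetical sketch of that convention (for_each_team() and
	show_team_threads() are made-up names, not the actual helpers):

	  void show_all_thread_info(struct team *team)
	  {
	    if (team == NULL)
	      {
	        /* NULL team: walk every team in the system */
	        for_each_team(team)
	          show_team_threads(team);
	        return;
	      }

	    show_team_threads(team);
	  }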

	* modules/scheduler/synchq.h: Avoid inclusion of task.h.

	* modules/pmm/pmm.c: New PMM system.

	* modules/pmm/_pmm_put_page.c: New PMM system.

	* modules/pmm/_pmm_init.c: New PMM system.

	* modules/pmm/_pmm_get_page.c: New PMM system.

	* modules/pmm/_pmm_get_at_addr.c: New PMM system.

	* modules/pmm/_pmm.h: struct gpfme is now private.

	* modules/pmm/pmm.h: struct gpfme is now private (migrated to
	_pmm.h). 

	* modules/pmm/Makefile (OBJS): New PMM system, with reduced
	functionality. 

	* modules/kos/spinlock.h: New type spinlock_flags_t, which should
	be used instead of k_ui32_t for spinlock flags.
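	The _vmm_map.c diff below shows the intended idiom; a minimal
	sketch (my_lock and critical() are placeholder names):

	  #include <kos/spinlock.h>

	  SPINLOCK(my_lock);                   /* declare the spinlock */

	  void critical(void)
	  {
	    spinlock_flags_t flags;            /* formerly k_ui32_t */

	    write_spin_lock(my_lock, flags);
	    /* ... critical section ... */
	    write_spin_unlock(my_lock, flags);
	  }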

	* modules/kmem/_kvmem_utils.c: Migration to the new PMM
	system and various cleanups.

	* modules/kmem/_kvmem_init.c: Migration to the new PMM
	system and various cleanups.

	* modules/kmem/_kslab_cache_grow.c: Migration to the new PMM
	system and various cleanups.

	* modules/kmem/_kslab_cache_free.c: Migration to the new PMM
	system, and various cleanups.

	* modules/kitc/_kmutex.c: Add DEBUG_PRINT3 calls to show mutex
	lock/unlock/trylock.

	* modules/init/_init_modules.c (init_modules): A message is
	displayed when initializing modules.

	* modules/ide/_ide.c: Various cleanups.

	* modules/fs/fat/_fat.c: Various cleanups.

	* modules/fs/devfs/devfs.c: Various cleanups, including whitespace
	cleanup.

	* modules/debug/debug.h: Add the DEBUG_PRINT1, DEBUG_PRINT2 and
	DEBUG_PRINT3 macros. Maybe there's a cleaner way to do it. David?
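	A sketch of how such level-gated macros are commonly defined; the
	actual debug.h definitions (and the underlying printf-like
	routine, called debug_printf() here) may differ:

	  #if DEBUG_LEVEL >= 1
	  #  define DEBUG_PRINT1(fmt, args...) debug_printf(fmt, ##args)
	  #else
	  #  define DEBUG_PRINT1(fmt, args...) do { } while (0)
	  #endif
	  /* ... likewise DEBUG_PRINT2 / DEBUG_PRINT3, gated on
	     DEBUG_LEVEL >= 2 and DEBUG_LEVEL >= 3 */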

	* modules/debug/debug.c (init_module_level0): Init the
	backtracking stuff a little later so that we have debugging
	messages during this initialization.

	* modules/debug/bt.c (_init_backtracing_stuff): bt_next is no
	longer a valid candidate for determining whether
	-fomit-frame-pointer was selected, because of gcc optimizations.
	We use bt_init instead.

	* modules/Makefile (doc): Add a target that generates the doxygen
	documentation. 

	* loader/mod.h (EXPORT_FUNCTION_RESTRICTED): Change the symbol
	names generated by the macros, so that they include the name of
	the target module (the one allowed to import the exported
	symbol). This is needed in order to export the same symbol to
	multiple modules. Previously, the RESTRICTED system generated
	symbols that were identical for a given symbol exported to
	multiple modules.
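	A hypothetical sketch of the naming scheme (the real macro in
	loader/mod.h certainly differs in its details):

	  /* Paste the target module name into the generated symbol, so
	     that exporting the same symbol to two modules yields two
	     distinct symbols: */
	  #define EXPORT_FUNCTION_RESTRICTED(sym, module) \
	    void *__export_##module##_##sym = (void *) &(sym);

	  /* EXPORT_FUNCTION_RESTRICTED(as_init, task) would then emit a
	     symbol named __export_task_as_init, importable only by the
	     "task" module. */

	This is also why module names may no longer contain '-' (see the
	MkVars entry below): the pasted module name must form a valid C
	identifier.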

	* doc/testingfr.tex: A big update to this documentation. Not
	finished. The English version should also be updated.

	* TODO: Some new things to do.

	* MkVars (CFLAGS): Pass the DEBUG_LEVEL Makefile variable to the C
	files. In each modules/.../Makefile, we can set a
	DEBUG_LEVEL=value that will set the level of verbosity of the
	module. Macros named DEBUG_PRINT1, DEBUG_PRINT2, DEBUG_PRINT3 have
	been added.
	(MODULES): Change all '-' to '_', because of the new
	EXPORT_FUNCTION_RESTRICTED system. This system creates symbols
	that contain the name of a module (the one allowed to import the
	exported symbol), and the '-' character is not allowed inside C
	identifiers, so we use '_' instead.

	* CREDITS: Add Fabrice Bellard to the CREDITS, for his Qemu
	emulator.



Index: vmm.h
===================================================================
RCS file: /var/cvs/kos/kos/modules/vmm/vmm.h,v
retrieving revision 1.26
retrieving revision 1.27
diff -u -d -r1.26 -r1.27
--- vmm.h	29 Dec 2003 13:42:51 -0000	1.26
+++ vmm.h	28 Dec 2004 18:44:56 -0000	1.27
@@ -14,6 +14,7 @@
 #include <kos/errno.h>
 #include <lib/bst/libbst.h>
 #include <arch/mm/mm.h>
+#include <kitc/kmutex.h>
 #include <karm/karm.h>
 
 typedef enum { MAP_PRIVATE=0x20, MAP_SHARED } sharing_type_t;
@@ -61,6 +62,13 @@
 
   /** Start of the heap, current position of the heap */
   vaddr_t heap_start, heap_current;
+
+  /** Spinlock for manipulations of the page tables */
+  spinlock_t lock;
+
+  /** Mutex for manipulations of the address space: virtual
+      region tree, etc. */
+  struct kmutex mutex;
 };
 
 
@@ -109,7 +117,9 @@
 };
 
 
-#include <task/task.h>
+//#include <task/task.h>
+
+struct team;
 
 /* _vmm_as.c */
 result_t as_init (struct address_space *as,
@@ -131,32 +141,34 @@
 
 void as_dump (struct address_space *as);
 
-result_t as_copy(struct address_space *as_from, 
+result_t as_copy(struct address_space *as_from,
 		 struct team *team_to);
 
 result_t as_empty(struct address_space *as);
 
-result_t as_update_heap_start(struct address_space *as, 
+result_t as_update_heap_start(struct address_space *as,
 			      vaddr_t heap_start);
 result_t as_change_heap(struct address_space *as,
 			offset_t increment,
 			vaddr_t *heap_current);
 
 /* _vmm_map.c */
-int map_virtual_page(struct team* dest_team,
+int map_virtual_page(const struct team* dest_team,
 		     vaddr_t virt, paddr_t phys,
 		     access_right_t access_rights);
-int unmap_virtual_range(struct team* dest_team,
+int unmap_virtual_range(const struct team* dest_team,
 			vaddr_t start, size_t len);
-int unmap_virtual_page(struct team* dest_team,
+int unmap_virtual_page(const struct team* dest_team,
 		       vaddr_t vaddr);
-int protect_virtual_page(struct team* dest_team,
+int protect_virtual_page(const struct team* dest_team,
 			 vaddr_t vaddr,
 			 access_right_t access_rights);
-result_t protect_virtual_range(struct team *dest_team,
-			       vaddr_t start, vaddr_t end, 
+result_t protect_virtual_range(const struct team *dest_team,
+			       vaddr_t start, vaddr_t end,
 			       access_right_t access_rights);
-result_t get_paddr_at_vaddr(vaddr_t virt, paddr_t *paddr);
-
+result_t get_paddr_at_vaddr(const struct team *dest_team,
+			    vaddr_t virt, paddr_t *paddr);
+result_t get_virtual_page_status(const struct team *team,
+				 vaddr_t vaddr, vpage_status_t *status);
 
 #endif
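
For reference, a minimal caller-side sketch of the new prototypes above
(error handling elided; some_team and vaddr are placeholders):

  paddr_t paddr;
  vpage_status_t status;

  /* a NULL team selects the current team, as in the dup_virtual_range()
     loop further below */
  get_paddr_at_vaddr(NULL, vaddr, & paddr);
  get_virtual_page_status(some_team, vaddr, & status);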

Index: vmm.c
===================================================================
RCS file: /var/cvs/kos/kos/modules/vmm/vmm.c,v
retrieving revision 1.18
retrieving revision 1.19
diff -u -d -r1.18 -r1.19
--- vmm.c	11 Dec 2003 17:01:27 -0000	1.18
+++ vmm.c	28 Dec 2004 18:44:56 -0000	1.19
@@ -1,5 +1,4 @@
 #include <loader/mod.h>
-
 #include <vmm/_vmm.h>
 #include "_dev_zero.h"
 
@@ -27,12 +26,12 @@
 EXPORT_FUNCTION(map_virtual_page);
 EXPORT_FUNCTION(get_paddr_at_vaddr);
 EXPORT_FUNCTION(as_page_fault);
-EXPORT_FUNCTION(as_init);
-EXPORT_FUNCTION(as_switch);
+EXPORT_FUNCTION_RESTRICTED (as_init,   task);
+EXPORT_FUNCTION_RESTRICTED (as_switch, task);
+EXPORT_FUNCTION_RESTRICTED (as_copy,   task);
+EXPORT_FUNCTION_RESTRICTED (as_empty,  task);
 EXPORT_FUNCTION(as_map_ures);
 EXPORT_FUNCTION(as_unmap_ures);
-EXPORT_FUNCTION(as_copy);
-EXPORT_FUNCTION(as_empty);
 EXPORT_FUNCTION(as_dump);
 EXPORT_FUNCTION(as_update_heap_start);
 EXPORT_FUNCTION(as_change_heap);

Index: Makefile
===================================================================
RCS file: /var/cvs/kos/kos/modules/vmm/Makefile,v
retrieving revision 1.20
retrieving revision 1.21
diff -u -d -r1.20 -r1.21
--- Makefile	11 Dec 2003 17:01:27 -0000	1.20
+++ Makefile	28 Dec 2004 18:44:56 -0000	1.21
@@ -1,4 +1,6 @@
-OBJS= _vmm_as.o _vmm_map.o _dev_zero.o vmm.o
+OBJS= _vmm_as.o _vmm_map.o vmm.o _dev_zero.o
+
+DEBUG_LEVEL=2
 
 all: vmm.ro
 

Index: _vmm_map.c
===================================================================
RCS file: /var/cvs/kos/kos/modules/vmm/_vmm_map.c,v
retrieving revision 1.5
retrieving revision 1.6
diff -u -d -r1.5 -r1.6
--- _vmm_map.c	27 Oct 2003 15:37:32 -0000	1.5
+++ _vmm_map.c	28 Dec 2004 18:44:56 -0000	1.6
@@ -1,28 +1,129 @@
 #include <arch/mm/mm.h>
 #include <pmm/pmm.h>
 #include <kos/assert.h>
+#include <kos/spinlock.h>
 #include "_vmm.h"
 
-int map_virtual_page(struct team* dest_team,
-		     vaddr_t virt, paddr_t phys,
-		     access_right_t access_rights)
+SPINLOCK(vmm_spinlock);
+
+/** Map a physical page into an address space
+ *
+ * @param dest_team The team in which the page will be mapped, if
+ * NULL, the current team will be considered as the destination team.
+ *
+ * @param virt The virtual address at which the physical page has to
+ * be mapped.
+ *
+ * @param phys The physical address of the page to map
+ *
+ * @param access_rights The access rights of the virtual page
+ *
+ * @note This function takes the lock on pages
+ * as needed.
+ *
+ * @return Error code (@see errno.h)
+ */
+result_t map_virtual_page(const struct team* dest_team,
+			  vaddr_t virt, paddr_t phys,
+			  access_right_t access_rights)
 {
-  return arch_map_virtual_page((dest_team) ? (dest_team->address_space.mm_context) : NULL,
-			       virt, phys, access_rights);
+  result_t result;
+  struct map_session map_session;
+  spinlock_flags_t flags;
+
+  DEBUG_PRINT1("[map_virtual_page] Mapping phys 0x%x => virt 0x%x\n",
+	       phys, virt);
+
+  result = arch_pre_map_virtual_page(& map_session);
+  if(result < 0)
+    {
+      return result;
+    }
+
+  /* TODO: take the lock */
+
+  write_spin_lock(vmm_spinlock, flags);
+
+  result = arch_do_map_virtual_page(& map_session,
+				    (dest_team) ? (dest_team->address_space.mm_context) : NULL,
+				    phys, virt, access_rights);
+
+  write_spin_unlock(vmm_spinlock, flags);
+
+  if(result < 0)
+    {
+      /* TODO: release the lock */
+      /* How to free the previously allocated stuff? */
+      return result;
+    }
+
+  result = arch_post_map_virtual_page(& map_session);
+
+  if(result < 0)
+    {
+      return result;
+    }
+
+  return ESUCCESS;
 }
 
-int protect_virtual_page(struct team* dest_team,
-			 vaddr_t vaddr,
-			 access_right_t access_rights)
+/** Change the access rights of a single virtual page
+ *
+ * @param dest_team The team in which the page is mapped. If NULL,
+ * the current team will be considered as the destination team.
+ *
+ * @param vaddr The virtual address of the page
+ *
+ * @param access_rights The new access rights for the page
+ *
+ * @note This function takes the lock on pages
+ * as needed.
+ *
+ * @return Error code
+ */
+result_t protect_virtual_page(const struct team* dest_team,
+			      vaddr_t vaddr,
+			      access_right_t access_rights)
 {
-  return arch_protect_virtual_page((dest_team) ? (dest_team->address_space.mm_context) : NULL,
-				   vaddr, access_rights);
+  struct mm_context *mm_context;
+  result_t result;
+  spinlock_flags_t flags;
+
+  if(dest_team == NULL)
+    mm_context = NULL;
+  else
+    mm_context = dest_team->address_space.mm_context;
+
+  write_spin_lock(vmm_spinlock, flags);
+
+  result = arch_protect_virtual_page(mm_context, vaddr, access_rights);
+
+  write_spin_unlock(vmm_spinlock, flags);
+
+  return result;
 }
 
-result_t protect_virtual_range(struct team *dest_team,
-			       vaddr_t start, vaddr_t end, 
+/** Change the access rights of a range of virtual pages
+ *
+ * @param dest_team The team in which the range of pages is mapped. If
+ * NULL, the current team will be considered as the destination team.
+ *
+ * @param start Virtual start address of the range
+ *
+ * @param end Virtual end address of the range
+ *
+ * @param access_rights The new access rights for the range
+ *
+ * @note This function takes the lock on pages
+ * as needed.
+ *
+ * @return Error code
+ */
+result_t protect_virtual_range(const struct team *dest_team,
+			       vaddr_t start, vaddr_t end,
 			       access_right_t access_rights)
 {
+  spinlock_flags_t flags;
   vaddr_t vaddr;
 
   CONCEPTION_ASSERT(PAGE_ALIGN_INF(start) == start);
@@ -35,27 +136,49 @@
   return ESUCCESS;
 }
 
-result_t get_paddr_at_vaddr(vaddr_t virt, paddr_t *paddr)
+/** Unmap a virtual page
+ *
+ * @param dest_team The team in which the virtual page to unmap is
+ * mapped. If NULL, the current team will be considered as the
+ * destination team.
+ *
+ * @param vaddr The virtual address of the page to unmap
+ *
+ * @note This function takes the lock on pages
+ * as needed.
+ *
+ * @return Error code
+ */
+result_t unmap_virtual_page(const struct team* dest_team,
+			    vaddr_t vaddr)
 {
-  paddr_t res;
+  struct map_session map_session;
+  struct mm_context* mm_ctxt;
+  vpage_status_t vpage_status;
+  result_t result;
+  spinlock_flags_t flags;
 
-  res = arch_get_paddr_at_vaddr(virt);
+  mm_ctxt = ((dest_team) ? (dest_team->address_space.mm_context) : NULL);
 
-  *paddr = res;
+  DEBUG_PRINT1("[vmm/unmap_virtual_page] Unmapping 0x%x (mm_ctxt=0x%x)\n",
+	       vaddr, mm_ctxt);
 
-  return ESUCCESS;
-}
+  arch_pre_unmap_virtual_page(& map_session);
 
-int unmap_virtual_page(struct team* dest_team,
-		       vaddr_t vaddr)
-{
-  struct mm_context* mm_ctxt = (dest_team) ? (dest_team->address_space.mm_context) : NULL;
+  write_spin_lock(vmm_spinlock, flags);
 
-  switch(arch_get_vpage_status(mm_ctxt, vaddr))
+  result = arch_get_virtual_page_status(mm_ctxt, vaddr, & vpage_status);
+  if(result < 0)
+    {
+      write_spin_unlock(vmm_spinlock, flags);
+      return result;
+    }
+
+  switch(vpage_status)
     {
     case PHYS_PAGE_PRESENT:
       /* unmap this page */
-      arch_unmap_virtual_page(mm_ctxt, vaddr);
+      arch_do_unmap_virtual_page(& map_session, mm_ctxt, vaddr);
       break;
 
     case PHYS_PAGE_SWAPPED:
@@ -63,35 +186,145 @@
       /* - When ref_cnt reaches 0, tell the swapper that this page
 	   has become unused
 	 - Unmap this vpage */
-      arch_unmap_virtual_page(mm_ctxt, vaddr);
+      arch_do_unmap_virtual_page(& map_session, mm_ctxt, vaddr);
       break;
 
     case PHYS_PAGE_UNMAPPED:
-      arch_unmap_virtual_page(mm_ctxt, vaddr);
+      arch_do_unmap_virtual_page(& map_session, mm_ctxt, vaddr);
       break;
 
     default:
       FAILED_VERBOSE("Invalid vpage status!");
     }
 
-  return 0;
+  write_spin_unlock(vmm_spinlock, flags);
+  arch_post_unmap_virtual_page(& map_session);
+
+  return ESUCCESS;
 }
 
-/*
- * For the current thread, unmap the virtual range from the current
- * address space.  No check is provided against accross-region
- * unmapping.  Unused PT's are not freed. This operation will be
- * provided by a garbage collector.
+/** Unmap a virtual range
+ *
+ * @param dest_team The destination team
+ *
+ * @param start The starting address of the area to unmap
+ *
+ * @param len The size of the area to unmap
+ *
+ * @note No check is provided against across-region unmapping.
+ *
+ * @note This function takes the lock as needed
+ *
+ * @return Error code. If an error is returned, then the range is left
+ * partially mapped, partially unmapped.
  */
-int unmap_virtual_range(struct team* dest_team, vaddr_t start, size_t len)
+result_t unmap_virtual_range(const struct team* dest_team, vaddr_t start, size_t len)
 {
-  int ret;
+  result_t result;
   vaddr_t page;
 
-  ret = 0;
   for ( page = start ; page < (start + len) ; page += PAGE_SIZE)
-    if (unmap_virtual_page(dest_team, page))
-      ret = 1;
+    {
+      result = unmap_virtual_page(dest_team, page);
+      if(result < 0)
+	{
+	  return result;
+	}
+    }
 
-  return ret;
+  return ESUCCESS;
+}
+
+/** Remap the given virtual range to another team
+ *
+ * This function remaps all the pages of the virtual range [start ;
+ * end] of the <b>current</b> team to the given destination team
+ * (dest_team). This function is used in the fork() mechanism.
+ *
+ * @param dest_team     The destination team
+ * @param start         Beginning of the virtual range
+ * @param end           End of the virtual range
+ * @param access_rights Access rights that apply to the remapped range
+ *
+ * @return Error code
+ *
+ * @todo Detect map_virtual_page errors. Do the calls to arch_* by
+ * hand to be able to correctly handle lock problems.
+ */
+result_t dup_virtual_range(const struct team *dest_team, vaddr_t start, vaddr_t end,
+			   access_right_t access_rights)
+{
+  vaddr_t cur;
+
+  for (cur = start ; cur < end ; cur += PAGE_SIZE)
+    {
+      paddr_t paddr;
+
+      get_paddr_at_vaddr(NULL, cur, & paddr);
+
+      map_virtual_page(dest_team, cur, paddr, access_rights);
+    }
+
+  return ESUCCESS;
+}
+
+/** Get the status of a virtual page (either mapped, swapped or unmapped)
+ *
+ * @param team The destination team
+ *
+ * @param vaddr The address of the virtual page
+ *
+ * @param status Where the status is returned
+ *
+ * @return Error code
+ */
+result_t get_virtual_page_status(const struct team *team,
+				 vaddr_t vaddr, vpage_status_t *status)
+{
+  struct mm_context *mm_context;
+  result_t result;
+  spinlock_flags_t flags;
+
+  if(team == NULL)
+    mm_context = NULL;
+  else
+    mm_context = team->address_space.mm_context;
+
+  write_spin_lock(vmm_spinlock, flags);
+
+  result = arch_get_virtual_page_status(mm_context, vaddr, status);
+
+  write_spin_unlock(vmm_spinlock, flags);
+
+  return result;
+}
+
+/** Get the physical address of a virtual page
+ *
+ * @param team Destination team
+ *
+ * @param vaddr Virtual address of the page
+ *
+ * @param paddr Where the physical address is returned
+ *
+ * @return Error code
+ */
+result_t get_paddr_at_vaddr(const struct team *team, vaddr_t vaddr, paddr_t *paddr)
+{
+  struct mm_context *mm_context;
+  result_t result;
+  spinlock_flags_t flags;
+
+  if(team == NULL)
+    mm_context = NULL;
+  else
+    mm_context = team->address_space.mm_context;
+
+  write_spin_lock(vmm_spinlock, flags);
+
+  result = arch_get_paddr_at_vaddr(mm_context, vaddr, paddr);
+
+  write_spin_unlock(vmm_spinlock, flags);
+
+  return result;
 }

Index: _vmm.h
===================================================================
RCS file: /var/cvs/kos/kos/modules/vmm/_vmm.h,v
retrieving revision 1.18
retrieving revision 1.19
diff -u -d -r1.18 -r1.19
--- _vmm.h	11 Dec 2003 17:01:27 -0000	1.18
+++ _vmm.h	28 Dec 2004 18:44:56 -0000	1.19
@@ -4,6 +4,11 @@
 #include <loader/mod.h>
 #include <vmm/vmm.h>
 #include <kos/macros.h>
+#include <arch/mm/mm.h>
+
+/* _vmm_map.c */
+result_t dup_virtual_range(const struct team *dest_team, vaddr_t start, vaddr_t end,
+			   access_right_t access_rights);
 
 __init_text result_t _init_as_engine(void);
 

Index: _vmm_as.c
===================================================================
RCS file: /var/cvs/kos/kos/modules/vmm/_vmm_as.c,v
retrieving revision 1.24
retrieving revision 1.25
diff -u -d -r1.24 -r1.25
--- _vmm_as.c	17 Jun 2004 22:12:01 -0000	1.24
+++ _vmm_as.c	28 Dec 2004 18:44:56 -0000	1.25
@@ -13,16 +13,20 @@
 #include <karm/interface/mapping.h>
 #include <pmm/pmm.h>
 #include <kmem/kmem.h>
+#include <kitc/kmutex.h>
 #include <lib/std/string.h>
 #include <vmm/_vmm.h>
 
-/* The SLAB cache for the virrtual regions */
+/** The SLAB cache for the virtual regions */
 static struct kslab_cache *vmm_vr_cache = NULL;
 
[...1007 lines suppressed...]
+
       result = _as_grow_vr(vr, wanted_heap - as->heap_start);
       if(result < 0)
 	{
 	  *heap_current = as->heap_current;
+	  kmutex_unlock(& as->mutex);
 	  return result;
 	}
-      
+
       as->heap_current = wanted_heap;
       *heap_current    = wanted_heap;
     }
 
-  as_dump(as);
+  kmutex_unlock(& as->mutex);
+
   return ESUCCESS;
 }
 


