Nate Lawson writes:

> You're right about where the problem is (top of stack trace and listing
> below). However, your patch causes an immediate panic on boot due to a
> NULL deref. I don't think you want it to always return NULL if called
> with M_NOWAIT set. :) Other ideas?

I suppose the only alternative is to "do it right" and remove Giant from
the uma zone alloc code.

From looking at the code for a little while this morning, it looks like
there are 3 allocators that could be called at this point in the code:

1) page_alloc(): Calls kmem_malloc(). Should be MPSAFE on NOWAIT
   allocations. Needs Giant on WAITOK allocations.

2) obj_alloc(): Calls vm_page_alloc() -- that's MPSAFE. Calls
   pmap_qenter() -- I've got no freaking clue if that's MPSAFE on all
   platforms. I think it is, since kmem_malloc() is MPSAFE & it calls
   pmap_enter(), but I'm not sure.

3) uma_small_alloc():
   i386    - no uma_small_alloc, no problem
   alpha   - uma_small_alloc is SMP safe
   ia64    - uma_small_alloc should be SMP safe, as it seems to be
             doing just the moral equivalent of PHYS_TO_K0SEG() to map
             the memory into the kernel.
   sparc64 - I have no idea.

Drew

> slab_zalloc + 0xdf
> uma_zone_slab + 0xd8
> uma_zalloc_bucket + 0x15d
> uma_zalloc_arg + 0x307
> malloc
> ...
> m_getcl
>
> (gdb) l *slab_zalloc+0xdf
> 0xc02f646f is in slab_zalloc (../../../vm/uma_core.c:707).
> 702             else
> 703                     wait &= ~M_ZERO;
> 704
> 705             if (booted || (zone->uz_flags & UMA_ZFLAG_PRIVALLOC)) {
> 706                     mtx_lock(&Giant);
> 707                     mem = zone->uz_allocf(zone,
> 708                         zone->uz_ppera * UMA_SLAB_SIZE, &flags, wait);
> 709                     mtx_unlock(&Giant);
> 710                     if (mem == NULL) {
> 711                             ZONE_LOCK(zone);

Received on Fri Apr 04 2003 - 05:03:37 UTC