on 27/09/2010 20:54 Andriy Gapon said the following:
> It seems that minidump on amd64 is always dumping at least about 1GB of data
> regardless of actual memory size and usage, and thus can be even larger than a
> regular dump.
>
> Specifically, I suspect the following code:
>	for (va = VM_MIN_KERNEL_ADDRESS; va < MAX(KERNBASE + NKPT * NBPDR,
>	    kernel_vm_end); va += NBPDR) {
>		i = (va >> PDPSHIFT) & ((1ul << NPDPEPGSHIFT) - 1);
>		/*
>		 * We always write a page, even if it is zero. Each
>		 * page written corresponds to 2MB of space
>		 */
>		ptesize += PAGE_SIZE;
>
> It seems that the difference between KERNBASE and VM_MIN_KERNEL_ADDRESS is
> already ~500G, which means 500G divided by 2M equals 250K iterations/pages,
> which is 1GB of data.
>
> Looks like this came from the amd64 KVA expansion.
> And it seems a little bit wasteful?

So perhaps we need to add another level of indirection? I.e. first dump a
contiguous array of "pseudo-pde" entries that would point to chunks of
"pseudo-pte" entries, so that the "pseudo-pte" entries could be sparse. This is
instead of dumping 1GB of contiguous "pseudo-ptes" as we do now.
A bit of work, though.

-- 
Andriy Gapon
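
[Editor's note: a minimal sketch of the two-level layout proposed above, assuming
hypothetical structure and field names (pseudo_pte_chunk, lookup_pseudo_pte, etc.)
rather than the actual minidump format. One pseudo-pde entry covers a 1GB slot of
the KVA window; it is zero when the slot has no mappings and otherwise holds the
file offset of a 512-entry chunk of pseudo-ptes (512 x 2MB = 1GB), so an empty
slot costs 8 bytes instead of a whole page of zero ptes.]

	#include <stdint.h>
	#include <stddef.h>

	#define ONE_GB		(1ULL << 30)
	#define TWO_MB		(1ULL << 21)
	#define PTES_PER_CHUNK	(ONE_GB / TWO_MB)	/* 512 entries per 1GB slot */

	struct pseudo_pte_chunk {
		uint64_t pte[PTES_PER_CHUNK];	/* one entry per 2MB of KVA */
	};

	/*
	 * Resolve the pseudo-pte for a kernel VA, given the contiguous
	 * pseudo-pde array and the chunks already read from the dump.
	 * A zero pseudo-pde means the whole 1GB slot was never written,
	 * so the lookup returns 0 (unmapped) without touching any chunk.
	 */
	static uint64_t
	lookup_pseudo_pte(const uint64_t *pde, struct pseudo_pte_chunk **chunks,
	    uint64_t kva_start, uint64_t va)
	{
		size_t slot = (va - kva_start) / ONE_GB;

		if (pde[slot] == 0)
			return (0);
		return (chunks[slot]->pte[((va - kva_start) % ONE_GB) / TWO_MB]);
	}

[With such a layout the pseudo-pte portion of the dump would scale with the KVA
actually backed by mappings rather than with the size of the KVA window, at the
cost of one extra table to write and to parse.]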