On Wed, 7 Jan 2004 11:49:16 +1100
Peter Jeremy <peterjeremy_at_optushome.com.au> wrote:

> > You need to make a context switch for every malloc call.
>
> This isn't true.  There are no circumstances under which phkmalloc
> requires a context switch.  Since Unix uses a pre-emptive scheduler,
> a context switch may occur at any time, though it is preferentially
> performed during system calls.

Yes, that was the wrong wording on my part. I was trying to say it has
to enter the kernel via a syscall, in the worst case once for every
malloc()/free(). And entering the kernel is expensive compared to
staying in userland and doing the work there (see the truss sketch at
the end of this mail).

> If the free memory pool managed by phkmalloc has insufficient space to
> fulfil the request, or is excessively large following a free() then it
> will use brk(2) to allocate/return additional memory.  The kernel may
> choose to schedule an alternative process during the brk() call.

With a sufficiently large number of syscalls the wall clock time will
increase, and that's what Holger reported (reading further into your
mail: not because of this fact, but I hadn't dug into the perl module).

[reason why the perl module behaves poorly]

> It's not clear why the builtin perl malloc is so much faster in this
> case.  A quick check of the perl malloc code suggests that it uses a
> geometric-progression bucket arrangement (whereas phkmalloc appears to
> use page-sized buckets for large allocations) - this would
> significantly reduce the number of realloc() copies.

This is IMHO the right allocation algorithm for such programs (at
least I don't know of a better one, and I've seen it in several places
where you can't guess in advance how much memory you will need; see
the growth sketch at the end of this mail). I'm sure the perl
developers tuned perl_malloc() with real-world perl programs. Maybe
this kind of behavior is typical for a lot of perl programs.

Bye,
Alexander.

--
I will be available to get hired in April 2004.

http://www.Leidinger.net    Alexander _at_ Leidinger.net
GPG fingerprint = C518 BC70 E67F 143F BE91 3365 79E2 9C60 B006 3FE7
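
[To illustrate the syscall-overhead point above: a minimal sketch, not
from the original thread. It assumes a FreeBSD userland; run the binary
under ktrace(1)/kdump(1) or truss(1) and compare the number of brk()
calls to the number of malloc() calls. phkmalloc serves most requests
from its userland free pool, so the brk() count stays far below the
malloc() count.]

    /* mtest.c -- hypothetical test program, not part of the thread */
    #include <stdlib.h>

    int
    main(void)
    {
            int i;

            for (i = 0; i < 1000000; i++) {
                    /* usually served from the userland pool, no syscall */
                    void *p = malloc(16);

                    /* memory goes back to the pool, normally no syscall */
                    free(p);
            }
            return (0);
    }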
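
[And to illustrate the geometric-progression point: an illustrative
sketch, not perl's actual allocator code. Growing a buffer by a
constant factor needs only O(log n) realloc() calls (and hence copies)
to reach a final size of n bytes, while growing it in fixed page-sized
steps needs O(n / PAGE_SIZE) of them. The helper name grow_geometric()
is made up for this example.]

    #include <stdlib.h>

    /*
     * Grow a buffer to hold at least "need" bytes by doubling its
     * capacity, so repeated appends trigger only O(log n) reallocs.
     * On entry *cap is the current capacity; it is updated on return.
     */
    static void *
    grow_geometric(void *buf, size_t *cap, size_t need)
    {
            size_t newcap = (*cap != 0) ? *cap : 16;

            while (newcap < need)
                    newcap *= 2;    /* geometric progression: 16, 32, 64, ... */

            *cap = newcap;
            return (realloc(buf, newcap));  /* realloc(NULL, n) acts as malloc(n) */
    }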