Large numbers of memory access operations cause panic under CURRENT (was "Large gap between fwrite and write, and fread and read")

From: <youshi10_at_u.washington.edu>
Date: Mon, 16 Jul 2007 14:56:02 -0700 (PDT)
On Mon, 16 Jul 2007, Milos Vyletel wrote:

> On Mon, Jul 16, 2007 at 05:06:57AM -0700, Garrett Cooper wrote:
>> Hello again Hackers,
>>    I ran some tests and I noticed a large difference in the cumulative
>> sums of fwrite(3) vs write(2) and fread(3) vs read(2) (3-fold
>> differences on a real machine).
>>    Please download
>> <http://students.washington.edu/youshi10/posted/fat.tgz>, take a look at
>> README for some results, and read for more details on how you can run
>> the tests as well if curious, and feel free to send me the results if
>> desired. If you do run the tests, please don't use the /tmp disk at all
>> on the machine as it will most likely skew parts of the test, and please
>> pay heed to the warning, otherwise the test box may become unusable.
>>    One thing that has me puzzled though... why is there such a large
>> difference? Is it because fread(3) works object by object
>> (i.e. fills objects according to sizeof and returns the number of
>> complete objects read), whereas read(2) just reads raw bytes and
>> returns the number of bytes read? Does the same logic apply to
>> fwrite(3) and write(2)?
>> Thanks,
>> -Garrett -- I really need to go to bed earlier.. haha.
>> _______________________________________________
>
> Hi,
>
> This is somewhat off topic, but with this test and more than a hundred thousand iterations (actually about 190000 is enough) I'm able to crash my amd64 machine with a "kmem_map too small" panic. This is a six-day-old CURRENT with ZFS on root.
>
> So this panic is not only an i386-specific thing :\
>
> mv

Go figure it'd cause panics for other people.

I wasn't using ZFS at all, but it panicked anyhow once (only on my amd64 VM, surprisingly, not on my i386 test server). I wish I'd caught the panic message, but I walked away to get a glass of water, and there wasn't a core dump because the VM shut down completely instead of restarting. Heh.

My virtual machine died around 90k iterations on the first trial, though. I'll reduce the iteration count and see what happens, and I'll put nanosleep(2)s or usleep(3)s between the read and write ops to see if that alleviates the apparent race condition, but I'll keep the problem code around for reference in case I've triggered some sort of weird bug in FreeBSD, or otherwise.

Both my VM and test server run almost nothing other than samba and rsync, though, so if you keep a lot more programs resident in memory you'll probably see the panic faster / more frequently than I will.

Just curious, what scheduler are you using on CURRENT, what processor do you have, and what are your memory specs?

Thanks,
-Garrett
Received on Mon Jul 16 2007 - 19:56:04 UTC

This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:39:14 UTC