Re: tmpfs panic

From: Kostik Belousov <kostikbel_at_gmail.com>
Date: Sun, 15 Jun 2008 21:20:12 +0300
On Sun, Jun 15, 2008 at 09:25:53PM +0400, Dmitry Morozovsky wrote:
> On Sun, 15 Jun 2008, Dmitry Morozovsky wrote:
> 
> DM> KB> > KB> I suspect this may be my mistake.
> DM> KB> > KB> In case you can reproduce it, please try the patch below.
> DM> KB> > 
> DM> KB> > Well, it seems hard to reproduce; possibly some races in kernel memory
> DM> KB> > allocation exist. I keep trying (running rsync -aH of an svn tree to
> DM> KB> > tmpfs in a loop).
> DM> KB> 
> DM> KB> Unmounting is required to trigger the problem. Please report your results
> DM> KB> to me either way; I will commit the patch then.
> DM> 
> DM> Yes, the loop consists of rsync, umount, and mount. I'll let this run for
> DM> several hours, then report back to you.
> 
> After about 80 iterations without a panic, I think it was sheer coincidence
> that I encountered this panic in the first place. ;-)
Thank you for testing.
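
For the archive, a reproduction loop along these lines would do; the tree
location and mount point below are placeholders, adjust to taste:

    #!/bin/sh
    # Stress loop: fill a fresh tmpfs with rsync, then unmount it,
    # since unmounting is what triggers the problem.
    # /usr/svn and /mnt/tmpfs are made-up paths.
    n=0
    while [ $n -lt 100 ]; do
            mount -t tmpfs tmpfs /mnt/tmpfs || exit 1
            rsync -aH /usr/svn/ /mnt/tmpfs/
            umount /mnt/tmpfs || exit 1
            n=$((n + 1))
    done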

> 
> BTW, a side result: 128M for kern.maxswzone is enough to fill tmpfs
> to 100% on amd64 with 4G RAM + 8G swap.
>
> Also, I observe that tmpfs behaves non-optimally; I did not find a
> straightforward way to set the block/fragment size. I suppose that for
> most tmpfs usage they should be decreased to the lowest values, such as
> 4k/512 -- what do you think?
Block and fragment size concepts are not applicable to tmpfs; basically,
that is the point of having such a filesystem in the system. Each file
on tmpfs is represented by a swap-backed VM object.
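
The knobs tmpfs does offer are of a different kind; for instance, an
overall size limit can be given at mount time. Something like the
following (the mount point and the byte count are just an example;
whether human-readable suffixes are accepted may depend on the version):

    mount -t tmpfs -o size=1073741824 tmpfs /mnt/tmp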

Besides the set of (mostly) known problems with correctness and
stability, the current implementation handles the interaction between
mmap and the buffer cache quite inefficiently. The VM object (and pages)
used for the VM operations is copied from the backing VM object instead
of being reused, so we pay for essentially twice the memory, plus the
copying itself.

Received on Sun Jun 15 2008 - 16:20:19 UTC
