Re: ZFS kmem_map too small.

From: Adam McDougall <mcdouga9_at_egr.msu.edu>
Date: Thu, 4 Oct 2007 21:25:22 -0400
On Fri, Oct 05, 2007 at 02:00:46AM +0200, Pawel Jakub Dawidek wrote:

  Hi.
  
  We're about to branch RELENG_7 and I'd like to start a discussion with
  folks who experience the 'kmem_map too small' panic with the latest HEAD.
  
  I'm trying hard to reproduce it and I can't, so I need to gather more
  info on how you are able to provoke this panic.
  
  What I did was to rsync 200 FreeBSD src trees from one directory to
  another on the same ZFS file system. It worked fine.
  
  The system I'm using is i386 and the only tuning I did was a bigger
  kmem_map. From my /boot/loader.conf:
  
  vm.kmem_size=629145600
  vm.kmem_size_max=629145600
  
  The machine is dual core Pentium D 3GHz with 1GB of RAM. My pool is:
  
  lcf:root:/tank/0# zpool status
    pool: tank
   state: ONLINE
   scrub: none requested
  config:
  
          NAME        STATE     READ WRITE CKSUM
          tank        ONLINE       0     0     0
            ad4       ONLINE       0     0     0
            ad5       ONLINE       0     0     0
            ad6       ONLINE       0     0     0
            ad7       ONLINE       0     0     0
  
  errors: No known data errors
  
  If you can still see those panics, please let me know as soon as possible
  and try to describe what your workload looks like, how to reproduce it,
  etc. I'd really like ZFS to be rock-stable for 7.0, even on i386.
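
  As a side note on the numbers above: 629145600 bytes is exactly 600 MB,
  so the loader.conf values pin kmem at 600 MB. A quick shell sanity check:

```shell
# The vm.kmem_size value from loader.conf above, expressed as 600 MB in bytes:
kmem_bytes=$((600 * 1024 * 1024))
echo "$kmem_bytes"   # prints 629145600
```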
  

I have an Athlon X2 with 2G RAM running an amd64 -current build not more than
a few weeks old. I started getting kmem panics at ~350M kmem, bumped kmem
to 1G, but it panicked at 1G after a few days; I bumped it to 1.5G and it has
been running since then. The 1G crash was probably on Oct 1.  Nightly,
a few systems ssh into it over the internet and run rsync as a
system backup method.  I would log in and get some details, but at present
ssh is not responding properly and I'm offsite.  I can check on it tomorrow.
It is not a critical system, so I can make adjustments to it if I should try
to make it crash.  I think I have 3 x 250GB SATA disks in the zpool.  I don't
think I've tuned anything except kmem, and that was only since a month or so
ago.  I updated it around then, and before that, it used to get panics that
might have been unrelated to ZFS (strange ones like kernel trap 9, others?).
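
For reference, the 1.5G bump described above would look roughly like this in
/boot/loader.conf (a sketch from memory; the exact values on the box may
differ):

```
# /boot/loader.conf -- rough sketch of the 1.5G kmem tuning described above
vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
```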

The systems being backed up to it are live login/webservers with
a healthy amount of (probably mostly small) files.  I know some of the
files are excluded from rsync, but the largest host being backed up has
approximately 75 million inodes.  Whatever gets backed up from it runs in one
shot at 5am.
Received on Thu Oct 04 2007 - 23:41:47 UTC
