On Tue, Jun 05, 2007 at 08:35:51PM -0400, Kris Kennaway wrote:
> On Wed, Jun 06, 2007 at 02:19:57AM +0200, Ivan Voras wrote:
> > Sean Hafeez wrote:
> > > Has anyone looked at the ZFS port and how it does on 32-bit CPUs
> > > vs 64-bit ones? I know under Solaris they do not recommend using
> > > a 32-bit CPU. In my case I was thinking about doing some testing
> > > on a Dual P3-850.
> >
> > It works, and there's never been doubt that it would work. The main
> > resource you need is memory. At least 1 GB is recommended, but it
> > should work with 512 MB (though people were reporting panics unless
> > they scaled the ZFS and VFS parameters down). If you're thinking of
> > using it in production, you should read the threads on this list
> > regarding ZFS, especially those mentioning panics.
>
> It "works", but there are serious performance issues to do with how
> ZFS on FreeBSD handles caching of data. In order to get reasonable
> performance you will want to tune VM_KMEM_SIZE_MAX as high as you
> can get away with (how high depends on how much RAM you have).
> Roughly half of this will be used by the ARC (the ZFS buffer cache).
> This is typically less memory than the standard buffer cache would
> have available, so ZFS still loses out on caching, particularly on
> systems with a lot of RAM.
>
> You may also need to hack ZFS a bit. The following patch improves
> performance for me on amd64 (and avoids a deadlock). I have not
> tested whether it is sufficient or reasonable on i386 (only amd64);
> the KVA shortage there makes it hard to tune memory availability the
> way ZFS wants it.
>
> There is also a panic condition that may be triggered on SMP when
> you have INVARIANTS enabled. pjd and I don't yet understand the
> cause of this, but it appears to be spurious ("returning to
> userspace with 1 locks held" when no locks appear to actually be
> held, i.e. it seems to be some kind of leak in the stats).

Also, on amd64 it helps to crank kern.maxvnodes way up if you have
the RAM for it (I use 400000 on my 2 GB system). With my patch it
seems to do a reasonable job of autotuning itself if you set it too
high, but there is a bit of performance loss from this if it
kickstarts vnlru too frequently. Watch vfs.numvnodes to see where it
stabilizes over time on your workload and then cap it a bit higher.

On i386 this may be bad advice, since vnodes are also allocated out
of the kmem_map on i386 (on amd64 they use the direct-mapped area)
and will compete for space with everything else (i.e. with the
default kmem_map size you have to *lower* kern.maxvnodes from 100000
to 75000 to avoid ZFS running out of space). Running with maxvnodes
too low will seriously limit your performance by reducing caching,
though.

The bottom line is that ZFS on FreeBSD/i386 currently seems hard to
tune for performance, so if possible consider running it on amd64
instead. There is a lot of scope for someone to fix ZFS on FreeBSD to
be more sane about memory management (on all architectures), and
hopefully someone will be motivated to do that.

Kris
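
P.S. To make the kmem tuning concrete, here is a minimal sketch of
the sort of settings involved. The values are examples only, not
recommendations; check the option names against your own source tree
and size them to your RAM:

    # kernel config fragment (illustrative values)
    options KVA_PAGES=512                        # i386 only: enlarge the
                                                 # kernel address space so a
                                                 # larger kmem_map can fit
    options VM_KMEM_SIZE_MAX="(1024*1024*1024)"  # raise the kmem_map ceiling
                                                 # (here 1 GB); roughly half
                                                 # of this ends up in the ARC

Rebuild and reinstall the kernel after changing these. The loader
tunable vm.kmem_size in /boot/loader.conf can also be used to set the
kmem_map size directly at boot.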
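
P.P.S. The maxvnodes advice as commands (400000 is my amd64/2 GB
figure from above; on i386 with the default kmem_map size the safe
direction is down, e.g. 75000):

    # amd64, 2 GB RAM: raise the vnode cap
    # (put kern.maxvnodes=400000 in /etc/sysctl.conf to persist it)
    sysctl kern.maxvnodes=400000

    # watch where the vnode count stabilizes under your workload over
    # time, then set kern.maxvnodes a bit higher than that
    sysctl vfs.numvnodes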