On Sun, 18 Jul 2004, Norikatsu Shigemura wrote:

> On Sat, 17 Jul 2004 15:59:31 -0400
> Alex Vasylenko <lxv_at_omut.org> wrote:
> > I find the performance of nullfs somewhat lacking as measured in the test
> > described below (a config with nullfs performs worse (~2x slower) than the
> > same config with vnodefs). For simplicity the test was done in chroot;
> > doing it in a jail has no significant impact on performance.
>
> 	Wow, I confirmed this behavior with 'make buildworld' on
> 	5-current (2004/7/2, SMP).
>
> 	nullfs mounted /usr/src, /usr/obj:  about 5000 sec
> 	ln -s'ed /usr/src, /usr/obj:        about 3000 sec

There are a number of potential causes for this, and working out which it
is would be useful. One is the direct overhead associated with stacking --
extra computation, locking, function calls, etc. Another is the indirect
overhead of allocating twice as many vnodes for every file system object
(one for the original location, one for the new location). This shows up
both as actual memory overhead and as added pressure on the maxvnodes
bound, which causes vnodes to be recycled. It could be that you're hitting
the bound and, as a result, useful vnodes are leaving the vnode cache.

You might try looking at the values of vfs.numvnodes, vfs.wantfreevnodes,
vfs.freevnodes, and kern.maxvnodes at intervals through the benchmark --
maybe running a script that pulls down the sysctl values every 10 or 20
seconds or so (a sketch of such a script is appended below). On some
systems, "memory is no object" -- on other systems it is -- so it would be
interesting to know how much memory your system has.

Finally, it would be interesting to know what the page fault rate and disk
I/O transaction rates are during the benchmark. These might show whether
the additional memory consumption is creating pressure on memory needed
elsewhere.

Robert N M Watson             FreeBSD Core Team, TrustedBSD Projects
robert_at_fledge.watson.org      Principal Research Scientist, McAfee Research

Received on Sat Jul 17 2004 - 21:01:02 UTC
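
[A minimal sketch of the kind of monitoring script suggested above, assuming
a stock FreeBSD /bin/sh with sysctl(8), vmstat(8), and iostat(8) available;
the sampling interval and log file name are arbitrary placeholders, not part
of the original suggestion.]

    #!/bin/sh
    # Sketch only: sample the vnode-related sysctls, plus paging and disk
    # activity counters, at a fixed interval while the benchmark runs.
    # INTERVAL and LOG are arbitrary placeholder names.

    INTERVAL=${1:-10}            # seconds between samples (10 or 20 as suggested)
    LOG=${2:-vnode-stats.log}

    while :; do
            date -u >> "$LOG"
            # Vnode cache state: is numvnodes pinned at maxvnodes?
            sysctl vfs.numvnodes vfs.wantfreevnodes vfs.freevnodes \
                kern.maxvnodes >> "$LOG"
            # Counters since boot; deltas between successive samples give
            # the page fault and pageout rates during the benchmark.
            vmstat -s | grep -E 'fault|paged' >> "$LOG"
            # One-shot device summary since boot, again useful for deltas.
            iostat >> "$LOG"
            echo "" >> "$LOG"
            sleep "$INTERVAL"
    done

[Usage would be to start the script in the background before the 'make
buildworld' run and kill it afterwards; comparing successive samples shows
whether vfs.numvnodes sits at the kern.maxvnodes bound under the nullfs
configuration but not under the symlinked one.]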