On Friday, March 13, 2015 06:32:03 AM Mateusz Guzik wrote:
> On Thu, Mar 12, 2015 at 06:13:00PM -0500, Alan Cox wrote:
> > Below are partial results from a profile of a parallel (-j7) "buildworld"
> > on a 6-core machine that I did after the introduction of pmap_advise, so
> > this is not a new profile.  The results are sorted by total waiting time
> > and only the top 20 entries are listed.
>
> Well, I ran stuff on lynx2 in the zoo on fresh -head with debugging
> disabled (MALLOC_PRODUCTION included) and got quite different results.
>
> The machine is Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
> 2 package(s) x 10 core(s) x 2 SMT threads
>
> 32GB of RAM
>
> Stuff was built in a chroot with world hosted on zfs.
>
> >    max  wait_max        total  wait_total      count  avg  wait_avg  cnt_hold  cnt_lock  name
> >   1027    208500     16292932  1658585700    5297163    3       313         0   3313855  kern/vfs_cache.c:629 (rw:Name Cache)
> > 208564    186514  19080891106  1129189627  355575930   53         3         0   1323051  kern/vfs_subr.c:2099 (lockmgr:ufs)
> > 169241    148057    193721142   419075449   13819553   14        30         0    110089  kern/vfs_subr.c:2210 (lockmgr:ufs)
> > 187092    191775   1923061952   257319238  328416784    5         0         0   5106537  kern/vfs_cache.c:488 (rw:Name Cache)
>
> make -j 12 buildworld on a freshly booted system (i.e. the most namecache
> insertions):
>
>     32       292      3042019    33400306    8419725    0         3         0   2578026  kern/sys_pipe.c:1438 (sleep mutex:pipe mutex)
> 170608    152572    642385744    27054977  202605015    3         0         0   1306662  kern/vfs_subr.c:2176 (lockmgr:zfs)

You are using ZFS; Alan was using UFS.  It would not surprise me that those
would perform quite differently, and it would not surprise me that UFS is
more efficient in terms of its interactions with the VM.

-- 
John Baldwin
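
[Editor's note: both tables above have the column layout produced by
FreeBSD's lock profiling facility.  The following is a minimal sketch of
how such a profile can be collected, assuming a kernel built with
"options LOCK_PROFILING"; the debug.lock.prof sysctls are the standard
knobs, but the sort/trim pipeline at the end is illustrative, not taken
from the thread:

    sysctl debug.lock.prof.reset=1     # discard stats from any earlier run
    sysctl debug.lock.prof.enable=1    # start accumulating hold/wait times
    make -j7 buildworld                # the workload being measured
    sysctl debug.lock.prof.enable=0    # stop accumulating
    # wait_total is the 4th column, so this approximates Alan's listing:
    # the top 20 entries sorted by total waiting time
    sysctl -n debug.lock.prof.stats | sort -rnk4 | head -20
]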