Hello,

To push ZFS, I launch two scrubs at the same time; after ~20 seconds the
system freezes:

zpool scrub pool0 && zpool scrub pool2

My pools:

zpool status
  pool: pool0
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool0       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            da0s2   ONLINE       0     0     0
            da1s2   ONLINE       0     0     0

errors: No known data errors

  pool: pool1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          da0s3     ONLINE       0     0     0
          da1s3     ONLINE       0     0     0

errors: No known data errors

  pool: pool2
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool2       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad4s3   ONLINE       0     0     0
            ad6s3   ONLINE       0     0     0

errors: No known data errors

I'm running 7.0-BETA2 with the patch
http://people.freebsd.org/~pjd/patches/vm_kern.c.2.patch

Root is on pool0:

[root@morzine ~]# df -h
Filesystem            Size    Used   Avail Capacity  Mounted on
pool0                  34G     16M     34G     0%    /
devfs                 1.0K    1.0K      0B   100%    /dev
/dev/mirror/gm0s1a    496M    220M    236M    48%    /bootfs
procfs                4.0K    4.0K      0B   100%    /proc
pool0/home             35G    1.5G     34G     4%    /home
pool1                  16G    128K     16G     0%    /pool1
pool1/qemu             24G     16G    8.1G    66%    /pool1/qemu
pool1/squid            12G     39M     12G     0%    /pool1/squid
pool2                  72G      0B     72G     0%    /pool2
pool2/WorkBench        64G     21G     43G    33%    /pool2/WorkBench
pool2/backup           32G    6.8G     25G    21%    /pool2/backup
pool2/download         72G      0B     72G     0%    /pool2/download
pool2/morzine          85G     13G     72G    16%    /pool2/morzine
pool2/qemu             16G     16G    112M    99%    /pool2/qemu
pool2/sys              73G    1.2G     72G     2%    /pool2/sys
pool0/tmp              34G    384K     34G     0%    /tmp
pool0/usr              40G    5.7G     34G    14%    /usr
pool0/var              34G    116M     34G     0%    /var
pool0/var/spool        39G    5.2G     34G    13%    /var/spool
devfs                 1.0K    1.0K      0B   100%    /var/named/dev

I can break into the debugger on the serial console; here is some
information:

db> ps
 pid ppid pgrp uid state wmesg wchan cmd
3425 3424 3425 0 RVs cron
3424 1161 1161 0 S ppwait 0xc5f30000 cron
3423 589 3423 0 S+ zfs:&vq- 0xc5c0b334 zpool
3419 0 0 0 SL vgeom:io 0xc8ba1308 [vdev:worker ad6s3]
3418 0 0 0 SL vgeom:io 0xda70f748 [vdev:worker ad4s3]
3417 0 0 0 SL zfs:(&sp 0xc56da318 [spa_scrub_thread]
3415 0 0 0 SL vgeom:io 0xd90229c8 [vdev:worker da1s2]
3414 0 0 0 SL vgeom:io 0xc5892208 [vdev:worker da0s2]
3413 0 0 0 SL zfs:(&sp 0xc56db318 [spa_scrub_thread]
3309 998 979 8 S nanslp 0xc0890924 sleep
3136 995 995 8 S select 0xc089b778 initial thread
2610 1490 2610 0 S+ select 0xc089b778 ssh
76040 1 76016 2001 S select 0xc089b778 initial thread
76038 76034 76016 2001 S (threaded) firefox-bin
      100333 S ucond 0xcb3fc080 firefox-bin
      100327 S ucond 0xc8ba1e80 firefox-bin
      100326 S ucond 0xc722abc0 firefox-bin
      100323 S ucond 0xc5b56680 firefox-bin
      100285 0xcb466580 firefox-bin
      100156 S select 0xc089b778 firefox-bin
      100441 S select 0xc089b778 initial thread
76034 76030 76016 2001 S wait 0xcca1e2a8 sh
76030 1 76016 2001 S wait 0xcca1f000 sh
29979 29976 29979 70 Rs postgres
29978 29976 29978 70 Rs postgres
29976 1 29976 70 Ss select 0xc089b778 postgres
25774 1 25774 25 Ss pause 0xcf074858 sendmail
25770 1 25770 0 Ss select 0xc089b778 sendmail
589 587 589 0 S+ wait 0xcbe18d48 bash
587 1311 587 2001 Ss+ wait 0xca7e52a8 su
7234 1486 7234 0 S+ select 0xc089b778 ssh
58922 1 58922 0 Ss kqread 0xdb632200 cupsd
54279 1 54279 53 Ss (threaded) named
      100207 S select 0xc089b778 named
      100206 S ucond 0xc76cadc0 named
      100205 S ucond 0xd1c3c440 named
      100204 S ucond 0xd1c3dac0 named
      00291 S sigwait 0xfcae1be0 named
12309 1 12308 2001 R initial thread
7760 1 7760 0 SLs aiordy 0xc8220220 [aiod4]
7759 1 7759 0 SLs aiordy 0xcaf2e440 [aiod3]
7758 1 7758 0 SLs aiordy 0xd16fc220 [aiod2]
7757 1 7757 0 SLs aiordy 0xd16ff000 [aiod1]
7719 0 0 0 SL - 0xd24b5600 [aiod_bio taskq]
4812 0 0 0 SL gj:work 0xcfba6e00 [g_journal ad4s2]
1798 1794 1790 2001 R (threaded) thunderbird-bin
      100332 CanRun thunderbird-bin
      100299 RunQ thunderbird-bin
      100300 RunQ thunderbird-bin
      100298 RunQ thunderbird-bin
      100407 RunQ thunderbird-bin
      100406 S ucond 0xc722b600 thunderbird-bin
      100329 S ucond 0xc722a7c0 thunderbird-bin
      100328 S ucond 0xcb3fc140 thunderbird-bin
      100325 S ucond 0xc5b2e980 thunderbird-bin
      100324 S ucond 0xd1d22a00 thunderbird-bin
      100319 S ucond 0xc76e0980 thunderbird-bin
      0301 S select 0xc089b778 thunderbird-bin
      100289 S select 0xc089b778 initial thread
1794 1790 1790 2001 S wait 0xcaf32d48 sh
1790 1302 1790 2001 Ss wait 0xcbe10d48 sh
1789 1777 1777 80 S (threaded) httpd
      100318 S kqread 0xca2b1200 httpd
      100317 S ucond 0xd1829100 httpd
      100316 S ucond 0xd1ba7a00 httpd
      100315 S ucond 0xc8a044c0 httpd
      100314 S ucond 0xc7cf26c0 httpd
      100313 S ucond 0xc76e0740 httpd
      100312 S ucond 0xc692a140 httpd
      100311 S ucond 0xd1829780 httpd
      100310 S ucond 0xc7bf1680 httpd
      100309 S ucond 0xd1d22040 httpd
      100308 S ucond 0xd1d22e00 httpd
      100307 S ucond 0xd1d228c0 httpd
      100306 S ucond 0xcb293400 httpd
      100305 S ucond 0xcb5f6440 httpd
      100304 S ucond 0xd1829ac0 httpd
      100303 S ucond 0xd0b81c80 httpd
      100254 S piperd 0xc7a03000 httpd
1786 1777 1777 80 S accept 0xcefe266a httpd
1785 1781 1777 0 S piperd 0xc666a318 cronolog
1784 1779 1777 0 S piperd 0xc666a630 cronolog
1783 1780 1777 0 S piperd 0xd19e7318 cronolog
1782 1778 1777 0 S piperd 0xc6669948 cronolog
1781 1777 1777 0 S wait 0xcbe102a8 sh
1780 1777 1777 0 S wait 0xcaf2daa0 sh
1779 1777 1777 0 S wait 0xd0bb02a8 sh
1778 1777 1777 0 S wait 0xcaf32000 sh
1777 1 1777 0 Rs httpd
1774 1768 1756 0 S+ piperd 0xc666a7bc cronolog
1768 1 1756 0 S+ wait 0xc71b77f8 sh
1753 1 1729 2001 S select 0xc089b778 initial thread
1490 1322 1490 0 S+ wait 0xcbe18550 bash
1488 1320 1488 0 S+ ttyin 0xc5dec810 bash
1486 1318 1486 0 S+ wait 0xc5bd1550 bash
1484 1316 1484 0 S+ ttyin 0xc669cc10 bash
1322 1311 1322 2001 Ss+ wait 0xce569d48 su
1320 1311 1320 2001 Ss+ wait 0xc6cb6d48 su
1318 1311 1318 2001 Ss+ wait 0xcbe0f000 su
1316 1311 1316 2001 Ss+ wait 0xcbe187f8 su
1313 1311 1280 2001 S+ sbwait 0xcefe20bc initial thread
1312 1302 1280 2001 R+ initial thread
1311 1 1280 2001 R+ initial thread
1309 1302 1280 2001 S+ select 0xc089b778 initial thread
1303 1295 1280 2001 R+ initial thread
1302 1295 1280 2001 R+ initial thread
1301 1295 1280 2001 S+ select 0xc089b778 initial thread
1300 1295 1280 2001 S+ select 0xc089b778 initial thread
1299 1 1299 2001 Rs xfce-mcs-manager
1295 1280 1280 2001 S+ select 0xc089b778 initial thread
1293 1 1293 2001 Ss select 0xc089b778 dbus-daemon
1292 1 1280 2001 S+ select 0xc089b778 dbus-launch
1287 1 1287 2001 Ss select 0xc089b778 ssh-agent
1280 1274 1280 2001 S+ wait 0xcbe0faa0 sh
1275 1274 1275 2001 S+ select 0xc089b778 Xorg
1274 1272 1274 2001 S+ wait 0xc6cb6550 xinit
1272 1271 1272 2001 S+ wait 0xc5f31aa0 bash
1271 1 1271 0 Ss+ wait 0xcaf31d48 login
1239 1 1239 0 Ss+ ttyin 0xc56eac10 getty
1238 1 1238 0 Ss+ ttyin 0xc5703010 getty
1237 1 1237 0 Ss+ ttyin 0xc5703410 getty
1221 1 1221 0 Ss select 0xc089b778 inetd
1192 1 1192 0 Rs moused
1161 1 1161 0 Ss nanslp 0xc0890924 cron
1156 1 1156 0 Ss select 0xc089b778 sshd
1045 1 1045 501 Ss select 0xc089b778 cvsupd
1017 995 995 8 S select 0xc089b778 innfeed
1002 997 979 8 S+ nanslp 0xc0890924 initial thread
998 996 979 8 S+ wait 0xc6cb6aa0 sh
997 1 979 8 S+ wait 0xc65f8550 sh
996 1 979 8 S+ wait 0xc71bd2a8 sh
995 1 995 8 Ss select 0xc089b778 innd
967 1 967 279 Ss nanslp 0xc0890924 perl
957 1 957 0 Ss select 0xc089b778 bsnmpd
948 1 948 556 Ss select 0xc089b778 dbus-daemon
920 915 915 0 S lockf 0xc8b4e140 saslauthd
918 915 915 0 S lockf 0xc5892580 saslauthd
917 915 915 0 S lockf 0xc8b4f040 saslauthd
916 915 915 0 S lockf 0xc76cb080 saslauthd
915 1 915 0 Ss accept 0xc8613c9a saslauthd
908 1 908 389 Ss (threaded) slapd
      100293 S ucond 0xc8b7f940 slapd
      100409 S ucond 0xcb5f7c40 slapd
      100209 S select 0xc089b778 slapd
      100155 S uwait 0xc5ae0bc0 slapd
903 894 903 100 Ss piperd 0xc669218c unlinkd
894 892 892 100 R (threaded) squid
      100225 S ucond 0xc692a940 squid
      100224 S ucond 0xc692a680 squid
      100223 S ucond 0xc7cf2bc0 squid
      100222 S ucond 0xc7cf2b40 squid
      100221 S ucond 0xc7cf2b80 squid
      100220 S ucond 0xc68c5740 squid
      100219 S ucond 0xc562e600 squid
      100218 S ucond 0xc8b4e000 squid
      100217 S ucond 0xc8b4e100 squid
      100216 S ucond 0xc7cf2c40 squid
      100215 S ucond 0xc8b4e480 squid
      100214 S ucond 0xc8b4e440 squid
      100213 S ucond 0xc68c5c00 squid
      100212 S ucond 0xc76e0240 squid
      100211 S ucond 0xc6c89a40 squid
      100210 S ucond 0xc6c89080 squid
      100071 CanRun initial thread
892 1 892 100 Ss wait 0xc65f87f8 squid
883 1 882 0 S nanslp 0xc0890924 smartd
859 1 859 0 Rs ntpd
821 1 821 0 Rs (threaded) apcupsd
      100208 S select 0xc089b778 apcupsd
      100187 RunQ apcupsd
792 1 792 0 Ss auditd 0xc08aaa28 auditd
779 1 779 0 Ss select 0xc089b778 rpcbind
700 1 700 0 Rs syslogd
295 0 0 0 SL zfs:(&tq 0xc5bcbaac [zil_clean]
294 0 0 0 SL zfs:(&tq 0xc5bcbb78 [zil_clean]
293 0 0 0 SL zfs:(&tq 0xc5bcbc44 [zil_clean]
292 0 0 0 SL zfs:(&tq 0xc5bcbd10 [zil_clean]
291 0 0 0 SL zfs:(&tq 0xc5bcbddc [zil_clean]
290 0 0 0 SL zfs:(&tq 0xc5bcbea8 [zil_clean]
289 0 0 0 SL zfs:(&tq 0xc6c52050 [zil_clean]
288 0 0 0 SL zfs:(&tq 0xc6c5211c [zil_clean]
287 0 0 0 SL zfs:(&tq 0xc6c521e8 [zil_clean]
286 0 0 0 SL zfs:(&tq 0xc6c522b4 [zil_clean]
285 0 0 0 SL zfs:(&tq 0xc6c52380 [zil_clean]
284 0 0 0 SL zfs:(&tq 0xc6c5244c [zil_clean]
283 0 0 0 SL zfs:(&tq 0xc558b380 [zil_clean]
282 0 0 0 SL zfs:(&tq 0xc558b44c [zil_clean]
281 0 0 0 SL zfs:(&tq 0xc558b518 [zil_clean]
279 0 0 0 SL zfs:(&tx 0xc638c32c [txg_thread_enter]
278 0 0 0 SL zfs:(&sp 0xc56da310 [txg_thread_enter]
277 0 0 0 SL zfs:(&tx 0xc638c31c [txg_thread_enter]
274 0 0 0 SL zfs:(&tq 0xc558b5e4 [spa_zio_intr_5]
273 0 0 0 SL zfs:(&tq 0xc558b5e4 [spa_zio_intr_5]
272 0 0 0 SL zfs:(&tq 0xc558b6b0 [spa_zio_issue_5]
271 0 0 0 SL zfs:(&tq 0xc558b6b0 [spa_zio_issue_5]
270 0 0 0 SL zfs:(&tq 0xc558b77c [spa_zio_intr_4]
269 0 0 0 SL zfs:(&tq 0xc558b77c [spa_zio_intr_4]
268 0 0 0 SL zfs:(&tq 0xc558a6b0 [spa_zio_issue_4]
267 0 0 0 SL zfs:(&tq 0xc558a6b0 [spa_zio_issue_4]
266 0 0 0 SL zfs:(&tq 0xc558a848 [spa_zio_intr_3]
265 0 0 0 SL zfs:(&tq 0xc558a848 [spa_zio_intr_3]
264 0 0 0 SL zfs:(&tq 0xc558a914 [spa_zio_issue_3]
263 0 0 0 SL zfs:(&tq 0xc558a914 [spa_zio_issue_3]
262 0 0 0 SL zfs:(&tq 0xc558ac44 [spa_zio_intr_2]
261 0 0 0 SL zfs:(&tq 0xc558ac44 [spa_zio_intr_2]
260 0 0 0 SL zfs:(&tq 0xc558addc [spa_zio_issue_2]
259 0 0 0 SL zfs:(&tq 0xc558addc [spa_zio_issue_2]
258 0 0 0 RL [spa_zio_intr_1]
257 0 0 0 SL zfs:&vq- 0xc5c10b34 [spa_zio_intr_1]
256 0 0 (&tq 0xc558ad10 [spa_zio_issue_1]
255 0 0 0 SL zfs:(&tq 0xc558ad10 [spa_zio_issue_1]
254 0 0 0 SL zfs:(&tq 0xc558ab78 [spa_zio_intr_0]
253 0 0 0 SL zfs:(&tq 0xc558ab78 [spa_zio_intr_0]
252 0 0 0 SL zfs:(&tq 0xc558aaac [spa_zio_issue_0]
251 0 0 0 SL zfs:(&tq 0xc558aaac [spa_zio_issue_0]
224 0 0 0 RL [txg_thread_enter]
223 0 0 0 RL [txg_thread_enter]
222 0 0 0 SL zfs:(&tx 0xc694071c [txg_thread_enter]
221 0 0 0 SL vgeom:io 0xc5b57b88 [vdev:worker da1s3]
220 0 0 0 SL vgeom:io 0xc5ae0c08 [vdev:worker da0s3]
219 0 0 0 SL zfs:(&tq 0xc558b2b4 [spa_zio_intr_5]
218 0 0 0 SL zfs:(&tq 0xc558b2b4 [spa_zio_intr_5]
217 0 0 0 SL zfs:(&tq 0xc558b1e8 [spa_zio_issue_5]
216 0 0 0 SL zfs:(&tq 0xc558b1e8 [spa_zio_issue_5]
215 0 0 0 SL zfs:(&tq 0xc558b11c [spa_zio_intr_4]
214 0 0 0 SL zfs:(&tq 0xc558b11c [spa_zio_intr_4]
213 0 0 0 SL zfs:(&tq 0xc558a5e4 [spa_zio_issue_4]
212 0 0 0 SL zfs:(&tq 0xc558a5e4 [spa_zio_issue_4]
211 0 0 0 SL zfs:(&tq 0xc558a77c [spa_zio_intr_3]
210 0 0 0 SL zfs:(&tq 0xc558a77c [spa_zio_intr_3]
209 0 0 0 SL zfs:(&tq 0xc558a9e0 [spa_zio_issue_3]
208 0 0 0 SL zfs:(&tq 0xc558a9e0 [spa_zio_issue_3]
207 0 0 0 SL zfs:(&tq 0xc558a050 [spa_zio_intr_2]
206 0 0 0 SL zfs:(&tq 0xc558a050 [spa_zio_intr_2]
205 0 0 0 SL zfs:(&tq 0xc558a11c [spa_zio_issue_2]
204 0 0 0 SL zfs:(&tq 0xc558a11c [spa_zio_issue_2]
203 0 0 0 SL zfs:(&tq 0xc558a1e8 [spa_zio_intr_1]
202 0 0 0 SL zfs:(&tq 0xc558a1e8 [spa_zio_intr_1]
201 0 0 0 SL zfs:(&tq 0xc558a2b4 [spa_zio_issue_1]
200 0 0 0 SL zfs:(&tq 0xc558a2b4 [spa_zio_issue_1]
199 0 0 0 SL zfs:(&tq 0xc558a380 [spa_zio_intr_0]
198 0 0 0 SL zfs:(&tq 0xc558a380 [spa_zio_intr_0]
197 0 0 0 SL zfs:(&tq 0xc558a44c [spa_zio_issue_0]
196 0 0 0 SL zfs:(&tq 0xc558a44c [spa_zio_issue_0]
112 0 0 0 SL zfs:(&tq 0xc558a518 [zil_clean]
111 0 0 0 SL zfs:(&tx 0xc5af0b24 [txg_thread_enter]
110 0 0 0 SL zfs:(&zi 0xc61a2668 [txg_thread_enter]
109 0 0 0 SL zfs:(&tx 0xc5af0b1c [txg_thread_enter]
106 0 0 0 SL zfs:(&tq 0xc558b848 [spa_zio_intr_5]
105 0 0 0 SL zfs:(&tq 0xc558b848 [spa_zio_intr_5]
104 0 0 0 SL zfs:(&tq 0xc558b914 [spa_zio_issue_5]
103 0 0 0 SL zfs:(&tq 0xc558b914 [spa_zio_issue_5]
102 0 0 0 SL zfs:(&tq 0xc558b9e0 [spa_zio_intr_4]
101 0 0 0 SL zfs:(&tq 0xc558b9e0 [spa_zio_intr_4]
100 0 0 0 SL zfs:(&tq 0xc558baac [spa_zio_issue_4]
99 0 0 0 SL zfs:(&tq 0xc558baac [spa_zio_issue_4]
98 0 0 0 SL zfs:(&tq 0xc558bb78 [spa_zio_intr_3]
97 0 0 0 SL zfs:(&tq 0xc558bb78 [spa_zio_intr_3]
96 0 0 0 SL zfs:(&tq 0xc558bc44 [spa_zio_issue_3]
95 0 0 0 SL zfs:(&tq 0xc558bc44 [spa_zio_issue_3]
94 0 0 0 SL zfs:(&tq 0xc558bd10 [spa_zio_intr_2]
93 0 0 0 SL zfs:&vq- 0xc5c0b334 [spa_zio_intr_2]
92 0 0 0 SL zfs:(&tq 0xc558bddc [spa_zio_issue_2]
91 0 0 0 SL zfs:(&tq 0xc558bddc [spa_zio_issue_2]
90 0 0 0 RL CPU 0 [spa_zio_intr_1]
89 0 0 0 SL zfs:&vq- 0xc5c0b334 [spa_zio_intr_1]
88 0 0 0 SL zfs:(&tq 0xc5bcb050 [spa_zio_issue_1]
87 0 0 0 SL zfs:(&tq 0xc5bcb050 [spa_zio_issue_1]
86 0 0 0 SL zfs:(&tq 0xc5bcb11c [spa_zio_intr_0]
85 0 0 0 SL zfs:(&tq 0xc5bcb11c [spa_zio_intr_0]
84 0 0 0 SL zfs:(&tq 0xc5bcb1e8 [spa_zio_issue_0]
83 0 0 0 SL zfs:(&tq 0xc5bcb1e8 [spa_zio_issue_0]
56 0 0 0 SL m:w1 0xc5afea00 [g_mirror gm0s1]
55 0 0 0 SL sdflush 0xc08aab24 [softdepflush]
54 0 0 0 SL vlruwt 0xc588e000 [vnlru]
53 0 0 0 RL [syncer]
52 0 0 0 SL psleep 0xc089bc04 [bufdaemon]
51 0 0 0 SL pgzero 0xc08ab6e0 [pagezero]
50 0 0 0 SL psleep 0xc08ab2f8 [vmdaemon]
49 0 0 0 SL psleep 0xc08ab2c0 [pagedaemon]
47 0 0 0 RL [arc_reclaim_thread]
46 0 0 0 SL jsw:wait 0xc088e7b4 [g_journal switcher]
45 0 0 0 SL waiting_ 0xc089f24c [sctp_iterator]
44 0 0 0 WL [swi0: sio]
43 0 0 0 WL [irq12: psm0]
42 0 0 0 RL [irq1: atkbd0]
41 0 0 0 WL [irq15: ata1]
40 0 0 0 WL [irq14: ata0]
39 0 0 0 SL usbevt 0xc55d0210 [usb4]
38 0 0 0 SL usbevt 0xc56bd210 [usb3]
37 0 0 0 SL usbevt 0xc5696210 [usb2]
36 0 0 0 RL [irq18: uhci2]
35 0 0 0 SL usbevt 0xc569d210 [usb1]
34 0 0 0 WL [irq19: uhci1++]
33 0 0 0 SL usbtsk 0xc088e3d4 [usbtask-dr]
32 0 0 0 SL usbtsk 0xc088e3c0 [usbtask-hc]
31 0 0 0 SL usbevt 0xc55f3210 [usb0]
30 0 0 0 WL [irq17: uhci0 ehci0]
29 0 0 0 SL idle 0xc568a000 [mpt_recovery0]
28 0 0 0 RL [em0 taskq]
27 0 0 0 RL [irq16: nvidia0+++]
26 0 0 0 WL [irq9: acpi0]
25 0 0 0 WL [swi2: cambio]
24 0 0 0 SL ccb_scan 0xc08785d4 [xpt_thrd]
23 0 0 0 SL - 0xc5568b00 [acpi_task_2]
22 0 0 0 SL - 0xc5568b00 [acpi_task_1]
21 0 0 0 SL - 0xc5568b00 [acpi_task_0]
20 0 0 0 SL - 0xc5568b80 [kqueue taskq]
19 0 0 0 WL [swi6: task queue]
18 0 0 0 WL [swi6: Giant taskq]
9 0 0 0 SL - 0xc5568e00 [thread taskq]
17 0 0 0 WL [swi5: +]
16 0 0 0 RL [yarrow]
8 0 0 0 SL crypto_r 0xc08aa314 [crypto returns]
7 0 0 0 SL crypto_w 0xc08aa2ec [crypto]
6 0 0 0 SL zfs:(&tq 0xc558b050 [system_taskq]
5 0 0 0 SL zfs:(&tq 0xc558b050 [system_taskq]
4 0 0 0 SL - 0xc088e76c [g_down]
3 0 0 0 SL - 0xc088e768 [g_up]
2 0 0 0 SL - 0xc088e760 [g_event]
15 0 0 0 WL [swi1: net]
14 0 0 0 WL [swi3: vm]
13 0 0 0 RL [swi4: clock sio]
12 0 0 0 RL [idle: cpu0]
11 0 0 0 RL [idle: cpu1]
1 0 1 0 SLs wait 0xc5528d48 [init]
10 0 0 0 RL CPU 1 [audit]
0 0 0 0 WLs [swapper]

db> show lockedvnods
Locked vnodes

0xd29c5880: tag zfs, type VREG
    usecount 1, writecount 0, refcount 1 mountedhere 0
    flags ()
    v_object 0xc81ccc1c ref 0 pages 0
    lock type zfs: SHARED (count 1)

0xd7f4d550: tag zfs, type VREG
    usecount 1, writecount 0, refcount 1 mountedhere 0
    flags ()
    v_object 0xc904a0f8 ref 0 pages 0
    lock type zfs: SHARED (count 1)

0xcff60990: tag zfs, type VREG
    usecount 1, writecount 1, refcount 1 mountedhere 0
    flags ()
    v_object 0xca73ae0c ref 0 pages 0
    lock type zfs: EXCL (count 1) by thread 0xc65f7660 (pid 700)

db> show allpcpu
Current CPU: 1

cpuid        = 0
curthread    = 0xc5b6aaa0: pid 90 "spa_zio_intr_1"
curpcb       = 0xfa577d90
fpcurthread  = none
idlethread   = 0xc5529880: pid 12 "idle: cpu0"
APIC ID      = 0
currentldt   = 0x50

cpuid        = 1
curthread    = 0xc5529220: pid 10 "audit"
curpcb       = 0xf668cd90
fpcurthread  = none
idlethread   = 0xc5529660: pid 11 "idle: cpu1"
APIC ID      = 1
currentldt   = 0x50

db> show intr
irq1: atkbd0            (pid 42) {NEED}
irq4: sio0              (no thread)
irq9: acpi0             (pid 26)
irq12: psm0             (pid 43)
irq14: ata0             (pid 40) {ENTROPY}
irq15: ata1             (pid 41) {ENTROPY}
irq16: nvidia0+++       (pid 27) {ENTROPY, NEED}
irq17: uhci0 ehci0      (pid 30)
irq18: uhci2            (pid 36) {NEED}
irq19: uhci1++          (pid 34) {ENTROPY}
swi4: clock sio         (pid 13) {SOFT, NEED}
swi3: vm                (pid 14) {SOFT}
swi1: net               (pid 15) {SOFT}
swi5: +                 (pid 17) {SOFT}
swi6: Giant taskq       (pid 18) {SOFT}
swi6: task queue        (pid 19) {SOFT}
swi2: cambio            (pid 25) {SOFT}
irq256: em0             (no thread)
swi0: sio               (pid 44) {SOFT}
db>

I can test again on request.

Henri
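PS: if more detail would help, this is roughly what I could collect from
ddb the next time it freezes. It is only a sketch: alltrace and
"show alllocks" may not be available on every kernel (show alllocks needs
WITNESS compiled in), and "call doadump" assumes a dump device is
configured. The pid is just an example taken from one of the
spa_scrub_thread entries in the ps listing above.

db> alltrace         (backtrace of every thread, to see where the two
                      spa_scrub_threads and the zfs:&vq- waiters are stuck)
db> trace 3417       (backtrace of a single pid, e.g. one spa_scrub_thread)
db> show alllocks    (locks currently held; needs WITNESS in the kernel)
db> call doadump     (write a crash dump for later analysis with kgdb)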