on 10/02/2012 01:16 Mark Felder said the following:
> Hi all,
>
> The previous kernel I was running on this test SAN was 9-STABLE from Jan 24th.
> Sorry, no commit # -- I didn't have svn on the machine back then.
>
> Today I built r231282 because it had an interesting fix in it:
>
>   r231141 | mm | 2012-02-07 11:57:33 -0600 (Tue, 07 Feb 2012) | 25 lines
>
>   MFC r230514:
>   Merge illumos revisions 13572, 13573, 13574:
>
>   Rev. 13572:
>   disk sync write perf regression when slog is used post oi_148 [1]
>
>   Rev. 13573:
>   crash during reguid causes stale config [2]
>   allow and unallow missing from zpool history since removal of pyzfs [5]
>
>   Rev. 13574:
>   leaking a vdev when removing an l2cache device [3]
>   memory leak when adding a file-based l2arc device [4]
>   leak in ZFS from metaslab_group_create and zfs_ereport_checksum [6]
>
>   References:
>   https://www.illumos.org/issues/1909 [1]
>   https://www.illumos.org/issues/1949 [2]
>   https://www.illumos.org/issues/1951 [3]
>   https://www.illumos.org/issues/1952 [4]
>   https://www.illumos.org/issues/1953 [5]
>   https://www.illumos.org/issues/1954 [6]
>
>   Obtained from: illumos (issues #1909, #1949, #1951, #1952, #1953, #1954)
>
> After booting into this kernel, iSCSI was hosed: none of the ESXi servers
> looking at it could do any I/O at all. Weird errors showed up in the istgt
> log, too:
>
> Feb  9 16:26:23 zfs-san2 istgt[8177]: Login from
> iqn.1998-01.com.vmware:esx1-21ecbe81 (172.16.17.41) on
> iqn.2011-12.net.supranet.san2.istgt:lun7 LU7 (172.16.17.182:3260,1),
> ISID=23d000002, TSIH=4, CID=0, HeaderDigest=off, DataDigest=off
> Feb  9 16:26:23 zfs-san2 istgt[8177]: istgt_iscsi.c:777:istgt_iscsi_write_pdu_internal:
> ***ERROR*** iscsi_write() failed (errno=32)

Are you positive that this breakage is ZFS related?
BTW, errno 32 is EPIPE.
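That errno usually means the other end closed the connection while istgt was
still trying to write, so the symptom by itself doesn't point at ZFS. A
minimal sketch that reproduces the errno locally (a pipe stands in for istgt's
TCP socket here, purely for illustration):

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
            int fds[2];

            /* Without this, the failed write raises SIGPIPE instead. */
            signal(SIGPIPE, SIG_IGN);

            if (pipe(fds) == -1)
                    return (1);
            close(fds[0]);                          /* the "peer" goes away... */
            if (write(fds[1], "x", 1) == -1)        /* ...and the write fails  */
                    printf("errno %d = %s\n", errno, strerror(errno));
            return (0);
    }

On FreeBSD this prints "errno 32 = Broken pipe".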
> Feb  9 16:26:23 zfs-san2 istgt[8177]: istgt_iscsi.c:4984:sender: ***ERROR***
> iscsi_write_pdu() failed on
> iqn.2011-12.net.supranet.san2.istgt:lun7,t,0x0001(iqn.1998-01.com.vmware:esx1-21ecbe81,i,0x00023d000002)
>
> I didn't see any other commits in between that could cause this, but can
> anyone else confirm? After rebooting into the Jan 24th kernel, everything
> went back to normal...
>
> Thanks,
>
> Mark
>
> zfs-san2# zfs list
> NAME        USED  AVAIL  REFER  MOUNTPOINT
> tank       8.25T  7.76T  1.05M  /tank
> tank/LUN1  1.03T  8.77T  24.7G  -
> tank/LUN2  1.03T  8.77T  19.7G  -
> tank/LUN3  1.03T  8.70T  93.6G  -
> tank/LUN4  1.03T  8.77T  19.3G  -
> tank/LUN5  1.03T  8.79T  44.1K  -
> tank/LUN6  1.03T  8.79T  44.1K  -
> tank/LUN7  1.03T  8.79T  44.1K  -
> tank/LUN8  1.03T  8.79T  44.1K  -
> tank/nfs   2.30G  7.76T  2.30G  /tank/nfs
>
> zfs-san2# zpool status
>   pool: tank
>  state: ONLINE
>   scan: scrub repaired 0 in 0h0m with 0 errors on Thu Jan 26 17:05:40 2012
> config:
>
>         NAME                  STATE     READ WRITE CKSUM
>         tank                  ONLINE       0     0     0
>           raidz1-0            ONLINE       0     0     0
>             multipath/disk01  ONLINE       0     0     0
>             multipath/disk02  ONLINE       0     0     0
>             multipath/disk03  ONLINE       0     0     0
>             multipath/disk04  ONLINE       0     0     0
>           raidz1-1            ONLINE       0     0     0
>             multipath/disk05  ONLINE       0     0     0
>             multipath/disk06  ONLINE       0     0     0
>             multipath/disk07  ONLINE       0     0     0
>             multipath/disk08  ONLINE       0     0     0
>           raidz1-2            ONLINE       0     0     0
>             multipath/disk09  ONLINE       0     0     0
>             multipath/disk10  ONLINE       0     0     0
>             multipath/disk11  ONLINE       0     0     0
>             multipath/disk12  ONLINE       0     0     0
>         logs
>           da1                 ONLINE       0     0     0
>         cache
>           da2                 ONLINE       0     0     0
>           da3                 ONLINE       0     0     0
>
> errors: No known data errors

-- 
Andriy Gapon

Received on Fri Feb 10 2012 - 10:54:45 UTC