First of all, thank you for the help. As for the high load on the system: it looks like the L2ARC problems have little impact on the load compared to other, as yet unclassified, issues. It seems our internal software and the libraries it uses don't like the new VMEM subsystem; at least the system's behavior is completely different from a six-month-old CURRENT. So for now there are no problems with L2ARC errors. I will try to understand the reason for the load and fix it, or at least ask for help again ^).

Steven Hartland wrote:
SH> First off I just wanted to clarify that you don't need compression on the
SH> dataset for L2ARC to use LZ4 compression; it does this by default and it is
SH> not currently configurable.
SH>
SH> Next up, I believe we've found the cause of this high load and I've just
SH> committed the fix to head:
SH> http://svnweb.freebsd.org/base?view=revision&sortby=file&revision=256889
SH>
SH> Thanks to Vitalij for testing :)
SH>
SH> Dmitriy, if you could test on your side too, that would be appreciated.
SH>
SH> Regards
SH> Steve
SH>
SH> ----- Original Message -----
SH> From: "Vitalij Satanivskij" <satan_at_ukr.net>
SH> To: "Allan Jude" <freebsd_at_allanjude.com>
SH> Cc: <freebsd-current_at_freebsd.org>
SH> Sent: Thursday, October 10, 2013 6:03 PM
SH> Subject: Re: ZFS L2ARC - incorrect size and abnormal system load on r255173
SH>
SH>
SH> > AJ> Some background on L2ARC compression for you:
SH> > AJ>
SH> > AJ> http://wiki.illumos.org/display/illumos/L2ARC+Compression
SH> >
SH> > I've already seen it.
SH> >
SH> > AJ> http://svnweb.freebsd.org/base?view=revision&revision=251478
SH> > AJ>
SH> > AJ> Are you sure that compression on pool/zfs is off? It would normally
SH> > AJ> inherit from the parent, so double-check with: zfs get compression pool/zfs
SH> >
SH> > Yes, compression is turned off on pool/zfs; that has been rechecked many times.
SH> >
SH> > AJ> Is the data on pool/zfs related to the data on the root pool? If
SH> > AJ> pool/zfs were a clone, and the data is actually used in both places, the
SH> > AJ> newer 'single copy ARC' feature may come into play:
SH> > AJ> https://www.illumos.org/issues/3145
SH> >
SH> > No, the pool and pool/zfs hold different types of data; pool/zfs was created
SH> > as a new, empty dataset (zfs create pool/zfs), and the data was written to it
SH> > from another server.
SH> >
SH> > Right now one machine works fine with L2ARC. This machine does not have the
SH> > patch correcting ashift on cache devices. It has been working for at least
SH> > three days with zero errors. Other servers with the same config and similar
SH> > data, load, and so on began reporting errors after two days of work.
SH> >
SH> > AJ> --
SH> > AJ> Allan Jude
SH> > AJ>
SH> > AJ> _______________________________________________
SH> > AJ> freebsd-current_at_freebsd.org mailing list
SH> > AJ> http://lists.freebsd.org/mailman/listinfo/freebsd-current
SH> > AJ> To unsubscribe, send any mail to "freebsd-current-unsubscribe_at_freebsd.org"
SH>
SH> ================================================
SH> This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it.
SH>
SH> In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337
SH> or return the E.mail to postmaster_at_multiplay.co.uk.
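[Editor's note: the always-on LZ4 behavior Steven describes can be observed on FreeBSD through the arcstats kstat counters (l2_size is the logical size of data held in L2ARC, l2_asize the allocated size after compression). The sketch below uses hypothetical sample values for those two counters so the arithmetic is self-contained; on a live system they would come from sysctl, as shown in the comments.]

```shell
# On a live FreeBSD system the counters come from:
#   l2_size=$(sysctl -n kstat.zfs.misc.arcstats.l2_size)
#   l2_asize=$(sysctl -n kstat.zfs.misc.arcstats.l2_asize)
# Sample (hypothetical) values are hard-coded here for illustration.
l2_size=107374182400      # logical bytes stored in L2ARC (100 GiB, sample value)
l2_asize=64424509440      # physical bytes after LZ4 (60 GiB, sample value)

# Effective compression ratio = logical / physical; awk does the float math.
awk -v s="$l2_size" -v a="$l2_asize" \
    'BEGIN { printf "L2ARC compression ratio: %.2fx\n", s / a }'
```

With these sample values the script prints "L2ARC compression ratio: 1.67x"; a ratio near 1.00x would suggest the cached data is incompressible, not that LZ4 is disabled.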
Received on Tue Oct 22 2013 - 12:11:07 UTC
This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:40:43 UTC