Re: ZFS secondarycache on SSD problem on r255173

From: Vitalij Satanivskij <satan_at_ukr.net>
Date: Tue, 22 Oct 2013 15:26:33 +0300
So far, no errors on L2ARC:

L2 ARC Summary: (HEALTHY)
        Passed Headroom:                        1.99m
        Tried Lock Failures:                    144.53m
        IO In Progress:                         130.15k
        Low Memory Aborts:                      7
        Free on Write:                          335.56k
        Writes While Full:                      30.31k
        R/W Clashes:                            115.31k
        Bad Checksums:                          0
        IO Errors:                              0
        SPA Mismatch:                           153.15m

L2 ARC Size: (Adaptive)                         433.75  GiB
        Header Size:                    0.49%   2.12    GiB


I will keep testing for a longer time, but it looks like the problem is gone.
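
For reference, the "Bad Checksums" and "IO Errors" lines above should map to the raw arcstats counters, so the same thing can be watched directly with sysctl. A minimal sketch, assuming the standard FreeBSD kstat.zfs.misc.arcstats sysctl names that zfs-stats reads:

        # L2ARC error counters (both should stay at 0)
        sysctl kstat.zfs.misc.arcstats.l2_cksum_bad
        sysctl kstat.zfs.misc.arcstats.l2_io_error

        # L2ARC size and header overhead, matching the summary above
        sysctl kstat.zfs.misc.arcstats.l2_size
        sysctl kstat.zfs.misc.arcstats.l2_hdr_size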


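For clarity, the dataset layout described in the quoted exchange below would be set up roughly like this (a sketch only; disk1 and disk1/data are the pool and dataset names from this thread, and the property values are as described there):

        # top-level dataset: lz4 compression, only metadata goes to L2ARC
        zfs set compression=lz4 disk1
        zfs set secondarycache=metadata disk1

        # child dataset: no compression, everything eligible for L2ARC
        zfs set compression=off disk1/data
        zfs set secondarycache=all disk1/data

        # verify the resulting properties
        zfs get -r compression,secondarycache disk1
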
Vitalij Satanivskij wrote:
VS> Steven Hartland wrote:
VS> SH> So previously you only started seeing l2 errors after there was
VS> SH> a significant amount of data in l2arc? That's interesting in itself
VS> SH> if that's the case.
VS> 
VS> Yes, something around 200+ GB.
VS>  
VS> SH> I wonder if it's the type of data, or something similar. Do you
VS> SH> run compression on any of your volumes?
VS> SH> zfs get compression
VS> 
VS> Right now testing is running with the following configuration:
VS> 
VS> The first dataset is the top-level pool dataset, disk1, which has lz4 compression enabled and secondarycache=metadata.
VS> 
VS> The next dataset is disk1/data, with compression=off and secondarycache=all.
VS> 
VS> The error was seen on a configuration like that, and also on a configuration where secondarycache=none was set for disk1 (with disk1/data still fully cached).
VS> 
VS> 
VS> 
VS> 
VS> SH>     Regards
VS> SH>     Steve
VS> SH> ----- Original Message ----- 
VS> SH> From: "Vitalij Satanivskij" <satan_at_ukr.net>
VS> SH> 
VS> SH> 
VS> SH> > 
VS> SH> > Just now I cannot say, as to trigger the problem we need at least 200+ GB in L2ARC, which usually grows over one production day.
VS> SH> > 
VS> SH> > But for some reason the server was rebooted this morning, so the cache was flushed and it is now only 100 GB.
VS> SH> > 
VS> SH> > Need to wait some more time.
VS> SH> > 
VS> SH> > At least for now, there are no errors on L2.
VS> SH> 
VS> SH> 
Received on Tue Oct 22 2013 - 10:26:37 UTC
