Re: ZFS L2ARC - incorrect size and abnormal system load on r255173

From: Vitalij Satanivskij <satan_at_ukr.net>
Date: Thu, 10 Oct 2013 12:22:23 +0300
The same situation happened again yesterday :(

What confuses me is that I'm trying to understand where I'm wrong.


First, some info.

We have a ZFS pool "POOL" and one more dataset on it, "POOL/zfs".

POOL - has only primarycache enabled ("all")
POOL/zfs - has both primarycache and secondarycache set to "all"

POOL has compression=lz4

POOL/zfs has compression disabled


POOL - holds around 9TB of data

POOL/zfs - holds 1TB

The secondary cache is configured as:

        cache
          gpt/cache0    ONLINE       0     0     0
          gpt/cache1    ONLINE       0     0     0
          gpt/cache2    ONLINE       0     0     0

gpt/cache0-2 are Intel SSDs (SSDSC2BW180A4, 180GB each)

So the full raw size for L2 is 540GB (really 489GB usable)

First question - will data on the L2ARC be compressed or not?

Second, in the stats we see:

L2 ARC Size: (Adaptive)                         2.08    TiB

earlier it was 1.1, 1.4, ...

So: a) how can the cache be bigger than the ZFS dataset itself?
    b) if it's not compressed (depending on the answer to the first question), how can it be bigger than the real SSD size?
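To put numbers on both questions, here is a rough sketch using the l2_size/l2_asize counters quoted further down this thread. It assumes (as I understand the counters) that l2_size is logical, uncompressed bytes and l2_asize is bytes actually allocated on the cache devices - if that reading is wrong, so is the arithmetic:

```shell
# Sketch only: figures are the counters quoted later in this thread.
l2_size=1256609410560          # kstat.zfs.misc.arcstats.l2_size
l2_asize=1149007667712         # kstat.zfs.misc.arcstats.l2_asize
capacity=$((489 * 1024 * 1024 * 1024))   # ~489GB usable across 3 SSDs

# Effective L2ARC compression ratio, scaled by 100 for integer math.
ratio_x100=$((l2_size * 100 / l2_asize))
echo "compression ratio x100: $ratio_x100"   # prints 109, i.e. about 1.09x

# The allocated size should never exceed the physical devices; here it
# does, which points at an accounting problem rather than at compression.
if [ "$l2_asize" -gt "$capacity" ]; then
    echo "l2_asize exceeds device capacity by $((l2_asize - capacity)) bytes"
fi
```

Even with lz4, a ~1.09x effective ratio cannot explain an adaptive size of 2.08 TiB reported against roughly half a terabyte of SSD.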


One more comment: when the L2ARC size grows above the physical size, I see the following stats:

kstat.zfs.misc.arcstats.l2_cksum_bad: 50907344
kstat.zfs.misc.arcstats.l2_io_error: 4547377

and they keep growing.


The system is r255173 with the patch from rr255173.


Finally, maybe somebody has ideas about what is really happening...





Vitalij Satanivskij wrote:
VS> 
VS> One more question -
VS> 
VS> we have two counters -
VS> 
VS> kstat.zfs.misc.arcstats.l2_size: 1256609410560
VS> kstat.zfs.misc.arcstats.l2_asize: 1149007667712
VS> 
VS> can anybody explain how to interpret them, i.e. is l2_asize the real space used on the L2ARC and l2_size the uncompressed size,
VS> 
VS> or maybe something else?
VS> 
VS> 
VS> 
VS> Vitalij Satanivskij wrote:
VS> VS> 
VS> VS> Data on the pool has a compressratio of around 1.4.
VS> VS> 
VS> VS> On different servers with the same data type and load, L2 ARC Size: (Adaptive) can differ,
VS> VS> 
VS> VS> for example 1.04 TiB vs 1.45 TiB.
VS> VS> 
VS> VS> But they all have the same problem - they grow over time.
VS> VS> 
VS> VS> 
VS> VS> Stranger still for us -
VS> VS> 
VS> VS> ARC: 80G Total, 4412M MFU, 5040M MRU, 76M Anon, 78G Header, 2195M Other
VS> VS> 
VS> VS> 78G of header size, and abnormal counters -
VS> VS> 
VS> VS> kstat.zfs.misc.arcstats.l2_cksum_bad: 210920592
VS> VS> kstat.zfs.misc.arcstats.l2_io_error: 7362414
VS> VS> 
VS> VS> The sysctls grow every second.
VS> VS> 
VS> VS> All parts of the server (hardware-wise) are in a normal state.
VS> VS> 
VS> VS> After a reboot there are no problems for some period, until the cache size grows to some limit.
VS> VS> 
VS> VS> 
VS> VS> 
VS> VS> Mark Felder wrote:
VS> VS> MF> On Mon, Oct 7, 2013, at 13:09, Dmitriy Makarov wrote:
VS> VS> MF> > 
VS> VS> MF> > How can L2 ARC Size: (Adaptive) be 1.44 TiB (up) with total physical size
VS> VS> MF> > of L2ARC devices 490GB?
VS> VS> MF> > 
VS> VS> MF> 
VS> VS> MF> http://svnweb.freebsd.org/base?view=revision&revision=251478
VS> VS> MF> 
VS> VS> MF> L2ARC compression perhaps?
VS> VS> MF> _______________________________________________
VS> VS> MF> freebsd-current_at_freebsd.org mailing list
VS> VS> MF> http://lists.freebsd.org/mailman/listinfo/freebsd-current
VS> VS> MF> To unsubscribe, send any mail to "freebsd-current-unsubscribe_at_freebsd.org"
Received on Thu Oct 10 2013 - 07:22:27 UTC

This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:40:42 UTC