OK. The system has just been rebooted with your patch, and TRIM is enabled again. I will wait some time until the amount of used cache grows.

Steven Hartland wrote:
SH> Looking at the l2arc compression code I believe that metadata is always
SH> compressed with lz4, even if compression is off on all datasets.
SH>
SH> This is backed up by what I'm seeing on my system here, as it shows a
SH> non-zero l2_compress_successes value even though I'm not using
SH> compression at all.
SH>
SH> I think we may well need the following patch to set the minimum block
SH> size based on the vdev ashift and not SPA_MINBLOCKSIZE.
SH>
SH> svn diff -x -p sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
SH> Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
SH> ===================================================================
SH> --- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c   (revision 256554)
SH> +++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c   (working copy)
SH> @@ -5147,7 +5147,7 @@ l2arc_compress_buf(l2arc_buf_hdr_t *l2hdr)
SH>  	len = l2hdr->b_asize;
SH>  	cdata = zio_data_buf_alloc(len);
SH>  	csize = zio_compress_data(ZIO_COMPRESS_LZ4, l2hdr->b_tmp_cdata,
SH> -	    cdata, l2hdr->b_asize, (size_t)SPA_MINBLOCKSIZE);
SH> +	    cdata, l2hdr->b_asize, (size_t)(1ULL << l2hdr->b_dev->l2ad_vdev->vdev_ashift));
SH>
SH>  	if (csize == 0) {
SH>  		/* zero block, indicate that there's nothing to write */
SH>
SH> Could you try this patch on your system, Vitalij, and see if it has any effect
SH> on the number of l2_cksum_bad / l2_io_error?
SH>
SH> Regards
SH> Steve
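As an aside to the patch above (an illustrative sketch only, not part of Steve's change): the one-liner swaps the fixed 512-byte minimum block size (SPA_MINBLOCKSIZE) for one derived from the cache vdev's ashift. Assuming, as the patch implies, that zio_compress_data() pads the compressed length up to the minimum block size it is given, the difference on a 4K-sector (ashift=12) SSD looks like this; the roundup helper, the ashift value and the 3000-byte compressed length are made-up examples:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define SPA_MINBLOCKSIZE	512	/* fixed minimum used by the unpatched code */

/* Round size up to a multiple of align (align must be a power of two). */
static uint64_t
roundup_pow2(uint64_t size, uint64_t align)
{
	return ((size + align - 1) & ~(align - 1));
}

int
main(void)
{
	uint64_t ashift = 12;			/* example: 4K-sector SSD cache vdev */
	uint64_t min_new = 1ULL << ashift;	/* what the patched call passes */
	uint64_t csize = 3000;			/* example compressed length */

	printf("old: %" PRIu64 " -> padded to %" PRIu64 " (multiple of %d)\n",
	    csize, roundup_pow2(csize, SPA_MINBLOCKSIZE), SPA_MINBLOCKSIZE);
	printf("new: %" PRIu64 " -> padded to %" PRIu64 " (multiple of 1 << ashift)\n",
	    csize, roundup_pow2(csize, min_new));
	return (0);
}

A 512-byte-aligned length such as 3072 is not a whole number of 4K sectors, which is one plausible way to end up with the l2_cksum_bad / l2_io_error counters discussed in this thread; aligning to 1 << ashift avoids that.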
SH> ----- Original Message -----
SH> From: "Vitalij Satanivskij" <satan_at_ukr.net>
SH> To: "Steven Hartland" <killing_at_multiplay.co.uk>
SH> Cc: "Vitalij Satanivskij" <satan_at_ukr.net>; "Dmitriy Makarov" <supportme_at_ukr.net>; "Justin T. Gibbs" <gibbs_at_FreeBSD.org>;
SH>     "Borja Marcos" <borjam_at_sarenet.es>; <freebsd-current_at_freebsd.org>
SH> Sent: Friday, October 18, 2013 3:45 PM
SH> Subject: Re: ZFS secondarycache on SSD problem on r255173
SH>
SH> > The stats right now are not quite accurate because of another test.
SH> >
SH> > The test is simple: all gpart information was destroyed on the SSDs and
SH> > they are now used as raw cache devices. Just
SH> > 2013-10-18.11:30:49 zpool add disk1 cache /dev/ada1 /dev/ada2 /dev/ada3
SH> >
SH> > So the sizes, at least l2_size and l2_asize, are not current.
SH> >
SH> > But here they are:
SH> >
SH> > kstat.zfs.misc.arcstats.hits: 5178174063
SH> > kstat.zfs.misc.arcstats.misses: 57690806
SH> > kstat.zfs.misc.arcstats.demand_data_hits: 313995744
SH> > kstat.zfs.misc.arcstats.demand_data_misses: 37414740
SH> > kstat.zfs.misc.arcstats.demand_metadata_hits: 4719242892
SH> > kstat.zfs.misc.arcstats.demand_metadata_misses: 9266394
SH> > kstat.zfs.misc.arcstats.prefetch_data_hits: 1182495
SH> > kstat.zfs.misc.arcstats.prefetch_data_misses: 9951733
SH> > kstat.zfs.misc.arcstats.prefetch_metadata_hits: 143752935
SH> > kstat.zfs.misc.arcstats.prefetch_metadata_misses: 1057939
SH> > kstat.zfs.misc.arcstats.mru_hits: 118609738
SH> > kstat.zfs.misc.arcstats.mru_ghost_hits: 1895486
SH> > kstat.zfs.misc.arcstats.mfu_hits: 4914673425
SH> > kstat.zfs.misc.arcstats.mfu_ghost_hits: 14537497
SH> > kstat.zfs.misc.arcstats.allocated: 103796455
SH> > kstat.zfs.misc.arcstats.deleted: 40168100
SH> > kstat.zfs.misc.arcstats.stolen: 20832742
SH> > kstat.zfs.misc.arcstats.recycle_miss: 15663428
SH> > kstat.zfs.misc.arcstats.mutex_miss: 1456781
SH> > kstat.zfs.misc.arcstats.evict_skip: 25960184
SH> > kstat.zfs.misc.arcstats.evict_l2_cached: 891379153920
SH> > kstat.zfs.misc.arcstats.evict_l2_eligible: 50578438144
SH> > kstat.zfs.misc.arcstats.evict_l2_ineligible: 956055729664
SH> > kstat.zfs.misc.arcstats.hash_elements: 8693451
SH> > kstat.zfs.misc.arcstats.hash_elements_max: 14369414
SH> > kstat.zfs.misc.arcstats.hash_collisions: 90967764
SH> > kstat.zfs.misc.arcstats.hash_chains: 1891463
SH> > kstat.zfs.misc.arcstats.hash_chain_max: 24
SH> > kstat.zfs.misc.arcstats.p: 73170954752
SH> > kstat.zfs.misc.arcstats.c: 85899345920
SH> > kstat.zfs.misc.arcstats.c_min: 42949672960
SH> > kstat.zfs.misc.arcstats.c_max: 85899345920
SH> > kstat.zfs.misc.arcstats.size: 85899263104
SH> > kstat.zfs.misc.arcstats.hdr_size: 1425948696
SH> > kstat.zfs.misc.arcstats.data_size: 77769994240
SH> > kstat.zfs.misc.arcstats.other_size: 6056233632
SH> > kstat.zfs.misc.arcstats.l2_hits: 21725934
SH> > kstat.zfs.misc.arcstats.l2_misses: 35876251
SH> > kstat.zfs.misc.arcstats.l2_feeds: 130197
SH> > kstat.zfs.misc.arcstats.l2_rw_clash: 110181
SH> > kstat.zfs.misc.arcstats.l2_read_bytes: 391282009600
SH> > kstat.zfs.misc.arcstats.l2_write_bytes: 1098703347712
SH> > kstat.zfs.misc.arcstats.l2_writes_sent: 130037
SH> > kstat.zfs.misc.arcstats.l2_writes_done: 130037
SH> > kstat.zfs.misc.arcstats.l2_writes_error: 0
SH> > kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 375921
SH> > kstat.zfs.misc.arcstats.l2_evict_lock_retry: 331
SH> > kstat.zfs.misc.arcstats.l2_evict_reading: 43
SH> > kstat.zfs.misc.arcstats.l2_free_on_write: 255730
SH> > kstat.zfs.misc.arcstats.l2_abort_lowmem: 0
SH> > kstat.zfs.misc.arcstats.l2_cksum_bad: 854359
SH> > kstat.zfs.misc.arcstats.l2_io_error: 38254
SH> > kstat.zfs.misc.arcstats.l2_size: 136696884736
SH> > kstat.zfs.misc.arcstats.l2_asize: 131427690496
SH> > kstat.zfs.misc.arcstats.l2_hdr_size: 742951208
SH> > kstat.zfs.misc.arcstats.l2_compress_successes: 5565311
SH> > kstat.zfs.misc.arcstats.l2_compress_zeros: 0
SH> > kstat.zfs.misc.arcstats.l2_compress_failures: 0
SH> > kstat.zfs.misc.arcstats.l2_write_trylock_fail: 325157131
SH> > kstat.zfs.misc.arcstats.l2_write_passed_headroom: 4897854
SH> > kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 115704249
SH> > kstat.zfs.misc.arcstats.l2_write_in_l2: 15114214372
SH> > kstat.zfs.misc.arcstats.l2_write_io_in_progress: 63417
SH> > kstat.zfs.misc.arcstats.l2_write_not_cacheable: 3291593934
SH> > kstat.zfs.misc.arcstats.l2_write_full: 47672
SH> > kstat.zfs.misc.arcstats.l2_write_buffer_iter: 130197
SH> > kstat.zfs.misc.arcstats.l2_write_pios: 130037
SH> > kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 369077156457472
SH> > kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 8015080
SH> > kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 79825
SH> > kstat.zfs.misc.arcstats.memory_throttle_count: 0
SH> > kstat.zfs.misc.arcstats.duplicate_buffers: 0
SH> > kstat.zfs.misc.arcstats.duplicate_buffers_size: 0
SH> > kstat.zfs.misc.arcstats.duplicate_reads: 0
SH> >
SH> > The values of
SH> > ---------------------------------
SH> > kstat.zfs.misc.arcstats.l2_cksum_bad: 854359
SH> > kstat.zfs.misc.arcstats.l2_io_error: 38254
SH> > ---------------------------------
SH> > have not grown since the last cache reconfiguration; I will just wait some time to see - maybe the problem disappears :)
SH> >
SH> > Steven Hartland wrote:
SH> > SH> Hmm, so that rules out a TRIM related issue. I wonder if the
SH> > SH> increase in ashift has triggered a problem in compression.
SH> > SH>
SH> > SH> What are all the values reported by:
SH> > SH> sysctl -a kstat.zfs.misc.arcstats
SH> > SH>
SH> > SH> Regards
SH> > SH> Steve
SH> > SH>
SH> > SH> ----- Original Message -----
SH> > SH> From: "Vitalij Satanivskij" <satan_at_ukr.net>
SH> > SH> To: "Steven Hartland" <killing_at_multiplay.co.uk>
SH> > SH> Cc: <satan_at_ukr.net>; "Justin T. Gibbs" <gibbs_at_FreeBSD.org>; <freebsd-current_at_freebsd.org>; "Borja Marcos" <borjam_at_sarenet.es>;
SH> > SH>     "Dmitriy Makarov" <supportme_at_ukr.net>
SH> > SH> Sent: Friday, October 18, 2013 9:01 AM
SH> > SH> Subject: Re: ZFS secondarycache on SSD problem on r255173
SH> > SH>
SH> > SH> > Hello.
SH> > SH> >
SH> > SH> > Yesterday the system was rebooted with vfs.zfs.trim.enabled=0.
SH> > SH> >
SH> > SH> > System version: 10.0-BETA1 FreeBSD 10.0-BETA1 #6 r256669, without any changes in the code.
SH> > SH> >
SH> > SH> > Uptime: 10:51 up 16:41
SH> > SH> >
SH> > SH> > sysctl vfs.zfs.trim.enabled
SH> > SH> > vfs.zfs.trim.enabled: 0
SH> > SH> >
SH> > SH> > Around 2 hours ago the error counters
SH> > SH> > kstat.zfs.misc.arcstats.l2_cksum_bad: 854359
SH> > SH> > kstat.zfs.misc.arcstats.l2_io_error: 38254
SH> > SH> > began to grow from zero.
SH> > SH> >
SH> > SH> > After removing the cache
SH> > SH> > 2013-10-18.10:37:10 zpool remove disk1 gpt/cache0 gpt/cache1 gpt/cache2
SH> > SH> > and attaching it again
SH> > SH> > 2013-10-18.10:38:28 zpool add disk1 cache gpt/cache0 gpt/cache1 gpt/cache2
SH> > SH> > the counters stopped growing (of course they are not zeroed).
SH> > SH> >
SH> > SH> > Before the cache removal, kstat.zfs.misc.arcstats.l2_asize was around 280GB.
SH> > SH> >
SH> > SH> > The hardware size of the L2 cache is 3x164G:
SH> > SH> >
SH> > SH> > =>        34  351651821  ada3  GPT  (168G)
SH> > SH> >           34          6        - free -  (3.0K)
SH> > SH> >           40    8388608     1  zil2  (4.0G)
SH> > SH> >      8388648  343263200     2  cache2  (164G)
SH> > SH> >    351651848          7        - free -  (3.5K)
SH> > SH> >
SH> > SH> > Any hypothesis about what else we can test/try, etc.?
SH> > SH> >
SH> > SH> > Steven Hartland wrote:
SH> > SH> > SH> Correct.
SH> > SH> > SH> ----- Original Message -----
SH> > SH> > SH> From: "Vitalij Satanivskij" <satan_at_ukr.net>
SH> > SH> > SH>
SH> > SH> > SH> > Just to be sure I understand you clearly, I need to test the following configuration:
SH> > SH> > SH> > 1) A system with the ashift patch, e.g. just the latest stable/10 revision
SH> > SH> > SH> > 2) vfs.zfs.trim.enabled=0 in /boot/loader.conf
SH> > SH> > SH> >
SH> > SH> > SH> > So really the only difference from the default system configuration is the disabled TRIM functionality?
SH> > SH> > SH> >
SH> > SH> > SH> > Steven Hartland wrote:
SH> > SH> > SH> > SH> Still worth testing with the problem version installed but
SH> > SH> > SH> > SH> with trim disabled to see if that clears the issues; if
SH> > SH> > SH> > SH> nothing else it will confirm / deny whether trim is involved.
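For anyone reproducing this, here is a small sketch (not from the thread) that polls just the two counters everyone is watching, using sysctlbyname(3) rather than scanning the full `sysctl -a kstat.zfs.misc.arcstats` output; the counter names are taken from the arcstats listing quoted above:

#include <sys/types.h>
#include <sys/sysctl.h>

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Read a single 64-bit arcstats counter by name via sysctlbyname(3).
 * Exits if the sysctl does not exist (e.g. on a kernel without these stats).
 */
static uint64_t
read_counter(const char *name)
{
	uint64_t val;
	size_t len = sizeof(val);

	if (sysctlbyname(name, &val, &len, NULL, 0) != 0) {
		perror(name);
		exit(1);
	}
	return (val);
}

int
main(void)
{
	printf("l2_cksum_bad: %" PRIu64 "\n",
	    read_counter("kstat.zfs.misc.arcstats.l2_cksum_bad"));
	printf("l2_io_error:  %" PRIu64 "\n",
	    read_counter("kstat.zfs.misc.arcstats.l2_io_error"));
	return (0);
}

Compile with cc -o l2err l2err.c and run it before and after applying the patch (or toggling vfs.zfs.trim.enabled) to see whether the counters keep growing.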