Re: Uneven load on drives in ZFS RAIDZ1

From: Stefan Esser <se_at_freebsd.org>
Date: Mon, 19 Dec 2011 22:34:35 +0100
On 19.12.2011 22:07, Daniel Kalchev wrote:
> On Dec 19, 2011, at 11:00 PM, Stefan Esser wrote:
>> Well, I had dedup enabled for a few short tests. But since I have
>> "only" 8GB of RAM and dedup seems to require an order of magnitude
>> more to work well, I switched dedup off again after a few hours.
> 
> You will need to get rid of the DDT, as the tables are still read
> even with dedup disabled. They refer to the data that was already
> deduped.

Thanks for the hint!

Is there an easy way to identify the file systems that ever had dedup
enabled? (I don't mind extracting the information from zdb output, in
case that is the tool of choice.)
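
As far as I can tell from the man pages, the closest I can get is
something like the following (the pool name "tank" is just a
placeholder; note that "zfs get" only shows the *current* dedup
setting, not whether it was ever enabled, while the DDT statistics
are pool-wide rather than per-filesystem):

    # current dedup setting of every dataset in the pool
    zfs get -r -o name,value,source dedup tank

    # pool-wide DDT summary; a non-empty table means deduped
    # blocks still exist somewhere in the pool
    zpool status -D tank

    # more detail: the full DDT histogram
    zdb -DD tank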

I seem to remember that I tried it with my /usr/svn (which obviously had
lots of duplicated files), but I do not remember on which other file
systems I tried it ... (I've created some 20-25 filesystems on this pool.)

> In my case, I had about 2-3TB of deduped data, with 24GB RAM. There
> was no shortage of RAM and I could not confirm that the ARC was
> full... but somehow the pool was placing a heavy read load on only
> one or two disks (all others nearly idle) -- apparently many small
> reads.
>
> I resolved my issue by copying the data to a newly created
> filesystem in the same pool (luckily there was enough space
> available), then removing the 'deduped' filesystems.

This should be easy in the case of /usr/svn, thanks for the suggestion!
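
In case it is useful to anyone else hitting this, my plan is roughly
the following (untested, and the dataset names just reflect my local
layout):

    # create a fresh dataset with dedup explicitly off
    zfs create -o dedup=off tank/svn-new

    # copy the data over; a file-level copy rewrites all blocks
    # and thus drops the references into the DDT
    (cd /usr/svn && tar cf - .) | (cd /tank/svn-new && tar xpf -)

    # replace the old (deduped) dataset with the copy
    zfs destroy tank/usr-svn
    zfs rename tank/svn-new tank/usr-svn
    zfs set mountpoint=/usr/svn tank/usr-svn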

> That last operation was particularly slow and at one point I had a
> spontaneous reboot -- the pool was 'impossible to mount', and as
> weird as it sounds, I had 'out of swap space' killing the 'zpool
> list' process.
> I let it sit for a few hours, until it had cleared itself.
> 
> I/O in that pool is back to normal now.
> 
> There is something terribly wrong with the dedup code.
> 
> Well, if your test data is not valuable, you can just delete it. :)

I could also start over with a clean SVN check-out, but since I've got
the free disk space to copy the data over, I'll try that first.

Thanks again and best regards, Stefan