Re: SMART: disk problems on RAIDZ1 pool: (ada6:ahcich6:0:0:0): CAM status: ATA Status Error

From: O. Hartmann <ohartmann_at_walstatt.org>
Date: Sat, 23 Dec 2017 12:25:41 +0100
On Thu, 14 Dec 2017 12:05:20 +0100,
Willem Jan Withagen <wjw_at_digiware.nl> wrote:

> On 13/12/2017 17:47, Rodney W. Grimes wrote:
> >> On Tue, 12 Dec 2017 14:58:28 -0800
> >> Cy Schubert <Cy.Schubert_at_komquats.com> wrote:
> >> I think people responding to my thread made it clear that the WD Green
> >> isn't the first-choice solution for a 20/6 (not 24/7) duty drive, and
> >> given that they have now served more than 25000 hours, it would
> >> be wise to replace them with alternatives.  
> > 
> > I think someone had an apm command that turns off the head park,
> > that would do wonders for drive life.   On the other hand, I think
> > if it was my data and I saw that the drive had 2M head load cycles
> > I would be looking to get any data I could not easily replace
> > off that drive.  If it was well backed up or easily replaced
> > my worries would be less.  
> 
> WD made their first series of Green disks green by aggressively putting 
> them into a sleep state. When there was no activity for a few seconds they 
> would park the heads, spin the platters down, and put the disk to sleep...
> The next access would then have to undo that whole series of commands.
> 
> This could be reset by writing to one of the disk's registers. I remember 
> doing that for my 1.5 TB WDs (WD15EADS from 2009). That saved a lot of 
> start/stop cycles. I still have them around, but only use them for things 
> that are not valuable at all. Some have died over time, but about half of 
> them still seem to work without much trouble.
> 
> WD used to have a .exe program to actually do this. But that did not
> work on later disks, and turning these features off on those disks was 
> impossible, or at least a lot more complex.
> 
> This type of disk worked quite a long time in my ZFS setup, a few 
> years, but I turned parking off as soon as there was a lot of turmoil 
> about this in the community.
> Now I use WD Reds for small ZFS systems, and WD Red Pros for large 
> private storage servers. Professional servers get HGST He disks, a bit 
> more expensive, but with very few failures.
> 
> --WjW

Hello fellows.

First of all, over the past week or so I managed to replace all(!) drives with new ones. This
time I decided to use HGST 4TB Deskstar NAS drives (HGST HDN726040ALE614) instead of WD RED
4TB (WDC WD40EFRX-68N32N0). The one remaining WD RED is about to be replaced in the next few
days.

Apart from the very long resilvering times (the first drive, the Western Digital WD RED
4TB with 64 MB cache and 5400 rpm, took 11 h, while the HGST drives, although considered
faster at 7200 rpm with 128 MB cache, each took 15 - 16 h), everything ran smoothly -
except, as mentioned, for the exorbitant recovery times.
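
For completeness, each swap was just the usual ZFS replace-and-resilver cycle; a rough
sketch of what I ran for each disk (the pool name "tank" and device name ada6 here are
only placeholders for my actual pool and devices):

  # swap the old disk for the new one in the RAIDZ1 vdev and start resilvering
  # (the new disk sits at the same device node as the old one)
  zpool replace tank ada6
  # watch the resilver progress and the estimated time remaining
  zpool status -v tank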

A very interesting point in this story: as you could see, the WD Caviar Green 3TB
drives suffered from a high "193 Load_Cycle_Count" - almost 85 per hour. When replacing
the drives, I figured out that one of the four drives was already a Western Digital RED
3TB NAS drive, but investigating its "193 Load_Cycle_Count" revealed that this drive
also had an unusually high reload count - see the "smartctl -x" output attached. It seems,
as you already stated, that the APM feature responsible for this isn't available. The drive
was purchased in Q4/2013.
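
For anyone who wants to check their own drives: the reload rate can be read directly from
the SMART attributes, and on drives that actually honour APM the parking can usually be
tamed via camcontrol. A rough sketch (the device name is a placeholder; on this RED the
APM feature is apparently not available, so the last step did not help here):

  # attributes 9 (Power_On_Hours) and 193 (Load_Cycle_Count) give the cycles-per-hour rate
  smartctl -A /dev/ada6 | grep -E 'Power_On_Hours|Load_Cycle_Count'
  # check whether the drive advertises APM at all
  camcontrol identify ada6 | grep -i 'advanced power management'
  # if it does, APM level 254 should stop the aggressive head parking
  camcontrol apm ada6 -l 254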

The HGST drives are very(!) noisy, the head movement induces a notable ringing, while the
WD drive(s) are/were really silent. The power consumption of the HGST drives is also higher.
But apart from that, I'm disappointed that WD has also implemented this
"timebomb" Load_Cycle_Count behaviour.

Thanks a lot for your help and considerations!

Kind regards,
Oliver

-- 
O. Hartmann

I object to the use or transfer of my data for advertising purposes or for
market or opinion research (§ 28 Abs. 4 BDSG).
