On 12/23/2017 05:25, O. Hartmann wrote:
> On Thu, 14 Dec 2017 12:05:20 +0100
> Willem Jan Withagen <wjw_at_digiware.nl> wrote:
>
>> On 13/12/2017 17:47, Rodney W. Grimes wrote:
>>>> On Tue, 12 Dec 2017 14:58:28 -0800
>>>> Cy Schubert <Cy.Schubert_at_komquats.com> wrote:
>>>> I think people responding to my thread made it clear that the WD Green
>>>> isn't the first-choice solution for a 20/6 (not 24/7) duty drive, and
>>>> given that they have now been in service for more than 25,000 hours, it
>>>> would be wise to replace them with alternatives.
>>> I think someone had an apm command that turns off the head parking;
>>> that would do wonders for drive life. On the other hand, if it were my
>>> data and I saw that a drive had 2M head load cycles, I would be looking
>>> to get any data I could not easily replace off that drive. If it was
>>> well backed up or easily replaced, my worries would be less.
>> WD made their first series of Green disks "green" by aggressively putting
>> them into a sleep state: after a few seconds without activity they would
>> park the heads, spin the platters down, and put the disk to sleep.
>> The next access then had to undo that whole series of commands.
>>
>> This could be reset by writing to one of the disk's registers. I remember
>> doing that for my 1.5TB WDs (WD15EADS from 2009). That saved a lot of
>> start-ups. I still have them around, but only use them for things that
>> are not valuable at all. Some have died over time, but about half of them
>> still seem to work without much trouble.
>>
>> WD used to have a .exe program to actually do this, but it did not work
>> on later disks, and turning the behaviour off on those disks was
>> impossible or a lot more complex.
>>
>> This type of disk worked quite a long time in my ZFS setup, a few years,
>> but I turned parking off as soon as there was a lot of turmoil about
>> this in the community.
>> Now I use WD Reds for small ZFS systems and WD Red Pros for large
>> private storage servers. Professional servers get HGST He disks, a bit
>> more expensive, but with very little fallout.
>>
>> --WjW
>
> Hello fellows.
>
> First of all, over the past week-plus I managed to replace all(!) drives
> with new ones. This time I decided to use HGST 4TB Deskstar NAS drives
> (HGST HDN726040ALE614) instead of the WD RED 4TB (WDC WD40EFRX-68N32N0).
> The one remaining WD RED is about to be replaced in the next few days.
>
> Everything ran smoothly, except for the very long resilvering times: the
> first drive, the Western Digital WD RED 4TB with 64MB cache and 5400 rpm,
> took 11 h, while the HGST drives, although considered faster (7200 rpm,
> 128 MB cache), each took 15 - 16 h.
>
> A very interesting point in this story: as you could see, the WD Caviar
> Green 3TB drives suffered from a high "193 Load_Cycle_Count", almost 85
> per hour. When replacing the drives, I discovered that one of the four
> was already a Western Digital RED 3TB NAS drive, but investigating its
> "193 Load_Cycle_Count" revealed that this drive also had an unusually
> high load count; see the "smartctl -x" output attached. It seems, as you
> already stated, that the APM feature responsible for this isn't
> available. The drive was purchased in Q4/2013.
>
> The HGST drives are very(!) noisy; the head movement induces a notable
> ringing, while the WD drive(s) are/were really silent. The power
> consumption of the HGST drives is higher.
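
For anyone hitting the same load-cycle problem: on FreeBSD the parking
behaviour can usually be checked, and on drives that actually expose APM
also tamed, from the command line. A minimal sketch; the device name ada0
is only an example, and the older WD Greens park on a vendor idle timer
(what the .exe tool mentioned above adjusted) rather than via APM, so this
only helps drives that honour APM:

    # Show the load-cycle counter and whether the drive reports an APM level
    smartctl -x /dev/ada0 | grep -E 'Load_Cycle_Count|APM'

    # Request maximum-performance power management (level 254), which on
    # most drives stops the aggressive idle head parking; note that some
    # firmware forgets the setting after a power cycle
    camcontrol apm ada0 -l 254

Recent smartmontools can set the same level with "smartctl -s apm,254"
where the drive supports it.
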
> But apart from that, I'm disappointed that WD has also implemented this
> "timebomb" Load_Cycle_Count behaviour.
>
> Thanks a lot for your help and considerations!
>
> Kind regards,
> Oliver

I have a fairly large number of HGST "NAS" drives in service across multiple
locations (several dozen units total). I don't like their 5TB models much
(they're comparatively slow for an unknown reason), but the 4TB and 6TB
models I have in the field, while noisy and somewhat more power-hungry (the
latter comes from the 7200 rpm spindle speed), have yet to suffer a failure.

--
Karl Denninger
karl_at_denninger.net <mailto:karl_at_denninger.net>
/The Market Ticker/
/[S/MIME encrypted email preferred]/
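
On the resilvering side, swapping the disks one at a time is the usual
routine; a minimal sketch with a made-up pool name ("tank") and device
names, not Oliver's actual layout:

    # Replace the old disk with the new one and let ZFS resilver onto it
    zpool replace tank ada3 ada6

    # Watch progress; move on to the next disk only once this reports the
    # resilver as finished and the pool is back to ONLINE
    zpool status -v tank

Resilver time is dominated by how much allocated data has to be rewritten
and by random-seek behaviour rather than by spindle speed alone, so a
7200 rpm drive is not guaranteed to finish faster than a 5400 rpm one.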