At least they aren’t paving stones
I had to send a Barracuda drive back recently.
“It’s fine,” said Seagate’s SMART analysis tools.
“Clunk clunk clunk clonk,” said the HDD.
I know which of those results I trusted more.
Barracudas are SMR garbage nowadays; they’re coasting on the reputation they earned many years ago, when they were actually decent hard drives for the price.
I only wanted it for a Jellyfin drive. The one thing it could have been useful for and it even failed at that.
Seems like you need to pay extra for an IronWolf drive to get an actual “just like the good old days” HDD.
Yes, buy NAS or enterprise drives for a NAS, don’t buy consumer drives.
…when they were actually decent hard drives for the price.
And had a five-year warranty.
Exos drives are typically still good, though, if you’re going for bulk storage.
Just don’t buy Seagate. Their drives consistently have the highest annualized failure rate in Backblaze’s reports ( https://www.backblaze.com/blog/wp-content/uploads/2024/05/6-AFR-by-Manufacturer.png ), which is consistent with my experience with a small, anecdotal sample of roughly 30 drives. A failed drive also creates a ripple effect: replacing it triggers an array rebuild, which adds work to the other drives and increases their risk of failing, too.
If you look at the data, Seagate drives are also some of their oldest and most heavily used. Likewise, they have almost no WD drives, yet that’s what you recommend below.
I’m not saying you should or should not buy Seagate drives; I’m just saying that’s not what you should be taking away from that data. What it seems to say is that Seagate drives are more likely to fail early, and if they don’t, they’ll likely last a while, even in a use case like Backblaze’s. Some capacities should also be avoided.
That said, I don’t think this data is applicable to an average home user. If you’re running a NAS 24/7, maybe, but if you’re looking for a single desktop drive (especially if it’s solid state), it’s useless to you because you won’t be buying those models (though failure rates by capacity may still apply, since they likely use the same platters).
AFR is a percentage: 1 failure from a pool of 10 drives is 10%; 5 failures from 100 is 5%. So regarding your point that they don’t have many WD drives: with a small WD pool, each failure weighs even more heavily on the chart, which makes the data even more impactful, not less. The data also shows the average across all manufacturers, and you can clearly see Seagate sitting above that average quarter over quarter. The failure rate is annualized, so drive age is also factored in.
When there’s a clear trend of a higher failure rate represented as a percentage, I’m not going to volunteer my data, NAS or otherwise, as tribute to brand loyalty for a manufacturer that’s gone downhill from the decades past.
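For what it’s worth, the annualized rate normalizes by time in service, not just drive count, so pools of different sizes and ages are comparable. Backblaze publishes the formula as AFR = failures / drive-days × 365 × 100; a quick sketch:

```python
# Backblaze's annualized failure rate formula:
#   AFR (%) = (failures / drive_days) * 365 * 100
# drive_days is the sum of days each drive in the pool was in
# service, which is what makes fleets of different sizes comparable.

def afr_percent(failures, drive_days):
    return failures / drive_days * 365 * 100

# 1 failure across 10 drives running a full year:
print(afr_percent(1, 10 * 365))    # 10.0
# 5 failures across 100 drives running a full year:
print(afr_percent(5, 100 * 365))   # 5.0
```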
The failure rate is annualized, so drive age is also factored in.
Sort of. If we’re mostly seeing failures during the first year or two alongside a high average age, that means their QC is terrible, but that’s something a consumer can work around by burning in drives. If the average age is lower, drives are probably failing further into their life, which means a burn-in likely won’t detect the worst of it.
If Seagate were so unreliable, why would Backblaze be using so many of them? They used to use cheap consumer drives in the past, but if you look at the drives they have in service now, they’re pretty much all enterprise-class drives, so it’s not like they’re abusing customer warranties or anything.
Here’s a survey of IT pros from 2019, which gives Seagate the award for every single category for Enterprise HDDs:
While the top two companies of Enterprise HDDs were close in all categories, Seagate has proven itself a leader by being voted Market for the seventh year in a row; also picking up titles for Price, Performance, Reliability, Innovation, and Service and Support, sweeping the board for a two-year streak. Western Digital came in second for all categories trailed by Toshiba.
Backblaze places Toshiba as first for reliability, whereas this survey put them third.
Why the discrepancy? Idk, but there’s a good chance Backblaze is doing something wonky in their reporting, or they have significantly different environmental factors in their datacenters or something than average. Or maybe they’re not burning in their drives (or counting those as failures) and other IT pros are (and not counting those as failures). Maybe their goal is to reduce demand so they can get the drives cheaper. I really don’t know.
I’m not going to tell you what you should buy. I personally have WD drives in my NAS because I got a decent price for them years ago, but I wouldn’t hesitate to put Seagate drives in there either. Regardless, I’m going to test the drives when I get them.
A bit less than 20 years ago a new PC arrived in our home, and some of the letters on the drive inside it said “Seagate Barracuda”. And that drive lasted longer than the motherboard in that box (and the CPU’s integrated graphics started gradually failing a few years before that, so I was using a cheap discrete card).
Point is, I have good associations with the brand, sad that it’s become this bad.
Back when SSDs were prohibitively expensive for poor student me, Seagate came up with the Momentus XT; I don’t know if it was the first hybrid HDD/SSD, but it was my first foray into flash storage. I had the earlier version, whose controller was set up such that, should the flash memory die, I’d still have access to the HDD.
It, was, glorious…
I hear you. The brand is really not what we remember it to be.
What do you recommend instead?
I had bad experiences with Seagate between 2002 and 2009. Multiple, sudden, premature drive failures under ideal operating conditions. I haven’t bought a Seagate drive in over 10 years.
WD enterprise-grade hardware is still good for me, as of two years ago. Their customer service sucks, but the hardware is still good.
In general I tend to go for Toshiba or Hitachi (rebranded to a different name if I recall…) if I have a preference. I have some really old drives like 15+ years old still chugging along.
In my home server, my Seagates have been dying one after another. I’ve replaced each failed one with a Toshiba, and they’ve been rock solid so far.
WD has been treating me well, but my most recent batch was HGST He10 drives from Server Part Deals a couple years back, so I can’t comment on the more recent drives.
Western Digital used to be great. Don’t know if they still are. I never had an issue with any of my HDDs from them (I only ever bought the high end stuff though)
PSA: always run a full-length SMART test on any drive you buy, even from an OEM. The short test and the logs are not enough; I have bought faulty drives where someone had reset the logs and power-on hours.
All passed the short SMART test but failed the long SMART test after only a few minutes. I found just one drive that the skrub forgot to wipe, and its log showed 6 continuous years of power-on usage.
Even from an OEM, you’ll at least know if the hardware is DOA, in which case you can RMA it.
Fucking people are wild.
Probably performs a good burn-in for them too.
Do people still do that? It used to be common practice to power on equipment and let it sit, either idle or running full-tilt, for a couple of days before even starting to configure it. Let the factory bugs scatter out.
Landlord just got me a new washing machine. I’ve been burning it in since Sunday.
My parents bought a beach house (a bungalow on a postage stamp, before anyone gets any ideas that we’re some 1%ers), and it came with an old washer and dryer. My old man put a single pair of jeans in the dryer and seemingly forgot about them. He says he did it for a timer. Leaves the house. Nobody there for a week. My mom comes in; the dryer is still running, the jeans essentially translucent at this point. One of those things you can laugh at only because it wasn’t a tragedy.
I can tell you’re lying, because your pants are on fire.
No no, that was the old washer setting pants on fire.
Never heard of that, interesting
Yeah, we did that at my last company to make sure our hardware was up to spec. We deployed an IoT device for long-term outdoor installations, so it needed to survive very hot temps. We had a refrigerator we’d gutted and added heat to, and we’d run a simulation with heavier-than-expected load for a couple of days and tossed/RMA’d the bad units.
That was a literal burn-in, but the same concept applies to pretty much everything. If you build or buy a PC, test the hardware (Prime95 for the CPU, memtest for RAM, etc.). Put it through its paces to work out the major bugs before relying on it, so you don’t have to RMA a production system.
I do; I run a four-pass destructive badblocks pass (badblocks -wsv) on new drives before putting them into service.
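For anyone unfamiliar, badblocks -w writes four test patterns (0xaa, 0x55, 0xff, 0x00) across the whole device and reads each back to verify. Here’s a rough Python sketch of that idea, run against a scratch file rather than a real block device (never point anything destructive at a disk you care about):

```python
import os
import tempfile

# badblocks' default destructive (-w) test patterns
PATTERNS = (0xAA, 0x55, 0xFF, 0x00)

def destructive_pattern_test(path, size):
    """Write each pattern across `path`, read it back, and
    return the list of patterns that failed verification."""
    failed = []
    for p in PATTERNS:
        block = bytes([p]) * size
        with open(path, "wb") as f:
            f.write(block)            # write pass
        with open(path, "rb") as f:
            if f.read() != block:     # verify pass
                failed.append(p)
    return failed

# Demo against a 1 MiB scratch file, NOT a real disk.
fd, scratch = tempfile.mkstemp()
os.close(fd)
print("failed patterns:", destructive_pattern_test(scratch, 1 << 20))  # failed patterns: []
os.unlink(scratch)
```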
Secondary PSA: Seagate uses a godawful numbering scheme in their SMART results. If you’re not aware that you need a calculator to understand the raw error counts, they will freak you the fuck out.
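Concretely: on many Seagate models, attributes 1 (Raw_Read_Error_Rate) and 7 (Seek_Error_Rate) pack two counters into one 48-bit raw value. The commonly reported layout (not vendor-documented, so verify for your model) is upper 16 bits = actual error count, lower 32 bits = total operations, which is why a huge raw number usually means zero errors over millions of seeks:

```python
# Decode a Seagate-style 48-bit raw SMART value (attrs 1 and 7).
# Assumed layout (commonly reported, not vendor-documented):
#   upper 16 bits = error count, lower 32 bits = operation count.

def decode_seagate_raw48(raw):
    errors = raw >> 32             # top 16 bits of the 48-bit value
    operations = raw & 0xFFFFFFFF  # bottom 32 bits
    return errors, operations

# A scary-looking raw Seek_Error_Rate of 121,424,104 is really
# 0 errors over ~121 million seeks:
print(decode_seagate_raw48(121_424_104))  # (0, 121424104)
```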
According to TFA, these drives all passed SMART tests.
what do we call this scandal? seagate? seagate…gate?
Seagate’s not the one responsible for this, though. It was the work of shady retailers.
yeah but where’s the pun in that
“Shady German Retailers Spin Refurbished Drives As New”
Shady German Retailers Spin Refurbished Drives As New gate
Sea(gate)^2
Seetor-gate. Because it’s German
Der Meertor-Skandal
“lightly fucked”
How does one test for this?
Since the other answer is about desktop use: if you’re on Linux, your best bet is smartctl.
There are several programs that can check disk info (S.M.A.R.T.), so I’ll lay out some options for you:
CrystalDiskInfo is free to use on Windows
For macOS (where realistically you’d be doing this for an external drive, as I believe modern internal drives don’t show you much, or anything at all) you can get a free trial of DriveDx. There are probably other programs you can use for free, but if you only need to do this once, just get that, because it does a really good job of letting you know what’s up and visualizes things in a way newbies can easily understand.
There are programs that can check such things as runtime, wear (…).
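If you’d rather script the check, smartctl’s attribute table is easy to parse. A minimal sketch with a hypothetical sample of “smartctl -A” output (the column layout follows smartctl’s standard table; the raw value is the last column), pulling out power-on hours:

```python
import re

# Hypothetical excerpt of "smartctl -A /dev/sdX" output.
SAMPLE = """\
  9 Power_On_Hours          0x0032   001   001   000    Old_age   Always       -       52560
 12 Power_Cycle_Count       0x0032   099   099   020    Old_age   Always       -       83
"""

def power_on_hours(smartctl_output):
    # Grab the trailing raw value from the Power_On_Hours row.
    m = re.search(r"Power_On_Hours.*?(\d+)\s*$", smartctl_output, re.MULTILINE)
    return int(m.group(1)) if m else None

hours = power_on_hours(SAMPLE)
print(hours, "hours is about", round(hours / 24 / 365, 1), "years")
# 52560 hours is about 6.0 years
```

That matches the “6 continuous years of power on usage” scenario above: a drive sold as new with 50,000+ hours on the clock is immediately obvious.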
Seems strange, as it’s from several different retailers, but Seagate confirmed they were refurbished, so it seems a bit bait-and-switch. Why would so many retailers be doing it, though?
Either Seagate is doing it, or all the retailers get them from the same source (which may not be Seagate) that is doing it or whose stock is contaminated by fulfillment pooling.
The wholesaler these are shipped from may have bought a large batch of hard drives from China and commingled the stock. That’s the most logical explanation.
They confirmed the drives were refurbished, and also that they were OEM drives (meaning a different warranty), so someone 100% has a mixed assortment of stock. Whether that was on Seagate’s end or the retailers’ end is unclear, though imo it’s more likely the retailers’: Seagate runs its own refurbished-drive market, which is its own source unaffected by other sources, so it would only be a Seagate problem if someone mistakenly shipped a batch of refurbs to a retailer.
Because they can get away with it due to the fact that most people don’t know how to view the “hours powered on” information or other SMART diagnostic output.
Actually, the article mentions that the SMART metrics did not show it. The guy had to use something deeper.
SMART data is reset on a lot of refurbished HDDs, but then you usually KNOW that they are refurbished.
It may depend on the level of “refurbishing” that’s been done, but I don’t believe that’s a very good idea.
The retailers source the drives and aren’t paying particularly close attention; they’re not opening what looks like OEM-sealed retail packaging, and are simply having the drives dropshipped from the wholesaler.