- cross-posted to:
- hardware@lemmy.world
Haven’t bought Seagate in 15 years. They improve their longevity?
Not worth the risk for me to find out lol. My granddaddy stored his data on WD drives, and his daddy before him, and my daddy after him. Now I store my data on WD drives and my son will too one day. Such is life.
Seagate. The company that sold me an HDD which broke down two days after the warranty expired.
No thanks.
Laughing in Western Digital HDD running for about 10 years now.
Western Digital so good.
Had the same experience and opinion for years. They do fine on Backblaze’s drive stats, but I don’t know that I’ll ever super trust them, just 'cus.
That said, the current home server has a mix of drives from different manufacturers including seagate to hopefully mitigate the chances that more than one fails at a time.
I currently have an 8 year old Seagate external 4TB drive. Should I be concerned?
Funny because I have a box of Seagate consumer drives recovered from systems going to recycling that just won’t quit. And my experience with WD drives is the same as your experience with Seagate.
Edit: now that I think about it, my WD experience is from many years ago. But the Seagate drives I have are not new either.
Survivorship bias. Obviously the ones that survived their users long enough to go to recycling would last longer than those that crap out right away and need to be replaced before the end of the life of the whole system.
I mean, obviously the whole thing is biased, if objective stats state that neither is particularly more prone to failure than the other, it’s just people who used a different brand once and had it fail. Which happens sometimes.
Ah I wasn’t thinking about that. I got the scrappy spinny bois.
I’m fairly sure my friends and I had a bad batch of Western Digitals too.
Did you buy consumer Barracuda?
I had the opposite experience. My Seagates have been running for over a decade now. The one time I went with Western Digital, both drives crapped out in a few years.
Heck yeah.
Always a fan of more storage. Speed isn’t everything!
Great, can’t wait to afford one in 2050.
That’s good, really good news, to see that HDDs are still being manufactured and thought of, because I’m having a serious problem trying to find a new 2.5" HDD for my old laptop here in Brazil. I can easily find SSDs across the Brazilian online marketplaces, and they’re not very expensive, but I intend to purchase a mechanical one because SSDs won’t hold data as long as HDDs. However, there are so few HDDs for sale, and those I could find aren’t brand-new.
Dude, I had a 240 GB SSD that’s 14 years old, and SMART is telling me it still has 84% life left. It was a main OS drive and was formatted multiple times. Literally, data is going to be discontinued before this disk dies. Stop spreading fake news. Realistically, how many times do you fill an SSD in a typical scenario?
As per my previous comment, I had /var, /var/log, /home/me/.cache, among many other frequently written directories, on the SSD since 2019. SSDs have fewer write cycles than HDDs, it’s not “fake news”.
“However, SSDs are generally more expensive on a per-gigabyte basis and have a finite number of write cycles, which can lead to data loss over time.”
(https://en.wikipedia.org/wiki/Solid-state_drive)
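The write-cycle concern can be put into rough numbers. A back-of-the-envelope sketch, where the 80 TBW rating and 20 GB/day write rate are illustrative assumptions, not figures from this thread:

```python
# Rough SSD endurance estimate from a drive's rated TBW (terabytes written).
# 80 TBW is a plausible rating for a small consumer SSD; 20 GB/day is a
# generous desktop write load. Both numbers are assumptions for illustration.
def endurance_years(tbw: float, gb_per_day: float) -> float:
    return tbw * 1000 / gb_per_day / 365

print(f"{endurance_years(80, 20):.1f} years")  # ~11.0 years
```

By that estimate, write exhaustion takes roughly a decade of steady desktop use, which is why wear rarely kills consumer SSDs before obsolescence does.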
I’m not really sure why exactly mine is coil whining; it happens occasionally and nothing else happens aside from the high-pitched sound, but it is coil whine.
How the hell can an SSD coil whine… without moving parts lol… Second, realistically, for a normal user an SSD is probably going to last more than 10 years. We aren’t talking about intensive data servers here; we’re talking about the hardcorest of gamers, for example, normal people. And of course, to begin with, HDDs don’t have a write limit lol; they fail because of their mechanical parts. Finally, cost benefit: the M.2 I was suggesting is $200 for 4TB. C’mon, it’s not the end of the world, and you multiply speeds… by 700…
How the hell can an SSD coil whine… without moving parts lol…
Do you even know what “coil whine” is? It has nothing to do with moving parts! “Coil whine” is a physical phenomenon that happens when electrical current makes an electronic component, such as an inductor, vibrate slightly, emitting a high-pitched sound. It’s a well-known phenomenon in graphics cards (whose only moving part is the cooler, which is not the source of their coil whine). SSDs aren’t supposed to coil whine, and that’s why I’m worried about the health of mine.
Finally, cost benefit: the M.2 I was suggesting is $200 for 4TB. C’mon, it’s not the end of the world, and you multiply speeds… by 700…
I’m not USian, so pricing and cost-benefit may differ. Also, the thing is that I already have another SSD, a 240G one. I don’t need to buy another; I just need an HDD, which is what I said in my first comment. Just that: a personal preference, a personal opinion based on personal experiences, and that’s all. The only statement I made beyond personal opinion was about the life span, by which I meant the write-cycle thing. But that’s it: personal opinion, no need to rant about it.
SSDs won’t hold data for much longer compared to HDDs
Realistically this is not a good reason to select SSD over HDD. If your data is important, it’s being backed up (and if it’s not backed up, it’s not important; yada yada 3-2-1 backups and all. I’ll happily give real backup advice if you need it).
In my anecdotal experience, across both my family’s various computers and computers I’ve seen bite the dust at work, I’ve not observed any longevity difference between HDDs and SSDs. In fact I’ve only seen 2 fail, and those were front-desk PCs that were effectively on 24/7 with heavy use during all lobby hours, and that was after multiple years of that use case. And I’ve never observed bit rot in the real world on anything other than crappy flash drives and SD cards (literally the lowest-quality flash you can get).
Honestly the best way to look at it is to choose based on your use case. Always have your boot device be an SSD, and if you don’t need more storage on that computer than you feel like buying an SSD for, don’t even worry about an HDD for that device. HDDs have only one use case these days: bulk storage at a comparatively low cost per GB.
I replaced my laptop’s DVD drive with a HDD caddy adapter, so it supports two drives instead of just one. Then, I installed a 120G SSD alongside with a 500G HDD, with the HDD being connected through the caddy adapter. The entire Linux installation on this laptop was done in 2019 and, since then, I never reinstalled nor replaced the drives.
But sometimes I hear what seems to be a “coil whine” (a short high pitched sound) coming from where the SSD is, so I guess that its end is near. I have another SSD (240G) I bought a few years ago, waiting to be installed but I’m waiting to get another HDD (1TB or 2TB) in order to make another installation, because the HDD was reused from another laptop I had (therefore, it’s really old by now, although I had no I/O errors nor “coil whinings” yet).
Back when I installed the current Linux, I mistakenly placed /var and /home (and consequently /home/me/.cache and /home/me/.config, both folders with high write rates because I use KDE Plasma) on the SSD. As the years passed, I realized it was a mistake, but I never had the courage to relocate things, so I did some “creative solutions” (“gambiarra”), such as creating symlinked folders for .cache and .config, pointing them to another folder on the HDD.
As for backup, while I have three old spare HDDs holding the same old data (so it’s a redundant backup), there are so many new things (hundreds of GBs) I both produced and downloaded that I’d need lots of room to better organize all the files, finding out what is no longer needed and renewing my backups. That’s why I was looking for either 1TB or 2TB HDDs, as brand-new as possible (also, I’m intending to tinker more with things such as data science after a fresh new installation of Linux). It’s not something I’m in a hurry to do, though.
Edit: and those old spare HDDs are 3.5" so they wouldn’t fit the laptop.
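The symlink “gambiarra” described above can be sketched like this; temp directories stand in for the real SSD home and HDD mount point, since the actual paths are the poster’s own:

```python
# Sketch of the symlink workaround: move a high-write directory to the HDD
# and leave a symlink behind. Temp directories stand in for the real SSD
# home and HDD mount point; the paths here are illustrative only.
import shutil
import tempfile
from pathlib import Path

ssd_home = Path(tempfile.mkdtemp())  # pretend: home directory on the SSD
hdd = Path(tempfile.mkdtemp())       # pretend: HDD mount point

cache = ssd_home / ".cache"
cache.mkdir()
(cache / "file").write_text("old cache data")

# Move the high-write directory onto the HDD, then symlink it back
target = hdd / "cache"
shutil.move(str(cache), str(target))
cache.symlink_to(target)

# Applications keep writing to ~/.cache, but the bytes now land on the HDD
print((cache / "file").read_text())  # old cache data
```

The same pattern works for .config or any other directory, as long as the applications using it are closed while you move it.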
I doubt the high pitched whine that you’re hearing is the SSD failing. The sheer amount of writes to fully wear out an SSD is…honestly difficult to achieve in the real world. I’ve got decade old budget SSDs in some of my computers that are still going strong!
These things are unreliable. I had 3 Seagate HDDs in a row fail on me. Never had an issue with SSDs and never looked back.
Seagate in general are unreliable in my own anecdotal experience. Every Seagate I’ve owned has died in less than five years. I couldn’t give you an estimate on the average failure age of my WD drives because it never happened before they were retired due to obsolescence. It was over a decade regularly though.
Well, until you need capacity, why not use an SSD? It’s basically mandatory for the operating system drive anyway.
Capacity for what? There are 4TB M.2 SSDs costing $200, c’mon…
I would rather not buy such large SSDs. For most stuff the performance advantage is useless while the price is much higher, and my impression is still that such large SSDs have a shorter lifespan (regarding how many writes it takes for them to break down). Recovering data from a failing HDD is also easier: SSDs just turn read-only or completely fail at some point, and in the latter case often even data recovery companies are unable to recover anything, while HDDs will often give signs that good monitoring software can detect weeks or months in advance, so you know to be more cautious with the drive.
How is it easier? Do you open your HDDs and take the info from there? Do you have specialized equipment and knowledge? Second, if you detect via SMART that you are getting close to the TBW, change the SSD, duh… SMART is a lot more effective on SSDs; depending on the model it even gives you an estimated remaining lifetime…
I mean, cool and all, but call me when sata or m2 ssds are 10TB for $250, then we’ll talk.
Cool, but I will never buy another Seagate ever again.
Same but Western Digital: a 13GB drive that failed and lost all my data 3 times, and the 3rd time was outside the warranty! I had paid $500, the most expensive thing I had ever bought until that day.
Lmao the HDD in the first machine I built in the mid 90s was 1.2GB
I had a 20mb hard drive
I had a 1gb hard drive that weighed like 20 kgs, some 40 odd pounds
Back then that was very impressive!
Yup. My grandpa had 10 MB in his DOS machine back then.
My dad had a 286 with a 40MB hard drive in it. When it spun up it sounded like a plane taking off. A few years later he had a 486 and got a 2gb Seagate hard drive. It was an unimaginable amount of space at the time.
The computer industry in the 90s (and presumably the 80s, I just don’t remember it) was wild. Hardware would be completely obsolete every other year.
It really was doubling in speed about every 18 months.
My 286er had 2MB RAM and no hard drive, just two 5.25" floppy drives. One to boot the OS from, the other for storage and software.
I upgraded it to 4 MB RAM and bought a 20 MB hard drive, moved EVERY piece of software I had onto it, and it was like 20% full. I sincerely thought that would last forever.
Today I casually send my wife a 10 sec video from the supermarket to choose which yoghurt she wants and that takes up about 25 MB.
I had 128KB of RAM and I loaded my games from tape. And most of those only used 48KB of it.
Yeah, we still had an old 8086 with tape drive and all from my dad’s university days around, but I never actually used that one.
The two models, […] each offer a minimum of 3TB per disk
Huh? The hell is this supposed to mean? Are they talking about the internal platters?
Here i am still rocking 6TB.
My first HDD had a capacity of 42MB. Still a short way to go until factor 10⁶.
My first one was a Seagate ST-238R. 32 MB of pure storage, baby. For some reason I thought we still needed the two disk drives as well, but I don’t remember why.
“Oh what a mess we weave when we amiss interleave!”
We’d set the interleave to, say, 4:1 (four revolutions to read all data in a track, IIRC), because the hard drive was too fast for the CPU to deal with the data… ha.
My first HD was a 20MB MFM drive :). Be right back, need some “Just for Men” for my beard (kidding, I’m proud of it).
This is for cold and archival storage right?
I couldn’t imagine seek times on any disk that large. Or rebuild times…yikes.
Definitely not for either of those. Can get way better density from magnetic tape.
They say they got the increased capacity by increasing storage density, so the head shouldn’t have to move much further to read data.
You’ll get further putting a cache drive in front of your HDD regardless, so it’s vaguely moot.
Just one would be a great backup, but I’m not ready to run a server with 30TB drives.
I’m here for it. The 8-disk server is normally a great form factor for size, data density and redundancy with raid6/raidz2.
This would net around 180TB in that form factor. That would go a long way for a long while.
I dunno if you would want to run raidz2 with disks this large. The resilver times would be absolutely bazonkers, I think. I have 24 TB drives in my server and run mirrored vdevs because the chances of one of those drives failing during a raidz2 resilver is just too high. I can’t imagine what it’d be like with 30 TB disks.
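A rough lower bound on resilver time, assuming the whole disk is read or written sequentially at a sustained rate (the 250 MB/s figure is an assumption; real resilvers on fragmented raidz pools run considerably slower):

```python
# Minimum resilver time: one full-capacity sequential pass at an assumed
# sustained rate. Real-world resilvers take longer due to seeks and load.
def min_resilver_hours(capacity_tb: float, mb_per_s: float = 250.0) -> float:
    total_mb = capacity_tb * 1_000_000  # TB -> MB (decimal, as drives are sold)
    return total_mb / mb_per_s / 3600

print(f"24 TB: {min_resilver_hours(24):.1f} h")  # ~26.7 h
print(f"30 TB: {min_resilver_hours(30):.1f} h")  # ~33.3 h
```

So even in the best case you are looking at well over a day of degraded redundancy per replaced 30 TB disk, which is the window the mirrored-vdev argument is about.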
Is RAID2 ever the right choice? Honestly, I don’t touch anything outside of 0, 1, 5, 6, and 10.
Edit: missed the z, my bad. Raidz2 is fine.
raidz2 is analogous to RAID 6. It’s just the ZFS term for double parity redundancy.
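The capacity math behind the 180TB figure mentioned above is just the double-parity rule, ignoring ZFS metadata overhead:

```python
# Usable capacity of a raidz2 (double-parity) vdev: two disks' worth of
# space goes to parity, the rest holds data. ZFS overhead is ignored here.
def raidz2_usable_tb(n_disks: int, disk_tb: float) -> float:
    return (n_disks - 2) * disk_tb

print(raidz2_usable_tb(8, 30))  # 180.0 TB for eight 30 TB disks
```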
Yeah, I agree. I just got 20TB drives in mine and decided to just do z2, which in my case should be fine. But I was contemplating the same thing. Going to have to start doing z2 with 3 drives in each vdev lol.
A few years ago I had a 12-disk RAID6 array and the power distributor (the bit between the redundant PSUs and the rest of the system) died and took 5 drives with it; I lost everything on there. Backup is absolutely essential, but if you can’t do that for some reason, at least use RAID1, where you only lose part of your data if more than 2 drives fail.
Just a reminder: These massive drives are really more a “budget” version of a proper tape backup system. The fundamental physics of a spinning disc mean that these aren’t a good solution for rapid seeking of specific sectors to read and write and so forth.
So a decent choice for the big machine you backup all your VMs to in a corporate environment. Not a great solution for all the anime you totally legally obtained on Yahoo.
Not sure if the general advice has changed, but you are still looking for a sweet spot in the 8-12 TB range for a home NAS where you expect to regularly access and update a large number of small files rather than a few massive ones.
So I’m guessing you don’t really know what you’re talking about.
honestly curious, why the hell was this downvoted? I work in this space and I thought this was still the generally accepted advice?
Because people are thinking through specific niche use cases coupled with “Well it works for me and I never do anything ‘wrong’”.
I’ll definitely admit that I made the mistake of trying to have a bit of fun when talking about something that triggers the Dunning-Kruger effect. But people SHOULD be aware of how different use patterns impact performance, how that performance impacts users, and generally how different use patterns impact wear and tear of the drive.
Not sure what you’re going on about here. Even these disks have plenty of performance for read/write ops on rarely written data like media. They have the same ability to be used with error-checking filesystems like ZFS or btrfs, and can be used in RAID arrays, which add redundancy against disk failure.
The only negatives of large drives in home media arrays are the cost, slightly higher idle power usage, and the resilvering time when replacing a bad disk in an array. Your 8-12TB recommendation already has most of these negatives; adding more space per disk just scales them linearly.
Additionally, most media is read in a contiguous scan. Streaming media is very much not random access.
Your typical access pattern is going to be seeking to a chunk, reading a few megabytes of data in a row for the streaming application to buffer, and then moving on. The ~10ms of access time at the start are next to irrelevant. Particularly when you consider that the OS has likely observed that you have unutilized RAM and loads the entire file into the memory cache to bypass the hard drive entirely.
HDD read rates are way faster than media playback rates, and seek times are just about irrelevant in that use case. Spinning rust is fine for media storage. It’s boot drives, VM/container storage, etc, that you would want to have on an SSD instead of the big HDD.
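To put numbers on that: typical media bitrates are tiny next to HDD sequential throughput. The figures below are ballpark assumptions, not measurements:

```python
# HDD sequential read speed vs. media playback bitrates (ballpark figures,
# chosen for illustration; actual drives and encodes vary).
hdd_mb_per_s = 200  # assumed sustained sequential read for a modern HDD
streams_mbps = {
    "1080p H.264": 8,
    "4K HEVC": 25,
    "UHD Blu-ray remux": 100,
}
for name, mbps in streams_mbps.items():
    need = mbps / 8  # megabits/s -> megabytes/s
    print(f"{name}: {need:.1f} MB/s, ~{int(hdd_mb_per_s / need)} streams per disk")
```

Even the heaviest remux needs about 12.5 MB/s, so a single spinning disk has more than an order of magnitude of headroom for playback.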