Western Digital details 14-platter 3.5-inch HAMR HDD designs with 140 TB and beyond (www.tomshardware.com)
from veeesix@lemmy.ca to selfhosted@lemmy.world on 07 Feb 21:52
https://lemmy.ca/post/60059065

cross-posted from: beehaw.org/post/24650125

Because nothing says “fun” quite like having to restore a RAID that just saw 140TB fail.

Western Digital this week outlined its near-term and mid-term plans to increase hard drive capacities to around 60TB and beyond with optimizations that significantly increase HDD performance for the AI and cloud era. In addition, the company outlined its longer-term vision for hard disk drives’ evolution that includes a new laser technology for heat-assisted magnetic recording (HAMR), new platters with higher areal density, and HDD assemblies with up to 14 platters. As a result, WD will be able to offer drives beyond 140 TB in the 2030s.

Western Digital plans to volume-produce its first commercial HAMR hard drives next year, with capacities starting at 40TB (CMR) or 44TB (SMR) in late 2026 and production ramping in 2027. These drives will use the company’s proven 11-platter platform with high-density media, as well as HAMR heads with edge-emitting lasers that heat the iron-platinum alloy (FePt) on top of the platters to its Curie temperature — the point at which its magnetic properties change — reducing its magnetic coercivity before writing data.

#selfhosted


solrize@lemmy.ml on 07 Feb 22:03 next collapse

As a result, WD will be able to offer drives beyond 140 TB in the 2030s.

Um thanks but tell us about 2026?

lemmyng@piefed.ca on 07 Feb 23:38 collapse

Shrimp platters.

ToTheGraveMyLove@sh.itjust.works on 07 Feb 23:42 collapse

Whoops, sorry, the oceans are hostile to life now. No more shrimp platters. Try again next time.

FirmDistribution@lemmy.world on 07 Feb 22:08 next collapse

with optimizations that significantly increase HDD performance for the AI and cloud era

Can somebody do anything with a normal consumer in mind these days? 😭

dual_sport_dork@lemmy.world on 07 Feb 22:18 next collapse

Not until somebody shuts off the investor money faucet for AI. Then they’ll come crawling back — although inevitably not until after they go whining to all the world’s governments about wanting a bailout.

But hey, look at the bright side. We’ve already had the cryptocurrency mining boom and bust, and “AI” boom and soon to be bust. There’s still time for some idiot to invent the next tech scam fad which will conveniently require a shitload of hardware for no recognizably useful purpose.

cecilkorik@piefed.ca on 08 Feb 04:39 next collapse

Then they’ll come crawling back — although inevitably not until after they go whining to all the world’s governments about wanting a bailout.

And don’t forget the part where, whether they get a bailout or not, they’ll still have to double the prices of everything to make up for all the money they lost on that stupid AI bubble exploding in their face (which all of us are somehow to blame for, obviously, which is why we have to pay them back for it)

AndrewZabar@lemmy.world on 08 Feb 16:44 collapse

“although inevitably not until after they go whining to all the world’s governments about wanting a bailout.”

Ahem… Whining? Wanting? Try instructing. They own the governments so they will just tell them to do it, and it will be done.

akilou@sh.itjust.works on 07 Feb 22:44 next collapse

Does data take up less room when it’s being used by AI?

wewbull@feddit.uk on 09 Feb 13:14 collapse

No, quite the opposite. Models are largely a mass of random looking numbers that can’t be compressed losslessly.

mycodesucks@lemmy.world on 08 Feb 02:11 next collapse

No, and it’s by design.

You’re gonna lease a tablet and use cloud-based storage services and like it.

The dystopia is here.

RalfWausE@feddit.org on 08 Feb 07:52 next collapse

Back to the 70s and early 80s…

selokichtli@lemmy.ml on 08 Feb 15:03 collapse

Yeah, adding all the surveillance technology developed in the last 40 years, so you don’t dare take your eyes off the display, for example.

wewbull@feddit.uk on 09 Feb 13:13 collapse

HP is doing laptop rentals for non-commercial customers only.

myserverisdown@lemmy.world on 08 Feb 03:51 next collapse

140 TB is a whole heck of a lot of movies and TV shows

Kushan@lemmy.world on 08 Feb 06:58 next collapse

It’s about the storage I have in my server right now - using 15 drives ☠️

brygphilomena@lemmy.dbzer0.com on 08 Feb 14:24 collapse

It’s about half of mine, with about 30 drives. Whatcha running?

Kushan@lemmy.world on 08 Feb 15:26 collapse

I’m running a TrueNAS build which has just grown over time. Started off at 5x8TB drives, then added 5x16TB drives and just last week added another 5x26TB drives (that was costly ☠️). It’s all running in a very cheap case using an old Threadripper machine I had (2950X), which thankfully supports ECC (128GB purchased years ago before the silliness).

wltr@discuss.tchncs.de on 09 Feb 04:47 collapse

Imagine buying one for cheap because it has some bad blocks and it’s too unreliable to keep real valuable data on! I have an 8 TB HDD I bought for less than $100 a decade ago, from a friend though, as it had some bad blocks. I only host media for the HTPC on it, but it’s been solid all these years. And when it dies, sad, but nothing valuable that I can’t redownload.

rumba@lemmy.zip on 08 Feb 05:36 next collapse

Normal consumers can install Jellyfin. At some point they’ll make downloading a crime; it wouldn’t hurt people to have a decent collection of stuff ready for that day.

frongt@lemmy.zip on 09 Feb 04:21 collapse

That point was Jan 23rd. torrentfreak.com/ripping-clips-for-youtube-reacti…

rumba@lemmy.zip on 09 Feb 15:25 collapse

oh shit, how the hell did I miss that.

atzanteol@sh.itjust.works on 08 Feb 14:06 next collapse

The fuck do you mean? You can use these drives for any purpose you want.

selokichtli@lemmy.ml on 08 Feb 15:00 collapse

Well, that’s a target market right now. Intel GPUs are doing better than expected, I think, thanks to all the big corporations abandoning “normal consumers”.

billwashere@lemmy.world on 07 Feb 22:40 next collapse

This would be a bitch to have to rebuild in a raid array. At some point a drive can get TOO big. And this is looking to cross that line.

irmadlad@lemmy.world on 07 Feb 22:54 next collapse

At some point a drive can get TOO big

I was thinking the same. I would hate to toast a 140 TB drive. I think I’d just sit right down and cry. I’ll stick with my 10 TB drives.

rtxn@lemmy.world on 07 Feb 23:05 next collapse

This is not meant for human beings. A creature that needs over 140 TB of storage in a single device can definitely afford to run them in some distributed redundancy scheme with hot swaps and just shred failed units. We know they’re not worried about being wasteful.

thejml@sh.itjust.works on 07 Feb 23:50 next collapse

Rebuild time is the big problem with this in a RAID Array. The interface is too slow and you risk losing more drives in the array before the rebuild completes.

rtxn@lemmy.world on 08 Feb 00:04 collapse

Realistically, is that a factor for a Microsoft-sized company, though? I’d be shocked if they only had a single layer of redundancy. Whatever they store is probably replicated between high-availability hosts and datacenters several times, to the point where losing an entire RAID array (or whatever media redundancy scheme they use) is just a small inconvenience.

thejml@sh.itjust.works on 08 Feb 00:20 next collapse

True, but that’s going to really push your network links just to recover. Realistically, something like ZFS or RAID-6 with extra hot spares would help reduce the risks, but it’s still a non-trivial amount of time. Not to mention the impact on normal usage during that period.

frongt@lemmy.zip on 08 Feb 04:02 collapse

Network? Nah, the bottleneck is always going to be the drive itself. Storage networks might pass absurd numbers of Gbps, but ideally you’d be resilvering from a drive on the same backplane, and while SAS-4 tops out at 24 Gbps, there’s no way you’re going to hit that write speed on a single drive. The fastest retail drives don’t do more than ~2 Gbps. Even the Seagate Mach.2 only does around twice that, thanks to having two head actuators.
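That back-of-envelope can be made concrete. A minimal sketch (illustrative numbers only, not vendor specs — the ~280 MB/s sustained rate for a hypothetical large drive is an assumption) of which side actually limits a resilver:

```python
# Rough resilver estimate: the slower of drive throughput and link
# bandwidth sets the floor. Illustrative numbers, not vendor specs.

def resilver_hours(capacity_tb: float, drive_mb_s: float, link_gbps: float) -> float:
    """Hours to rewrite a full drive, limited by the slower of drive and link."""
    drive_bits_s = drive_mb_s * 1e6 * 8      # drive throughput, bits/s
    link_bits_s = link_gbps * 1e9            # link throughput, bits/s
    total_bits = capacity_tb * 1e12 * 8
    return total_bits / min(drive_bits_s, link_bits_s) / 3600

# A hypothetical 140 TB drive at ~280 MB/s sustained behind a 24 Gbps SAS-4 link:
print(f"{resilver_hours(140, 280, 24):.0f} hours")  # → 139 hours (~6 days), drive-limited
```

At ~2.2 Gbps of drive throughput against a 24 Gbps link, the link has an order of magnitude of headroom, which is the point being made above.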

thejml@sh.itjust.works on 08 Feb 17:25 collapse

100%. But the post I was responding to was talking about recovering a failed array from other copies, not locally.

enumerator4829@sh.itjust.works on 08 Feb 08:27 next collapse

Fairly significant factor when building really large systems. If we do the math, there end up being some relationships between

  • disk speed
  • targets for “resilver” time / risk acceptance
  • disk size
  • failure domain size (how many drives do you have per server)
  • network speed

Basically, for a given risk acceptance and total system size there is usually a sweet spot for disk sizes.

Say you want 16TB of usable space, and you want to be able to lose 2 drives from your array (fairly common requirement in small systems), then these are some options:

  • 3x16TB triple mirror
  • 4x8TB Raid6/RaidZ2
  • 6x4TB Raid6/RaidZ2

The more drives you have, the better recovery speed you get and the less usable space you lose to replication. You also get more usable performance with more drives. Additionally, smaller drives are usually cheaper per TB (down to a limit).

This means that 140TB drives become interesting if you are building large storage systems (probably at least a few PB), with low performance requirements (archives), but there we already have tape robots dominating.

The other interesting use case is huge systems, large numbers of petabytes up into exabytes. More modern schemes for redundancy and caching mitigate some of the issues described above, but they are usually only relevant when building really large systems.

tl;dr: arrays of 6-8 drives at 4-12TB are probably the sweet spot for most data hoarders.
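The sweet-spot argument above is easy to sanity-check numerically. A minimal sketch (assuming 2-drive fault tolerance throughout, and a hypothetical 200 MB/s sustained rebuild write rate) of the three 16TB-usable options:

```python
# Compare the three 16 TB-usable layouts: usable capacity is the same,
# but smaller drives give a faster minimum rebuild. The 200 MB/s
# sustained write rate is an assumed illustrative number.

def usable_tb(n_drives: int, drive_tb: int, parity: int) -> int:
    """Usable capacity with `parity` drives' worth of redundancy (RAID6/Z2-style)."""
    return (n_drives - parity) * drive_tb

def min_rebuild_hours(drive_tb: int, write_mb_s: float = 200) -> float:
    """Lower bound: time just to fill the replacement drive at sustained speed."""
    return drive_tb * 1e12 / (write_mb_s * 1e6) / 3600

for name, n, size in [("3x16TB mirror", 3, 16),
                      ("4x8TB RaidZ2", 4, 8),
                      ("6x4TB RaidZ2", 6, 4)]:
    print(f"{name}: {usable_tb(n, size, 2)} TB usable, "
          f">= {min_rebuild_hours(size):.1f} h rebuild")
```

All three give 16 TB usable while surviving two drive losses, but the replacement drive fills in roughly 22, 11, and 6 hours respectively, which is the "smaller drives recover faster" trade-off described above.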

brygphilomena@lemmy.dbzer0.com on 08 Feb 14:43 collapse

I’d imagine they are using ceph or similar.

You have disk level protection for servers. Server level protection for racks. Rack level protection for locations. Location level protection for datacenters. Probably datacenter level protections for geographic regions.

It’s fucking wild when you get to that scale.

MonkeMischief@lemmy.today on 08 Feb 06:25 collapse

This is not meant for human beings.

This is for like, Smaug but if he hoarded classic anime and the entirety of Steam or something. Lol

gravitas_deficiency@sh.itjust.works on 08 Feb 00:19 collapse

Yeah I’m running 16s and that’s pushing it imo

non_burglar@lemmy.world on 07 Feb 23:07 next collapse

It doesn’t really matter; the current limitations are not so much data density at rest as getting the data in and out at a useful speed. We breached the capacity barrier long ago with disk arrays.

SATA will no longer be improved; we now need U.2 designs for data transport that are built for storage. This exists, but needs to filter down through industrial applications to reach us plebs.

irmadlad@lemmy.world on 07 Feb 23:34 collapse

640K ought to be enough for anybody.

pHr34kY@lemmy.world on 08 Feb 00:09 collapse

I don’t get how a single person would have that much data. I fit my whole life, from the first shot I took on a digital camera in 2001, onto a 4TB drive.

…and even then, two thirds of it is just pirated movies.

billwashere@lemmy.world on 08 Feb 00:23 next collapse

Amateur 😀

But seriously I probably have close to 100 TB of music, TV shows, movies, books, audiobooks, pictures, 3d models, magazines, etc.

panda_abyss@lemmy.ca on 08 Feb 00:49 collapse

I need a home for my orphaned podman containers /s

I think this is better targeted at small and medium businesses.

If you run this as a NAS, you could easily have all your business’s files in one place without needing complex networking.

just_another_person@lemmy.world on 08 Feb 00:06 next collapse

This ONLY works at an insane scale. This will never hit the consumer market.

Korkki@lemmy.ml on 08 Feb 02:50 collapse

Also, what current consumer-level application could require 140TB of storage? That would be some advanced-level data hoarding or something.

Andres4NY@social.ridetrans.it on 08 Feb 03:04 next collapse

@Korkki @just_another_person I see 4k HDR Blu-ray movie rips these days on the order of 50GB (edit: eg, Eddington.2025.MULTi.VFF.2160p.DV.HDR.BluRay.REMUX.HEVC-[BATGirl]: 77.73G).

Which is too rich for my blood (I'm still watching on 1080p screens over here), but for someone with the right kind of home theater.. that's only ~280 movies on a 14TB drive. Lots of movie collections, even in the olden days of physical VHS and DVDs, span 1,000+ movies.

Zorque@lemmy.world on 08 Feb 05:28 collapse

14TB or 140TB? The latter is what’s being talked about, so that’s more like 2800 movies. Which more than covers that 1000+ movie criteria.

Andres4NY@social.ridetrans.it on 08 Feb 05:43 collapse

@Zorque I'm saying that 14TB will only fit 280 (or, more likely, fewer) of those ultra-hq movies, so 140TB (or, in the lead up to that, 100TB, since they're talking about 5+ years or more before they even get close to 140TB) is reasonable for a 1,000-2,000 movie collection. Obviously I'm being loose with numbers, but given that one single movie can consume almost 80GB.. well, you can start to understand consumer demand for 100+TB drives.

just_another_person@lemmy.world on 08 Feb 03:18 collapse

The failure rate is going to be absolutely INSANE as well.

gravitas_deficiency@sh.itjust.works on 08 Feb 00:18 next collapse

Holy fuck can you imagine how long it would take to re-stripe a failed drive in a z2 array 😭

Telorand@reddthat.com on 08 Feb 01:14 next collapse

Not a clue. Care to eli5?

SmoothLiquidation@lemmy.world on 08 Feb 01:35 collapse

When you are running a server just to store files (a NAS), you generally set it up so multiple physical hard disks are joined together into an array so that if one fails, none of the data is lost. You can replace a failed drive by taking it out and putting in a new working drive, and then the system has to copy all of the data over from the other drives. This process can take many hours even with the 10-20 TB drives you get today, so doing the same thing with a 140 TB drive would take days.

Andres4NY@social.ridetrans.it on 08 Feb 01:51 next collapse

@SmoothLiquidation @Telorand They also claim up to 8x speed improvements with HAMR. Obviously that remains to be seen, but if they could roughly match capacity improvements, that would keep restriping in the same ballpark.

wltr@discuss.tchncs.de on 09 Feb 04:58 collapse

Thanks! So, why does it matter? It’s a server; you can have it do the job unattended. Or does it affect other services, so you’re unable to use anything else before it finishes?

SmoothLiquidation@lemmy.world on 09 Feb 05:36 collapse

It will take a long time and while it runs it will use a lot of resources so the server can be bogged down. It is also a dangerous time for a NAS, because if you have a drive down, and another drive dies, the whole pool can collapse. The process involves reading every bit on every drive, so it does put strain on everything.

Some people will go out of their way to buy drives from different manufacturing batches so if one batch has a problem, not all of their drives will fail.

The way striping works (at an eli5 level) is you have a bunch of drives and one is a check for everything else. So let’s say you have four 10tb drives. Three would be data and one would be the check, so you get 30tb of usable space.

In reality you don’t have a single drive working as a check, instead you spread the checks across all of the drives, if you map it out with “d” being data and “c” being check it looks like this: dddc ddcd dcdd cddd

This way each drive has the same number of checks on it, and also why we call it striping.
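That rotation is easy to show in a few lines. A minimal sketch (a hypothetical helper, using the 4-drive example above) that reproduces the dddc ddcd dcdd cddd pattern:

```python
# Rotated parity: each stripe moves the check block ("c") one drive over,
# so no single drive holds all the parity, matching the layout described above.

def parity_layout(n_drives: int, n_stripes: int) -> list:
    rows = []
    for stripe in range(n_stripes):
        parity_at = n_drives - 1 - (stripe % n_drives)  # walk right to left
        rows.append("".join("c" if d == parity_at else "d"
                            for d in range(n_drives)))
    return rows

print(" ".join(parity_layout(4, 4)))  # → dddc ddcd dcdd cddd
```

With the parity spread evenly, every drive carries the same share of check blocks, so writes (and rebuild reads) are balanced across the array.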

Dremor@lemmy.world on 08 Feb 07:37 collapse

My Z2 had a drive failure recently, with 4TB drives. Took me almost 3 days to resilver the array 😅. Fortunately I had a hot spare set up, so it started as soon as the drive failed, but now a second drive is showing signs of failing soon, so I had to pay the AI tax (168€) to get one ASAP (arriving Monday), as well as a second one, cheaper (around 120€), which won’t arrive until the end of April.

Decronym@lemmy.decronym.xyz on 08 Feb 00:30 next collapse

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

Fewer Letters More Letters
NAS Network-Attached Storage
RAID Redundant Array of Independent Disks for mass storage
SATA Serial AT Attachment interface for mass storage
SSD Solid State Drive mass storage
ZFS Solaris/Linux filesystem focusing on data integrity

5 acronyms in this thread; the most compressed thread commented on today has 8 acronyms.

[Thread #72 for this comm, first seen 8th Feb 2026, 00:30] [FAQ] [Full list] [Contact] [Source code]

zorflieg@lemmy.world on 08 Feb 01:15 next collapse

I wonder why current consumer HDDs don’t have NVMe connectors on them. I know speeding up the bus isn’t going to make the spinning rust access any faster, but the cache RAM would probably benefit from not being capped at ~550 MB/s.

Shady_Shiroe@lemmy.world on 08 Feb 01:53 next collapse

I just hope smaller sized drives become cheaper. The word “hope” is doing a lot of heavy lifting here.

Supervisor194@lemmy.world on 08 Feb 02:39 collapse

Ten years from now…

Amazon search: “hard drive”

Result: 4TB $198

Zozano@aussie.zone on 08 Feb 03:22 next collapse

BARGAIN!

AndrewZabar@lemmy.world on 08 Feb 16:42 collapse

I think ten years from now you’ll be hard pressed to find anyone even wasting their time on something so small.

HeyThisIsntTheYMCA@lemmy.world on 08 Feb 17:23 next collapse

so you say, but people still collect “antique” hardware.

AndrewZabar@lemmy.world on 08 Feb 21:35 collapse

Well, retro etc. but I wouldn’t consider this to be that. There’s no inherent value of a run-of-the-mill drive with merely lower storage capacity. And certainly not worth a premium.

HeyThisIsntTheYMCA@lemmy.world on 08 Feb 22:16 collapse

it’s not antique yet. i still have my 5.25" diskettes with quest for glory 2 on them and they’re almost antique. i think the usb drive that reads them still works. give them another couple years.

do HDDs work better than SSDs in space? because of the cosmic rays and shit? or something about intermittent power? no, really, this is a real problem that they could be already solving, one i know jack shit about.

AndrewZabar@lemmy.world on 09 Feb 00:21 next collapse

So you want to be a hero!!! I only ever played the first one but fell in love with it.

Erana’s Peace. hidengoseke. Meep’s Peep, my friend.

HeyThisIsntTheYMCA@lemmy.world on 09 Feb 02:35 collapse

the second was the best in the series, but they all have their charm. i really need to buy the new game the coles made

AndrewZabar@lemmy.world on 09 Feb 17:03 collapse

Link please? (New game)

HeyThisIsntTheYMCA@lemmy.world on 09 Feb 17:38 collapse

there have been too many quest for glory successor fan projects i have gotten them all confused in my head. It looks like it’s Hero-U: Rogue to Redemption

AndrewZabar@lemmy.world on 10 Feb 14:46 collapse

Nice! Thanks.

Anything else out there these days worth putting time into?

I’d love to see Torchlight ported to Android.

HeyThisIsntTheYMCA@lemmy.world on 10 Feb 17:12 collapse

i mean, what’s your flavor? people are telling all kinds of great stories.

AndrewZabar@lemmy.world on 10 Feb 23:25 collapse

Ah, see I am interested in a lot of variety but these days I don’t really have the time for extremely involved stuff. I enjoy games that you can spend small clips of time on, rather than having to devote a lot at once. I like cool visuals and gizmos - that’s why Torchlight appealed to me so much. Upgrading and advancing. I loved all the Angband clones I played over the years. I love lots of the BigFish style games like where there’s a cool story, hidden object puzzles, other types of puzzles, click / find stuff, and problem solving. Also I enjoy really good musical score. Ever play Drawn the Painted Tower and its sequels? Absolutely mesmerizing game of artistic beauty. I liked games like Sword of Fargoal, as well - also a sort of fancier Angband. Dungeon crawlers, adventure stories, cool gadget type equipment / magic spells etc.

I think it would be easier to specify the things I definitely won’t devote a single second to: sports, racing, RTS, hugely long-term upgrade stuff à la Sim City (though I used to love it). Roads of Rome is an exception. God I love that. And I also loved loved loved Majesty. Nothing where reflexes are needed. Again, used to be great in my youth but it’s not my thing anymore.

I loved the Krondor series by Raymond Feist. I enjoyed every Zork incarnation, especially Return to Zork, Zork: Nemesis, and Zork Grand Inquisitor. Might & Magic I loved, as well as Wizardry. Kings Quest series and of course Hero’s Quest. I liked the Diablo editions that were very like Torchlight.

Most of all is that I prefer it be on Android or Linux.

Wow did I just write ALLLL of that? Meh. Just sharing my game tastes.

P.S. oooh I LOVED the Samorost series. Amazing style, beautiful gameplay simulation and just plain fun and moderately challenging.

frongt@lemmy.zip on 09 Feb 04:29 collapse

It depends. For anything going into space, especially microsats, the biggest concerns are space, weight, and power. SSDs are better at all of those, plus they don’t have any gyroscopic effects, and they’re much less susceptible to vibrations (e.g. the absolute earthquake at liftoff and the sudden jolts during each rocket stage). They are more susceptible to high-energy particles, but they can be hardened through shielding and parity/redundancy.

For a datacenter on Mars, you’re less concerned with SWaP, only as much as you need to be to get it there as cargo. Obviously that means space and weight are still concerns, but not power.

The other factor with using fewer larger drives is that when you have a failure, you lose a lot more data, and any recovery takes longer.

Supervisor194@lemmy.world on 09 Feb 00:32 collapse

Kind of the point of my comment was that drive size/cost is stagnating despite the massive technical progress in the space. I bought my first 4TB drive in 2020 ($89). Going back to 2015, I was buying 2TB at the same price ($86). Here in 2026, what’s the ~same price? 4TB ($99). 8TB is $180.

AndrewZabar@lemmy.world on 09 Feb 17:03 collapse

Well this is not a tech issue at all, it’s the fact that global economics have become a dumpster fire - particularly, in America. I can’t say I’m certain there are no other factors, but economically everything has gotten out of hand.

iturnedintoanewt@lemmy.world on 08 Feb 02:09 next collapse

Doesn’t this sound awfully similar to MiniDisc technology? The discs were only writable when heated by a laser. They were pretty impressive for the time… but not very fast, especially when writing.

thatradomguy@lemmy.world on 08 Feb 02:12 next collapse

Probably still with only a 1-year warranty…

Grapho@lemmy.ml on 08 Feb 06:22 collapse

And if it breaks at 10 months and they take another 2 to send your replacement, well, by then they no longer need to send one that actually works either

MonkeMischief@lemmy.today on 08 Feb 06:28 next collapse

Okay cool, cool, so does this mean ridiculous data centers will use these things, and then can I get another 4TB RED for my NAS so I can fit my whole life on a mirrored total of 8TB without paying 8x what it’s worth, please?

Thaaaaanks…

brygphilomena@lemmy.dbzer0.com on 08 Feb 14:23 next collapse

Is there a Lemmy community for trading surplus hardware yet?

I have a pile of HDDs and servers that I no longer use. I’ve transitioned almost all of mine to 20TB+. I might have 8 or 10 4TB REDs laying around. They’re old, though, probably with thousands of power-on hours in the SMART data.

I set up a community on midwest.social (because I’m in the midwest), since the lemmy.world one is dead: !homelabsales@midwest.social

yyprum@lemmy.dbzer0.com on 08 Feb 14:40 next collapse

Are you in Europe by any chance? :)

brygphilomena@lemmy.dbzer0.com on 08 Feb 14:44 collapse

Ah, no. Sorry. Midwest, USA.

yyprum@lemmy.dbzer0.com on 08 Feb 17:35 collapse

No apologies needed, I’m not even OP :) it was just a long shot :D

[deleted] on 09 Feb 02:08 next collapse

.

MonkeMischief@lemmy.today on 12 Feb 17:47 collapse

Right on!

I don’t know if there’s a hardware trading community yet. I think one challenge is simply how lemmy seems to aim for more general anonymity than reddit, and the DM system isn’t really used to my understanding. (Except by “that fediverse girl” LOL)

Establishing a sense of reasonable trustworthiness to thwart bad actors might take some work.

brygphilomena@lemmy.dbzer0.com on 19 Feb 23:18 collapse

I set up a community on midwest.social (because I’m in the midwest) !homelabsales@midwest.social

AndrewZabar@lemmy.world on 08 Feb 16:41 collapse

8TB? That’s my ideal RAM configuration lol. ;-)

wltr@discuss.tchncs.de on 09 Feb 04:51 collapse

If you’re not joking, what would you want a huge amount of RAM for on a server?

DeadDigger@lemmy.zip on 09 Feb 05:26 next collapse

Running more multi box copies of gw2

HiTekRedNek@lemmy.world on 09 Feb 15:39 next collapse

ZFS ARC, baby!

AndrewZabar@lemmy.world on 09 Feb 17:08 collapse

No, I was joking.

InFerNo@lemmy.ml on 08 Feb 12:58 next collapse

I’d put this in a mirror configuration tbh.

Fmstrat@lemmy.world on 08 Feb 13:52 next collapse

Question: Are failures due to issues on a specific platter? Meaning, could RAIDZ theoretically use specific platters as a way to replicate data and not require 140TB of resilvering on a failure?

Nilz@sopuli.xyz on 08 Feb 16:44 next collapse

IIRC, HDDs have some reserved sectors in case some go bad. But in practice, once you start having faulty sectors it’s usually a sign that the drive is dying and you should replace it ASAP.

I think if you know the drive topology you can technically create partitions at the platter level, but I don’t really see a reason why you’d do it. If the drive is dying you need to resilver the entire drive’s contents to a new disk anyway.

Andres4NY@social.ridetrans.it on 08 Feb 18:26 collapse

@Fmstrat @veeesix Since there are two very different questions there.. The first, "where do the failures happen?": anywhere. It could be the controller dying (in which case the platters themselves are fine if you replace the board, but otherwise the whole thing is toast). It could be the head breaking. It could be issues with a specific platter. It could be something that affects _all_ the platters (like dust getting inside the sealed area). So basically, it very much depends.

Andres4NY@social.ridetrans.it on 08 Feb 18:28 collapse

@Fmstrat @veeesix The second, could you do raid across specific platters - yes and no. The drive firmware specifically hides the details of the underlying platter layout. But if you targeted a specific model, you could probably hack something together that would do raid across the platters. But given the answer to the first question, why would you?

Fmstrat@lemmy.world on 08 Feb 19:01 collapse

Great answers, thank you.

Alpha71@lemmy.world on 08 Feb 17:45 next collapse

Okay. I want total honesty here. How many of you could actually fill that thing up?

greedytacothief@lemmy.dbzer0.com on 08 Feb 18:33 next collapse

With useful stuff? Never. With random bullshit I think might be useful some day if only I find the time? Easy

suzune@ani.social on 08 Feb 19:27 next collapse

… or be able to back it up?

LifeInMultipleChoice@lemmy.world on 08 Feb 19:28 next collapse

I remember Mac OS X having an issue with its Mail app a while back where it would continuously generate massive log files until they filled the entire drive. You had to boot to a recovery partition or similar to remove them and fix the issue, because the OS partition didn’t have enough room left to boot.

Imagine having 130 terabytes of invisible log files

thethunderwolf@lemmy.dbzer0.com on 09 Feb 16:21 collapse

keep the OS partition at like 4TB and make a separate partition for the pirated movies

alekwithak@lemmy.world on 08 Feb 20:05 next collapse

Archive.org, Anna’s archive, Jan 6 footage, Epstein files, there’s plenty to back up.

mlg@lemmy.world on 09 Feb 03:05 next collapse

No sweat, try mirroring a private tracker and you’ll very quickly run out lol. You need a couple of petabytes worth.

The real problem is the price of HDDs not going down due to lower production in light of SSDs.

I fully expect WD to drop this as some stupidly expensive SAS drive that almost no consumer will buy. They should at least apply the dual-actuator tech so we get faster HDDs at the same price.

JasSmith@sh.itjust.works on 09 Feb 15:17 collapse

I have a lot of Linux ISOs which are definitely not VR porn. I have 200TB total including parity disks, and 150TB usable. It’s a real pain in the ass to maintain so many disks, and the power bill isn’t fun either. I’d love to replace them with fewer disks.

nuko147@lemmy.world on 08 Feb 17:58 next collapse

What’s the point when the prices for 4-8TB disks have been stable for the last 5 years? (I think they’re even getting higher…)

sefra1@lemmy.zip on 08 Feb 18:10 next collapse

The point is that 8TB are too small, and not enough for my anime.

Agent641@lemmy.world on 08 Feb 23:53 next collapse

“Anime”

jj4211@lemmy.world on 09 Feb 04:43 collapse

Retaining that much detail on tentacles takes some drive space

Holytimes@sh.itjust.works on 10 Feb 04:13 collapse

See the problem is the details keep getting higher res, but we also never stopped to ask if 32 tentacles was too much…

nuko147@lemmy.world on 09 Feb 05:07 collapse

If the price per TB is stable you just buy 2 or 3 disks. It used to be that you’d buy one disk, because by the time you needed more space the price per TB would have dropped a lot (even halved).

remon@ani.social on 09 Feb 05:31 collapse

My NAS has a limited number of bays, so buying more low-storage disks isn’t a great option.

SpikesOtherDog@ani.social on 09 Feb 21:24 collapse

Buy a SAS adapter and put them in an external storage rack.

Zetta@mander.xyz on 08 Feb 19:02 next collapse

The point is the need for more and more data storage is never going to stop.

stressballs@lemmy.zip on 09 Feb 01:02 collapse

Yep. It’s absurd. Who spends that much on a 4TB?

pound_heap@lemmy.dbzer0.com on 08 Feb 20:58 next collapse

Does the increased density mean that the speed also goes up? It would be nice if a 7200 RPM drive could finally saturate SATA3 bandwidth.

frongt@lemmy.zip on 09 Feb 04:17 next collapse

No.

jj4211@lemmy.world on 09 Feb 04:41 collapse

Linear density could also boost throughput. Multiple actuators also exist.

DonutsRMeh@lemmy.world on 08 Feb 22:46 next collapse

And how much will that cost? Sounds like something fantastic for my Jellyfin server. I’ll have all the 4k HDR I can get my hands on.

Agent641@lemmy.world on 08 Feb 23:52 next collapse

If you have to ask, you can’t afford it 😭

Dozzi92@lemmy.world on 09 Feb 03:16 next collapse

Who’s Barry Badrinath?

DonutsRMeh@lemmy.world on 11 Feb 19:28 collapse

Maybe I can. The only thing you know about me is my username 😂

recklessengagement@lemmy.world on 09 Feb 05:37 next collapse

Going by the usual trend of $20+/TB, I’d say: fuckin’ expensive

turmacar@lemmy.world on 09 Feb 17:47 next collapse

For now, anyway; it used to be $20+/GB. I’ll settle for flooding the market with refurbished 16+TB drives.

DonutsRMeh@lemmy.world on 11 Feb 19:29 collapse

Very cheap. Just kidding. Fuck that shit

SaveTheTuaHawk@lemmy.ca on 09 Feb 17:51 collapse

I would not put 130TB on any one piece of hardware, because when it fails, it will be a very sad day.

SpikesOtherDog@ani.social on 09 Feb 21:00 next collapse

This hardware is for those who are storing EB of data.

Appoxo@lemmy.dbzer0.com on 10 Feb 13:14 next collapse

That’s why this is perfect for a distributed array or as a data mass grave.
You don’t really store anything in there that is needed often.

Guess why (deep-)archive S3 is so much cheaper than hot S3 storage: this is a big part of the reason.

DonutsRMeh@lemmy.world on 11 Feb 19:29 collapse

Don’t even mention it. I have real world experience in that area 😂

Ferroto@lemmy.world on 09 Feb 00:59 next collapse

If you’d asked me a year ago, I’d have told you that HDDs would be the next dead storage medium, but now SSDs cost more than I spent on my whole rig and HDDs are pushing 140 TB

filcuk@lemmy.zip on 09 Feb 21:38 next collapse

I wonder if tapes will make any sort of ‘comeback’ in the consumer market.

Appoxo@lemmy.dbzer0.com on 10 Feb 13:12 collapse

I just looked up prices for the servers we sell at work.
They saw a price increase of 47%.
The SSDs and RAM saw increases of about 25% and ~150% respectively.
Absolutely ludicrous and BS (ironically, both the price and the available stock increased, so it’s just preying on the market rather than an actual shortage lol)

harambe69@lemmy.dbzer0.com on 09 Feb 03:32 next collapse

Ya hmar

NewOldGuard@lemmy.ml on 09 Feb 15:58 collapse

Yalla

irmadlad@lemmy.world on 09 Feb 14:01 next collapse

We’ve come a long way:

<img alt="" src="https://lemmy.world/pictrs/image/9601da39-c890-49fb-9466-20eb45a53915.jpeg">

SaveTheTuaHawk@lemmy.ca on 09 Feb 17:50 collapse

That was my first USB thumb drive.

irmadlad@lemmy.world on 09 Feb 17:53 next collapse

IIRC that was 5 MB. It weighed about 2000 lbs

HertzDentalBar@lemmy.blahaj.zone on 10 Feb 00:01 collapse

Fee-fi-fo-fum.

leftzero@lemmy.dbzer0.com on 09 Feb 20:00 collapse

In a pinch the drive can also double as a flywheel battery.