I’m done with RAID5
from eagerbargain3@lemmy.world to jellyfin@lemmy.ml on 11 Feb 11:13
https://lemmy.world/post/43002985

It doesn’t make sense for me anymore… I’ve found more downsides than real benefits to my RAID5 array.

My setup: 5 × 22TB disks in RAID5

I have another NAS that’s all-SSD, with 8 SSDs, but I hate the nature of RAID on SSDs: most of the time they die unexpectedly. I’d rather lose 4TB than put 30TB at risk if 2 or more SSDs decide to stop working.

So duplicating onto another disk in a mirror (rsync) is maybe better for me.
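A minimal sketch of that rsync mirror. The demo below runs on temporary directories; in practice the two arguments would be real mount points like `/mnt/primary/` and `/mnt/mirror/` (hypothetical paths, not my actual setup):

```shell
# Demo on temporary directories; swap in real mount points in practice.
src=$(mktemp -d)
dst=$(mktemp -d)
echo "x" > "$src/movie.mkv"

# -a preserves permissions, times, and symlinks;
# --delete propagates deletions so dst stays an exact copy of src.
rsync -a --delete "$src/" "$dst/"
```

Run from cron or a systemd timer, this gives a simple one-way mirror; note that `--delete` also propagates accidental deletions, so it’s a mirror, not a backup.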

#jellyfin


MentalEdge@sopuli.xyz on 11 Feb 11:19

Pretty much.

My volumes are either RAID1 or mergerfs.

truthfultemporarily@feddit.org on 11 Feb 11:27

RAID5 has been dead in commercial contexts for around 10 years. The reason is that the resilver time is just too long. Now mostly you either use striped mirrors or do redundancy at the software level.

mbirth@lemmy.ml on 11 Feb 12:18

Now mostly you either use striped mirrors

How is rebuilding an xx TB mirrored disk faster than rebuilding an xx TB disk that’s part of a RAID? Since most modern NASes use software RAID, it’s only a matter of tweaking a few parameters to speed up the rebuild process.
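For mdraid, the parameters in question are the kernel’s resync speed limits. A sketch (values in KiB/s per device, illustrative, not recommendations):

```
# sysctl fragment, e.g. /etc/sysctl.d/90-md-resync.conf (values illustrative)
dev.raid.speed_limit_min = 50000     # floor: resync won't drop below this
dev.raid.speed_limit_max = 2000000   # ceiling: cap to protect foreground I/O
```

The same limits can be set per array at runtime via `/sys/block/mdX/md/sync_speed_min` and `sync_speed_max`.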

MentalEdge@sopuli.xyz on 11 Feb 12:44

Rebuilding parity requires processing power. Copying a mirror does not.

There’s also the fact that the rebuild stresses the drives, increasing the chance of a cascade failure, where the rebuild triggered by one drive failure reveals other failing drives.

It all results in management overhead, which having to “just tweak some parameters” makes worse, not better.

In comparison to simple mirroring and backing up offsite, RAID is a headache.

Both the redundancy and the storage pooling it provides are better achieved in other ways.

mbirth@lemmy.ml on 11 Feb 13:09

Rebuilding parity requires processing power.

That shouldn’t be an issue with any NAS bought in the past decade.

the rebuild stresses the drives

You can tweak the parameters so the rebuild is being done slower. Also, mirroring a disk stresses the (remaining) disk as well. (But to be fair, if that one fails, you’ll still be able to access the data from the other mirror-pair(s).)

It all results in management overhead

I’m not seeing that. Tweaking parameters is not necessary unless you want to change the default behaviour. Default behaviour is fine in most cases.

In comparison […] RAID is a headache.

Speak for yourself. I rather enjoy the added storage capacity.

MentalEdge@sopuli.xyz on 11 Feb 13:15

I rather enjoy the added storage capacity.

So do I.

It’s just that I use btrfs, mergerfs, or LVM to pool storage. Not RAID.

Making changes to my storage setup is far easier using these options, much more so than RAID.

Mergerfs especially makes adding or removing capacity truly trivial, with the only lengthy processes involved being bog-standard file transfers.
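For comparison, a mergerfs pool is essentially a single fstab line; the paths and options here are an illustrative sketch, not my exact config:

```
# /etc/fstab fragment (paths illustrative)
# category.create=mfs writes new files to the branch with the most free space;
# moveonenospc retries a failed full-disk write on another branch.
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  mergerfs  defaults,allow_other,category.create=mfs,moveonenospc=true,minfreespace=50G  0 0
```

Growing the pool is then just formatting a new disk and adding it to the colon-separated branch list.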

Hard drive storage is pretty cheap. And the effort it takes to make changes to a RAID volume as my needs change over the years just isn’t worth the savings.

mbirth@lemmy.ml on 11 Feb 13:32

How often do you change your storage setup? I’ve configured everything once like 5 years ago and haven’t touched it since. I can add larger disks in pairs and the Synology does some LVM-/mdraid-magic to add the newly available free space as RAID1 until I add a third larger disk and it remodels it to RAID5.

How do you handle parity with MergerFS? Or are all your storage partitions mirrored?

Hard drive storage is pretty cheap.

Not really, especially if you’re looking for CMR drives. And any storage increase needs at least 2 disks, with basically no (ethical) way to get any money back for the old ones.

MentalEdge@sopuli.xyz on 11 Feb 13:46

Every year or so.

My NAS is self-built.

I used to buy one more drive whenever my pools would start getting full. I’m now at a point where I can discard data about as fast as I acquire more to store, so I don’t predict needing new drives until one fails.

I’ve re-arranged my volumes to increase or decrease parity many times after buying drives or instead of buying drives.

Mergerfs makes access easy, the underlying drives are either with or without parity pairs, and I have things arranged so that critical files are always stored with mirroring, while non-critical files are not.

mbirth@lemmy.ml on 11 Feb 14:46

Interesting! Thank you for that insight. I might adopt some methods for when I finally replace the Synology with a new NAS (which will definitely not be another Synology device!).

truthfultemporarily@feddit.org on 11 Feb 12:48

It’s not faster, but you’re safer during it. With RAID5, you cannot survive a second disk failure during the resilver. With striped mirrors, a second failure destroys data only if it hits the surviving half of the degraded pair; in a 4-disk pool, that’s at most a 1/3 chance of losing everything.

mbirth@lemmy.ml on 11 Feb 13:20

Agreed. However, mirroring the remaining disk onto a new one stresses it and makes it more likely to fail too, I guess?

I think the more important rule would be to not buy two disks from the same batch. And then go with whatever tickles your fancy.

4grams@awful.systems on 11 Feb 12:40

This is why I went back to a simple SnapRAID and mergerfs setup. It only spins up the disk it’s using: slow, but a lot more efficient. It’s also based on dead-simple ext4 drives, which all stay accessible even if the software fails; it’s all file-level. I’ve lost many drives over the years and have successfully rebuilt every time.

Scale is about the same as yours, about 24TB made up of 4TB and 10TB disks. It’s unglamorous, it’s old school, but it works and is reliable.
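For reference, a setup of that shape is just a short snapraid.conf (paths illustrative, not my exact layout):

```
# /etc/snapraid.conf fragment: one parity disk protects the data disks,
# and every disk stays an independent, mountable ext4 volume.
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/data1/snapraid.content
data d1 /mnt/data1/
data d2 /mnt/data2/
```

`snapraid sync` computes parity after changes, and `snapraid fix` rebuilds a lost disk file by file.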

tenchiken@anarchist.nexus on 11 Feb 12:52

BTRFS

RAID5

Wat….

If you NEED uptime, use mdraid or ZFS.

BTRFS and RAID5 is NOT production ready.

eagerbargain3@lemmy.world on 11 Feb 13:05

Don’t need uptime… I tried TrueNAS SCALE on my UGREEN NAS with 64GB RAM (no ECC) and didn’t like it. ZFS is great, no question, but the learning curve and the risk of losing my whole array are too high (even as two RAIDZ2 arrays). A pure ext4 JBOD with replication once in a while is enough, and more energy-efficient for media.

Anyway, I plan to keep re-encoding most of it to AV1 down the line, so my storage needs will shrink over time (over a long period).

exu@feditown.com on 12 Feb 11:50

RAID (any form of it) is an uptime technology. If you don’t need uptime, you don’t need RAID.

moonpiedumplings@programming.dev on 11 Feb 22:56

No, isn’t that only software RAID5 done via btrfs?

Btrfs + hardware RAID should work fine. The OS can’t tell the difference anyway.

tenchiken@anarchist.nexus on 12 Feb 06:01

Yeah but that’s not what I interpreted it as. OP might be using either I suppose.

Personally, hardware RAID irritates me since recovery scenarios are harder to get out of without $$$. I’ve had more luck with mdraid recovery than with several vendors of hardware RAID.

I do think btrfs is cool, but like all things, it has caveats.

TomB19@lemmy.ml on 12 Feb 06:53

Fact this.