Rough draft server/NAS is complete!
from ramenshaman@lemmy.world to selfhosted@lemmy.world on 17 Jul 05:56
https://lemmy.world/post/33080730

Just got all the hardware set up and working today, super stoked!

In the pic:

I went with the Raspberry Pi to save some money and keep my power consumption low. I’m planning to use the NAS for streaming TV shows and movies (probably with Jellyfin), replacing my Google Photos account (probably with Immich), and maybe streaming music (not sure what I might use for that yet). The Pi is running the desktop version of Raspberry Pi OS; I might switch to the Lite (headless) version. I’ve got all 5 drives set up and I’ve tested out streaming some stuff locally, including some 4K movies, so far so good!

For those wondering, I added the 5V buck converter because some people online said the SATA hat doesn’t do a great job of supplying power to the Pi if you’re only providing 12V to the barrel jack, so I’m going to run a USB-C cable to the Pi. I’m also using it to send 5V to the PWM pin on the fan. Might add some LEDs too, fuck it.

Next steps:

Any tips/suggestions are welcome! Will post again once I get the enclosure set up.

#selfhosted


avidamoeba@lemmy.ca on 17 Jul 06:12 next collapse

  • That power situation looks suspicious. You better know what you’re doing so you don’t run into under-voltage events under load.
  • Use ZFS RAIDz1 instead of RAID 5.
ramenshaman@lemmy.world on 17 Jul 06:15 collapse

Ultimately I would love to use ZFS but I read that it’s difficult to expand/upgrade. Not familiar with ZFS RAIDz1 though, I’ll look into it. Thanks!

I build robots for a living, the power is fine, at least for a rough draft. I’ll clean everything up once the enclosure is set up. The 12V supply is 10A which is just about the limit of what a barrel jack can handle and the 5V buck is also 10A, which is about double what the Pi 5 power supply can provide.

ryannathans@aussie.zone on 17 Jul 06:18 next collapse

Easier to expand than a typical raid 5 array

avidamoeba@lemmy.ca on 17 Jul 06:32 collapse

This. Also it’s not difficult to expand at all. There are multiple ways. Just ask here. You could also ask for hypothetical scenarios now if you like.

ryannathans@aussie.zone on 17 Jul 07:57 collapse

Could also google it lol

CmdrShepard49@sh.itjust.works on 17 Jul 06:29 next collapse

Z1 is just single parity.

AFAIK expanding a ZFS pool is a new feature. It’s used in Proxmox, but their version hasn’t been updated yet, so I don’t have the ability to try it out myself. It should be available to you otherwise.

Sweet build! I have all these parts laying around so this would be a fun project. Please share your enclosure design if you’d like!

avidamoeba@lemmy.ca on 17 Jul 06:36 collapse

Basically the equivalent of RAID 5 in terms of redundancy.

You don’t even need to do RAIDz expansion, although that feature could save some space. You can just add another redundant set of disks to the existing one. E.g. have a 5-disk RAIDz1 which gives you the space of 4 disks. Then maybe slap on a 2-disk mirror which gives you the space of 1 additional disk. Or another RAIDz1 with however many disks you like. Or a RAIDz2, etc. As long as the newly added space has adequate redundancy of its own, it can be seamlessly added to the existing one, “magically” increasing the available storage space. No fuss.
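
As a rough sketch only (the pool name and disk IDs below are placeholders), adding a mirrored pair to an existing pool is a one-liner:

sudo zpool add zfspool mirror /dev/disk/by-id/ata-NEWDISK1_SERIAL /dev/disk/by-id/ata-NEWDISK2_SERIAL

ZFS then spreads new writes across the old RAIDz1 vdev and the new mirror automatically.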

ramenshaman@lemmy.world on 17 Jul 07:21 next collapse

Awesome. It’s my understanding that ZFS can help prevent bit rot, so would ZFS RAIDz1 also do this?

I found this, it seems to show all the steps I would need to take to install RAIDz1: jeffgeerling.com/…/htgwa-create-zfs-raidz1-zpool-…

avidamoeba@lemmy.ca on 17 Jul 13:43 next collapse

Yes, it prevents bit rot. It’s why I switched to it from the standard mdraid/LVM/Ext4 setup I used before.

The instructions seem correct but there’s some room for improvement.

Instead of using logical device names like this:

sudo zpool create zfspool raidz1 sda sdb sdc sdd sde -f

You want to use hardware IDs like this:

sudo zpool create zfspool raidz1 /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA2FERAP /dev/disk/by-id/wwn-0x5000cca27dc48885 ...

You can discover the mapping of your disks to their logical names like this:

ls -la /dev/disk/by-id/*

Then you also want to add these options to the command:

sudo zpool create -o ashift=12 -o autotrim=on -O acltype=posixacl -O compression=lz4 -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa zfspool ...

These do useful things like setting the optimal block size, enabling compression (basically free performance), and applying a bunch of settings that make ZFS behave like a typical Linux filesystem (its defaults come from Solaris).

Your final create command should look like:

sudo zpool create -o ashift=12 -o autotrim=on -O acltype=posixacl -O compression=lz4 -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa zfspool raidz1 /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA2FERAP /dev/disk/by-id/wwn-0x5000cca27dc48885 ...

You can experiment until you get your final creation command since creation/destruction is very fast. Don’t hesitate to create/destroy multiple times until you get it right.
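
And if a test pool needs to go so the create command can be re-run with different options, tearing it down is one command (it wipes the pool, so only do this before any real data is on it):

sudo zpool destroy zfspool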

avidamoeba@lemmy.ca on 17 Jul 16:20 collapse

Updated ☝️ 👇

heatermcteets@lemmy.world on 17 Jul 07:28 collapse

Doesn’t losing a vdev cause the entire pool to be lost? I guess, to your point, as long as the new vdev has sufficient redundancy, single-drive redundancy is essentially the same risk whether it’s 3 disks or 5. If a vdev is added without redundancy, that would increase the risk of losing the entire pool.

avidamoeba@lemmy.ca on 17 Jul 13:33 collapse

Yes exactly.

Creat@discuss.tchncs.de on 17 Jul 07:49 next collapse

ZFS, specifically RAIDZx, can be expanded like any RAID 5/6 array these days, assuming support from the distro (it works with TrueNAS, for example). The patches for this were merged years ago now. Expanding any other array (like a striped mirror) is even simpler and is done by adding vdevs.

eneff@discuss.tchncs.de on 17 Jul 10:28 next collapse

RAIDZ expansion is now better than ever before!

At the beginning of this year (with ZFS 2.3.0) they added zero-downtime expansion along with some other things like enhanced deduplication.
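
If I remember right, the expansion itself is just an attach onto the existing RAIDz vdev (the pool, vdev and disk names below are placeholders):

sudo zpool attach zfspool raidz1-0 /dev/disk/by-id/ata-NEWDISK_SERIAL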

fmstrat@lemmy.nowsci.com on 17 Jul 12:46 collapse

ZFS is so… So much better. In every single way. Change now before it’s too late, learn and use the features as you go.

coaxil@lemmy.zip on 17 Jul 07:00 next collapse

This might be of interest to you; he also has his files up on MakerWorld:

www.youtube.com/watch?v=8CmYghBYT0o

ramenshaman@lemmy.world on 17 Jul 07:26 collapse

That’s a super clean build, too bad he didn’t make a 5 drive version.

coaxil@lemmy.zip on 17 Jul 07:39 collapse

Dude seems super responsive to input and requests, legit might do one if you hit him up. Also somehow missed that you’re running 5 drives, not 4! My bad

Estebiu@lemmy.dbzer0.com on 17 Jul 07:05 next collapse

Yeah, raid 5 in 2025 for a nas? A big no no

Pestdoktor@lemmy.world on 17 Jul 07:34 next collapse

I’m new to this topic and only recently learned about RAID levels. Why is it a big no no?

ramenshaman@lemmy.world on 17 Jul 08:24 collapse

I’m in the same boat. Based on the things I’ve learned in the last hour or two, ZFS RAIDz1 is just newer and better. Someone told me that ZFS will help prevent bit rot, which is a concern for me, so I’m assuming ZFS RAIDz1 also does this, though I haven’t confirmed it yet. I’m designing my enclosure now and haven’t looked into that yet.

Estebiu@lemmy.dbzer0.com on 17 Jul 12:45 collapse

Yup, it does that. You can run a scrub whenever you want and it’ll check everything manually. Or you can just open the files and it’ll check them at read time.
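
For example, with the pool name from the commands earlier in the thread:

sudo zpool scrub zfspool
sudo zpool status zfspool

The first command kicks off a full verification of every block against its checksum (repairing from parity where possible), and the second shows scrub progress and any errors it found.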

ramenshaman@lemmy.world on 17 Jul 07:35 next collapse

TIL. Looking into ZFS RAIDz1.

RunningInRVA@lemmy.world on 17 Jul 11:48 collapse

Also a no-no. ZFS RAIDz2 is what you want.

r00ty@kbin.life on 17 Jul 08:13 collapse

My understanding is that the only issues were the write hole on power loss for RAID 5/6 and rebuild failures due to unseen damage to surviving drives.

Issues with single drive rebuild failures should be largely mitigated by regular drive surface checks and scrubbing if the filesystem supports it. This should ensure that any single drive errors that might have been masked by raid are removed and all drives contain the correct data.
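
For example, on a plain mdraid array a surface check can be kicked off manually like this (assuming the array is md0; Debian-based distros also ship a monthly checkarray cron job that does the same thing):

echo check | sudo tee /sys/block/md0/md/sync_action
cat /proc/mdstat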

The write hole itself could be entirely mitigated since the OP is building their own system. What I mean by that is that they could include a "mini UPS" to keep 12V/5V up long enough to shut down gracefully in a power loss scenario (use a GPIO for a "power good" signal). Now, back in the day we had RAID controllers with battery backup to hold the cache memory contents and flush it to disk on regaining power, but those became super rare quite some time ago. Also, hardware RAID was always a problem when it came to getting a compatible replacement if the actual controller died.

Is there another issue with raid 5/6 that I'm not aware of?

ramenshaman@lemmy.world on 17 Jul 08:26 next collapse

they could include a “mini UPS” to keep 12v/5v up long enough to shut down gracefully in a power loss scenario

That’s a fuckin great idea.

r00ty@kbin.life on 17 Jul 08:55 collapse

I was looking at doing something similar with my Asustor NAS. That is, supply the voltage, battery and charging circuit myself, add one of those CH347 USB boards to provide I2C/GPIO etc., and have the charging circuit also provide a voltage-good signal that software on the NAS could poll and use to shut down.

ramenshaman@lemmy.world on 17 Jul 09:23 collapse

Nice. For the Pi 5 running Pi OS, do you think using a GPIO pin to trigger a sudo shutdown command would be graceful enough to prevent issues?

r00ty@kbin.life on 17 Jul 09:42 collapse

I think so. I would consider perhaps allowing a short time without power before doing that, to handle brief power cuts and brownouts.

So perhaps poll once per minute, and if there’s no power for more than 5 polls, trigger a shutdown. Make sure you can provide power for at least twice as long as the grace period. You could be a bit more flash and measure the battery voltage, and if it drops below a certain threshold, send a more urgent shutdown on another GPIO. But really, if the batteries are good for 20 mins+ then it should be quite safe to do it on a timer.

The logic could be a bit more nuanced, to handle multiple short power cuts in succession to shorten the grace period (since the batteries could be drained somewhat). But this is all icing on the cake I would say.

ramenshaman@lemmy.world on 17 Jul 17:27 collapse

“sudo shutdown” gives you a 60-second delay; “sudo shutdown now” doesn’t, and that’s what I usually use. I’m thinking I could launch a script on startup that checks a pin every x seconds and runs a shutdown command once it gets pulled low.
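
Something like this is what I have in mind, as a rough sketch only: it assumes libgpiod’s v1 command-line tools and a made-up pin number, and the gpiochip name differs between Pi OS/kernel versions.

#!/bin/bash
# Poll a "power good" GPIO once a minute and shut down after 5 bad readings in a row.
# PIN and gpiochip0 are assumptions; on some Pi 5 kernels the header pins sit on gpiochip4.
PIN=17
MISSES=0
while true; do
    if [ "$(gpioget gpiochip0 "$PIN")" = "1" ]; then
        MISSES=0
    else
        MISSES=$((MISSES + 1))
    fi
    if [ "$MISSES" -ge 5 ]; then
        sudo shutdown now
    fi
    sleep 60
done

Run it as root (e.g. from a systemd unit) at boot, and the 5-poll grace period covers the brownout case mentioned above.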

Estebiu@lemmy.dbzer0.com on 17 Jul 12:44 collapse

For me, RAID 5 has always been great, but ZFS is just… better. Snapshots, scrubs, datasets… I also like how you can export/import a pool super easily, and stuff. It’s just better overall.

nao@sh.itjust.works on 17 Jul 07:17 next collapse

so I’m going to run a USB-C cable to the Pi

Isn’t that already the case in the photo? It looks like the converter, including all that cabling, is only there to get 5V for the fan, but it’s difficult to see where the USB-C comes from.

ramenshaman@lemmy.world on 17 Jul 07:35 collapse

Good catch. I don’t have my USB-C cable coming from the buck converter set up yet; waiting on some parts to arrive tomorrow. The USB-C power is currently coming from a separate power supply in this setup. Ultimately, there will be a single 12V barrel jack to power the whole system.

Ek-Hou-Van-Braai@piefed.social on 17 Jul 08:02 next collapse

Nice I love it!!

I also have a "messy" setup like this, looking forward to 3D printing a case and then creating a cooling solution for it

justme@lemmy.dbzer0.com on 17 Jul 08:43 next collapse

I love it! The power part is what always blocks me. I would like to set up a couple of data SSDs, but I never know if you actually need the 3.3V rail etc., so currently I’ve put a PicoPSU on a micro-ATX board in a way-too-huge tower.

GreenKnight23@lemmy.world on 17 Jul 09:05 next collapse

PLA warps over time even at low heat. That said, as long as you have good airflow it shouldn’t be a problem to use it for the housing, but anything directly contacting the drives might warp.

I thought about doing this myself and was leaning towards reusing drive sleds from existing hardware. It’ll save on design and printing time, as well as alleviate problems with heat and the printed parts.

The sleds are usually pretty cheap on eBay, and you can always buy replacements without much effort.

ramenshaman@lemmy.world on 17 Jul 17:23 collapse

Printing the brackets for the drives in PLA now. I designed them to make minimal contact with the drives, so I think they’ll be OK. Even in the rough-draft setup the 140mm fan seems like overkill to keep them all cool. If the brackets warp I’ll reprint in something else. Polymaker recently released HT-PLA and HT-PLA-GF, which I’ve been eager to try.

remotelove@lemmy.ca on 17 Jul 09:39 next collapse

The fan is good, but the orientation seems like it would struggle pushing air between the drives. Maybe a push-pull setup with a second fan?

Onomatopoeia@lemmy.cafe on 17 Jul 12:42 collapse

I’d at least flip the drive on the right so its underside is closer to the fan, as that side gets hotter in my experience, so it would have more effective cooling.

ramenshaman@lemmy.world on 19 Jul 23:15 collapse

Printing an enclosure now that will significantly improve the airflow

Allero@lemmy.today on 17 Jul 10:25 next collapse

I would argue that both RAID 5 and ZFS RAIDz1 are inherently unsafe, since recovery takes a lot of read-write operations, and you’d better pray every one of the 4 remaining drives holds up well even after one has clearly failed.

I’ve witnessed many people losing their data this way, even among prominent tech folks (looking at you, LTT).

RAID6/ZFS RAIDz2 is the way. Yes, you’re gonna lose quite a bit more space (leaving 24TB vs 32TB), but added reliability and peace of mind are priceless.

(And, in any case, make backups for anything critical! RAID is not a backup!)
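
For what it’s worth, the create command from earlier in the thread only changes in one word, raidz2 instead of raidz1 (disk IDs elided as before):

sudo zpool create -o ashift=12 -o autotrim=on -O acltype=posixacl -O compression=lz4 -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa zfspool raidz2 /dev/disk/by-id/ata-... /dev/disk/by-id/wwn-... ...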

Poem_for_your_sprog@lemmy.world on 17 Jul 11:35 collapse

How do I do I how I do I do I how do

cellardoor@lemmy.world on 17 Jul 12:06 next collapse

RAIDZ1. RAID 5 is historically plagued by issues and just not a reliable bet.

Onomatopoeia@lemmy.cafe on 17 Jul 12:39 collapse

RAID 5 is fine, as part of a storage and data management plan. I run it on an older NAS, though it can do RAID 6.

No RAID is reliable in the sense of “it’ll never fail”: fault tolerance has been added to it over the years, but it’s still a storage pool spread across multiple drives.

ZFS adds to its fault resistance, but you’d still better have proper backups/redundancy.

jagermo@feddit.org on 17 Jul 12:41 collapse

Yolo it and mergerFS all the disks into one!

LifeInMultipleChoice@lemmy.world on 17 Jul 13:15 collapse

Then encrypt the drive(s), and auto run a split command that ensures the data is stored all over. Your launcher can have a built in cat command to ensure it takes longer to start the files, but this way we know when one drive dies, that data is straight fucked

SkyezOpen@lemmy.world on 17 Jul 13:51 collapse

Sitting on a chair with a hammer suspended above your nutsack and having a friend cut the rope at a random time will provide the same effect and surprise with much less effort.

Aceticon@lemmy.dbzer0.com on 17 Jul 14:01 collapse

Dust is going to be a problem after some months (well, maybe not that much electrically, but it makes it a PITA to keep clean), especially for the Raspberry Pi.

Consider getting (or, even better, 3D printing) an enclosure for it at least (maybe the HDDs will be fine as they are, since the fan keeps the air moving and dust probably can’t actually settle on them).

ramenshaman@lemmy.world on 17 Jul 17:17 collapse

I’ve got that covered. I got a filter for the big intake fan. Printing the first batch of enclosure parts now.