Anyone running ZFS?
from blackstrat@lemmy.fwgx.uk to selfhosted@lemmy.world on 03 Oct 17:24
https://lemmy.fwgx.uk/post/266430

At the moment I have my NAS set up as a Proxmox VM, with a hardware RAID card handling six 2TB disks. My VMs run on NVMe drives, and the NAS VM handles data storage, with the RAIDed volume passed directly through to it in Proxmox as one large ext4 partition. Mostly photos, personal docs and a few films. Only I really use it. My desktop and laptop mount it over NFS. I have restic backups running weekly to two external HDDs. It all works pretty well and has for years.

I am now getting ZFS curious. I know I’ll need to flash the HBA to IT mode, or get another one. I’m guessing it’s best to create the zpool in Proxmox and pass that through to the NAS VM? Or would it be better to pass the individual disks through to the VM and manage the zpool from there?

#selfhosted


paperd@lemmy.zip on 03 Oct 17:29 next collapse

If you want multiple VMs to use the storage on the ZFS pool, it’s better to create it in Proxmox rather than passing raw disks through to the VM.

ZFS is awesome, I wouldn’t use anything else now.
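
For illustration, creating the pool on the Proxmox host and registering it as storage might look roughly like this (pool name, disk IDs and the storage name are placeholders, not from this thread):

```bash
# Create a RAIDZ2 pool on the Proxmox host using stable disk identifiers
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

# Register it as Proxmox storage so VM disks and containers can live on it
pvesm add zfspool tank-storage --pool tank --content images,rootdir
```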

SzethFriendOfNimi@lemmy.world on 03 Oct 17:32 next collapse

If I recall correctly it’s important to be running ECC memory right?

Otherwise corrupted bits/data can cause file system issues or data loss.

ShortN0te@lemmy.ml on 03 Oct 17:36 next collapse

You recall wrong. ECC is recommended for any server system but not necessary.

RaccoonBall@lemm.ee on 03 Oct 22:03 collapse

And if you don’t have ECC, ZFS just might save your bacon where a more basic fs would allow corruption.

avidamoeba@lemmy.ca on 04 Oct 18:52 next collapse

It might also save it from shit controllers and cables which ECC can’t help with. (It has for me)

conorab@lemmy.conorab.com on 07 Oct 11:12 collapse

I don’t think ZFS can do anything for you if you have bad memory other than help in diagnosing. I’ve had two machines running ZFS where they had memory go bad and every disk in the pool showed data corruption errors for that write and so the data was unrecoverable. Memory was later confirmed to be the problem with a Memtest run.
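
For anyone hitting the same thing, the commands involved are roughly these (pool name is a placeholder):

```bash
zpool status -v tank   # list checksum errors and the affected files
zpool scrub tank       # re-read and verify the whole pool after fixing the RAM
zpool clear tank       # reset the error counters once the scrub comes back clean
```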

snowfalldreamland@lemmy.ml on 03 Oct 20:27 collapse

I think ECC isn’t any more required for ZFS than for any other file system. But the idea many people have is that if somebody goes through the trouble of using RAID and ZFS, then the data must be important, and so ECC makes sense.

farcaller@fstab.sh on 04 Oct 06:29 collapse

ECC is slightly more important for ZFS because its ARC is generally more aggressive than the usual Linux caching subsystem. That said, it’s not a hard requirement. My current NAS was converted from my old Windows box (which apparently worked for years with bad RAM). ZFS uncovered the problem in the first 2 days by reporting (recoverable) data corruption in the pool. When I fixed the RAM issue and hash-checked against the old backup, all the data was good. So, effectively, ZFS uncovered memory corruption and remained resilient against it.
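
If the ARC’s appetite is a concern, it can be capped with a module parameter; a minimal sketch (the 8 GiB limit is just an example value):

```bash
# Cap the ZFS ARC at 8 GiB (value is in bytes), applied at module load
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u   # on Debian/Proxmox, so the setting also applies at boot

# Or adjust it at runtime without a reboot
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```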

blackstrat@lemmy.fwgx.uk on 03 Oct 19:31 collapse

What I have now is one VM that has the array volume passed through and the VM exports certain folders for various purposes to other VMs. So for example, my application server VM has read access to the music folder so I can run Emby. Similar thing for photos and shares out to my other PCs etc. This way I can centrally manage permissions, users etc from that one file server VM. I don’t fancy managing all that in Proxmox itself. So maybe I just create the zpool in Proxmox, pass that through to the file server VM and keep the management centralised there.
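
A rough sketch of that kind of per-folder export from the file server VM (paths, subnet and options are illustrative):

```bash
# /etc/exports on the file server VM - per-folder shares with different access
cat >> /etc/exports <<'EOF'
/tank/music   192.168.1.0/24(ro,no_subtree_check)
/tank/photos  192.168.1.0/24(rw,no_subtree_check)
EOF

exportfs -ra   # reload the export table
```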

scrubbles@poptalk.scrubbles.tech on 03 Oct 17:31 next collapse

I did, on Proxmox. One thing I didn’t know about ZFS: it does a lot of small random writes, I believe from logs and journaling. I killed 6 SSDs in 6 months. It’s a great system - but consumer SSDs can’t handle it.

ShortN0te@lemmy.ml on 03 Oct 18:04 next collapse

I’ve been using a consumer SSD for caching on ZFS for over 2 years now and have had no issues with it. It’s a 54 TB pool with tons of reads and writes.

SMART reports 14% used.
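
For reference, that wear figure can be read with smartctl (device names are examples):

```bash
# NVMe drives report wear directly as "Percentage Used"
smartctl -a /dev/nvme0 | grep -i "percentage used"

# SATA SSDs expose vendor attributes such as Total_LBAs_Written or Wear_Leveling_Count
smartctl -A /dev/sda | grep -Ei "wear|total_lbas_written"
```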

blackstrat@lemmy.fwgx.uk on 03 Oct 19:32 next collapse

Did you have atime on?
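
(For context, access-time updates can be checked and turned off per dataset; the pool/dataset name here is a placeholder:)

```bash
zfs get atime tank       # see whether access-time updates are on
zfs set atime=off tank   # turn them off, or use relatime=on as a middle ground
```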

avidamoeba@lemmy.ca on 04 Oct 03:54 collapse

That doesn’t sound right. Also random writes don’t kill SSDs. Total writes do and you can see how much has been written to an SSD in its SMART values. I’ve used SSDs for swap memory for years without any breaking. Heavily used swap for running VMs and software builds. Their total bytes written counters were increasing steadily but haven’t reached the limit and haven’t died despite the sustained random writes load. One was an Intel MacBook onboard SSD. Another was a random Toshiba OEM NVMe. Another was a Samsung OEM NVMe.

BlueEther@no.lastname.nz on 03 Oct 17:38 next collapse

I run Proxmox and a TrueNAS VM.

  • TrueNAS is on a virtual disk on an NVMe drive with all the other VMs/LXCs
  • I pass the HBA through to TrueNAS with PCI passthrough: 6-disk RAIDZ2. This is ‘vault’ and has all my backups of home dirs and photos etc
  • I pass through two HDDs as raw disks for bulk storage (of Linux ISOs): 2-disk mirrored ZFS

Seems to work well
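
A rough sketch of the HBA passthrough part (VM ID and PCI address are placeholders, and IOMMU needs to be enabled first):

```bash
# Find the HBA's PCI address on the Proxmox host
lspci | grep -iE "sas|lsi|hba"

# Hand the whole controller to the TrueNAS VM (ID 100 here)
qm set 100 -hostpci0 0000:01:00.0
```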

blackstrat@lemmy.fwgx.uk on 03 Oct 20:44 collapse

I’m starting to think this is the way to do it because it loses the dependency on Proxmox to a large degree.

minnix@lemux.minnix.dev on 04 Oct 16:09 collapse

Yes you don’t need Proxmox for what you’re doing.

blackstrat@lemmy.fwgx.uk on 04 Oct 16:58 collapse

I was thinking Proxmox would add a layer between the raw disks and the VM that might interfere with ZFS, similar to how a non-IT-mode HBA does. From what I understand now, the passthrough should be fine.

minnix@lemux.minnix.dev on 03 Oct 18:01 next collapse

ZFS is great, but to take advantage of its strengths you need the right drives; consumer drives get eaten alive, as @scrubbles@poptalk.scrubbles.tech mentioned, and your IO delay will be unbearable. I use Intel enterprise SSDs and have no issues.

scrubbles@poptalk.scrubbles.tech on 03 Oct 18:59 next collapse

No idea why you’re getting downvoted, it’s absolutely correct and it’s called out in the official Proxmox docs and forums. Proxmox logs and journals directly to the ZFS array regularly, to the point of drive-destroying amounts of writes.

blackstrat@lemmy.fwgx.uk on 03 Oct 19:25 next collapse

I’m not intending to run Proxmox on it. I have that running on an SSD, or maybe it’s an NVMe, I forget. This will just be for data storage, mainly photos, that one VM will manage and share out over NFS to other machines.

minnix@lemux.minnix.dev on 03 Oct 19:54 next collapse

Yes, I’m specifically referring to your ZFS pool containing your VMs/LXCs - enterprise SSDs for that. Get them on eBay. Just do a search on the Proxmox forums for enterprise vs consumer SSDs to see the problem with consumer hardware under ZFS. For Proxmox itself you want something like an NVMe with DRAM, ideally underprovisioned so the drive controller has a buffer of unused space for wear levelling.

scrubbles@poptalk.scrubbles.tech on 03 Oct 20:09 collapse

Ah, I’ll clarify that I set mine up next to the system drive in Proxmox, through the Proxmox ZFS helper program. There was probably something in there that configured things in a weird way.

ShortN0te@lemmy.ml on 03 Oct 19:39 collapse

What exactly are you referring to? ZIL? ARC? L2ARC? And what docs? Have not found that call out in the official docs.

blackstrat@lemmy.fwgx.uk on 03 Oct 20:40 next collapse

Could this be because it’s RAIDZ-2/3? They will be writing parity as well as data and the usual ZFS checksums. I am running RAID5 at the moment on my HBA card and my limit is definitely the 1Gbit network (roughly 120 MB/s) for file transfers, not the disks. And it’s only me that uses this thing; it sits totally idle 90+% of the time.

minnix@lemux.minnix.dev on 03 Oct 21:11 collapse

For ZFS what you want is PLP and high DWPD/TBW. This is what Enterprise SSDs provide. Everything you’ve mentioned so far points to you not needing ZFS so there’s nothing to worry about.

blackstrat@lemmy.fwgx.uk on 03 Oct 21:30 collapse

I won’t be running ZFS on any solid state media, I’m using spinning rust disks meant for NAS use.

My desire to move to ZFS is bitrot prevention and as a result of this:

www.youtube.com/watch?v=l55GfAwa8RI

minnix@lemux.minnix.dev on 03 Oct 21:44 collapse

Looking back at your original post, why are you using Proxmox to begin with for NAS storage??

blackstrat@lemmy.fwgx.uk on 04 Oct 05:13 collapse

The server runs Proxmox and one of the VMs runs as a fileserver. Other VMs and containers do other things.

RaccoonBall@lemm.ee on 03 Oct 22:06 next collapse

Complete nonsense. Enterprise drives are better for reliability if you plan on a ton of writes, but ZFS absolutely does not require them in any way.

Next you’ll say it needs ECC RAM

minnix@lemux.minnix.dev on 03 Oct 22:51 collapse

ZFS absolutely does not require them in any way.

Who said it does? Also regarding Proxmox:

…proxmox.com/…/2-node-cluster-with-the-the-least-…

forum.proxmox.com/threads/…/post-632197

avidamoeba@lemmy.ca on 04 Oct 17:07 collapse

And you probably know that sync writes will shred NAND while async writes are not that bad.

This doesn’t make sense. SSD controllers have been able to handle any write amplification under any load since SandForce 2.

Also most of the argument around speed doesn’t make sense, other than DC-grade SSDs being expected to be faster under sustained random loads. But we know how fast consumer SSDs are. We know their sequential and random performance, including sustained performance under constant load. There are plenty of benchmarks out there for most popular models. They’ll be as fast as those benchmarks on average. If that’s enough for the person’s use case, it’s enough. And they’ll handle as many TB of writes as advertised, and the amount of writes can be monitored through SMART.

And why would ZFS be any different than any other similar FS/storage system in regards to random writes? I’m not aware of ZFS generating more IO than needed. If that were the case, it would manifest in lower performance compared to other similar systems. When in fact ZFS is often faster. I think SSD performance characteristics are independent from ZFS.

Also OP is talking about HDDs, so not even sure where the ZFS on SSDs discussion is coming from.

minnix@lemux.minnix.dev on 04 Oct 17:40 collapse

There is no way to get acceptable IOPS out of HDDs within Proxmox. Your IO delay will be insane. You could at best stripe a ton of HDDs but even then one enterprise grade SSD will smoke it as far as performance goes. Post screenshots of your current Proxmox HDD/SSD disk setup with your ZFS pool, services, and IO delay and then we can talk. The difference that enterprise gives you is night and day.

blackstrat@lemmy.fwgx.uk on 05 Oct 06:20 collapse

Are you saying SSDs are faster than HDDs?

minnix@lemux.minnix.dev on 05 Oct 13:22 collapse

I was asking them to post their setup so I can evaluate their experience with regards to Proxmox and disk usage.

avidamoeba@lemmy.ca on 04 Oct 03:58 collapse

Not sure where you’re getting that. I’ve been running ZFS for 5 years now on bottom-of-the-barrel consumer drives - shucked drives and old drives. I’ve used 7 shucked drives total. One died during a physical move. The remaining 6 are still in use in my primary server. Oh, and the speed is superb. The current RAIDZ2, composed of the 6 shucked drives and 2 IronWolfs, does 1.3GB/s sequential reads and 4K write IOPS in the thousands. Oh, and this is all happening over USB in 2x 4-bay USB DAS enclosures.
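
Numbers like that can be roughly reproduced with fio; a sketch assuming a dataset mounted at /tank (sizes and runtimes are illustrative, and the test size should exceed the ARC to avoid just benchmarking RAM):

```bash
# Sequential read throughput
fio --name=seqread --directory=/tank --rw=read --bs=1M --size=8G --ioengine=psync

# 4K random write IOPS
fio --name=randwrite --directory=/tank --rw=randwrite --bs=4k --size=2G \
    --ioengine=psync --runtime=60 --time_based
```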

walden@sub.wetshaving.social on 03 Oct 19:57 next collapse

I use ZFS with Proxmox. I have it as a bind mount to Turnkey Fileserver (a default LXC template).

I access everything through NFS (via Turnkey Fileserver). Even other VMs just get the NFS share added to their fstab file. File transfers happen extremely fast VM to VM, even though it’s “network” storage.

This gives me the benefits of zfs, and NFS handles the “what if’s”, like what if two VMs access the same file at the same time. I don’t know exactly what NFS does in that case, but I haven’t run into any problems in the past 5+ years.

Another thing that comes to mind: you should make Turnkey Fileserver a privileged container, so that file ownership is done through the default user (1000 if I remember correctly). Unprivileged containers use wonky UIDs, which requires some magic config that you can find in the docs. It works either way, but I chose the privileged route. Others will have different opinions.
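
A minimal sketch of that layout (container ID, dataset and mount paths are placeholders):

```bash
# Bind-mount a ZFS dataset from the Proxmox host into the Turnkey Fileserver container (ID 110)
pct set 110 -mp0 /tank/data,mp=/srv/data

# On the other VMs, the share goes into /etc/fstab, e.g.:
# fileserver:/srv/data  /mnt/data  nfs  defaults,_netdev  0  0
```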

NeoNachtwaechter@lemmy.world on 03 Oct 20:29 next collapse

better to pass the individual disks through to the VM and manage the zpool from there?

That’s what I do.

I like it better this way, because there are fewer dependencies.

Proxmox boots from its own SSD; the VM that provides the NAS lives there, too.

The zpool (consisting of 5 good old hard disks) can easily be plugged in somewhere else if needed, and it carries the data of the NAS, but nothing else. I can rebuild the Proxmox base, I can reinstall that VM; they do not affect each other.

blackstrat@lemmy.fwgx.uk on 03 Oct 20:46 collapse

Good point. Having a small VM that just needs the HBA passed through sounds like the best idea so far. More portable, with fewer dependencies.

possiblylinux127@lemmy.zip on 03 Oct 23:09 next collapse

I use ZFS but you need to be very aware of its problems

Learn zpool
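
For anyone starting out, the day-to-day commands are roughly these (pool name is a placeholder):

```bash
zpool status -v tank   # health, errors, scrub/resilver progress
zpool list             # capacity and fragmentation per pool
zfs list               # datasets and space usage
zpool scrub tank       # verify all data against checksums; worth running regularly
zpool history tank     # audit trail of every operation done on the pool
```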

avidamoeba@lemmy.ca on 04 Oct 03:49 next collapse

Yes we run ZFS. I wouldn’t use anything else. It’s truly incredible. The only comparable choice is LVMRAID + Btrfs and it still isn’t really comparable in ease of use.

Chewy7324@discuss.tchncs.de on 04 Oct 17:37 collapse

Why LVM + BTRFS instead of only using btrfs? Unless you need RAID 5/6, which doesn’t work well on btrfs.

avidamoeba@lemmy.ca on 04 Oct 18:11 collapse

Unless you need RAID 5/6, which doesn’t work well on btrfs

Yes. Because they’re already using some sort of parity RAID so I assume they’d use RAID in ZFS/Btrfs and as you said, that’s not an option for Btrfs. So LVMRAID + Btrfs is the alternative. LVMRAID because it’s simpler to use than mdraid + LVM and the implementation is still mdraid under the covers.
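
For reference, an LVMRAID volume with Btrfs on top might be built roughly like this (device names and sizes are placeholders):

```bash
# RAID5 logical volume across four disks, with Btrfs on top
pvcreate /dev/sd[b-e]
vgcreate vg_data /dev/sd[b-e]
lvcreate --type raid5 --stripes 3 --size 1T --name data vg_data
mkfs.btrfs /dev/vg_data/data
```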

ikidd@lemmy.world on 04 Oct 19:44 next collapse

Most NAS VMs want you to pass them the raw device so they can manage ZFS themselves. For every other VM, I have the VM running on ZFS storage that Proxmox uses and manages, and it will manage the datasets for backup, snapshots, etc.

It is definitely the way to go. The ability to snapshot a VM or CT before updates alone is worth it.
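
For example, a pre-update snapshot and rollback are one-liners (IDs and the snapshot name are placeholders):

```bash
qm snapshot 101 pre-update    # snapshot a VM whose disks live on ZFS storage
pct snapshot 110 pre-update   # same for a container
qm rollback 101 pre-update    # roll back if the update goes sideways
```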

TheHolm@aussie.zone on 05 Oct 06:47 next collapse

Both work. Just don’t forget to assign fake serial numbers if you are passing disks. IMHO passing disks will be more performant, or maybe just pass the HBA controller through if the other disks are on a different controller.

blackstrat@lemmy.fwgx.uk on 05 Oct 15:04 collapse

Why fake serial numbers?

TheHolm@aussie.zone on 06 Oct 06:24 collapse

To stop guessing which HDD to replace when one fails. The VM can’t see the actual HDDs, as SMART is not forwarded.
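
A sketch of raw-disk passthrough with a serial set (VM ID, disk ID and serial string are placeholders):

```bash
# Pass a whole disk through by its stable ID and give it a serial the guest can see
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD20EFRX_EXAMPLE,serial=FAKESERIAL01

# Inside the guest the serial shows up under /dev/disk/by-id/, so a failed
# pool member can be matched back to a physical drive.
```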

Mio@feddit.nu on 05 Oct 06:48 next collapse

I am looking more into Btrfs for backup, because I run Linux and not BSD, ZFS requires more RAM, and I only have one disk. I want to benefit from snapshots, compression and deduplication.
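
A minimal single-disk sketch of those features (device and paths are placeholders; note that Btrfs deduplication is out-of-band, via a tool like duperemove):

```bash
mkfs.btrfs /dev/sdb                                    # single-disk Btrfs
mount -o compress=zstd /dev/sdb /mnt/backup            # transparent compression
btrfs subvolume create /mnt/backup/data                # subvolume to snapshot
btrfs subvolume snapshot -r /mnt/backup/data /mnt/backup/data-$(date +%F)
duperemove -dr /mnt/backup/data                        # out-of-band deduplication
```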

blackstrat@lemmy.fwgx.uk on 05 Oct 06:58 collapse

I used btrfs once. Never again!

Mio@feddit.nu on 05 Oct 18:04 collapse

Why?

blackstrat@lemmy.fwgx.uk on 05 Oct 19:42 collapse

It stole all my data. It’s a bit of a clusterfuck of a file system, especially one so old. This article gives a good overview: arstechnica.com/…/examining-btrfs-linuxs-perpetua… It managed to get into a state where it wouldn’t even let me mount it read-only. I even resorted to running commands whose documentation just said “only run this if you know what you’re doing”, but actually gave no guidance to understanding them - it was basically a command for the developer to use and no one else. It didn’t work anyway. Every other system that was using the same disks but with ext4 filesystems came back, and I was able to fsck them and continue on. I think they’re all still running without issue 6 years later.

For such an old file system, it has a lot of braindead design choices and a huge amount of unreliability.

snugglebutt@lemmy.blahaj.zone on 05 Oct 23:39 next collapse

‘short for “B-Tree File System”’. maybe i should stop reading it as butterfucks

Mio@feddit.nu on 06 Oct 16:18 collapse

Data loss is never fun. File systems in general need a long time to iron out all the bugs. Hope it is in a better state today. I remember when ext4 was new and crashed on a laptop. Ubuntu was too early to adopt it, or I did not use LTS.

But as always, make sure to have a proper backup in a different physical location.

zingo@sh.itjust.works on 07 Oct 13:17 collapse

Found a Swede in this joint! Cheers.

Mio@feddit.nu on 09 Oct 05:36 collapse

You will find many more at feddit.nu

corsicanguppy@lemmy.ca on 05 Oct 07:37 collapse

I’m running ZFS at two jobs and my homelab.

Terabytes and terabytes. Usually presented to the hypervisor as a LUN and managed on the VM itself.

I don’t run Proxmox, though. Some LDOMs, some ESX, soon oVirt.