Tell me why I shouldn't use btrfs
from possiblylinux127@lemmy.zip to selfhosted@lemmy.world on 23 Nov 02:34
https://lemmy.zip/post/26769148

About a year ago I switched to ZFS for Proxmox so that I wouldn’t be running a technology preview.

Btrfs gave me no issues for years and I even replaced a dying disk with no issues. I use raid 1 for my Proxmox machines. Anyway, I moved to ZFS and it has been a less than ideal experience. The separate kernel modules mean that I can’t downgrade the kernel, plus the performance on my hardware is abysmal. I only get around 50-100 MB/s versus the several hundred I would get with btrfs.

Any reason I shouldn’t go back to btrfs? There seems to be a community fear of btrfs eating data or having unexplainable errors. That is sad to hear as btrfs has had lots of time to mature in the last 8 years. I would never have considered it 5-6 years ago but now it seems like a solid choice.

Anyone else pondering or using btrfs? It seems like a solid choice.

#selfhosted

just_another_person@lemmy.world on 23 Nov 02:45 next collapse

If it didn’t give you problems, go for it. I’ve run it for years and never had issues either.

vividspecter@lemm.ee on 23 Nov 02:53 next collapse

No reason not to. Old reputations die hard, but it’s been many many years since I’ve had an issue.

I also like that btrfs is a lot more flexible than ZFS, which is pretty strict about the size and number of disks, whereas you can upgrade a btrfs array ad hoc.

I’ll add to avoid RAID5/6 as that is still not considered safe, but you mentioned RAID1 which has no issues.

TwiddleTwaddle@lemmy.blahaj.zone on 23 Nov 04:05 collapse

I’ve been vaguely planning on using btrfs in raid5 for my next storage upgrade. Is it really so bad?

vividspecter@lemm.ee on 23 Nov 05:12 next collapse

Check status here. It looks like it may be a little better than the past, but I’m not sure I’d trust it.

An alternative approach I use is mergerfs + snapraid + snapraid-btrfs. This isn’t the best idea for a system drive, but for something like a NAS it works well, and snapraid-btrfs doesn’t have the write-hole issues that normal snapraid does, since it operates on read-only snapshots instead of the live data. A rough sketch of the stack is below.
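
For anyone curious what that looks like, a minimal sketch (mount points, disk names, and the 10% parity figure are just placeholders, not a recommended layout):

    # /etc/snapraid.conf -- one parity disk protecting two data disks
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid/snapraid.content
    content /mnt/disk1/snapraid.content
    data d1 /mnt/disk1/
    data d2 /mnt/disk2/

    # /etc/fstab -- mergerfs pools the data disks into one mount point
    /mnt/disk* /mnt/storage fuse.mergerfs allow_other,category.create=mfs,fsname=pool 0 0

snapraid-btrfs then, as I understand it, runs the snapraid sync/scrub against read-only snapper snapshots of the data disks rather than the live files, which is where the write-hole protection comes from.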

sntx@lemm.ee on 23 Nov 11:42 collapse

It’s affected by the write-hole phenomenon. In btrfs’s case that can mean that perfectly good old data might be corrupted without any notice.

SendMePhotos@lemmy.world on 23 Nov 03:24 next collapse

I run it now because I wanted to try it. I haven’t had any issues. A friend recommended it as a stable option.

Bookmeat@lemmy.world on 23 Nov 03:35 next collapse

A bit off topic; am I the only one that pronounces it “butterface”?

wrekone@lemmy.dbzer0.com on 23 Nov 04:03 next collapse

Not anymore.

myersguy@lemmy.simpl.website on 23 Nov 04:11 collapse

You son of a bitch, I’m in.

uhmbah@lemmy.ca on 23 Nov 04:29 next collapse

Ah feck. Not any more.

combatfrog@sopuli.xyz on 23 Nov 13:07 next collapse

Similarly, I read bcachefs as BCA Chefs 😅

prole@lemmy.blahaj.zone on 23 Nov 14:02 next collapse

Isn’t it meant to be like “better FS”? So you’re not too far off.

Asparagus0098@sh.itjust.works on 23 Nov 14:44 next collapse

i call it “butter FS”

blackstrat@lemmy.fwgx.uk on 24 Nov 22:34 collapse

It was meant to be Better FS, but it corrupted itself to btrfs without noticing.

downhomechunk@midwest.social on 24 Nov 01:25 next collapse

I call it butter fuss. Yours is better.

adept@programming.dev on 24 Nov 19:10 collapse

Related, and I cannot help but read “bcachefs” as “bitch café”

tychosmoose@lemm.ee on 23 Nov 03:46 next collapse

Using it here. Love the flexibility and features.

horse_battery_staple@lemmy.world on 23 Nov 03:53 next collapse

Do you rely on snapshotting and journaling? If so, back up your snapshots.

possiblylinux127@lemmy.zip on 23 Nov 06:07 collapse

Why?

I already take backups but I’m curious if you have had any serious issues

horse_battery_staple@lemmy.world on 23 Nov 10:57 collapse

Are you backing up files from the FS or are you backing up the snapshots? I had a corrupted journal from a power outage that borked my install. Could not get to the snapshots on boot. Booted into a live disk and recovered the snapshot that way. It would’ve taken hours to restore from a standard backup; restoring the snapshot took minutes.

If you’re not backing up BTRFS snapshots and just backing up files you’re better off just using ext4.

github.com/digint/btrbk
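
btrbk automates this, but the underlying mechanism is just btrfs send/receive; a rough sketch (paths and snapshot names are examples):

    # snapshots must be read-only to be sent
    btrfs subvolume snapshot -r / /.snapshots/root-2024-11-23
    btrfs send /.snapshots/root-2024-11-23 | btrfs receive /mnt/backup/
    # later runs only transfer the delta against the previous snapshot
    btrfs subvolume snapshot -r / /.snapshots/root-2024-11-24
    btrfs send -p /.snapshots/root-2024-11-23 /.snapshots/root-2024-11-24 | btrfs receive /mnt/backup/

Restoring is the same operation in the other direction, which is why it’s so much faster than replaying a file-level backup.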

Lem453@lemmy.ca on 23 Nov 03:57 next collapse

Btrfs only has issues with raid 5. Works well for raid 1 and 0. No reason to change if it works for you
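
For reference, a two-disk btrfs raid1 is a one-liner (device names are placeholders):

    mkfs.btrfs -m raid1 -d raid1 /dev/sdX /dev/sdY   # mirror both metadata and data
    btrfs filesystem usage /mnt                      # after mounting, shows allocation per profile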

possiblylinux127@lemmy.zip on 23 Nov 06:06 next collapse

It is stable with raid 0, 1, and 10.

Raid 5 and 6 are dangerous

blackstrat@lemmy.fwgx.uk on 24 Nov 22:33 collapse

I think it has more issues than just raid 5 & 6!

catloaf@lemm.ee on 23 Nov 04:42 next collapse

Meh. I run proxmox and other boot drives on ext4, data drives on xfs. I don’t have any need for additional features in btrfs. Shrinking would be nice, so maybe someday I’ll use ext4 for data too.

I started with zfs instead of RAID, but I found I spent way too much time trying to manage RAM and tuning it, whereas I could just configure RAID 10 once and be done with it. The performance differences are insignificant, since most of the work it does happens in the background.

You can benchmark them if you care about performance. You can find plenty of discussion by googling “ext vs xfs vs btrfs” or whichever ones you’re considering. They haven’t changed that much in the past few years.

WhyJiffie@sh.itjust.works on 23 Nov 04:55 next collapse

but I found I spent way too much time trying to manage RAM and tuning it,

I spent none, and it works fine. what was your issue?

catloaf@lemm.ee on 23 Nov 05:10 collapse

I have four 6tb data drives and 32gb of RAM. When I set them up with zfs, it claimed quite a few gb of RAM for its cache. I tried allocating some of the other NVMe drive as cache, and tried to reduce RAM usage to reasonable levels, but like I said, I found that I was spending a lot of time fiddling instead of just configuring RAID and have it running just fine in much less time.

MangoPenguin@lemmy.blahaj.zone on 23 Nov 16:51 collapse

You can ignore the RAM usage, it’s just cache. It uses up to half your RAM by default but if other things need it zfs will just clear RAM for that to happen.

catloaf@lemm.ee on 23 Nov 17:00 collapse

That might be what was supposed to happen, but when I started up the VMs I saw memory contention.

possiblylinux127@lemmy.zip on 23 Nov 06:04 collapse

Proxmox only supports btrfs or ZFS for raid

Or at least that’s what I thought

MangoPenguin@lemmy.blahaj.zone on 23 Nov 16:52 collapse

ext4 and others too.

possiblylinux127@lemmy.zip on 23 Nov 17:03 collapse

For raid?

MangoPenguin@lemmy.blahaj.zone on 24 Nov 00:01 collapse

You could do it with mdadm

possiblylinux127@lemmy.zip on 24 Nov 01:13 collapse

Not on Proxmox

suzune@ani.social on 23 Nov 05:10 next collapse

The question is how you get such bad performance with ZFS.

I just tried to read a large file and it gave me uncached 280 MB/s from two mirrored HDDs.

The fourth run (obviously cached) gave me over 3.8 GB/s.

possiblylinux127@lemmy.zip on 23 Nov 06:02 collapse

I have never heard of anyone getting those speeds without dedicated high end hardware

Also the write will always be your bottleneck.

Moonrise2473@feddit.it on 23 Nov 07:04 next collapse

I have similar speeds on a truenas that I installed on a simple i3 8100

possiblylinux127@lemmy.zip on 23 Nov 08:12 collapse

How much ram and what is the drive size?

I suspect this also could be an issue with SSDs. I have seen a lot of posts around describing similar performance on SSDs.

Moonrise2473@feddit.it on 23 Nov 10:02 collapse

64 GB of ECC RAM (48 GB used as ZFS cache) with 2 TB drives (3 of them)

possiblylinux127@lemmy.zip on 23 Nov 17:14 collapse

Yeah it sounds like I don’t have enough ram.

sugar_in_your_tea@sh.itjust.works on 23 Nov 21:33 collapse

ZFS really likes RAM, so if you’re running anything less than 16GB, that could be your issue.

possiblylinux127@lemmy.zip on 24 Nov 01:18 collapse

From the Proxmox documentation:

As a general rule of thumb, allocate at least 2 GiB Base + 1 GiB/TiB-Storage. For example, if you have a pool with 8 TiB of available storage space then you should use 10 GiB of memory for the ARC.

I changed the ARC size on all my machines to 4GB and it runs a bit better; I am getting much better performance. I thought I had changed it before, but I didn’t regenerate the initramfs so it didn’t apply. I am still having issues with VM transfers locking up the cluster but that might be fixable by tweaking some settings.

16GB might be overkill or underkill depending on what you are doing.
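
For anyone else hitting this, the Proxmox-documented way to cap the ARC is a module option; the value is in bytes (4 GiB here), and the initramfs rebuild is the step that’s easy to forget:

    # /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=4294967296

    update-initramfs -u -k all    # ZFS loads early, so rebuild the initramfs, then reboot
    # or apply immediately without rebooting:
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max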

stuner@lemmy.world on 23 Nov 07:50 next collapse

I’m seeing very similar speeds on my two-HDD RAID1. The computer has an AMD 8500G CPU but the load from ZFS is minimal. Reading / writing a 50GB /dev/urandom file (larger than the cache) gives me:

  • 169 MB/s write
  • 254 MB/s read

What’s your setup?
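
If anyone wants to reproduce that test, a rough sketch (pool name and path are placeholders; note /dev/urandom itself can cap write throughput on older kernels):

    # write ~50 GiB of incompressible data
    dd if=/dev/urandom of=/tank/test.bin bs=1M count=51200 conv=fsync status=progress
    # export/import empties the ARC so the read comes from the disks (only works if nothing is using the pool)
    zpool export tank && zpool import tank
    dd if=/tank/test.bin of=/dev/null bs=1M status=progress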

possiblylinux127@lemmy.zip on 23 Nov 08:09 collapse

Maybe I am CPU bottlenecked. I have a mix of i5-8500 and i7-6700k

The drives are a mix but I get almost the same performance across machines

stuner@lemmy.world on 23 Nov 08:34 collapse

It’s possible, but you should be able to see it quite easily. In my case, the CPU utilization was very low, so the same test should also not be CPU-bottlenecked on your system.

possiblylinux127@lemmy.zip on 23 Nov 17:28 collapse

Is your machine part of a cluster by chance? If so, when you do a VM transfer what performance do you see?

stuner@lemmy.world on 23 Nov 22:26 collapse

Unfortunately, I can’t help you with that. The machine is not running any VMs.

suzune@ani.social on 23 Nov 08:36 collapse

This is an old PC (Intel i7 3770K) with 2 HDDs (16 TB) attached to onboard SATA3 controller, 16 GB RAM and 1 SSD (120 GB). Nothing special. And it’s quite busy because it’s my home server with a VM and containers.

tripflag@lemmy.world on 23 Nov 05:31 next collapse

Not proxmox-specific, but I’ve been using btrfs on my servers and laptops for the past 6 years with zero issues. The only times it’s bugged out were due to bad hardware, and having the filesystem shouting at me to make me aware of that was fantastic.

The only place I don’t use btrfs is for my nas data drives (since I want raidz2, and btrfs raid5 is hella shady) but the nas rootfs is btrfs.
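
That “shouting” is essentially what a scrub and the per-device error counters give you; worth running periodically:

    btrfs scrub start -B /    # re-reads everything and verifies checksums (-B waits for completion)
    btrfs device stats /      # per-device read/write/corruption error counters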

cmnybo@discuss.tchncs.de on 23 Nov 05:42 next collapse

Don’t use btrfs if you need RAID 5 or 6.

The RAID56 feature provides striping and parity over several devices, same as the traditional RAID5/6. There are some implementation and design deficiencies that make it unreliable for some corner cases and the feature should not be used in production, only for evaluation or testing. The power failure safety for metadata with RAID56 is not 100%.

btrfs.readthedocs.io/en/latest/btrfs-man5.html#ra…

lurklurk@lemmy.world on 23 Nov 11:01 next collapse

Or run the raid 5 or 6 separately, with hardware raid or mdadm

Even for simple mirroring there’s an argument to be made for running it separately from btrfs using mdadm. You do lose the benefit of btrfs being able to automatically pick the valid copy on localised corruption, but the admin tools are easier to use and more proven in a case of full disk failure, and if you run an encrypted block device you need to encrypt half as much stuff.
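
A minimal sketch of that layering (device names are placeholders; put LUKS between md and the filesystem if you want the single encrypted device mentioned above):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY
    mkfs.btrfs /dev/md0     # btrfs sees a single device; mdadm handles the mirroring
    mount /dev/md0 /mnt/data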

Eideen@lemmy.world on 23 Nov 14:57 next collapse

I have no problem running it with raid 5/6. The important thing is to have a UPS.

dogma11@lemmy.world on 23 Nov 16:23 collapse

I’ve been running a btrfs storage array with data on raid5 and metadata on raid1 (I believe) for the last 5 or so years and have yet to have a problem because of it. I did unfortunately learn not to fully trust the Windows btrfs driver, but was fortunately able to restore from backups and redownloading.

I wouldn’t hesitate to set it up again for myself or anybody else, and adding a UPS would be icing on the cake. (I added UPS to my setup this last summer)

Anonymouse@lemmy.world on 23 Nov 16:39 collapse

I’ve got raid 6 at the base level and LVM for partitioning and ext4 filesystem for a k8s setup. Based on this, btrfs doesn’t provide me with any advantages that I don’t already have at a lower level.

Additionally, for my system, btrfs uses more bits per file or something, such that I was running out of disk space vs ext4. Yeah, I can go buy more disks, but I like to think that I’m running at peak efficiency, using all the bits, with no waste.

sugar_in_your_tea@sh.itjust.works on 23 Nov 21:29 collapse

btrfs doesn’t provide me with any advantages that I don’t already have at a lower level.

Well yeah, because it’s supposed to replace those lower levels.

Also, BTRFS does provide advantages over ext4, such as snapshots, which I think are fantastic since I can recover if things go sideways. I don’t know what your use-case is, so I don’t know if the features BTRFS provides would be valuable to you.

Anonymouse@lemmy.world on 24 Nov 15:05 collapse

Generally, if a lower level can do a thing, I prefer to have the lower level do it. It’s not really a reason, just a rule of thumb. I like to think that the lower level is more efficient to do the thing.

I use LVM snapshots to do my backups. I don’t have any other reason for it.

That all being said, I’m using btrfs on one system and if I really like it, I may migrate to it. It does seem a whole lot simpler to have one thing to learn than all the layers.

sugar_in_your_tea@sh.itjust.works on 24 Nov 15:32 next collapse

Yup, I used to use LVM, but the two big NAS filesystems have a ton of nice features and they expect to control the disk management. I looked into BTRFS and ZFS, and since BTRFS is native to Linux (some of my SW doesn’t support BSD) and I don’t need anything other than RAID mirror, that’s what I picked.

I used LVM at work for simple RAID 0 systems where long term uptime was crucial and hardware swaps wouldn’t likely happen (these were treated like IOT devices), and snapshots weren’t important. It works well. But if you want extra features (file-level snapshots, compression, volume quotas, etc), BTRFS and ZFS make that way easier.

Anonymouse@lemmy.world on 25 Nov 01:01 collapse

I am interested in compression. I may give it a try when I swap out my desktop system. I did try btrfs in its early, post-alpha stage, but found that the support was not ready yet. I think I had a VM system that complained. It is older now and more mature and maybe it’s worth another look.

jj4211@lemmy.world on 24 Nov 23:00 collapse

Actually, the lower level may well be less efficient, due to being oblivious to the nature of the data.

For example, a traditional RAID1 mirror on creation immediately starts a rebuild across all the potential data capacity of the storage, without a single byte of actual data written. So you spend an entire drive wipe making “don’t care” bytes redundant.

Similarly, for snapshotting, it can only track dirty blocks. So when you replace uninitialized data that means nothing with actual data, the snapshot layer is compelled to back up that uninitialized data, because it has no idea whether the blocks being replaced were uninitialized junk or real stuff.

There’s some mechanisms in theory and in practice to convey a bit of context to the block layer, but broadly speaking by virtue of being a mostly oblivious block level, you have to resort to the most naive and often inefficient approaches.

That said, block capacity is cheap, and doing things at the block level can be done in a ‘dumb’ way, which may be easier for an implementation to get right, versus a more clever approach with a bigger surface for mistakes.

Anonymouse@lemmy.world on 25 Nov 00:57 collapse

Those are some good points. I guess I was thinking about the hardware. At least where I do RAID, it’s on the controller, so that offloads much of the parity checking and such to the controller and not the CPU. It’s all probably negligible for the apps that I run, but my hardware is quite old, so maybe trying to squeeze all the performance I can is a worthwhile activity.

zarenki@lemmy.ml on 23 Nov 05:53 next collapse

I’ve been using single-disk btrfs for my rootfs on every system for almost a decade. Great for snapshots while still being an in-tree driver. I also like being able to use subvolumes to treat / and /home (maybe others) similar to separate filesystems without actually being different partitions.
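
The usual layout for that, roughly (the @ naming is just a common convention, and the UUID is a placeholder):

    btrfs subvolume create /mnt/@
    btrfs subvolume create /mnt/@home
    # /etc/fstab: both mounts come from the same partition
    UUID=xxxx  /      btrfs  subvol=@      0 0
    UUID=xxxx  /home  btrfs  subvol=@home  0 0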

I had used it for my NAS array too, with btrfs raid1 (on top of luks), but migrated that over to ZFS a couple years ago because I wanted to get more usable storage space for the same money. btrfs raid5 is widely reported to be flawed and seemed to be in purgatory of never being fixed, so I moved to raidz1 instead.

One thing I miss is heterogeneous arrays: with btrfs I can gradually upgrade my storage one disk at a time (without rewriting the filesystem) and it uses all of my space. For example, two 12TB drives, two 8TB drives, and one 4TB drive adds up to 44TB, and raid1 cuts that in half to 22TB effective space. ZFS doesn’t do that. Before I could migrate to ZFS I had to commit to buying a bunch of new drives (5x12TB, not counting the backup array) so that every drive is the same size, and I had to feel confident it would be enough space to last me a long time, since growing it after the fact is a burden.
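
For contrast, the btrfs-side upgrade path described above is just (mount point and devices are placeholders):

    btrfs device add /dev/sdX /mnt/pool       # grow the array one disk at a time
    btrfs balance start /mnt/pool             # spread existing raid1 chunks across the new disk
    btrfs device remove /dev/sdY /mnt/pool    # or migrate data off a disk to retire it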

possiblylinux127@lemmy.zip on 23 Nov 06:00 next collapse

Btrfs Raid 10 reportedly is stable

stuner@lemmy.world on 23 Nov 06:45 collapse

With version 2.3 (currently in RC), ZFS will at least support RAIDZ expansion. That should already help a lot for a NAS usecase.

SRo@lemmy.dbzer0.com on 23 Nov 06:14 next collapse

One time I had a power outage and one of the btrfs HDDs (not in a raid) couldn’t be read anymore after reboot. Even with help from the (official) btrfs mailing list it was impossible to repair the filesystem. After a lot of low-level tinkering I was able to retrieve the files, but the filesystem itself was absolutely broken; no repair process was possible. I have since switched to zfs, the emergency options are much more capable.

possiblylinux127@lemmy.zip on 23 Nov 06:30 collapse

Was that less than 2 years ago? Were you using kernel 5.15 or newer?

SRo@lemmy.dbzer0.com on 23 Nov 13:04 collapse

Yes, that was May/June ’23 and I was on a 6.x kernel

avidamoeba@lemmy.ca on 23 Nov 06:28 next collapse

You shouldn’t have abysmal performance with ZFS. Something must be up.

possiblylinux127@lemmy.zip on 23 Nov 06:42 collapse

What’s up is ZFS. It is solid but the architecture is very dated at this point.

There are about a hundred different settings I could try to change but at some point it is easier to go btrfs where it works out of the box.

prenatal_confusion@feddit.org on 23 Nov 08:53 next collapse

Since most people with decently simple setups don’t have the problem you describe, something’s likely up with your setup.

Yes, it’s old, and yes, it’s complicated, but it doesn’t have to be to get decent performance.

possiblylinux127@lemmy.zip on 23 Nov 17:25 next collapse

I have been trying to get ZFS working well for months. Also I am not the only one having issues as I have seen lots of other posts about similar problems.

prenatal_confusion@feddit.org on 24 Nov 05:18 collapse

I don’t doubt that you have problems with your setup. Given the large number of (simple) zfs setups that are working flawlessly, there are bound to be a large number of issues to be found on the Internet. People that are discontent voice their opinion more often and loudly compared to the people that are satisfied.

avidamoeba@lemmy.ca on 23 Nov 17:59 collapse

I used to run a mirror for a while with WD USB disks. Didn’t notice any performance problems. Used Ubuntu LTS which has a built-in ZFS module, not DKMS, although I doubt there’s performance problems stemming from DKMS.

avidamoeba@lemmy.ca on 23 Nov 17:50 next collapse

What seems dated in its architecture? Last time I looked at it, it struck me as pretty modern compared to what’s in use today.

possiblylinux127@lemmy.zip on 23 Nov 17:57 collapse

It doesn’t share well. Anytime anything IO heavy happens the system completely locks up.

That doesn’t happen on other systems

avidamoeba@lemmy.ca on 23 Nov 18:13 collapse

That doesn’t speak much of the architecture. Also it’s really odd. Not denying what you’re seeing is happening, just that it seems odd based on the setups I run with ZFS. My main server is in fact a shared machine that I use as a workstation and for gaming alongside its server duties. All works in parallel. I used to have a mirror, then a 4-disk RAIDz and now an 8-disk RAIDz2. I have multiple applications constantly using the pool. I don’t notice any performance slowdowns on the desktop, or in-game when IO goes high. The only time I notice anything is when something like multiple Plex transcoders hit the CPU hard. Sequential performance is around 1.3GB/s, which is limited by the data bus speeds (USB DAS boxes). Random performance is very good, although I don’t have any numbers off the top of my head. I’m using mostly WD Elements shucked disks and a couple of IronWolfs. No enterprise-grade disks on this system.

I’m also not saying that you have to keep fucking around with it instead of going Btrfs. Simply adding another anecdote to the picture. If I had a serious problem like that and couldn’t figure it out I’d be on LVMRAID+Ext4, which is what I used prior to ZFS.

possiblylinux127@lemmy.zip on 23 Nov 18:18 collapse

Yeah maybe my machines are cursed

avidamoeba@lemmy.ca on 23 Nov 18:24 next collapse

That is totally possible. I spent a month changing boards and CPUs to fix a curse on my main, unrelated to storage. In case you’re curious.

Andres4NY@social.ridetrans.it on 23 Nov 18:40 collapse

@avidamoeba @possiblylinux127 Does your ZFS not print on Tuesdays? https://bugs.launchpad.net/ubuntu/+source/cupsys/+bug/255161/

avidamoeba@lemmy.ca on 23 Nov 19:14 collapse

I feel like this one flew right over my head. 🥹

sugar_in_your_tea@sh.itjust.works on 23 Nov 21:31 collapse

I doubt that. Some options:

  • bad memory
  • failing drives
  • silent CPU faults
  • poor power delivery

The list is endless. Maybe BTRFS is more tolerant of the problems you’re facing, but that doesn’t mean the problems are specific to ZFS. I recommend doing a bit of testing to see if everything looks fine on the HW side of things (memtest, smart tests, etc).

possiblylinux127@lemmy.zip on 24 Nov 01:19 collapse

I set the ARC cache to 4GB and it is working better now

interdimensionalmeme@lemmy.ml on 24 Nov 07:30 next collapse

You have angered the zfs gods!

possiblylinux127@lemmy.zip on 24 Nov 16:04 collapse

I have gotten a ton of people to help me. Sometimes it is easier to piss people off to gather info and usage tips.

jj4211@lemmy.world on 24 Nov 23:05 collapse

You’ve been downvoted, but I’ve seen a fair share of ZFS implementations confirm your assessment.

E.g. “Don’t use ZFS if you care about performance, especially on SSD” is a fairly common refrain in response to anyone asking about how to get the best performance out of their solution.

Moonrise2473@feddit.it on 23 Nov 07:00 next collapse

One day I had a power outage and I wasn’t able to mount the btrfs system disk anymore. I could mount it in another Linux but I wasn’t able to boot from it anymore. I was very pissed, lost a whole day of work

JackbyDev@programming.dev on 23 Nov 08:37 next collapse

ACID go brrr

Philippe23@lemmy.ca on 23 Nov 11:40 collapse

When did this happen?

Moonrise2473@feddit.it on 23 Nov 14:21 collapse

I think 5 years ago, on Ubuntu

exu@feditown.com on 23 Nov 09:09 next collapse

Did you set the correct block size for your disk? Especially modern SSDs like to pretend they have 512B sectors for some compatibility reason, while the hardware can only do 4k sectors. Make sure to set ashift=12.

Proxmox also uses a very small volblocksize by default. This mostly applies to RAIDz, but try using a higher value like 64k. (Default on Proxmox is 8k or 16k on newer versions)

discourse.practicalzfs.com/t/…/1694
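
A quick sketch of checking/setting those (pool and zvol names are placeholders; ashift can only be set when the vdev is created):

    zpool get ashift rpool                                    # 0 means it was auto-detected at creation
    zpool create -o ashift=12 tank mirror /dev/sdX /dev/sdY   # force 4K sectors explicitly
    # volblocksize is per zvol and set at creation (in Proxmox it's the storage's "Block Size" field)
    zfs create -V 32G -o volblocksize=64k tank/vm-100-disk-0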

randombullet@programming.dev on 23 Nov 22:03 collapse

I’m thinking of bumping mine up to 128k since I do mostly photography and videography, but I’ve heard that 1M can increase write speeds but decrease read speeds?

I’ll have a RAIDZ1 and a RAIDZ2 pool for hot storage and warm storage.

BrownianMotion@lemmy.world on 23 Nov 11:03 next collapse

My setup is different to yours but not totally different. I run ESXi 8, and I started to use BTRFS on some of my VMs.

I had a power failure that lasted longer than the UPS could handle. Most of the systems shut down safely; a few VMs did not. All of the EXT4 VMs were easily recovered (including another one that was XFS). Two of the BTRFS systems crashed into a non-recoverable state.

Nothing I could do to fix them, they were just toast. I had no choice but to recover using backups. This made me highly aware that BTRFS is still not a reliable FS.

I am migrating everything from BTRFS to something more stable and reliable like EXT4. It’s simply not worth the headache.

Philippe23@lemmy.ca on 23 Nov 11:39 next collapse

When did this happen?

BrownianMotion@lemmy.world on 24 Nov 01:32 collapse

It was only a few weeks ago (maybe 4). Systems are all kept up to date with ansible. Most are Debian but there are a few Ubuntu. The two that failed were both Debian.

Granted, both that failed have high [virtual] disk usage compared to the other VMs. I cannot remember the failure now, but lots of searching confirmed that it was likely unrecoverable (they could boot, but only read-only). None of the btrfs-check “dangerous” commands could recover it, spitting out tons of errors about mismatching somethings (again, I’ve forgotten the error).

blackstrat@lemmy.fwgx.uk on 24 Nov 17:26 collapse

I had almost exactly the same thing happen.

poVoq@slrpnk.net on 23 Nov 12:56 next collapse

I am using btrfs on raid1 for a few years now and no major issue.

It’s a bit annoying that a system with a degraded raid doesn’t boot up without manual intervention though.

Also, not sure why, but I recently broke a system installation on btrfs by taking out the drive and accessing it (and writing to it) from another PC via a USB adapter. But I guess that is not a common scenario.

blackstrat@lemmy.fwgx.uk on 24 Nov 17:25 collapse

The whole point of RAID redundancy is uptime. The fact that btrfs doesn’t boot with a degraded disk is utterly ridiculous and speaks volumes of the developers.

bruhduh@lemmy.world on 23 Nov 13:01 next collapse

Raid 5/6, only bcachefs will solve it

possiblylinux127@lemmy.zip on 23 Nov 17:08 collapse

Btrfs Raid 5 and raid 6 are unstable and dangerous

Bcachefs is cool but it is way too new and isn’t even part of the kernel as of yet.

bruhduh@lemmy.world on 23 Nov 17:16 collapse

en.wikipedia.org/wiki/Bcachefs it was added as of Linux 6.7

Edit: and I said raid 5/6 is what troubles btrfs, so you proved my point while trying to explain to me that I’m not right

possiblylinux127@lemmy.zip on 23 Nov 17:26 collapse

I thought it was then removed later, as there was a disagreement between Linus and the bcachefs dev

bruhduh@lemmy.world on 23 Nov 17:35 collapse

Yeah, i remember something like that, i don’t remember exactly which kernel version it was when they removed it

OhYeah@lemmy.dbzer0.com on 23 Nov 21:15 collapse

Pretty sure it’s not removed, they just aren’t accepting any changes from the developer for the 6.13 cycle

bruhduh@lemmy.world on 24 Nov 02:24 collapse

Thanks for clarification)

fmstrat@lemmy.nowsci.com on 23 Nov 13:18 next collapse

What kind of disks, and how is your ZFS set up? Something seems amiss here.

sem@lemmy.blahaj.zone on 23 Nov 13:41 next collapse

Btrfs came default with my new Synology, where I have it in Synology’s raid config (similar to raid 1 I think) and I haven’t had any problems.

I don’t recommend the btrfs drivers for windows 10. I had a drive using this and it would often become unreachable under load, but this is more a Windows problem than a problem with btrfs

domi@lemmy.secnd.me on 24 Nov 00:11 next collapse

btrfs has been the default file system for Fedora Workstation since Fedora 33 so not much reason to not use it.

nichtburningturtle@feddit.org on 24 Nov 01:13 next collapse

Didn’t have any btrfs problems yet; in fact CoW saved me a few times on my desktop.

Heavybell@lemmy.world on 24 Nov 22:07 collapse

Can you elaborate for the curious among us?

nichtburningturtle@feddit.org on 24 Nov 22:25 collapse

btrfs + timeshift saved me multiple times, when updates broke random stuff.

Heavybell@lemmy.world on 24 Nov 22:44 collapse

I have research to do, I see.

ikidd@lemmy.world on 24 Nov 06:40 next collapse

btrfs raid subsystem hasn’t been fixed and is still buggy, and does weird shit on scrubs. But fill your boots, it’s your data.

interdimensionalmeme@lemmy.ml on 24 Nov 07:28 next collapse

For my jbod array, I use ext4 on gpt partitions. Fast efficient mature.

For anything else I use ext4 on lvm thinpools.

possiblylinux127@lemmy.zip on 24 Nov 16:05 collapse

That doesn’t do error detection and correction nor does it have proper snapshots.

interdimensionalmeme@lemmy.ml on 24 Nov 19:23 collapse

There is error detection (CRC checks), and LVM does snapshots and offline deduplication.

However, I run sha256 checks offline and keep PAR files for forward error correction.
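
Something like this, presumably (paths and the 10% redundancy figure are just examples):

    find /mnt/archive -type f -exec sha256sum {} + > /root/archive.sha256
    sha256sum -c /root/archive.sha256    # detect silent corruption later
    cd /mnt/archive
    par2 create -r10 archive.par2 *      # 10% parity data for repair
    par2 verify archive.par2
    par2 repair archive.par2             # rebuild damaged files from the parity blocks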

tfowinder@lemmy.ml on 23 Nov 04:25 collapse

Used it in a development environment. I didn’t need the snapshot feature, and it didn’t have a straightforward swap setup, which led to performance issues because of frequent writes to swap.

Not a big issue but annoyed me a bit.
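
For what it’s worth, the supported way to put a swapfile on btrfs (kernel 5.0+) is to disable CoW and compression on it first; a sketch with placeholder path and size:

    mkdir -p /swap                 # ideally its own subvolume, excluded from snapshots
    truncate -s 0 /swap/swapfile
    chattr +C /swap/swapfile       # NOCOW; swap files can't be copy-on-write or compressed
    dd if=/dev/zero of=/swap/swapfile bs=1M count=8192 status=progress   # fully allocate 8 GiB
    chmod 600 /swap/swapfile
    mkswap /swap/swapfile
    swapon /swap/swapfile

Newer btrfs-progs also ship a helper that does these steps in one go, if I recall correctly.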