Proxmox rebuild
from randombullet@programming.dev to selfhosted@lemmy.world on 01 Sep 2024 06:42
https://programming.dev/post/18840088

Greetings fellow enthusiasts.

I’m going to rebuild my proxmox server and would like to have a few opinions.

First thing is I use my server as a NAS and then run VMs off that.

I have 2 x 20 TB in a ZFS mirror, but I’m planning on changing that to 3 x 24 TB in RAIDZ1.

I currently have a ZFS pool in proxmox and then add that pool to Open Media Vault.

The issue is, if my OMV breaks and I have to create another VM, I’m pretty sure all that data would become inaccessible to the new OMV.

I’ve heard of people creating an NFS share in Proxmox and then passing it through to OMV?

Or should I get HBA cards, pass the disks through to the VM, and just run ZFS natively within OMV? I’d need to install the ZFS kernel module into OMV as well.

Would like to hear some options and tips.

#selfhosted

threaded - newest

echutaaa@sh.itjust.works on 01 Sep 2024 07:12 next collapse

You can bind mount a directory on your pool into an LXC too. I do this with SMB and a few other file/data services without issue, but I’ve never tried OMV. If containers work for you, it might be the simpler way to go.

[deleted] on 01 Sep 2024 07:27 next collapse

.

jozza@lemmy.world on 01 Sep 2024 07:27 collapse

Don’t suppose you could give a quick run-down on that process? I need to do it and have been struggling with the available documentation.

echutaaa@sh.itjust.works on 01 Sep 2024 07:59 collapse

I have some notes from doing it, but it’s been a minute. The overview is:

  • create your users inside and outside the container with matching IDs
  • edit the container’s conf to pass through the directory and map the IDs
  • edit /etc/subuid and /etc/subgid on the host

The documentation on this kinda sucks because it’s not all in one place, so if you find the first link you might get lost without the info in the second. It took me a few forum posts to find out about all the ID-mapping stuff and finally find the right page.
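Roughly, the steps above end up looking like this (a sketch only: container ID 101, pool path /tank/media, and UID/GID 1000 are placeholder examples; check the Proxmox unprivileged-container docs against your setup):

```
# /etc/pve/lxc/101.conf -- bind-mount a host directory into the container
mp0: /tank/media,mp=/mnt/media

# map container UID/GID 1000 to host UID/GID 1000, keep everything else shifted
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
```

For the mapping to be allowed, the host’s /etc/subuid and /etc/subgid each also need a line like `root:1000:1` so root may map that one ID into the container.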

NeoNachtwaechter@lemmy.world on 01 Sep 2024 07:13 next collapse

all that data would become inaccessible to my OMV.

Why?

Nothing gets destroyed unless your OMV actively destroys things (which is very unlikely).

A zpool is easily portable to a new machine/VM.
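To illustrate the portability claim, moving a pool is essentially two commands (pool name “tank” is an example; this assumes the disks are physically attached or passed through to the new machine):

```sh
# on the old machine/VM, cleanly release the pool
zpool export tank

# on the new machine/VM, after attaching the disks
zpool import          # with no argument: lists importable pools found on disk
zpool import tank     # imports the pool with all datasets and data intact

# if the old system died and the pool was never exported:
zpool import -f tank
```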

Cooljimy84@lemmy.world on 01 Sep 2024 07:16 next collapse

I run SnapRAID and mergerfs as the NAS storage. Not much changes on my NAS, and the stuff I really care about, like my pictures and videos, is on a small ZFS pool. Both are directly on Proxmox, meaning I can just plug the drives into another Linux machine and recover if it all goes sideways. It’s all shared from the host via SMB or NFS, or, for Jellyfin and Immich, it’s a mount point for the container.

Decronym@lemmy.decronym.xyz on 01 Sep 2024 08:05 next collapse

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

  • LTS — Long Term Support software version
  • LXC — Linux Containers
  • NAS — Network-Attached Storage
  • NFS — Network File System, a Unix-based file-sharing protocol known for performance and efficiency
  • SMB — Server Message Block protocol for file and printer sharing; Windows-native
  • SSD — Solid State Drive mass storage
  • ZFS — Solaris/Linux filesystem focusing on data integrity


Kaavi@lemmy.world on 01 Sep 2024 11:12 next collapse

My own approach is to run VMs/LXCs off SSDs that are hosted on Proxmox directly.

Then I have a TrueNAS box with the NAS storage. I mount that through SMB to Proxmox and pass the different dirs into the VMs/LXCs that need them.

SSDs give much better performance for VMs/LXCs.

Edit: even running the NAS as a VM, I would mount it with SMB, making it easy to split them up later if you want. Also, I have 10 Gbit network cards between the NAS and Proxmox.
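The SMB-into-Proxmox approach can be sketched like this (server address, share name, and container ID 101 are placeholder examples; the credentials file and mount options would need adapting):

```sh
# mount the NAS share on the Proxmox host
mount -t cifs //192.168.1.10/media /mnt/nas \
  -o credentials=/root/.smbcred,uid=100000,gid=100000

# then hand the mounted directory to a container as a mount point
pct set 101 -mp0 /mnt/nas,mp=/mnt/media
```

Mounting on the host and passing a mount point down keeps the share’s lifecycle independent of any one VM or container, which is what makes splitting things up later painless.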

TCB13@lemmy.world on 01 Sep 2024 12:06 collapse

You should consider replacing Proxmox with LXD/Incus because, depending on your needs, you might be able to replace your Proxmox instances with Incus and avoid a few headaches in the future.

While Proxmox is free and open-source software, it requires a paid license for the stable repository and updates. Furthermore, the Proxmox team has been found to withhold important security updates from non-stable (non-paying) users for weeks.

Incus / LXD is an alternative that offers most of Proxmox’s functionality while being fully open source, 100% free, and installable on most Linux systems. You can create clusters; download, manage, and create OS images; run backups and restores; bootstrap things with cloud-init; and move containers and VMs between servers (even live, sometimes).

Incus also provides a unified experience for dealing with both LXC containers and VMs: there’s no need to learn two different tools/APIs, as the same commands and options manage both. Even profiles defining storage, network resources, and other policies can be shared and applied across both containers and VMs. The same can’t be said about Proxmox: while it tries to make things smoother, there are a few inconsistencies and incompatibilities there.
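The unified experience looks roughly like this (instance names, profile name, and the remote `othernode:` are placeholder examples; exact image aliases depend on your configured remotes):

```sh
# same command for containers and VMs; only --vm differs
incus launch images:debian/12 c1          # system container
incus launch images:debian/12 v1 --vm     # KVM virtual machine

# a profile (storage, network, limits) applies to either kind of instance
incus profile assign c1 default
incus profile assign v1 default

# moving an instance to another cluster member or remote
incus move c1 othernode: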

Incus is free, can be installed on any clean Debian system with little to no overhead, and as of the release of Debian 13 it will be included in the repositories.

Another interesting advantage of Incus is that you can move containers and VMs between hosts with different base kernels and Linux distros. If you’ve bought into the immutable distro movement, you can also have your hosts run an immutable distro with Incus on top.

Incus Under Debian 12

If you’re on stable Debian 12, you have a couple of options:

  • Run the LXD version provided on their repositories: this will give you LXD 5.0.2 LTS that is guaranteed to be compatible with Debian 13’s Incus. Note that this was added before Canonical decided to move LXD in-house;
  • Use the backported version as described here: linuxcontainers.org/incus/docs/main/installing/;
  • Get the latest Incus pre-compiled from github.com/zabbly/incus and install as described above.

With the first option you’ll get a Debian 12 stable system with a stable LXD 5.0.2 LTS; it works really well, however it doesn’t provide a WebUI. The second and third options will give you the latest Incus, but they might not be as stable. Personally, I was running LXD from Snap since Debian 10, and moved to the LXD 5.0.2 LTS repository under Debian 12 because I don’t care about the WebUI. I can see how some people, particularly those coming from Proxmox, would like the WebUI, so getting the latest Incus might be a good option for them.
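For reference, the first option boils down to (a sketch, assuming the LXD 5.0.2 LTS package from the Debian 12 repositories, as described above):

```sh
# Debian 12's packaged LXD LTS
apt install lxd
lxd init --auto    # non-interactive setup with default storage and network
```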

I believe most people running Proxmox today will eventually move to Incus and never look back; I just hope they do it before Proxmox GmbH changes their licensing scheme or something fails. If you don’t require all the features of Proxmox, then Incus works way better with less overhead, is truly open source, requires no subscriptions, and doesn’t delay important security updates.

Note that modern versions of Proxmox already use LXC containers, so why not move to Incus, which is made by the same people? Why keep dragging along all of the Proxmox overhead and potential issues?

___@lemm.ee on 01 Sep 2024 15:04 collapse

The only issue is not having a simple backup interface and feature in general. Has this been addressed yet? How are snapshots with ZFS on Incus?

TCB13@lemmy.world on 01 Sep 2024 15:13 collapse

Maybe this will help you: linuxcontainers.org/incus/docs/main/backup/

How are snapshots with ZFS on Incus?

What do you mean? They work, as described here; the WebUI can also make snapshots for you.
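As a sketch of what the linked docs describe (instance and snapshot names are examples; on older LXD the command is `lxc snapshot c1 before-upgrade` instead):

```sh
# snapshot and restore an instance
incus snapshot create c1 before-upgrade
incus snapshot restore c1 before-upgrade

# export a full backup to a tarball, and re-import it elsewhere
incus export c1 c1-backup.tar.gz
incus import c1-backup.tar.gz
```

With ZFS as the storage backend, snapshots map onto ZFS snapshots, so they are cheap and near-instant.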