I keep waffling on Proxmox. Sell me. For or against.
from chazwhiz@lemmy.world to selfhosted@lemmy.world on 02 Nov 00:20
https://lemmy.world/post/38197314

In the next ~6 months I’m going to entirely overhaul my setup. Today I have a NUC6i3 running Home Assistant OS, and a NUC8i7 running OpenMediaVault with all the usual suspects via Docker.

I want to upgrade hardware significantly, partially because I’d like to bring in some local LLM. Nothing crazy, 1-8B models hitting 50tps would make me happy. But even that is going to mean a beefy machine compared to today, which will be nice for everything else too of course.

I’m still all over the place on hardware, part of what I’m trying to decide is whether to go with a single machine for everything or keep them separate.

Idea 1 is a beefy machine running Proxmox with HA in a VM, OMV or TrueNAS in another, and maybe a third straight-Debian VM to separate all the Docker stuff. But I don’t know if I want to add the complexity.

Idea 2 would be a beefy machine running straight OMV/TrueNAS with most stuff there, and then just moving HA over to the existing i7 for more breathing room (mostly for Frigate, which could also be separated onto another machine, I guess).

I hear a lot of great things about Proxmox, but I’m not sold that it’s worth the new complexity for me. And keeping HA (which is “critical” compared to everything else) separated feels like a smart choice. But keeping it on aging hardware diminishes that anyway, so I don’t know.

Just wanting to hear various opinions I guess.

#selfhosted

curbstickle@anarchist.nexus on 02 Nov 00:39 next collapse

Not sure what you’re doing with OMV that couldn’t be done in Proxmox, so feel free to elaborate there.

Almost all my servers are proxmox (some just Debian, though a few more specific work related solutions are lurking about). For docker I’d do an LXC, btw, I wouldn’t bother with a full VM.

My (excessive) setup is all Proxmox, set up as a high-availability cluster. HA runs in a VM, and my USB devices are passed through (technically it’s USB-over-IP extension, so the USB devices for the various VMs stay passed through even if I have to shut a server down).

It’s where Jellyfin, Audiobookshelf, homepage.dev, a bajillion stupid containers I mostly don’t need, DNS, monitoring and analytics, Mealie (recipe server), various websites I host, etc, etc all live. Nothing is by itself on a box except my workstations, but for non-Linux use I have VMs I remote into (mostly industry-specific software and random crap like an XP VM to use an old piece of hardware).

foggenbooty@lemmy.world on 03 Nov 18:24 collapse

Can you quickly run me through how USB over IP is helping you out? I get it for devices that are physically distant, but how is the abstraction helping you for reboots? Isn’t it just the server you’re rebooting that talks to the USB device anyway?

curbstickle@anarchist.nexus on 03 Nov 18:38 collapse

I have a single IP transmitter and multiple receivers, IP controllable and routable.

If VM1 uses USB device 1 on RX1 from TX1, and host1 goes down, then when VM1 comes up on host2, RX2 is switched to receive from TX1, and VM1 still has access to the USB device.

For the record, they’re Icron 2304s I got because of work stuff (the version that accepts commands, which they only OEM now).

foggenbooty@lemmy.world on 04 Nov 01:11 collapse

Ah, got it, it’s for VM migrations. That makes a lot more sense.

curbstickle@anarchist.nexus on 04 Nov 01:20 collapse

Ah yeah, sorry, didn’t realize that wasn’t clear.

Only one machine at a time handles the USB devices by design - OTA TV tuner, Zigbee/Z-Wave, USB-to-serial adapter, and an 8-channel relay.

boydster@sh.itjust.works on 02 Nov 01:23 next collapse

For me, I’m Team Proxmox. It’s just easy to spin up containers for pretty much anything I need. No need for the resource overhead of a full-on virtual machine if I simply need to run a LAMP app. Anything you really have an issue transitioning from Docker to LXC can still be run inside a container with Docker installed. And if you need to set up a VM for Windows or pfSense or some other OS for whatever reason, it’s insanely easy to do.
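
For what it’s worth, a Docker-capable LXC is a one-liner on the Proxmox host - a sketch with a hypothetical VMID, template name, and storage IDs (adjust for your setup):

```shell
# Create an unprivileged Debian container with nesting enabled,
# which Docker needs to run inside LXC (IDs and template are examples)
pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname docker-host \
  --cores 2 --memory 2048 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1 \
  --features nesting=1
pct start 200
```

After that, installing Docker inside the container works the same as on any Debian box.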

poVoq@slrpnk.net on 02 Nov 01:27 next collapse

Proxmox adds a lot of complexity and a nice GUI. If you are fine with using the terminal, there is really not much benefit from Proxmox, and the potential issues from the added complexity are IMHO not worth it. I am not a Proxmox expert though, so take this advice with a grain of salt 😅

pineapple@lemmy.ml on 02 Nov 03:05 collapse

Is it decently easy to create and manage VMs and containers from the terminal? I use Proxmox at the moment. Should I switch to Ubuntu Server?

curbstickle@anarchist.nexus on 02 Nov 04:03 next collapse

Should I switch to Ubuntu server?

That’s a hard no IMO.

Even if you want to do something other than Proxmox, just use Debian, Fedora, or openSUSE.

It’s not bad from the CLI, you just need to know your commands.

virt-install --name=deb13-vm --vcpus=1 --memory=1024 --cdrom=/tmp/debian-13.0.0-amd64-netinst.iso --disk size=8 --os-variant=debian13

Will get you 1 vCPU, 1 GB RAM, and an 8 GB drive worth of Debian. If you don’t specify a path, it goes in ~/.local/share/libvirt/images.

You can also then

virsh edit deb13-vm

And you’ll get the XML, where you can edit away.

Personally, I’d rather use the web GUI for most things, but yeah, it’s perfectly doable from the CLI.

pineapple@lemmy.ml on 02 Nov 05:34 collapse

I would have thought Debian is better than Ubuntu, but I couldn’t find a server version of Debian. Where do I find Debian server, or CLI-only Debian?

tofu@lemmy.nocturnal.garden on 02 Nov 06:26 collapse

Debian is suited for servers by default. Just skip the desktop environment in the installer.

pineapple@lemmy.ml on 02 Nov 08:37 collapse

Oh ok, I’ve never installed Debian before, so that’s good to know.

poVoq@slrpnk.net on 02 Nov 10:34 collapse

With libvirt it is fairly easy, yes. And you can also install a standalone web GUI like Cockpit, or use the desktop app virt-manager over SSH to do it.
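
On plain Debian that setup is just a couple of packages - a sketch, reusing the `deb13-vm` name from the earlier virt-install example:

```shell
# libvirt plus the Cockpit web console and its VM plugin
sudo apt install libvirt-daemon-system cockpit cockpit-machines
# Cockpit then serves a management UI on https://<host>:9090

# Day-to-day management also works from virsh:
virsh list --all        # show all defined VMs
virsh start deb13-vm    # boot one
virsh shutdown deb13-vm # clean ACPI shutdown
```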

suicidaleggroll@lemmy.world on 02 Nov 01:58 next collapse

In my opinion, Proxmox is worth it for two reasons:

  1. Easy high-availability setup and control

  2. Proxmox Backup Server

Those two are what drove me to switch from KVM, and I don’t regret it at all. PBS truly is a fantastic piece of software.

jasonweiser@sh.itjust.works on 03 Nov 17:09 collapse

Upvoted for PBS alone. Incremental backups that are rock solid mean you can completely brick your server and have it back to normal in minutes.
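
Proxmox schedules PBS backups for you, but for reference the client can also be driven by hand - a sketch with a hypothetical repository and datastore name:

```shell
# Point the client at a PBS datastore (repository string is an example)
export PBS_REPOSITORY='backup@pbs@pbs.example.lan:datastore1'
# First run uploads everything; later runs only upload changed chunks
proxmox-backup-client backup root.pxar:/
# List existing backup snapshots on the server
proxmox-backup-client snapshot list
```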

solrize@lemmy.ml on 02 Nov 02:41 next collapse

Proxmox is a convenient GUI wrapper around libvirt, but you can do everything without it.

wiki.debian.org/libvirt

hperrin@lemmy.ca on 02 Nov 02:56 next collapse

It’s got more than just VM management, but yeah, it’s a frontend for a bunch of other services, that you don’t need Proxmox for.

Creat@discuss.tchncs.de on 02 Nov 03:26 next collapse

but you can do everything without it.

Yes, but why would you? There’s a reason we use GUIs, especially when new to a field (like virtualization).

vividspecter@aussie.zone on 02 Nov 03:38 next collapse

yes but why would you?

Mainly because you’re required to use their distribution, or to build on Debian, which is not to everyone’s liking.

Of course that’s an argument against proxmox, and not virt-manager and the like.

solrize@lemmy.ml on 02 Nov 03:55 next collapse

libvirt comes with some GUI tool of its own, though I haven’t used it. I generally prefer to understand what I’m doing, so I use command-line tools or APIs at first. GUIs are a convenience to use later, once it’s clear how they work.

non_burglar@lemmy.world on 02 Nov 06:27 collapse

Once you get to know the GUI well enough and start scripting, the GUI becomes less relevant.

moonpiedumplings@programming.dev on 03 Nov 04:45 collapse

This is untrue; Proxmox is not a wrapper around libvirt. It has its own API and its own methods of running VMs.

rbos@lemmy.ca on 02 Nov 02:44 next collapse

I’ve been using Ganeti for like 15 years now, and I’m not sure what proxmox offers besides a nice GUI. I know how Ganeti works and getting up to speed on a new one doesn’t seem super interesting to me. Is anyone here familiar with both?

axum@lemmy.blahaj.zone on 02 Nov 12:28 collapse

Ganeti development is more or less dead. If you look at the github repo, it hasn’t seen a notable release in 4 years. All that’s been done is a small bugfix patch two months ago by the community.

The project being based on Haskell code also makes it less attractive for new devs.

rbos@lemmy.ca on 02 Nov 15:17 collapse

Stable. :)

hperrin@lemmy.ca on 02 Nov 02:53 next collapse

It’s great if you need what it offers. Otherwise, it’s simpler to set up something like Ubuntu Server.

I use Proxmox to run my email service, port87.com, because I can have high-availability services that can move around the different Proxmox hosts. It’s great for production stuff.

I also use it to run my seedbox, because graphics in the browser through Proxmox is really easy.

For everything else (my Jellyfin, Nextcloud, etc), I have a server that runs Ubuntu Server and use a docker compose stack for each service.

JAWNEHBOY@reddthat.com on 02 Nov 11:49 collapse

I had never heard of Port87 before, how do you like it? And I assume you pay no monthly fee by hosting your own domain?

hperrin@lemmy.ca on 03 Nov 04:34 collapse

I meant that I made it. :) It’s my own email service, and I run it on Proxmox. So, take this with a grain of salt knowing that I wrote and run it, but I think it’s the best email service by far. I wrote an article about how it works really well for me here:

sciactive.com/…/the-best-email-for-those-who-stru…

Feel free to sign up for free and try it out. :D

Cyber@feddit.uk on 03 Nov 21:22 collapse

Interesting.

I have an old free email provider that’s just passed the email service to another provider

I’m looking to move because I used to be able to use <anything-at-all>@my-email.domain and I’m not sure I’ll be able to do that anymore

I basically do what you’re doing - using email prefixes for the site I’m registering with… I even caught a company out once when I suddenly started getting spam from that email address. They’d sold my details…

hperrin@lemmy.ca on 04 Nov 00:06 collapse

You should check out Port87. :) You wouldn’t need to change any of your addresses if you bring your domain on. Custom domains are $10/month though, so it would cost you more. Hopefully the features would be worth it for you, and if not, you can always migrate again to a different provider. That’s something I love about email. If you have your own domain, you can completely avoid vendor lock-in.

JeanValjean@piefed.social on 02 Nov 03:07 next collapse

From an earlier post I made much like yours, I decided to go with incus. I’d be fully migrated if real life hadn’t kicked me in the taint for a few weeks.

SaintWacko@slrpnk.net on 02 Nov 04:08 next collapse

I will always recommend Proxmox, not just because it’s really easy to add more stuff, but because it’s really safe to tinker with. You take a snapshot, start messing around, and if you break something you just revert to the snapshot.
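
That workflow is a one-liner on each side - a sketch with a hypothetical VMID 100 and snapshot name:

```shell
# Take a snapshot before tinkering (use `pct` instead of `qm` for containers)
qm snapshot 100 pre-tinker --description "before messing around"
# ...break things...
# Roll the VM back to exactly that state
qm rollback 100 pre-tinker
# List and clean up snapshots
qm listsnapshot 100
qm delsnapshot 100 pre-tinker
```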

OnfireNFS@lemmy.world on 02 Nov 04:24 collapse

This. Even if you were going to run a single bare-metal server, it’s almost always nicer to install Proxmox and just have a single VM.

HybridSarcasm@lemmy.world on 02 Nov 12:07 collapse

This is how I run my OPNsense router. Snapshots are great and rebooting is SO much faster!

HiTekRedNek@lemmy.world on 03 Nov 02:21 collapse

Uh. OPNsense on bare metal can also do snapshots, if you set it up correctly…

notfromhere@lemmy.ml on 02 Nov 05:39 next collapse

I’m running Proxmox and hate it. I still recommend it for what you are trying to do. I think it would work quite nicely. Three of my four nodes have llama.cpp VMs hosting OpenAI-compatible LLM endpoints (llama-server) and I run Claude Code against that using a simple translation proxy.

Proxmox is very opinionated on certain aspects and I much prefer bare metal k8s for my needs.
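
For context, serving a small model with llama.cpp’s llama-server is a single command - a sketch with a hypothetical model path, port, context size, and GPU layer count:

```shell
# llama-server (from llama.cpp) exposes an OpenAI-compatible API
# at /v1/chat/completions (model file and settings are examples)
llama-server \
  -m ./models/llama-3.1-8b-instruct-q4_k_m.gguf \
  --host 0.0.0.0 --port 8080 \
  -c 8192 \
  -ngl 99
# Any OpenAI-style client can then target http://<host>:8080/v1
```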

FiduciaryOne@lemmy.world on 02 Nov 05:40 next collapse

I like Proxmox too, I’m quite happy that I dove in with it. Just one word of warning - if you mount a drive volume in a container, then destroy the container and restore it from a backup, it wipes out the mounted drive. I, uh, lost a bunch of data that way. Not super important data, but still.

I’m still glad I went with Proxmox though. It makes spinning something up a breeze, and I also went with HA in a VM, another Debian VM for Docker, and a bunch of random LXCs.

non_burglar@lemmy.world on 02 Nov 06:23 next collapse

Is this separate from a bind mount? Cause that doesn’t happen with bind mounts.

FiduciaryOne@lemmy.world on 02 Nov 22:12 collapse

Yeah, not a bind mount. There was a warning, but I was restoring a ton of LXCs and clicked through the warning too fast. My fault, I’m not super sore about it, just warning others as a service to prevent what happened to me!

non_burglar@lemmy.world on 02 Nov 22:55 collapse

Fair enough!

frongt@lemmy.zip on 02 Nov 09:44 collapse

If you can replicate it, you should really file a bug report so that the next guy doesn’t lose data.

stankmut@lemmy.world on 02 Nov 21:11 collapse

It tells you it will happen when you use the restore backup feature.

non_burglar@lemmy.world on 02 Nov 06:28 next collapse

Don’t use Proxmox, use incus. It’s way easier to run and doesn’t give a care about your storage.

MangoPenguin@lemmy.blahaj.zone on 02 Nov 13:00 next collapse

No backup utility like PBS though, thats why I haven’t switched.

non_burglar@lemmy.world on 02 Nov 14:32 collapse

Like I said, incus don’t care about your storage.

I’ve never used PBS, I’ve always just rolled my own. I currently keep 7 daily, 4 weekly and 4 monthly. My data mounts are all nfsv4.

Edit: isn’t it possible to use PBS with non-Proxmox systems?

MangoPenguin@lemmy.blahaj.zone on 02 Nov 19:00 collapse

Yeah it sounds nice but too much time investment for me.

I can install PBS client on any system but it requires manual setup and scheduling which I don’t want to do. When used with Proxmox that’s all handled for me.

Also I don’t think Proxmox cares about storage either, I just use ZFS which is completely standard under the hood.

non_burglar@lemmy.world on 03 Nov 12:32 collapse

Also I don’t think Proxmox cares about storage either,

Proxmox forces you to add a “storage area”, which is fine, except you must use their mount path of /mnt/pve/ and you must add NFS tuning switches via PVE or they don’t work.

Proxmox is great, I used it for 8 years. But it is also opinionated and doesn’t like non-standard configs.
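
For reference, what Proxmox writes to /etc/pve/storage.cfg for an NFS storage looks roughly like this (storage ID, server, export, and options are examples) - and the mount always lands under /mnt/pve/<storage-id>:

```
nfs: mynas
        server 192.168.1.10
        export /tank/proxmox
        content images,backup
        options vers=4.2,soft
```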

MangoPenguin@lemmy.blahaj.zone on 03 Nov 13:11 collapse

Oh I see what you mean yeah, I’ve never used NFS before with it.

moonpiedumplings@programming.dev on 03 Nov 04:49 collapse

I like Incus a lot, but it’s not as easy to create complex virtual networks as it is with Proxmox, which is frustrating in educational/learning environments.

dbtng@eviltoast.org on 02 Nov 07:46 next collapse

I use PVE professionally. I could spend some time bitching about how it handles SSH keys and the fragile corosync cluster management. I could complain about the sloppy release cycle and the way they move fast and break shit. Or all the janky shit they’ve slapped together in PBS. I could go on.

But I actually pay for a license for my homelab. And ya, it is THE thing at work now.

I’ve often heard it said that Proxmox isn’t a great option. But it’s the best one.
If you do try it, don’t bother asking questions here.
Go to the source: forum.proxmox.com

tmjaea@lemmy.world on 02 Nov 09:44 collapse

Please elaborate. How does it handle ssh keys? And what is fragile regarding corosync?

dbtng@eviltoast.org on 02 Nov 19:24 collapse

SSH key management in PVE is handled in a set of secondary files, while the original Debian files are replaced with symlinks. Well, that’s still Debian. And in some circumstances the symlinks get b0rked or replaced with the original SSH files, the keys get out of sync, and one machine in the cluster can’t talk to another. The really irritating thing is that the tool meant to fix it (pvecm updatecerts) doesn’t work. I’ve got an elaborate set of procedures to gather the certs from the hosts and fix the files when it breaks, but it sux bad enough that I’ve got two clusters I’m putting off fixing.

Corosync is the cluster. It’s a shared file system that immediately replicates any changes to all members - essentially anything under /etc/pve/. Corosync is very sensitive; I believe they ask for 10 ms lag or less between hosts, so it can’t work over a WAN connection. Shit like VM restores or vmotion between hosts can flood it out. Looks fukin awful when it goes down - your whole cluster goes kaput.

All corosync does is push around this set of config files, so a dedicated NIC is overkill, but in busy environments you might wind up resorting to that. You can put corosync on its own network, but you obviously need a network for that. And you can establish throttles on various types of host file-transfer activities, but that’s a balancing act I’ve only gotten right in our colos, where we only have 1 Gb networks. I have my systems provisioned on a dedicated corosync VLAN and also use a secondary IP on a different physical interface, but corosync is too dumb to fall back to the secondary if the primary is still “up”, regardless of whether it’s actually communicating, so I get calls on my day off about “the cluster is down!!!1” when people restore backups.

tmjaea@lemmy.world on 02 Nov 20:06 collapse

Thanks for your answer.

I’ve used Proxmox since version 2.1 in my home lab and since 2020 in production at work. We have not had issues with the SSH files yet. Also, corosync is working fine although it shares its 10G network with Ceph.

In all that time I was not aware of how the certs are handled, despite having had two official Proxmox trainings. Ouch.

dbtng@eviltoast.org on 02 Nov 21:53 collapse

Cool.

Here. SSH key issues. There was a huge forum war.
…proxmox.com/…/ssh-keys-in-a-proxmox-cluster-reso…
But it’s still a thing. That still needs to be fixed by a human. Today that’s me.

Regarding Ceph and corosync on the same network … well, I’m just getting started with that now. I do have them on different VLANs, but it’s the same 10 Gb set of NICs. I’m hoping that if it gets really lousy, my netadmin can prioritize the corosync VLAN. I’ll burn that bridge when I come to it.


EDIT … The linked forum post above leads to the SSH key answer, but it’s convoluted.
Here’s what I put in my own wiki.

Get the right key from each server:
cat ~/.ssh/id_rsa.pub

Make sure they match in here. Fix ’em if they don’t:
/etc/pve/priv/authorized_keys

There’s a couple of symlinks to fix too, but this should get it.

SaltySalamander@fedia.io on 02 Nov 09:58 next collapse

No.

sem@lemmy.blahaj.zone on 02 Nov 12:35 next collapse

Don’t add a layer of abstraction until you need it, or you have the free time to learn it well enough that it won’t cause you problems while you experiment.

jubilationtcornpone@sh.itjust.works on 02 Nov 13:36 next collapse

I use Proxmox for work and Hyper-V at home. Looking forward to retiring my old Hyper-V host and replacing it with Proxmox, because Hyper-V is a pain.

Virtualization really helps with reliability. In particular, by allowing you to quickly take snapshots before doing anything destructive and by streamlining backup and recovery.

polle@feddit.org on 02 Nov 13:54 next collapse

I need to update my hardware and thought about switching to Proxmox because of all the good things I hear about it. I am currently on Unraid, but this thing still runs and it’s the same installation as 7 years ago. It has had zero downtime. Multiple drives, VMs, and Docker containers. Easy to use and rock solid.

TunaLobster@lemmy.world on 02 Nov 14:02 next collapse

I did it purely so I could fully back up my server VM and move it to new hardware when I wanted to upgrade. I just have to install Proxmox, attach the NAS, and pull the VM backup. And just like that everything is back to running just as it was before the upgrade! Now just faster and more energy efficient!
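
That move boils down to a couple of commands - a sketch with a hypothetical VMID, storage name, and backup filename:

```shell
# On the old host: full backup of VM 100 to NAS-backed storage
# (VMID and storage name are examples)
vzdump 100 --storage nas-backups --mode snapshot --compress zstd
# On the freshly installed host, with the same NAS storage attached:
qmrestore /mnt/pve/nas-backups/dump/vzdump-qemu-100-*.vma.zst 100
```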

dieTasse@feddit.org on 04 Nov 08:34 collapse

I recently moved a non-VM TrueNAS to new hardware and it was actually a breeze. I just created the backup, disconnected the drives, physically put them in the new server, installed TrueNAS, restored the backup, and it was done. I understand that everyone has different preferences; I’m just saying it’s easy to move TrueNAS without it being a VM as well.

melfie@lemy.lol on 02 Nov 15:24 next collapse

I shy away from VMs because I prefer having a pool of resources on a machine that can be used as needed instead of being pre-allocated. Pre-allocating CPU and RAM and doing PCI passthrough for GPUs wastes already limited resources and is extra effort. Yes, the best practice for production k8s is setting resource requests and limits, but it’s not something I want to bother with when I only have one server.

Cyber@feddit.uk on 03 Nov 14:46 collapse

Just to address the resourcing point…

VM resources can be over-allocated, meaning the hypervisor will try its best to meet their requirements, so you’re not wasting anything and could run more VMs than you have resources for.

Yes, VMs can also be configured to need a fixed amount of resources, and the hypervisor will have to stop them if it can’t deliver - but I just wanted you to know allocation isn’t fixed.

muusemuuse@sh.itjust.works on 02 Nov 15:34 next collapse

Do you need clusters that can fail over from one machine to another? If yes, Proxmox is good. If no, there are less complex options.

Appoxo@lemmy.dbzer0.com on 03 Nov 12:47 collapse

Why rule out Proxmox as “complex” just because there is no need for HA?

muusemuuse@sh.itjust.works on 03 Nov 18:03 collapse

Because it moves further from a vanilla setup without solving a problem.

EpicFailGuy@lemmy.world on 02 Nov 23:04 next collapse

The one factor that no one seems to have mentioned yet, and that is key for many of us, is LEARNING …

It’s a great way to learn virtualization and containerization.

I use it exclusively to run Linux containers; it makes it very convenient to back up and restore as well as replicate environments.

We are now migrating our lab at work away from VMW

irmadlad@lemmy.world on 03 Nov 14:28 next collapse

Best thing to do is give it a go and see what shakes out, OP. I absolutely love both my Proxmox boxes. In my humble opinion, Proxmox was an easier setup, and the possibilities are endless really. It’s a solid freemium product. Couple it with the extensive Helper Scripts, and Jack’s a doughnut, Bob’s your uncle.

dieTasse@feddit.org on 04 Nov 08:26 collapse

Agreed. Proxmox is not worth the complexity. Install TrueNAS; you can put all the apps on that, and you can have Home Assistant in a VM. All you need is one machine - I actually have this setup.

I only installed Proxmox on another machine to test it and to install OpenWrt and a bunch of networking software on it. If I feel confident, I will use that as my new router, but that’s a long way off.

Oh, and by the way, TrueNAS in a virtual machine is not recommended. I originally thought I would also install TrueNAS in Proxmox, but after reading plenty of resources and things about passthrough, I finally decided to stick to the recommendation and not use Proxmox with TrueNAS. I do not regret the decision.