from LazerDickMcCheese@sh.itjust.works to selfhosted@lemmy.world on 05 Nov 21:40
https://sh.itjust.works/post/49272492
Fresh Proxmox install, having a dreadful time. Trying not to be dramatic, but this is much worse than I imagined. I’m trying to migrate services from my NAS (currently docker) to this machine.
How should Jellyfin be set up, lxc or vm? I don’t have a preference, but I do plan on using several docker containers (assuming I can get this working within 28 days) in case that makes a difference. I tried WunderTech’s setup guide, which used an lxc for docker containers and a separate lxc for Jellyfin. However, that guide isn’t working for me: curl doesn’t work on my machine, most install scripts don’t work, nano edits crash, and mounts are inconsistent.
My Synology NAS is mounted to the host, but making mount points to the lxc doesn’t actually connect data. For example, if my NAS’s media is in /data/media/movies or /data/media/shows and the host’s SMB mount is /data/, choosing the lxc mount point /data/media should work, right?
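For reference, the mapping being described would normally be a bind mount set on the container from the host; a minimal sketch using the paths from this post, with the container ID and mount-point index as assumptions:
pct set <ctid> -mp0 /data/media,mp=/data/media   # host path on the left, path seen inside the lxc on the right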
Is there a way to pass the iGPU to an lxc or VM without editing a .conf in nano? When I tried to make the suggested edits, the lxc froze for over 30 minutes and seemingly nothing happened, as the edits don’t persist.
Any suggestions for resource allocation? I’ve been looking for guides or a formula to follow for what to provide an lxc or VM to no avail.
If you suggest command lines, please keep them simple as I have to manually type them in.
Here’s the hardware: Intel i5-13500, 64GB Crucial DDR5-4800, ASRock B760M Pro RS, 1TB WD SN850X NVMe
There is a helper script for the Jellyfin LXC. From memory I can’t help much, but I suggest searching for that. I think the default specs for disk space and RAM were weak, but setup was easy enough. After the initial helper script, you will need to learn how to mount the NAS into the LXC as well.
I want to say iGPU makes things easier, not because of experience but only because I tried passing through an Nvidia card and the instructions all insinuated this was more difficult than any other option
If you’re going LXC, it’s not going to matter much if you just map GIDs and provide the LXC access to the host.
Side bonus: multiple LXCs can all share that GPU. This is what I do; I have a couple of JF instances among other containers that use the GPUs.
Edited to add: Well, nvidia itself can be a pain. But that’d be because nvidia.
I tried, the script gave me errors
tteck.github.io/Proxmox/ this is a good place to start. Also highly recommend youtube videos lots of good stuff there.
Yes, I tried a couple of those. They were giving me errors
The linked repository is unmaintained, and some of the scripts are broken as a result.
The scripts have moved to this repository.
Last time I used them they worked fine.
Try the scripts in the new one, and if they still give you errors let me know and I’ll be happy to try and help you.
Also, please don’t run scripts from the internet without reading through them first. Even from a trusted source. You never know what random people could have written in there. 😅
Can confirm. The scripts in the new repository work just fine. I’ve run a bunch of them.
I was struggling with it for a bit, but I got it working. Main concern (before I check transcoding/acceleration) is NAS file sharing. Huge headache. The host is connected, but the lxc and vms don’t use it
May I be so bold as to ask how much experience you have with Linux?
A basic understanding would go a long way, I think.
But to answer your question:
You need to mount the nas drive directly to the lxc running jellyfin, not to the proxmox host.
Probably hundreds of hours, but very little was in a functional desktop…most of it was trying to get an install to boot and update software (I’m not joking).
That sucks. But this is a good way to learn.
Any tips on copy-pasting those commands into a console window? Every function I’ve tried has failed, but I’m willing to keep trying
It always works for me to just paste with ctrl+shift+v directly in the terminal window of the web gui.
Interesting, what browser do you use? Sounds like I may have to switch from Firefox if that’s the case because this lack of quality of life is ridiculous
I use Firefox as well, on Linux Mint, not Windows. So if you use Windows, that may be the culprit. I haven’t used it in a long time so I don’t know if that could be the case.
Well, this is the first step in me eventually dropping Windows (assuming all goes well). I’d like to wipe my main PC and do a dual-boot situation eventually, but migrating services to an actual server takes priority…got kiddos counting on their cartoons, can’t let em down
I know how you feel. I hope you succeed both with your server and throwing out windows.
Godspeed on your linux journey.
Thanks! It’s not easy, but I don’t give up. Took me 3 years to get the *arrs running, but I eventually got it
Tteck is unfortunately no longer with us.
The new proxmox scripts project can be found here.
Either way, I prefer lxc personally, but to each their own. I think lxc is drastically easier, in part because you don’t need to pass through the whole GPU.
You don’t need to pass the iGPU; you just need to give the LXC access to the render and video groups, but yes, editing the conf is easiest. I originally wrote out a bunch here, then remembered there is a great video.
https://www.youtube.com/watch?v=0ZDr5h52OOE
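For reference, the .conf edits that approach usually boils down to are a couple of lines in /etc/pve/lxc/<ctid>.conf on the host; a sketch of the common pattern for an Intel iGPU (exact lines vary with Proxmox version and whether the container is privileged):
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir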
Do they show up as resources? I add my mount points at the CLI personally, this is the best way imo:
pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media
This is done from the host, not inside the LXC.
Does your host see the mounted NAS? After you added the mount point, did you fully stop the container and start it up again?
Edit: You can just install curl/wget/etc BTW, it’s just Debian in there.
apt install curl
Edit 2: I must have glossed over the mount part.
Don’t add your network storage manually; do it through Proxmox as storage, by going to Datacenter > Storage > Add, and enter the details there. This will make things a lot easier.
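If the GUI is being difficult, the same storage can be added from the host shell with pvesm; a rough sketch, with the storage IDs, server address, and share/export names as placeholders to substitute (other options omitted, see man pvesm):
pvesm add cifs NAS --server 192.168.0.4 --share media --username youruser --password yourpassword
pvesm add nfs NAS2 --server 192.168.0.4 --export /volume2/docker
Either way, the storage ends up mounted on the host under /mnt/pve/<storage-id>.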
I’d love to check that, but you lost me…
So the NAS was added like you suggested; I can see the NAS’s storage listed next to local data. How does one command an lxc or vm to use it though?
This line right here shares it with the LXC, I’ll break it down for you:
pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media
pct is the proxmox container command; you’re telling it to set the mount point (mp0, mp1, mp2, etc). The path on the host is /mnt/pve/yourmountname. The path in the container is on the right, mp=/your/path/. So inside the container, if you ran an ls command in the directory /your/path/, it would list the files in /mnt/pve/yourmountname.
The yourmountname part is the name of the storage you added. You can go to the shell at the host level in the GUI, and go to /mnt/pve/ then enter ls and you will see the name of your mount.
So much like I was mentioning with the GPU, what you’re doing here is sharing resources with the container, rather than needing to mount the share again in your container. Which you could do, but I wouldn’t recommend.
Any other questions I’ll be happy to help as best as I can.
Edit: forgot to mention, if you go to the container and go to the resources part, you’ll see “Mount Point 0” and the mount point you made listed there.
Friend, thank you. My users and I greatly appreciate it. You just taught me how to solve one of the biggest problems I’ve been having. Just tested a movie through Jellyfin after using that cli.
Got any pointers for migrating config files from my NAS’s docker containers to Proxmox’s LXCs/VMs?
No worries!
So if you’ve got docker containers going already, you don’t need them to be LXCs.
So why not keep them docker?
Now there are a couple of approaches here. A VM will have a bit higher overhead, but offers much better isolation than lxc. Conversely, lxc is lightweight but with less host isolation.
If we’re talking the *arr stack? Meh, make it an lxc if you want. Hell, make it an lxc with dockge installed, so you can easily tweak your compose files from the web, convert a docker run to compose, etc.
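For what it’s worth, dockge itself is just a small compose stack; a minimal sketch of standing it up inside that lxc, assuming docker and the compose plugin are already installed (image tag, port, and stack directory follow the project’s usual defaults, so double-check its README):
mkdir -p /opt/dockge /opt/stacks && cd /opt/dockge
cat > compose.yaml <<'EOF'
services:
  dockge:
    image: louislam/dockge:1
    restart: unless-stopped
    ports:
      - 5001:5001
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/app/data
      - /opt/stacks:/opt/stacks
    environment:
      - DOCKGE_STACKS_DIR=/opt/stacks
EOF
docker compose up -d
The web UI then lives on port 5001 of that lxc.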
If you have those configs (and their accompanying data) stored on the NAS itself, you don’t have to move them. Let’s look at that command again…
pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media
So let’s say your container data is stored at /opt/dockerstuff/ on your NAS, with subdirectories of dockerapp1 and dockerapp2. Let’s say your new lxc is number 101. You have two options:
pct set 101 -mp0 /mnt/pve/NAS/opt/dockerstuff,mp=/opt/dockerstuff
Either will get you going
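The second option would presumably be separate mount points per app directory, something along these lines (paths mirror the example above and are assumptions):
pct set 101 -mp0 /mnt/pve/NAS/opt/dockerstuff/dockerapp1,mp=/opt/dockerstuff/dockerapp1
pct set 101 -mp1 /mnt/pve/NAS/opt/dockerstuff/dockerapp2,mp=/opt/dockerstuff/dockerapp2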
I think I’m getting a grip on some of the basics here. I was trying to make a new mount for my NAS’s docker data…separate drive and data pool. In the process of repeated attempts to get the SMB mount accepted, I noticed my NAS’s storage suddenly isn’t working as intended.
‘cat /etc/pve/storage.cfg’ still shows the NAS, but ‘pvesm status’ says “unable to activate storage…does not exist or is unreachable”
I thought it was related to too much resource usage, but that’s not the case
What do you get putting in:
showmount <ip address of NAS>
“Hosts on 192.168.0.4:” As a novice, I get the feeling that means it’s not working
If you’ve got nothing under it, yeah.
OK, what I’d probably do is shut down Proxmox, reboot your NAS, wait for the NAS to be fully up and running (check if you can access it from your regular computer over the LAN), then boot up the Proxmox server.
Then run that command again, you should see a result.
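Side note: plain showmount only lists client hosts; to check whether the NAS is actually exporting anything over NFS, the usual check is the -e flag (IP taken from the output above):
showmount -e 192.168.0.4   # lists the exported paths, if any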
It’s possible you’ve got some conflicting stuff going on if you did manual edits for the storage, which may need to be cleaned up.
I restarted everything like you suggested, same ‘showmount’ result unfortunately…I double checked the SMB mount in the datacenter, and the settings look correct to me. The NAS’s storage icon shows that it’s connected, but it seems like that doesn’t actually mean it’s *firmly* connected
Ok, let’s take a step back then and check things this way.
In shell (So datacenter, the host, then shell), if you enter
ls -la /mnt/pve/thenameofyourmount/
do you get an accurate and current listing of the contents of your NAS?
Yes! I do
Are there different rules for a VM with that command? I made a 2nd NAS share point as NFS (SMB has been failing, I’m desperate, and I don’t know the practical differences between the protocols), and Proxmox accepted the NFS, but the share is saying “unknown.” Regardless, I wanted to see if I could make it work anyway so I tried ‘pct set 102 -mp1 /mnt/pve/NAS2/volume2/docker,mp=/docker’
102 being a VM I set up for docker functions, specifically transferring docker data currently in use to avoid a lapse in service or user data.
Am I doing this in a stupid way? It kinda feels like it
For the record, I prefer NFS
And now I think we may have the answer….
OK so that command is for LXCs, and not for VMs. If you’re doing a full VM, we’d mount NFS directly inside the VM.
Did you make an LXC or a VM for 102?
If it’s an lxc, we can work out the command and figure out what’s going on.
If it’s a VM, we’ll get it mounted with NFS utils, but how will depend on what distribution you’ve got running on there (different package names and package managers)
Ah, that distinction makes sense…I should’ve thought of that
So for the record, my Jellyfin-lxc is 101 (SMB mount, problematic) and my catch-all Docker VM is 102 (haven’t really connected anything, and I don’t care how it’s done as long as performance is fine)
Ok, we can remove it as an SMB mount, but fair warning: a few bits of CLI are needed to do this thoroughly.
systemctl list-units "*.mount"
That said - I like to be sure, so let’s do a few more things.
- umount -R /mnt/pve/thatshare - Totally fine if this throws an error
- Let’s check the mounts file: cat /proc/mounts - a whooole bunch of stuff will pop up. Do you see your network share listed there?
- If so, let’s go ahead and delete the line it came from. /proc/mounts itself is read-only, so the entry to remove would be in /etc/fstab (if one was added manually): nano /etc/fstab, find the line if it’s still there, and remove it. ctrl+x then y to save.
Ok, you should be all clear. Let’s go ahead and reboot one more time just to clear out anything if you had to make any further changes. If not, let’s re-add.
Go ahead and add in the NAS using NFS in the storage section like you did previously. You can mount to that same directory you were using before. Once it’s there, go back into the Shell, and let’s do this again:
ls -la /mnt/pve/thenameofyourmount/
Is your data showing up? If so, great! If not, let’s find out what’s going on.
Now lets add back to your container mount. You’ll need to add that mount point back in again with:
pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media
(however you had it mounted before in that second step). Now start the container, and go to the console for the container.
ls -la /whereveryoumountedit
- if it looks good, your JF container is all set and now working with NFS! Go back to the options section, and enable “Start at Boot” if you’d like it to.
Onto the VM, what distribution is installed there? Debian, fedora, etc?
Well, now the jelly lxc is failing to boot: “run_buffer: 571 Script exited with status 2” and “lxc_init: 845 failed to run lxc.hook.pre-start for container ‘101’”
But the mount seems stable now. And the VM is Debian 12
That usually means something has changed with the storage, I’d bet there is a lingering reference in the .conf to the old mount.
The easiest? Just delete the container and start clean. That’s what’s nice about containers, by the way! The harder route would be mounting the filesystem of the container and taking a look at some logs. Which route do you want to go?
For the VM, it’s really easy. Go to the VM, and open up the console. If you’re logging in as root, the commands are as-is; if you’re logging in as a user, we’ll need to add a sudo in there (and maybe install some packages / add the user to the sudoers group)
- apt update && apt upgrade
- apt install nfs-common
- mkdir /mnt/NameYourMount
- sudo mount -t nfs 192.168.1.100:/share/dir /mnt/NameYourMount
- ls -la /mnt/NameYourMount. If you have an issue here, pause and come back and we’ll see what’s going on.
- nano /etc/fstab and add: 192.168.1.100:/share/dir /mnt/NameYourMount nfs defaults,x-systemd.automount,x-systemd.requires=network-online.target 0 0
- ctrl+x then y
- ls -la /mnt/NameYourMount to confirm you’re all set
I solved the LXC boot error; there was a typo in the mount (my keyboard sometimes double presses letters, makes command lines rough).
So just to recap where I am: main NAS data share is looking good, jelly’s LXC seems fine (minus transcoding, “fatal player error”), my “docker” VM seems good as well. Truly, you’re saving the day here, and I can’t thank you enough.
What I can’t make sense of is that I made 2 NAS shares: “A” (main, which has been fixed) and “B” (currently used docker configs). “B” is correctly connected to the docker VM now, but “B” is refusing to connect to the Proxmox host which I think I need to move Jellyfin user data and config. Before I go down the process of trying to force the NFS or SMB connection, is there any easier way?
Great!
Transcoding we should be able to sort out pretty easily. How did you make the lxc? Was it manual, did you use one of the proxmox community scripts, etc?
For transferring all your JF goodies over, there are a few ways you can do it.
If both are on the NAS (I believe you said you have a Synology), you can go to the browser, go to http://NASIP:5000, and just copy around what you want if it’s stored on the NAS as a mount and not inside the container. If it’s inside the container only, it’s going to be a bit trickier, like mounting the host as a volume on the container, copying to that mount, then moving around. But even Jellyfin says it’s complex - https://jellyfin.org/docs/general/administration/migrate/ - so be aware that could be rough.
The other option is to bring your docker container over to the new VM, but then you’ve got a new complication in needing to pass through your GPU entirely rather than giving the lxc access to the host’s resource, which is much simpler IMO.
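If the config does end up needing to be pulled out of the old docker setup, a rough sketch of the first half (container name and NAS backup path are assumptions):
docker cp jellyfin:/config /volume1/backup/jellyfin-config   # run on the NAS / old docker host
Inside the new lxc those files would then be copied into wherever that install keeps its data (the native package typically uses /etc/jellyfin and /var/lib/jellyfin), with ownership handed back to the jellyfin user, e.g. chown -R jellyfin:jellyfin /var/lib/jellyfin.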
I used the community script’s lxc for jelly. With that said, the docker compose I’ve been using is great, and I wouldn’t mind just transferring that over 1:1 either…whichever has the best transcoding and streaming performance. Either way, I’m unfortunately going to need a bit more hand-holding
LXC is going to be better, IMO. And we can definitely get hardware acceleration going.
So first, let’s do this from the console of the lxc:
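Presumably a quick look at the DRI devices, something like:
ls -la /dev/dri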
Is there something like card0 and renderD128 listed?
LXC is fine with me, the “new Jellyfin” instance is mostly working anyway. It just has a few issues:
And yes, I see card0 and renderD128 entries. ‘vainfo’ shows VA-API version: 1.20 and Driver version: Intel iHD driver…24.1.0
Ok, let’s start with that rendering - seeing those is good! You should only need to add some group access, so run this:
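Presumably a check of the jellyfin user’s group membership, something like:
groups jellyfin   # or: id -nG jellyfin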
The output should just say “jellyfin” right now. That’s the user that’s running the Jellyfin service. So let’s go ahead and….
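Presumably adding that user to the video and render groups, along these lines:
usermod -aG video,render jellyfin
groups jellyfin   # check the membership again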
You should now see the jellyfin user as a member of jellyfin, video, and render. This gives the jellyfin user access to the GPU for hardware acceleration.
Now restart that jellyfin and try again!
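With the native package install the community script uses, that restart should just be:
systemctl restart jellyfin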
Ok, consider it done! My concern is this section of the admin settings:
[screenshot: https://sh.itjust.works/pictrs/image/601f0c9f-09cb-4e7d-8d34-e5654e4c4784.png]
I followed Intel’s decode/encode specs for my CPU, but there’s no feedback on my selection. I’m still getting “Playback failed due to a fatal player error.”
What do you have above that?
There should be a hardware acceleration dropdown, and then a device below that. Since you have /dev/dri/renderD128, that should be in the “device” field, and the Hardware Acceleration dropdown should be QSV or VAAPI (if one doesn’t work, do the other)
QSV and ‘/dev/dri/renderD128’. I’ll switch to VAAPI and see… Edit: no luck, same error
Just checked one of mine, VAAPI is where I’m set, with acceleration working. 7th or 8th gen or so on that box, so VAAPI should do the trick for you.
So should I be disabling some hardware decoding options then?
Might be a better question for someone who knows more JF ffmpeg configs, but I think the HEVC up top should be checked and the bottom range extended hevc should be unchecked. I think you should have AV1 support too.
Worst case, start with h264 and move down the list
Great point actually, time for c/jellyfin I think. Would you mind helping me with the transferal of config and user data? Is “NFS mount NAS docker data to host” > “pass NFS to jelly LXC” > “copy data from NAS folder to LXC folder” the right idea?
That may also be a good one for c/jellyfin, but what I’d see is whether you can leverage a backup tool. Export and download, then import, all from the web. I know there is a built-in backup function, and I recall a few plugins as well that handled backups.
Seems to me that might be the most straightforward method - but again, probably better with a more jellyfin focused comm for that. I have moved that LXC around between a bunch of machines at this point, so snapshots and backups via proxmox backup server are all I need.
Yeah, it seems like the transplanting of LXCs, VMs, and docker is fairly pain-free…where I really shot myself in the foot is starting on an underpowered NAS and network transfers are clearly not my friend.
I’m not familiar with the backup stuff, but I remember hearing about it being added recently. I’ll look into it, thanks for the recommendation.
You taught me a lot of stuff in just a couple days. The overwhelming/anxious part of dealing with Proxmox for me is still the pass-through of data from outside devices. VMs aren’t bad at all, but everything else seems like a roll of the dice to see if the machine will allow the connection or not
It definitely is, especially if you get a cluster going. FWIW, my media is all on a synology NAS (well technically two, but one is a backup) that I got used through work, so your setup isn’t the wrong approach (imo) by any stretch.
What it comes down to in the connection is how you look at it - with a VM, its a full fledged system, all by its lonesome, that just happens to live inside another computer. A container though is an extension of that host, so think of it less like a VM and more like resource sharing, and you’ll start to see where the different approaches have different advantages.
For example, I have transcode nodes running on my proxmox cluster. If I had JF as a VM, I’d need another GPU to do that - but since its a container for both JF and my transcode node, they get to share that resource happily. Whats the right answer is always going to depend on individual needs though.
And glad I could be of some help!
In case you want to keep following, I did make that post in c/jellyfin
If your system is that fucked, I would wipe it and start over. And don’t run any scripts or extra setup guides, they’re not necessary.
Personally I run all my containers in a Debian VM because I haven’t bothered migrating them to anything proxmox native. But gpu accel should work fine if you follow the directions from jellyfin: jellyfin.org/docs/…/hardware-acceleration/
Just make sure you follow the part about doing it in docker.
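For what it’s worth, the docker side of that mostly comes down to handing the container the render device; a minimal sketch, using the official jellyfin/jellyfin image, with all paths and names as placeholder assumptions:
docker run -d --name jellyfin \
  --device /dev/dri/renderD128:/dev/dri/renderD128 \
  -v /srv/jellyfin/config:/config \
  -v /srv/jellyfin/cache:/cache \
  -v /data/media:/media:ro \
  -p 8096:8096 \
  jellyfin/jellyfin
Then pick QSV or VAAPI with /dev/dri/renderD128 in the Jellyfin dashboard, same as a native install.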
That’s where I’m at, dude. I bought into the idea of Proxmox because I was led to believe that it makes docker deployment easier…but I’m thinking it would actually work if I just used a VM
Like docker directly on proxmox? Docker on proxmox isn’t going to be any better than docker on anything else.
VMs and LXC are where proxmox has its best integration.
Docker in a VM on proxmox, while maybe not the recommended way of doing things, works quite well though.
I don’t know if containers on proxmox is easy, but containers in a Debian VM is trivial.
It may be better now but I’ve always had problems with Docker in LXC containers; I think this has to do with my storage backend (Ceph) and the fact that LXC is a pain to use with network mounts (NFS or SMB); I’ve had to use bind mounts and run privileged LXCs for anything I needed external storage for.
Proxmox is about managing VMs and LXCs. I’d just create a VM and do all your docker in there. Perhaps make a second VM so you can shuffle containers around while doing upgrades.
If you plan to have your whole setup be exclusively Docker and you have no need for VMs or LXCs, then Proxmox might be a bunch of overhead you don’t need.
I use the LXCs for simple stuff that does a bare-metal type install within them, and I use the VMs for critical services like OPNSense firewall/routers. I also have a Proxmox cluster across three machines so I can live-migrate VMs during upgrades and prevent almost any downtime. For that use case it’s rock solid. It’s a great product and it offers a lot.
If you just need a single machine and only Docker, it’s probably overkill.
Well, the plan was to use a couple VMs for niche things that I’d love to have and many services. But if I can’t get Proxmox working as advertised, I’ll throw most of that out of the window
The easiest solution if you want to have managed VMs IMHO is to just make a large VM for all your docker stuff on Proxmox and then you get the best of both worlds.
Abstracting docker into its own VM isn’t going to add THAT much overhead, and the convenience of Proxmox for management of the other VMs will make that situation much easier.
LXC for docker can be made to work, but it’s fiddly and it probably won’t gain you much in the long run.
Now, all these other issues you seem to be having with the Proxmox host itself; are you sure you have networking set up correctly, etc? curl should be working no problem; I’m not sure what’s going on there.
That’s good to know at least. I was getting anxious last night thinking that I signed up for something I’d never get running. So curl is working now…not sure why it wasn’t earlier, but I’ve used it since and it is confirmed working. And networking (as in internet connectivity) is working, but now I’m struggling with the NAS mount: it was working perfectly at first, but now it’s randomly shifting between “available” and “unknown”.
I run jellyfin on an LXC, so first get jellyfin installed. Personally I would separate jellyfin and your other docker containers; I have a separate VM for my podman containers. I need jellyfin up 100% of the time, so that’s why it’s separate.
Work on the first problem, getting Jellyfin installed. I wouldn’t use docker; just follow the steps for installing it on Ubuntu directly.
Second, to get the unprivileged lxc to work with your nas share follow this forum post: …proxmox.com/…/tutorial-unprivileged-lxcs-mount-c…
Thirdly, read through the jellyfin docs for hardware acceleration. It’s always best practice to not just run scripts blindly on your machine.
Lastly, take a break if you can’t figure it out. When I’m stuck I always need to take a day and just think stuff over, and I usually figure out why it’s not working by just doing that.
If you need any help let me know!
So I got Jellyfin running last night as an unprivileged LXC using a community script. It’s accessible via web browser, and I could connect my NAS. Now I’m having NAS-server connection issues and “fatal player” issues on certain items. I appreciate the support, I’m going to need a lot of it haha
This is most likely because of encoding. Did you change any settings in jellyfin for hardware acceleration? Have you passed through your GPU? You will need to find out what codecs your GPU supports and enable those in the jellyfin hardware encoding spot.
I tried taking a screenshot of the full page to show you, but yes it’s set to QSV and /dev/dri/renderD128. I’ve tried QSV and VAAPI with similar results; I’m sticking with QSV for now as it’s Jellyfin’s official recommendation. I’ve enabled decoding for H264, HEVC, VP9, and AV1. I’ve enabled hardware encoding for H264 and HEVC. If I disable transcoding completely it works fine, but some of the streaming devices need 720p functionality (ideally to transcode down to 4:3 480i).
Ah OK, what GPU are you using? Are you using the integrated graphics of your CPU?
Yes, just using the iGPU. Thought about an Nvidia card, but setting it up sounded like torture so just whatever is on the i5-13500 for now
Did you go here and look at the supported codecs for encoding and decoding?
www.intel.com/content/www/us/en/…/overview.html#E…
[screenshot: https://sh.itjust.works/pictrs/image/00ad6818-ea2f-4575-b6b3-81a7273bcefb.png]
So this looks good then?
Yeah I would say so. You still having issues?
Yeah, I’m about to start the process of trashing the system and starting anew with Ubuntu Server. Even if I had 24/7 community support, I think I’d still dread dealing with Proxmox. The whole reason I hopped on the Prox train was that videos make it seem like an alternative to deep-diving into cli…but everything I’ve been doing is cli, so screw it
community-scripts.github.io/ProxmoxVE/scripts?id=…
This is the way I’d imagine. I used this for Plex and this should make iGPU a lot easier.
I am running Jellyfin in an Open Media Vault VM on top of Proxmox.
Jellyfin is in docker in OMV.
All disks are mounted in OMV and then mounted in Docker.
I have a 14600k that has an iGPU passthrough and it works fine. (Same generation iGPU I think)
Then try this docker container to see if encoding works
Run the container
I’m on mobile so formatting may be shit. But I think you can get the right idea with this.
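A quick stand-in for that kind of check, assuming the official jellyfin image and that /dev/dri is visible inside the VM (the ffmpeg path is where that image ships jellyfin-ffmpeg; adjust if yours differs):
docker run --rm --device /dev/dri:/dev/dri --entrypoint /usr/lib/jellyfin-ffmpeg/ffmpeg jellyfin/jellyfin -hwaccels
If vaapi and qsv show up in the list, hardware encoding from inside docker should be workable.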
So I started this post with many intertwining issues, but most of them have been resolved thanks to extensive help. At this point, most of my issues are Jellyfin-specific, so I made a new post in c/jellyfin. But thank you, I’ll be trying your method if mine continues to fail me