Help with Home Server Architecture and Hardware Selection?
from libretech@reddthat.com to selfhosted@lemmy.world on 27 Jan 23:16
https://reddthat.com/post/33862003

Tl;dr

I have no idea what I’m doing, and the desire for a NAS and local LLM has spun me down a rabbit hole. Pls send help.

Failed Attempt at a Tl;dr

Sorry for the long post! Brand new to home servers, but am thinking about building out the setup below (Machine 1 to be on 24/7, Machine 2 to be spun up only when needed for energy efficiency); target budget cap ~ USD 4,000; would appreciate any tips, suggestions, warnings about pitfalls, or flags for where I’m being a total idiot and have missed something basic:

Machine 1: TrueNAS Scale with Jellyfin, Syncthing/Nextcloud + Immich, Collabora Office, SearXNG if possible, and potentially the *arr apps

On the drive front, I’m considering 6x Seagate Ironwolf 8TB in RAIDz2 for 32TB usable space (waaay more than I think I’ll need, but I know it’s a PITA to upgrade a vdev so I’m trying to future-proof), and I’m also thinking of adding an L2ARC cache (which I think should be something like a 500GB-1TB M.2 NVMe SSD). I’d read somewhere that the back-of-the-envelope RAM requirement is 1GB of RAM per 1TB of storage (the TrueNAS Scale hardware guide definitely does not say this, but with the L2ARC cache and everything else I’m trying to run I probably get to the same number), so I’d be looking for around 48GB. I’m under the impression that an odd number of DIMMs isn’t great for performance, though, so that might bump up to 64GB across 4x16GB. I’m ambivalent on DDR4 vs. DDR5 (and unless there’s a good reason not to, I’d be inclined to just use DDR4 for cost), but I’m leaning ECC, even though it may not be strictly necessary
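
For transparency, here’s the napkin math behind those numbers (the 1GB-of-RAM-per-TB rule is community folklore rather than official TrueNAS guidance, and the drive/parity counts are just the plan above):

```python
# Napkin math for the Machine 1 build -- rough heuristics only,
# not TrueNAS's official sizing guidance.

DRIVES = 6      # Seagate Ironwolf 8TB
DRIVE_TB = 8
PARITY = 2      # RAIDz2 survives two drive failures

usable_tb = (DRIVES - PARITY) * DRIVE_TB
print(f"RAIDz2 usable space: {usable_tb} TB")   # 32 TB

# Folklore rule of thumb: ~1 GB of RAM per TB of raw storage,
# plus headroom for apps and L2ARC header bookkeeping.
raw_tb = DRIVES * DRIVE_TB
print(f"Rule-of-thumb RAM: ~{raw_tb} GB -> round up to 64 GB (4x16GB)")
```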

Machine 2: Proxmox with LXC for Llama 3.3, Stable Diffusion, Whisper, OpenWebUI; I’d also like to be able to host a heavily modded Minecraft server (something like All The Mods 9 for 4 to 5 players) likely using Pterodactyl

I am struggling with what to do about GPUs here. I’d love to be able to run the 70B Llama 3.3, and it seems like that will require something like 40-50GB of VRAM at a minimum to run comfortably, but I’m not sure of the best way to get there. I’ve seen some folks suggest 2x 3090s as the right balance of value and performance, but plenty of others seem to advocate for sticking with the newer 4000 architecture (especially with the 5000 series around the corner and the expectation that prices might finally come down). On the other end of the spectrum, I’ve also seen people advocate going back to P40s
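
A sketch of where that 40-50GB figure comes from (the quantization levels and the ~20% overhead factor are my assumptions from forum reading, not anything official):

```python
# Very rough VRAM estimate for running Llama 3.3 70B locally.
# Quantization choices and overhead margin are assumptions, not specs.

PARAMS_B = 70  # billions of parameters

def weights_gb(params_b: float, bits_per_param: int) -> float:
    # 1B params at 8 bits is ~1 GB, so scale from there.
    return params_b * bits_per_param / 8

for bits in (16, 8, 4):
    w = weights_gb(PARAMS_B, bits)
    # ~20% headroom for KV cache, activations, and runtime overhead.
    print(f"{bits}-bit: ~{w:.0f} GB weights, ~{w * 1.2:.0f}+ GB VRAM to be comfortable")

# 4-bit lands around ~35 GB of weights -> ~42+ GB total,
# which is why 2x 24GB cards keep coming up.
```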

Am I overcomplicating this? Making any dumb rookie mistakes? Do 2 machines seem right for my use cases vs. 1 (or more than 2)? Any glaring issues with the hardware I mentioned or suggestions for a better setup? Ways to better prioritize energy efficiency (even at the risk of more cost up front)? I was targeting something like USD 4,000 as a soft price cap across both machines, but does that seem reasonable? How much of a headache is all of this going to be to manage? Is there a light at the end of the tunnel?

Very grateful for any advice or tips you all have!


Hi all,

So sorry again for the long post. Just including a little bit of extra context here in case it’s useful about what I am trying to do (I feel like this is the annoying part of an online recipe where you get a life story instead of the actual ingredient list; I at least tried to put that first in this post). Essentially I am a total noob, but I have spent the past several months lurking on forums, old Reddit and Lemmy threads, and watching many hours of YouTube videos just to wrap my head around some of the basics of home networking, and I still feel like I know basically nothing. But I finally felt like I’d gotten to the point where I could articulate what I am trying to do with enough specificity to not be completely wasting all of your time (I’m very cognizant of Help Vampires and definitely do not want to be one!)

Basically my motivation is to move away from non-privacy respecting services and bring as much in-house as possible, but (as is frequently the case), my ambition has far outpaced my skill. So I am hopeful that I can tap into all of your collective knowledge to make sure I can avoid any catastrophic mistakes I am likely to blithely walk myself into.

Here are the basic things I am trying to accomplish with this setup:

• A NAS with a built in media server and associated apps
• Phone backups (including photos) 
• Collaborative document editing
• A local ChatGPT 4 replacement 
• Locally hosted metasearch
• A place to run a modded Minecraft server for myself and a few friends

The list in the tl;dr represents my best guesses for the right software and (partial) hardware to get all of these done. Based on some of my reading, it seemed that a number of folks recommend running TrueNAS bare metal as opposed to inside Proxmox for when there is an inevitable stability issue, and that got me thinking about how it might be valuable to split these functions across two machines: one to handle heavier workloads when needed but be turned off when not (e.g. game server, all local AI), and a second to function as a NAS with all the associated apps that would hopefully be more power efficient and run 24/7.

There are three things that I think would be very helpful to me at this point:

  1. High level feedback on whether this strategy sounds right given what I am trying to accomplish. I feel like I am breaking the fundamental Keep It Simple Stupid rule and will likely come to regret it.
  2. Any specific feedback on the right hardware for this setup.
  3. Any thoughts about how to best select hardware to maximize energy efficiency/minimize ongoing costs while still accomplishing these goals.

Also, above I mentioned that I am targeting around USD 4,000, but I am willing to be flexible on that if spending more up front will help keep ongoing costs down, or if spending a bit more will lead to markedly better performance.

Ultimately, I feel like I just need to get my hands on something and start screwing things up to learn, but I’d love to avoid any major costly screw ups before I just start ordering parts, thus writing up this post as a reality check before I do just that.

Thanks so much if you read this far down the post, and to all of you who share any thoughts you might have. I don’t really have folks IRL I can talk to about these sorts of things, so I am extremely grateful to be able to reach out to this community.

Edit: Just wanted to say a huge thank you to everyone who shared their thoughts! I posted this fully expecting to get no responses and figured it was still worth doing just to write out my plan as it stood. I am so grateful for all of your thoughtful and generous responses sharing your experience and advice. I have to hop offline now, but look forward to responding to any comments I haven’t had a chance to turn to tomorrow. Thanks again! :)

#selfhosted


IllNess@infosec.pub on 27 Jan 23:51 next collapse

Reading the title and looking at the thumbnail, I was thinking, “sure I’ll do a good deed and help out a noob.” Then I read your post and I realized you know what you’re doing better than me.

HomerInBushes.gif

libretech@reddthat.com on 28 Jan 00:01 next collapse

Thank you for this! Honestly maybe it’s just all of the YouTubers I watch, but I constantly feel like I have no idea how to make things work (and also, to be fair, basically everything I wrote is just me reading what other people who seem to know what they’re talking about think and then trying to fit all the pieces together. I sort of feel like a monkey at a typewriter in that way.) Really appreciate you commenting though! It’s given me a little more confidence :)

LandedGentry@lemmy.zip on 28 Jan 02:05 collapse

It’s easy to feel like you know nothing when A) there’s seemingly infinite depth to a skill and B) there are so many options.

You’re letting perfection stop you from starting my dude

libretech@reddthat.com on 28 Jan 02:46 collapse

Thank you! I think I am just at the “Valley of Despair” portion of the Dunning-Kruger effect lol, but the good news is that it’s hopefully mostly up from here (and as you say, a good finished product is infinitely better than a perfect idea).

freebee@sh.itjust.works on 28 Jan 10:56 collapse

Honestly, why not just use an old laptop you have lying around to test 1 or 2 of your many projects/ideas and see how it goes, before going $4,000 deep?

libretech@reddthat.com on 29 Jan 01:34 collapse

This is definitely good advice. I tend to run my laptops into the ground before I replace them, but a lot of the feedback here has made me think experimenting with something much less expensive first is probably the right move instead of trying to do everything all at once (so that when I inevitably screw up, it at least won’t be a $4k screw up.) But thanks for the sanity check!

sunzu2@thebrainbin.org on 28 Jan 00:12 collapse

OP sharing decent DD tbh

source: i am regarded

cm0002@lemmy.world on 28 Jan 00:24 next collapse

Stick with DDR4 ECC for a server environment. If you don’t want to be limited to 70b models, I’d dump more money into trying to snag more GPUs; otherwise you’d probably be fine with the 3000 series as long as you meet VRAM requirements

Have you considered secondary variables? Where are you going to run this? If you’re running it in your house this is going to be noisy and power hungry. What room are you running it in? What’s the amperage of the lines going to the outlets there? Is your house older? It’s probably a 20 amp on a shared circuit and really easy to overload and cause a fire

This is what happens when you overload a home’s circuit: https://lemmy.world/pictrs/image/84241a36-0ed3-4fc8-87ac-16527bd6fd91.jpeg

aberrate_junior_beatnik@midwest.social on 28 Jan 00:38 next collapse

How did the breaker not trip on that? It had one job

cm0002@lemmy.world on 28 Jan 00:45 next collapse

The way the electrician explained it to me at the time: I didn’t technically exceed 20 amps, but I was running close to it for sustained periods, heating up the wire in the wall and the outlet, slowly melting it over time until it finally buckled, caused a small fire, and then tripped the breaker
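
If you want the napkin version of why “close to 20 amps for sustained periods” matters (assuming US 120V; the 80% continuous-load derating is the common NEC guideline, and the wattages are illustrative):

```python
# Rough continuous-load budget for a single household circuit.
# 120V and a 20A breaker are assumptions for illustration.

VOLTS = 120
BREAKER_AMPS = 20
CONTINUOUS_FACTOR = 0.8  # loads running for hours shouldn't exceed ~80% of rating

max_continuous_watts = VOLTS * BREAKER_AMPS * CONTINUOUS_FACTOR
print(f"Safe continuous draw: ~{max_continuous_watts:.0f} W")  # ~1920 W

# A couple of 350W GPUs plus CPU, drives, and fans can flirt with that,
# especially on a circuit shared with other rooms.
```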

LandedGentry@lemmy.zip on 28 Jan 02:07 next collapse

Oh damn that’s YOUR photo lmao

cm0002@lemmy.world on 28 Jan 02:45 collapse

Yea, I keep the outlet around as a reminder lol

Andres4NY@social.ridetrans.it on 28 Jan 02:25 collapse

@cm0002 @aberrate_junior_beatnik That looks like a 15A receptacle (https://www.icrfq.net/15-amp-vs-20-amp-outlet/). If it was installed on a 20A circuit (with a 20A breaker and wiring sized for 20A), then the receptacle was the weak point. Electricians often do this with multiple 15A receptacles wired together for Reasons (https://diy.stackexchange.com/questions/12763/why-is-it-safe-to-use-15-a-receptacles-on-a-20-a-circuit) that I disagree with for exactly what your picture shows. That said, overloading it is not SUPER likely to cause a fire - just destroy the outlet and appliance plugs.

cm0002@lemmy.world on 28 Jan 02:48 collapse

Makes sense. This was also years ago, so small details are being forgotten; it could have been a 15 or possibly a 20. It was one circuit split between 2 rooms, which was apparently the norm for when it was built in the early 80s (and not a damn thing was ever upgraded, including the outlets)

It was also a small, extinguisher-handleable fire, but it was enough to be scary AF LMAO

bradd@lemmy.world on 28 Jan 10:43 collapse

This could also be caused by a bad connection or poor contact between the wire and the receptacle. Notice the side is melted, where the terminal screws would be; that’s where the heat would be generated. When you put a load on it and electrons have to jump a gap, it arcs and generates heat. Load is also a factor, on this receptacle or any downstream, but the melting on the side might be caused by arcing.

libretech@reddthat.com on 28 Jan 00:59 collapse

Thanks so much! Appreciate the DDR4 and DRAM thoughts, and great point on secondaries. I have actually been debating the right place to put this as well. My ONT is in the basement (which I feel is probably the best place to put this from a noise perspective), though my sad cable company router is in a spare bedroom that I was considering as well (this option would require a little less rewiring, though honestly I’m probably going to have to either figure out how to run my own Ethernet or hire it out regardless of where I put it). No worries if not, but do you have a sense of what noise I might expect from the TrueNAS machine I’m thinking of running 24/7 vs. the Proxmox machine that I won’t be using all the time? I think I could live with occasional noise spikes, but having something loud 24/7 in a bedroom would probably be cruel. And huge thank you for the warning on power draw: I have not been considering amperage at all and will need to look into what I can sustain without burning the house down. Are there any other secondary variables you’d recommend I consider? Appreciate all of your thoughts!

cm0002@lemmy.world on 28 Jan 06:20 collapse

Note, I say DDR4 for cost reasons. If you’re willing, able, and desiring to spend more upfront for some future-proofing, DDR5 is newer and faster and comes in higher per-stick capacities. But it is considerably more expensive than DDR4 because of that newness

I clocked my 2 full 2U servers at 82 dB on a decibel meter app, and their most powerful GPU is a single 1080 for transcoding purposes that isn’t even under load ATM. (1Us and 2Us, especially 1Us, tend to be the louder, screechier variety because of their considerably smaller fans; 3U and 4U servers trend quieter and lower-toned, closer to a typical desktop fan.)

A TrueNAS box probably wouldn’t be too bad; if you’re the type to enjoy white noise to sleep, it might even be beneficial.

The Proxmox box will be the one with the 2-4 GPUs, yes? It’ll fucking sound like a 747 taking off in your bedroom whenever it’s under load

Also, don’t forget cooling. A basement is a good option because it’s naturally cooler, but you’ll still need to ensure good airflow. That’s assuming your basement is not in a hot state/country; if it is, you’ll need to explore dedicated active cooling. If you own or can otherwise make modifications, a mini-split/heat pump system would do well.

It will generate considerable heat; I posted a meme just the other day about my servers doubling as a heater supplement system. Kind of exaggerating for the meme, but it does have an effect: it increases the temp in my basement office 8-10 degrees under load

Blisterexe@lemmy.zip on 28 Jan 00:48 next collapse

You seem pretty on track, and being broke I haven’t looked at the expensive stuff you’re considering, so I can’t give you any value tips.

However, I would like to point out that if you’re just going to be hosting Minecraft game servers, Crafty Controller is a much easier tool to set up and use than Pterodactyl

libretech@reddthat.com on 28 Jan 01:18 collapse

Thank you! Honestly I am probably going way overboard myself (I think I’ve tried to convince myself that it might make sense given the likelihood of tariffs around the corner, but I honestly might still end up downscaling to more like 10-20TB of storage and radically reducing my LLM expectations). And thanks also for the Crafty Controller rec; I hadn’t heard of it and will definitely check it out!

calamityjanitor@lemmy.world on 28 Jan 00:59 next collapse

Would you consider making the LLM/GPU monster server a gaming desktop? Depending on how you plan to use it, you could have a beast gaming PC that can do LLM/Stable Diffusion stuff when not gaming. You can install loads of AI stuff on Windows, arguably more easily.

libretech@reddthat.com on 28 Jan 01:38 collapse

This is a great point and one I sort of struggled with, tbh; I think you’re right that if I built it out as a gaming PC I would probably use Windows (not to say I’m not very excited about the work Steam is doing for Linux gaming, it’s just hard to beat the native OS). I was leaning toward a Linux build for the server though, just to try to embrace a bit more FOSS (and because I am still a little shocked that Microsoft could propose the Recall feature with a straight face). Maybe I could try a gaming setup that uses some flavor of Linux as a base, though then I’m not sure I’d take advantage of the easier AI tooling. Will definitely think more on it though, thanks for raising this!

zox@lemmy.world on 28 Jan 02:31 next collapse

That’s the solution I take. I use Proxmox for a Windows VM which runs Ollama. That VM can then be used for gaming on the off chance an LLM isn’t loaded. It usually is. I use only one 3090 due to the power load of my two servers on top of my [many] HDDs. The extra load of 2 isn’t something I want to worry about.

I point to that machine through LiteLLM* which is then accessed through nginx which allows only Local IPs. Those two are in a different VM that hosts most of my docker containers.

*I found using Ollama and Open WebUI causes the model to get unloaded since they send slightly different calls. LiteLLM reduces that variance.
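
To give the flavor of it, a minimal client sketch (host, port, and model alias here are placeholders, not my actual setup):

```python
# Sketch of the idea: every client goes through the LiteLLM proxy using the
# same OpenAI-style call shape, so the backing Ollama instance sees
# consistent requests and keeps the model loaded.
from openai import OpenAI

client = OpenAI(
    base_url="http://192.168.1.50:4000/v1",  # hypothetical LiteLLM proxy behind nginx
    api_key="anything-local",                # a local proxy may not need a real key
)

resp = client.chat.completions.create(
    model="llama-3.3-70b",  # whatever alias the proxy maps to the Ollama model
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```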

possiblylinux127@lemmy.zip on 28 Jan 05:34 collapse

You could look into building a game streaming server. Moonlight/sunshine runs decently well and if you have decent WiFi it will be fine. Theoretically you can divide up your GPU into vGPUs but the support for that is hit or miss.

DaGeek247@fedia.io on 28 Jan 01:01 next collapse

I know most of the less expensive used hardware is going to be server-shaped/rackmount. Don't go for it unless you have a garage or shed that you can stuff them in. They put out jet-engine levels of noise and require god tier soundproofing in order to quiet them. The ones that are advertised as quiet are quiet as compared to other server hardware.

You can grab an EPYC motherboard that is ATX and will do all you want, and can then move it to a rackmount later if you end up going that way.

The NVIDIA launch has been a bit of a paper one. I don't expect the prices of anything else to adjust down, rather the 5090 may just end up adjusting itself up. This may change over time, but the next couple of months aren't likely to have major deals worth holding out for.

libretech@reddthat.com on 28 Jan 02:18 collapse

Thanks for this! The jet engine sound level and higher power draw were both what made me a little wary of used enterprise stuff (plus jumping from never having a home server straight to rack mounted felt like flying a little too close to the sun). And thanks also for the epyc rec; based on other comments it sounds like maybe pairing that with dual 3090s is the most cost effective option (especially because I fear you’re right on prices not being adjusted downward; not sure if the big hit Nvidia took this morning because of DeepSeek might change things but I suppose that ultimately unless underlying demand drops, why would they drop their prices?) Thanks again for taking the time to respond!

ArbiterXero@lemmy.world on 28 Jan 01:01 next collapse

Given the price of P40s on eBay vs. the price you can get 3090s for, fuck the P40s. I’m rocking quad 3090s and they kick ass.

Also, Pascal is the OLDEST hardware supported… for how long?

Also, you’ll want to look for strangely specific things to host multiple 3090s etc. on your motherboard. You want a lot of PCIe lanes from your chip and board. You want Above 4G Decoding (fairly common in newer hardware)

libretech@reddthat.com on 28 Jan 02:35 collapse

Thanks so much for flagging that, Above 4G Decoding wasn’t even on my radar. And I think you and another commenter have sold me on trying for an EPYC mobo and dual 3090 combination. If you don’t mind my asking, did you get your 3090s new or used? I feel like used is the way to go from a cost perspective, but obviously only if it wasn’t used 24/7 in a mining rig for years on end (and I am not confident in my ability to make a good call on that as of yet. I guess I’d try to get current benchmarks and just try to visually inspect from photos?) But thanks again!

DaGeek247@fedia.io on 28 Jan 02:56 next collapse

Don’t worry about how a video card was used. Unless it was handled by HowToBasic, they’re gonna break long after they’re obsolete. You might worry about a bad firmware setup, but you avoid that by looking at the seller rating, not the video card.

There’s an argument to be made that a mining GPU is actually the better card to buy, since they never went hot>cold>hot>cold (thus stressing the solder joints) like a regular user’s card would. But it’s just that: an argument. I have yet to find a well-researched article on the effects of long-term gaming as compared to long-term mining, but I can tell you that the breaking point for either is long after you would have kept the card in use, even second- or third-hand.

libretech@reddthat.com on 28 Jan 05:28 collapse

Thanks for flagging this! I’d just passively absorbed the secondhand mining-rig fears, but you’re totally right that it’s not as though a regularly used overclocked gaming GPU isn’t going to be subject to similar degradation (especially since miners often intentionally underclock). I guess the biggest fears then are just physical damage from a rough install and potential heat damage (though maybe swapping thermal pads and paste helps alleviate that?) And of course checking benchmarks for any weirdness if possible, I guess…

ArbiterXero@lemmy.world on 28 Jan 03:13 collapse

I’m rocking 4 used ones from 4 different people.

So far, all good

You can’t buy 3090’s new anymore anyways.

4090s are twice as much for 15% better perf, and the 5090s will be ridiculously priced.

2x3090 is more than enough for basic inference, I have more for training and fine tuning.

You want EPYC/Threadripper etc.

You want max pcie lanes.

libretech@reddthat.com on 28 Jan 05:31 next collapse

Amazing, thanks again for all of this! I’ll start keeping my eyes peeled for any good deals on 3090s that pop up (though I’ll probably end up prioritizing the NAS build first just to get my feet wet before diving straight into the local LLM world). But thanks again for taking the time to share!

sntx@lemm.ee on 28 Jan 11:27 collapse

I’m curious, how do you run the 4x3090s? The FE Cards would be 4x3=12 PCIe slots and 4x16=64 PCIe lanes… Did you nvlink them? What about transient power spikes? Any clock or even VBIOS mods?

ArbiterXero@lemmy.world on 28 Jan 14:14 collapse

I have some nvlinks on the way.

Sooooo I’ve got a friend that used PCIe-to-OCuLink and then back to PCIe to allow the cards to run outside the case, but that’s not what I do; that’s just the more common approach.

You can also get pcie extension cables, but they’re pricey.

I stumbled upon a Cubix device by chance, which is a huge and really expensive PCIe bus extender that does some really fancy fucking switching. But I got it at a ridiculous price and they’re hard to come by.

If I do it right, I could host 10 cards total (2 in the machine and 8 in the cubix)

This also means that I’m running 3x 1600W PSUs, and I’m most at risk of blowing breakers (adding in a 240V line is next lol)

atzanteol@sh.itjust.works on 28 Jan 01:21 next collapse

Am I overcomplicating this?

I fear that you may be overthinking things a bit. For a home server I wouldn’t worry about things like min/maxing memory to storage sizes. If you’re new to this then sizing can be tricky.

For a point of reference - I’m running a MD RAID5 with 4TiB x 4 disks (12TiB usable) on an old Dell PowerEdge T110 with 8GiB of RAM. It’s a file server (NFS) that does little else (just a bind9 server and dhcpd). I’ve had disks fail in the RAID but I’ve never had a 2 disk failure in 10+ years. I always keep my fileserver separate so that I can keep it simple and stable since everything else depends on it. I also do my backups to and from it so it’s a central place for all storage.

That’s just a file-server. I have 3 proxmox servers of widely variable stats from acquired machines… An old System76 laptop with 64GiB RAM (and NVidia 1070 GTX that is used by Jellyfin), a Lenovo Thinkserver with 16GiB RAM, and an old Dell Z740 with 128GiB RAM (long story).

None of these servers are speed demons by any current standards, but they support a variety of VMs comfortably (Home Assistant, Jellyfin, web server, DNS, DHCP, a 3-node microk8s cluster running SearXNG, Subsonic, a Docker registry, etc.)

RAM has always mattered more to me for servers. The laptop is the most recent and has 8 cores, the Lenovo only has 4.

Could things be faster? Sure. Do they perform “well enough for me?” Absolutely. I’m not as worried about low-power as you seem to be but my point is that you can get away with pretty modest hardware for MOST of the types of things you’re looking to do.

The AI one is the thing to worry about - that could be your entire budget. VRAM is king for LLMs and gets pricey quick. My personal laptop’s NVidia 3070 with 8GiB VRAM runs models that fit in that low amount of memory just fine. But I’m restricted to models that fit…

libretech@reddthat.com on 28 Jan 03:22 collapse

Thanks so much for all of this info! You’re almost certainly correct that I’m overthinking this (it’s definitely a talent of mine). I had been leaning z2 on the NAS only because I’d heard that the resilvering process can be somewhat intensive on the drives, especially when they’re larger, but I had also seen folks say that this was probably overkill for most home settings, so I’m glad someone with experience could chime in. I think my biggest takeaway from what you shared is that keeping the file system bare metal and fiddling with it as little as possible is the strategy (vs. virtualizing it and running it out of one large Proxmox machine, for instance).

And I think you’re totally right about the LLMs being the real sticking point; I’d had no idea just how resource intensive they were, not just to train but even to operate, until I started looking into running one locally. It’s honestly making me think that rolling this out in phases might be better, starting with the NAS (while also doing some other infrastructure upgrades, like running Cat 6a and swapping my ISP all-in-one router for something that can run OPNsense paired with some WAPs). Then, if I can get some early successes under my belt, I can move on to the LLM arena and see how much time, money, and tears I want to spend getting that up and running.

Oh, and thanks also for mentioning TiB; it sent me down a very interesting rabbit hole on base-10 vs. base-2 byte measurements and how drive companies use the difference to pump up the number they get to advertise. I had no idea that was what accounted for the discrepancy in drive sizes, but it’s definitely not surprising.

atzanteol@sh.itjust.works on 28 Jan 04:13 collapse

I would definitely scale things out slowly. While the NAS will eventually be the cornerstone of your setup, it will be an investment. You could also try setting up a cheap server as a stand-alone to get the feel for running applications. Maybe even as cheap as a Raspberry Pi or small single-board system. Some of them have pretty decent specs at very affordable costs. Such a system could go on to serve simple services in your final architecture (DHCP, DNS, etc.).

There are sometimes ways to upgrade a RAID later. In one scenario I replaced the drives one at a time with larger drives and created a second RAID on the same disks (in a second partition). Wasn’t a great idea perhaps - but it worked! I just expanded my LVM pool to the new RAID and was off to the races. I’m sure performance was hit with two RAIDs on the same disks - but it did the job and worked well enough for me. 😉

I’m not familiar enough with ZFS to know what options it has for expansion. With MD these days I think you can just fail and replace each disk one-by-one and expand the RAID to the new size once they’re all replaced. MD can be pretty picky about drives having exactly the same number of sectors though, so care must be taken to use the same disks or partition a bit smaller than the drive… Waiting for each disk to sync can take ages but it’s possible. There may be other options for ZFS (scaling with more disks maybe?).
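
A sketch of the “partition a bit smaller” idea (the 1% margin is an arbitrary safety factor for illustration, not an mdadm requirement):

```python
# Size RAID partitions slightly under the nominal drive capacity so a
# replacement "8TB" drive with a few fewer sectors can still join the array.

SECTOR_BYTES = 512
nominal_bytes = 8 * 10**12   # what the label calls "8 TB"
MARGIN = 0.01                # leave ~1% unused as slack

sectors = int(nominal_bytes * (1 - MARGIN)) // SECTOR_BYTES
print(f"Partition size: {sectors} sectors "
      f"(~{sectors * SECTOR_BYTES / 10**12:.2f} TB)")
```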

Good luck with your project!

TseseJuer@lemmy.world on 28 Jan 01:23 next collapse

check out serverpartdeals

libretech@reddthat.com on 28 Jan 02:26 next collapse

Thanks, will do!

NudeNewt@lemm.ee on 28 Jan 02:47 collapse

They’re the best site around for high quality/capacity drives that don’t cost an arm and a leg. Another great resource for tools n’ stuff is awesomeselfhosted

Website: awesome-selfhosted.net

Github: github.com/awesome-selfhosted/awesome-selfhosted

libretech@reddthat.com on 28 Jan 04:30 collapse

Thanks so much for sharing! I just poked around for the Ironwolf 8TB drives I was thinking of, and it unfortunately looks like they’re sold out for now (as are the 8TB WD Reds, it looks like), but I’ll definitely keep an eye out for them here (and maybe explore some different size options honestly; the drive costs I was seeing on other sites were more than I expected, but I wasn’t sure if that was just the new normal; glad to have another option!) And thanks so much for the awesome-selfhosted list!! I don’t think I’d seen everything collected in one place like that before, that will be super helpful!

TechnicallyColors@lemm.ee on 28 Jan 10:17 collapse

Their prices lately have been very unimpressive.

TseseJuer@lemmy.world on 28 Jan 18:47 collapse

as with all things. prepare for it to get worse in every aspect

ikidd@lemmy.world on 28 Jan 03:09 next collapse

So, I’m a rabid selfhoster because I’ve spent too many years watching rugpull tactics from every company out there. I’m just going to list what I’ve ended up with, and it’s not perfect, but it is pretty damn robust. I’m running pretty much everything you talk about except much in the way of AI stuff at this point. I wouldn’t call it particularly energy efficient since the equipment isn’t very new. But take a read and see if it provokes any thoughts on your wishlist.


My Machine 1 is a Proxmox node with ZFS storage backing, and Machine 2 is a mirror image acting as a second Proxmox node for HA. Everything, even my OPNsense router, runs on Proxmox. My docker/k8s hosts are LXCs or VMs running on the nodes, and the nodes replicate nearly everything between them as a first-level, fast-recovery backup/high-availability failover. I can then live migrate guests around very quickly if I want to upgrade and reboot or otherwise maintain a node. I can also snapshot guests before updates or maintenance that I’m scared will break stuff, or if I’m experimenting and want to roll back when I fuck up.

Both nodes are backed up via Proxmox Backup Server for any guests I consider prod, and I take backups every hour and keep probably 200 backups at various intervals and amounts. These dedup in PBS so the space utilization for all these extra backups is quite low. I also backup via PBS to removable USB drives on a longer schedule, and swap those out offsite weekly. Because I bind mount everything in my docker compose stacks, recovering a particular folder at a point in time via folder restore lets me recover a stack quite granularly. Also, since it’s done as a ZFS snapshot backup, it’s internally consistent and I’ve never had a db-file mismatch issue that didn’t just journal out cleanly.

I also zfs-send critical datasets via syncoid to zfs.rent daily from each proxmox node.

Overall, this is highly flexible and has been very, very bulletproof over the last 5 or 6 years. I bought some decade-old 1U Dell servers with enough drive bays and dual Xeons, so I have plenty of threads and RAM, and I upgraded to IT-mode 12G SAS RAID cards, but it isn’t a powerhouse setup or anything; I might be $1000 into each of them. I have considered adding and passing through an external GPU to one node for building an ollama stack on one of the docker guests.

The PBS server is a little piece-of-trash i3 with an 8TB SATA drive and a gigabit NIC in it.

libretech@reddthat.com on 28 Jan 05:13 collapse

This is super interesting, thanks so much for sharing! In my initial poking around, I’d seen a lot of people that suggested virtualizing TrueNAS within Proxmox was a bit of a headache (especially when something inevitably goes wrong and everything goes down), but I hadn’t considered cutting out TrueNAS entirely and just running directly on Proxmox and pairing that virtualization with k8s and robust backups (I am pleasantly shocked that PBS can manage that many backups without it eating up crazy amounts of space). After the other comments I was sort of aligning around starting off with a TrueNAS build and then growing into some of the LLM stuff I mentioned, but I have to admit this is really intriguing as an alternative (even if as something to work towards once I’ve got some initial prototypes; figuring out k8s would be a really fun project I think). Just out of curiosity, how noisy do you find the old Dell servers? I have been hesitant both because of power draw and noise, but would love to get feedback from someone who has them. Thanks so much again for taking the time to write all of this out, I really appreciate it!

libretech@reddthat.com on 28 Jan 05:17 next collapse

(Also very curious about all of the HA stuff; it’s definitely on my list of things to experiment with, but probably down the line once I’ve gotten some basic infrastructure in place. Very excited at the prospect though)

ikidd@lemmy.world on 28 Jan 05:53 collapse

The HA stuff is only as hard as prepping the cluster and making sure it’s replicating fine, then enabling whichever guests you want for HA. It’s seriously not difficult at all.

ikidd@lemmy.world on 28 Jan 05:52 collapse

Oh, they’re noisy as hell when they wind up because they’re doing a big backup or something. I have them in my laundry room. If you had to listen to them, you’d quickly find something else. In the end, I don’t really use much processor power on these, it’s more about the memory these boards will hold. RAM was dirt cheap so having 256GB available for experimenting with kube clusters and multiple docker hosts is pretty sweet. But considering that you can overprovision both proc and ram on PM guests as long as you use your head, you can get away with a lot less. I could probably have gotten by as well or better with a Ryzen with a few cores and plenty of ram, but these were cheaper.

At times, I’ve moved all the active guests to one node (I have the PBS server set up as a qdevice for Proxmox to keep a quorum active, it gets pissy if it thinks it’s flying solo), and I’ll WoL the other one periodically to let the first node replicate to the second, then down it again when it’s done. If I’m going to be away for a while, I’ll leave both of them running so HA can take over, which has actually happened without me even noticing that the first server packed in a drive, the failover was so seamless it took me a week to notice. That can save a bit of power, but overall, it’s a kWh a day per server which in my area is about 12 cents.
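
The power math is napkin-simple (my local rate; plug in your own):

```python
# What "a kWh a day per server" works out to over a year.

KWH_PER_DAY = 1.0
RATE_USD_PER_KWH = 0.12  # my local rate; yours will differ

per_server_year = KWH_PER_DAY * 365 * RATE_USD_PER_KWH
print(f"~${per_server_year:.0f} per server per year")            # ~$44
print(f"Two nodes + the PBS box: ~${3 * per_server_year:.0f}/year")
```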

I’ve never seen the point of TrueNAS for me. I run Nextcloud as a docker stack using the AIO master container for myself and 8 users. Together, we use about 1TB of space on it, and that’s a few people with years of photos etc. I mount a separate virtual disk on the docker host that both Nextcloud and Immich can access, so they can share photos saved in users’ NC folders that get backed up from their phones. The AIO also has Collabora Office set up by default, so that might satisfy your document editing ask there.

As I said, I’ve thought I might get an eGPU and pass it to a docker guest for using AI. I’d prefer to get my Home Assistant setup not relying on the NabuCasa server. I don’t mind sending them money, and the STT service that buys me works very well for voice commands around the house, but it rubs me the wrong way to rely on anything on someone else’s computers. But it’s brutally slow when I try to run it even on my desktop Ryzen 7800 without a GPU, so until I decide to invest in a good GPU for that stuff, I’ll keep sending it out. At least I trust them way more than I ever would Google or Amazon. I’d do without if that was the choice.

None of this needs to be a both-feet-first jump; you can just take some old laptop and start to build a PM cluster and play with this. Your only limit will be the RAM.

I’ve also seen people build PM clusters using Mac Pro 2013 trashcans; you can get a 12-core Xeon with 64GB of RAM for like $200, and maybe a Thunderbolt enclosure for additional drives. Those would be super quiet and probably low power usage.

AdrianTheFrog@lemmy.world on 28 Jan 04:28 next collapse

For high-VRAM AI stuff it might be worth waiting to see how the 24GB B580 variant turns out.

Intel has a bunch of translation-layer sort of stuff that I think generally makes it easy to run most CUDA AI things on it, but I’m not sure if common AI software supports multi-GPU with it.

IDK how cash-limited you are, but if it’s just the VRAM you need and not necessarily the tokens/sec, it should be a much better deal when it releases.

Not entirely related, but I have a full half-hourly snapshotted computer backup going to a large HDD in my home server using Kopia; it’s very convenient and you don’t need to install anything on the server except a large drive and the ability to use ssh/sftp (or another method, it supports several). It supports many compression formats and also avoids storing duplicate data. I haven’t needed to use it yet, but I imagine it could become very useful in the future. I also have the same set up in the CLI on the server, largely so I can roll back in case some random person happens upon my Minecraft server (which is public and doesn’t have a whitelist…) and decides to destroy everything. It’s pretty easy to set up, and since it can back up over the internet, it’s something you could easily use for a whole family.

My home server (with a bunch of used parts plus a computer from the local university surplus store) was probably about ~$170 in total (i7 6700, 16GB DDR4, 256GB SSD, 8TB HDD) and is enough to host all of the stuff I have (very lightly modded MC with Geyser, a GitLab instance, and the backup) very easily. But it is very much not expandable (the case is quite literally tiny and I don’t have space to leave it open; I could get a PCIe storage controller but the PSU is weak and there aren’t many SATA ports), probably not all that future-proof either, and definitely isn’t something I would trust to perform well with AI models.

This (sold out now) is the HDD I got; I did a lot of research and they’re supposed to be super reliable. I was worried about noise, but after getting one I can say that as long as it isn’t within 4 feet of you, you’ll probably never hear it.

Anyways, it’s always nice to really do something the proper way and have something fully future proof, but if you just need to host a few light things you can probably cheap out on the hardware and still get a great experience. It’s worth noting that a normal Minecraft server, backups, and a document editor for example are all things that you can run on a Raspberry Pi if you really wanted to. I have absolutely no experience using a NAS, metasearch, or heavy mods however, those might be a lot harder to get fast for all I know.

libretech@reddthat.com on 29 Jan 01:30 collapse

Thank you so much for all of this! I think you’re definitely right that starting smaller and trying a few things out is more sensible. At least for now I’m going to focus on the lower-hanging fruit by doing the NAS build first and then building up to local AI once I have something stable (but I’ll definitely be keeping an eye out for GPU deals in the meantime, so thanks for mentioning the B580 variant; it wasn’t on my radar at all as an option). But I think the thread has definitely given me confidence that splitting things out that way makes sense as a strategy (I had been concerned when I first wrote it out that not planning everything all at once was going to cause me to miss some major efficiency, but it turns out that self-hosting is more like gardening than I thought, in that it grows organically with one’s interest and resources over time; sounds obvious in retrospect, but I was definitely approaching this more rigidly initially). And thank you for the HDD rec! I think the Exos are the level above the Ironwolf Pro I mentioned, so I’ll definitely consider them (especially if they come back in stock for a reasonable price at serverpartdeals or elsewhere). Just out of curiosity, what are you using for admin on your MC server? I had heard of Pterodactyl previously, but another commenter mentioned Crafty Controller as a bit easier to work with. Thank you again for writing all of this up, it’s super helpful!

AdrianTheFrog@lemmy.world on 29 Jan 03:07 collapse

I’m just using basic Fabric stuff running through a systemd service for my MC server. It basically just has every single performance mod I could find and nothing else (as well as Geyser+Floodgate), so there isn’t all that much admin stuff to do. I set up RCON (I think it’s called) to send commands from my computer, but I just set up everything through ssh. I haven’t heard of either Pterodactyl or Crafty Controller, I’ll check those out!

possiblylinux127@lemmy.zip on 28 Jan 05:30 next collapse

$4,000 seems like a lot to me. Then again, my budget was like $200.

I would start by setting yourself a smaller budget. Learn with cheaper investments before you screw up big. Obviously $200 is probably a bit low, but you could build something simple for around $500. Focus on upgradeability. Once you have a stable system, upskill and reflect on what you learned. Once you have a bit more knowledge, build a second and third system and then complete a Proxmox cluster. It might be overkill, but having three nodes gives a lot of flexibility.

One thing I will add: make sure you get quality enterprise storage. Don’t cheap out, since the lower-tier drives will have performance issues with heavier workloads. Ideally you should get enterprise SSDs.

Tablaste@linux.community on 28 Jan 15:50 collapse

I did a double take at that $4000 budget as well! Glad I wasn’t the only one.

libretech@reddthat.com on 29 Jan 00:27 collapse

You are both totally right. I think I anchored high here just because of the LLM stuff I am trying to get running at around a GPT4 level (which is what I think it will take for folks in my family to actually use it vs. continuing to pass all their data to OpenAI) and it felt like it was tough to get there without spending an arm and a leg on GPUs alone. But I think my plan is now to start with the NAS build, which I should be able to accomplish without spending a crazy amount and then building out iteratively from there. As you say, I’d prefer to screw up and make a $500 mistake vs. a multiple thousand dollar one. Thanks for the sanity check!

Estebiu@lemmy.dbzer0.com on 28 Jan 07:49 next collapse

For Llama 70B I’m using an RTX A6000; slightly older, but it does the job magnificently with its 48GB of VRAM.

bradd@lemmy.world on 28 Jan 10:51 next collapse

I’m running 70b on two used 3090s and an A6000 NVLink bridge. I think I got the cards for $900 each, and maybe $200 for the NVLink. Also works great.

libretech@reddthat.com on 29 Jan 02:44 collapse

Thanks for sharing! Will probably try to go this route once I get the NAS squared away and turn back to localLLMs. Out of curiosity, are you using the q4_k_m quantization type?

sntx@lemm.ee on 28 Jan 11:16 next collapse

I’m also on p2p 2x3090 with 48GB of VRAM. Honestly it’s a nice experience, but still somewhat limiting…

I’m currently running deepseek-r1-distill-llama-70b-awq with the aphrodite engine. Though the same applies for llama-3.3-70b. It works great and is way faster than ollama for example. But my max context is around 22k tokens. More VRAM would allow me more context, even more VRAM would allow for speculative decoding, cuda graphs, …
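
The context ceiling falls straight out of KV-cache arithmetic; a rough sketch (the model shape numbers are the published Llama 70B config, and the 5GB runtime overhead is a guess):

```python
# Rough KV-cache budget for a 70B model on 48 GB of VRAM,
# assuming ~4-bit weights and an fp16 KV cache.

layers, kv_heads, head_dim = 80, 8, 128  # Llama-70B shapes (GQA: 8 KV heads)
BYTES_PER_ELEM = 2                        # fp16

kv_per_token = 2 * layers * kv_heads * head_dim * BYTES_PER_ELEM  # K and V
print(f"KV cache: ~{kv_per_token / 2**20:.2f} MiB per token")      # ~0.31 MiB

weights_gb = 70 * 0.5                  # ~35 GB at 4-bit
budget_gb = 48 - weights_gb - 5        # minus a guessed 5 GB runtime overhead
print(f"~{budget_gb * 2**30 / kv_per_token:,.0f} tokens of context")
# lands in the mid-20k range -- the same ballpark as the ~22k I see in practice
```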

Maybe I’ll drop down to a 35b model to get more context and a bit of speed, but I can’t really justify the possible decrease in answer quality.

libretech@reddthat.com on 29 Jan 01:48 collapse

This is exactly the sort of tradeoff I was wondering about, thank you so much for mentioning this. I think ultimately I would probably align with you in prioritizing answer quality over context length (but it sure would be nice to have both!!) I think my plan for now, based on some of the other comments, is to go ahead with the NAS build and keep my eyes peeled for any GPU deals in the meantime (though honestly I am not holding my breath). Once I’ve proved to myself I can get something stable without burning the house down, I’ll move on to something more powerful for the local LLM. Thanks again for sharing!

libretech@reddthat.com on 29 Jan 01:42 collapse

Wow, that sounds amazing! I think that GPU alone would probably exceed my budget for the whole build lol. Thanks for sharing!

Waryle@jlai.lu on 28 Jan 11:25 next collapse

ZFS RAIDz expansion was released days ago in OpenZFS 2.3.0: cyberciti.biz/…/zfs-raidz-expansion-finally-here-…

It might help you with deciding how much storage you want

libretech@reddthat.com on 29 Jan 00:21 collapse

Woah, this is big news!! I’d been following some of the older articles talking about this being pending, but had no idea it just released, thanks for sharing! I’ll just need to figure out how much of a datahoarder I’m likely to become, but it might be nice to start with fewer than 6 of the 8TB drives and expand up (though I think 4 drives is the minimum that makes sense for RAIDz2; my understanding is also that energy consumption is roughly linear with the number of drives, though that could be very wrong, so maybe I’d even start with 4x 10-12TB drives if I can find them for a reasonable price). But thanks for flagging this!
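
Sketching the tradeoff under that linear-power assumption (the ~8W-per-drive idle figure is a typical NAS-drive ballpark, not a spec):

```python
# Compare RAIDz2 layouts on usable space vs. spindle-count power draw.

W_PER_DRIVE_IDLE = 8  # rough ballpark for a 3.5" NAS drive

def raidz2(n_drives: int, tb_each: int):
    usable_tb = (n_drives - 2) * tb_each  # two drives' worth of parity
    return usable_tb, n_drives * W_PER_DRIVE_IDLE

for n, tb in [(6, 8), (4, 12), (4, 10)]:
    usable, watts = raidz2(n, tb)
    print(f"{n} x {tb}TB RAIDz2: {usable} TB usable, ~{watts} W idle")
# 6x8TB: 32 TB / ~48 W;  4x12TB: 24 TB / ~32 W;  4x10TB: 20 TB / ~32 W
```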

Krill@feddit.uk on 29 Jan 09:06 collapse

Pretty sure TrueNAS Scale can host everything you want, so you might only want one server. Use EPYC for the PCIe lanes and a Fractal Design R7 XL case, and you could even escape needing a rack mount if you wanted. Use a PCIe-to-M.2 adapter and you could easily host apps on NVMe drives in a mirrored pool, and use a special vdev to speed up the HDD storage pool.

The role of the Proxmox server would essentially be filled by apps and/or VMs you could turn on or off as needed.