from orsetto@lemmy.dbzer0.com to selfhosted@lemmy.world on 17 Feb 19:20
https://lemmy.dbzer0.com/post/63935079
Hi! I’ve never had a server, except for a Raspberry Pi that I use as a DNS server (Pi-hole), but I’ve been wanting one for a long time. The other day I found something that is kinda old, but very cheap, and I’ve been thinking about buying it ever since.
It’s an IBM System x3500 M4. It has an E5-2620, 32 GB of DDR3, 7 wonderful 900 GB SAS drives (not sure if they’re spinning disks or solid state), which would cover all of my Linux ISO needs for at least the next year (probably a bit more), and a ServeRAID M5110 RAID controller. All for 210 euros, which I think is very cheap.
From what I know, the E5 is power hungry by modern standards, and SAS drives aren’t exactly easy to find replacements for. How much of a problem would that (mostly the SAS part) be?
Also, what can I expect from RAID? That’s definitely the part that worries me the most, as I’ve never worked with it.
Another huge part: I don’t care about accessing it from the outside myself, but I’d be sharing this system with my brother, who lives in another city, so we would have to figure out a way of doing that. Normally I’d use port forwarding, but we’re both behind CG-NAT. Is there any way of doing it without a third-party server as a proxy/VPN/whatever? If not, what service would you recommend for this purpose?
Another thing: my brother happens to have a probably-working 16 GB ECC DDR3 stick lying around, except that it’s 1600 MHz and the CPU only supports up to 1333 MHz. I’m pretty sure that if I put in two sticks with different frequencies, the CPU would run them both at the lower speed, but is that still the case when the CPU doesn’t support the frequency of one of the sticks? (In short: would adding the other stick work?)
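(If it does boot with both sticks, one way to check whether the 1600 MHz stick actually clocked down would be dmidecode. A minimal sketch, assuming a Linux install with dmidecode available and run as root:)

```python
#!/usr/bin/env python3
"""Show rated vs. actual DIMM speeds via dmidecode (run as root).

Minimal sketch for verifying mixed sticks after booting: "Speed" is
each stick's rated speed, while "Configured Memory Speed" (older
dmidecode versions call it "Configured Clock Speed") is the speed
the memory controller is actually running it at.
"""
import subprocess

out = subprocess.run(
    ["dmidecode", "--type", "memory"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    line = line.strip()
    if line.startswith(("Speed:", "Configured Memory Speed:",
                        "Configured Clock Speed:")):
        print(line)
```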
If you have any other pointers or anything, let me know. Thank you :)
I have an x3500 M4 but found it used way too much energy for my requirements. A regular PC does the job for less than 25% of the electricity.
So I’d say check your needs and the power footprint. The electricity bill comes every month, and something running 24/7 adds up real quick.
Released in 2012 (Product Spec). It certainly has potential, but as you say, it will consume more power than something more modern. There are things you can do to tame the beast. You might even adopt something that I’ve been doing: shutting down the server before I retire for the evening. I am the only user, so it would otherwise just sit there eating electricity while I sleep. I also have no mass Linux ISO downloads running at midnight, so there’s that. Good news though: RAM will be pretty cheap.
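If you want to automate the nightly shutdown, here’s a minimal sketch using rtcwake from util-linux, which powers the box off and sets the RTC alarm to wake it back up. Whether the wake-up actually works depends on the board’s firmware, so test it first:

```python
#!/usr/bin/env python3
"""Nightly shutdown with automatic morning wake-up.

Minimal sketch: run it from cron or a systemd timer before bedtime.
Assumes a Linux host with util-linux's rtcwake and a BIOS/UEFI whose
RTC alarm can actually power the machine back on (test this first!).
"""
import subprocess

WAKE_AFTER_SECONDS = 8 * 3600  # stay off for roughly 8 hours

# -m off: full power-off; -s N: set the RTC alarm N seconds from now.
subprocess.run(
    ["rtcwake", "-m", "off", "-s", str(WAKE_AFTER_SECONDS)],
    check=True,
)
```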
Used/refurb SAS drives aren’t that expensive. Can someone with better memory than I please link to that site for second hand server components?
The reason SAS drives are usually more expensive isn’t that the tech itself costs more (it’s largely just a different interface), but that “enterprise grade” hardware goes through a few additional QA steps, such as a break-in cycle at the factory to weed out defective units.
While a server such as the one you described is slightly power hungry, it’s not that bad. Plus, if you wanna get into servers long term, it could serve as a useful way to get used to the hardware involved.
Server hardware is at its core not that different from consumer hardware, but it does often come with some nice and useful additions. Take the RAID controller, for example:
RAID is entirely optional, and whether to use it depends on your use case. I seem to be the only one in here who actually likes hardware RAID; software RAID is more popular in the self-hosting community. If you wanna live without it, use JBOD mode and access each drive normally. Alternatively, pool as many disks as you want into a RAID6 and you get one large storage device with built-in redundancy. The array can be managed either from the controller’s BIOS utility or from the OS using tools such as storcli.
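If you do go the storcli route, here’s a rough sketch of what poking at the controller looks like (the /c0 controller ID and the enclosure:slot range are placeholders; check the output of the first command for your actual layout):

```python
#!/usr/bin/env python3
"""Query a MegaRAID-family controller (like the ServeRAID M5110) via storcli.

Sketch only: the /c0 controller ID and the enclosure:slot IDs below are
placeholders; run `storcli /c0 show` first and adapt to your hardware.
"""
import subprocess

def storcli(*args: str) -> str:
    """Run a storcli subcommand and return its text output."""
    return subprocess.run(
        ["storcli", *args], capture_output=True, text=True, check=True
    ).stdout

# Controller summary: virtual drives, physical drives, firmware, etc.
print(storcli("/c0", "show"))

# Per-drive details for every slot in every enclosure.
print(storcli("/c0", "/eall", "/sall", "show"))

# Example (commented out!) of building one RAID6 volume from 7 drives.
# Verify the enclosure:slot IDs from the output above before running.
# storcli("/c0", "add", "vd", "type=raid6", "drives=252:0-6")
```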
I got 3 Seagate Exos X16 14 TB drives for only $140 each (refurbished) at the end of 2022. I’ve got them in TrueNAS as a zfs array and they work great.
Mine were the SATA version, which isn’t currently in stock. The SAS version of my drives go for $299 now. The SATA X14 version is $350.
So prices for the same refurbished drives are more than double what they were ~3.5 years ago; they really are expensive now! I paid about $10 per TB, but they’re all $15–25 per TB now. I was looking for drives for a friend who wants to get started self hosting, and I was shocked by how much refurb drives had gone up.
This is all from https://serverpartdeals.com/ by the way, I’m assuming that’s the site you mean too.
I don’t know exactly what you want to host, but I have an old Fujitsu Q920 with an i5, 16 GB of RAM, and a 2 TB SSD, which cost me around 200 euros.
I’m serving around 20 services on it including Nextcloud, Navidrome, Paperless NG, Emby, Matrix, Friendica, Wireguard, Zabbix, DNS blocker and a few VMs. Processor utilization is around 5% most of the time and it uses about 8 GB of RAM.
I admit that Nextcloud could be more responsive and faster but for the family it is enough.
I haven’t measured exactly how much electricity it uses, but I would have noticed a bigger change on my bill.
For backups I have a remote node at another location that retrieves ZFS snapshots periodically.
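For reference, that replication is just zfs send piped into zfs recv over SSH; a minimal sketch below (the dataset names and the `backup` host alias are made up, and a real setup would use incremental sends and snapshot pruning, which tools like sanoid/syncoid automate):

```python
#!/usr/bin/env python3
"""Push a ZFS snapshot to a remote backup box over SSH.

Minimal sketch: pool/dataset names and the `backup` SSH host are
placeholders. Real setups usually do incremental sends (zfs send -i)
and prune old snapshots.
"""
import subprocess
from datetime import datetime, timezone

DATASET = "tank/data"           # hypothetical source dataset
REMOTE = "backup"               # hypothetical SSH host alias
REMOTE_DATASET = "backup/data"  # hypothetical target dataset

snap = f"{DATASET}@{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"

# 1. Take a point-in-time snapshot locally.
subprocess.run(["zfs", "snapshot", snap], check=True)

# 2. Stream it to the remote pool: zfs send ... | ssh backup zfs recv ...
send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
subprocess.run(
    ["ssh", REMOTE, "zfs", "recv", "-F", REMOTE_DATASET],
    stdin=send.stdout,
    check=True,
)
send.stdout.close()
if send.wait() != 0:
    raise RuntimeError("zfs send failed")
```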
Anyway, I’ve had this setup for 2 years now and I’m happy. It does what I expect it to do.
As a side note, I use FreeBSD with jails and bhyve for this. A Proxmox box running VMs with nested Docker apps may need something more performant, so be aware.
Regarding CG-NAT and port forwarding: I too am behind CG-NAT with my ISP, and my solution is renting a cheap VPS (I use Contabo, others might have different recommendations) and installing Pangolin. It’s tunneling software that uses some UDP fuckery to hole-punch straight through the network with its Newt tunnels. I use this so my friends and family can access my Plex server and send in Overseerr requests.
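If you’d rather skip Pangolin and roll it by hand, the same idea works with plain WireGuard: the VPS is the hub with the public IP, and both of you dial out to it, so CG-NAT never gets in the way. Here’s a sketch that writes out the two configs (every key, IP, and endpoint below is a placeholder):

```python
#!/usr/bin/env python3
"""Generate a minimal WireGuard hub-and-spoke config pair.

Sketch only: every key, IP, and endpoint below is a placeholder.
The idea: the VPS listens on a public IP; the CG-NAT'd home server
connects *outward* to it, so no port forwarding is ever needed.
Generate real keys with: wg genkey | tee priv | wg pubkey > pub
"""

VPS_PUBLIC_IP = "203.0.113.10"  # hypothetical VPS address

VPS_CONF = """\
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <VPS_PRIVATE_KEY>

[Peer]
# the home server behind CG-NAT
PublicKey = <HOME_PUBLIC_KEY>
AllowedIPs = 10.8.0.2/32
"""

HOME_CONF = f"""\
[Interface]
Address = 10.8.0.2/24
PrivateKey = <HOME_PRIVATE_KEY>

[Peer]
# the VPS hub with the public IP
PublicKey = <VPS_PUBLIC_KEY>
Endpoint = {VPS_PUBLIC_IP}:51820
AllowedIPs = 10.8.0.0/24
# keep the NAT mapping alive from the inside
PersistentKeepalive = 25
"""

for name, text in [("wg0-vps.conf", VPS_CONF), ("wg0-home.conf", HOME_CONF)]:
    with open(name, "w") as f:
        f.write(text)
    print(f"wrote {name}")
```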
Servers are just expensive hardware. You can accomplish 95% of the same things with a consumer-grade desktop, without all the extra power, heat, noise, and dependence on specialized hardware.
Generally speaking, it’s recommended these days to use software RAID rather than relying on a hardware controller. If anything happens to that RAID controller, you will need to replace it with a duplicate in order to mount your drives; software RAID is managed by the Linux OS and is much easier to recover. There used to be a bit of a performance penalty for software RAID, but these days it’s negligible.
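As a rough illustration, building the OP’s seven drives into a software RAID6 with mdadm would look something like this (device names are placeholders, and `--create` wipes the disks, so double-check them first):

```python
#!/usr/bin/env python3
"""Create a 7-drive software RAID6 with mdadm.

Sketch only: /dev/sd[b-h] are placeholder device names, and
--create WIPES those disks. On the M5110 you'd first expose the
drives individually (JBOD/passthrough) so Linux sees them raw.
"""
import subprocess

DRIVES = [f"/dev/sd{c}" for c in "bcdefgh"]  # hypothetical 7 SAS drives

# Build the array: RAID6 survives any two simultaneous drive failures.
subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=6", f"--raid-devices={len(DRIVES)}", *DRIVES],
    check=True,
)

# Put a filesystem on the array; the initial parity sync runs in the
# background and its progress shows up in /proc/mdstat.
subprocess.run(["mkfs.ext4", "/dev/md0"], check=True)
print(open("/proc/mdstat").read())
```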