got a ripping and converting pc that ain’t any better. it’s all it does, so speed don’t matter any. hb has queue, so nbd. i just let it go… and go… and go…
Lol, I used to have an 08 Mac mini and that required a razor blade and putty knives to open. I got pretty good at it after separately upgrading the RAM adding an SSD and swapping out the cpu for the most powerful option that Apple didn’t even offer
When I used to work at the “Fruit Stand” I never had to repair those white back Mini’s thankfully, but I do remember the putty knives being around. The unibody iMac was the worse, had to pizza cutter the whole LCD off the frame to replace anything, then glue it back on!
Lol by the time I actually needed to upgrade from that mini, all the fruit stand stuff wasn’t really upgradable anymore. It was really frustrating, so I jumped ship to Windows.
Those iMac screens seemed so fiddley to remove just to get access to the drives. Why won’t they just bolt them in instead of using glue! (I know why, but I still don’t like it)
jws_shadotak@sh.itjust.works
on 08 Jan 19:00
nextcollapse
I was for a while. Hosted a LOT of stuff on an i5-4690K overclocked to hell and back. It did its job great until I replaced it.
Now my servers don’t lag anymore.
EDIT: CPU usage was almost always at max. I was just redlining that thing for ~3 years. Cooling was a beefy Noctua air cooler so it stayed at ~60 C. An absolute power house.
theunknownmuncher@lemmy.world
on 08 Jan 19:29
collapse
4690k was solid! Mine is retired, though. Now I selfhost on ARM
jws_shadotak@sh.itjust.works
on 08 Jan 19:57
collapse
I retired mine with a 12600K and I’m not sure what to do with it now.
NickwithaC@lemmy.world
on 08 Jan 19:04
nextcollapse
4 gigs of RAM is enough to host many singular projects - your own backup server or VPN for instance. It’s only if you want to do many things simultaneously that things get slow.
It is amazing what you can do with so little. My server has nas, jellyfin, plex, ebook reader, recipe, vpn, notes, music server, backups, and serves 4 people. If it hits 4gb ram usage it is a rare day.
Blue_Morpho@lemmy.world
on 08 Jan 19:26
nextcollapse
What hardware are you using where the cpu says you are limited to 4gb?
Even a 25 year old Pentium 4 supports 8GB.
theunknownmuncher@lemmy.world
on 08 Jan 19:31
nextcollapse
My guess is an x86 32bit machine
sundrei@lemmy.sdf.org
on 08 Jan 19:40
nextcollapse
Might be using a laptop where the RAM is soldered to the board. I’ve got a Thinkpad X280 that’s like that: no slots, just surface-mounted RAM.
Negative, Pentium 4 was x86 and thus could only address 32 bits.
64bit CPUs started hitting the mainstream in 2003, but 64bit Windows didn’t take off until Win7 in 2009. (XP had it, but no one bothered switching from 32b XP to 64b XP just to use more memory and have early adoption issues. Vista had it, but no one had Vista).
It’s kind of like how the 8086 was a 16 bit processor but could access 1 megabyte of ram (640k ram 384 k reserved for rom) . -Or the 286 which was 16 bit but could access 24 MB.
But even without that the Prescott P4’s supported 64 bits.
PAE was introduced with the Pentium Pro 30 years ago. I used it on Dell Pentium II servers that ran SQL Server. Even the 386 from 1985 could access 64 terabytes of ram using segmented mode.
While technically true, the P4 did support PAE, in reality you couldn’t really make use of it on consumer hardware for most of its lifetime. No ordinary socket 478 mainboard with DDR1 memory supported more than 4 GB of RAM. With socket 775 more RAM was possible, but that socket is “only” ~20 years old.
Besides that, there were other even newer systems that supported only 4 GB of RAM, like some Intel Atom mainboards with a single DDR2 socket. Same with Via C3 mainboards.
Then I don’t understand what your point is. A CPU on its own without a system isn’t of any use. Since there were no motherboards allowing you to use that much RAM, the point about the CPU supporting it is moot as far as I am concerned.
Imagine if I did a meme that blamed AMD for only supporting DDR4 because my motherboard only did DDR4 despite all AMD 7000 and newer supporting DDR4 or DDR5…
I’m sure a lot of people’s self hosting journey started on junk hardware… “try it out”, followed by “oh this is cool” followed by “omg I could do this, that and that” followed by dumping that hand-me-down garbage hardware you were using for something new and shiny specifically for the server.
My unRAID journey was this exactly. I now have a 12 hot/swap bay rack mounted case, with a Ryzan 9 multi core, ECC ram, but it started out with my ‘old’ PC with a few old/small HDDs
revv@lemmy.blahaj.zone
on 08 Jan 19:46
nextcollapse
7 websites, Jellyfin for 6 people, Nextcloud, CRM for work, email server for 3 domains, NAS, and probably some stuff I’ve forgotten on a $4 computer from a tiny thrift store in BFE Kansas. I’d love to upgrade, but I’m always just filled with joy whenever I think of that little guy just chugging along.
EspoCRM. I really like it for my purposes. I manage a CiviCRM instance for another job that needs more customization, but for basic needs, I find espo to be beautiful, simple, and performant.
brbposting@sh.itjust.works
on 08 Jan 23:08
collapse
Sweeeeet thank you! Demo looks great. Now to figure out whether an uber n00ber can self host it in a jiffy or not. 🙏
It does fine. It’s an i5-6500 running CPU transcoding only. Handles 2-3 concurrent 1080p streams just fine. Sometimes there’s a little buffering if there’s transcoding going on. I try to keep my files at 1080p for storage reasons though. This thing’s not going to handle 4k transcoding very well, but it does okay if you don’t expect too much from it.
I’m skeptical that you are doing much video transcoding anyway. 1080p is supported on must devices now, and h264 is best buddies with 1080p content - a codec supported even on washing machines. Audio may be transcoded more often.
RogueBanana@lemmy.zip
on 09 Jan 07:54
nextcollapse
Most of my content is h265 and av1 so I assume they are also facing a similar issue. I usually use the jellyfin app on PC or laptop so not an issue but my family members usually use the old TV which doesn’t support it.
AV1 is definitely a showstopper a lot of the time indeed. H265 I would expect to see more on 2k or 4k content (though native support is really high anyway).
My experience so far has been seeing transcoding done only becuase the resolution is unsupported when I try watching 4k videos on an older 1080p only chromecast.
What do you mean by showstopper? I only encode my shows into AV1/opus and I never had any transcoding happening on any of my devices.
It’s well supported on any recent Browser compared to x264/x265… specially 10bit encodes. And software decoding is nearly present on any recent device.
Dunno about 4k though, I haven’t the necessary screen resolution to play any 4k content… But for 1080p, AV1 is the way to go IMO.
Free open/source
Any browser supported
Better compression
Same objective quality with lower bitrate
A lot of cool open source project arround AV1
It has it’s own quirks for sure (like every codec) but it’s far from a bad codec. I’m not a specialist on the subject but after a few months of testing/comparing/encoding… I settled with AV1 because it was comparative better than x264/x265.
Showstopper in the sense that it may not play natively and require transcoding. While x264 has pretty much universal support, AV1 does not… at least not on some of my devices. I agree that it is a good encoder and the way forward but its not the best when using older devices. My experience has been with Chromecast with Google TV. Looks like google only added AV1 support in their newest Google TV Streamer (late 2024 device).
Not a huge amount of transcoding happening, but some for old Chromecasts and some for low bandwidth like when I was out of the country a few weeks ago watching from a heavily throttled cellular connection. Most of my collection is h264, but I’ve got a few h265 files here and there. I am by no means recommending my setup as ideal, but it works okay.
Absolutely, whatever works for you. I think its awesome to use the cheapest hardware possible to do these things. Being able to use a media server without transcoding capabilities? Brilliant. I actually thought you’d probably be able to get away with no transcoding at all since 1080p has native support on most devices and so does h264. In the rare cases, you could transcode beforehand (like with a script whenever a file is added) so you’d have an appropriate format on hand when needed.
My i5 6600k will turn 10 years old this year. I’m fortunate because upgrading to 32 GB should keep it running for a while still.
passiveaggressivesonar@lemmy.world
on 08 Jan 20:47
nextcollapse
Why didn’t you post this before I bought the RAM?!
thebardingreen@lemmy.starlightkel.xyz
on 08 Jan 20:49
nextcollapse
I’m hosting a minio cluster on my brother-in-law’s old gaming computer he spent $5k on in 2012 and 3 five year old mini-pcs with 1tb external drives plugged into them. Works fine.
Aw yep, bought an old HP pro-lient something something with 2 old-ass intel xeons and 64GB ram for practically nothing. Thing’s been great. It’s a bit loud but runs anything I throw at it.
Just keep an eye on the power usage, depending on how expensive electricity is in your area. I live in California which has very expensive electricity, and buying newer, more power efficient hardware works out cheaper than 10+ year old Xeons over the long run, even if you get the Xeon system for free.
potentiallynotfelix@lemmy.fish
on 08 Jan 21:07
nextcollapse
I’ve got a i3-10100, 16gb ram, and an unused gtx 960. It’s terrible but its amazing at the same time. I built it as a gaming pc then quit gaming.
possiblylinux127@lemmy.zip
on 08 Jan 21:13
nextcollapse
gortbrown@lemmy.sdf.org
on 08 Jan 21:40
nextcollapse
I used to self host some stuff on an old 2011 iMac. Worked fine, actually
myersguy@lemmy.simpl.website
on 08 Jan 21:41
nextcollapse
People in this thread have very interesting ideas of what “shit hardware” is
lka1988@sh.itjust.works
on 08 Jan 22:44
nextcollapse
My cluster ranges from 4th gen to 8th gen Intel stuff. 8th gen is the newest I’ve ever had (until I built a 5800X3D PC).
I’ve seen people claiming 9th gen is “ancient”. Like…ok moneybags.
rebelsimile@sh.itjust.works
on 08 Jan 22:47
collapse
My 9th gen intel is still not the bottleneck of my 120hz 4K/AI rig, not by a longshot.
TMP_NKcYUEoM7kXg4qYe@lemmy.world
on 09 Jan 13:25
collapse
Yep any core i3 is fine even for desktop given an SSD and enough RAM. Once you delve into the core2 era, you start having problems because it lacks the compression and encryption instructions necessary for the day to day smoothness. In a server you might get away with core 2 duo as long as you don’t use full disk encryption and get an SSD or at least use ram for caching. Though that would be kinda a bizarre setup on a computer with 512 MB of ram.
You can do quite a bit with 4GB RAM. A lot of people use VPSes with 4GB (or less) RAM for web hosting, small database servers, backups, etc. Big providers like DigitalOcean tend to have 1GB RAM in their lowest plans.
shadowtofu@discuss.tchncs.de
on 08 Jan 22:08
nextcollapse
I met someone that was throwing out old memory modules. Literally boxes full of DDR, DDR2 modules. I got quite excited, hoping to upgrade my server’s memory. Yeah, DDR2 only goes up to 2GiB. So I am stuck with 2×2GiB. But I am only using 85% of that anyways, so it’s fine.
mspencer712@programming.dev
on 08 Jan 22:12
nextcollapse
Yep, mspencer dot net (what little of it is currently up, I suck at ops stuff) is 2012-vintage hardware, four boxes totaling 704 GB RAM, 8x10TB SAS disks, and a still-unused LTO-3 tape drive. I’ll upgrade further when I finally figure out how to make proper use of what I already have. Until then it’s all a fancy heated cat tree, more or less.
My home Kubernetes cluster started out on a Core i7-920 with 8 GB of memory.
Upgraded to 16 GB memory
Upgraded to a Core i5-2400S
Upgraded to a Core i7-3770
Upgraded to 32 GB memory
Recently Upgraded to a Core i5-7600K
I think I’ll stay with that for rather long…
I did however add 2 Intel NUCs (gen 6 and gen 8) to the cluster to have a distributed control plane and some distributed storage.
sugar_in_your_tea@sh.itjust.works
on 08 Jan 23:09
nextcollapse
Wow, it’s been a long time since I had hardware that awful.
My old NAS was a Phenom II x4 from 2009, and I only retired it a year and a half ago when I upgraded my PC. But I put 8GB RAM into that since it was a 64-bit processor (could’ve put up to 32GB I think, since it had 4 DDR3 slots). My NAS currently runs a Ryzen 1700, but I still have that old Phenom in the closet in case that Ryzen dies, but I prefer the newer HW because it’s lower power.
That said, I once built a web server on an Arduino which also supported websockets (max 4 connections). That was more of a POC than anything though.
empireOfLove2@lemmy.dbzer0.com
on 08 Jan 23:10
nextcollapse
your hardware ain’t shit until it’s a first gen core2duo in a random Dell office PC and 2gb of memory that you specifically only use just because it’s a cheaper way to get x86 when you can’t use your raspberry pi.
Also they lie most of the time and it may technically run fine on more memory, especially if it’s older when dimm capacities were a lot lower than they can be now. It just won’t be “supported”.
31337@sh.itjust.works
on 08 Jan 23:19
nextcollapse
Oldest I got is limited to 16GB (excluding rPis). My main desktop is limited to 32GB which is annoying, because I sometimes need more. But, I have a home server with 128GB of RAM that I can use when it’s not doing other stuff. I once needed more than 128GB of RAM (to run optimizations on a large ONNX model, iirc), so had to spin up an EC2 instance with 512GB of RAM.
I’m self-hosting in a 500GB HDD, 2 cores AMD A6, 8GB RAM thinkcentre (access for LAN only) that I got very cheap.
It could be better, I’m going to buy a new computer for personal use and I’m the only one in my family who uses the hosted services, so upgrades will come later 😴
biscuitswalrus@aussie.zone
on 08 Jan 23:35
nextcollapse
3x Intel NUC 6th gen i5 (2 cores) 32gb RAM. Proxmox cluster with ceph.
I just ignored the limitation and tried with a single sodim of 32gb once (out of a laptop) and it worked fine, but just backed to 2x16gb dimms since the limit was still 2core of CPU. Lol.
Running that cluster 7 or so years now since I bought them new.
I suggest only running off shit tier since three nodes gives redundancy and enough performance. I’ve run entire proof of concepts for clients off them. Dual domain controllers and FC Rd gateway broker session hosts fxlogic etc. Back when Ms only just bought that tech. Meanwhile my home “ARR” just plugs on in docker containers. Even my opnsense router is virtual running on them. Just get a proper managed switch and take in the internet onto a vlan into the guest vm on a separate virtual NIC.
How is ceph working out for you btw? I’m looking into distributed storage solutions rn. My usecase is to have a single unified filesystem/index, but to store the contents of the files on different machines, possibly with redundancy. In particular, I want to be able to upload some files to the cluster and be able to see them (the directory structure and filenames) even when the underlying machine storing their content goes offline. Is that a valid usecase for ceph?
biscuitswalrus@aussie.zone
on 09 Jan 06:19
collapse
I’m far from an expert sorry, but my experience is so far so good (literally wizard configured in proxmox set and forget) even during a single disk lost. Performance for vm disks was great.
I can’t see why regular file would be any different.
I have 3 disks, one on each host, with ceph handling 2 copies (tolerant to 1 disk loss) distributed across them. That’s practically what I think you’re after.
I’m not sure about seeing the file system while all the hosts are all offline, but if you’ve got any one system with a valid copy online you should be able to see. I do. But my emphasis is generally get the host back online.
I’m not 100% sure what you’re trying to do but a mix of ceph as storage remote plus something like syncthing on a endpoint to send stuff to it might work? Syncthing might just work without ceph.
I also run zfs on an 8 disk nas that’s my primary storage with shares for my docker to send stuff, and media server to get it off. That’s just truenas scale. That way it handles data similarly. Zfs is also very good, but until scale came out, it wasn’t really possible to have the “add a compute node to expand your storage pool” which is how I want my vm hosts. Zfs scale looks way harder than ceph.
Not sure if any of that is helpful for your case but I recommend trying something if you’ve got spare hardware, and see how it goes on dummy data, then blow it away try something else. See how it acts when you take a machine offline. When you know what you want, do a final blow away and implement it with the way you learned to do it best.
Not sure if any of that is helpful for your case but I recommend trying something if you’ve got spare hardware, and see how it goes on dummy data, then blow it away try something else.
This is good advice, thanks! Pretty much what I’m doing right now. Already tried it with IPFS, and found that it didn’t meet my needs. Currently setting up a tahoe-lafs grid to see how it works. Will try out ceph after this.
Next, cheap, second hand mini desktop Asus Eee Box.
32 bit Intel Atom like N270, max. 1 GB RAM DDR2 I think.
Real metal under the plastic shell.
Could even run without active cooling (I broke a fan connector).
Mainly telemetry, like temperature inside, outside.
Script to read a data and push it into a RRD, later PostreSQL.
ligthttpd to serve static content, later PHP.
Once it served as a bridge, between LAN and LTE USB modem.
I have one of these that I use for Pi-hole. I bought it as soon as they were available. Didn’t realise it was 2012, seemed earlier than that.
ThunderLegend@sh.itjust.works
on 09 Jan 09:17
collapse
This was my media server and kodi player for like 3 years…still have my Pi 1 lying around. Now I have a shitty Chinese desktop I built this year with i5 3rd. Gen with 8gb ram
aluminium@lemmy.world
on 09 Jan 00:43
nextcollapse
Odd, I have a Celeron J3455 which according to Intel only supports 8GB, yet I run it with 16 GB
Same here in a Synology DS918+. It seems like the official Intel support numbers can be a bit pessimistic (maybe the higher density sticks/chips just didn’t exist back when the chip was certified?)
popekingjoe@lemmy.world
on 09 Jan 01:45
nextcollapse
The oldest hardware I’m still using is an Intel Core i5-6500 with 48GB of RAM running our Palworld server. I have an upgrade in the pipeline to help with the lag, because the CPU is constantly stressed, but it still will run game servers.
andrew_bidlaw@sh.itjust.works
on 09 Jan 01:50
nextcollapse
I faced that only with different editions of Windows limiting it by itself.
I had a old Acer SFF desktop machine (circa 2009) with an AMD Athlon II 435 X3 (equivalent to the Intel Core i3-560) with a 95W TDP, 4 GB of DDR2 RAM, and 2 1TB hard drives running in RAID 0 (both HDDs had over 30k hours by the time I put it in). The clunker consumed 50W at idle. I planned on running it into the ground so I could finally send it off to a computer recycler without guilt.
I thought it was nearing death anyways, since the power button only worked if the computer was flipped upside down. I have no idea why this was the case, the computer would keep running normally afterwards once turned right side up.
The thing would not die. I used it as a dummy machine to run one-off scripts I wrote, a seedbox that would seed new Linux ISOs as it was released (genuinely, it was RAID0 and I wouldn’t have downloaded anything useful), a Tor Relay and at one point, a script to just endlessly download Linux ISOs overnight to measure bandwidth over the Chinanet backbone.
It was a terrible machine by 2023, but I found I used it the most because it was my playground for all the dumb things that I wouldn’t subject my regular home production environments to. Finally recycled it last year, after 5 years of use, when it became apparent it wasn’t going to die and far better USFF 1L Tiny PC machines (i5-6500T CPUs) were going on eBay for $60. The power usage and wasted heat of an ancient 95W TDP CPU just couldn’t justify its continued operation.
HappyStarDiaz@real.lemmy.fan
on 09 Jan 05:39
collapse
Always wanted am x3, just such an oddball thing, I love this. I had a 965 x4
The X3 CPUs were essentially quad cores where one of the cores failed a quality control check. Using a higher end Mobo, it was possible to unlock the fourth core with varying results. This was a cheap consumer Acer prebuilt though, so I didn’t have that option.
VoteNixon2016@lemmy.blahaj.zone
on 09 Jan 04:22
nextcollapse
I just learned that this resolution resulted from 4:3 screens which got some wideness added to reach 16:9 from an awesome person in this comment thread 😊
VoteNixon2016@lemmy.blahaj.zone
on 09 Jan 07:21
collapse
I had to check the post not logged in, weirdly I only see your comment when I’m logged in, but yeah, I (almost) only ever ssh into it, so I never really noticed the resolution until you pointed it out
Which doesn’t sound like much, but if you have applications designed for 1024x768 (which was pretty much the standard PC resolution for years) then at least it would fit on the screen.
kind of…
a “AMD GX-420GI SOC: quad-core APU” the one with no L3 Cache, in an Thin Client and 8Gb Ram. old Laptop ssd for Storage (128GB)
Nextcloud is usable but not fast.
All my stuff is running on a 6-year-old Synology D918+ that has a Celeron J3455 (4-core 1.5 GHz) but upgraded to 16 GB RAM.
Funny enough my router is far more powerful, it’s a Core i3-8100T, but I was picking out of the ThinkCentre Tiny options and was paranoid about the performance needed on a 10 Gbit internet connection
I got a 1U rack server for free from a local business that was upgrading their entire fleet. Would’ve been e-waste otherwise, so they were happy to dump it off on me. I was excited to experiment with it.
Until I got it home and found out it was as loud as a vacuum cleaner with all those fans. Oh, god no…
I was living with my parents at the time, and they had a basement I could stick it in where its noise pollution was minimal. I mounted it up to a LackRack.
Since moving out to a 1 bedroom apartment, I haven’t booted it. It’s just a 70 pound coffee table now. :/
Interesting, I haven’t had any issues with things loading with mine, maybe it’s your adlists causing issues? Try disabling some, there might be false positives in there giving you issues
Rehabilitated HP z440 workstation, checking in! Popped in a used $20 e5-2620v4 xeon CPU and 64gb of RAM and it sails for my use cases. TrueNAS as the base OS and a TalOS k8’s cluster in a VM to handle apps. Old but gold.
pumpkinseedoil@mander.xyz
on 10 Jan 12:18
nextcollapse
2 GB RAM rasp pi 4 :))
we_avoid_temptation@lemmy.zip
on 15 Jan 01:44
collapse
It’s getting up there in years but I’m running a Dell T5610 with 128GB RAM. Once I start my new job I might upgrade cause it’s having issues running my MC server.
I started my self hosting journey on a Dell all-in-one PC with 4 GB RAM, 500 GB hard drive, and Intel Pentium, running Proxmox, Nextcloud, and I think Home Assistant. I upgraded it eventually, now I’m on a build with Ryzen 3600, 32 GB RAM, 2 TB SSD, and 4x4 TB HDD
My first server was a single-core Pentium - maybe even 486 - desktop I got from university surplus. That started a train of upgrading my server to the old desktop every 5-or-so years, which meant the server was typically 5-10 years old. The last system was pretty power-hungry, though, so the latest upgrade was an N100/16 GB/120 GB system SSD.
I have hopes that the N100 will last 10 years, but I’m at the point where it wouldn’t be awful to add a low-cost, low-power computer to my tech upgrade cycle. Old hardware is definitely a great way to start a self-hosting journey.
My first @home server was an old defective iMac G3 but it did the job (and then died for good)
A while back, I got a RP3 and then a small thin client with some small AMD CPU. They (barely) got the job done.
I replaced them with an HP EliteDesk G2 micro with a i5-6500T. I don’t know what to do with the extra power.
Prosody (XMPP server), a git instance, a searXNG instance, Tandoor (recipe manager), Next Cloud, Syncthing for my phone and my partner’s (one could say Next Cloud should be enough but I use it for different purposes), and a few other stuff.
It doesn’t even use an eight of its total RAM and I’ve never seen the CPU go past 20℅. But it uses a lot less power than the thin client it replaced so not a bad investment, especially considering its price.
interdimensionalmeme@lemmy.ml
on 09 Jan 14:48
collapse
How do you like searxng and have you considered hosting openstreetmaps?
Searxng is very good, I like it a lot.
As for OSM, I didn’t even know it could be hosted.
interdimensionalmeme@lemmy.ml
on 12 Jan 01:46
collapse
Thanks, I was put off by the install process but now I will give it another go.
evidences@lemmy.world
on 09 Jan 09:15
nextcollapse
My NAS is on an embedded Xeon that at this point is close to a decade old and one of my proxmox boxes is on an Intel 6500t. I’m not really running anything on any really low spec machines anymore, though earlyish in the pandemic I was running boinc with the Open Pandemics project on 4 raspberry pis.
Plex server is running on my old Threadripper 1950X. Thing has been a champ. Due to rebuild it since I’ve got newer hardware to cycle into it but been dragging my heels on it. Not looking forward to it.
Isn’t ryzen not recommended for transcoding? Plus, I’ve read that power efficiency isn’t great. Mostly regarding idle power consumption.
TMP_NKcYUEoM7kXg4qYe@lemmy.world
on 09 Jan 13:05
collapse
Ryzen is not recommended for transcoding because the Radeon integrated GPU’s encoding accelerator is not as fast as in intel iGPUs. But this does not come into play if you A) have 16 cores and B) don’t even have an integrated GPU.
And about idle power consumption: I don’t think it’s a point of interest if you are using a workstation class computer.
I think it’s a point of a interest for any hw running 24/7 but you do you.
Regarding transcoding, are you saying you’re not even doing it? If you are, doing it with your cpu is far more inefficient than using a gpu. But again, different strokes I guess.
TMP_NKcYUEoM7kXg4qYe@lemmy.world
on 09 Jan 14:11
collapse
Dunno whether they are transcoding or not nor why they have such a bizarre setup. But I would hope 16C/32T CPU from 2017 could handle software transcoding. Also peak power consumption while playing a movie does not really matter compared to idle power consumption. What matters more is that the motherboard is probably packed with pcie slots that consume a lot of power. But to OP it probably does not matter if they use a threadripper.
I would hope 16C/32T CPU from 2017 could handle software transcoding
I didn’t say it couldn’t handle it. Just that it was very inefficient.
peak power consumption while playing a movie does not really matter compared to idle power consumption
I mentioned both things. Did you actually read my comments?
TMP_NKcYUEoM7kXg4qYe@lemmy.world
on 09 Jan 12:58
nextcollapse
I used to selfhost on a core 2 duo thinkpad R60i. It had a broken fan so I had to hide it into a storage room otherwise it would wake up people from sleep during the night making weird noises. It was pretty damn slow. Even opening proxmox UI in the remotely took time. KrISS feed worked pretty well tho.
I have since upgraded to… well, nothing. The fan is KO now and the laptop won’t boot. It’s a shame because not having access to radicale is making my life more difficult than it should be. I use CalDAV from disroot.org but it would be nice to share a calendar with my family too.
GnuLinuxDude@lemmy.ml
on 09 Jan 14:30
nextcollapse
It’s not absolutely shit, it’s a Thinkpad t440s with an i7 and 8gigs of RAM and a completely broken trackpad that I ordered to use as a PC when my desktop wasn’t working in 2018. Started with a bare server OS then quickly realized the value of virtualization and deployed Proxmox on it in 2019. Have been using it as a modest little server ever since. But I realize it’s now 10 years old. And it might be my server for another 5 years, or more if it can manage it.
In the host OS I tweaked some value to ensure the battery never charges over 80%. And while I don’t know exactly how much electricity it consumes on idle, I believe it’s not too much. Works great for what I want. The most significant issue is some error message that I can’t remember the text of that would pop up, I think related to the NIC. I guess Linux and the NIC in this laptop have/had some kind of mutual misunderstanding.
Yeah, absolutely. Same here, I find used laptops often make GREAT homelab systems, and ones with broken screens/mice/keyboards can be even better since you can get them CHEAP and still fully use them.
I have 4 doing various things including one acting as my “desktop” down in the homelab. But they’re between 4 and 14 years old and do a great job for what they’re used for.
Andres4NY@social.ridetrans.it
on 10 Jan 01:55
collapse
- years 5-10: my kids use them (generally beating the crap out of them, covering them in boogers/popsicle juice, dropping them, etc).
- years 10-15: low-power selfhosted server which tucks away nicely, and has its own screen so that when something breaks I don't need to dig up an hdmi cable and monitor.
EDIT: because the OP asks for hardware: my current backup & torrent machine is a 4th gen i3 latitude e7240.
Running a bunch of services here on a i3 PC I built for my wife back in 2010. I’ve since upgraded the RAM to 16GB, added as many hard drives as there are SATA ports on the mobo, re-bedded the heatsink, etc.
It’s pretty much always ran on Debian, but all services are on Docker these days so the base distro doesn’t matter as much as it used to.
I’d like to get a good backup solution going for it so I can actually use it for important data, but realistically I’m probably just going to replace it with a NAS at some point.
7th gen intel, 96GB mismatched ram, 4 used 10TB HDD, one 12 with a broken sata connector that only works because it’s sitting just right in a sled. A couple of 14’s one M.2 and two sataSSD. It’s running Unraid with 2 VM’s (plex and Home Assistant), one of which has corrupted itself 3 times. A 1080 and a 2070.
I can get several streams off it at once, but not while it’s running parity check and it can’t handle 4k transcoding.
It’s not horrible, but I couldn’t do what I do now with less :)
My home server runs on an old desktop PC, bought at a discounter. But as we have bought several identical ones, we have both parts to upgrade them (RAM!) as well as organ donors for everything else.
SolaceFiend@lemmy.world
on 09 Jan 18:26
nextcollapse
I’m still interested in Self-Hosting but I actually tried getting into self-hosting a year or so ago. I bought a s***** desktop computer from Walmart, and installed window server 2020 on it to try to practice on that.
Thought I could use it to put some bullet points on my resume, and maybe get into self hosting later with next cloud. I ended up not fully following through because I felt like I needed to first buy new editions of the server administration and network infrastructure textbooks I had learned from a decade prior, before I could continue with giving it an FQDN, setting it up as a primary DNS Server, or pointing it at one, and etc.
So it was only accessible on my LAN, because I was afraid of making it a remotely accessible server unless I knew I had good firewall rules, and had set up the primary DNS server correctly, and ultimately just never finished setting it up. The most ever accomplished was getting it working as a file server for personal storage, and creating local accounts with usernames and passwords for both myself and my mom, whom I was living with at the time. It could authenticate remote access through our local Wi-Fi, but I never got further.
Hard to understad why it was difficult. For some reason windows admins are afraid of experimenting, breaking things. Practically I became sys admin by drinking beer and playing with linux, containers, etc.
Smokeydope@lemmy.world
on 09 Jan 18:41
nextcollapse
I run a local LLM on my gaming computer thats like a decade old now with an old 1070ti 8GB VRAM card. It does a good job running mistral small 22B at 3t/s which I think is pretty good. But any tech enthusiast into LLMs look at those numbers and probably wonder how I can stand such a slow token speed. I look at their multi card data center racks with 5x 4090s and wonder how the hell they can afford it.
Yeah, not here either. I’m now at a point where I keep wanting to replace my last host thats limited to 16GB. All the others - at least the ones I care about RAM on - all support 64GB or more now.
I use it for Plex/Jellyfin, it’s the cheapest NVIDIA GPU that supports both AV1 encoding and decoding, even though Plex doesn’t support AV1 yet IIRC it’s still more futureproof that way. I picked it up for like around $200 on a sale, it was well worth it IMO.
MystikIncarnate@lemmy.ca
on 10 Jan 12:18
nextcollapse
I just upgraded to a Xeon E5 v4 processor.
I think the max RAM on it is about 1.5 TiB per processor or something.
It’s not new, but it’s not that old either. Still cost me a pretty penny.
SirEDCaLot@lemmy.today
on 10 Jan 12:25
nextcollapse
The beauty of self hosting is most of it doesn’t actually require that much compute power. Thus, it’s a perfect use for hardware that is otherwise considered absolutely shit. That hardware would otherwise go in the trash. But use it to self host, and in most cases it’s idle most of the time so it doesn’t use much power anyway.
Aceticon@lemmy.dbzer0.com
on 10 Jan 13:13
nextcollapse
Look for a processor for the same socket that supports more RAM and make sure the Motherboard can handle it - maybe you’re lucky and it’s not a limit of that architecture.
If that won’t work, breakup your self-hosting needs into multiple machines and add another second hand or cheap machine to the pile.
I’ve worked in designing computer systems to handle tons of data and requests and often the only reasonable solution is to break up the load and throw more machines at it (for example, when serving millions of requests on a website, just put a load balancer in front of it that assigns user sessions and associated requests to multiple machines, so the load balancer pretty much just routes request by user session whilst the heavy processing stuff is done by multiple machines in such a way the you can just expand the whole thing by adding more machines).
In a self-hosting scenario I suspect you’ll have a lot of margin for expansion by splitting services into multiple hosts and using stuff like network shared drives in the background for shared data, before you have to fully upgrade a host machine because you hit that architecture’s maximum memory.
Granted, if a single service whose load can’t be broken down so that you can run it as a cluster, needs more memory than you can put in any of your machines, then you’re stuck having to get a new machine, but even then by splitting services you can get a machine with a newer architecture that can handle more memory but is still cheap (such as a cheap mini-PC) and just move that memory-heavy service to it whilst leaving CPU intensive services in the old but more powerful machine.
blackstrat@lemmy.fwgx.uk
on 10 Jan 16:10
nextcollapse
I moved from a Drll R710 with dual docket Xeons to a rack mount desktop case with a single Ryzen R5 5600G. I doubled the performance and halved the power consumption in one go. I do miss having idrac though. I need a KVM over IP solution but haven’t stomached the cost yet. For how often I need it it’s not an issue.
threaded - newest
Just down load more ram capacity. It the button right under the down load more ram button.
www.DownloadMoreRAM.com
Yup. Gateway E-475M. It has trouble transcoding some plex streams, but it keeps chugging along. $5 well spent.
it can do it!
… just not today
got a ripping and converting pc that ain’t any better. it’s all it does, so speed don’t matter any. hb has queue, so nbd. i just let it go… and go… and go…
2012 Mac Mini with a fucked NIC because I man handled it putting in a SSD. Those things are tight inside!
( ͡° ͜ʖ ͡°)
Had to buy a special two pronged tool to get her out!
Lol, I used to have an 08 Mac mini and that required a razor blade and putty knives to open. I got pretty good at it after separately upgrading the RAM adding an SSD and swapping out the cpu for the most powerful option that Apple didn’t even offer
When I used to work at the “Fruit Stand” I never had to repair those white back Mini’s thankfully, but I do remember the putty knives being around. The unibody iMac was the worse, had to pizza cutter the whole LCD off the frame to replace anything, then glue it back on!
Lol by the time I actually needed to upgrade from that mini, all the fruit stand stuff wasn’t really upgradable anymore. It was really frustrating, so I jumped ship to Windows.
Those iMac screens seemed so fiddley to remove just to get access to the drives. Why won’t they just bolt them in instead of using glue! (I know why, but I still don’t like it)
I was for a while. Hosted a LOT of stuff on an i5-4690K overclocked to hell and back. It did its job great until I replaced it.
Now my servers don’t lag anymore.
EDIT: CPU usage was almost always at max. I was just redlining that thing for ~3 years. Cooling was a beefy Noctua air cooler so it stayed at ~60 C. An absolute power house.
4690k was solid! Mine is retired, though. Now I selfhost on ARM
I retired mine with a 12600K and I’m not sure what to do with it now.
4 gigs of RAM is enough to host many singular projects - your own backup server or VPN for instance. It’s only if you want to do many things simultaneously that things get slow.
It is amazing what you can do with so little. My server has nas, jellyfin, plex, ebook reader, recipe, vpn, notes, music server, backups, and serves 4 people. If it hits 4gb ram usage it is a rare day.
What hardware are you using where the cpu says you are limited to 4gb?
Even a 25 year old Pentium 4 supports 8GB.
My guess is an x86 32bit machine
Might be using a laptop where the RAM is soldered to the board. I’ve got a Thinkpad X280 that’s like that: no slots, just surface-mounted RAM.
That’s Lenovo’s fault, not Intel.
Oh wow, I just saw the comment about it being an ancient Atom. Yeah, fair enough!
8GB can be stuffy on certain programs
Maybe an Atom?
Maybe. But it would need to be an Atom from 15 years ago. Anything newer does 32 GB.
Of course motherboards don’t support it but that’s not the cpu’s fault.
Yep. Intel atom D525
Have you looked into using zram?
Negative, Pentium 4 was x86 and thus could only address 32 bits.
64bit CPUs started hitting the mainstream in 2003, but 64bit Windows didn’t take off until Win7 in 2009. (XP had it, but no one bothered switching from 32b XP to 64b XP just to use more memory and have early adoption issues. Vista had it, but no one had Vista).
The Pentium 4 supported PAE and 36 bit PSE
en.m.wikipedia.org/…/Physical_Address_Extension
en.m.wikipedia.org/…/Physical_Address_Extension#:….
It’s kind of like how the 8086 was a 16 bit processor but could access 1 megabyte of ram (640k ram 384 k reserved for rom) . -Or the 286 which was 16 bit but could access 24 MB.
But even without that the Prescott P4’s supported 64 bits.
All of that was introduced in 2004. When you said “25 years ago” I assumed you meant the original P4 from 2000.
PAE was introduced with the Pentium Pro 30 years ago. I used it on Dell Pentium II servers that ran SQL Server. Even the 386 from 1985 could access 64 terabytes of ram using segmented mode.
Full 64 bit Prescott P4 was 2004.
While technically true, the P4 did support PAE, in reality you couldn’t really make use of it on consumer hardware for most of its lifetime. No ordinary socket 478 mainboard with DDR1 memory supported more than 4 GB of RAM. With socket 775 more RAM was possible, but that socket is “only” ~20 years old.
Besides that, there were other even newer systems that supported only 4 GB of RAM, like some Intel Atom mainboards with a single DDR2 socket. Same with Via C3 mainboards.
Oh sure. But as I said, that’s the motherboard’s fault, not the cpu.
Then I don’t understand what your point is. A CPU on its own without a system isn’t of any use. Since there were no motherboards allowing you to use that much RAM, the point about the CPU supporting it is moot as far as I am concerned.
Because the meme blames Intel.
Imagine if I did a meme that blamed AMD for only supporting DDR4 because my motherboard only did DDR4 despite all AMD 7000 and newer supporting DDR4 or DDR5…
Got all my docker containers on an i3-4130T. It's fine.
I had quite a few docker containers going on a Raspberry Pi 4. Worked fine. Though it did have 8GB of RAM to be fair
I’m sure a lot of people’s self hosting journey started on junk hardware… “try it out”, followed by “oh this is cool” followed by “omg I could do this, that and that” followed by dumping that hand-me-down garbage hardware you were using for something new and shiny specifically for the server.
My unRAID journey was this exactly. I now have a 12 hot/swap bay rack mounted case, with a Ryzan 9 multi core, ECC ram, but it started out with my ‘old’ PC with a few old/small HDDs
Me on a RPi4.
7 websites, Jellyfin for 6 people, Nextcloud, CRM for work, email server for 3 domains, NAS, and probably some stuff I’ve forgotten on a $4 computer from a tiny thrift store in BFE Kansas. I’d love to upgrade, but I’m always just filled with joy whenever I think of that little guy just chugging along.
Hell yeah, keep chugging little guy 🤘
Heck yeah
Which CRM please?
EspoCRM. I really like it for my purposes. I manage a CiviCRM instance for another job that needs more customization, but for basic needs, I find espo to be beautiful, simple, and performant.
Sweeeeet thank you! Demo looks great. Now to figure out whether an uber n00ber can self host it in a jiffy or not. 🙏
Interested in how it does jellyfin, decent GPU or something else?
It does fine. It’s an i5-6500 running CPU transcoding only. Handles 2-3 concurrent 1080p streams just fine. Sometimes there’s a little buffering if there’s transcoding going on. I try to keep my files at 1080p for storage reasons though. This thing’s not going to handle 4k transcoding very well, but it does okay if you don’t expect too much from it.
I’m skeptical that you are doing much video transcoding anyway. 1080p is supported on must devices now, and h264 is best buddies with 1080p content - a codec supported even on washing machines. Audio may be transcoded more often.
Most of my content is h265 and av1 so I assume they are also facing a similar issue. I usually use the jellyfin app on PC or laptop so not an issue but my family members usually use the old TV which doesn’t support it.
AV1 is definitely a showstopper a lot of the time indeed. H265 I would expect to see more on 2k or 4k content (though native support is really high anyway). My experience so far has been seeing transcoding done only becuase the resolution is unsupported when I try watching 4k videos on an older 1080p only chromecast.
What do you mean by showstopper? I only encode my shows into AV1/opus and I never had any transcoding happening on any of my devices.
It’s well supported on any recent Browser compared to x264/x265… specially 10bit encodes. And software decoding is nearly present on any recent device.
Dunno about 4k though, I haven’t the necessary screen resolution to play any 4k content… But for 1080p, AV1 is the way to go IMO.
It has it’s own quirks for sure (like every codec) but it’s far from a bad codec. I’m not a specialist on the subject but after a few months of testing/comparing/encoding… I settled with AV1 because it was comparative better than x264/x265.
Showstopper in the sense that it may not play natively and require transcoding. While x264 has pretty much universal support, AV1 does not… at least not on some of my devices. I agree that it is a good encoder and the way forward but its not the best when using older devices. My experience has been with Chromecast with Google TV. Looks like google only added AV1 support in their newest Google TV Streamer (late 2024 device).
Not a huge amount of transcoding happening, but some for old Chromecasts and some for low bandwidth like when I was out of the country a few weeks ago watching from a heavily throttled cellular connection. Most of my collection is h264, but I’ve got a few h265 files here and there. I am by no means recommending my setup as ideal, but it works okay.
Absolutely, whatever works for you. I think its awesome to use the cheapest hardware possible to do these things. Being able to use a media server without transcoding capabilities? Brilliant. I actually thought you’d probably be able to get away with no transcoding at all since 1080p has native support on most devices and so does h264. In the rare cases, you could transcode beforehand (like with a script whenever a file is added) so you’d have an appropriate format on hand when needed.
My i5 6600k will turn 10 years old this year. I’m fortunate because upgrading to 32 GB should keep it running for a while still.
Why didn’t you post this before I bought the RAM?!
I’m hosting a minio cluster on my brother-in-law’s old gaming computer he spent $5k on in 2012 and 3 five year old mini-pcs with 1tb external drives plugged into them. Works fine.
Aw yep, bought an old HP pro-lient something something with 2 old-ass intel xeons and 64GB ram for practically nothing. Thing’s been great. It’s a bit loud but runs anything I throw at it.
Just keep an eye on the power usage, depending on how expensive electricity is in your area. I live in California which has very expensive electricity, and buying newer, more power efficient hardware works out cheaper than 10+ year old Xeons over the long run, even if you get the Xeon system for free.
I’ve got a i3-10100, 16gb ram, and an unused gtx 960. It’s terrible but its amazing at the same time. I built it as a gaming pc then quit gaming.
That’s a pretty solid machine
10th gen is hardly “shit hardware”.
I used to self host some stuff on an old 2011 iMac. Worked fine, actually
People in this thread have very interesting ideas of what “shit hardware” is
My cluster ranges from 4th gen to 8th gen Intel stuff. 8th gen is the newest I’ve ever had (until I built a 5800X3D PC).
I’ve seen people claiming 9th gen is “ancient”. Like…ok moneybags.
My 9th gen intel is still not the bottleneck of my 120hz 4K/AI rig, not by a longshot.
Yep any core i3 is fine even for desktop given an SSD and enough RAM. Once you delve into the core2 era, you start having problems because it lacks the compression and encryption instructions necessary for the day to day smoothness. In a server you might get away with core 2 duo as long as you don’t use full disk encryption and get an SSD or at least use ram for caching. Though that would be kinda a bizarre setup on a computer with 512 MB of ram.
You can do quite a bit with 4GB RAM. A lot of people use VPSes with 4GB (or less) RAM for web hosting, small database servers, backups, etc. Big providers like DigitalOcean tend to have 1GB RAM in their lowest plans.
I met someone that was throwing out old memory modules. Literally boxes full of DDR, DDR2 modules. I got quite excited, hoping to upgrade my server’s memory. Yeah, DDR2 only goes up to 2GiB. So I am stuck with 2×2GiB. But I am only using 85% of that anyways, so it’s fine.
Yep, mspencer dot net (what little of it is currently up, I suck at ops stuff) is 2012-vintage hardware, four boxes totaling 704 GB RAM, 8x10TB SAS disks, and a still-unused LTO-3 tape drive. I’ll upgrade further when I finally figure out how to make proper use of what I already have. Until then it’s all a fancy heated cat tree, more or less.
My home Kubernetes cluster started out on a Core i7-920 with 8 GB of memory.
Upgraded to 16 GB memory
Upgraded to a Core i5-2400S
Upgraded to a Core i7-3770
Upgraded to 32 GB memory
Recently Upgraded to a Core i5-7600K
I think I’ll stay with that for rather long…
I did however add 2 Intel NUCs (gen 6 and gen 8) to the cluster to have a distributed control plane and some distributed storage.
Wow, it’s been a long time since I had hardware that awful.
My old NAS was a Phenom II x4 from 2009, and I only retired it a year and a half ago when I upgraded my PC. But I put 8GB RAM into that since it was a 64-bit processor (could’ve put up to 32GB I think, since it had 4 DDR3 slots). My NAS currently runs a Ryzen 1700, but I still have that old Phenom in the closet in case that Ryzen dies, but I prefer the newer HW because it’s lower power.
That said, I once built a web server on an Arduino which also supported websockets (max 4 connections). That was more of a POC than anything though.
your hardware ain’t shit until it’s a first gen core2duo in a random Dell office PC and 2gb of memory that you specifically only use just because it’s a cheaper way to get x86 when you can’t use your raspberry pi.
Also they lie most of the time and it may technically run fine on more memory, especially if it’s older when dimm capacities were a lot lower than they can be now. It just won’t be “supported”.
Oldest I got is limited to 16GB (excluding rPis). My main desktop is limited to 32GB which is annoying, because I sometimes need more. But, I have a home server with 128GB of RAM that I can use when it’s not doing other stuff. I once needed more than 128GB of RAM (to run optimizations on a large ONNX model, iirc), so had to spin up an EC2 instance with 512GB of RAM.
I’m self-hosting in a 500GB HDD, 2 cores AMD A6, 8GB RAM thinkcentre (access for LAN only) that I got very cheap.
It could be better, I’m going to buy a new computer for personal use and I’m the only one in my family who uses the hosted services, so upgrades will come later 😴
3x Intel NUC 6th gen i5 (2 cores) 32gb RAM. Proxmox cluster with ceph.
I just ignored the limitation and tried with a single sodim of 32gb once (out of a laptop) and it worked fine, but just backed to 2x16gb dimms since the limit was still 2core of CPU. Lol.
Running that cluster 7 or so years now since I bought them new.
I suggest only running off shit tier since three nodes gives redundancy and enough performance. I’ve run entire proof of concepts for clients off them. Dual domain controllers and FC Rd gateway broker session hosts fxlogic etc. Back when Ms only just bought that tech. Meanwhile my home “ARR” just plugs on in docker containers. Even my opnsense router is virtual running on them. Just get a proper managed switch and take in the internet onto a vlan into the guest vm on a separate virtual NIC.
Point is, it’s still capable today.
How is ceph working out for you btw? I’m looking into distributed storage solutions rn. My usecase is to have a single unified filesystem/index, but to store the contents of the files on different machines, possibly with redundancy. In particular, I want to be able to upload some files to the cluster and be able to see them (the directory structure and filenames) even when the underlying machine storing their content goes offline. Is that a valid usecase for ceph?
I’m far from an expert sorry, but my experience is so far so good (literally wizard configured in proxmox set and forget) even during a single disk lost. Performance for vm disks was great.
I can’t see why regular file would be any different.
I have 3 disks, one on each host, with ceph handling 2 copies (tolerant to 1 disk loss) distributed across them. That’s practically what I think you’re after.
I’m not sure about seeing the file system while all the hosts are all offline, but if you’ve got any one system with a valid copy online you should be able to see. I do. But my emphasis is generally get the host back online.
I’m not 100% sure what you’re trying to do but a mix of ceph as storage remote plus something like syncthing on a endpoint to send stuff to it might work? Syncthing might just work without ceph.
I also run zfs on an 8 disk nas that’s my primary storage with shares for my docker to send stuff, and media server to get it off. That’s just truenas scale. That way it handles data similarly. Zfs is also very good, but until scale came out, it wasn’t really possible to have the “add a compute node to expand your storage pool” which is how I want my vm hosts. Zfs scale looks way harder than ceph.
Not sure if any of that is helpful for your case but I recommend trying something if you’ve got spare hardware, and see how it goes on dummy data, then blow it away try something else. See how it acts when you take a machine offline. When you know what you want, do a final blow away and implement it with the way you learned to do it best.
This is good advice, thanks! Pretty much what I’m doing right now. Already tried it with IPFS, and found that it didn’t meet my needs. Currently setting up a tahoe-lafs grid to see how it works. Will try out ceph after this.
Maybe not shit, but exotic at that time, year 2012.
The first Raspberry Pi, model B 512 MB RAM, with an external 40 GB 3.5" HDD connected to USB 2.0.
It was running ARM Arch BTW.
<img alt="" src="https://feddit.nl/pictrs/image/8583a3a3-1357-47d1-9a90-0d689fc3936a.jpeg">
Next, cheap, second hand mini desktop Asus Eee Box.
32 bit Intel Atom like N270, max. 1 GB RAM DDR2 I think.
Real metal under the plastic shell.
Could even run without active cooling (I broke a fan connector).
What’re you hosting on them?
Mainly telemetry, like temperature inside, outside.
Script to read a data and push it into a RRD, later PostreSQL.
ligthttpd to serve static content, later PHP.
Once it served as a bridge, between LAN and LTE USB modem.
I have one of these that I use for Pi-hole. I bought it as soon as they were available. Didn’t realise it was 2012, seemed earlier than that.
This was my media server and kodi player for like 3 years…still have my Pi 1 lying around. Now I have a shitty Chinese desktop I built this year with i5 3rd. Gen with 8gb ram
Odd, I have a Celeron J3455 which according to Intel only supports 8GB, yet I run it with 16 GB
Same here in a Synology DS918+. It seems like the official Intel support numbers can be a bit pessimistic (maybe the higher density sticks/chips just didn’t exist back when the chip was certified?)
The oldest hardware I’m still using is an Intel Core i5-6500 with 48GB of RAM running our Palworld server. I have an upgrade in the pipeline to help with the lag, because the CPU is constantly stressed, but it still will run game servers.
I faced that only with different editions of Windows limiting it by itself.
I had a old Acer SFF desktop machine (circa 2009) with an AMD Athlon II 435 X3 (equivalent to the Intel Core i3-560) with a 95W TDP, 4 GB of DDR2 RAM, and 2 1TB hard drives running in RAID 0 (both HDDs had over 30k hours by the time I put it in). The clunker consumed 50W at idle. I planned on running it into the ground so I could finally send it off to a computer recycler without guilt.
I thought it was nearing death anyways, since the power button only worked if the computer was flipped upside down. I have no idea why this was the case, the computer would keep running normally afterwards once turned right side up.
The thing would not die. I used it as a dummy machine to run one-off scripts I wrote, a seedbox that would seed new Linux ISOs as it was released (genuinely, it was RAID0 and I wouldn’t have downloaded anything useful), a Tor Relay and at one point, a script to just endlessly download Linux ISOs overnight to measure bandwidth over the Chinanet backbone.
It was a terrible machine by 2023, but I found I used it the most because it was my playground for all the dumb things that I wouldn’t subject my regular home production environments to. Finally recycled it last year, after 5 years of use, when it became apparent it wasn’t going to die and far better USFF 1L Tiny PC machines (i5-6500T CPUs) were going on eBay for $60. The power usage and wasted heat of an ancient 95W TDP CPU just couldn’t justify its continued operation.
Always wanted am x3, just such an oddball thing, I love this. I had a 965 x4
The X3 CPUs were essentially quad cores where one of the cores failed a quality control check. Using a higher end Mobo, it was possible to unlock the fourth core with varying results. This was a cheap consumer Acer prebuilt though, so I didn’t have that option.
Somehow Jellyfin works ¯\_(ツ)_/¯
<img alt="" src="https://lemmy.blahaj.zone/pictrs/image/fe5ae775-b36e-47ab-a1cd-206607b5b509.webp">
1366x768 ?? WTF
Some old netbook I guess, or unsupported hardware and a driver default. If all you need is ssh, the display resolution hardly matters.
Sure, just never saw this numbers for resolution, ever 😆
Most 720p TVs (“HD Ready”) used to be that resolution since they re-used production lines from 1024x768 displays
Ahh, I see, they took the 4:3 Standard screen and let it grow to 16:9, that makes a lot of sense 😃
I am to young for knowing 4:3 resolutions 😆
That’s a whole 86x48 more than 1280x720!
😆nice
I just learned that this resolution resulted from 4:3 screens which got some wideness added to reach 16:9 from an awesome person in this comment thread 😊
I had to check the post not logged in, weirdly I only see your comment when I’m logged in, but yeah, I (almost) only ever ssh into it, so I never really noticed the resolution until you pointed it out
Which doesn’t sound like much, but if you have applications designed for 1024x768 (which was pretty much the standard PC resolution for years) then at least it would fit on the screen.
This was common in budget laptops 10 years ago. I had a Asus laptop with the same resolution and I have seen others with this resolution as well
Here in Brazil, there are still a lot of laptops, monitors and tvs being sold with that resolution.
kind of… a “AMD GX-420GI SOC: quad-core APU” the one with no L3 Cache, in an Thin Client and 8Gb Ram. old Laptop ssd for Storage (128GB) Nextcloud is usable but not fast.
edit: the Best thing: its 100% Fanless
All my stuff is running on a 6-year-old Synology D918+ that has a Celeron J3455 (4-core 1.5 GHz) but upgraded to 16 GB RAM.
Funny enough my router is far more powerful, it’s a Core i3-8100T, but I was picking out of the ThinkCentre Tiny options and was paranoid about the performance needed on a 10 Gbit internet connection
Enterprise level hardware costs a lot, is noisy and needs a dedicated server room, old laptops cost nothing.
I got a 1U rack server for free from a local business that was upgrading their entire fleet. Would’ve been e-waste otherwise, so they were happy to dump it off on me. I was excited to experiment with it.
Until I got it home and found out it was as loud as a vacuum cleaner with all those fans. Oh, god no…
I was living with my parents at the time, and they had a basement I could stick it in where its noise pollution was minimal. I mounted it up to a LackRack.
Since moving out to a 1 bedroom apartment, I haven’t booted it. It’s just a 70 pound coffee table now. :/
Maybe a more reasonable question: Is there anyone here self-hosting on non-shit hardware? 😅
I’m happy with my little N100
Me using Threadripper 7960X and R5 6600H for my servers: 🤭
You can pry my gen8 hp microserver from my cold, dead hands.
It’s not top of the line, but my Ryzen 1700 is way overkill for my NAS. I’ll probably add a build server, not because I need it, but because I can.
10400F running my NAS/Plex server and raspberry pi 5 running PiHole
I have pi-hole on my Mac mini using docker but I stopped using it, it makes some things super laggy to load
Interesting, I haven’t had any issues with things loading with mine, maybe it’s your adlists causing issues? Try disabling some, there might be false positives in there giving you issues
I tried the default ones
Rehabilitated HP z440 workstation, checking in! Popped in a used $20 e5-2620v4 xeon CPU and 64gb of RAM and it sails for my use cases. TrueNAS as the base OS and a TalOS k8’s cluster in a VM to handle apps. Old but gold.
2 GB RAM rasp pi 4 :))
It’s getting up there in years but I’m running a Dell T5610 with 128GB RAM. Once I start my new job I might upgrade cause it’s having issues running my MC server.
I started my self hosting journey on a Dell all-in-one PC with 4 GB RAM, 500 GB hard drive, and Intel Pentium, running Proxmox, Nextcloud, and I think Home Assistant. I upgraded it eventually, now I’m on a build with Ryzen 3600, 32 GB RAM, 2 TB SSD, and 4x4 TB HDD
My first server was a single-core Pentium - maybe even 486 - desktop I got from university surplus. That started a train of upgrading my server to the old desktop every 5-or-so years, which meant the server was typically 5-10 years old. The last system was pretty power-hungry, though, so the latest upgrade was an N100/16 GB/120 GB system SSD.
I have hopes that the N100 will last 10 years, but I’m at the point where it wouldn’t be awful to add a low-cost, low-power computer to my tech upgrade cycle. Old hardware is definitely a great way to start a self-hosting journey.
My first @home server was an old defective iMac G3 but it did the job (and then died for good) A while back, I got a RP3 and then a small thin client with some small AMD CPU. They (barely) got the job done.
I replaced them with an HP EliteDesk G2 micro with a i5-6500T. I don’t know what to do with the extra power.
What are you running on it?
Prosody (XMPP server), a git instance, a searXNG instance, Tandoor (recipe manager), Next Cloud, Syncthing for my phone and my partner’s (one could say Next Cloud should be enough but I use it for different purposes), and a few other stuff.
It doesn’t even use an eight of its total RAM and I’ve never seen the CPU go past 20℅. But it uses a lot less power than the thin client it replaced so not a bad investment, especially considering its price.
How do you like searxng and have you considered hosting openstreetmaps?
Searxng is very good, I like it a lot. As for OSM, I didn’t even know it could be hosted.
Thanks, I was put off by the install process but now I will give it another go.
My NAS is on an embedded Xeon that at this point is close to a decade old and one of my proxmox boxes is on an Intel 6500t. I’m not really running anything on any really low spec machines anymore, though earlyish in the pandemic I was running boinc with the Open Pandemics project on 4 raspberry pis.
Plex server is running on my old Threadripper 1950X. Thing has been a champ. Due to rebuild it since I’ve got newer hardware to cycle into it but been dragging my heels on it. Not looking forward to it.
Isn’t ryzen not recommended for transcoding? Plus, I’ve read that power efficiency isn’t great. Mostly regarding idle power consumption.
Ryzen is not recommended for transcoding because the Radeon integrated GPU’s encoding accelerator is not as fast as in intel iGPUs. But this does not come into play if you A) have 16 cores and B) don’t even have an integrated GPU.
And about idle power consumption: I don’t think it’s a point of interest if you are using a workstation class computer.
I think it’s a point of a interest for any hw running 24/7 but you do you.
Regarding transcoding, are you saying you’re not even doing it? If you are, doing it with your cpu is far more inefficient than using a gpu. But again, different strokes I guess.
.
Dunno whether they are transcoding or not nor why they have such a bizarre setup. But I would hope 16C/32T CPU from 2017 could handle software transcoding. Also peak power consumption while playing a movie does not really matter compared to idle power consumption. What matters more is that the motherboard is probably packed with pcie slots that consume a lot of power. But to OP it probably does not matter if they use a threadripper.
I didn’t say it couldn’t handle it. Just that it was very inefficient.
I mentioned both things. Did you actually read my comments?
I used to selfhost on a core 2 duo thinkpad R60i. It had a broken fan so I had to hide it into a storage room otherwise it would wake up people from sleep during the night making weird noises. It was pretty damn slow. Even opening proxmox UI in the remotely took time. KrISS feed worked pretty well tho.
I have since upgraded to… well, nothing. The fan is KO now and the laptop won’t boot. It’s a shame because not having access to radicale is making my life more difficult than it should be. I use CalDAV from disroot.org but it would be nice to share a calendar with my family too.
It’s not absolutely shit, it’s a Thinkpad t440s with an i7 and 8gigs of RAM and a completely broken trackpad that I ordered to use as a PC when my desktop wasn’t working in 2018. Started with a bare server OS then quickly realized the value of virtualization and deployed Proxmox on it in 2019. Have been using it as a modest little server ever since. But I realize it’s now 10 years old. And it might be my server for another 5 years, or more if it can manage it.
In the host OS I tweaked some value to ensure the battery never charges over 80%. And while I don’t know exactly how much electricity it consumes on idle, I believe it’s not too much. Works great for what I want. The most significant issue is some error message that I can’t remember the text of that would pop up, I think related to the NIC. I guess Linux and the NIC in this laptop have/had some kind of mutual misunderstanding.
Yeah, absolutely. Same here, I find used laptops often make GREAT homelab systems, and ones with broken screens/mice/keyboards can be even better since you can get them CHEAP and still fully use them.
I have 4 doing various things including one acting as my “desktop” down in the homelab. But they’re between 4 and 14 years old and do a great job for what they’re used for.
@ripcord @GnuLinuxDude The lifecycle of my laptops:
- years 1-5: I use them.
- years 5-10: my kids use them (generally beating the crap out of them, covering them in boogers/popsicle juice, dropping them, etc).
- years 10-15: low-power selfhosted server which tucks away nicely, and has its own screen so that when something breaks I don't need to dig up an hdmi cable and monitor.
EDIT: because the OP asks for hardware: my current backup & torrent machine is a 4th gen i3 latitude e7240.
Solid. My backup is a T440p, and behind that a X230, fucking bulletproof.
Running a bunch of services here on a i3 PC I built for my wife back in 2010. I’ve since upgraded the RAM to 16GB, added as many hard drives as there are SATA ports on the mobo, re-bedded the heatsink, etc.
It’s pretty much always ran on Debian, but all services are on Docker these days so the base distro doesn’t matter as much as it used to.
I’d like to get a good backup solution going for it so I can actually use it for important data, but realistically I’m probably just going to replace it with a NAS at some point.
A NAS is just a small desktop computer. If you have a motherboard/CPU/ram/Ethernet/case and a lot of SSDs/HDDs you are good to go.
Just don’t bother to buy something marketed as NAS. It’s expensive and less modular than any desktop PC.
Just my opinion.
7th gen intel, 96GB mismatched ram, 4 used 10TB HDD, one 12 with a broken sata connector that only works because it’s sitting just right in a sled. A couple of 14’s one M.2 and two sataSSD. It’s running Unraid with 2 VM’s (plex and Home Assistant), one of which has corrupted itself 3 times. A 1080 and a 2070.
I can get several streams off it at once, but not while it’s running parity check and it can’t handle 4k transcoding.
It’s not horrible, but I couldn’t do what I do now with less :)
My home server runs on an old desktop PC, bought at a discounter. But as we have bought several identical ones, we have both parts to upgrade them (RAM!) as well as organ donors for everything else.
I’m still interested in Self-Hosting but I actually tried getting into self-hosting a year or so ago. I bought a s***** desktop computer from Walmart, and installed window server 2020 on it to try to practice on that.
Thought I could use it to put some bullet points on my resume, and maybe get into self hosting later with next cloud. I ended up not fully following through because I felt like I needed to first buy new editions of the server administration and network infrastructure textbooks I had learned from a decade prior, before I could continue with giving it an FQDN, setting it up as a primary DNS Server, or pointing it at one, and etc.
So it was only accessible on my LAN, because I was afraid of making it a remotely accessible server unless I knew I had good firewall rules, and had set up the primary DNS server correctly, and ultimately just never finished setting it up. The most ever accomplished was getting it working as a file server for personal storage, and creating local accounts with usernames and passwords for both myself and my mom, whom I was living with at the time. It could authenticate remote access through our local Wi-Fi, but I never got further.
Hard to understad why it was difficult. For some reason windows admins are afraid of experimenting, breaking things. Practically I became sys admin by drinking beer and playing with linux, containers, etc.
I run a local LLM on my gaming computer thats like a decade old now with an old 1070ti 8GB VRAM card. It does a good job running mistral small 22B at 3t/s which I think is pretty good. But any tech enthusiast into LLMs look at those numbers and probably wonder how I can stand such a slow token speed. I look at their multi card data center racks with 5x 4090s and wonder how the hell they can afford it.
Not anymore. My main self-hosting server is an i7 5960x with 32GB of ECC RAM, RTX 4060, 1TB SATA SSD, and 6x6TB 7200RPM drives.
I did used to host some services on like a $5 or $10 a month VPS, and then eventually a $40 a month dedi, though.
Yeah, not here either. I’m now at a point where I keep wanting to replace my last host thats limited to 16GB. All the others - at least the ones I care about RAM on - all support 64GB or more now.
64GB would be a nice amount of memory to have. I’ve been okay with 32GB so far thankfully.
What do you use the 4060 for?
I use it for Plex/Jellyfin, it’s the cheapest NVIDIA GPU that supports both AV1 encoding and decoding, even though Plex doesn’t support AV1 yet IIRC it’s still more futureproof that way. I picked it up for like around $200 on a sale, it was well worth it IMO.
Does this count ARMv6 256MB RAM running OpenMediaVault…hmm I have to fix my clock. LOL
<img alt="" src="https://lemmy.ca/pictrs/image/8022ca35-e2d1-4b30-bf2e-15f1a1a51ec5.png">
I just upgraded to a Xeon E5 v4 processor.
I think the max RAM on it is about 1.5 TiB per processor or something.
It’s not new, but it’s not that old either. Still cost me a pretty penny.
The beauty of self hosting is most of it doesn’t actually require that much compute power. Thus, it’s a perfect use for hardware that is otherwise considered absolutely shit. That hardware would otherwise go in the trash. But use it to self host, and in most cases it’s idle most of the time so it doesn’t use much power anyway.
Look for a processor for the same socket that supports more RAM and make sure the Motherboard can handle it - maybe you’re lucky and it’s not a limit of that architecture.
If that won’t work, breakup your self-hosting needs into multiple machines and add another second hand or cheap machine to the pile.
I’ve worked in designing computer systems to handle tons of data and requests and often the only reasonable solution is to break up the load and throw more machines at it (for example, when serving millions of requests on a website, just put a load balancer in front of it that assigns user sessions and associated requests to multiple machines, so the load balancer pretty much just routes request by user session whilst the heavy processing stuff is done by multiple machines in such a way the you can just expand the whole thing by adding more machines).
In a self-hosting scenario I suspect you’ll have a lot of margin for expansion by splitting services into multiple hosts and using stuff like network shared drives in the background for shared data, before you have to fully upgrade a host machine because you hit that architecture’s maximum memory.
Granted, if a single service whose load can’t be broken down so that you can run it as a cluster, needs more memory than you can put in any of your machines, then you’re stuck having to get a new machine, but even then by splitting services you can get a machine with a newer architecture that can handle more memory but is still cheap (such as a cheap mini-PC) and just move that memory-heavy service to it whilst leaving CPU intensive services in the old but more powerful machine.
I moved from a Drll R710 with dual docket Xeons to a rack mount desktop case with a single Ryzen R5 5600G. I doubled the performance and halved the power consumption in one go. I do miss having idrac though. I need a KVM over IP solution but haven’t stomached the cost yet. For how often I need it it’s not an issue.
<img alt="" src="https://lemmy.world/pictrs/image/cea04264-64b5-43e1-9048-07e8efdabe5b.jpeg">
N…not quite…
Testing federation from my shit hardware… 😅
Not seeing other comments… but see this over at .world
Looks like it works! Congrats!
Fuck ive been dealing with that + max RAM speed limitations for a month.