I just won an auction for 25 computers. What should I setup on them?
from Trainguyrom@reddthat.com to selfhosted@lemmy.world on 21 Apr 13:58
https://reddthat.com/post/17656410

I placed a low bid on a government auction for 25 EliteDesk 800 G1s and unexpectedly won (ultimately paying less than $20 per computer).

In the long run I plan on selling 15 or so of them to friends and family for cheap, and I’ll probably have 4 with Proxmox, 3 for a lab cluster and 1 for the always-on home server and keep a few for spares and random desktops around the house where I could use one.

But while I have all 25 of them what crazy clustering software/configurations should I run? Any fun benchmarks I should know about that I could run for the lolz?

Edit to add:

Specs based on the auction listing and looking up the computer models:

Possible projects I plan on doing:

#selfhosted


seaQueue@lemmy.world on 21 Apr 14:14 next collapse

Distcc, maybe gluster. Run a docker swarm setup on pve or something.

Models like those are a little hard to exploit well because of limited network bandwidth between them. Other mini PC models that have a pcie slot are fun because you can jam high speed networking into them along with NVMe then do rapid fail over between machines with very little impact when one goes offline.

If you do want to bump your bandwidth per machine you might be able to repurpose the wlan m2 slot for a 2.5gbe port, but you’ll likely have to hang the module out the back through a serial port or something. Aquantia USB modules work well too, those can provide 5gbe fairly stably.

Edit: Oh, you’re talking about the larger desktop elitedesk g1, not the USFF tiny machines. Yeah, you can jam whatever hh cards into these you want - go wild.

Trainguyrom@reddthat.com on 21 Apr 15:46 collapse

From the listing photos these actually have half-height expansion slots! So GPU options are practically nonexistent, but networking and storage are blown wide open for options compared to the mini PCs that are more prevalent now.

seaQueue@lemmy.world on 21 Apr 16:14 collapse

Yeah, you’ll be fairly limited as far as GPU solutions go. I have a handful of hh AMD cards kicking around that were originally shipped in t740s and similar but they’re really only good for hardware transcoding or hanging extra monitors off the machine - it’s difficult to find a hh board with a useful amount of vram for ml/ai tasks.

Matthew_Gasoline@lemmy.world on 21 Apr 14:17 next collapse

Senior year of high school, I put Unreal Tournament on the school server. If it were me, I’d recreate that experience, including our teacher looking around the class. That was almost 20 years ago; I hope everyone is doing alright.

Wojwo@lemmy.ml on 21 Apr 14:31 collapse

I have a box with 10 old laptops that I keep around just for that. Unreal Tournament 2004, Insane, Brood War and all the id classics. I don’t get to set it up a lot, but when I do it’s always a hit.

notfromhere@lemmy.ml on 21 Apr 14:29 next collapse

You could possibly run AI Horde if they have enough RAM or VRAM. You could run Kubernetes bare metal or inside Proxmox.

someguy3@lemmy.world on 21 Apr 14:35 next collapse

God damn. What are the specs on those? I gotta check out some government auctions.

Trainguyrom@reddthat.com on 21 Apr 15:44 collapse

4th gen Intel i5s, 8GB of RAM and 256GB SSDs, so not terrible for a basic Windows desktop even today (except of course for the fact that no supported Windows desktop operating system will officially support these systems come Q4 2025)

But don’t get your hopes up; when I’ve bid on auctions like this before, the lots have gone for closer to $80 per computer, so I was genuinely surprised I could win with such a low bid. Also, every state has entirely different auction setups. When I’ve looked into it in the past, some just dump everything to a third-party auction, some only do an in-person auction annually at a central auction house, and some have a snazzy dedicated auction site. Oh, and because it’s the US, states do it differently from the federal government. So it might take some research and digging around to find the most convenient option for wherever you are (which could just be making a friend in an IT department somewhere that will let you dumpster dive)

sabreW4K3@lazysoci.al on 21 Apr 15:58 collapse

They’re actually decent. Congratulations!

Bishma@discuss.tchncs.de on 21 Apr 14:35 next collapse

If I had 25 surprise desktops I imagine I’d discover a long dormant need for a Beowulf cluster.

Trainguyrom@reddthat.com on 21 Apr 16:06 collapse

The thought did cross my mind to run Linpack and see where I fall on the Top500 (or the Top500 of 2000 for example for a more fair comparison haha)
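For a napkin estimate of where a cluster like that might land, here's a quick sketch. All figures are assumptions (Haswell-era quad-core i5s at ~3.2 GHz, 16 double-precision FLOPs per cycle per core with AVX2 FMA); actual Linpack Rmax over gigabit Ethernet would come in far below this theoretical peak:

```python
# Theoretical peak FLOPS for the whole pallet (assumed figures, see above).
machines = 25
cores_per_machine = 4
clock_hz = 3.2e9            # assumed clock for a 4th gen i5
flops_per_cycle = 16        # AVX2: 2 FMA units x 4 doubles x 2 ops

peak_flops = machines * cores_per_machine * clock_hz * flops_per_cycle
print(f"Theoretical peak: {peak_flops / 1e12:.2f} TFLOPS")
```

Linpack efficiency on a gigabit-linked cluster is usually well under half of peak, so treat this strictly as an upper bound.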

wewbull@feddit.uk on 23 Apr 09:54 collapse

  • Slurm cluster
  • MPI development
Doombot1@lemmy.one on 21 Apr 14:41 next collapse

Shitty k8s cluster/space heater?

just_another_person@lemmy.world on 21 Apr 14:48 next collapse

NOT any kind of crypto mining bullshit.

halm@leminal.space on 21 Apr 15:14 collapse

There’s always a good reason not to put another crypto mining cluster into the world.

Diabolo96@lemmy.dbzer0.com on 21 Apr 14:56 next collapse

Run 70b llama3 on one and have a 100% local, GPT-4 level home assistant. Hook it up with Coqui’s XTTSv2 for mind-baffling natural speech (100% local too) that can imitate anyone’s voice. Now you’ve got yourself Jarvis from Iron Man.

Edit: I thought they were some kind of beast machines with 192GB of RAM and stuff. They’re just regular low-to-mid tier PCs.

SaintWacko@midwest.social on 21 Apr 15:05 next collapse

I tried doing that on my home server, but running it on the CPU is super slow, and the model won’t fit on the GPU. Not sure what I’m doing wrong

Diabolo96@lemmy.dbzer0.com on 21 Apr 15:29 collapse

Sadly, I can’t really help you much. I have a potato PC, and the biggest model I ran on it was Microsoft’s Phi-2 using the Candle framework. I used to tinker with llama.cpp on Colab, but it seems they don’t handle llama3 yet. ollama says it does, but I’ve never tried it before. As for the speed, it’s kinda expected for a 70b model to be really slow on the CPU. How slow is too slow? I don’t really know…

You can always try the 8b model. People say it’s really great and has even replaced the 70b models they’ve been using.

SaintWacko@midwest.social on 21 Apr 16:09 collapse

Slow as in I waited a few minutes and finally killed it when it didn’t seem like it was going anywhere. And this was with the 7b model…

Diabolo96@lemmy.dbzer0.com on 21 Apr 22:49 collapse

It shouldn’t happen with an 8b model. Even on CPU, it’s supposed to be decently fast. There’s definitely something wrong here.

SaintWacko@midwest.social on 21 Apr 23:21 collapse

Hm… Alright, I’ll have to take another look at it. I kinda gave up, figuring my old server just didn’t have the specs for it

Diabolo96@lemmy.dbzer0.com on 22 Apr 08:59 collapse

Specs? Try Mistral with llama.cpp.

SaintWacko@midwest.social on 22 Apr 14:23 collapse

It has an Intel Xeon E3-1225 V2, 20GB of RAM, and a Strix GTX 970 with 4GB of VRAM. I’ve actually tried Mistral 7b and Decapoda Llama 7b, running them in Python with Huggingface’s Transformers library (from local models)

Diabolo96@lemmy.dbzer0.com on 22 Apr 15:45 collapse

Yeah, it’s not a potato, but not that powerful either. Nonetheless, it should run 7b/8b/9b and maybe 13b models easily.

running them in Python with Huggingface’s Transformers library (from local models)

That’s your problem right here. Python is great for building LLMs but is horrible at running them. With a computer as weak as yours, every bit of performance counts.

Just try ollama or llama.cpp. Their GitHub repos are also goldmines for other projects you could try.

llama.cpp can partially run the model on the GPU for way faster inference.

Piper is a pretty decent, very lightweight TTS engine that runs directly on your CPU if you want to add TTS capabilities to your setup.

Good luck and happy tinkering!

SaintWacko@midwest.social on 22 Apr 16:18 collapse

Ah, that’s good to know! I’ll give those other options a shot. Thank you so much for taking the time to help me with that! I’m very new to the whole LLM thing, and sorta figuring it out as I go

Diabolo96@lemmy.dbzer0.com on 23 Apr 09:36 collapse

Completely forgot to tell you to only use quantized models. Your PC can run 4bit quantized versions of the models I mentioned. That’s the key for running LLMs on consumer-level hardware. You can later read further about the different quantizations and toy with other ones like Q5_K_M and such.

Just read that Phi-3 got released, and apparently it’s a 4B that reaches GPT-3.5 level. Follow the news and wait for it to be added to ollama/llama.cpp

Thank you so much for taking the time to help me with that! I’m very new to the whole LLM thing, and sorta figuring it out as I go

I became fascinated with LLMs after the first AI boom, but all this knowledge is basically useless where I live, so I might as well make it useful by teaching people what I know.

possiblylinux127@lemmy.zip on 23 Apr 03:19 collapse

These are 10-year-old mid-range machines. Llama 7b won’t even run well

Diabolo96@lemmy.dbzer0.com on 23 Apr 09:18 collapse

The key is quantized models. A full model wouldn’t fit but a 4bit 8b llama3 would fit.
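A rough memory estimate shows why. Assuming ~4.5 bits per weight effective for a Q4_K_M-style quant (an assumption; exact sizes vary by quant type), plus about a gigabyte of KV cache and runtime overhead:

```python
# Approximate RAM needed for a 4-bit quantized 8B model (assumed figures).
params = 8e9              # llama3 8b parameter count
bits_per_weight = 4.5     # effective size of a Q4_K_M-style quant (assumed)
overhead_gb = 1.0         # KV cache + runtime, rough guess

weights_gb = params * bits_per_weight / 8 / 1e9
total_gb = weights_gb + overhead_gb
print(f"~{total_gb:.1f} GB")   # vs ~16 GB for the same model at fp16
```

On an 8GB machine that's a tight fit next to the OS, so a smaller quant or smaller model may still be needed.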

possiblylinux127@lemmy.zip on 23 Apr 12:51 collapse

It would fit but it would be very slow

Diabolo96@lemmy.dbzer0.com on 23 Apr 14:34 collapse

No. Quantization makes it go faster. Not blazing fast, but decent.

PhlubbaDubba@lemm.ee on 21 Apr 15:23 next collapse

According to Bush Jr. and Cheney, you are now capable of building a supercomputer dangerous enough to warrant a 20+ year invasion

Depending on the actual condition of all those computers and your own skill in building, I’d say you could rig a pretty decent home server rack out of those for most purposes you could imagine: a personal VPN, personal RDP to conduct work on, a personal test server for experimental code, and/or testing potentially unsafe downloads/links for viruses

Shit, you could probably build your own OS that optimizes for all that computing power just for the funzies, or even use it to make money by contributing its computing power to a crowd-sourced computing project, where you dedicate memory bandwidth for some grad student or research institute to do all their crazy math with. Easiest way to rack up academic citations if you ever want to be a researcher!

zach@lemmy.dbzer0.com on 21 Apr 18:30 collapse

What are you referencing in regard to the super computer investigation? Internet search failed me

downhomechunk@midwest.social on 21 Apr 20:33 collapse

ign.com/…/iraq-scores-hordes-of-ps2s-at-us-gamers…

I’m pretty sure this is it.

zach@lemmy.dbzer0.com on 21 Apr 21:57 collapse

Weird. Thanks for finding it!

Decronym@lemmy.decronym.xyz on 21 Apr 15:25 next collapse

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

ESXi: VMware virtual machine hypervisor
IP: Internet Protocol
NAT: Network Address Translation
NVMe: Non-Volatile Memory Express interface for mass storage
PSU: Power Supply Unit
SSD: Solid State Drive mass storage
VPN: Virtual Private Network
k8s: Kubernetes container management package

8 acronyms in this thread; the most compressed thread commented on today has 12 acronyms.

[Thread #697 for this sub, first seen 21st Apr 2024, 15:25] [FAQ] [Full list] [Contact] [Source code]

mmhmm@lemmy.ml on 21 Apr 15:36 next collapse

On one I would suggest Frigate. Those support (I think) a PCIe Coral AI chip.

Home Assistant would be great on bare metal

HumanPerson@sh.itjust.works on 21 Apr 15:45 next collapse

If I were you I might try deploying a mini enterprise network with permissions and things. It would be fun to do it with Active Directory to practice pentesting, or it would also be fun to do with Linux to learn more about deploying Linux in enterprise environments.

Trainguyrom@reddthat.com on 21 Apr 16:12 collapse

This is pretty high on the to-do list. I plan on virtualizing a bunch of it, but it would be pretty easy to have one desktop hosting each subnet of client PCs and one hosting the datacenter subnet. Having several hosts to physically network means less time spent verifying the virtual networks work as intended.

Playing with different deployment tools is a goal too. Having 2-3 nearly-identical systems should be really useful for creating unified Windows images for deployment testing

HumanPerson@sh.itjust.works on 21 Apr 17:53 collapse

I don’t like Windows, so I don’t deploy any of this for real, but yesterday and the day before I set up a Windows server, a few clients, and a Kali VM and managed to get in. I found out that if you type “\\anything” into the Windows search bar, it will send that user’s name and hash out very easily with LLMNR poisoning, on every keystroke. What’s worse is that that is the default behavior. It is super fun to learn about all this though.

Edit: upon posting this comment it made the double backslash look like a single backslash so I changed it to a triple so it looks right on my end but just know I meant for it to be double.

ares35@kbin.social on 21 Apr 16:27 next collapse

a pallet of 4th gens? i have a dozen left here from around that era that i can't get rid of without literally giving them away. they're 'tolerable' for a gui linux or win10 with an ssd, but the 'performance per watt' just isn't there with hardware this old. i used a few of them (none in an always-on role, though), but the rest just sit in the corner, without home nor purpose.

these 800 g1s are, iirc, 12VO, so upgrade or reuse potential is a bit limited. most users would want windows, and win10 does run 'ok enough' on 4th gen, just make sure they're booting from ssd (120gb minimum). but they'll run into that arbitrarily-erected wall-of-obsolescence when trying to upgrade or install win11 once win10 retires in ~18 months (you can 'rufus' a win11 installer, but there's no guarantee that you will be able to in the future). that limits demand and resale value of pretty much all the pre-8th gen hardware.

Trainguyrom@reddthat.com on 21 Apr 16:54 next collapse

I think you’re not giving 4th gen enough credit. My wife’s soon-to-be-upgraded desktop is built on a 4th gen i5 platform, and it generally does the job to a decent level. I was rocking a 4790k and GTX 970 until 2022, and my work computer in 2022 was an even older i5-2500 (held back more by the spinning hard drive than anything; obviously not a great job, but I found something much better in 2022). My last ewaste desktop-turned-server was powered by an i5-6500 (a few percentage points better performance than the 4th gen equivalent), and I have a laptop I use for web browsing and media consumption that’s got a 6700HQ in it.

I’ve already got a few people tentatively interested, and I honestly accepted the possibility of having to pay to recycle them later on. Should be a fun series of projects to be had with this pallet of not-quite-ewaste

possiblylinux127@lemmy.zip on 23 Apr 03:18 collapse

The i5-6500 is my favorite CPU

possiblylinux127@lemmy.zip on 23 Apr 03:18 collapse

They are fine honestly. It depends on your expectations

requiem@lemmy.world on 21 Apr 17:11 next collapse

I think the only answer is “Doom”

Potatos_are_not_friends@lemmy.world on 21 Apr 18:48 next collapse

Were you thinking like a Doom LAN party, or some weird supercluster with the pure focus of running Doom?

seaQueue@lemmy.world on 21 Apr 19:08 next collapse

But can they run Crysis?

mlg@lemmy.world on 21 Apr 19:17 collapse

If OP actually does do this I recommend Odamex

Although he’d also need 25 monitors lol

Trainguyrom@reddthat.com on 21 Apr 21:15 collapse

Although he’d also need 25 monitors lol

Back to the government auctions then!

foggy@lemmy.world on 21 Apr 17:18 next collapse

Setup a CS 1.6 LAN party arena.

Or a pen testing lab sounds fun: 8 PCs for a segmented network, a few red team PCs.

empireOfLove2@lemmy.dbzer0.com on 21 Apr 17:30 next collapse

BOINC! Do some science. :)

Bigoldmustard@lemmy.zip on 21 Apr 17:41 next collapse

If you purchased them from the federal government they won’t have hard drives.

Trainguyrom@reddthat.com on 21 Apr 21:07 collapse

State government, and it says they come with SSDs. They came from a school so presumably they’re from a lab or are upgraded staff PCs, both would be pretty low sensitivity. Maybe I’ll learn the final test answers for Algebra 1 at worst!

Might be fun to do some forensic data recovery and see if anything was missed though

Bigoldmustard@lemmy.zip on 21 Apr 23:54 collapse

Ooh, or behind the scenes email drama from staff!

Mango@lemmy.world on 21 Apr 17:54 next collapse

Xonotic!

cmnybo@discuss.tchncs.de on 21 Apr 20:33 next collapse

I certainly wouldn’t want to pay the power bill from leaving a bunch of these running 24/7, but they would work fine if you wanted to learn cluster computing.

You could always load them up with a bunch of classic games and get all your friends over for a LAN party.

h3ndrik@feddit.de on 21 Apr 20:41 next collapse

Hmm, get 25 monitors and friends and play one of those starship bridge simulators like smcameron.github.io/space-nerds-in-space/

fed0sine@lemm.ee on 21 Apr 21:31 next collapse

You made me remember PULSAR - Lost Colony which is a decent iteration of co-op space bridge sim!

…steampowered.com/…/&sa=U&ved=2ahUKEwj0mZ6uoNSFAx…

Bahnd@lemmy.world on 23 Apr 15:55 collapse

Oh, that one was a blast! I need to get my nerd herd to revisit it… Although all we did was play liars dice while the ship was on fire.

GhostTheToast@lemmy.world on 22 Apr 01:46 collapse

I volunteered as tribute to be one of these ‘Friends’

solrize@lemmy.world on 21 Apr 17:42 next collapse

Do you have particularly cheap or free electricity?

Trainguyrom@reddthat.com on 21 Apr 21:09 collapse

12 cents per kilowatt-hour. I certainly don’t plan on leaving more than a couple on long term. I might get lucky with the weather and need the heating though :)

solrize@lemmy.world on 22 Apr 00:23 next collapse

25 machines at say 100W each is about 2.5kW. Can you even power them all at the same time at home without tripping circuit breakers? At your mentioned $0.12/kWh that is about 30 cents an hour, or over $200 to run them for a month, so that adds up too.

An i5-4560S is 4597 in PassMark, which isn’t that great. 25 of them is 115k at best, so about like one big Ryzen server that you can rent for the same $200 or so. I can think of various computation projects that could use that, but I don’t think I’d bother with a room full of crufty old PCs if I was pursuing something like that.
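The cost arithmetic above, spelled out (the 100 W per-machine draw is an assumption):

```python
# Power and monthly cost of running the whole lot 24/7 (assumed 100 W each).
machines = 25
watts_each = 100
rate_per_kwh = 0.12
hours_per_month = 24 * 30

total_kw = machines * watts_each / 1000          # 2.5 kW
cost_per_hour = total_kw * rate_per_kwh
cost_per_month = cost_per_hour * hours_per_month
print(f"{total_kw} kW, ${cost_per_hour:.2f}/hr, ${cost_per_month:.0f}/month")
```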

Trainguyrom@reddthat.com on 22 Apr 00:43 next collapse

I won’t be leaving all of them on for long at all. I’ve got a few basically unused 15A electrical circuits in the unfinished basement (I can see the wires and visually trace the entire runs). I’ll probably only run all 25 long enough to run a Linpack benchmark and maybe some kind of AI model on the distributed compute, then start getting rid of at least half of them

FryAndBender@lemmy.world on 22 Apr 09:15 next collapse

UK here, we could run that from 1 plug.

11111one11111@lemmy.world on 22 Apr 16:14 collapse

Psh 1 plug aint shit. Every Pic I see from anyone who lives out in those ghettos of India, Central America or any spacific islands they also only rock 1 plug but theyre running the corner store, the liquor store, the hospital, their style of little school middle school and old school, 3 hair salons if Latin or 3 nail salons if spasific, Bollywood, every stadium from every country in the world cup, and always 1 dude trying to squeeze 1 more plug in cuz hes runing low bats. Idk why the American ghetto is so pussy. One time i seen a family that fuckin put covers over empty sockets?!? Come on dog thats like wearing a condom jerking off. NGL tho, I get super jelly seeing pictures from those countries thp with their thousands of power lines, phone lines, sidelines, cable lines, borderlines, internet lines… fuck I don’t know much about how my AOL works but those wizards must be streaming some Hella fast Tokyo banddrifts with all them wires.

ulterno@lemmy.kde.social on 22 Apr 17:25 collapse

those wizards must be streaming some Hella fast Tokyo banddrifts with all them wires.

That part is wrong for India, at least.
Here’s a random site with some stats
In India, you can expect ~100Mb/s with FTTH and 50Mb/s otherwise. Reliability is even worse.

Rest is right.

MenacingPerson@lemm.ee on 23 Apr 10:05 collapse

India has gigabit fiber

ulterno@lemmy.kde.social on 23 Apr 13:30 collapse

And Japan has a 300+ Tb/s connection. Your point?
My point is that the average Indian is not doing “Hella fast Tokyo banddrifts” (not sure what banddrift even means, but no).

And yes, a 1Gb/s connection is theoretically available, but how many people are using the ~₹4000/month connection?

Considering how many people tend to just not have Broadband at home, relying just on mobile internet, we can see how things compare with others.

Also, to point to the tread starter, most of the “thousands of” cables that you see on poles in congested areas, are just abandoned cables from older installations which nobody cared to remove.

MenacingPerson@lemm.ee on 24 Apr 07:15 next collapse

I’m not the same dude that was talking about banddrifts and congested poles.

Indian, btw.

MenacingPerson@lemm.ee on 24 Apr 07:16 collapse

Also ~100Mb/s is in no way the average speed in an Indian household. It’s usually lower. I also don’t see any specific mentions of india in your link up there to that random site.

ulterno@lemmy.kde.social on 24 Apr 23:37 collapse

Also ~100Mb/s is in no way the average speed in an Indian household.

You’re right. It’s not.

I also don’t see any specific mentions of india in your link up there to that random site.

I don’t see any either. Guess why. Because it only has the top 10, further emphasising the point that:

the average Indian is not doing “Hella fast Tokyo banddrifts”

billwashere@lemmy.world on 22 Apr 18:19 next collapse

This is only about 21 amps. Most outlets in a home are 15 amps, but 20 amps isn’t unheard of. From one outlet, doubtful, but yes, one house could provide that much power easily if you split them up across three or four rooms on different breakers.

Now it would be fun to watch his electric meter spin like a saw blade … (yes I’m old … I remember meters that had spinning discs)

Zorg@lemmings.world on 23 Apr 00:28 next collapse

Just two 15A breakers is enough, actually. Circuits are only rated to sustain 80% of breaker capacity continuously, so you should be able to pull 1.44kW from a single puny NEMA 5-15.
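A sketch of that math, assuming 120 V circuits and ~100 W per machine (the per-machine draw is a guess):

```python
# Continuous capacity of a 15 A / 120 V circuit under the 80% rule,
# and how many ~100 W machines it could carry.
breaker_amps = 15
volts = 120
continuous_fraction = 0.8   # NEC-style 80% rule for continuous loads

continuous_watts = breaker_amps * volts * continuous_fraction  # 1440 W
machines_per_circuit = int(continuous_watts // 100)
print(f"{continuous_watts:.0f} W -> {machines_per_circuit} machines/circuit")
```

So two such circuits comfortably cover 25 machines at that assumed draw.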

billwashere@lemmy.world on 23 Apr 03:46 collapse

Well, true, but I was assuming the circuits already had some things drawing a little power. Flipping on a device and tripping a breaker with 12 machines on it wouldn’t be ideal :)

I have done this before in my upstairs home lab. 3 beefy ESXi machines, some NAS storage, and a basic 10GbE switch eat up a lot of a single 15 amp circuit. And apparently turning on a TV pushes it over the edge. Luckily the UPS saved my butt while I reset the breaker and shut some stuff off.

[deleted] on 23 Apr 00:28 collapse

.

possiblylinux127@lemmy.zip on 23 Apr 03:16 next collapse

Jack into the local coffee shop

rsolva@lemmy.world on 23 Apr 07:44 next collapse

I have a couple of these (only the G2 and G3 SFF) and they consume between 6-10W when not under load, maxing out at 35W (or 65W depending on CPU). I run Proxmox with 64GB RAM and they are surprisingly efficient.

Blackmist@feddit.uk on 23 Apr 09:32 collapse

That’s less than a kettle, in the UK at least.

Of course I wouldn’t want to be running that all the time, because electric ain’t cheap.

bradorsomething@ttrpg.network on 22 Apr 17:04 next collapse

Put a different operating system on each one, and make each a gateway to access the next. See who can make it through.

billwashere@lemmy.world on 22 Apr 18:13 collapse

HungerGamesOS. I love this idea!!!

possiblylinux127@lemmy.zip on 23 Apr 03:16 next collapse

Do a giant Proxmox cluster. You are going to need one hell of a switch though

deFrisselle@lemmy.sdf.org on 23 Apr 09:13 next collapse

[image]

Linkerbaan@lemmy.world on 23 Apr 09:32 next collapse

I don’t understand why people want to use so many PCs rather than just run multiple VMs on a single server that has more cores.

___@lemm.ee on 23 Apr 10:15 next collapse

Learning.

towerful@programming.dev on 23 Apr 10:16 next collapse

Having multiple machines can protect against hardware failures.
If hardware fails, you have donor machines.
It’s good learning, both for provisioning, for the physical side (cleaning, customising, wiring, networking with multiple NICs), and for multi-node clusters.

Virt is convenient, but doesn’t teach you everything

Linkerbaan@lemmy.world on 23 Apr 12:21 collapse

I’m not sure running multiple single-SSD machines would provide much redundancy over a server with multiple PSUs and drives. Sure, the CPU or mobo could fail, but that downtime would be less hassle than 25 old PCs.

Of course there is a learning experience in more hardware, but 25 PCs does seem slightly overkill. I can imagine 3-5 max.

I’m probably looking at this from the point of view of a homelabber who just wants to run stuff, though, not with “setting up the PCs themselves” being the hobby.

LukyJay@lemmy.world on 23 Apr 10:36 collapse

“I don’t understand why you’d run so many VMs when you can just run it on bare metal”

It’s fun! This is a hobby. It doesn’t have to be practical.

Linkerbaan@lemmy.world on 23 Apr 11:51 collapse

Of course, but installing everything on multiple bare metal machines which take IP addresses, versus just running it in VMs which have IP addresses… it just takes a lot of extra power and doesn’t achieve much. Of course that can be said about any hobby, but I just want OP to know that there is no real reason to do this, and I don’t understand so many people hyping it up.

TseseJuer@lemmy.world on 24 Apr 16:37 next collapse

Damn zuck meta is eating you up. Take a breather it’s just for fun. Bro doesn’t have to find the cure for cancer just to poke around on some new hardware

Trainguyrom@reddthat.com on 24 Apr 18:20 collapse

I already said in the original post that I plan on selling off and giving away ~15 of them, keeping a few as spares, and only actually leaving one on 24/7

bare metal machines which take IP addresses, against just running it in VM’s which have IP addresses

Both bare metal and VMs require IPs; it’s just about what networks you toss them on. Thanks to NAT, IPs are free and there’s about 18 million of them to pick from in just the private IPv4 space
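That figure is easy to verify with Python's stdlib ipaddress module: summing the three RFC 1918 private ranges gives just under 18 million addresses.

```python
# Count all RFC 1918 private IPv4 addresses.
import ipaddress

private_ranges = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
total = sum(ipaddress.ip_network(r).num_addresses for r in private_ranges)
print(f"{total:,} private IPv4 addresses")  # 17,891,328
```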

A big reason for bare metal for clustering is that it takes the guesswork out of virtual networking, since there are physical cables to trace. I don’t have to guess whether a given virtual network has an L3 device the virtualization platform helpfully added or is all L2; I can see the blinky lights for an estimate of how much activity is going on on the network, and I can physically degrade a connection if I want to simulate an unreliable link to a remote site. I can yank the power on a physical machine to simulate a power/host failure; with a VM you have to hope the host actually yanks the virtual power and doesn’t do some pre-shutdown stuff before killing it to protect you from yourself. Sure, you can ultimately do all of this virtually, but having a few physical machines in the mix takes the guesswork out of it and makes your labbing more “real world”

I also want to invest the time and money into doing some real clustering technologies kinda close to right. Ever since I ran a Ceph cluster in college on DDR2-era hardware over gigabit links, I’ve been curious what level of investment is needed to make Ceph perform reasonably, and how Ceph compares to, say, GlusterFS. I also want to set up an OpenShift cluster to play with, and that calls for about five 4-8 core, 32GB RAM machines as a minimum (which happens to be the maximum hardware config of these machines). Similar with Harvester HCI

It just takes a lot of extra power and doesn’t achieve much

I just plan on running all of them just long enough to get some benchmark porn then starting to sell them off. Most won’t even be plugged in for more than a few hours before I sell them off

there is no real reason to do this and I don’t understand so many people hyping it up.

Because it’s fun? I got 25 computers for a bit more than the price of one (based on current eBay pricing). Why not do some stupid silly stuff while I have all of them? Why have an actual reason beyond “because I can!”

25 PC’s does seem slightly overkill. I can imagine 3-5 max.

25 computers is definitely overkill, but the auction wasn’t for 6 computers, it was for 25 of them. And again, I seriously expected to be outbid and the winning bid to be over a grand. I didn’t expect to get 25 computers for about the price of one. But now I have them, so I’m gonna play with them

Linkerbaan@lemmy.world on 24 Apr 18:23 collapse

I see, I was picturing a 25-PC stack; this makes a lot more sense. Thanks for the explanation.

ricdeh@lemmy.world on 23 Apr 09:35 next collapse

I would personally attempt the Kubernetes cluster if I had that many physical machines!

Charadon@lemmy.sdf.org on 25 Apr 19:33 next collapse

distcc cluster?

Boomkop3@reddthat.com on 26 Apr 05:24 collapse

25 screens, 25 dancing gandalfs