This is fun. I'm more of a minimalist with my homelab setup. It's a laptop and an old NAS. I love it either way: running a homelab is a nonsensical and fun hobby in any case.
I feel like we live in a world in which it's either racks or cheap VPSs. In reality, at home, we have some serious CPU horsepower just spinning billions of empty cycles per second. Consumer hardware is insane.
I've handled tens of thousands of unique visitors per minute, and more than a couple of front-page Reddit + Hacker News herds, on this little laptop through a residential ISP.
Depends, I get a lot of utility from mine, as it manages my media collection for streaming at home and on the go. I've tried using SaaS alternatives and managed hosting of the apps I run, but those experiences were both lackluster and relatively expensive. And since the apps I run and my media collection aren't locked behind proprietary systems or limited APIs that might disappear, the amount of integration and automation makes for a very pleasant experience.
> In reality, at home, we have some serious CPU horsepower just spinning billions of empty cycles per second. Consumer hardware is insane.
I just add my old devices to my cluster and call it a day. Even ancient hardware is suitable for it, especially if you're using old laptops that are optimized for power savings. Even old Core 2 processors in laptops can idle at a low wattage, and TDP can be less than a light bulb's when maxed out.
One of the most irritating things about "homelabs" is that most people seem to think a "homelab" means "a rack of very expensive, way-overspec'd ubiquiti gear, an OTS NAS unit, and a docker container server running media server/torrent shit."
I have a laptop running a dozen different containers - bookstack, torrent client, rss reader, and so on. I don't think of it as a "homelab."
I have to respectfully push back here. "hw/sw" garage is the perfect way to describe it. All of this takes place at one's home. There is no arbiter but the individual over what is in or out when it comes to one's time at home.
Distinction springs from the person themselves.
I think ClumsyPilot's way to put it was just fine.
I just said that the distinction is not "pointless" or "gatekeeping", as some people accused, but pretty common convention which can be quite useful for readers. In particular, you are pushing back against something nobody said.
You're mistaking the consumerist urges that a lot of people indulge in their hobbies for the idea of a homelab itself. A homelab can be overspec'd Ubiquiti gear or it could be a RasPi running a bunch of services. It's just one or more servers that sit on your home network that you can fuck around with. Yeah, I guess something whose stability you care about doesn't necessarily merit the "lab" distinction, but a lot of the time these stable things come out of experimenting in a homelab.
My "media server" (browser and downloaded media played via smplayer on a stock Ubuntu install) emerged from an experimental server running a lightweight distro that I used to do anything and everything from. Once I found that which parts of the media usecase fit into my partner and my lifestyle, it graduated to a stable decently-specced Ubuntu machine that is rarely touched other than for updates and downloading new content.
A home lab was generally seen as something to experiment with and learn something from. Setting up a Plex server is literally installing one piece of software on anything newer than a Core 2 Duo.
I started my own homelab like this, and still learned from day 1.
It grew over time to serve many purposes, has seen many stopped/failed attempts, has had many lives (I mean, recreate from scratch to learn X, Y or Z).
There's not a particular day or a particular addition or a particular level of complexity when it became a homelab, I think of it as such from day 1.
I think of a homelab as one or more servers (or a computer, laptop etc.) located in a home to play around with software, virtualization, hosting stuff both for testing and actual functional (home) use. Basically everything that's experimenting (like in a real lab) with technology. Of course the definition will be different for everyone :)
I would agree with this. While I run Plex and/or Miniflux on, as he put it, an OTS NAS, I also use it to provision LUNs or test things that I may consider for work.
Here are a few of the deployments that originated in my home lab but ended up in a prod environment at work:
1. Replacing hardware load balancers with HAProxy. (This started with a few options, including Nginx and some others, but HAProxy's web management and CSV stats monitoring gave me the best capability to integrate at work; see the sketch after this list.)
2. VPN appliances for COVID. I was able to whip up 2-4 scalable VPN appliances based on OpenVPN in "1/2 a day" at work because I had already fleshed out most of that at home.
3. Vulnerability scanners
4. HIDS security tools. In the end we went with an OTS vendor, but options like OSSEC, Wazuh, etc. were ruled out in a lab.
5. Ansible (over some of the other options)
6. Squid for reporting on the HIDS mentioned above.
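To give a taste of item 1: HAProxy exposes its stats as CSV, either over the stats socket or by appending `;csv` to the stats URI. Here's a rough Python sketch of a poller, not the exact thing we deployed; the URL is a placeholder for whatever `stats uri` listener you define in haproxy.cfg.

    #!/usr/bin/env python3
    """Quick poller for HAProxy's CSV stats endpoint (illustrative sketch)."""
    import csv
    import io
    import urllib.request

    # Placeholder: point this at your own stats listener, e.g. one defined with
    # `stats uri /stats` in haproxy.cfg; appending `;csv` returns CSV output.
    STATS_URL = "http://127.0.0.1:8404/stats;csv"

    def fetch_stats(url=STATS_URL):
        with urllib.request.urlopen(url, timeout=5) as resp:
            text = resp.read().decode()
        # The first line is "# pxname,svname,..." so strip the leading "# ".
        return list(csv.DictReader(io.StringIO(text.lstrip("# "))))

    if __name__ == "__main__":
        for row in fetch_stats():
            # pxname = frontend/backend, svname = server, scur = current sessions
            print(f"{row['pxname']:20} {row['svname']:15} {row['status']:6} scur={row['scur']}")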
There’s probably more. I know there is. And point blank, a lot of this stuff had mock-ups done at home because I have full control and am not subject to auditors, etc., when evaluating things. Whereas if I do that at work, I have to do more work writing up justifications or change requests. It’s just easier at home.
All that said I try to keep the house as flat and plain Jane as I can.
I have a production hypervisor (HP EliteDesk 800 G3 Mini). This is where things run that my spouse cares about, in particular, Home Assistant. I don't generally mess with this machine.
I also have a lab hypervisor (Dell T30). This is where I feel free to experiment with VMs and accidentally on purpose have to rebuild it every once in a while, take it down to swap out hardware, etc.
My "media server" consists of a web application, backend application, multiple *arr services, transcoding automations, fibre termination, user account management shared across multiple machines and services, multiple VLANs and LUNs, etc.
All these are spread across 16RU or so, but really only serve as a "media server".
Yeah the best part of the home lab hobby is gatekeeping because you spent more than someone else and you need to belittle them to justify how much you spent on your Juniper or whatever.
Plex servers and containerizing all of the services that support it are how I learned to use docker before I became a professional. It’s now turned into a collection of containerized services all talking to each other with dynamic dns on cron, pihole, wireguard, HomeKit running all my appliances and more…
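For anyone curious what the "dynamic dns on cron" bit can look like, here's a rough sketch (not my exact setup): check the current public IP and only call the DNS provider's update endpoint when it changes. The update URL below is a made-up placeholder; every provider has its own flavour of authenticated update request, and api.ipify.org is just one of many plain-text "what's my IP" services.

    #!/usr/bin/env python3
    """Tiny dynamic-DNS updater intended to run from cron every few minutes."""
    import urllib.request
    from pathlib import Path

    # Hypothetical update endpoint; substitute your DNS provider's real one.
    UPDATE_URL = "https://dns.example.invalid/update?host=home.example.com&ip={ip}"
    CACHE = Path("/var/tmp/last_public_ip")

    def public_ip():
        # api.ipify.org returns the caller's public IP as plain text.
        with urllib.request.urlopen("https://api.ipify.org", timeout=10) as r:
            return r.read().decode().strip()

    def main():
        ip = public_ip()
        if CACHE.exists() and CACHE.read_text().strip() == ip:
            return  # unchanged, don't hammer the provider
        with urllib.request.urlopen(UPDATE_URL.format(ip=ip), timeout=10) as r:
            r.read()
        CACHE.write_text(ip)

    if __name__ == "__main__":
        main()

A crontab line along the lines of `*/5 * * * * /usr/local/bin/ddns-update.py` is all the scheduling it needs.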
I don’t get it. Is it only a homelab if I use it to practice network certification exams?
There is some grass fed, organic, grade A level gatekeeping in this thread…and it made me want to rant.
With less technical management, I've had repeated, and bewildered, conversations trying to get them to understand that our one "computer" sitting on my desk is many many times faster than the "server" our IT team provides. "But it's a server!".
I like to point out to people who haven’t worked it out for themselves that the load-balanced HA pair of EC2 instances with the multi-AZ RDS that runs almost $200 a month at on-demand rates is somewhat less computing power and storage than the phone in my pocket.
Many times faster doesn't mean shit if it takes up 4-6x more space than it needs to in what is likely the most expensive commercial real estate the company owns/leases.
Many times faster doesn't mean shit if it can't be remotely lights-out managed and its hardware monitored using standardized tools (or at all.)
Many times faster doesn't mean shit if it doesn't have redundant PSUs.
Many times faster doesn't mean shit if failed drives can't be hotswapped.
Also, the computer sitting on your desk is not "many many times faster" than a current, or even few years old, server.
Etc.
If you want better hardware from IT, tell management to give them more money. IT is almost always viewed as a cost center and given a shoestring budget, yet asked to do, and be responsible for, the world.
You know how you're experienced from all your years as a programmer? Imagine IT people are the same, instead of assuming they're all idiots who are too stupid to go out and buy desktop computers instead of servers like your genius self.
> Also, the computer sitting on your desk is not "many many times faster" than a current, or even few years old, server.
The server is a big pie. If you're buying a single slice, then yes, it's very very easy for a cheap old desktop to be way faster than a cheap VPS.
> Imagine IT people are the same, instead of assuming they're all idiots who are too stupid to go out and buy desktop computers instead of servers like your genius self.
It's the managers that are idiots. Not everything needs to run in a datacenter. Some things really are kittens and not cattle.
> Many times faster doesn't mean shit if it takes up 4-6x more space than it needs to in what is likely the most expensive commercial real estate the company owns/leases.
Unless you're hoping to monetize that spot on your desk, the real estate market means nothing in terms of cost.
> Many times faster doesn't mean shit if it can't be remotely lights-out managed and its hardware monitored using standardized tools (or at all.)
What stops you from "using standardized tools" on a box you own?
> Many times faster doesn't mean shit if it doesn't have redundant PSUs.
What leads you to believe that all those 9s are relevant, or that cloud alternatives even do better? In fact, I'm not sure that the latest rounds of outages at AWS allow it to claim more than three 9s during the past year.
> Also, the computer sitting on your desk is not "many many times faster" than a current, or even few years old, server.
So you are basically saying that 99% of the time it will work fine. Got it.
But seriously, they were comparing to the server they got, not the one you have or can provide.
It’s entirely reasonable for the IT team to provide a VPS that doesn’t have nearly the amount of power for an application that’s barely used. Doesn’t mean it’s easy to explain to management.
> Imagine IT people are the same, instead of assuming they're all idiots who are too stupid to go out and buy desktop computers instead of servers like your genius self.
Nearly all of your assumptions here are incorrect or flawed, except the redundant PSU (we only had one). But I do think they're just like me: working in a non-ideal environment with constraints outside of our direct control. The non-ideal constraint that they had, in that instance, at that time, was that they could only give us a VPS with 4 threads. It wasn't possible to do what we needed with their server. Or, to put it into your language, five nines doesn't mean shit if, in practice, it makes for a reliable space heater.
I think a lot of people build a homelab to learn about technologies and get real-world experience deploying them. That was what drove mine for a couple of years. Once you’ve mastered servers and networking and so on, then it just becomes a fun hobby, I agree there. But someone who wants to get into networking and the like (and who lacks experience) is definitely going to need to practice with real or simulated networks to get good at it.
Looks better than my quick and dirty WireGuard setup to get NAT Type: A behind CGNAT on game consoles - basically put whatever is connected to a device in a "public DMZ", separate from your network: https://raw.githubusercontent.com/genewitch/opensource/maste...
WireGuard is both very frustrating and very cool. I'm currently using it similarly to give a VM a public IP, and I'm testing the details of getting multiple SSL/HTTPS hosts behind that single IP, which is something you couldn't easily do a decade ago without the host with the single IP holding all of the certificates and "MITM"ing the entire session.
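The trick that makes this possible is SNI: the requested hostname sits in plaintext in the TLS ClientHello, so a front proxy can route raw bytes by hostname without terminating TLS or holding any certificates. In practice you'd reach for nginx's stream module, HAProxy or sniproxy rather than rolling your own, but here's a toy sketch of the mechanism; the hostnames and backends are made up, and it assumes the whole ClientHello arrives in the first read, which it usually does.

    #!/usr/bin/env python3
    """Toy SNI-passthrough router: peek at the TLS ClientHello, pick a backend
    by hostname, then blindly shuffle bytes both ways. Illustration only."""
    import asyncio

    # Hypothetical hostname -> internal backend map; adjust for your network.
    BACKENDS = {
        "app.example.com": ("10.0.0.11", 443),
        "git.example.com": ("10.0.0.12", 443),
    }

    def extract_sni(data):
        """Best-effort SNI extraction from the first TLS record."""
        try:
            if data[0] != 0x16:                  # not a TLS handshake record
                return None
            pos = 5 + 4 + 2 + 32                 # record hdr + handshake hdr + version + random
            pos += 1 + data[pos]                                  # session id
            pos += 2 + int.from_bytes(data[pos:pos + 2], "big")   # cipher suites
            pos += 1 + data[pos]                                  # compression methods
            end = pos + 2 + int.from_bytes(data[pos:pos + 2], "big")
            pos += 2
            while pos + 4 <= end:                                 # walk the extensions
                ext_type = int.from_bytes(data[pos:pos + 2], "big")
                ext_len = int.from_bytes(data[pos + 2:pos + 4], "big")
                if ext_type == 0:                                 # server_name extension
                    name_len = int.from_bytes(data[pos + 7:pos + 9], "big")
                    return data[pos + 9:pos + 9 + name_len].decode("ascii")
                pos += 4 + ext_len
        except (IndexError, UnicodeDecodeError):
            pass
        return None

    async def pipe(reader, writer):
        try:
            while data := await reader.read(65536):
                writer.write(data)
                await writer.drain()
        finally:
            writer.close()

    async def handle(client_reader, client_writer):
        first = await client_reader.read(4096)
        backend = BACKENDS.get(extract_sni(first))
        if backend is None:
            client_writer.close()
            return
        backend_reader, backend_writer = await asyncio.open_connection(*backend)
        backend_writer.write(first)              # replay the bytes already consumed
        await backend_writer.drain()
        await asyncio.gather(pipe(client_reader, backend_writer),
                             pipe(backend_reader, client_writer))

    async def main():
        # Binding port 443 needs root or CAP_NET_BIND_SERVICE.
        server = await asyncio.start_server(handle, "0.0.0.0", 443)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())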
Speaking of "CPU horsepower" i just replaced a 1.5kW HP[0] server with a .2kW Ryzen 5950x "server" that is about 5% faster overall - don't forget that old stuff, while capable, adds to the electric bill, usually at a constant rate.
[0] the iLO (lights out) actually reported the server's power usage in BTU. It drew the same power as a portable space heater.
> Speaking of "CPU horsepower": I just replaced a 1.5kW HP[0] server with a 0.2kW Ryzen 5950X "server" that is about 5% faster overall - don't forget that old stuff, while capable, adds to the electric bill, usually at a constant rate.
What I've observed is that people on subs like r/homelab and r/sysadmin ridicule anyone who appreciates the horsepower available in modern consumer tech because of "no ECC memory" or the like, and I wonder whether people looking to build labs with the latest Ryzen or i7/i9 (really, I'm thinking of getting started by converting an old ThinkCentre with a 4th-gen i5, possibly undervolting the CPU, and 24GB of DDR3 into a pfSense router and some sort of server) will really be missing out on some necessary enterprise feature.
Enterprise servers make no sense at home, but they are more fun to play with than old laptops. It's a hobby. After a while you appreciate buying good tools rather than making do, like any other hobby.
Old HPE servers have jumped in price though. Last year you could buy insanely powerful stuff for under $200 but it's all $400 and up for the same gear at the moment.
ECC is about long term stability and data integrity. For a router, meh, the network protocols will deal with any flips. For a file server or database it's better if those random bit flips don't happen to critical data.
AMD-based systems can sometimes be forced to use ECC mode even if the BIOS doesn't support it.
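A rough way to sanity-check that ECC is actually active (rather than just present on the DIMMs) is to see whether the kernel's EDAC subsystem has registered a memory controller. Something like the following; paths and counters vary a bit by platform and driver, so treat it as a sketch.

    #!/usr/bin/env python3
    """Rough ECC sanity check via the kernel's EDAC sysfs interface."""
    from pathlib import Path

    EDAC = Path("/sys/devices/system/edac/mc")

    controllers = sorted(EDAC.glob("mc*")) if EDAC.exists() else []
    if not controllers:
        print("No EDAC memory controllers registered - ECC likely not active.")
    for mc in controllers:
        ce = (mc / "ce_count").read_text().strip()   # corrected errors
        ue = (mc / "ue_count").read_text().strip()   # uncorrected errors
        print(f"{mc.name}: corrected={ce} uncorrected={ue}")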
ECC is more important in systems with very large RAM footprints because there's that much more memory for cosmic rays to corrupt. If you've got one or two sticks of RAM, and you're not running vital business data, meh, it's not required.
I really like ECC. But I'm not really willing to pay a significant premium for it.
I run companion systems to production out of my house, mostly development lanes comparable to production deployments. If they're down, it's really not mission critical. I also run my home security/surveillance systems. The other significant systems are those related to my children's computer lab.
One argument I’ve heard, which seems valid enough, is that running a homelab setup without ECC memory is a good thing, because you then need to build stuff that can handle memory errors (or at least notice and learn from them), which you will occasionally see across your half a dozen or so homelab machines. Once you're deploying production systems “at scale” with hundreds (or thousands) of servers, you are guaranteed to see occasional memory errors (probably even with ECC).
One of my first real homelab setups was six Raspberry Pis, laid out as two load balancers, two web app servers, and two database servers. The “unreliability” of Pis running everything off SD cards was a _feature_, because it gave me lots of experience in how my (work) software stack, running on our typical AWS design, held up in the face of failures.
I was running Wok + Kimchi + gingerbase on the HP; I'm now using Proxmox instead. Short of having lights-out (out-of-band) management built in, I haven't noticed much difference between the platforms being "a server" and "a desktop". Make no mistake, the 5950X is a monster chip, but it's still a desktop with too few PCIe lanes for me to consider it "a server" - luckily I only require enough PCIe for an old GPU and extra SATA ports. If I was building out stuff to do more research I'd want more PCIe lanes than the Ryzen desktop platform supports.
I recently bought an old HP server thinking it would be fun to play with. It turned out that it was loud and power-hungry, and for most things my needs could be served just as well by an old laptop. I ended up giving the server to a friend (who has their electric bill included in their rent).
> In reality, at home, we have some serious CPU horsepower just spinning billions of empty cycles per second. Consumer hardware is insane.
Yeah. I've got my search engine running on basically a beefy gaming PC with no graphics card and 128 GB of RAM. Not only have I shrugged off the supposed HN death hugs multiple times, I've had my search engine handle multiple queries per second without so much as flinching. It took Elon Musk tweeting a link to one of my blog entries before I started getting more traffic than my computer could handle and it started dropping a few connections.
Modern consumer PCs are ridiculously powerful if you make good use of the hardware.
Very cool! Running the not-well-maintained https://hndex.org search engine (and other memory hungry linear algebra based services) was also my original motivation for tunneling to my home as opposed to hosting on a VPS.
Are you hosted via a residential ISP? It's my hunch that peering agreements favor routes of consumer -> data center -> consumer as opposed to consumer -> consumer. That's mainly why I tunnel. Has that been your experience?
Yeah, it's on residential broadband. Haven't really had much trouble, to be honest. Though I'm based in Sweden and we have fairly robust networking infrastructure all around, so I guess that may be a factor.
I was hit by a botnet after my first big HN hug, so right now at least the search engine goes visitor->cloudflare->server, but if anything that's just added a bunch of ping.
I'm also doing crawling and so on on the same network and it's really not bad for that either. Granted my crawls are fairly slow and very far from saturating any connections.
Anyone else annoyed at how narrow the term Homelab really is relative to what it could be? Any scientific or maker hobbies could take place in a "home lab," from breeding seedlings, to soldering and electronics work, to 3D printing. But it really means just networking and servers?
My personal homelab (a stack of old laptops connected to a network switch) is mostly built around various experimental antenna arrays used for RTL-SDR hobbies (aircraft and maritime telemetry collection mostly) + home automation over a Z-Wave mesh network. The home automation ecosystem also has a lot of overlap with automated gardening/growing, since automated sensing and irrigation are a great use case for tools like Home Assistant.
It's likely easier, safer and more socially acceptable to set up this kind of home lab than the kind of maker spaces you are talking about. A lot of people go to (shared/public) maker spaces precisely because their home is not suitable for that kind of physical experimenting. You probably need space, money and expertise to do home experiments of that sort and then it's probably generally wise to keep it on the down low in most cases so you don't freak out the neighbors or otherwise draw problems to yourself.
Agreed. I always click the link expecting some kind of electronics or bio lab, and am instead greeted by a server and some networking equipment.
I am not really interested in reading about this kind of stuff[1]. I have a few raspberry pi's that serve as my "home lab" and that is all I really need right now. But I suspect the term took off because it gets a bunch of people like me to click.
[1] Not that I don't think it should exist, its just not high on my list of interests right now.
>Anyone else annoyed at how narrow the term Homelab really is relative to what it could be?
I wasn't before I read this thread!
I mean, my "home lab" are old computers from 2008 to 2016 and half of what I "do" involves simply testing what weird installations (eg netbsd) I can get up and running. It's for tinkering -playing; the "lab" part coming from experimentation: "What happens when I do this...?"
I agree when it comes to the definition being too narrow. Then again, I mostly engage with it on /r/homelab, and the HN attitude seems to be much more restrictive... too restrictive for my tastes, personally.
Agreed. My home lab does include some services and networking to support them, but also some nutso wifi and other radio data hardware, electronics, 3d printing, large format 2d printing, mechanical fabrication, precision metrology...
I've never understood why fiddling with IT equipment in your home is considered a "home lab", since you aren't necessarily creating anything (like a laboratory would do); you are just integrating together other people's industrial components to serve non-business purposes. The most egregious factor of this hobby is that you are often buying equipment that costs enterprise prices with your own money, whether or not it is used equipment.
> I've never understood why fiddling with IT equipment in your home is considered a "home lab" since you aren't necessarily creating anything
To paraphrase the first item in the HN guidelines:
"anything that gratifies one's intellectual curiosity."[0]
You can get to explore configurations and scenarios you might never be able to do at work. But it's not about work, it's about scratching an itch.
> The most egregious factor of this hobby is that you are often buying equipment that costs enterprise prices
If you know where to look you can pick up enterprisey kit cheap. It may not be the latest and greatest, but it's probably good enough to play with.
Just because you don't understand the attraction of this pastime (or any pastime) doesn't mean you get to judge or complain about folks who enjoy tinkering.
Once upon a time I built out a home lab to run a piece of software called Zebra to learn about BGP4 and CIDR. It was about scratching an itch. If energy prices in the UK weren't so utterly bonkers at the moment then I'd love to build out a new lab to mess about with some stuff. Again, to scratch a curiosity itch.
I have a home ESXi server which I use to host all my own stuff. I don't consider that a lab as such. It's just my home production infrastructure.
And then I have another ESXi which hosts the stuff I'm testing for work, and only work.
In my work home lab I have full access, which I don't have even in the testing environment at work due to fragmentation of responsibilities. Or if I do, I step on people's toes when trying out stuff that belongs to their realm. Most things are interconnected, so testing something at work usually gets slowed down by the need to involve other people who don't always have time or interest. That's where my own lab comes in.
It's also a lot faster due to not needing VPN and not having to deal with servers halfway across the world. And not having to deal with the evil SSL-MITM Internet proxy at work saves me SO much time. Of course that work needs to be done to put things in production, but a lot of stuff never makes it to that stage, and then figuring out how to smooth things over with the proxy would have been wasted effort.
A lab is a place to experiment. The person conducting an experiment does not need to be pushing the envelope of human knowledge to enjoy and benefit from the process.
When I was in college at the turn of the century, the rooms full of computers at university were called "labs". During the same timeframe, folks paying for a networking course+certification (eg Cisco's CCNA) would get access to a "lab" full of networking gear to play with.
I guess I'm struggling with why I would buy the same commercial grade equipment to play with at home when my profession should be fulfilling the needs of that skill set. Sounds like if you feel like you need to do so, you're more than likely unfulfilled in your current career.
That's why you buy the last generation of common enterprise components used when industries mass upgrade, which happens every few years.
I built mine in 2018 and I paid $65 each for two E5 2670 v3 ($1600 MSRP, 2014 CPU), and $300 for a dual socket motherboard with 128GB of ram. Yeah it's not the latest and greatest but it's been going strong since then.
Yeah, it's cheap, and it's still enterprise-level hardware, so it's more than capable of handling personal lab workloads (ESXi and Proxmox still fully support my hardware). It's been very valuable for me to learn how to stand up and manage similar environments, which is not something I would usually be expected to do in my profession, but it has been great for supplemental skills in my career.
Used, this shit is crazy cheap. I picked up a 16-core Xeon monster with hyperthreading and 128GB of RAM for under a grand, and the power bill for it is maybe $20 a month.
My "homelab" is only used for things that actually provide value to me. I host a Matrix server for chat, Home Assistant, a Minecraft server, plus a few hundred gigs of backups and photos/videos.
I don't really know if I qualify though. All that's on a 2011 Dell, a 2-bay Synology nas, and a UPS I found on Amazon. I have zero urge to go out and buy a rack; it works great as is.
Because you're experimenting.
What happens when I use this combination? What would it take to get this group of computers to do that task?
And learning also fits into this: like the computer version of a school chemistry laboratory. Setting up computers to perform tests for certification.
For what it's worth, I agree with you about the equipment, but I read on Reddit about a lot of /r/homelab folks getting equipment for work, so that would make sense.
I don't work in the field so for me personally it's more a question of wanting to experiment with software while not throwing away perfectly working (but old) computers.
You sound worked up. Most people who seriously invest like fiddling around and creating their own networking structure. It's a feature that they don't have to worry about "serving business needs"!
I'm not sure what the difference is between this and any other moderately expensive hobby.
It's the phrase "homelab", I suppose. Why not call it "Home IT" or "Home Data Center" or "Home Network Infrastructure". Calling it a home laboratory sounds like enthusiasts are trying to escape some preconceived notions about the topic. I digress...
You can actually do some crazy things with a homelab. I have a store-and-forward network joining roadwarrior machines to machines on my home network organized in a hub-spoke-ish model. It lets me access media from my home network even when I'm on some super oversubscribed AP (at 256 KB/s) at some crappy hotel with awful firewall rules (like only letting TCP traffic on 80 and 443 out) without opening any security holes on the network itself. I was able to experiment and deploy the network using the machines in my homelab.
As far as the name, I think it comes from the term "networking lab" that netops actually use to trial out network topologies before deploying them in the field. It's a home networking lab, that's all.
FYI LXD can do both LXC containers and KVM VMs so it can replace Proxmox in a lot of cases. LXD is packaged in Arch, Gentoo, Alpine, Void, NixOS, and probably some other distros.
I manage a network of a few hundred multi-layer switches, several routers, and a couple thousand wireless access points across a couple dozen campuses.
My home network is an Eero. The network engineer’s kids have no homelab, as it were.
Occasionally, I get the urge to build out a proper homelab but quickly realize I don’t want to come home and deal with the same stuff and put the Eero back in.
It’s not that I don’t like learning and experimenting, it’s just that I get to do that all day at work and I need some work/life balance.
But, I still love seeing these posts and commenters chiming in with their own configurations.
I like your observation. I think the flip side applies in some cases too: the people who don't get to run fancy network gear at part of $dayjob get some enjoyment running them for fun at home.
And indeed that’s how I got started, temporarily borrowing hardware from my workplace and cobbling it together in my cramped apartment: beige Cisco routers, switches and hubs, sun3 and sun4 hardware, a microVAX II, etc etc.
I luckily had a mentor that let me bend the rules and bring stuff home. If you are a mentor you should also bend the rules for noobs.
I'm in a similar boat as you, but it's across numerous various clients. HOWEVER, I REFUSE to use consumer-class equipment at home when possible. I spend A LOT more, but within some sense of reason, to buy similar business-class equipment at home. Why? Because consumer-class equipment often has bugs I can't easily resolve. On the other hand, business-class equipment has the debugging and logging capabilities that I need to quickly pinpoint and resolve an issue.
In the end, my network at home works better, has higher availability and, for the times that it doesn't, I can quickly resolve the issue and get back to the enjoyment of my home versus being frustrated as to why some consumer-class thing is not working yet again.
I never spent much time with networking, so a lot of it was a bit of a mystery to me, and I spent the COVID time building out a really awesome but overbuilt network.
I learned a ton and can now actually understand what's going on under the hood!
Since then though I definitely have simplified it a lot because it was a lot to manage but it was an excellent learning experience.
This is my long winded way of saying, network engineers don't get enough praise - that shit is actually really damn difficult to get right at scale!
Best part is debugging networking stuff is always really hard. Opening up a packet dump can help in some cases, but when you're trying to figure out why your nftables rule isn't registering a connection in the kernel connection table, you have to do some fun stuff to figure it out.
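For the conntrack case, the low-tech approach is to just watch the kernel's connection-tracking table while poking at the rule. A sketch, assuming the legacy procfs interface is exposed on your kernel (otherwise `conntrack -L` from conntrack-tools shows the same data):

    #!/usr/bin/env python3
    """Filter the kernel conntrack table, e.g. `sudo ./ct.py 192.168.1.50`."""
    import sys
    from pathlib import Path

    CONNTRACK = Path("/proc/net/nf_conntrack")

    if not CONNTRACK.exists():
        sys.exit("No /proc/net/nf_conntrack here; try `conntrack -L` instead.")

    needle = sys.argv[1] if len(sys.argv) > 1 else ""
    for line in CONNTRACK.read_text().splitlines():
        if needle in line:    # each line is one tracked flow with its state and timeouts
            print(line)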
In the end though, I got fed up with my consumer gear (Orbi mesh system) never being very reliable, so had the house wired for Cat 6 and shoved a UDM Pro, USW-Pro-48-Poe and 3 U6 Pro access points in and it's been far more stable and problem free than the Orbi, to the point where the kids have stopped telling me "Dad, the wifi sucks!" and needing to reboot the whole thing, which I had to do at least once a month previously.
I feel as I get older I have less and less desire to geek out at home, especially now with Covid/WFH. When I log off for the day I really don’t want to go back upstairs to my home office to do anything.
I used to have a FreeBSD server running pf and jails to partition out services. Now I've simplified everything with a tiny little Ubiquiti cube that I use an app on my phone to control.
I have a 3-node vSphere cluster with 36 cores, 300 GB of RAM, and a bunch of different disk options (magnetic, SSD, NVMe).
It is all powered by Ubiquiti networking gear.
VMware's VMUG program provides free and legal VMware licenses. I get to run vSAN, and on top of it I deploy Kubernetes, where I can decide whether a pod should use the lower-performance vSAN disks (like EBS on AWS) or the faster local NVMe drives.
These are just a couple of things I can build at home. Everything started as an experiment, and it grew to look like a replica of what we can have in a datacenter, but without an SLA.
First of all, all this stuff is extremely fun for me. Secondly, I won't go into details, but as a cloud consultant, this thing I started as a hobby became one of the big boosters of my career. I never expected such an effect, but I am extremely impressed by it.
- Network switches: if you want enterprise features, a low price, and higher stability, and you are willing to deal with painful configuration, check out MikroTik.
- The servers mentioned here consume quite a bit of electricity; for inspiration on building an efficient server, check out this Dutch server thread: https://gathering.tweakers.net/forum/list_messages/2096876 (use Google Translate). Note that the Framework mainboards also look quite nice as servers.
- Also check out the k8s-at-home template (https://github.com/k8s-at-home/template-cluster-k3s). It uses GitOps to set up services, which has proven very valuable to me when messing with configurations. I also built a search for Helm chart releases: https://whazor.github.io/k8s-at-home-search/
- If going cluster mode: having a NAS for storage is more stable than using Kubernetes storage solutions.
All in all, I learned a lot about servers and networking by doing this as a hobby.
Attitudes toward dust ingress in particular delight me. Look inside any old Dolch data acquisition unit or tire alignment computer from the 1980s and find minimal reason to be horrified.
To think a home environment would necessitate anything special is excessive.
If you think restaurants are bad, and they are, you should try machine shops. I used to maintain CNC milling machines; one was in a shop that milled EDM sinkers[1] out of solid blocks of graphite. Graphite dust is a fracking nightmare. I quickly lost count of how many servo amplifiers and PLCs I had to replace.
Am I the only one who just slaps a few VMs together on an unmanaged switch?
I don't need to tag anything, and the two VLANs I have are for the wifi and the house. One VM runs OpenWrt, another OpenMediaVault, and then I have Arch for my dev projects. PSU hanging out, sure, but all of that just sits on a fold-out table under the stairs and it's been rock solid for 3 years.
Don't get me wrong, it's nice to do this stuff and I'm not knocking people for spending so much time on it, but when I spend my day at work logging into systems and fixing broken junk, the last thing I want to do is come home to a dead network or a VM issue to solve.
Great to see a write-up about someone's experiences building a homelab.
I love the homelabs, and I recently built a new house and made some dedicated space for mine.
See, I have no interest in running large server hardware at home. I did it differently.
At first I had a stack of 2-3 old laptops running stuff. Now I just use four Raspberry Pi 4s with 8GB each. It lives in a cubby in my office desk. Low energy, low maintenance, and it works really well. Got rid of the laptops.
My network is the TP-Link Omada stuff, sitting on some shelves in our family room. Everything is wifi, and I run a second AP on the opposite side of the house, using MoCA 2.5 over the coax between the two rooms to connect them and improve performance.
I’m always sad that 10”/10.5” racks aren’t more common for this sort of thing. I have my “homelab” in one and it’s the perfect size for a home router, an 8 port switch, a Raspberry Pi, a Thinkcentre tiny, and a 1-bay NAS.
Basically nothing actually rack-mounts in 10.5” so I currently have things cable-tied to shelves, but I’m planning to 3d print some adaptors.
Yeah, though one of the benefits of a 19" standard rack is you can pick up old data center gear cheap. Including some exotic stuff (like the big blue Palo Alto Networks box I owned, until the novelty wore off).
For mixing small gear with it, there's a lot to be said for rackmount shelves, including with zip ties or velcro.
For people that really only want something small, I've seen people 3D print out of plastic, build out of wood, IKEA desk organizers and furniture...
This is what I did. My wife had an IKEA Besta media center cabinet and it's already mounted beneath our television. I just re-purposed it and put in a homemade NAS, router, switch, and some fanless mini-PCs as my server cluster. The downside is no ventilation, but I got out the jigsaw and cut holes in the cabinet and stuck USB fans with magnetic dust filters in them. Has worked fine for two years now. Upside is everything is small and quiet and I didn't need additional furniture to put servers in. You have to get Windows off the mini-PCs, but since Arch started bundling cloud-init with their installer media, I just put my configuration scripts on a USB stick with the cidata label on the disk, plug that in with the installer, hit the power button, and system installation up to configuring ssh so I can get in from my laptop happens unattended without me needing to hook up to a keyboard and monitor.
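For reference, cloud-init's NoCloud datasource only looks for two files on the `cidata`-labelled volume: `meta-data` and `user-data`. Something along these lines writes a minimal seed onto the mounted stick; the mount point, user name, key, and package list are placeholders, not my actual scripts.

    #!/usr/bin/env python3
    """Write a minimal cloud-init NoCloud seed (meta-data + user-data) to a stick."""
    import textwrap
    from pathlib import Path

    TARGET = Path("/mnt/cidata")   # wherever the `cidata`-labelled stick is mounted

    META_DATA = "instance-id: homelab-node-01\nlocal-hostname: node01\n"

    USER_DATA = textwrap.dedent("""\
        #cloud-config
        users:
          - name: me
            groups: [wheel]
            ssh_authorized_keys:
              - ssh-ed25519 AAAA... me@laptop
        packages:
          - openssh
        runcmd:
          - systemctl enable --now sshd
        """)

    TARGET.mkdir(parents=True, exist_ok=True)
    (TARGET / "meta-data").write_text(META_DATA)
    (TARGET / "user-data").write_text(USER_DATA)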
Back when my Bride and I were living in a one bedroom apartment, she looked at my jumble of computers around my desk and asked "why not just get one big box?". I was browsing/shopping at Lockheed's outlet store, and stumbled across a glorious Sun 180 server with a 19", 8' tall rack. Only $25! Who could resist! We picked it up and my Bride was horrified to discover the refrigerator sized machine next to my desk. It is still used today - though it is holding a bunch of servers in the basement rather than front and center in my office.
That's the "one big box" version of the 3/75 and pair of Sun shoeboxes that I once bought for home. (Bought used, from the local Sun office, which would occasionally email out of list old gear to sysadmins at customer sites, but for quite a lot more money than $25, unfortunately.)
Hopefully the 3/180 isn't as crazy-loud as the big old 4/390 that my employer was still using for email and some NFS.
The model showed 180, but the part was 3/280s-8r1, with all the upgrades! The actual hardware was dead - but it had all the panels and covers for the rack including the SCSI access. The original 180 chassis has housed everything from a pentium 90 to a 32-core threadripper. A fun bit of history. Hanging a mini-itx board in there was a bit comical for a few years, when folks were asking to see what was inside that monster.
For the UPS, I always err on the side of ones that are PFC-compatible, pure sine wave, line interactive. Partly because some of the gear I rack has PFC PSUs, but also because I don't want to be flirting with edge cases in other gear. I've owned 3 such rackmount UPSes, but it doesn't seem to be the default.
The two APC SMX1500RM2U rackmount units that I had at home seemed built very well.
(Currently have two Cyberpower rackmount units, which don't need as much rack depth in my living room, but I kinda expect something industrial-sounding like AMERICAN POWER CONVERSION to be more reliable than cheesy-sounding CYBERpower. :)
I used to swear by APC but they have really gone down hill in the last decade. The first rule of a UPS should be, don't burn down the house. So when buying a new unit this year I went with the brand whose consumer PFC-compatible model didn't have several reviews about it catching fire: Cyberpower.
I had a UPS fail last week, but I have two of them and my servers have dual PSUs plugged into each one. I don’t really need the uptime, but I’m mostly glad I was able to finally justify my unnecessarily high power bill to myself. :)
"Idle" is often measured in the hundreds of watts on enterprise/DC gear.
Newer stuff is sometimes a little better about ramping down at idle, but heck, some server fans will use more power than either of my desktop computers at idle.
Excellent taste in Linux distros, pedestal rackmount cabinets, and space heaters.
But everyone please be careful not to do much keyboarding work without your keyboard and monitor straight in front (as well as observing other usual ergonomics advice).
> As you know, trying out new things on production equipment never ends well.
That is not my experience. Most of the time it works just fine if researched well in advance.
The problem is just the impact for the few times it does not go well. That's why you have test environments. And test groups in production for the first rollout.
My home lab is more to try out the benefits of different options, especially as I work for a huge multinational. Admin rights are split among different teams and sometimes the others don't understand what I need. It's really helpful to have full access to my own lab so I can show them what I need from them and why it works best a certain way. Without access that's hard to figure out.
With rising energy costs I recently consolidated down to a pared-back homelab.
A 2U server with six 4TB drives (RAIDZ2, leaving 16TB of usable space) running on an ASRock Rack WSI 246 mini-ITX board, a Core i3-9100T (35W, can be configured as a 25W part) and 32GB of ECC DDR4.
It idles around 20W and runs at around 40-50W going full pelt with all disks spinning.
This is running UrBackup for the computers in the house to backup to, SeaFile for the phones, a Minecraft server for the kids, Plex, Wireguard (for remote access) and InfluxDB for collecting metrics.
Whilst playing with ex enterprise kit would be fun, the noise, and the power consumption, just isn't worth it to me.
I'm not a network guy, so apologies for the stupid question, but what is the patch panel for in this setup? It doesn't really look like it has a purpose from my view.
It looks nice but it's not necessary. Typically in a setup like this, where you have network equipment with ports on the front and other stuff with ports on the back, you'll want either a patch panel or a 1U brush-style passthrough to make the cable runs nicer.
For network devices around the house, for example PoE cameras, wifi access points, or wall plate jacks. The patch panel is a clean way to terminate those cables in the rack.
The patch panel is where your cable runs terminate. Since that part is more expensive to fix, you don't want to touch it. The only moving part is a small patch cable from your switch to the patch panel.
I made a rack out of some dumpster-dived supermarket shelves, lumber, a couple of truck air filters and a forced draft fan. The thing doubles as drying cabinet for produce (mint, mushrooms, fruit etc.) by having the equipment in the top half of the rack followed by an air flow divider and 8 rack-sized metal-mesh-covered drying frames. From top to bottom the thing contains:
* HP ProCurve 2910al-24G J9145A 24 port Gigabit switch (managed switch, €47)
* HP DL380G7 with 2xX5675 @3.07GHz, 128GB (ECC) RAM and 8x147GB SAS drives (€450)
* NetApp DS4243 (24x3.5” SAS array, currently populated with 24x650GB 15K SAS drives (4 as inactive spares), €400)
* the mentioned airflow divider
* 8 drying frames
It is managed through Proxmox on Debian and runs a host of services including a virtual router (OpenWRT), serving us here on the farm and the extended family spread over 2 countries. The server-mounted array is used as a boot drive and to host some container and VM images, the DS4243 array is configured as a JBOD running a mixture of LVM/mdadm managed arrays and stripe sets used as VM/container image and data storage. I chose mdadm over ZFS because of the greater flexibility it offers. The array in the DL380 is managed by the P410i array controller (i.e. hardware raid), I have 4 spare drives in storage to be used as replacements for failed drives.
The rack is about 1.65m high, it looks like this (here with the old D-Link switch (now deceased) and minus the DS4243 array which now sits just above the air flow divider):
In the not-too-distant future I’ll replace the 15K SAS drives with larger albeit slower (7.2K) SAS or SATA drives to get more space and (especially) less heat - those 15K drives run hot. After a warm summer I added an extra air intake + filter on the front side (not visible on the photos), facing the equipment. This is made possible by the fact that cooling air is pulled through the contraption from the underside instead of being blown in through the filter(s).
I chose this specific hardware - a fairly loaded DL380G7, the DS4243 - because these offered the best price/performance ratio when I got them (in 2018). Spare parts for these devices are cheap and easily available, I made sure to get a full complement of power supplies for both devices (2 for the DL380G7, 4 for the DS4243) although I’m only using half of these. I recently had to replace a power supply in the DL380 (€20) and two drives in the DS4243 (€20/piece), for the rest everything has been working fine for close to 4 years now.
On the question whether this much hardware is needed, well, that depends on what you want to do. If you just want to serve media files and have a shell host to log in to the answer is probably ‘no’, depending on the size of the library. Instead of using ‘enterprise class’ equipment you could try to build a system tailored to the home environment which prioritizes a reduction in power consumption and noise levels over redundancy and performance. You’ll probably end up spending about the same amount of money for hardware, a bit more in time and get a substantially lower performing system but you’d be rewarded by the lower noise levels and reduced power consumption. The latter can be offset by adding a few solar panels, the former by moving the rack to a less noise-sensitive location - the basement, the barn, etc.
As to having 19" rack equipment in the home I'd say this is feasible as long as you don't have to sit right next to the things. Even with the totally enclosed, forced-draft rack I made the thing does produce enough noise to make it hard to forget it is there.
After the stack has taken what it needs there are 35 solar panels left on the barn roof. Excess heat is used to heat the upper floor, given that it is well-insulated it needs no additional heating. Problem, solved.
Selling that excess power is possible, as is buying some extra panels - there is enough space for at least 36 more on just that single roof. For now I choose to add panels - and batteries in the not too distant future, once the market is flooded with used EV-batteries - and continue on my course of decentralising the 'net using 'old' hardware - not a single new product is used in any of my computing endeavours. Most of it is either dumpster-dived and repaired, bought 2nd hand or acquired through places like Freecycle.
I also like recycling electronics, but some stuff is just not worth keeping powered on. If you can sell electricity back, then your switch has an opportunity cost of ≈45 USD/year (assuming 11¢/kWh, which is the US average for 2021, and 5W idle for a replacement switch), which is almost 90% of its value.
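For anyone checking that math: only the difference between the old switch's idle draw and the replacement's matters. Assuming roughly 50W idle for the old enterprise switch (my guess, not a measurement):

    # Back-of-the-envelope check of the ~$45/year figure above.
    old_idle_w = 50            # assumed idle draw of the old enterprise switch
    new_idle_w = 5             # stated idle draw of the replacement
    usd_per_kwh = 0.11         # 2021 US average
    delta_kwh = (old_idle_w - new_idle_w) / 1000 * 24 * 365
    print(f"{delta_kwh:.0f} kWh/year -> ${delta_kwh * usd_per_kwh:.0f}/year")
    # ~394 kWh/year -> ~$43/year, roughly the ~$45 quoted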
This is really cool. I’ve been wanting to build my own lab, but I’m a bit worried about power consumption. If I do build something, then it would probably be something I power up and down as required.
That said, are there “Cloud” versions of this? Can I get my feet wet that way?
> I’ve been wanting to build my own lab, but I’m a bit worried about power consumption.
I’ve been concerned about this too. Initially I wasn’t worried, but in some arguments people have warned me that even a single rack can incur significant monthly bill increases. I have a lot of retro hardware too, which I imagine is less power efficient.
As someone who has never done this before, what's the benefit of setting up a server rack, compared with just using my desktop? Is it purely for the fun of tinkering with hardware? Is there something that's not doable on a desktop?
There isn’t anything that isn’t doable on a sufficiently sized desktop. IMHO a rack is more helpful when you start doing larger cable runs or networking gear that isn’t available as a VM. With a well sized desktop (or pedestal server more likely), you can recreate most homelab setups virtually.
You might use less power if you have separate low-power devices as opposed to a single (large) server though, so that’s one consideration. Another is what it is you hope to accomplish. If you want to test gear or hardware that you can’t virtualize, then a rack can be handy. But even things like pulling out hard drives from a running server can be simulated with a VM in some way.
But it’s easier to brag about having a rack than a really elaborate VM configuration.
My primary home server _is_ my old desktop with some extra drives in there. It can be useful to have a bit more stability for something that you want to be generally up and change less than your workstation setup, and that drives the need for the extra setup. Then I've got a pi for some low load services that I'd rather not be on the same host as anything internet accessible in case of a security vulnerability (things are containerised, but container escape exploits still feel like they come up relatively frequently).
The only thing I feel I'm missing over a real server would be an IPMI so I can boot it remotely in the event of a power loss. There are solutions to use an RPi for that which I plan to look into... some day.
A PC is ideal as long as there is only one. Once you start adding machines, it is time to move to a rack. Otherwise you end up with a giant ball of wires. Racks have rails like drawers, so the units are easy to service. That's not the case with a heap of ATX towers. Also, your "collection" remains self-contained and doesn't grow. Past some scale, you start to also make use of other rack-mountable accessories like UPSes, PDUs, patch panels and switches. In the past, you had to move pretty early because older consumer machines like the Pentium 4 could only do so much, so you needed many of them even for a basic setup like a LAMP server or a render/compile cluster.
It’s been years since I messed with “physical” networking stuff, but it wasn’t complicated. Complex, yes, but not complicated.
If you’re a “complete noob” and want to level up this skill, then learning what all the equipment listed does, why you’d want it, and how to configure it is an excellent way to go about it.
Just get a 12-core+ Ryzen and a bunch of RAM (64GB+). It'll be more efficient, with faster clocks, than a bunch of old servers, and more than capable of running three-node clusters.
Honestly, depending on what you're doing? Not even that.
* Don't underestimate the number of services you can have running in containers on a Raspberry Pi 4B with 8GB of RAM, sipping just 2.1W (yes, that little!) at idle over PoE or as low as 1.35W over WiFi powered by USB-C. With distributions like DietPi you can have a mere 10 processes at startup (using just 44MiB of RAM), but still all the flexibility in the world.
* Also don't forget you can cluster them. Want Kubernetes experience? Perfect opportunity to have a cluster that pulls at most like 17W under extreme loads.
* Have an old laptop? Fantastic. Great second life for it. You get the bonus of an integrated keyboard, UPS and display if and when you need it, even if it's normally headless.
* Don't forget Mac Minis and 1L form factors. I have a 1L form factor machine (weighing about 1.3kg) with an 8-core Zen 3 Ryzen 7 PRO APU that boosts to 4.6GHz, 64GB of memory, a 1TB SSD and 2.5GbE; it sips a mere 10W at idle and pulls maybe 45W at full bore. It's silent, takes up as much space as like 25 paper napkins, and I can pass the GPU cores through to VMs.
* Don't forget hypervisors as well. For truly ephemeral stuff you can just spin up virtual machines on your main desktop or laptop -- just stuff more RAM in it.
-- ----- -----
What's great is you can put the power sipping stuff basically ANYWHERE. It can be easily accessible, but entirely hidden from view. It makes no noise. It's not spewing heat or demanding cooling. There's no blinking lights. You're not driving up the electricity bill. Yet you can do very real work on them.
For cost? Raspberry Pis aside (sorry, use rpilocator!) check eBay or other marketplace completed listings to establish a price floor, and just be patient and only grab quality stuff that approaches the price floor. Nothing better than getting new (but open box) or barely used modern hardware for peanuts (60-70% off) because the seller is impatient or ignorant.
The Deskmini x300 is good. Takes everything from 2400g up to 4750g (or whatever it is). 2x nvme ssd and can fit 2x 2.5 drives. You can also buy barebones for ~$150 (everything but storage, RAM, and CPU).
Had x300 for quite a few years but have started playing games again, so decided to trade in for a desktop...but will probably go back if RDNA 2 is good/x300 gets updated for it (imo, the iGPU on the 4750g is terrible...RDNA 2 is way better). Great for running VMs on though, can take a Noctua NH-L9 so is quiet, far less space-intensive.
> * Don't underestimate the number of services you can have running in containers on a Raspberry Pi 4B with 8GB of RAM, sipping just 2.1W (yes, that little!) at idle over PoE or as low as 1.35W over WiFi powered by USB-C. With distributions like DietPi you can have a mere 10 processes at startup (using just 44MiB of RAM), but still all the flexibility in the world.
I've been disappointed in the general reliability, terrible IO, and heat/power of the RPI4. I'm getting rid of them in favor of second hand thin clients.
> Also don't forget you can cluster them. Want Kubernetes experience?
Just don't be thinking about mixing ARM and x86 in the same Kubernetes cluster. Like even if you technically can run a multi-arch kubernetes cluster, it's a major footgun. Docker's multi-architecture support leaves a lot to be desired and is a real pain to work with.
HP ProDesk 405 G8 Mini, or HP EliteDesk 805 G8 Mini.
The ProDesk 405 G8 can take a 35W RYZEN 7 PRO 5750GE. This includes AMD DASH (IPMI), and the Ryzen Pro has a few extra security features from EPYC/Threadripper. I got mine shipped for just $665 including tax, and did the SSD (1TB 980 Pro) and memory (64GB DDR4-3200 CL16) myself for another $300ish.
If you get three of them, you can have a pretty fantastic Spark cluster that consumes only about 100-120W! But yes, they do XCP-NG or Proxmox clusters extremely well.
The EliteDesk 805 G8 can have the 65W Ryzen 7 PRO 5750G, has a copper heatsink (instead of aluminum), has a 2nd NVMe slot, and can be shipped with a 2.5GbE port (you can order the part for the ProDesk 405 G8). Though if you get the 35W 5750GE, you can have an NVIDIA GeForce 1660Ti 6GB installed as well.
Since both use Ryzen PRO processors, they actually support ECC memory as well. I'd argue they may be the best overall 1L form factor systems on the market, especially in terms of performance per watt. The only other thing I'd ever want is 10GbE instead of 2.5GbE -- though in the EliteDesk 805, you COULD get an NVMe-to-10GbE Ethernet adapter and install it; you'd just need to make a faceplate to properly mount the port on the back of the chassis.
Did you get it yet? Multiple reports of 1-month-plus shipping delay notifications after the order is placed. I ordered one of these myself and canceled after the delay. Still plan on ordering one albeit with a different config.
Originally I opted for the 5700GE (apparently slightly faster than the 5750GE) thinking I wanted the most power possible in the box. But my new thinking is I really want a 5300GE to pull out and put in an AM4-based silent HTPC for the living room, then upgrade the HP with a separately purchased 5700GE. QuietPC seems to be the only vendor selling these "GE" 35W models -- but not any 53X0GE parts. In other words, buying one of these non-vendor-locked HP minis and ripping the CPU out might be the only way on Earth right now to get one's grubby hands on any 53X0GE.
Also, are you sure there are two M.2 slots on the EliteDesk G8's? I know they are on the G6 but haven't found confirmation the G8's continue the tradition.
Look at the TMM series at servethehome.com. They go over vendors, models, specs of the 1L segment as well as the new/used market for them.
It's generally a really good form factor for a small server. The only things most people sometimes wish they had in them are 2.5-10GbE instead of the stock 1GbE, and formal ECC support.
I went the "prosumer" route too. Threadripper maxed with 256G memory.
It runs ESXi, which auto-boots my "workstation" VM on startup. This VM has my GPU, NVMe, and USB passed through.
This way I can boot my workstation "on metal" if needed, but the VM works great. I then manage my lab infra running on the same machine via the vCenter HTML5 UI.
Why ESXi, just out of curiosity? It seems like most businesses I work with are ripping it out (some with glee) and going Acropolis/KubeVirt/QEMU or cloud.
* Best support for PCIe passthrough, easily toggle PCIe devices from UI without reboot.
* Best networking with virtual distributed switches - more intuitive to manage and use
* vSAN integration with k8s for block volumes is great, including volume snapshots. Also supports RWX (nfs) volumes
* The vSphere API is a first-class citizen with cloud native tooling such as Terraform, Ansible and Packer. It also has a great Go-based CLI via govc.
With all that said I actually still experiment with kvm (kubevirt, harvester etc) by running it nested inside esxi which works exactly how you would expect. Much prefer vSphere as the base of my lab though.
For really modern stuff sure, but a lot of enterprises are not using containers, they're still Hyper-V or VMware based. The only reason to go Hyper-V is because it's cheaper, so if you can use ESXi free... you choose ESXi.
ESXi remains one of the most ridiculously and stupidly stable platforms of all time. Someday someone will find an ESXi host with a twenty year uptime. Not because you should, but because it can.
Pretty much spot on, though the ceiling is a lot lower, at least in the US.
-- -----
A 12-core Ryzen 9 5900X is now $395 everywhere in the US, though a patient soul can get one new on eBay for $370-375 before sales tax (but including shipping), or $350 at best used.
DDR4-3200 CL16 from top brands is $3.50-$3.65/GB new, or under $250 for 64GB. Again, a patient soul can save more still and get it closer to $200. Want 32GB? Cut those prices in half.
So ~$530-650 new with sales tax and shipping, with a floor of maybe $455-575. For a tremendously capable processor and 32-64GB of memory.
I guess that's pretty cheap. I've also wondered about a Mac Mini running a bunch of Linux VM's. I've to this date never seen a piece of hardware as reliable as the Mac Mini, based on my own and anecdotal evidence.
It's time to update this kind of guide with at least 2.5GbE networking. I suspect most FTTH ISPs will start defaulting to 2 or even 5 Gbps soon (mine already offers these speeds in the PNW, but at premium costs).
There are very few consumer options at 2.5 / 5 GbE; you can usually pick up cheaper 10 GbE gear (but on older standards, not multi-gig compatible).
> It's time to update this kind of guide with at least 2.5GbE networking. I suspect most FTTH ISPs will start defaulting to 2 or even 5 Gbps soon
Indeed. I'm ordering my 8 Gbit/s "fiber to the home" router and it should come soon (45 EUR / month). The router comes with one SFP port doing 10 Gbit/s (!) while the four other ports are regular 1 Gbit/s ones.
The areas where this sort of thing is available are way too scarce to matter. I'm in a major metropolitan, highly competitive area, and 1.2Gbps non-symmetrical coax is the fastest option.
Most PCs where I work run through 100 Mbps switches.
I commiserate; up until last year I was in Comcast monopoly land as well, paying top dollar for the "1.2Gbps" plus $30 to remove the 1TB data cap. Luckily we got Ziply FiOS installed and I'm in heaven; I'm paying $60 ($80 after a year) for 1Gbps symmetric.
They also have a 2-Gig ($120/mo) and 5-Gig ($300/mo); I'm waiting for those to drop in price and I'll definitely jump on-board (most of my home network is on 2.5GbE with a few 1Gb segments).
Heh, yep. Where I live, Comcast is the only choice and they know it. I pay $60/mo to get 25Mb down. And pushing bits upstream is so slow that I have to schedule any "big" uploads to start around bedtime.
Here's my setup (pic down at the bottom): https://kiwiziti.com/~matt/wireguard/