
This is fun. I'm more of a minimalist with my homelab setup: a laptop and an old NAS. I love it all the same, since running a homelab is a nonsensical but fun hobby in any case.

I feel like we live in a world in which it's either racks or cheap VPSs. In reality, at home, we have some serious CPU horsepower just spinning billions of empty cycles per second. Consumer hardware is insane.

I've handled tens of thousands of unique visitors per minute, and more than a couple of front-page Reddit + Hacker News herds, on this little laptop through a residential ISP.

Here's my setup (pic down at the bottom): https://kiwiziti.com/~matt/wireguard/
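
The short version, if you don't want to read the whole guide: it's a plain WireGuard tunnel from a small data-center endpoint down to the laptop, with public traffic forwarded through it. A simplified sketch of the home side (the keys, addresses, and endpoint below are placeholders, not my real config):

    # /etc/wireguard/wg0.conf on the home laptop
    [Interface]
    Address = 10.0.0.2/24
    PrivateKey = <home-private-key>

    [Peer]
    # the public endpoint that forwards traffic down the tunnel
    PublicKey = <endpoint-public-key>
    Endpoint = vps.example.com:51820
    AllowedIPs = 10.0.0.1/32
    # keep the tunnel open through the residential NAT
    PersistentKeepalive = 25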




> running a homelab is a nonsensical

Depends. I get a lot of utility from mine: it manages my media collection for streaming at home and on the go. I've tried using SaaS alternatives and managed hosting of the apps I run, but those experiences were both lackluster and relatively expensive. And since the apps I run and my media collection aren't locked behind proprietary systems or limited APIs that might disappear, the amount of integration and automation possible makes for a very pleasant experience.

> In reality, at home, we have some serious CPU horsepower just spinning billions of empty cycles per second. Consumer hardware is insane.

I just add my old devices to my cluster and call it a day. Even ancient hardware is suitable, especially old laptops, which are optimized for power savings: even an old Core 2 laptop can idle at a low wattage, and its TDP when maxed out is less than a light bulb's.


A media server is not a "homelab."

One of the most irritating things about "homelabs" is that most people seem to think a "homelab" means "a rack of very expensive, way-overspec'd ubiquiti gear, an OTS NAS unit, and a docker container server running media server/torrent shit."

I have a laptop running a dozen different containers - bookstack, torrent client, rss reader, and so on. I don't think of it as a "homelab."


Given that a homelab is basically a garage for fucking around, arguing over what qualifies as fucking around is rather pointless.


Many places make a distinction between "home server" (e.g. media center, NAS, ...) and "homelab" (learning, test bed, and fucking around).

Some people are interested in only one of these, so the distinction is not pointless.


I have to respectfully push back here. A "hw/sw garage" is the perfect way to describe it. All of this takes place at one's home, and there is no arbiter but the individual over what is in or out of one's time there.

The distinction springs from the person themselves.

I think ClumsyPilot's way of putting it was just fine.


Sure, I never claimed that it is not "just fine".

I just said that the distinction is not "pointless" or "gatekeeping", as some people accused, but a pretty common convention, one which can be quite useful for readers. You are pushing back against something nobody said.


You're mistaking the consumerist urges a lot of people indulge in their hobbies for the idea of a homelab itself. A homelab can be overspec'd Ubiquiti gear, or it could be a RasPi running a bunch of services. It's just one or more servers that sit on your home network that you can fuck around with. Yeah, I guess something you care about keeping stable doesn't necessarily fit the "lab" distinction, but a lot of the time these stable things come out of experimenting in a homelab.

My "media server" (browser and downloaded media played via smplayer on a stock Ubuntu install) emerged from an experimental server running a lightweight distro that I used to do anything and everything from. Once I found that which parts of the media usecase fit into my partner and my lifestyle, it graduated to a stable decently-specced Ubuntu machine that is rarely touched other than for updates and downloading new content.


Why are you trying to gatekeep what is or is not a homelab? Just because it isn't sufficiently complex doesn't mean it doesn't fit the definition.


A home lab was generally seen as something to experiment with and learn something from. Setting up a Plex server is literally installing one piece of software on anything newer than a Core 2 Duo.


> A home lab was generally seen as something to experiment with and learn something from.

It looks to me like rolling your own media center is something to experiment with and learn from.


They mentioned one aspect of their setup and you're the one assuming that's all they have.

Not to mention even if it was just a Plex server, that still meets your criteria.


I started my own homelab like this, and I learned things from day 1.

It grew over time to serve many purposes, has seen many stopped or failed attempts, and has had many lives (i.e., recreated from scratch to learn X, Y, or Z).

There's no particular day, particular addition, or particular level of complexity at which it became a homelab; I've thought of it as one from day 1.


You don't need to spend money to have a homelab. You can do it all just fine using VMs on your personal device (laptop/desktop).


I think of a homelab as one or more servers (or a computer, laptop, etc.) located in a home to play around with software, virtualization, and hosting, both for testing and for actual functional (home) use. Basically, anything that's experimenting with technology (like in a real lab). Of course, the definition will be different for everyone :)


I would agree with this. While I run Plex and/or miniflux on, as he put it, an OTS NAS, I also use it to provision LUNs, or to test things that I may consider for work.

Here are a few of the deployments that originated in my home lab but ended up in a prod environment at work:

1. Replacing hardware load balancers with haproxy. (This started with a few options, including Nginx and some others, but haproxy, with its web management and CSV monitoring, gave me the best capability to integrate at work; there's a sketch of that after this list.)

2. VPN appliances for COVID. I was able to whip up 2-4 scalable VPN appliances based on OpenVPN in “1/2 a day” at work because I had fleshed out most of that at home.

3. Vulnerability scanners

4. HIDS security tools. In the end we went with an OTS vendor, but options like OSSEC, Wazuh, etc. were ruled out in the lab.

5. Ansible (over some of the other options)

6. Squid for reporting on the HIDS mentioned above.
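
(On point 1: the haproxy web management and CSV monitoring are just the built-in stats listener. A minimal sketch; the port and path here are arbitrary, and appending ;csv to the stats URI returns the same data as CSV.)

    # haproxy.cfg - stats page for web management,
    # /stats;csv for machine-readable monitoring
    listen stats
        bind *:8404
        mode http
        stats enable
        stats uri /stats
        stats refresh 10s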

There’s probably more. I know there is. And, point blank, a lot of this stuff had mock-ups done at home because I have full control and am not subject to auditors, etc. when evaluating things. When I do that at work, I have to do more work writing up justifications or change requests. It’s just easier at home.

All that said, I try to keep the house as flat and plain-Jane as I can.


I have a production hypervisor (HP EliteDesk 800 G3 Mini). This is where the things my spouse cares about run, in particular Home Assistant. I don't generally mess with this machine.

I also have a lab hypervisor (Dell T30). This is where I feel free to experiment with VMs and accidentally on purpose have to rebuild it every once in a while, take it down to swap out hardware, etc.


That depends on your thinking.

My "media server" consists of a web application, backend application, multiple *arr services, transcoding automations, fibre termination, user account management shared across multiple machines and services, multiple VLANs and LUNs, etc.

All of this is spread across 16RU or so, but it really only serves as a "media server".


Who said anything about the cluster being just a media server?


That’s not a “HomeLab”. You can create a Plex media server with a $249 Nvidia Shield and some low-cost USB hard drives.


Yeah the best part of the home lab hobby is gatekeeping because you spent more than someone else and you need to belittle them to justify how much you spent on your Juniper or whatever.


A Plex server, and containerizing all of the services that support it, is how I learned to use Docker before I became a professional. It’s now turned into a collection of containerized services all talking to each other, with dynamic DNS on cron, Pi-hole, WireGuard, HomeKit running all my appliances, and more… I don’t get it. Is it only a homelab if I use it to practice for network certification exams? There is some grass-fed, organic, grade-A-level gatekeeping in this thread… and it made me want to rant.


> Consumer hardware is insane.

With less-technical management, I've had repeated, bewildering conversations trying to get them to understand that the one "computer" sitting on my desk is many many times faster than the "server" our IT team provides. "But it's a server!"


I like to point out to people who haven’t worked it out for themselves that the load-balanced HA pair of EC2 instances with the multi-AZ RDS, which runs almost $200 a month at on-demand rates, has somewhat less computing power and storage than the phone in my pocket.


Many times faster doesn't mean shit if it takes up 4-6x more space than it needs to in what is likely the most expensive commercial real estate the company owns/leases.

Many times faster doesn't mean shit if it can't be remotely lights-out managed and its hardware monitored using standardized tools (or at all.)

Many times faster doesn't mean shit if it doesn't have redundant PSUs.

Many times faster doesn't mean shit if failed drives can't be hotswapped.

Also, the computer sitting on your desk is not "many many times faster" than a current, or even a few years old, server.

Etc.

If you want better hardware from IT, tell management to give them more money. IT is almost always viewed as a cost center and given a shoestring budget, yet asked to do, and be responsible for, the world.

You know how you're experienced from all your years as a programmer? Imagine IT people are the same, instead of assuming they're all idiots who are too stupid to go out and buy desktop computers instead of servers like your genius self.


> Also, the computer sitting on your desk is not "many many times faster" than a current, or even a few years old, server.

The server is a big pie. If you're buying a single slice, then yes, it's very very easy for a cheap old desktop to be way faster than a cheap VPS.

> Imagine IT people are the same, instead of assuming they're all idiots who are too stupid to go out and buy desktop computers instead of servers like your genius self.

It's the managers that are idiots. Not everything needs to run in a datacenter. Some things really are kittens and not cattle.


> Many times faster doesn't mean shit if it takes up 4-6x more space than it needs to in what is likely the most expensive commercial real estate the company owns/leases.

Unless you're hoping to monetize that spot on your desk, the real estate market means nothing in terms of cost.

> Many times faster doesn't mean shit if it can't be remotely lights-out managed and its hardware monitored using standardized tools (or at all.)

What stops you from "using standardized tools" on a box you own?

> Many times faster doesn't mean shit if it doesn't have redundant PSUs.

What leads you to believe that all those 9s are relevant, or even favorable compared with the cloud alternatives? In fact, I'm not sure that the latest rounds of outages at AWS allow it to claim more than 3 9s over the past year.

> Also, the computer sitting on your desk is not "many many times faster" than a current, or even a few years old, server.

Actually, it is.


So you are basically saying that 99% of the time it will work fine. Got it.

But seriously, they were comparing to the server they got, not the one you have or can provide.

It’s entirely reasonable for the IT team to provide a VPS that doesn’t have nearly that much power for an application that’s barely used. That doesn’t mean it’s easy to explain to management.


> Imagine IT people are the same, instead of assuming they're all idiots who are too stupid to go out and buy desktop computers instead of servers like your genius self.

Nearly all of your assumptions here are incorrect or flawed, except the redundant PSU (we only had one). But I do think they're just like me: working in a non-ideal environment with constraints outside of our direct control. The non-ideal constraint they had, in that instance, at that time, was that they could only give us a VPS with 4 threads. It wasn't possible to do what we needed with their server. Or, to put it in your language: five nines doesn't mean shit if, in practice, all it makes is a reliable space heater.


> running a homelab is a nonsensical

I think a lot of people build a homelab to learn about technologies and get real-world experience deploying them. That was what drove mine for a couple of years. Once you’ve mastered servers and networking and so on, it just becomes a fun hobby, I agree there. But someone who wants to get into networking and the like (and who lacks experience) is definitely going to need to practice with real or simulated networks to get good at it.


Deploying servers and networking is a lot more fun when you can just sit down at Visual Studio Code and write some HCL or YAML…

And yes I was around before the “cloud” was a thing. I first networked a Mac LC II and PowerMac 6100/60 to play with a Gopher server.


Looks better than my quick-and-dirty WireGuard setup for getting NAT Type: A behind CGNAT on game consoles - basically, put whatever is connected to the device in a "public DMZ", separate from your network: https://raw.githubusercontent.com/genewitch/opensource/maste...

WireGuard is both very frustrating and very cool. I'm currently using it similarly to give a VM a public IP, and I'm testing the details of getting multiple SSL/HTTPS hosts behind that single IP, which is something you couldn't easily do a decade ago without the host with the single IP holding all of the certificates and "MITM"ing the entire session.
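
One way to do it (sniproxy and nginx's stream module work too) is SNI-based passthrough: route on the server name in the TLS ClientHello without terminating TLS, so each host keeps its own certificates. A sketch of the idea with haproxy (the hostnames and backend addresses are made up):

    # haproxy.cfg - route TLS by SNI without terminating it
    frontend tls_in
        bind *:443
        mode tcp
        # wait for the ClientHello so the SNI is readable
        tcp-request inspect-delay 5s
        tcp-request content accept if { req.ssl_hello_type 1 }
        use_backend host_a if { req.ssl_sni -i a.example.com }
        use_backend host_b if { req.ssl_sni -i b.example.com }

    backend host_a
        mode tcp
        server a 10.0.0.2:443

    backend host_b
        mode tcp
        server b 10.0.0.3:443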

Speaking of "CPU horsepower" i just replaced a 1.5kW HP[0] server with a .2kW Ryzen 5950x "server" that is about 5% faster overall - don't forget that old stuff, while capable, adds to the electric bill, usually at a constant rate.

[0] The iLO (lights-out management) actually reported the server's power usage in BTU. It drew the same power as a portable space heater.


> Speaking of "CPU horsepower", I just replaced a 1.5kW HP[0] server with a 0.2kW Ryzen 5950X "server" that is about 5% faster overall - don't forget that old stuff, while capable, adds to the electric bill, usually at a constant rate.

What I’ve observed is that people on subs like r/homelab and r/sysadmin ridicule those who appreciate the horsepower available in modern consumer tech because of “no ECC memory” or the like. It makes me wonder: will people looking to build labs with the latest Ryzen or i7/i9 (really, I’m thinking of getting started by converting an old ThinkCentre with a 4th-gen i5 and 24GB of DDR3, possibly undervolting the CPU, into a pfSense router and some sort of server) really be missing out on some necessary enterprise feature?


Enterprise servers make no sense at home, but they are more fun to play with than old laptops. It's a hobby. After a while you appreciate buying good tools rather than making do, like any other hobby.

Old HPE servers have jumped in price though. Last year you could buy insanely powerful stuff for under $200 but it's all $400 and up for the same gear at the moment.


> “no ECC memory” or the like

ECC is about long-term stability and data integrity. For a router, meh, the network protocols will deal with any flips. For a file server or database, it's better if those random bit flips don't happen to critical data.

AMD-based systems can sometimes be forced to use ECC mode even if the BIOS doesn't officially support it.

ECC is more important in systems with very large RAM footprints because there's that much more memory for cosmic rays to corrupt. If you've got one or two sticks of RAM and you're not running vital business data, meh, it's not required.

I really like ECC. But I'm not really willing to pay a significant premium for it.

I run companion systems to production out of my house, mostly development lanes comparable to production deployments. If they're down, it's really not mission critical. I also run my home security/surveillance systems. The other significant systems are those related to my children's computer lab.


One argument I’ve heard, which seems valid enough, is that running a homelab setup without ECC memory is a good thing, because you then need to build stuff that can handle memory errors (or at least notice and learn from them), which you will occasionally see across your half a dozen or so homelab machines. Once you’re deploying production systems “at scale” at work, with hundreds (or thousands) of servers, you are guaranteed to see occasional memory errors (probably even with ECC).

One of my first real homelab setups was six Raspberry Pis, laid out as two load balancers, two web app servers, and two database servers. The “unreliability” of Pis running everything off SD cards was a _feature_, because it gave me lots of experience with how my (work) software stack, running on our typical AWS design, held up in the face of failures.


I was running Wok + Kimchi + Gingerbase on the HP; I'm now using Proxmox instead. Short of having lights-out (out-of-band) management built in, I haven't noticed much difference between the platforms as "a server" versus "a desktop". Make no mistake, the 5950X is a monster chip, but it's still a desktop with too few PCIe lanes for me to consider it "a server" - luckily I only need enough PCIe for an old GPU and extra SATA ports. If I were building out stuff to do more research, I'd want more PCIe lanes than the desktop Ryzen platform supports.


On that note, Ryzen does support ECC.

It is not validated, so YMMV, but it works and is seen by Linux, for me at least.
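
If you want to verify it's actually active rather than silently running non-ECC, a couple of standard checks on Linux:

    # DMI: ECC DIMMs report a total width wider than the data width
    # (72 vs 64 bits) and an "Error Correction Type" other than None
    sudo dmidecode -t memory | grep -i -e 'error correction' -e width

    # EDAC: a populated mc0 here means the kernel's ECC driver is active
    ls /sys/devices/system/edac/mc/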


I recently bought an old HP server thinking it would be fun to play with. It turned out that it was loud and power-hungry, and for most things my needs could be served just as well by an old laptop. I ended up giving the server to a friend (who has their electric bill included in their rent).


> In reality, at home, we have some serious CPU horsepower just spinning billions of empty cycles per second. Consumer hardware is insane.

Yeah. My search engine is basically a beefy gaming PC with no graphics card and 128 GB of RAM. Not only have I gotten the supposed HN death hug multiple times, I've had the search engine see multiple queries per second without so much as flinching. It took Elon Musk tweeting a link to one of my blog entries before I started getting more traffic than my computer could handle and it began dropping a few connections.

Modern consumer PCs are ridiculously powerful if you make good use of the hardware.


Very cool! Running the not-well-maintained https://hndex.org search engine (and other memory-hungry, linear-algebra-based services) was also my original motivation for tunneling to my home rather than hosting on a VPS.

Are you hosted via a residential ISP? It's my hunch that peering agreements favor routes of consumer -> data center -> consumer as opposed to consumer -> consumer. That's mainly why I tunnel. Has that been your experience?


Yeah, it's on residential broadband. Haven't really had much trouble, to be honest. Though I'm based in Sweden and we have fairly robust networking infrastructure all around; I guess that may be a factor.

I was hit by a botnet after my first big HN hug, so right now at least the search engine goes visitor -> Cloudflare -> server, but if anything that's just added a bunch of ping.

I'm also doing crawling and so on on the same network and it's really not bad for that either. Granted my crawls are fairly slow and very far from saturating any connections.


Nice! Have you considered publishing your crawl data?


> In reality, at home, we have some serious CPU horsepower just spinning billions of empty cycles per second.

To be fair, they're not spinning every single piece of the CPU.

My desktop can play games from 2009-ish, but at idle it clocks down to like 30 watts.

It could play games from 2011 if I put the GPU back in, but the GPU's idle power draw is ridiculous...


Thanks for the guide. Congrats on the wedding!


Mine's an old laptop and a new NAS.



