* In San Francisco, good old MonkeyBrains offers an off-the-menu, In'n'Out-style coloc deal: https://www.monkeybrains.net/colocation.php Fun! If you're thinking of doing this, say so here, maybe we can all share an amp or two.
* If you just want to play with something that isn't an Amazon instance, and are not sure you want to commit to a >$50 a month server somewhere, https://lowendbox.com/ always has a few crazy deals.
* Never underestimate just slinging a box under your desk at the end of a consumer internet connection. I think people sometimes overestimate how difficult it is; I honestly think everyone should do it once, if only to get that visceral feel of being a peer on the global Internet.
(I typically offer VPSes, but just added some colo options for my hosting services because of this thread, as you can probably tell by the package names I chose. The coupon only applies to colocation.)
Thanks for offering us a discount. Can I ask why your cabinet and the Mission facility mentioned in the parent comment have just a single 10Gbps fiber connection shared between colo servers? I can get 5Gbps at home and probably 10 if I asked nicely; I thought in SF it would be straightforward to get multiples of that.
In case there's some confusion - I'm not with MonkeyBrains. I hope my comment didn't read like I was. (And just FYI, I know people who host with MonkeyBrains, and they have no complaints.) I'm just (self-interestedly) providing an alternative across the Bay.
I can't speak for them, but my service is on a 10Gbps line because that covers the needs of my clients at the moment. With a network provider at a data center, you'll typically have a good Service Level Agreement, and throughput will actually match what you're paying for. Looking at the home fiber offering from one of the big players just now, they tell you up front, "Actual speeds may vary."
Maybe my information is out of date, but traditionally home internet connections were heavily over-subscribed: a "10 Gbps" ISP would transfer at that speed for short bursts, but their business model relies on you averaging <100 Mbps over the course of a week. That's still enough for you to watch 24 screen-hours of 4K video per day, but the reason residential bandwidth was 99% cheaper than commercial bandwidth was that if you routinely used more than 1% they'd cut you off, throttle you, or apply traffic shaping.
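A quick back-of-the-envelope check of that oversubscription math; the ~25 Mbps per 4K stream figure below is an assumption about typical streaming bitrates, not something from the comment above:

```python
# Rough check of the oversubscription claim: advertised rate vs. the
# average the ISP's business model assumes, expressed in 4K screen-hours.
# Assumption: one compressed 4K stream is roughly 25 Mbps.
ADVERTISED_MBPS = 10_000       # the "10 Gbps" headline figure
BUDGETED_AVERAGE_MBPS = 100    # what the ISP assumes you average
STREAM_4K_MBPS = 25            # assumed bitrate of a single 4K stream

oversubscription = ADVERTISED_MBPS / BUDGETED_AVERAGE_MBPS
simultaneous_streams = BUDGETED_AVERAGE_MBPS / STREAM_4K_MBPS
screen_hours_per_day = simultaneous_streams * 24

print(f"Effective oversubscription: {oversubscription:.0f}x")
print(f"Sustained 4K streams within the budget: {simultaneous_streams:.0f}")
print(f"4K screen-hours per day: {screen_hours_per_day:.0f}")
# ~100x, 4 streams, 96 screen-hours/day; the "24 screen-hours" above
# corresponds to assuming a heavier ~100 Mbps per stream.
```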
teruakohatu asked why a colocation facility would have a mere 10Gbps link when 5Gbps residential links are so affordable. If you tried to run a colocation facility on a residential link, it would spend most of the time severely throttled.
I should have clarified I am a New Zealander. We get multi-gig connections to home that are not over-subscribed, but I would be very surprised if we could saturate that over the trans-Pacific cables.
I live in Alameda and colo in San Jose and would much prefer the shorter drive, even though I never visit my server... but... this webpage isn't a real product offering, is it? Like, you made it in response to the topic post (right?)
I lol'ed regardless. Bravo, and... I may look you up if OpenColo keeps irritating me. They have been surfing that line just below "irritated enough to move". They recently rug-pulled my whole IP block and forced a bunch of reconfig on me I wasn't down for. I'm still salty. Quadranet never did that to me in ~13 years with them. Neither did AWS, come to that.
Clearly my business/branding sense and my sense of humor are off... it is a real product offering, and I'll honor the offer and the coupon if you sign up. Typically I provide website and VPS hosting; I just hadn't advertised colo services publicly, so I threw the product page together. I'll change the product names eventually when I come up with something I like.
CGNAT needs to be routed around, which means you need at least a "$5 VPS" with a routable IP address to bastion for you. If you monitor the word "wifi" on social media it's a pretty decent proxy for where CGNAT is prevalent. Used in a sentence, "we just got wifi at the new house."
I've written a couple of guides on this and subjects like it. I've had to use Squid in a datacenter as a forward proxy for my lightning detector at the house, because otherwise it would never receive a heartbeat, even though it would auth and start trying to send data; I've bypassed CGNAT so I could host console "co-op" games (Diablo III, for example); and I've "mirrored" my phone media sync to a colocated server as well, just to avoid trying to route around (triple?) CGNAT to my Synology at home.
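If you're not sure whether you're behind CGNAT, one rough check is whether the WAN address your router gets sits in the RFC 6598 shared space (100.64.0.0/10) or otherwise differs from your public IP. A minimal sketch, assuming ifconfig.me is reachable and using a made-up WAN address you'd read off your router's status page:

```python
# Rough CGNAT check: is the address my ISP hands my router actually
# routable, or does it sit in the RFC 6598 shared space / RFC 1918 space?
# Assumption: ifconfig.me is reachable and returns your public IP as text.
import ipaddress
import urllib.request

CGNAT_BLOCK = ipaddress.ip_network("100.64.0.0/10")  # RFC 6598 shared space

def looks_like_cgnat(wan_ip: str) -> bool:
    addr = ipaddress.ip_address(wan_ip)
    return addr in CGNAT_BLOCK or addr.is_private

# The WAN address shown on your router's status page (hypothetical example).
router_wan_ip = "100.71.12.34"

public_ip = urllib.request.urlopen("https://ifconfig.me/ip", timeout=10).read().decode().strip()

if looks_like_cgnat(router_wan_ip) or router_wan_ip != public_ip:
    print(f"WAN {router_wan_ip} vs public {public_ip}: probably behind CGNAT/NAT444")
else:
    print(f"WAN {router_wan_ip} matches public {public_ip}: you likely have a routable address")
```

It's only a heuristic: some ISPs hand out plain RFC 1918 addresses on the WAN side instead, which the is_private check catches.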
CGNAT is a scourge, but at least around me, it doesn't seem to be the norm for residential wired internet.
5G, yes; Starlink, I think so; geosynchronous satellite, I don't know, but probably. T-Mobile 5G fixed-location internet does offer static IPs if you have an SMB account with an EIN, but not for home accounts or SMB accounts with an SSN.
> at least around me, it doesn't seem to be the norm for residential wired internet.
It used to be rare in France, too, but now it seems to be spreading. AFAIK SFR, one of the major ISPs, doesn't even offer the option (paid or otherwise) to avoid it.
Edit: they do provide what looks like a fixed IPv6, though.
SFR will switch you from CGNAT to a public IPv4 address if you contact customer support and explain that you work from home and need to open specific ports to access your work servers or VPN. I did just this a month ago. It takes less than 24 hours and you should be OK on IPv4.
> It used to be rare in France, too, but now it seems to be spreading. AFAIK SFR, one of the major ISPs, doesn't even offer the option (paid or otherwise) to avoid it.
They realised they can sell the IPv4 blocks for far more, and most consumers don't know any better or care. At least I keep my IPv4 address if I keep the connection up (the DHCP lease time is ignored; it is 4 hours despite DHCP saying 24). Asking for an IPv6 address is like shouting into the void and waiting for an echo which never comes.
Note for the unfamiliar that ISP (CG) NAT is not at all universal, nor always mandatory when initially present; my past several carriers all issued public IPv4 and public IPv6 /64 or greater that were accessible if I set the appropriate settings. Caveats apply; e.g. the AT&T BGW320 routers seem to cap at 8192 connections even when in bridging mode. Such are the tradeoffs of buying residential rather than business service. (The proxies described elsewhere in this thread can be a cost-effective counter to this and other such concerns, e.g. dynamic IP assignments versus DynDNS update intervals.)
Odd, I never heard of this. So I might be sharing a public IPv4 address with a neighbor?
I run a server out of my bedroom with a TON of services on it. I used to use Google Domains + DDNS + an nginx reverse proxy to manage it, and I still use that for two services that weren't playing nice with Cloudflare (Jellyfin + TubeArchivist), but now I mostly use Cloudflare Tunnels. For those two services, I just pray my IP address doesn't change. For some reason, it doesn't.
But I never knew I might be sharing that IP address. I've never had routing issues; does that mean I'm lucky? Is this, like, a major security vulnerability I've exposed users (just friends) of my hosted services to? Maybe because I'm in Taiwan our ISPs just operate a lot differently? For example, not a single complaint at terabytes up/down every month.
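For the "pray my IP address doesn't change" part, a tiny watcher that polls a public-IP service and reacts on change is a cheap safety net. A minimal sketch, again assuming ifconfig.me; update_dns() is a placeholder for whatever your DNS provider's update API looks like:

```python
# Minimal dynamic-DNS-style watcher: poll the public IP and react on change.
# Assumptions: ifconfig.me is reachable; update_dns() is a stub you would
# replace with your DNS provider's API call (Cloudflare, deSEC, etc.).
import time
import urllib.request

def current_public_ip() -> str:
    return urllib.request.urlopen("https://ifconfig.me/ip", timeout=10).read().decode().strip()

def update_dns(new_ip: str) -> None:
    # Placeholder: call your DNS provider's update API here.
    print(f"would update DNS A record to {new_ip}")

last_ip = None
while True:
    try:
        ip = current_public_ip()
        if ip != last_ip:
            print(f"public IP changed: {last_ip} -> {ip}")
            update_dns(ip)
            last_ip = ip
    except OSError as exc:
        print(f"lookup failed, will retry: {exc}")
    time.sleep(300)  # check every 5 minutes
```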
If you've got Chunghwa/HiNet, you most likely don't suffer from CG-NAT. They're the most expensive offer because they're the best offer.
If you're with those shitty resellers, you'll 99% get CG-NATed. You pay 50% of the CHT price, but you get 25% of the quality. You see this in large apartment buildings: they take Chunghwa fiber and resell it, NATing the entire building into the 192.168.0.0/16 block to save on costs, because IPv4 is $$$. Forget about IPv6 support. The bandwidth they have is oversold, so the 400M you bought might not actually reach those speeds during peak hours, unless you're lucky enough to be living in a new building where more than half the apartments are vacant.
I have some experience with the latter. Their support staff is often utterly incompetent, too.
This sounds like the solution the provider offered when they commercialised my student internet connection. A building with over 1k connections had 4 IPv4 addresses and no proper CGNAT. This means you get niceties like:
* Everyone gets their neighbor's favourite language in the search engine if you open an incognito session.
* Only about 32k sessions are possible per address. And yes, you can claim them all and kill everyone's internet connection.
* All local IPs were interconnected and not firewalled. That was fun.
* You get banned or soft-throttled everywhere due to "strange behaviour".
They were doing it on the cheap and fortunately got told it would void their contract if they did not provide something better, and it was "fixed" after about a year. But during rush hour you would still end up maxing the 4Gbit fiber uplink, even though everyone only had a Fast Ethernet connection.
Plenty of alternatives exist - https://github.com/anderspitman/awesome-tunneling. My issue with Funnel is that it includes no auth, exposing you to anyone in the world. I will advocate for zrok.io as I work on its parent project, OpenZiti. zrok is open source and has a free SaaS (as well as auth and hardening in general).
ipv4 space and bandwidth cost non-zero dollars so I'd ask "who is paying and why?"
I understand that a significant portion of HN is on fiber or whatever, but not everyone has access to fiber, cable, or even, dare I speak it, DSL. CGNAT was mentioned as being prevalent in the UK and Germany almost 6 years ago (for example). I haven't had a non-CGNAT ISP the entire time I've lived in Louisiana, and not for lack of trying (AT&T, T-Mobile, AT&T fixed wireless (x2!), Starlink). And I'm not paying >$5/month, or switching to a business plan or whatever, just to solve the problem of "but what if I want to self-host a public service"; that's ridiculous.
edit: also, I have colo hardware; I write guides for people who either don't or don't know how. Cool if Tailscale works. WireGuard does too...
Tailscale runs over WireGuard. I'm pretty sure they just subsidise their free plan using earnings from their paid plans. Getting a foot in the door and all that. If devs use it at home, they might vouch for it at work.
Tailscale is essentially a free-to-home-users, paid-for-enterprise very usable frontend to Wireguard. They really make it easy. The free service is paid for by the enterprise users.
That's a pretty much orthogonal issue if you compare dedicated servers to, say, AWS EC2. There aren't that many tasks you have to perform on bare metal that you don't have to do in EC2.
PaaS such as Heroku or Vercel, that's another story.
Colocation, I don't know... Not to question people's preferences, but I think at that point I'd rather look for a decent fiber connection for my home and let that Raspberry Pi run in my own cupboard. I mean, it's a RASPBERRY PI, you would probably do fine even without the fiber connection.
It depends what it is. I have 25/25 fiber at home for practically nothing (~70 USD a month), but I can only go so far even with a UPS. If I lose power for too long or my internet goes down, I have no backup, which I would have at a co-location facility.
If those circumstances occur, what would you be serving that can't afford to go offline for a couple days?
It's important to have an answer to that question, rather than to assume that being offline when your home internet is down is inherently a problem. You can safely estimate using "X nines will cost X digits per month":
1 nine, 36.5 days/year downtime, is $#/month (open-wifi tier).
2 nines, 3.65 days/year downtime, is $##/month (residential tier).
4 nines, ~1 hour/year downtime, is $####/month (datacenter tier).
5 nines, ~5 minutes/year downtime, is $#####/month (carrier tier).
Speaking from experience, it's important both to decide which 'nines' you require before you invest in making things more resilient, and to be able to say things to yourself like, for example, "I don't care if it's down 4 days per year, so I won't spend more than $##/month on hosting it".
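The 'nines' arithmetic is easy to reproduce: N nines means a downtime fraction of 10^-N of the year. A quick sketch that prints the same table; the cost column is just the comment's rule of thumb, not real pricing:

```python
# Downtime budget per year for N nines of availability, plus the rough
# "N nines costs N digits per month" rule of thumb from the comment above.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines in range(1, 6):
    downtime_min = MINUTES_PER_YEAR * (10 ** -nines)
    if downtime_min >= 24 * 60:
        budget = f"{downtime_min / (24 * 60):.1f} days"
    elif downtime_min >= 60:
        budget = f"{downtime_min / 60:.1f} hours"
    else:
        budget = f"{downtime_min:.1f} minutes"
    rough_cost = "$" + "#" * nines + "/month"
    print(f"{nines} nine(s): ~{budget} downtime/year, roughly {rough_cost}")
```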
What does 25/25 mean here? Gbps feels high for the home, but Mbps feels insanely expensive at that rate. (And also I didn't know they did fiber that slow, it's really only available in 1 Gbps and sometimes 500 Mbps here)
I don't see why 25Gbps symmetric would be so surprising. My current ISP Ziply Fiber offers 100Mbps, 300Mbps, 1Gbps, 2Gbps, 5Gbps, 10Gbps, and 50Gbps (all of them symmetric) in most of their service areas. I’m sure there are other providers with similar offerings, in some parts of the country. My previous ISP, Sonic.net, offers speeds up to 10Gbps. The reported price is pretty nice though.
Damn, that sounds nice, like that fiber in Switzerland linked in the other comment.
Though the small cost is probably overshadowed by the large infra costs at home. Now you need a 25Gbps router, together with the rest of the topology like QSFP+ switches, and then computers with >= 25Gbps NICs to actually make use of it. And then all the appropriate cooling for it. It's starting to sound a lot like a home data center :P
You can get 25G switches/routers for not much nowadays; check Mikrotik. Throw a couple of Intel NICs from eBay into your machines' PCIe ports and really it's not that big a deal.
It's always a surprise to me how expensive internet access can be in the US. Here in France a 1Gb/700Mb fiber connection costs 30€/month (and this is without commitment, and includes TV stuff, "more than 180 channels" whatever that means, and a landline phone).
The EU invested pretty heavily into making sure even very remote parts of Europe, like northern Finland, have great Internet. I was very pleasantly surprised when I was able to work from home at the in-laws'!
Because of these new subsidised fiber deployments, it's not uncommon anymore for rural/semi-rural areas to have better connectivity than urban or suburban areas, which is a bit awkward.
Internet speeds and prices are all over the place in the US. I pay $60 per month for 1Gb symmetric fiber (which really performs at 1.2, yay me) at my house and $60 per month for 500/30 cable internet at my rental. Two different areas three postal codes apart with different vendors, prices, and products (even when the vendor is available in both).
The way we sliced up space for utilities (lots of legal shared monopolies/guided capitalism) and their desire to build the last mile in their area leads to many different prices and products within a walkable distance. Before that 500/30 service showed up the best we had was unreliable 200/15 from another provider.
And it varies widely. I pay $170 a month for 30Mbps down and 15 up, lmao, and I have 2 options to choose from who have the exact same service for the exact same price. Telecom in the US is beyond horrifyingly bad.
These days I'm less excited about residential fiber deployments, as they are more often than not some passive optical setup, which is worlds apart from the proper active fiber that you'd get in a DC or on a dedicated business line. For example, standard 10G-PON is asymmetric shared 10G down/2.5G up (10G-EPON is even worse, 10G/1G asymmetric), with up to a 128-way split. That means that with your fancy fiber, in the worst case you might get barely 20 Mbps of upload capacity.
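The worst-case figure is just the shared upstream divided by the split ratio. A quick check using the PON variants mentioned in this sub-thread (the 128-way split is the standard's maximum; real deployments usually split 32 or 64 ways):

```python
# Worst-case per-subscriber bandwidth on a PON tree: shared capacity / split ratio.
PON_VARIANTS = {
    "XG-PON (10G/2.5G)": (10_000, 2_500),          # Mbps down, Mbps up
    "10G-EPON asymmetric (10G/1G)": (10_000, 1_000),
    "XGS-PON (10G/10G)": (10_000, 10_000),
}
SPLIT = 128  # maximum split ratio mentioned above; 32 or 64 is more typical

for name, (down, up) in PON_VARIANTS.items():
    print(f"{name}: worst case {down / SPLIT:.1f} Mbps down / {up / SPLIT:.1f} Mbps up per subscriber")
# XG-PON at a full 128-way split: ~78 Mbps down, ~19.5 Mbps up,
# which is where the "barely 20 Mbps upload" figure comes from.
```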
IME most new residential fiber deployments in the US are using XGS-PON which provides 10 Gbps in both directions. Typically ISPs don't put the maximum number of clients in a node that the standard allows. I've heard 32 is a common number in practice.
Obviously it'd still be a bad idea to run a high traffic server on a residential connection, but as long as you're not streaming 4K video 24/7 or something you'll probably be OK.
Here proper fiber is the norm. That doesn't mean it's not oversubscribed to the next hop, though; typical oversubscription is 30x. It would be insanely expensive if they didn't do it.
> in the worst case you might get barely 20 Mbps upload capacity
"in the worst case" being the key point, and frankly, 20 Mbps doesn't actually sound too bad as the theoretical minimum.
In practice you're unlikely to hit situations where this is a problem even if everyone was hosting their blog/homelab/SaaS/etc.
This is only a problem (and your ISP will end up giving you hell for it) if you're hosting a media service and are maxing out the uplink 24/7. For most services (even actual SaaS) it's unlikely to be the case.
PSA: if you're having RPi SD-card-corruption issues, get a higher-amperage power supply (the official power supplies work well). Low-voltage warnings are a telltale sign.
SD cards are indeed a really bad deal when it comes to reliability, especially if, like me, you tend to slam Raspberry Pis everywhere almost as a reflex: before you know it, you end up with a large-ish fleet of the things in your house.
But: Raspberry Pis these days work 100% fine with SSDs, and while a small SSD is not yet as cheap as an SD card, it's not far off.
I have entirely stopped using SD cards for my Raspis for quite a long while now.
I also had a good experience setting up Pis with read-only root filesystems. All data needs to be sent off-device (or at least onto external storage), but it wasn't too tricky and should avoid the usual SD-card issues.
My last experience with colocation hosting was that the monthly fee alone was way more than a dedicated server from the same host. It was confusing, and I'd really like to hear whether anyone else has had a similar experience.
Colo is for racks. It's for businesses. If you just need a single server, you'll be better served by a dedicated server from Hetzner, OVH, etc. The only exception I can think of is GPUs. If you have your own GPUs and a legitimate use for them, colo pricing may beat server rentals.
This blog post mentions parking an old switch with a derpy little Raspberry Pi.
Also, what makes colocation something for businesses is cost. The likes of Hetzner also sell colocation, and nowadays you can buy a used rack for around $100.
Moreover, today's COTS computers are not like your parents'. A mini PC selling nowadays for peanuts has gigabit Ethernet, 16GB of RAM, and half a dozen or so cores. You'd have paid a small fortune for servers with those specs in the early 2000s, and nowadays that hardware is used to check email.
> This blog post mentions parking an old switch with a derpy little raspberry pi.
This blogger has weird preferences and money to burn on them. Doesn't mean it's a sensible way to do things if your main aim isn't reminiscing about being a '90s-'00s sysadmin.
> This blogger has weird preferences and money to burn on them.
I don't know. Spending $100/month on colocation costs is hardly a tophat-wearing level of expense. I recall reading from the old Reddit-to-Lemmy migration discussions that some self-hosting instances were costing that much on AWS, and they are still up.
For perspective, we're in an internet forum where from time to time we get posts of users spending thousands on their home labs.
My read of that was that the Raspberry Pi was mostly there for bootstrapping, so that they could drop servers into the rack and have something there already they knew they'd be able to get onto the local network with, not that they were buying colo just for a Raspberry Pi.
> (...) not that they were buying colo just for a Raspberry Pi.
I don't know. Might be, might be not. All I know is that there are already companies that are even selling colocation specifically for Raspberry Pis. It's not that weird, and not a step too far away from colocating Mac mini instances.
Please be aware that this doesn't include electricity and cooling, which are really expensive in Germany. They charge 0.476€/kWh. Running a single 4090 at 450W 24/7 would add another ~150€/month.
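For anyone who wants that power math spelled out (using an average month of ~730 hours):

```python
# Colo power surcharge for a single GPU running flat out, at the rate quoted above.
RATE_EUR_PER_KWH = 0.476
GPU_WATTS = 450
HOURS_PER_MONTH = 24 * 365.25 / 12   # about 730.5 hours in an average month

kwh_per_month = GPU_WATTS / 1000 * HOURS_PER_MONTH
cost = kwh_per_month * RATE_EUR_PER_KWH
print(f"{kwh_per_month:.0f} kWh/month -> about {cost:.0f} EUR/month")
# => roughly 329 kWh and ~156 EUR/month, i.e. the ~150 EUR figure above.
```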
Absolutely my experience as well... Plus your vendor options are reduced, since the location has to be somewhere within driving distance for you or your "caretaker", so that you can replace that flatlined drive with a new one without significant downtime.
Where I am in the market, the managed dedicated server is 10+ years old, so there's no capital cost there. More upscale, the host may have volume discounts with the OEM that you won't get, unless you've got other business (maybe you buy a lot of laptops for your staff, so you can get servers for less as well)
It's got to be cheaper to have me as a customer when I never show up in person and don't even know precisely where the server is. For a colo, they need to provide me with access, supervise me when I visit, and put up locking cabinets (and make sure they aren't all keyed alike!); they'd also need to be able to meter power, etc.
There are definitely reasons to colo instead as a customer, but usually it's lots of hardware, or specialized hardware, or sometimes networking needs that the host won't accommodate with a dedicated server; maybe you want to run an ASN and have direct connections to other peers / transit / an IX.
As a data point, for two 1U servers co-located in Australia I'm paying about AU$500 per month.
If these two servers were low spec, then that would be more expensive than just renting dedicated servers.
But I can put whatever spec servers fit in those two rack units, including things with quite a lot of ram, quite a few ssds, 10GbE+ network links, etc.
Doing the same thing as that with dedicated servers would be quite a bit more than the AU$500/mo I'm currently spending.
---
As a data point for anyone interested, those two servers are running Proxmox and host a bunch of (Linux) VMs that provide online services. Live migration (etc) between them works fine, etc. :)
If you need something close to the full compute a 1U can offer, then you can definitely get value for money from ordinary colo like that. But you’ll get a lot more value if you need a full or half rack. But if you just need a single VPS it’s not going to add up.
I'm not sure how much power is included. They asked for the specs for the servers, which we provided, and they gave us the above quote which we took them up on.
That price also includes some extra stuff. An IPv4 /28, and dedicated VPN for the IPMI/iDRAC ports on the two servers.
Each 1U server seems to idle at just over 100W, though when they get busy they ramp up to several times that.
Location seems to matter. Back when I was using colocation in Docklands (London area where all the main hosting happens) it was very expensive. ISTR 2U was hundreds per month. I'm guessing Rachel is not paying that for her Raspberry Pi, but maybe not hosting in such a central area.
Yes, that's my biggest "let down" with this writeup: at least in my corner of the world (Paris), colo starts at 300€/month for something reasonable, or at the very least 100€ for low-power 1U (so you can host... a switch). That's as much as rent!
On the other hand you can get a nice enough dedicated server for 15€/month.
So yeah, if I'm missing something please tell me, because I'd love to do it, but I can't justify the price for hobby stuff.
Co-location has never scaled down to 1U very well because of the overheads: you can't just trust 40+ customers to slide their 1U servers into the same rack without security concerns and/or losing lots of space, and there's extra work with power distribution and networking that the customers themselves are responsible for with a full rack.
Just the access arrangements for 40x more customers add up in admin overhead, and colocation isn't really a tech play as much as it's a real-estate play, similar to parking, where the goal is minimum-effort rent extraction with as little staff as possible...
Regarding the security aspect, as for what a customer can do: you can bring your device and put it in a shared rack while support personnel accompany you. Power, Ethernet, and (if a keyboard/screen is needed) console access will be managed, connected, and wired only by support staff.
You can rent a whole rack if you want dedicated access, and 1/2 or 1/4 racks are available if full size is not needed.
The point was that the price for 1U will tend not to be very competitive with renting dedicated servers (I can rent a server, hosting included, for that price), because the overheads to the provider of subdividing that 42U rack add up. The point wasn't that the security can't be dealt with, but that dealing with it is one more thing that contributes to a higher cost per U if you rent 1U than if you rent a quarter rack or more.
A 1U can very easily contain 64+ physical cores, many TB of storage, and a few hundred GB of RAM. A 1U colo can be a great deal if you’re looking to use that much compute/storage.
The admin for those arrangements is pretty simple really. Even if you’re providing supervised access, it’s not going to be much work. I run several small colo deployments like this, and I probably only visit the sites every couple of years.
If you only need one VPS, then you potentially only need a tiny fraction of 1U worth of compute/storage. That’s not a sensible colo use case.
From the DC perspective, the biggest costs for providing colo are power, AC (which is mostly power), network and real estate. Supervising rack access is a very small line item in their accounts.
Supervising rack access and/or using physical barriers was one of a list of reasons why the cost per U is so different if you buy 1U rather than a full rack. It may not be the most significant one, but it is there.
As for power and network, they're often charged separately, and you will still find the 1U vs. full-rack difference there. Sure, you can to some extent assume a slightly lower load factor for customers that rent a full rack, and that may contribute too.
But the point remains: The person above me should not be surprised that renting space by the 1U slot is expensive.
It depends what you do with it. If you only need a small VM to manage your email, nothing will beat a VPS. But I also use mine for offsite backup, so I need a bit of storage. When you add dozens of TB, dedicated quickly becomes way more expensive, particularly given that I tend to park old hard drives there whose cost I have already amortised.
If you buy your drives smartly that's amortised after 8 months. When you are looking at the cost over 5 years, it is hard to beat colocation for specialised hardware.
I'm ignoring upfront cost here, and going based on the premise of 100€-300€ as the monthly fee for colocation. You can't amortize your way out of that.
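To make the disagreement concrete, the comparison is roughly "colo fee plus hardware you already own" versus "renting a box with equivalent disks" over the same period. A sketch with deliberately made-up placeholder numbers; plug in real quotes before drawing any conclusion:

```python
# Hypothetical 5-year cost comparison: colocating your own storage box
# vs. renting a dedicated server with similar disks. All numbers are
# made-up placeholders, not quotes from anyone in this thread.
MONTHS = 60

colo_fee_per_month = 150          # hypothetical colo slot, power included
owned_hardware_upfront = 2000     # hypothetical chassis + drives you already bought
rented_dedicated_per_month = 220  # hypothetical rental with comparable storage

colo_total = colo_fee_per_month * MONTHS + owned_hardware_upfront
rental_total = rented_dedicated_per_month * MONTHS

print(f"Colo over {MONTHS} months:   {colo_total} EUR")
print(f"Rental over {MONTHS} months: {rental_total} EUR")
print(f"Break-even rental price: {colo_total / MONTHS:.0f} EUR/month")
```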
> My last experience with colocation hosting was that just the monthly fee was way more than a dedicated server on the same host.
Cheap dedicated servers are usually very old and buggy hardware. With colocation, you can actually install some very good server hardware (pick up some good refurbished stuff rather than buying new).
Sounds like simple economies of scale? Same reason why, for many things, buying brand new can be cheaper than trying to get defects repaired. (I'd assume colo is more labor-intensive for the host.)
>just the monthly fee was way more than a dedicated server on the same host
What kind of "dedicated server"? When you colocate, you have full control over the choice of hardware, meaning you can stuff 200TB worth of hard drives and 512GB of RAM into a machine and pay a few hundred dollars max for it. Good luck finding someone willing to rent out a dedicated server with those specs for the same price as the colocation.
Never underestimate the co-location potential of employees' personal homes, with their own HVAC, PV with storage, and FTTH bandwidth: with good compensation to them, it might be cheaper and better supervised than choosing a third party.
Ladies and gentlemen, it's about time to show the real purpose of homelabs and get paid for ownership instead of paying third parties; it's about time to use modern logistics to work in a decentralized world instead of insisting on the old mainframe model renamed "cloud".
Imagine this world, and how any remote worker in their own home could be a single-person company in a thriving society, instead of a flat-plus-one-vertex society where most own nothing and smile like Canon workers in China: https://www.theverge.com/2021/6/17/22538160/ai-camera-smile-...
You're making me think: a communal Kubernetes cluster. You set up your nodes at home and share them with some people to make a cluster. The main problem would be securing secrets and data, or you'd have to limit yourself to things you don't mind being public.
No need to imitate the giants, meaning there is no reason to use k8s. We have declarative systems like NixOS and Guix System; there is no need for paravirtualization, which was born for commercial reasons: selling ready-made stuff, to those who do not know better, at a price you couldn't charge for a snippet of Nix/Scheme.
A company can simply develop its own decentralized infra, hosted by its employees. If it needs more, it will also have the money to buy sheds with PV and storage, and with good enough fiber links, to serve as its own small datacenters.
I was thinking more of a cooperative operation, so people could totally join and quit on a whim, and I think k8s would help with automatically moving your app to wherever resources are available.
I'm not saying it's not possible, just that, historically, every "community of good citizens" never lasts long nor performs well. That's beside the fact that containers are an absurd model, as full-stack virtualization was before them.
If you really try to draw the big picture, we actually waste immense computing resources just on the needs of commercial evolution. To go "autonomous" we need to avoid copying the bad commercial model and introduce a better one. Declarative systems are MUCH better, with much less attack surface, easier development, documented evolution, and so on. They are not "a FLOSS version of $giantname's solution"; they are another paradigm.
Similarly, we should avoid working on VDI, which is the giants' way (say, Amazon WorkSpaces, Windows 365, etc.), so not Guacamole either, but bare desktop computing: syncing just files, using logistics to ship the iron, and using configuration management to manage it remotely. We should avoid Nextcloud and other clones of the giants' web apps and instead work locally with different tools: say, use R/Python rather than a spreadsheet, use email with a modern MUA (which we currently lack, for non-techies) rather than a web app, and so on. Not trying to mimic someone else's paradigm, but proposing a new one better fit for our purpose.
Let's say I want to set up a blog, portfolio, or fun experimental site, but my internet connection is not perfect. I'm not alone with this kind of problem: we should be able to pool our resources and have those websites delivered by whoever has a working connection to the computer requesting access.
I think it could be done with k8s, but maybe not. And websites are the easy case, as they don't require a permanent connection between "server" and client; with things like IRC it would start getting fun.
You contact $WeHostWebsites, a company that does what the name implies. They have no datacenter; instead they are 10 people offering the service (some ready-made static site generators with a portfolio of themes, an option for e-commerce, etc.), hosting their infra in their own homes. You pay them a few $currency per year and you have your hosted website. At a certain point they need more iron but not many more people, so one of the owners buys a shed beside their own home and adds the needed iron, another owner does the same...
If one day their business goes bad, they still own the iron; they can sell it as a service to others who need computing power for a short period of time, or they can reinvent themselves as something else, because they still have something tangible behind them, not a rental of someone else's resources. That's the model.
People seem to be scared of owning servers. But my experience has been that, generally speaking, they're very reliable. If it's important you would always want duplicates, fallbacks, etc., but it's just the same with cloud services: multi-region, multi-cloud.
I am fairly sure remote working and hybrid working are here to stay.
Which means the commercial real estate market is going to shift with more and more “co-working” spaces being created.
One thing I wonder is whether they will simply turn one office room into a colo space: it's easier to drive to and simpler to manage. Because cloud-first is a rubbish solution for most people; just have a server with a TB of disk and most companies can live on that, happily running away in the corner.
And there might be growth for companies that have expertise in the basics of racks and HVAC, just to run that room for them.
I'm in the process of finding where to put $100,000 of 4090s for training non-LLM models.
It looks like "my garage" is cheaper by far even when I include installing an HVAC system and 3 phase power when I look at yearly costs for a spot with 20kW's of power.
You can just train a vision model to detect people, then check the detected people against a whitelist. If you're not on the whitelist, it sends a HEAD request to the shooter microservice, which deals with the rest of the intruder's lifecycle.
The issue is it's pretty hard to do because we have high standards when it comes to putting humans in living spaces.
This isn't true though for computers. It's trivial (ish...) to turn an office floor into a datacenter. I know because that's what the company I worked for did. It's nice, because our servers are literally an elevator ride away.
Interesting, their definition of managed hosting appears to be quite different from my experience working at a managed hosting provider.
We explicitly didn't grant customers "root" access unless they requested it. This made it easier both for us (less cleanup when a customer mucks up the setup) and for some customers (no easy way to mess up your installation). Most people just wanted a managed deployment of their PHP app, a webshop, or WordPress, and this worked well.
I've been doing colocation in the UK for my personal hosting for 15+ years now, and it has worked fantastically well for me.
There are three things I'd recommend though that the articles kind of gloss over:
* get reasonable OOB access as a priority
* get real server hardware
* set up backup on day 1
OOB access can be as simple as a serial console server that your colo provider lets you access (most modern servers still have DE9 serial ports!), but ideally you want server hardware with an IPMI. It is fantastic for saving you from the 'oh no, I just firewalled myself out' problem (my colo provider is about an hour away, I don't want to do that trek on my weekends!), and with a proper IPMI you can boot from an ISO file over USB to do things like OS upgrades/reinstalls/recovery or firmware updates with ease. I personally have a Mikrotik router doing WireGuard for me to connect to for remote access to secure the IPMI behind a VPN.
Real server hardware saves you a ton of time & money in the long run for less upfront cost than you think. There's a lot of options - HP/Dell for more traditional choices (incl on site servicing options), Supermicro are well established, and there are more recent challengers that are putting out reasonable hardware like ASRock Rack. You should expect to buy this in a B2B fashion - you'll probably talk to/email a salesperson, but I've found I get ridiculously better pricing this way than buying online. As a bonus, you get an IPMI, ECC RAM that works, hot-swap drive trays, etc. In my experience, this kind of hardware really rarely fails - it's designed for 24x7 usage.
There are lots of backup options. You might push to something like Backblaze B2, use someone like BorgBase, or push backups to home. I have a home server with a bunch of hard disks, so I do a mix: I use borgbackup and push my backups daily to both my home server and to BorgBase (a minimal sketch of this follows after this comment).
My personal colo right now is an ASRock Rack EPYC 4004 barebone with a 12 core 24 thread CPU, 128G of RAM, 2x 2TB NVMe SSD, 2x 4TB SATA SSD, another 2x 2TB SATA SSD. I use quite a bit of the storage, and I use it to self-host a bunch of useful things - eg GitLab, CI, email, containers via Docker, side projects, the odd game server for playing things with friends. Adding up what those would cost me to host individually via VMs, AWS, GitHub subscriptions, etc makes paying for colocation an option I'm very happy with!
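A minimal sketch of the daily borg push described above, assuming borgbackup is installed, both repositories already exist, and BORG_PASSPHRASE comes from the environment; the paths and repo URLs are placeholders:

```python
# Daily borg push to two repositories, roughly as described above.
# Assumptions: borgbackup is installed, both repos are initialised, and
# BORG_PASSPHRASE is provided via the environment. Paths/URLs are placeholders.
import datetime
import subprocess

SOURCES = ["/etc", "/home", "/var/lib/gitlab"]      # what to back up (example paths)
REPOS = [
    "ssh://backup@home-server/./borg/colo",         # home server (placeholder)
    "ssh://xxxx@xxxx.repo.borgbase.com/./repo",     # BorgBase repo (placeholder)
]

stamp = datetime.date.today().isoformat()
for repo in REPOS:
    # Create today's archive with compression and print a summary.
    subprocess.run(["borg", "create", "--stats", "--compression", "zstd",
                    f"{repo}::daily-{stamp}", *SOURCES], check=True)
    # Keep a rolling window of daily/weekly/monthly archives.
    subprocess.run(["borg", "prune", "--keep-daily", "7", "--keep-weekly", "4",
                    "--keep-monthly", "6", repo], check=True)
```

Run it from cron or a systemd timer once a day and you get the "push daily to both" setup with retention handled for you.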
I've said this to any of my engy friends who will listen. (not many) There's a small, yet tangible benefit to having a blank hardware slate to just riff on an idea when it strikes you, and having ample compute/storage/bandwidth to play with the idea. Or to send yourself down a spaghetti sysadmin nightmare you get to untangle. :)
It's been a consistent part of my career since 2007 and I'd hate to not have it. Homelab is great, but spinning up your own VMs, architectures, all of it... is a superset of that same satisfaction.
I've also used M247 in the past, but had terrible experiences with them since they were acquired by private equity; network and support quality went really downhill.
For servers, I use https://serverfactory.com/ and https://www.rackservers.com/. As is typical for this kind of thing ignore the online pricing and email in - you'll usually do rather better, but they'll want you to pay by bank transfer.
edit: being outside of London means power is a lot cheaper, but I get about an extra 2ms of latency to my server. I consider that a reasonable tradeoff :)
In similar news, inexpensive internet connections and old workstations are much faster and more reliable than entry-level cloud crap that's constantly shifting like sand underneath you.
A $400-$600 workstation PC and a 400/40 connection with static IPs worked great for our first couple years of development.
Why are colocation providers reluctant to have clear pricing on their websites? Most of them have some form of "Contact us for a quote" link. I get their target market is B2B, but the additional friction probably prevents many colo-curious homelab operators from making the jump.
The weird thing is that some do publish the price of 1U/1 amp along with half racks and full racks, and nothing in between; not even a formula to estimate costs if one doesn't neatly fall into one of those buckets. The 1U pricing suggests that some do care about 1U colocation, while simultaneously limiting price information to quote requests for a 2U or an 800W 1U. It's strange.
What a coincidence, I spun up my first VM on DigitalOcean just yesterday for $4 a month. It felt... freeing after a few years of working with the Big Three for various things, big and small.
Still a step above schlepping your own hardware, but it was still the cheapest Debian VM I could find by about a factor of 2. (Admittedly, I didn't look too hard, let me know if you know a better one.)
The cheapest VPS on 1blu costs 9€ and is about the level of a 48€ DigitalOcean droplet, with infinite traffic. (On my VPS I get a couple hundred Mb/s download speed.)
Tradeoff: It's hosted in Germany only and the website is pretty bad.
I've been considering changing the SSH port, just for that extra bit of security. As for fail2ban -1, I just like seeing the Banned IP list slowly engulf the screen over time ;)
We are seriously considering moving downwards from a dedicated server to some kind of BYOD/co-location.
1) Our server is at least 5 years old but we are still paying a high monthly charge. We have probably paid back 5 times its cost already (I know this includes power/air-con etc.).
2) Even a recent quote for another 256GB of RAM came in at $1200 to install and $300 PER MONTH!
3) We find that the support is not quite good enough for anything other than the basics. We had a network performance issue that we insisted was something in their infra and they denied it; it was only after I used Wireshark to prove the problem that they finally found it. YMMV of course.
4) We run some test hypervisors, and they only support Windows + Hyper-V (or VMware on their "cloud"). I don't mind Windows, except the monthly Windows updates are very slow and tedious and require VMs to be shut down.
5) We are getting performance issues because, ultimately, each hypervisor has one disk (maybe RAID1) with 10 VMs all writing to it. This seems to have affected throughput, particularly on build agents, which write a lot of stuff to disk.
On the plus side:
1) We get 24 hour support for major outages, whether caused by them or us. They obviously can't fix anything we have broken but we at least have someone to call if the network seems to have gone down, for example.
2) We can have additional equipment run up reasonably quickly without having to purchase it or go to site.
If we move to co-location, we can probably get an amazing server with 4 disks and 1024GB of RAM for less than $10K, run Xen or Proxmox on it, and get much better performance, although whether they would charge much less is the question. I don't know how many co-lo providers want people with only 1 or 2 servers; I guess they prefer the big corps who want to move 10 racks' worth of stuff there.
I’m always interested in a cheap Colo somewhere. Anywhere in continental US is fine. I’ll ship my stuff in, fly there, and set it up and try to make it work without repeat trips. Anyone have any ideas?
One of the crazier ideas was that I just buy a home in Utah that's rural-ish, to be cheap enough but also near fiber. Hard, but there are some options.
WebHostingTalk, LowEndBox, and LowEndTalk should have some options for you.
Anywhere with a major internet exchange is likely to have lots of options. If you don't know where those are, you can just look for cities with Equinix locations. Equinix isn't cheap, but some will have a warehouse on the cheap side of town with fiber to Equinix and you can rent a cabinet from them.
While we are on the subject of non-clown hosting, with others suggesting smaller, alternative hosting options:
Is there a hoster that offers hard pricing caps or maybe prepaid plans? The background is that I would like to give my kid a way to play around without worrying about the bill, even if that could mean being blocked for the remaining part of a month after allocated resources are exhausted.
We are using Neocities at the moment, which is great (thanks @kyledrake) but we are outgrowing it, especially since at some point you need a real backend.
When I learned web development, I self-hosted and could use university resources, but today I find it hard to find a risk-free environment for experimentation.
Hosting what? A VM? Managed hardware? Hard pricing is definitely an option, as traffic is not metered and one can pay in advance for x months and terminate the service after that.
A VM would do it, but what service has unmetered traffic? Don't they all have fine print where they reserve the right to charge you if you overextend your allowance?
To be clear: I want to pay for our traffic and don't want to free ride, but I want the failure mode to be "we cut you off" and not "we'll send you the bill later (possibly much later)".
We really do have free traffic. The ToS of course applies, and we will warn and cut off but not bill; we don't have a mechanism to bill for traffic anyway.
Thank you very much for the education; the grassroots approach works. To be honest, there are a lot of things nowadays that cover up easier, cheaper, and sometimes better ways that used to be normal. Younger people, for example, no longer know they can buy their own music or movies. And sometimes they send things from one computer to another using Dropbox.
I've pretty much always had a Raspberry Pi running a few services in my basement. I never really considered moving it to someone else's property. Interesting to hear it's an option.
The real value-add from Lambda is integration with the rest of the AWS ecosystem. It doesn't even scale up without breaking the bank, but it does scale down to zero, and it can process events from EventBridge, DDB, S3, SQS, SNS, and every other AWS service under the sun. It's really good at that.
If you want to run an API, slap a Go executable on an EC2 or ECS instance.
I'm a little less grizzled, but I'm so glad I never have to deal with shuffling around broken hardware anymore, much less diagnosing the most inane, obscure bugs due to the hardware slowly crapping out. I've lost so much sleep to shit hardware and unstable, untrustworthy colos. APIs and automation rather than cage monkeys, IPMI, expect and crossed fingers. The Cloud is my church and I've seen the light, hallelujah.
While colocation offers greater control and potentially lower costs for high-performance needs, it's worth noting that for many small to medium-sized operations, cloud services can still be more cost-effective when factoring in the total cost of ownership (TCO). This includes not just hardware costs, but also the time and expertise required for maintenance, upgrades, and troubleshooting.
Another one of the primary advantages of cloud services is the ease of scaling. In a colocation environment, scaling up often means purchasing new hardware and physically installing it. How do you address the scalability needs of rapidly growing applications or services in a colocation setup?
While that may be true, it's not relevant in the context of the article. As the final paragraphs indicate, this is about telling people that datacenters exist at all (yes, people don't realize this!) and allow anyone to sign up and host stuff:
> If this is old hat to you, great! It means you're probably a grizzled 1990s sysadmin just like me, consarn it! This isn't for you, then.
> This is for the newer folks who might not have realized that there's an alternative to paying tribute to one of the three churches of the Clown: M, G, or A. If you want to "get your stuff online", there are other ways... and there always have been!
If you don't realize that self-service and managed-tier datacenters exist, you can't properly investigate, calculate, and compare TCO — and certainly the cloud providers aren't going to bring up datacenters unless they have to.
SendGrid ran our own data centers and scaled massively. Yes, orders to Dell took forever and we often worried about whether we were going to have enough compute. It worked out to be vastly cheaper than AWS, but we still moved load into their cloud to help new projects spin up and handle burst activity. And moving over to managed services eventually became our default, not because it was monetarily cheaper, but because it let teams develop faster.
I'd add that cloud services are dramatically cheaper when your application is able to tolerate preemption. Spot pricing in AWS is around $0.015/vcpu/hr.
That's still going to be more expensive than a dedicated server, but it's much closer. If you have cyclical traffic patterns and other elastic workloads, the gap can be almost non-existent, even without considering personnel.
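At that rate the monthly math is straightforward; here's a quick comparison against a flat-priced dedicated box, where the dedicated price is a made-up placeholder for whatever you'd actually pay:

```python
# Rough monthly cost of spot capacity at the rate quoted above,
# vs. a hypothetical dedicated server price for comparison.
SPOT_PER_VCPU_HOUR = 0.015          # from the comment above
HOURS_PER_MONTH = 730
VCPUS = 8

spot_monthly = SPOT_PER_VCPU_HOUR * HOURS_PER_MONTH * VCPUS
dedicated_monthly = 60.0            # hypothetical 8-core dedicated server price

print(f"8 vCPUs of spot, 24/7: ~${spot_monthly:.0f}/month")
print(f"Hypothetical dedicated box: ${dedicated_monthly:.0f}/month")
print(f"Spot only running 40% of the time: ~${spot_monthly * 0.4:.0f}/month")
```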