Hacker News

Exactly. I'm constantly surprised how many companies carelessly upload all their internal documents to Google Docs, S3, Dropbox, GitHub, Slack, et al. I'd suggest that anything that's not supposed to be public never be uploaded to such services, especially if you are not from the US. The way to go, even for small tech companies, is self-hosting, except maybe for email, due to Gmail marking your emails as spam from what I've heard.



My company self hosts everything. They’re so bad at it. Something is always down and we waste so much time with our shoddy tooling. We even had this amazing idea that we could implement our own version of GCP from scratch. The result is dismal, we dread using it because it’s very unreliable, and it costs double or triple what GCP costs. We’re not a small company, we have about 1k employees and our business is software.


I've come to appreciate the value of a really good, dedicated sysadmin. I used to think I was pretty good at it and that was fine, but I've come to realize that the skillset for an awesome sysadmin is quite a bit different than a developer, even though there is some overlap. And there is definitely a range.


>quite a bit different than a developer

Exactly! I'm a sysadmin (sometimes a dev), but sysadmin is what I'm really good at. The most important thing is to see a solution from another perspective.


As a developer I completely agree. I have some knowledge of system administration, but I much prefer working with a dedicated professional where possible. It's partly just a different set of expertise, but I think it's partly also why it's best to have dedicated people doing QA work. The attitude required to do the work is completely different to dev work, and it's hard for one person to wear both hats.


Seconded. A sysadmin dedicates their professional life to deeply understanding systems, networks, and their interconnection. I am lucky if I can keep up with changes to my languages and frameworks as a programmer! Thank goodness there are sysadmins out there who can help us poor programmers out when our relatively basic knowledge of Linux fails us.


>Thank goodness there are sysadmins out there who can help us poor programmers out when our relatively basic knowledge of Linux fails us

Sometimes, and sometimes we have to ask the developer of that exact OS-subsystem/driver/firmware(AARGH!!) because it fails us too...and here we have a closed circle ;)


Unfortunately the market doesn't appreciate systems operations skills as much as software development, despite those skills being rarer and having a wider organizational impact across more industry verticals. Software developer salaries trend 20-30% higher at every single career stage than sysadmin/ops salaries, and at the top there are usually management-adjacent engineering-track stages at larger firms for software developers, like "principal", "distinguished", and "architect", which are not open to operations folks.

I was in ops for 13 years, and if you were to talk to any of my former coworkers they would bury you with praise for the quality of my work. Yet I eventually chose to move into a management track, because my career had peaked about 7 years in and I didn't realize it until later. There was no way up, because I wasn't a software developer. Now I'm a manager with technology understanding, which has a high value prop for many orgs all on its own, but I do sometimes miss "getting my hands dirty".

I've worked with a lot of software developers over the years, and while there are a handful who are really incredible, the majority of people are just mediocre. That's expected and okay. The same is true for Ops folks, as it happens, although generally it takes more competence to rise to "Senior" on the Ops side vs software. The thing is, "Senior" is as high as it goes for Ops folks. So you might meet really stellar Ops folks who are effectively titled and paid the same as a mediocre developer with 3 years of work experience. It's simply not sustainable, and the push towards moving everything to the cloud and off-premise is probably a symptom of this (not enough quality Ops folks to keep things on-prem) and exacerbates this (reducing need for quality Ops folks, driving down market demand, unless you want to work at a cloud provider).

Pretty much all of the other Ops people I've respected and admired over the years have moved into different career paths. I find the same is not true for software developers. So when younger people ask me about career paths, I always recommend software over Ops, if they are adamant they never want to go into management.

It's kind of sad, I suppose, but that's the way of it. I appreciate that there's a subthread on HN where folks recognize and respect the value of competent Ops folks, but I think you'll find that most are being pushed out of that career path.


Probably because of places like Netflix and Google, where everyone is a software engineer; they just happen to have different titles. If the industry wanted to counter the move to the cloud, raising the salaries of truly competent ops people would be a thing.


A tradeoff might be to keep encrypted data on GCP with keys managed on-premise, with transparent encryption/decryption by way of a local proxy.
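A toy sketch of what such a proxy would do. This is illustration only: the SHA-256-based keystream below is a stand-in for a real cipher (a production proxy would use something like AES-GCM from a vetted library), but the shape of the idea holds — only ciphertext ever reaches the provider, and the key never leaves the building.

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the key and a per-message nonce."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Run on-premise; only the resulting blob is uploaded to the cloud."""
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # integrity check
    return nonce + tag + ct

def decrypt(key: bytes, blob: bytes) -> bytes:
    """Run on-premise by the proxy; the key never leaves the building."""
    nonce, tag, ct = blob[:16], blob[16:48], blob[48:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("ciphertext tampered with")
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))

key = secrets.token_bytes(32)  # managed on-premise, never uploaded
blob = encrypt(key, b"internal document")
assert decrypt(key, blob) == b"internal document"
```

The provider only ever sees `blob`; without `key`, it is opaque. The operational cost is that any server-side feature that needs to read the data (search, previews, dedup) stops working, which is the tradeoff the rest of this thread gets at.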


Most businesses need a lot more than dumb storage from their IT systems though...

As soon as you start using the full suite of cloud tools, it's impossible to not give the provider the encryption key...


I see what you mean. If those 'value additions' can work on metadata, the separation of data and metadata might help some, but that will become complex very quickly.


> The result is dismal, we dread using it because it’s very unreliable, and it costs double or triple what GCP costs.

I'll be honest - you guys need to find a new technology partner then. Creating a private cloud that's reliable and offers the basic services that the major providers have is not difficult in 2020. Some of the aaS stuff can get tricky but is still entirely do-able.

Regardless, if they found a way to make your infrastructure MORE expensive than the public clouds they either have no idea how to negotiate or are really, really bad at their jobs. The public cloud is a lot of things - but cheap isn't one of them.


Oh no, that’s the best part: we built it in-house! Easily the worst place I’ve ever worked at, technology-wise.


Is there yet an open-source alternative to GCP? Can you buy bare metal and just have your own cloud?


Presumably Kubernetes is the answer here, but you need to run a lot of things yourself that come for free with GKE. Also “just buy bare metal” ignores the massive effort involved in operating a data center.


I don’t think so. But knowing my company’s capabilities and proficiency, it was obvious it was never going to work. Would have been much more logical to buy from one of the many local providers who run their own data centers. It doesn’t really matter because in the end the solution is so bad that we try and use GCP anyway whenever we get the chance.


Yep, you have OpenStack, which is the standard and is running some huge deployments, like these: https://www.openstack.org/use-cases/


OpenStack is probably the closest but it will not have everything GCP or AWS has


It's probably enough for most organizations, though.


Depends on what is desired... theoretically, using Proxmox and VMs containing CapRover hosting Docker containers should get you everything you want... but the ability to instantly scale upwards is difficult to do yourself.


> the ability to instantly scale upwards is difficult to do yourself.

If your business involves any amount of low priority bulk compute, this gets much much easier. You simply let the low priority stuff fall behind while your order to Dell for new servers is being delivered...

Also, if you have compute that could be on-prem or could be in the cloud, you can set up a kubernetes cluster spanning both and let non-privacy-sensitive overflow to GCP as needed.

All of the above rarely comes out cost-effective though, because while the raw compute is cheaper to DIY, once you factor in the staff time to build, maintain, and deal with the shortcomings of your bodged-together on-site solution, it's going to come out much more expensive.
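The overflow policy described above can be sketched as a toy greedy scheduler. The job names, core counts, and capacity here are all made up for illustration: sensitive work stays on-prem (and falls behind if it doesn't fit), while everything else spills to the cloud.

```python
def place_jobs(jobs, onprem_capacity):
    """Greedy placement: fill on-prem first; non-sensitive overflow goes
    to the cloud; sensitive work that doesn't fit simply waits."""
    placement = {}
    used = 0
    for name, cores, sensitive in jobs:
        if used + cores <= onprem_capacity:
            placement[name] = "on-prem"
            used += cores
        elif not sensitive:
            placement[name] = "cloud"  # privacy-insensitive overflow
        else:
            placement[name] = "queued"  # low-priority bulk compute falls behind
    return placement

jobs = [
    ("customer-db", 32, True),    # privacy-sensitive: must stay on-prem
    ("ci-builds", 48, False),
    ("batch-render", 64, False),  # bulk compute: fine to overflow or wait
    ("analytics", 16, True),
]
print(place_jobs(jobs, onprem_capacity=96))
# {'customer-db': 'on-prem', 'ci-builds': 'on-prem',
#  'batch-render': 'cloud', 'analytics': 'on-prem'}
```

In a real Kubernetes setup you'd express the same policy declaratively (node labels/taints plus scheduling constraints on the sensitive workloads) rather than writing a scheduler yourself; this just shows the decision rule.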


To be honest, it depends on your SLA for uptime.

We've built out something homebrew like this using cheap compute desktops but you're still going to pay a lot upfront.


I use both MS and Google at various locations. The result is also dismal... But we can all be exasperated together and vent about how awful it is without upsetting anyone.


Yeah, this is something I have seen. And a lot of costs, both in people and hardware, go into things that aren’t core to your business.


I work in banking in Europe, one of the internet-only ones. We have super rigid governance routines regarding cloud storage. You can’t store a single byte on any cloud service, even incidentally, without a thorough review ensuring that no customer data is present.

Meaning, e.g., there are specific, rigid rules regarding how Postman can be used while developing backend services, to prevent customer info from being inadvertently transmitted during testing.

Of course, it’s a PITA, but it serves its purpose.


Yeah, Postman has gotten dirty. Why the hell should a REST client/testing tool transmit everything to the cloud?


Postwoman might be enough for you if you're looking for a replacement, and it's FOSS.



Depending on what you're doing, just use curl
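In the same spirit, a quick request/response smoke test needs nothing beyond the standard library, so nothing syncs to anyone's cloud. A self-contained sketch, with a local echo server standing in for whatever backend you'd normally point Postman at:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    """Stand-in backend: echoes the POSTed JSON body back."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The kind of request you'd otherwise click together in Postman:
payload = json.dumps({"ping": "pong"}).encode()
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/api/test",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())

server.shutdown()
print(result)  # {'ping': 'pong'}
```

For one-off checks from a shell, `curl -X POST -H 'Content-Type: application/json' -d '{"ping":"pong"}' <url>` does the same thing; the point either way is that the request history lives on your machine.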


Just like you could just use nano in place of IntelliJ.

But the idea that Postman is actually git in that analogy is hogwash and a way to wrest data out of people.


I think you’re really overestimating the ability of most small tech companies to self-host reliably and securely.


You’d be surprised. We had security that rivaled even the standards required by banks. Truly crazy-high: multiple physical keys paired with Vault and some Faraday-cage-protected offline signing thing. I wasn’t privy to it all, just saw bits of it while it was being implemented. Suffice to say it can be done, and by going off-cloud you get the flexibility to do things like this, but... in the end it’s overkill.


For every one of those crazy-high-security companies, there are probably thousands that couldn’t secure their MongoDB instances.


Perimeter security is not in vogue, but it's better than being publicly accessible on the Internet. I would argue that if you are not comfortable with security, take it off the Internet and put it in a DMZ. You still need security, but it's a more forgiving environment if you get it wrong between setup and pen test.


And they will host it on Amazon, Google or Azure where their firewall admin won't possibly notice..


They could if they spent the $ and recruited experienced people.


I work for a relatively small company (about 15-20 odd people) and we've got a server rack in the office. It's positioned at a very specific angle (with tape on the floor) alongside an AC unit for cooling, but we self-host a lot of our stuff.

It's done on a budget as well so we're kinda forced to use open source software.

Weirdly enough we use Skype for work chat and Zoom for video meetings, so it feels a bit inconsistent.


So you're one fire away from a complete company collapse, got it.


What do you think they did before the cloud?


This is what amazes me. I suspect there's an entire generation of developers and technologists that don't realize we used to manage infrastructure and deploy applications on top of it.

It's as if somehow, in many eyes, that's impossible now. Depending on your scale it may be impractical given external hosting options, but it's certainly not impossible. Lots of data centers rose and started automating shared hosting and co-locating hardware, which was a step forward. I remember working with Rackspace, Equinix, The Planet, etc., which further automated quicker server deployment, had applications for UPS resets/interrupts, etc. The more you moved towards a specific business's automation services, the less portable your infrastructure was outside that environment.

That continued on until you now have more sophisticated hosting like what AWS and GCP provide. Now, abstractions exist for just about everything in a data center, and the trade-off is that you now have to manage all that complexity through proprietary APIs, consoles, and so forth.

In addition, the tradeoff here is the more complex the infrastructure, the less easy it is to shift it to another provider. That may be fine for you, it may also not be. It's all definitely possible though.


I'm not devops, but how hard is it to properly set up VPN / ssh ?

We have a gitlab server, and connecting requires being on VPN, which requires 2FA, and then ssh, which requires your keys to be properly set up.


Parent comment is talking about self hosting “Google Docs, S3, Dropbox, GitHub, Slack“. Running all those things (and more) instead of focusing on your core business is probably a mistake for most companies.


I don't think so. The company I work for is really focused on digital independence. Their core business is industrial electronic component design. The whole supporting office runs on LibreOffice, Thunderbird, Mattermost and Nextcloud, all hosted on company premises. They employ two full-time admins, one for the Windows clients, the other for the Linux clients (which one can ask for) and servers. This whole setup is, according to them, surprisingly easy to manage and maintain; you just have to find a boss who's willing to try it. Maybe it's different when you're a software shop and really need S3 or something.


Indeed.

As long as you have decent hardware (cpu/ram mostly) and reliable storage (netapp or something similar) you can get stuff done very easily.

Also, most people seem not to have noticed how fast computers and disks have gotten recently, and how many resources you can pack into a single physical machine: you can nowadays fill a 2U, 2-socket machine with 128c/256t (2x AMD EPYC) and literally terabytes of RAM...


Parent comment is also talking about small companies. Below a certain size all those things (well, equivalent services, not those exactly) don't need more than a single server which is fairly painless. When you grow to a size where you need it at scale, that's where the pain starts.


You could still host it in a country with a less bad track record of industrial espionage.


> You could still host it in a country with a less bad track record of industrial espionage.

Good luck finding one. The world is divided into 3 spheres of influence: US, China and Russia. To do business you must obey one of them.


Plus the EU, surely?


> Plus the EU, surely?

For most practical purposes, the EU is nested within the US sphere of influence.


US sphere of influence.


Ah, gotcha, agree that makes no sense. We only self-host gitlab and zulip.


It might not be hard, but now you need a number of skilled sysadmin/devops people who can also be available on call.

That's not an insignificant cost.


You have to do some planning, but it isn't too hard. Most issues arise from compromises between security and convenience. Cloud services can offer both in the best case, but aside from O365, which is really sluggish, we are actually in the process of migrating back to on-premise solutions.

Nobody attacks code repositories of non-software companies anyway; people are after CRM and ERP data. There is the occasional issue with malware from emails and special users, but a backup solution with 15-minute snapshots solves that issue. Although the latter can cost a bit and might be too expensive for smaller companies.


This is a new thing. Earlier tech companies used to start with an on-premise server.


And doing this is so simple. Just host your own gitlab server, and version all your company's IP on git. For chat we use Zulip, which we also self-host.

And honestly moving to these tools from slack, confluence, etc. has been awesome.

Zulip's threading model is great. So much better than Slack's.

And using markdown and jupyter notebooks for documentation on gitlab? Damn awesome.


A few questions:

- Is your company mainly a software company?

- How much time do you spend on a week on maintaining your servers?

- How do you make sure that your servers are secure? Maybe you are being hacked every night, does your company have the means to check if there has been a security breach?

- Do you follow/apply the security patches for the OS you are using on the server and all the software you are using on the server?

- Do you have regular offline backups? What would happen if there is a fire in your offices?

These are some of the reasons to go for a cloud solution, especially when you are not a software company (hence you don't have many people who have the knowledge for setting up/maintaining such stuff) or when you don't have the resources to hire dedicated sysadmins.


> Is your company mainly a software company?

Mostly yes. We obviously have sales, marketing, etc. as well.

> - How much time do you spend on a week on maintaining your servers?

I don't do devops. There is a team of people that works full-time on IT infrastructure. No idea how much time they spend. Gitlab and Zulip servers are updated every couple of weeks. No idea how much time these cost.

> - How do you make sure that your servers are secure? Maybe you are being hacked every night, does your company have the means to check if there has been a security breach?

There is a team of people that work on cybersecurity monitoring. No idea what they do. Normal IT people just make sure that everyone's computer is encrypted, set up people's credentials, etc.

> Do you follow/apply the security patches for the OS you are using on the server and all the software you are using on the server?

I don't do anything, somebody does this for me.

> Do you have regular offline backups? What would happen if there is a fire in your offices?

We have multiple locations and the backups are replicated across our own locations.


Thanks for the answers. The reason I asked these questions was because your previous reply started with "And doing this is so simple.". But having full-time teams of devops, cybersecurity and IT is not so simple or cheap after all.


SaaS is increasingly requiring security experts in-house anyway. Yes, the cloud providers can help, but they may also be an unknown liability.

Obviously we're not all going to build our own CPUs from sand at a local beach. So there is a balance between DIY and vetted suppliers


I got to use Zulip for a big project last year; I do very much recommend it. Lots of emotes too, which was pretty fun.


And that's official policy; at a lot of places I've worked, there's a lot of bring-your-own-device stuff: self-employed people, consultants, etc. who are quite casual about using things like Dropbox and co. to share company data.

I mean I like to think the data has no value to e.g. the US or competitors, and that the sheer volume makes it worthless, but I suspect that's just a lack of imagination on my part.


Surely encrypting at source would solve that? It wouldn't matter where it's stored so long as it's unintelligible to prying eyes.


As the recent court ruling on privacy shield decided: no, it's not, you have to treat encrypted user data just like unencrypted user data, and giving it to US hosters violates EU privacy laws.


You can encrypt. In fact, isn't it best practice to encrypt in storage and in transit?


This is way too high level a statement, only good for satisfying an auditor. For actual security, it's much harder.


But you have to follow through. You can't stop at the cloud and happily use closed-source software from (e.g.) Microsoft or Apple. Or let people carry Google-powered microphone arrays into meetings in their pockets.


I think there's a big difference between using Windows (with a local account only) and using Amazon cloud services, in terms of data safety.



