> So what? I would rather have Google/Amazon employees on the issue than some random DevOps dude.
This is fine if three nines of availability is all you need. It doesn't matter much whether a big-brand employee or a small-brand employee is fixing things; it doesn't change the outcome.
However, there are a lot of things that simply cannot live with crappy three-nines availability. And the only way to do better is to stop relying on any single cloud, which inevitably requires infrastructure engineers, aka random DevOps dudes.
In fairness, one "random DevOps dude" might be equally capable and less expensive for your infrastructure. Generally speaking, any software company can succeed without a cloud provider's infrastructure; it's just a matter of cost and developing that competency in-house. There are many site reliability engineers who specialize in high availability and downtime resolution on baremetal hardware. StackExchange notably has this competency internally.
Site reliability is a new fancy name for the sucker who is on call.
They will change careers after being forced to work weekends and holidays a few times. Incidentally, today is a Sunday AND the most widely taken holiday of the year.
I wouldn't. Hire the right person and you have an immediate response instead of waiting on somebody else. That's a large reason we are not going cloud for our new infrastructure.
At work we have a 24/7 20-minute response time clause. If the work emergency phone rings, we are ready to help within 20 minutes, around the clock, even on a Sunday.
Why would you do anything else for your sysop/sysadmin?
You surely realize that no human being can be available 24/7 within 20 minutes. It's beyond slavery to expect that from any employee.
You need at least 10 sysops/sysadmins to achieve anything close to that SLA with a sustainable rota, contrary to the parent posters who believe it can be done with THE right guy.
With 3 people you can have a "follow the sun" rotation during business hours which takes care of the entire week, and I don't think you would need 7 more people for the weekend.
Not all 3 have to respond all the time. It's a rotating schedule for having the overnight phone and the weekend phone (during weekend nights the SLA is relaxed to 1 hour).
And keep in mind, a 20-minute response doesn't mean you fix the problem in 20 minutes; it means you respond to the callout within 20 minutes.
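To make the arithmetic concrete, here's a rough sketch of a 3-person follow-the-sun rota in code. The names, time zones, shift boundaries, and weekend rotation rule are all invented for illustration, not a prescription.

    # Rough sketch of a 3-person follow-the-sun rota (names, time zones and
    # shift boundaries are invented). Each engineer's normal business hours
    # cover one 8-hour slice of the UTC day, so weekdays are covered without
    # anyone holding a phone overnight; the weekend phone rotates by week,
    # so each person only carries it one weekend in three.
    from datetime import datetime, timezone

    ENGINEERS = [
        ("Ana",   range(0, 8)),    # e.g. APAC office: 00:00-07:59 UTC
        ("Boris", range(8, 16)),   # e.g. EMEA office: 08:00-15:59 UTC
        ("Carla", range(16, 24)),  # e.g. AMER office: 16:00-23:59 UTC
    ]

    def on_call(now: datetime) -> str:
        """Return who holds the emergency phone at a given UTC time."""
        if now.weekday() >= 5:                            # Saturday or Sunday
            return ENGINEERS[now.isocalendar()[1] % 3][0]  # weekly weekend rota
        for name, hours in ENGINEERS:                      # weekday: business hours
            if now.hour in hours:
                return name
        raise AssertionError("the three shifts cover every hour of the day")

    print(on_call(datetime.now(timezone.utc)))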
I think you're a victim of an easy-going startup culture.
I've been on call for escalations for like 15 years. That's miserable enough; IMO frontline guys need fixed schedules and rotation if the volume is high.
I definitely cast my perspective on this and apologize if I came on too strong.
Sorry to say, you're right and he has Stockholm syndrome. It's a very aggressive schedule; he must be missing holidays half of the time to keep the phone.
This just doesn’t make sense. Google/Amazon employees basically are some random DevOps “dudes”. Whereas your own people would be...whoever you decided to hire to work on your infrastructure.
The problem is that cloud infrastructure is made to be very, very complicated because they're selling tons of managed features that rely on each other. I don't know why the idea of having your own DevOps is suddenly considered bad. Renting your own bare hardware and managing solutions yourself is still very much a thing, and something you have to do if you're using lots of bandwidth. Dropbox, for example, built their own infrastructure to get off of AWS [1].
I don't think that anyone thinks having your own infrastructure is bad, or if they do, they likely don't have much experience. Rather, I think it can be a nightmare if it's not properly managed, and it's hard to develop the skill to properly manage the process unless you've been burned in the past.
It's like insurance. You wouldn't be able to hire as many, or as highly paid, random DevOps engineers on your own. So you and others pay a third party to do it, just in case you need a competent person.
I’d need to see some spy shots of Stallman in an Uber, talking on an iPhone and tapping out a denial on a Surface to really believe this news was true.
Speculating here, but I was woken half an hour ago by an SMS from our prod monitoring system. The people at Azure had scheduled required maintenance for some instances for this morning, which I had already had performed during the scheduled window over the past two weeks, but they seem to have brought down two thirds of the instances anyway. Possibly unrelated; just my two cents.
That's pretty vacuous. Everyone's computer is someone's computer. The more important point is how capable you are at managing it yourself.
What you're trying to get at is this: would you rather trust your infrastructure to a large organization whose core competency it is to do so, or would you rather manage it yourself? For many companies it makes more sense to have someone else manage it because of division of labor.
If you believe you're better suited to managing your own hardware for cost or capability reasons, you should. But of the arguments in favor of that decision, pointing out that "you cannot do anything" when GCP/AWS/Azure has downtime is a pretty poor one. It's an exceptional circumstance if you're 1) able to achieve better uptime than a cloud provider, 2) at nearly the same cost (in personnel, hardware and software), and 3) while being relatively unaffected by the downtime of major cloud providers anyway.
The companies for which the calculus shifts in favor of managing their own hardware probably don't need to be told "the cloud is just someone else's computer." In contrast, most companies using a cloud provider do not have a readily available alternative because they do not have in-house talent capable of maintaining baremetal hardware (local or colocated).
I consider myself personally capable of maintaining a baremetal distributed system with high availability, because I presently do that. But for the most part I wouldn't encourage companies using a cloud provider to invest in their own infrastructure. It's usually expensive in personnel, time or both.
Although I like the concept, at this point status pages are very disappointing to me: some stay green when everything is failing because they're not updated properly, others stay green because "it was only a localized partial failure" even though the whole thing breaks (hi AWS!), and so on. Sure, some are reliable, but enough aren't that it feels like you can't trust any of them.
You can't look at the status page and believe what it says, so you go and ask people anyway (on IRC, Reddit, HN, whatever community you like), meaning the page might as well not have existed.
Couldn’t agree more. I pushed for one to be implemented at my last job (api) as I felt it was ridiculous that we didn’t have a means to communicate downtime, outages, issues.
Initially the status page worked. But as more and more people subscribed to it, posting an alert became a bigger and bigger deal.
And unfortunately, an alert couldn't be raised only to those it was relevant for.
All this led to was the status page not being updated, and thus it became a useless tool for determining whether an issue was occurring.
Back to Twitter...
I feel the product needs a lot of work in practice, and possibly in implementation and training.
Ah, sadly I believe your personal experience is very common.
It's insane, really: a company puts out a status page to tell its customers "you can trust and rely on us through this dedicated medium to know our status", and once the customers buy into the proposition and use it, the very first thing that company does is make it so you cannot trust and rely on them through that dedicated medium. Succeeding is what causes it to ultimately fail.
Status pages should have stayed undocumented features for "the little guys" behind the scenes to communicate with, and never made it into the open world where PR, marketing and decision makers can roam.
Status pages shouldn't need any manual intervention.
I set up mine to automatically monitor my website from another service provider in a different datacenter. That way I know if the server is down for any reason, and the page updates automatically.
If my server goes down, within 5 minutes the status page is red. End of story.
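For what it's worth, that kind of external check can be a very small cron job on a box at a second provider. This is just a sketch: the site URL, status file path, and JSON shape are placeholders, not any particular status-page product's API.

    # Sketch of an external uptime check: runs from a *different* provider and
    # rewrites a static JSON file that the status page reads. The URL, file
    # path and JSON shape are placeholders for illustration.
    import json
    import time
    import urllib.request

    SITE = "https://example.com/"                  # site being monitored (placeholder)
    STATUS_FILE = "/var/www/status/status.json"    # served by the status host (placeholder)

    def site_is_up(url: str, timeout: int = 10) -> bool:
        """True if the site answers with HTTP 200 within the timeout."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except Exception:
            return False

    if __name__ == "__main__":
        # Run every minute from cron; the page goes red within a few minutes.
        with open(STATUS_FILE, "w") as f:
            json.dump({"status": "up" if site_is_up(SITE) else "down",
                       "checked_at": int(time.time())}, f)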
I'm Turkish and have been watching the news, but I don't see any reason why someone would correlate large websites being down with Turkey, especially with no explanation.
Can you elaborate please? This is an honest question and I would like to know if my government is hacking foreign sites in retaliation for sanctions.
Why would you think that Turkey would retaliate against Trump's tweets by taking down reddit and GNU.org? I don't think that the Turkish government has nearly enough technical knowledge to pull off something like that. That is the problem with Turkey right now: it seems to me that the government doesn't want to work with qualified people.
Retaliation would make sense, but I haven't dug deep into Turkey's APT crews lately. Most of the stuff I hear/read about concerns Iranian, Russian, Chinese and roaming APT groups doing attacks. It also would make attacks from Turkish ASes more logical, as the government would not likely do something about 'their own' for free.
Remember: the cloud is someone else's computer. When it's broken, you cannot do anything.