GitHub was down for 5-10 minutes today, and two people got upvoted for reporting it. That makes no sense to me. If GitHub goes down for more than 30-40 minutes, then yes, it's a serious disruption; anything less just pollutes people's RSS feeds unnecessarily. Please consider this next time. Thanks.
Why report it at all? Seriously. If I'm affected, I'll find out. If I'm not affected, why should you tell me?
It seems that every time a well-known service goes down, for however long, for whatever reason, instantly there's a flurry of posts here making sure everyone knows something that they either already know, or won't care about.
I'd really like that to stop, although perhaps I'm just a curmudgeonly old grey-beard.
It has become quite prevalent on HN (unsurprisingly). I wonder whether pg has any plans to fight this, and whether the high-karma-enabled features take karma whoring into account in their formula.
You can easily spot the trend if you read HN through RSS. And it sucks :\
I would guess it's easier to spot the amount of noise when you skim through all the submissions as opposed to peeking at the highest voted stories on the front page every once in a while.
The posts that end up on the front page have been filtered; an unfiltered RSS feed includes even the failed attempts to score easy points.
Yes, I too hope pg decides to fight the good fight against others gaining precious Internet Points via methods I find unsavory, even if their accumulation doesn't affect me in the least.
Sock puppets = Proliferation of guns.
With karma = Availability of bullets.
What doesn't affect you individually affects all of us collectively, i.e. it diminishes the legitimacy of this forum as a whole when the very mechanisms that are meant to cultivate genuine discussion are so easily abused for petty reasons.
Here's an idea: remove the karma total - it's just an e-penis and nothing else. Keep points on posts so you can see whether your post was good and so trolls can be buried, but don't show the total on the profile or news page. Done.
I disagree. When Twilio went down a few weeks ago, the second place I headed was HN, where I found that it wasn't just me. And since this is a meeting place for the kind of people I want to hear from (like Twilio's developers), reading the thread helped me a lot.
So I agree with the "if it's less than 5 minutes, don't report it" sentiment, but I do like it when larger outages are reported.
I would guess that it's just the digital version of the internet going down at work, and everyone hanging out in the office kitchen talking about how the internet is down, until it goes back up.
In other words, if you're waiting for [service] to come up in order to get something done, there's not much better to do.
> if you're waiting for [service] to come up in order
> to get something done, there's not much better to do.
I don't understand this - it's a distributed document control system. You can continue to work on your local repo - why is it that you can get nothing done? Perhaps I just don't understand your workflow, but why is github the site essential to your progress?
It would be useful to me if someone could explain how their workflow requires github to be available, because it would seem that they are using facilities or features of which I am unaware.
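For concreteness, here's the sort of local-only flow I have in mind - a sketch, with made-up branch and file names, using a throwaway repo so it's self-contained:

```shell
# Sketch: everything below runs against the local repo only - no network.
# (Throwaway repo and hypothetical branch/file names, just to illustrate.)
cd "$(mktemp -d)" && git init -q

git checkout -q -b fix-issue-42          # start a branch locally
echo "fix" >> parser.c                   # ...do the actual work...
git add parser.c
git -c user.name=me -c user.email=me@example.com \
    commit -q -m "Fix null deref in parser"

# History, diffs, stashes, rebases - all local too:
git log --oneline -1

# The only step that needs GitHub is the eventual push, and it can wait:
# git push origin fix-issue-42
```

Nothing there touches a remote until the final (commented-out) push, which is exactly why I don't see how an outage stops the work itself.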
Today it went down, just as I needed our CI server to do a pass. Which is fine, grab a coffee, then I sat down and thought, I'll catch up on some code reviews.
Thanks for the reply, but if your CI server does passes reasonably often, why would one of them not happening prevent you from continuing to work? It runs in passes, so it's not really continuous, and I'd expect it to survive running slightly less often, or slightly more irregularly.
Do you stop work when it runs? I don't understand your workflow that implies you have to stop while a process that runs often fails to run on one occasion.
I think the idea is that they just happened to be at a point in their workflow where they needed GitHub to be up. For some reason, wakeless needed to know the results of the CI pass. There aren't many such points for any one person, but with a large enough sample of people, some of them will be at such a point at any given moment.
Thanks for your reply - it's a useful data point for me. However ...
What I think I'm repeatedly asking, and what I think I'm not being told, is what there is in their workflow that requires github to be up at a given moment.
Here I am, working on some code, or working on some documentation. I'm using my local repo, and I decide that it would be a good idea to push to the shared repo on github.
Oh, it's down.
Never mind, I'll carry on with the next bug-to-fix/feature-to-add.
What is it that people are doing that requires github to be up, otherwise at that moment they can't work and have to wait?
I feel like I'm asking a question that makes sense to me, and yet people are staring back blankly, unable to comprehend the question. Perhaps my understanding of people's use of github is so radically wrong that my question is based on total misconceptions of everything.
I don't know. I'm trying to find out. I'm getting downvoted.
Pretty soon I'll go away without having learned something from the people who clearly have the knowledge, but can't understand my ignorance.
I wonder if this is born of the fact that I always arrange my workflow so as not to require any external services at any specific time. In part, this is a result of getting into computing at a time when remote services were inherently unstable. Perhaps times have changed enough that people assume remote services will always be up, and then when they aren't, everything has to stop, because their workflow is predicated on availability.
This is like programming an API-querying system that just assumes the remote server won't hang. It's true often enough that it's not worth worrying about. I wonder if I'm just from a culture so foreign that no one knows where to begin in explaining the modern world to me.
How do you carry on to the next bug-to-fix/feature-to-add when you're using Github Issues as your bug tracker? Or what if all you've got on your plate for the day is "review and merge everyone's pull requests to create our next release candidate build?" Or, even ignoring Github's extra features, what if you need to integrate a new prototype-stage third-party library, which is hosted on Github?
Or, to be less charitable, and to assume some incompetence on someone's part (though not necessarily the developer doing the work)--what if you're trying to use bundle/npm install to set up a working environment for one of your codebases, but one of the dependencies is listed as a git ref of a repo hosted on Github?
And honestly, this is all assuming you would "just move on to the next [whatever]." Most people will take any excuse to procrastinate. :)
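For the dependency case, one way to insulate a bundler/npm setup is to keep a local mirror of any git-hosted dependency and point the manifest at the mirror. This is just a sketch - the dependency name and mirror path are hypothetical, and a throwaway local repo stands in for the GitHub URL so the example is self-contained:

```shell
# Sketch: mirror a git-hosted dependency locally so installs survive an
# outage. DEP_URL would normally be something like
# https://github.com/someorg/somelib.git (hypothetical); a throwaway
# local repo stands in for it here.
work=$(mktemp -d)
DEP_URL="$work/somelib"              # stand-in for the GitHub URL
MIRROR="$work/mirrors/somelib.git"

# (stand-in dependency repo)
git init -q "$DEP_URL"
( cd "$DEP_URL" && echo lib > lib.txt && git add lib.txt \
  && git -c user.name=me -c user.email=me@example.com commit -q -m "v1" )

# One-time full mirror of the dependency:
git clone -q --mirror "$DEP_URL" "$MIRROR"

# Refresh periodically from cron or CI so the mirror stays current:
git --git-dir="$MIRROR" remote update >/dev/null

# Then point the manifest at the mirror instead of github.com, e.g.:
#   gem "somelib", git: "file:///srv/mirrors/somelib.git", ref: "abc1234"
```

With that in place, `bundle install`/`npm install` resolve the git ref locally even while github.com is unreachable.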
Still no one is explaining a workflow that requires github to be up.
This was a genuine question. I have no doubt that my usage is different from yours, and I appreciate the opportunity to learn. Perhaps I can do things better, and you can be the one to teach me.
I don't care about the downvote(s), but I do care about missing an opportunity to do things better, or more effectively, or more efficiently, or something. Clearly people here use github differently from me, and I'd appreciate the opportunity to learn.
Presumably, interacting with Github's hosted issue tracking, editing your project's documentation on the wiki, submitting pull requests, doing a code-review of someone else's pull request before accepting it--any "clerical" project task other than writing and committing code, really.
I was speaking more generally than just Github's case with my original comment, though; for example, it's really obvious that people talk about Reddit being down on HN because when Reddit is down, they're A. bored, and B. want to talk about Reddit being down. They'd do that on Reddit if they could ...but it's down.
Usually there's a technical discussion of why the service went down; other times someone comes back with an explanation of what happened and how to deal with or prevent it, which is kind of helpful.
This post is worse than what it derides, of course. It fits well within the category of things that pollute people's RSS feeds unnecessarily.
This is Hacker News, and many of the hackers here use GitHub, so when it goes down they might be spinning their wheels. If it's down for less than three minutes, it probably won't make it off the new page. I don't think your arbitrary 30-40 minutes is better than the norm that emerged from user behavior on HN. Why do you, Toshio, think you know with a high level of precision how long GitHub needs to be down for it to be relevant to HN?
Lighten up; he just posted a suggestion that maybe GitHub being down isn't something people need to rush here to post the minute a web request times out. If he succeeds in lowering the rate of people posting this useless information, then it's a net win.
I don't think it would be a net win, though. Neither you nor OP has established that 30-40 minutes is a better threshold for when a story is made about GitHub being down than what it currently is (apparently 5-10 minutes when there aren't a lot of other good stories to compete with on HN). You and OP might prefer it, but you're just a couple of data points.
I was reminded of someone emailing a list of thousands of people, where the first reply-all says, "Can you please take me off this list?" Then many more such requests follow, trailed by emails desperately begging people to "STOP REPLYING TO ALL!!!!"
There are many automated systems that'll detect outages, and take some sort of an action in response. There's no need for this site to be involved in any way.
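Even a cron job gets you most of the way there. A minimal sketch of the idea - the status URL is a placeholder, not any real service's endpoint, and a real setup would feed the alert into a pager or email rather than echo:

```shell
# Sketch of an automated outage check (placeholder URL; hook up a real
# notifier where the ALERT line is).
STATUS_URL="https://status.example.com/api/status"   # hypothetical

check() {
  # Any HTTP code outside 2xx (including 000 = no response) counts as down.
  case "$1" in
    2??) echo up ;;
    *)   echo down ;;
  esac
}

code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$STATUS_URL" 2>/dev/null)
[ -n "$code" ] || code=000
if [ "$(check "$code")" = down ]; then
  echo "ALERT: service down (HTTP $code)"   # hook your pager/email here
fi
```

Run it every minute from cron and you'll know about an outage before anyone has finished typing the HN submission.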
We launched our status board for exactly this reason: to get notified of random disruptions (specifically API disruptions). You can even choose your flavor - we support email, SMS, IM, webhooks, etc. It sure beats relying on HN or random failures for disruption notification.
Based on the speed with which the post received votes, I suspect most of the "votes" came from people trying to submit the news that had already been submitted.
I agree with you, OP, but it seems there were a lot of people jumping on that bandwagon.
I find Twitter a much better forum for finding out whether a service has gone down. We noticed Lovefilm going down the other night and could easily confirm it was a server-side issue via Twitter.
Of course, HN is a great place for people to discuss how to avoid a productivity disruption when such services go down.
I thought it was pretty serious. Github is no longer a tiny company and when so many people rely on you for everything a 5 min outage becomes significant. E.g. I was not able to deploy to my server a few mins ago due to this outage. This makes me question the decision to use github going forward.
What value does it add to HN to have outages like this reported? As far as I can see, and as I said elsewhere, if you're affected you'll already know, and if you're not affected it won't matter.
I concede that it was important to you, as no doubt it was important to many. The question is, why should it be posted to HN? There might be value in a longer post with real analysis of outage statistics and patterns, but this isn't it.
I don't think the point is to merely be aware of the outage (as most who rely on github so heavily would most likely already have checks in place to notify them), but rather start a conversation, no matter how un-actionable, on whether the next guy should be using github.
There is a local git repo for the local env, but I don't see why it's wrong to use git to deploy to staging/prod using different branches. Surely there's a better solution, but who said anything had to be perfect? We're hackers.
This is where people go wrong. Why throw the advantages of Git out the window and depend upon one master repo? You are aware that you can use your local repo are you not?
Git makes it pretty awkward to sync a branch via multiple paths. A globally-accessible repo that never goes down is the only easy way to transfer revisions.
Yes, that's true but that's acting like there's no fallback when things do go down. A 'repo that never goes down' doesn't really exist (as is proven here)
Not sure HN is the best place to report it; however, working at a company that uses GitHub extensively at large scale, I think even an outage of 5-10 minutes counts as serious for the business. I'm amazed that people seem to find downtime of "cloudy" services more acceptable somehow. We have similar issues when JIRA goes down, which has a significant impact on overall productivity.
I would respectfully disagree. One of the ways that I assess whether to use a service is based on its reliability. There is not a great way to look at historical records for when a particular service was down to understand how this might impact me. Searching HN has been very useful to assess other people's experiences with services, so why not for uptime too?
Come on, GitHub is used for serious work for several big companies. One hour downtime is an eternity, I'm quite happy with reporting 10 minute downtimes, as most of my workflow involves github, every step of the development process (ticketing system, dev vm's, CI server, capistrano, etc...) all at some point connect with GitHub.
It's quite odd how so many Git advocates relentlessly stress the importance of decentralized source control, yet turn around and centralize so heavily on GitHub (including source control).
I agree that it's an important tool and should be discussed.
But...if you're a big company, who's doing serious work, then take advantage of the fact that git is a DVCS and have something as a failover measure. GitHub is great and has lots of great features, but there's nothing stopping people from having a mirror synced without all the sugar but to keep productivity going.
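Concretely, one low-effort version of that failover is to add the mirror as a second push URL on origin, so every push updates both. A sketch - the GitHub and mirror URLs would be real remotes in practice; throwaway local bare repos stand in here so it's self-contained:

```shell
# Sketch: make one "git push" update both GitHub and an internal mirror.
# Throwaway local bare repos stand in for the real remote URLs.
work=$(mktemp -d)
git init -q --bare "$work/github.git"     # stand-in for GitHub
git init -q --bare "$work/mirror.git"     # stand-in for the internal mirror

git init -q "$work/app" && cd "$work/app"
echo hi > README && git add README
git -c user.name=me -c user.email=me@example.com commit -q -m "initial"

# origin fetches from "GitHub" but pushes to both remotes:
git remote add origin "$work/github.git"
git remote set-url --add --push origin "$work/github.git"
git remote set-url --add --push origin "$work/mirror.git"
git push -q origin HEAD:main

# If GitHub goes down, fetch from the mirror directly:
git remote add mirror "$work/mirror.git"
git fetch -q mirror
```

Note the first `set-url --add --push` re-adds the GitHub URL itself: once any push URL is set, the fetch URL is no longer used for pushing, so both have to be listed explicitly.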
I recommend monitoring third-party services automatically all the time, e.g. we do it using https://github.com/alphasights/open_nut.
I do agree that reporting outages on HN is totally irrelevant.
Agreed - this just pollutes the ecosystem. If someone needs to know and gets impacted, they can check the status page themselves. Wasted 5 mins reading it!!
But to play devil's advocate, in aggregate, the reports of the outages, even the small ones, could be useful to someone deciding whether or not to use GitHub.
I'm going to be that snarky fellow who submits a post and turns this thread into an ultimate meta-inception:
"Beg HN: Please only beg about serious issues (250upvotes+)"
Now upvote me for making clever comments about infinite-regressions, which all hackers are obviously interested in; or downvote me because I failed to add "</sarcasm>" to my comment - but wait, I just did, which would then cause an alligator paradox! (woohoo, now you'll want to upvote me because I mentioned paradoxes - but wait isn't that a paradox to upvote me for... nevermind)
But obviously you now want to downvote me because it is apparent I'm procrastinating and wasting time on HN and have nothing better to do. But wait, oh snap - now you want to upvote me because I'm writing satire about people who write about infinite regressions... which, wait, hold on, would mean that I'm not -- nope, nevermind. I'm shutting up here, because I'm sure you could figure out what my next 100 paragraphs will be, which means I don't even need to write it.
But gasp, I just did write th -- this author was shot dead* (then who wrote that he was shot dead? Obviously it was only a flesh woun -- this author was `Rabbit of Caerbannog`ed)
rofl, I can't tell if the person who downvoted me is actually making a satirical statement about downvoting... which ironically would just be validating my initial satire! ;)