Two hours ago, they were down constantly, but now they seem to be up. Github seems to be getting increasingly reliable!
You can always pick an appropriate window of time, point to it, and say "See, there's a trend!". That doesn't make it so.
Github used to go down much more often than it does now - calling it 'increasingly unreliable' really just shows that you have begun depending on it more heavily.
Their status page [1] still indicates a 99.85% uptime in the past month, and before this and yesterday's problem, their status page was mostly green across the board for a couple of weeks. It depends on your requirements, really. Nobody can guarantee 100% uptime.
'Sufficiently' is meaningless without a qualifier. Sufficiently for what purpose?
There is a value proposition involved - you can run your own source code hosting and anything else, but it costs money and time to do it. Especially if you need six nines of uptime.
For a large DDoS attack there aren't any easy ways to drop only the DDoS traffic - especially if it's hard to identify the DDoS traffic in the first place. If they're getting more 'bad' incoming traffic than their connections can handle, I don't know how they'd drop that stuff - they have to receive the packets before they can filter them. Maybe their bandwidth provider has tricks for this...
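To make that concrete, here is roughly what local filtering looks like (a sketch assuming Linux edge hosts with iptables; the thresholds are arbitrary). These rules only run after the packet has already come down your uplink, so they don't help once the pipe itself is saturated:

    # Drop sources opening more than 200 new HTTPS connections per second.
    # This runs on the receiving host, i.e. the traffic has already consumed
    # inbound bandwidth by the time it is dropped.
    iptables -A INPUT -p tcp --syn --dport 443 \
        -m hashlimit --hashlimit-name web-flood \
        --hashlimit-above 200/second --hashlimit-mode srcip \
        -j DROP

Shedding traffic before it reaches you has to happen upstream - blackholing or a scrubbing service - which is presumably why people reach for providers like Prolexic.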
They're using Prolexic by the looks of things. You'd think a company that specializes in mitigating DDoS attacks would be able to mitigate DDoS attacks.
Maybe I'm just misunderstanding the word 'mitigate'.
It's not their fault - we were in the middle of provisioning and service validation with them, but it wasn't completed. We've had to work through some issues on the fly that we were trying to handle non-disruptively, but they're mitigating well for us now that we've got it dialed in.
In response to your comment and the parent comment, from someone in InfoSec: security can never be completed in the way a product can be. It's an ongoing war, and sometimes your opponent gets the upper hand for a while. The problem with being the "good guys" in security is that you have to make sure every hole is closed while still letting the business run. It's easier to be the bad guy, because you just have to find one thing the security team missed.
Security doesn't exist without the business and the business doesn't exist without security, but the business tends to trump security for the sake of features and convenience. It's a very delicate see-saw, and all you can really do is try to run back and forth from side to side, hoping that the other end doesn't hit the ground before you can get over there again.
Actually, as a solution architect I have to deal with all sides of the problem: people attacking, audit companies, penetration testing companies, and software engineers leaving gaping holes.
The only people who deliver little value are the paid-up consultants. When a full penetration test and code review misses four obvious vulnerabilities I placed on purpose, they get told to fuck off. Application firewalls which are circumvented trivially. QoS solutions that don't work.
So far, four well-known, well-respected companies offering certification and testing have missed the holes and have been fired.
That's the problem: no delivery.
My attitude might be wrong in your eyes, but I refuse to employ box-tickers, which is what the entire white-hat side of the industry is about: canned report, where's my cheque?
No see-saw other than a bent, twisted one that sucks up cash in exchange for a half-arsed job.
Every defense system has its limitations. There's a truism that if brute force doesn't work, you aren't using enough of it, and I think that applies to running a DDoS attack.
Couldn't they use something like CloudFlare to have the IP point to local servers?
Then the traffic is split by location, with each edge server taking only local requests. That should greatly reduce the incoming traffic per server, at which point they can try to filter out the 'bad'.
Indeed, you would not be able to. CloudFlare only does HTTP/HTTPS right now. Technically, since Nginx can also support SMTP, they should be able to do that as well, but it's not implemented currently. Basically, if you want to protect SSH it would have to be a provider that does network-layer protection, like Prolexic.
Well, CloudFlare could certainly forward (or not forward, in the case of bad traffic) SSH traffic. But they would need to dedicate an IP to your account, or provide you with a port number to use.
That doesn't work in all cases. If you can't distinguish between good and bad traffic or if it isn't specifically targeting an entity it becomes much more difficult to handle.
It also depends on the ingenuity of the DDoS attack, the details of which aren't known to the public, so you can't really say anything sensible about it.
If the anti-DDoS tools they are using aren't working, and neither is a specialist service like Prolexic, that's usually hint enough that this attack isn't particularly common or easily filtered out.
You can gripe about all the companies and tools if you want, but a good DDoS is quite a bit more complex to handle than 'just filter away the bad crap'.
As RFC 1925 ("The Twelve Networking Truths") puts it: "(4) Some things in life can never be fully appreciated nor understood unless experienced firsthand. Some things in networking can never be fully understood by someone who neither builds commercial networking equipment nor runs an operational network."
Would it help if they allowed users to specify some port other than the default, perhaps only as a fallback? I am presuming that it would be easier for them to prioritize traffic to a given range of ports - in this instance the non-standard ones - but perhaps that's wrong.
If so, I wonder if they could counter this now by 1) implementing that, 2) opening and publishing a non-standard port for login solely for that purpose, and 3) maybe moving that fallback port around if the DDoS shifts onto it.
Even if not perfect, it would force the attack to spread its resources.
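If something like that existed, the client side would be trivial. A sketch - the alias and port 2222 below are made up, not anything Github actually publishes:

    # ~/.ssh/config
    Host github-fallback
        Hostname github.com
        Port 2222        # hypothetical alternate port
        User git

Then 'git push git@github-fallback:user/repo.git master' when the default port is being hammered.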
The solution is to use a bug tracker within the SCM:
http://bugseverywhere.org/ (my personal favorite, but there are three or four other options that you can look into).
Not only does it offer distributed bug tracking on the command line (without breaking your workflow), it also implicitly lets you isolate bugs to branches. You can fix a bug in a branch, and a subsequent merge of the changeset will automatically carry the fix into the current branch.
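Roughly, the workflow looks like this (the 'be' command names are from memory and may differ between versions, so treat it as a sketch and check 'be help'):

    # bug data lives inside the repository, so it follows normal branching
    git checkout -b fix-login-crash
    be new "login crashes on empty password"   # record the bug in-tree
    # ...fix the code, mark the bug resolved, commit both together...
    git commit -am "fix login crash, close bug"
    git checkout master
    git merge fix-login-crash                  # bug state merges along with the code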
I don't understand why these projects are so underrated. In "early git times", distributed bug tracking on top of git was quite a hot subject. They solve many issues nicely.
Github might be a "nifty" viewer, and I do host projects on github for added visibility (by simply using a second push remote), but that's about it. I find "tig" and "bugseverywhere" to complement git nicely and work much better than any web browser could.
DDOSing Github reminds me of a study I read about a while ago. It showed that many burglars tend to break into homes close to their own instead of targeting wealthier neighborhoods.
Many of the reasons for that will be very different from this attack on Github, as there is no money in attacking Github. But one reason may be similar: lack of imagination, or in other words, stupidity.
Well, then you have learned not to deploy straight from Github. Get an AWS micro instance which you can also push to and deploy from, or use one of the many other possibilities that a DVCS gives you.
Keeping code in sync with your colleagues should not be a major problem, given that you should be able to sync with each other directly (distributed VCS, etc). Azure... I don't know how that works. Heroku would work without github being online, since you push directly to Heroku - maybe that'd be an idea?
bower install is annoying, yeah; it should allow for backup location(s) to resolve dependencies. Maven allows people to configure multiple repositories, which are often mirrored against each other while hosted by vastly different parties; if one repo mirror is offline, there are a dozen others available in a lot of cases.
For those components, github is a single point of failure.
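One git-level stopgap, assuming the dependencies are plain git URLs and you actually keep a mirror somewhere (the mirror host below is made up), is to redirect github URLs to the mirror while it's down:

    # git will transparently rewrite github.com fetch URLs to the mirror
    # until you remove this setting; anything that shells out to git
    # (bower's git endpoints included) picks it up.
    git config --global url."https://mirror.example.com/".insteadOf "https://github.com/"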
While Git is distributed, working collaboratively with others still requires a central platform that everybody working on that Git repo can connect to. GitHub is a very convenient central platform.
To Git's credit, you can, with a little server know-how, set up your own git server and give all previous contributors access. However, for a short downtime this could be overkill.
And even if you don't want to use a dedicated git hosting service, good site hosting platforms (Webfaction) make it incredibly easy to install a git server.
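For reference, the 'little server know-how' really is little. A sketch with placeholder host and paths:

    # on any box the whole team can reach over SSH
    ssh git@backup.example.com 'git init --bare /srv/git/project.git'

    # each contributor adds it as a second remote and pushes what they have
    git remote add backup git@backup.example.com:/srv/git/project.git
    git push backup --all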
I think it's interesting to note how people have come to expect five-nines reliability out of an internet service. Not only is GitHub under fire, but the whole InfoSec industry gets blamed.
Back when my dad installed physical PBXes, the big ones that could be the size of a mainframe, uptime was the biggest selling point: they had to have reliability to five nines (99.999%, if you don't get it). Then when cellphones first came out, everyone got lackadaisical about dropped calls. Overnight, an entire industry worried about reliability "to five nines" changed to "whatever, it's a new service, you've got to expect some difficulties."
The internet started with relatively low reliability. No web host I've ever seen has truly been able to achieve 99.999% uptime. And yet, when GitHub goes down under a "large DDoS attack" but still manages to maintain 99.85% uptime over the last month (with several DDoS-caused outages), everyone comes out of the woodwork to complain. After all, it isn't as if hosting a massive service while keeping everything secure and running happily is an easy thing.
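For a sense of scale, roughly what those figures translate to:

    99.999% uptime  ->  about 5.3 minutes of downtime per year   (525,600 min x 0.00001)
    99.85%  uptime  ->  about 66 minutes of downtime per month   (43,800 min x 0.0015)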
If you're tired of GitHub outages, then get a Bitbucket account, or host your own Git repository for backup. What serious developer, or service, would keep all their eggs in one basket if they really depended on the uptime of just one centralized service?
Bitbucket has had its own fair share of issues, not to mention the "we're pulling Bitbucket offline for 5 hours to move to a new datacenter" debacle not so long ago. I understand why they had to do it, but it is indicative of some issues with their architecture.
They probably haven't gone offline through a DDoS yet because they're just not popular enough to warrant an attack, but I wouldn't bet on Bitbucket faring any better.
I'm a rookie so how does Github being down for a few hours cause problems? I push to Github a few times a day, but if I don't, I just push to Github the next day.
Are there teams that need to be in constant sync pushing and pulling multiple times an hour?
Bitbucket: $10/month for 10 users and Unlimited private repositories.
Github: $200/month for Unlimited users and 125 private repositories.
If you're a team of 10 or fewer in a small company, with a few dozen clients and dozens more supporting libraries, Bitbucket blows Github out of the water.
For the same $200/month Bitbucket also offers unlimited users (again, with unlimited private repositories).
I wouldn't call Github's pricing unreasonable. But I have learned to appreciate Bitbucket's service (they're really on top of things on their Twitter feed) and their pricing is lunch money for a day (as opposed to skipping lunches for a month).
I've seen a lot of stuff in their Twitter feed that they seem to work through, but I've never actually run into any issues. So I've interpreted that as transparency, I guess.
Been with 'em for maybe a year now? Never had a failed push or pull. That's happened a number of times with Github but I wouldn't suggest it's been damaging to the business. Only a minor inconvenience at times.
So with your anecdote and my anecdote, we get to call this "data" now right? :-)
I'd be interested to hear what issues you had with Bitbucket - we just started using them (switched from Github) and haven't had any issues just yet... so I'd like to know what to be wary of!
Is there a git tool which syncs remotes? I could set up a second remote for the times github is down, but how do I share it with my team members? Does everybody need to add it manually? That could become tedious with larger teams or more remotes.
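For reference, the manual version I mean (the remote layout and the backup URL are just examples) is giving 'origin' more than one push URL, so at least a single 'git push' updates everything:

    # keep github as the first push URL, then add the fallback
    git remote set-url --add --push origin git@github.com:team/project.git
    git remote set-url --add --push origin git@backup.example.com:/srv/git/project.git

    git push origin master    # now pushes to both

...but every clone still has to set that up for itself, which is exactly the tedious part.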
Plus: There are a bunch of decentralized issue trackers, can any of them sync with github? Is that possible with their api?
> Plus: There are a bunch of decentralized issue trackers, can any of them sync with github? Is that possible with their api?
Last time I looked, their issues are not stored in git itself. This is something that has kept me from using their issue tracker for my projects, as it encourages lock-in.
The great thing about GitHub is that it's still Git. If GitHub is down, that just means that your central publishing site is down. It doesn't mean that your developers can't work. They can still share amongst themselves, like they probably should be doing even when GitHub is up.
How would this work? (This is a serious question.)
Would each of us set up each other's internal IP addresses (192.168.0.101, etc) as remote repositories? Would each of us run a git repository on our own boxes? Or would we set one up on our own AWS box or something?
Well, ideally you'd have DNS set up internally so you don't have to use raw IP addresses, but essentially yes: you map each of the people you're collaborating with as remote repositories, because that's what they are - remote repositories, all on their own. There is nothing special distinguishing the repository on your machine from the repository on github.
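Concretely (hostnames made up), pulling a colleague's work directly looks like:

    # treat a colleague's machine as just another remote
    # (assumes she runs sshd and the repo is readable by you)
    git remote add alice ssh://alice@alice-desktop.local/home/alice/project.git
    git fetch alice
    git merge alice/feature-branch    # or cherry-pick / review, as usual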
Since the VCS is distributed, you could always mirror a repository on a NAS at home or something.
Edit: I might have misread the parent's comment. If CmonDev was referring to public availability, just a local repo won't do. It depends on who needs access to the code, etc.
Also, I'd be interested to know how a complete lack of service counts as 'mitigating a DDoS attack' - to me it sounds like a 'successful DDoS attack'.