Denial of Service Attacks (github.com/blog)
227 points by silenteh on March 14, 2014 | 172 comments



To GitHub and everyone: please use UTC timestamps when there are potential readers outside of your timezone. Since every technical person should know their current UTC difference, calculating the local time is easy.


You're totally right. I just adjusted the post to convert the times to UTC.


Thank you!


>Since every technical person should know their current UTC difference,

Awww, that's so optimistic.


Don't forget the ancient proverb handed down to us from the original neckbeards:

"At some point in their career, every programmer must pass through the fires of timezone hell."


Timezone hell is, of course, followed up with the three-headed hound of locales, charmaps, and keyboard layouts. And if you pass by that, you drop into the river of cache coherency...

(Esoteric software-engineering topics could make a wonderful video game, you know?)



Every programmer must enter those fires, but it's not certain that all of them will survive to emerge from the other side.


I once wrote a scheduling SaaS app, thinking it would be a short, trivial project. It turns out that UTC offsets don't even stay consistent within time zones throughout the year, due to differences in daylight saving time policy.

It took a bit longer than I expected.


We've probably all been there at one time or another, thinking that "unix timestamps" are all that's needed to represent and perform date/time calculations.

Along with the recommendation to never invent your own cryptography, you should also never write your own date/time routines. Use well-tested functions in your database or programming language libraries.
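As a small illustration of what the library route buys you (a sketch assuming a modern JavaScript/TypeScript runtime with the ECMA-402 Intl API; the timestamp and zone name are example values, not anything from the post): store and compute in UTC, and only convert at the display boundary.

    // Keep the instant in UTC; let the runtime's tz database handle offsets and DST.
    const postedAt = new Date("2014-03-14T21:25:00Z"); // example UTC instant

    const formatter = new Intl.DateTimeFormat("en-US", {
      timeZone: "America/New_York", // an IANA zone name, never a hard-coded offset
      dateStyle: "medium",
      timeStyle: "long",
    });

    console.log(formatter.format(postedAt)); // e.g. "Mar 14, 2014, 5:25:00 PM EDT"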


At my first job I inherited a project from a senior dev who had decided to implement his own date logic instead of using what .NET provides.

Anyway, come Feb 29th, 2008, I got a call that some users were having issues. After not too long I figured out it was caused by the leap year: his logic did not account for leap years. So I called them back and told them it would fix itself by tomorrow. I never actually bothered to fix the bug because a) I thought it was unlikely that the POS system would still be used in 4 years, and b) the code was so bad in all areas that trying to fix anything had a good chance of breaking something else.

Let's just say that job was very character building.


Yeah, I ended up just using Ruby's time functions, but there were still a number of issues despite that (and I don't think they were fully baked).


There was a comment in a piece of code I was looking through once which read, "Except here universal time is actually Pacific Time, maybe they will get the prime meridian moved."

Much of the code assumed that all times were in PST and the commenter had apparently been incorporating data sources from other time zones into the code. I thought it was pretty funny.


Except I have the unholy, irrational evil of daylight saving time in my area, and I can't remember when the clocks go forward or backward, so I can't tell whether my UTC offset is right or not.


Fall back - spring ahead


GMT is more reliable than UTC, doesn't change with daylight savings. It's also the SI-unit for science methods, so it works internationally.


You have that the wrong way around - GMT has daylight savings, UTC does not.


UTC most certainly does not have DST. Many people use GMT as a synonym for UTC, and in that sense, it doesn't either. Britain goes to "BST" (British Summer Time) during the summer. From Wikipedia[1]:

> Greenwich Mean Time (GMT) originally referred to the mean solar time at the Royal Observatory in Greenwich, London, which later became adopted as a global time standard. It is for the most part the same as Coordinated Universal Time (UTC), and when this is viewed as a time zone, the name Greenwich Mean Time is especially used by bodies connected with the United Kingdom, such as the BBC World Service, the Royal Navy, the Met Office and others particularly in Arab countries, such as the Middle East Broadcasting Center and OSN. It is the term in common use in the United Kingdom and countries of the Commonwealth, including Australia, South Africa, India, Pakistan and Malaysia, and many other countries in the Eastern Hemisphere.

> Before the introduction of UTC on 1 January 1972, Greenwich Mean Time (also known as Zulu time) was the same as Universal Time (UT), a standard astronomical concept used in many technical fields.

> In the United Kingdom, GMT is the official time during winter; during summer British Summer Time (BST) is used. GMT is the same as Western European Time.

[1]: https://en.wikipedia.org/wiki/GMT


If you want to convert from a given time zone to your local time zone, you can use: date -d '2014-03-14 14:25 PDT'

Google does this conversion automatically on their own outage pages.


Three-letter time zone abbreviations are ambiguous, though. For example, Australia's east coast time zone is "EST"/"EDT" just like in the US. If you want to present time in a non-UTC time zone, you really need to use syntax like "UTC-0700".


From the 'date' command manual, there are more readable alternatives:

show New York time 14:25 in your local time zone: ~$ date --date 'TZ="America/New_York" 2014-03-14 14:25'

show New York time 14:25 in the Alberta (America/Edmonton) time zone: ~$ TZ="America/Edmonton" date --date 'TZ="America/New_York" 2014-03-14 14:25'

On a GNU/Linux system, for more time zone strings see ls -Ral /usr/share/zoneinfo/


It's bad customer service to require a user to crack open a terminal in order to understand your announcements. UTC is coordinated universal time. It's a standard. Just use it.


Better still would be to use some kind of JavaScript that converts the time into the local browser's time zone. Or a link to such a service.


Just don't put that script on your event schedule pages. I've seen more than one conference make that mistake.


Sadly, the user's TZ isn't available to JavaScript. This is why so many sites have you input that as part of your preferences.

You can get the local time with JS, and you can get UTC, and from those two you can narrow it down (you know the current UTC offset, but that's it), but you're not going to get a single answer.


But you're just converting one time to another. If you know their reported UTC time and their reported offset to local, isn't that all you need? For greater precision, ask for their geolocation or use the IP address to narrow down the possibilities, though the offset obviously helps.

Now that I think about it, you could auto-suggest based on offset and embed a list of locations per option if they needed to pick between daylight saving time or not, for instance. There's room for an interesting widget or service, but it would need to be updated almost as much as a US sales tax calculation widget ;-)


Why hasn't this been fixed yet?


It seems like it has been, but it's basic. http://norbertlindenberg.com/2012/12/ecmascript-internationa... A more complete version might look like goog.i18n's timezone support in closure, though I'm sure you can link the two.
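Roughly, the two approaches side by side (a sketch for a browser context; the Intl-based zone lookup is the Internationalization API feature linked above, and its availability varies, so the offset is the lowest common denominator):

    // The offset narrows things down but is shared by many zones,
    // and getTimezoneOffset() returns UTC minus local (e.g. 420 for UTC-7).
    const offsetMinutes = new Date().getTimezoneOffset();
    console.log(`Current UTC offset: ${-offsetMinutes / 60} hours`);

    // Where the Internationalization API exposes it, resolvedOptions() can
    // include the actual IANA zone name, e.g. "America/Los_Angeles".
    const resolved = Intl.DateTimeFormat().resolvedOptions();
    console.log(resolved.timeZone || "time zone not exposed by this browser");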


> For example, Australia's east coast timezone is "EST"/"EDT" just like in the US

I've always seen it referred to as AEST/AEDT, reducing the ambiguity.


There are other ambiguities - try Ireland, Israel, and India for IST [1]. It turns out those abbreviations aren't part of an official standard anyway, afaict, so it's best to avoid them altogether.

[1] http://en.wikipedia.org/wiki/List_of_time_zone_abbreviations


For converting without using Unix tools

http://www.timeanddate.com/worldclock/converter.html


I'd rather see the local time for whoever is writing the article. I'm not a machine, and hopefully neither are they.


That is if you live in the US, of course. Europeans, to my knowledge, all use GMT and have nearly nothing to do with UTC.


You ever sit there and wonder who the person is on the other end of the attack? Someone sitting there, I guess with not much on that day, decides to command their army of infected bots to attack github.

Why github, I wonder? Perhaps it provides a challenging target. Perhaps github is used as a testing ground for a more profitable future attack.

We often get technical writeups after a DDoS attack, but we very rarely get a writeup surmising the motive behind the attack. I can't believe every attack is simply driven by 'because they can'.


There's a good piece of journalism (EDIT: see the reply to this comment for the link), in which someone who was being DDoSed went into the various black-market forums where the sort of people who would DDoS hang out, to see if they could find their attacker.

What they found was that, at one forum, there was a convention where new, as-yet-untrusted sellers of DDoSing-as-a-Service were expected to take down some big, technically-respected target (e.g. GitHub) to prove their mettle before anyone would hire them. And conveniently enough, some new DDoSaaS seller was advertising right then, telling everyone to "look and see that _________ is down. That was me! Hire me now!"

It further turned out that the site owner doxxed the account, found them to be a thirteen-year-old(!), and called their parents to tell them what their child was doing on the internet.



I met some folks from GitHub last year, and this is what they postulated as well.


Github is a valuable target if attackers are trying to get access to private repositories. A lot of organizations have their entire code base on Github.


How is a DDOS attack going to help with that?


I believe if you're doing that to get at something, you use the DDoS to hide your true intentions. E.g. you keep the DDoS at a level that slows the site way down, so the admins are paying attention to that while you go after what you need. It's like blowing up the building across the street during a bank robbery: you distract the people looking for you so that you can get away.


If you can get a crash, you can possibly find a weakness. If a system relies on Github to work properly, maybe you can find a vulnerability that appears when Github is down. Maybe a MITM (of your target, not Github itself) is easier to perform when Github is slowed down and takes minutes to respond to requests.

There are various possibilities; nevertheless, bringing down Github is surely a juicy objective.


I'm not entirely sure why this question was downvoted. How exactly would a DDoS attack help attackers compromise the target system?


I'm not entirely sure why this question was downvoted

Because HN is now full of people who already know the answer and who like to troll HN.


> Because HN is now full of people who already know the answer and who like to troll HN.

I thought submitting or referring to a good (possibly obscure or unknown) article would be more helpful in that instance, including to myself.


Perhaps in this case it's advertising - a proof of the power of the bot farm that a cracker somewhere is trying to sell?


It just isn't possible to say anything specific about an anonymous person's motivation. Only if they had a name and a face could you talk with any certainty.

Perhaps some developer was not going to make his deadline and wanted to take down the network to buy himself more time? Who knows...


As people leverage the deployment API more and more for production use-cases, an attack on github might come to represent an attack on the ability of MANY products to push changes and respond to their OWN attacks in a comfortable manner...


RIAA trying to take down popcorntime


It just shows that we need some kind of distributed version control system.


Ha! This is good satire, but for me personally, github going down isn't a version control problem nearly as much as it is a project collaboration problem. I can't go review pull requests and discuss issues when github is down, but I can still do all the traditional version control activities. Github is so much more than distributed version control. If somebody started doing the non-version-control things that github does in a distributed fashion, I would be very interested in taking a look.


Take a look at Fossil, then. It's a distributed version control system, bug tracker, wiki and blog, from the author of SQLite.


Yeah, cool, thanks for the pointer. I remember looking at it before but never quite wrapped my head around it.


Actually there are quite a large number of researchers who have concluded that the underlying architecture of the internet itself needs to be more distributed. They call it data-oriented, name-based, or content-centric networking, etc. The hardest part about switching over to a fully distributed internet is maintaining existing business models. Either someone will figure out and successfully market a business-friendly universal data/computing distribution system/network, or we will see free ones pop up that start to supplant more and more of the centralized services associated with particular domain names on the traditional internet. We are also eventually likely to see very strong pushback against ISPs that overcharge for business internet. Well, at least in a sane and just world all of this would come true shortly.


I am still laughing at this comment.


Call me naive, but I fail to imagine why someone would want to DoS Github.

I mean, if you're into this, it's certainly fun to launch DOS attacks against large "evil" things such as government services, large corps and Micro$oft becoz w1ndoz sux0rz, but... Github? Why?


Can you think of an easier way to negatively impact productivity at nearly every tech startup? There might not be a reason other than simple bullying. In fact, I think these attackers might be the same ones behind that 2048 game.


Or bitcoin! I've always referred to it as a DDoS on engineers.


Others mentioned Github competitors, but I think it could be anyone's competitors. I.e. your competitor knows you're using github to handle your continuous deployment (or whatever), and by bringing github down, they bring your whole development environment down.

That's just a (dumb) hypothesis. The cost of doing so is probably not worth it.


Isn't the whole point of a distributed VCS like git not to rely on a unique root?

I don't use Git at work (SourceSafe ftw), but if I had to, I would use Github as long-term backup storage, not as the nexus point of my dev env.


Yes, and not exactly.

Git is federated, as in Jabber. A branch has a place where it lives. If you do something with a particular branch, you need access to the place where it lives but not to anywhere else.

Monotone is distributed, as in Usenet. A branch may only exist on certain servers, but it is not logically tied to any one server (or set of servers). There is a global namespace for branches.

I think other systems (hg) tend to be more like the Git model, because permissions handling is far simpler.


yup, i kind of had this same realization once, because i didn't want to sign up for private repos after i'd left the company that paid for the github subscription...

hard to recreate the logic tho O.o but it was something in the vein of "if i learn to set up git correctly, i shouldn't need a unique root, so what is it then.... a backup with visualization? I don't need that for a private repo, i can use offline tools"


If you value your time, you'll realize github is very cheap. I also particularly like how everyone knows it so adding new contributors is painless.


yea, no argument there. i just only have 1 or 2 clients a year who do closed-source git, so $100 seems like a lot given that i am usually the sole dev


The problem is that github itself aggregates value: Issues, Gist, not to mention that various package management systems use github as the repository.


Perhaps you have a botnet and want to test its capability on a "small scale"/unimportant target before unleashing it on a larger target?

Sort of like testing without getting massive amounts of repercussions.


Cred. If your botnet took down github, that's impressive.


Competition would be my first guess. Or an active employee somewhere who doesn't want to use github: "See, it's offline, we should roll our own solution/keep using what we have". Or just testing out a botnet on an appropriately sized target. Or anything else really. There are dozens of valid reasons.


I could only see someone DDoSing other services so that their employer would let them back onto GitHub ;)


Considering that so many software companies use Github for version control and have it tied into their deployment workflow, it might make sense for Company A (not dependent on Github) to DoS it in order to hurt Company B (dependent on Github).

In fact, some software companies run Github on their own private servers (https://enterprise.github.com/), which prevents them from being affected by a DoS on Github's servers. Granted, these companies probably made the decision to use private servers for other performance and security reasons as well, but avoiding DoS could very well be one of them.


It is the same thing I've wondered. What has github done to anybody? Is it for the L0lZ? Is it because they happen to host somebody's gaming code, or IRC code that they feel should be taken offline?

Though when you think about it, taking out github is a very effective way to basically kill the productivity of many dev shops. At the place I'm working now, we are basically dead in the water while github is down (can't do deployments, can't merge pull requests, can't run automated testing, etc).


Is it for the L0lZ?

But laughter == good and I don't have to think about ethics and things, right!?


Because, although some DoS attacks follow a 'moral' code (according to whoever's morals are relevant at the time), not every DoS attack does, in the same way that some people steal from those who have less, some people vandalise state property, and some launch wars based on lies. In fact, I'd go as far as to say that 'bad' things that happen probably don't have a justifiable moral purpose underpinning them more often than not, and that goes for DoS attacks too.


Github hosts a bunch of full websites too. It's not terribly uncommon for an entire hosting provider to be attacked simply because someone doesn't like what one user posted.


This is getting as pernicious as horse thieving in the old west. We should do what they eventually did about it: concentrated law enforcement (though the interim plan sounds attractive).


- criminals demanding ransom

- "our agency has received intel that this site is a spawning ground for computer hackers" (see: the freenode DDoS)


Yea I was thinking exactly the same thing - DDOS attacking github? There are better things to do with your time.


1) Attack and show 'em who's king 2) Ask for money to not repeat the attack 3) ??? 4) PROFIT!!

Malware, DoS, DDoS, etc. nowadays are driven by plain and simple crime. Spy's stories are really the exception.


Spy's stories?


NSA and such are really the exception (big one though). There are countless extortion attacks every day.


extortion, see meetup.com recently


Why would someone want to DoS Mozilla's Bugzilla installation? People try monthly or so. Maybe even for similar reasons.


I was wondering the same thing. Maybe Github competitors?


I'm running GitLab, and the first thing I think when I read about their DDoS is 'how long before they attack us?'. DDoS attacks just cause everyone a lot of pain, and it would be great if nobody needed extensive countermeasures; DDoS operators waste everyone's time and resources.


If the attacks against Github are mostly proving grounds for fledgling DDoSaaS, I would assume write-ups like these only serve to elevate their status as a good proving ground.

Did this article contain anything particularly useful for anyone thinking about DDoS hardening? I didn't find anything. I guess it's not really supposed to be a technical article, just a smattering of buzzwords to let you know how hard they try.

The postmortem-half-apology has become quite an art form, as getting it right can actually draw a lot of positive publicity, and getting it wrong can be brutal. But I can definitely see how this post would feel like a pat on the back to whoever launched the attack.


I was going to disagree with you, but then I realized I didn't understand what you were saying: What do you suggest they should have done?


Github downtime (and subsequent postmortems) are a regular feature of the HN front page. The postmortems have come to command their own audience, similar to the CloudFlare reports.

It's actually a pretty bad position Github's being put in. They sit at the crossroads of playing defense against DDoS and trying to dispel or at least ameliorate any blame for the downtime.

My point was, if they have indeed become the internet's DDoS proving ground (as several others were speculating), then while you can see how much effort they're putting into these postmortems, I can see it becoming a vicious cycle.

Then the challenge is, how does Github placate their users without basically pinning a ribbon on the attacker? The funny thing is how the "best practice" checklist for a postmortem (say what happened, say how you thought you were safe, say how something unexpected broke your assumptions, apologize, say what you're doing differently in the future) basically ties their hands.

A pretty bad position for Github all around.


There are likely many people who have not experienced this form of attack, and many who may not have even been explicitly aware of it. This article may have made them aware of it, and of the common strategy for dealing with these attacks. That's good, and so many developers use github that the article likely reaches a larger audience than those "thinking about DDoS hardening".

What isn't useful is the narcissism of your comment, and the assumption that everything should be targeted at you and people like you.


It appears I not only failed to make my point to you, but may have offended you in the process. Sorry about that, perhaps my reply to asolove clears things up a bit.


I honestly feel bad for the engineers at GitHub for having to deal with stuff like this. GitHub is large, so they are a target, and the specifics of what they do mean that caching is not a straightforward task. I imagine there are a lot more vectors of attack that have not been used yet, and guarding against them is always going to be on a case-by-case basis. In the meantime, when GitHub is having downtime, or even badtime, it impacts its users pretty significantly. The private repos I work on are a source of income for GitHub, but if this gets common enough, the people in charge might just move away from it to a smaller competitor that doesn't have these problems, just so that my time is not wasted waiting on GitHub to come back up.


Isn't the whole point of git that you don't need to even have internet access to get work done?


You still have your local source code, but GitHub gives you issue tracking, comments, pull requests, and other things you could still use for work - none of which can be cloned locally.


For me, issues are the most important part. I might set up a script to run daily that would import my issue list into a local csv.
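Something along these lines would do it (a sketch assuming a recent Node runtime with a global fetch and a GITHUB_TOKEN environment variable; the owner/repo names and output path are placeholders):

    import { writeFileSync } from "node:fs";

    // Pull the open issues for one repo and dump them to a local CSV backup.
    async function exportIssues(owner: string, repo: string): Promise<void> {
      const url = `https://api.github.com/repos/${owner}/${repo}/issues?state=open&per_page=100`;
      const res = await fetch(url, {
        headers: {
          Accept: "application/vnd.github+json",
          Authorization: `Bearer ${process.env.GITHUB_TOKEN}`, // needed for private repos
        },
      });
      if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);

      const issues: Array<{ number: number; title: string; html_url: string }> = await res.json();
      const rows = issues.map((i) => `${i.number},"${i.title.replace(/"/g, '""')}",${i.html_url}`);
      writeFileSync("issues-backup.csv", ["number,title,url", ...rows].join("\n"));
    }

    exportIssues("your-org", "your-repo").catch(console.error); // run daily from cron or similar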


Okay yes, but it probably isn't going to be down for more than a few hours at a time. I'm sure pull requests and issues can wait a few hours vs. actually fixing bugs or writing new code. Right?


No, they can't wait a few hours. Not when you have a 60 developer team on a private GitHub enterprise account, and it's the middle of the workday.


If you have a team that large and are relying on cloud services, then you need to have a plan in place for short interruptions like this. If you really have a 60-dev team, you should have knocked out all the SPOFs in the systems that support them - 60-dev teams are millions of dollars a year in salary, even at bargain basement wages. There is not a single serious provider that will guarantee you 100% uptime, because outages, however rare, do happen.


I don't disagree with you, but it's also not that simple. Anyway - I'm simply trying to point out that it is a big deal when a large cloud service like GitHub goes down.


I'm not sure what you mean by "a private GitHub enterprise account", but GitHub has a product called "GitHub Enterprise", which is essentially GitHub in a VM that you can install in your own data center. We have a lot fewer than 60 developers, and we have a GitHub Enterprise installation, partly out of concern about issues like this (and also maybe a little IP protection paranoia).


if all 60 are working together, sure, but if it's a handful of devs working with each other, you can pull directly from each other's repositories.

PITB, yes. But you can still get work done.


Again, not so easy when it's a massive multi-team effort - we have a continuous integration system that builds every branch (we use a GitFlow based model) on every commit that gets pushed to GitHub.


You can pull from each other, but since you know github will be up in a few hours and it would take more than that to really coordinate any workflow change, most of the time people just goof off until github is back.

Plus you are forgetting that a lot of automated jobs get triggered on github changes. Many shops kick off all kinds of tests, deployments, and other things based on changes to the github repo.


If devs want to use that as an excuse to do other things, that's cool.

My point was simply that you can still be productive when github goes down, if you want to be.


Sure, developers can still be productive. But what about QA, UX, Product Managers? They rely on automated jobs that get triggered by GitHub changes.

I don't think you're "wrong" for the developer use case, but the reality is that it can bring large teams to a screeching halt.


I find it easy to imagine that if the task you are doing right now requires looking at open issues, it could be a blocker. I depend on an internally controlled issue tracking system, and if my current task is creating or responding to defects and requirements, I'm not going to be happy if that system is down.


One other problem is that if you're using a tool like Capistrano to deploy, you usually set your remote repo to Github. When Github goes down, you can't deploy until it comes back up or you set up your own remote server.


You'd have to hope so! I wanted to use the site, but having it down didn't bother me. But if I depended on it for my day job, I'd want more than just my source code available.


Sort of. When was the last time you cloned a repo remotely from another user's laptop who is sitting hundreds of miles away? Do you even have an account on their laptop to be able to SSH to it? I am no Linus. I don't email patches. I want to grab my coworker's latest pushed changes, and if GitHub is down, I cannot do that. That sucks.

Also, our deploy process is to do a `git pull` on the remote server, then run the build process, and finally to deploy the built stuff. When GitHub is down, we don't have a process for this.

I agree, both of these things could be avoided by having a different procedure in place, but that would obviate the need for GitHub altogether. Why use it when we can already share code and push it to our servers to be built? The point of GitHub is to provide a nice upstream that you can push to and pull from a la SVN because for most projects that model works really well. git provides all the niceties of local branching, rebasing, etc. while GitHub makes it easy to collaborate.

Having said that I'd love to hear how others handle this type of challenge.


It depends on what you mean by "work". In a lot of organizations, github downtime is a blocker for doing a release. Writing and committing code is not the only kind of work.


No, that's one of several features of git. It's not "the point"


I thought the error message was

"Commit locally github is down"


Kudos to the folks at Github for such a summary of the attack! Clear, honest, and with a decent amount of info.


I'm not sure why someone would attack GitHub. Extortion? But aren't there more valuable targets? Showing off their botnet, perhaps? These attacks seem frequent.


Unfortunately there may not be a "good" reason. It's not hard for me to imagine that someone could do it for kicks, or perhaps to test some new attack vectors against a reasonably hardened target.


It's not hard for me to imagine that someone could do it for kicks, or perhaps to test some new attack vectors against a reasonably hardened target.

Though when you try out a new attack vector, the community (hopefully) publishes enough details about the attack to help the next target more effectively deal with that particular attack.

If someone is attacking whitehouse.gov or a similar target, I somewhat understand their reasons why (though I don't agree with them). github.com, on the other hand, while a for-profit corporation, is also a valuable bit of Internet infrastructure that makes the world a better place. Attacking targets like github.com, Wikipedia, and others helps no one, and forwards no coherent political agenda.

So I'm going with the "for kicks" / jerkwad theory.


Lots of businesses rely heavily on Github for their operations. Sure, Git is distributed, but there's more to Github than just Git; think of continuous integration/deployment tools that integrate with the service, PRs, etc.

Attacking Github means you deny service to many more than just Github itself. Which, I guess, makes it more valuable than another target.


It surprises me that GitHub doesn't sell/license a 'GitHub Appliance' that can get installed locally, but mirrors with something on their side too.


Isn't that Github Enterprise? You install it fully on your own servers.



Usually it's some kid who wants to show off their botnet (or, more likely, their $5 booting service investment) to their friends on hackforums.

I deal with this crap all the time.


Just as anti-virus providers are sometimes suspected of creating viruses to help drum up business, I wonder if this could be a case of anti-DDOS service providers either doing some nasty marketing, or, looked at another way, running a protection racket.


Perhaps the attackers are drawn to sites they believe will later publicly document the attack, to learn whatever they can on how the operations team responded.


I always wonder this myself. Were I skilled enough to execute such large-scale attacks against someone or something, I wouldn't think of Github as a potential target. Github is just some code; it doesn't carry any political or financial weight. But then again, some people get a kick out of ruining stuff, so why not.


GitHub has been targeted by Chinese government hackers before, with a man-in-the-middle attack and by blocking GitHub with the Great Firewall. Maybe they are at it again?

http://www.theregister.co.uk/2013/01/31/github_ssl_man_in_th...

https://en.greatfire.org/blog/2013/jan/github-blocked-china-...


I'd be interested to know who their "DDoS mitigation service provider" is.


Their Pages hosting feature uses Fastly -- I would assume they use the same service across their infrastructure.


Cloudflare is a very popular choice.


Very popular among consumers. I think large organizations are more likely to pay someone like http://www.prolexic.com/


I have seen the folks from Prolexic at work and their service/platform is impressive.

In their spare time they take down botnets: http://www.prolexic.com/knowledge-center-ddos-vulnerability-...

http://arstechnica.com/security/2012/08/ddos-take-down-manua...

Shameless plug for HackMiami: if anyone is interested in learning how it's done up close, they run frequent talks/meetups locally: http://hackmiami.org/


I read this as 'prilosec.com' and was like "Yeah, if I worked for a really visible organization, I'd probably have an anti-heartburn medication habit."


Yes, but if you largely deal with non-web protocols - like git - Cloudflare's much less effective.


We have solutions that would definitely work for GitHub. :)


It is not CloudFlare, but I would be fascinated to learn some more technical detail of these attacks as I work on systems to block this type of Layer 7 attack.


    $ dig origin.github.com +short
    github.map.fastly.net.
    199.27.76.133


That's not really a valid test. If they're talking about dynamically rerouting traffic through the DDOS protection system, they're almost certainly using someone who starts announcing their IPs, then feeds the clean traffic back to them via a tunnel.

If that's indeed what they're using, the only way to tell would be to look at BGP announcements when they're actually under attack.


That's what I thought they were using, because of this:

"A simple Hubot command can reroute our traffic to their network which can handle terabits per second. They're able to absorb the attack, filter out the malicious traffic, and forward the legitimate traffic on to us for normal processing."


Anyone with a Renesys account can tell us who they used. Anyone want to shed some light here?


What motive does the attacker have?

There are lots of articles on HN about DDoS attacks on various websites or online services. Most of the discussion is about the bandwidth used and the technical mechanics of the attack and defense.

This is interesting, but there's little discussion of the economic motivation.

I assume the kind of infrastructure used to launch this attack is not free. I understand people or groups might be using this as a way to further various political agendas or simply for bragging rights. I also understand DDoS attacks might be an extortion tool.

In the former case, wouldn't the attacker try to loudly and publicly claim responsibility? In the latter case, wouldn't the defenders take pride in their "we don't negotiate with extortionists" stance while they're in disclosure mode?

Or maybe this is just some rich guy's private hobby, and he does it for the amusement he gets out of reading about people's reactions when they can't figure out who's responsible?

It seems like the set of rich guys who have the technical skills to do this kind of thing without getting caught would be kinda small. And if they hire people, the bigger their organization gets, the likelier they'll hire a law enforcement plant -- or simply someone with a conscience -- and the game will be up.

Organized crime might be a possibility, but I assume those guys are interested in making money, not just committing crimes and wreaking havoc. So what's the business model that motivates these attacks? If it's extortion, why do the targets feel comfortable revealing the attack, but uncomfortable revealing they're being squeezed for money?


> In addition to managing the capacity of our own network, we've contracted with a leading DDoS mitigation service provider. A simple Hubot command can reroute our traffic to their network which can handle terabits per second. They're able to absorb the attack, filter out the malicious traffic, and forward the legitimate traffic on to us for normal processing.

That's kind of awesome


This is what we use at our company. Recently bought by Akamai.

http://www.prolexic.com/why-prolexic-best-dos-and-ddos-scrub...


It is too bad ICMP Source Quench couldn't have been repurposed to help deal with these kinds of attacks. It would be extremely nice to be able to simply send a packet to each host involved in an attack and have them (and optimally routers in between) slow their rate to the target host.


The smaller a service is the easier it is to mitigate such attacks. All kinds of tools that smaller services can use (whitelists, software based filters such as iptables, location based filters and so on) are not available once you cross a certain level of scale. So any simplistic solutions that you might think of for a smaller service will likely simply not be applicable.


Wondering if, for a service like github, it would be possible to set up a whitelist of allowable IP addresses.

If an attack were launched, only addresses on that whitelist would be allowed until the attack was mitigated.

So while certain legitimate traffic would be blocked for sure, people who connect through fixed IP addresses that were whitelisted would get through and be able to do what they needed to do.

Thoughts?


Seems like it'd be pretty trivial to circumvent that. Just have your botnet do a few regular old requests to the network a few days before launching. That way the IPs of the botnet members get whitelisted.

For a website of GitHub's scale, I don't think it would be very effective, though maybe it could be helpful in combination with other measures.


No, I'm specifically talking about clients who have signed up entering in their ip address that they access from.

"Just have your botnet do a few regular old requests to the network a few days before launching."

Not talking about "whitelist sites that have made access in the last x days".

For example, on HN it would be easy to create a whitelist. They do something like it now, recognizing new people who signed up and keeping track of activity as well (by points).

You could have people identify the IP address that they access from, and additionally limit the whitelist by a certain period of time and level of activity.

The idea is not to be 100% perfect, but good enough so that if you are a regular user of github from an IP address at your office (as opposed to a wifi cafe) you will be able to get through.

This is, by the way, how registries limit access to their systems. It's all whitelist-based: you have to pre-identify the IP addresses that you will access the system from.

The whitelist only comes into play when under attack. And for sure yes if you are connecting from a new place you will be blocked. But others will not be blocked and there will be some access for some people.


There are ISPs who change your IP address every 10 seconds, not just on reconnection. This would complicate github too much; I'd rather have 2 hrs of downtime every now and then than have to input my IP address ranges from all my locations :D


It's not practical with the sheer number of legitimate IP addresses that access GitHub, unfortunately.


You could handle even the large numbers of IPs that access github using a Bloom filter (http://en.wikipedia.org/wiki/Bloom_filter).

It will let a small percentage of non-whitelisted IPs in, but would filter the majority of them out.
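A toy sketch of the idea (sizes, hash choice, and the addresses are arbitrary; a real deployment would size the bit array from the expected whitelist and the false-positive rate you can tolerate):

    // Minimal Bloom filter over IP address strings: k hash functions over an m-bit array.
    class BloomFilter {
      private bits: Uint8Array;
      constructor(private m: number, private k: number) {
        this.bits = new Uint8Array(Math.ceil(m / 8));
      }
      // FNV-1a with a per-hash seed; good enough for a sketch, not for production.
      private hash(value: string, seed: number): number {
        let h = 2166136261 ^ seed;
        for (let i = 0; i < value.length; i++) {
          h ^= value.charCodeAt(i);
          h = Math.imul(h, 16777619);
        }
        return (h >>> 0) % this.m;
      }
      add(ip: string): void {
        for (let i = 0; i < this.k; i++) {
          const bit = this.hash(ip, i);
          this.bits[bit >> 3] |= 1 << (bit & 7);
        }
      }
      mightContain(ip: string): boolean {
        for (let i = 0; i < this.k; i++) {
          const bit = this.hash(ip, i);
          if ((this.bits[bit >> 3] & (1 << (bit & 7))) === 0) return false;
        }
        return true; // possibly whitelisted (false positives let a little attack traffic through)
      }
    }

    // During an attack: admit traffic whose source IP might be on the whitelist.
    const whitelist = new BloomFilter(8 * 1024 * 1024, 7); // ~1 MiB of bits, 7 hashes (example sizing)
    whitelist.add("203.0.113.7");
    console.log(whitelist.mightContain("203.0.113.7"));   // true
    console.log(whitelist.mightContain("198.51.100.99")); // almost certainly false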


Is there any way to mitigate DDOS attacks systematically without sacrificing network neutrality?


If the majority of ASNs in the world followed BCP 38, these attacks would be more difficult because the origin would be easily identifiable. Today you can't tell where it's coming from because backwater networks see value in letting their customers emit forged packets however they like. So all you can do is mitigate and wait for them to move on.

BCP 38/RFC 2827 would change the DoS game, but it's been a best practice for longer than most of this audience has been alive and nobody yet gives a shit and/or they are too lazy to automate the implementation. So operators waste their lives cleaning up after bad actor ASNs that they can't even identify. I shouldn't be mitigating 65 Gbps destined for a controversial customer, the attacker should be removed from the Internet before I even notice.

You can tell from my tone that attacks are part of life for me. I'd venture that denials are the second largest problem facing the Internet today, behind the organizational structure of critical systems like DNS and ahead of spam and surveillance. However, there is now a sizable DoS prevention industry so I wouldn't be surprised if BCP 38 drifts into even more obscurity, but that's the cynic typing.
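For anyone who hasn't run into BCP 38: the rule it asks edge networks to enforce is conceptually tiny: only forward a packet arriving from a customer if its source address belongs to a prefix actually delegated to that customer. A sketch of the check itself (prefixes and addresses are made up; real gear does this in hardware or via uRPF):

    // Convert dotted-quad IPv4 to a 32-bit unsigned integer.
    function ipToInt(ip: string): number {
      return ip.split(".").reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;
    }

    // Does `ip` fall inside `cidr` (e.g. "192.0.2.0/24")?
    function inPrefix(ip: string, cidr: string): boolean {
      const [base, lenStr] = cidr.split("/");
      const len = Number(lenStr);
      const mask = len === 0 ? 0 : (~0 << (32 - len)) >>> 0;
      return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
    }

    // BCP 38 ingress filter: admit a customer's packet only if it claims a source
    // address from one of the prefixes delegated to that customer.
    function admitFromCustomer(srcIp: string, customerPrefixes: string[]): boolean {
      return customerPrefixes.some((p) => inPrefix(srcIp, p));
    }

    // Example: a customer assigned 192.0.2.0/24 trying to emit a spoofed packet.
    console.log(admitFromCustomer("192.0.2.55", ["192.0.2.0/24"]));  // true: legitimate source
    console.log(admitFromCustomer("203.0.113.9", ["192.0.2.0/24"])); // false: forged, drop it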


Actually, since this attack wasn't volumetric and was instead attacking GitHub's (TCP-based) applications, they have the rare ability to identify the attacker's drones and possibly hand the list off to someone that can get them shut down. Hopefully GitHub does the right thing here.


Most popular sites have huge lists of compromised machines. You can't really do anything with them, though. If you block compromised machines, you'll blow up your support team with people complaining they can't reach your site.

It's not in my interest to "Citizen's Arrest" someone with a pwnt node.


I answered a general question with a general answer, and you understand the point I made as evidenced by your usage of the word "rare," so I struggle to understand your usage of the word "actually" to express disagreement.


Is this really that uncommon? I thought botnets took advantage of compromised machines to perform TCP connections. Otherwise attacking a website would be "trivially" prevented by larger connections.


Direct peering with the eyeball networks helps a lot, as you can use smaller links (and thus smaller scrubbing devices) on them. I do not believe that it sacrifices network neutrality, really.

Comcast is really in a class of its own regarding one-sided peering policies, but other providers, like Cox for example, are fairly easy to peer with.


I'm quite surprised this happened to github... Sometimes I'm trying to look at some repos, but I apparently click too fast and have to wait before I can do other things. I thought they had DDoS attacks under control.


I find it odd that github can even be subjected to DoS attacks, but it seems it's only HTTP traffic. I also wonder why, or if, it is even possible to DoS the raw TCP layer of the git protocol.


You can DOS anything that has a network interface.


"A simple Hubot command can reroute our traffic to their network which can handle terabits per second."

Really? You have to round-trip through Campfire to control your network?


It's just the most efficient and visible way for us to do it; it's not the only way. Here are a couple of reasons why we like it:

1. It's scripted so you don't have to think about it at 3am.

2. The rest of the team can see it happening in realtime so you don't have to explain what you're doing via a side channel. They can see it happening.

3. It doesn't require specialized knowledge of routing to enable it. If the on-call engineer sees an attack and calls someone for guidance, it's super easy to tell them "type /mitigation enable" for instance.

4. Of course we can run the exact same script or login to our routers and manually change our BGP announcements if we need to.
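Not GitHub's actual script, of course, but for anyone curious what a chatops toggle like that can look like, here's a rough sketch of a Hubot listener (written as TypeScript; the /mitigation command name comes from point 3 above, while announceMitigationRoutes and its behaviour are invented for illustration):

    // Hypothetical helper that would flip the BGP announcements toward the scrubbing provider.
    // Invented for illustration; the real automation lives wherever your routing tooling does.
    async function announceMitigationRoutes(enable: boolean): Promise<void> {
      /* ... talk to routers / provider API ... */
    }

    // Hubot loads scripts that export a function taking the robot instance.
    module.exports = (robot: any) => {
      robot.respond(/mitigation (enable|disable)/i, async (res: any) => {
        const enable = res.match[1].toLowerCase() === "enable";
        res.send(`${enable ? "Enabling" : "Disabling"} DDoS mitigation rerouting...`);
        try {
          await announceMitigationRoutes(enable);
          // Everyone in the chat room sees the result, which is part of the appeal.
          res.send(`Mitigation ${enable ? "enabled" : "disabled"}.`);
        } catch (err) {
          res.send(`Mitigation change failed: ${err}`);
        }
      });
    };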


Why wouldn't they do this? They presumably want someone to look at the attack before engaging the protection, and I'm sure not all of their staff is able to make network changes. If they've got it automated to the point where a single command can do it, what does it matter via what method they use?

If they're all in Campfire anyway, there's no overhead here.


WTF is wrong with people attacking github and meetup.

DDoSing a government site I can understand, sure. (Aaaand now I'm on a list.)


tl;dr We're bad at detecting and handling layer 7 attacks. We're better now.

Dear github dudes, netflow is your friend.


Am I the only person who gets slightly annoyed whenever I read "an order of magnitude" and the article doesn't mention whether it's binary or decimal? What do you people think they're talking about? I'm guessing a decimal order of magnitude?


A good heuristic: if in doubt, assume the counting system we all use for everything, all the time.


a common sense suggestion, thank you.


A binary order of magnitude is only a doubling, and doubling traffic isn't exactly a DDoS, just a busy day, and likely within the spec of their current network to handle.

A decimal order of magnitude is a factor of 10, and would likely represent a problem.

Really not hard: you don't use the term 'order of magnitude' in binary, instead opting for 'bit shift' or 'doubling'.


You're not annoyed that they don't mention octal or hexadecimal? Base 6? Base 60?


For binary I would expect them to just say "double".


Doesn't matter to me; 2^X and 10^X are close enough to each other for pretty small to large values of X. When I say it, I usually mean something between ×5 and ×15.

I think avoiding the temptation of false precision is more important. Inconsistent units and the retention of insignificant digits in order to make numbers look bigger (or smaller) drive me up the wall, though.


2^10 ≈ 1,000, while 10^10 = 10,000,000,000.

Not that close, if you ask me.

If you mean something between ×5 and ×15, you are using base 10. ×16 is 4 orders of magnitude in binary, but only about 1 in decimal.

That said, to everybody asking that same question: please do not use binary orders of magnitude. Our language suffers every time somebody does that.


3-5 orders of magnitude is a decent margin of error for things like astrophysics.


I said something like that to a hot redhead geologist once, and she laughed in my face.


Most probably :)

I'd be surprised to hear someone using binary to talk about bandwidth... Or pretty much anything else.



