To GitHub and everyone: please use UTC timestamps when there are potential readers outside your timezone. Since every technical person should know their current UTC offset, calculating the local time is easy.
Timezone hell is, of course, followed up with the three-headed hound of locales, charmaps, and keyboard layouts. And if you pass by that, you drop into the river of cache coherency...
(Esoteric software-engineering topics could make a wonderful video game, you know?)
I once wrote a scheduling SaaS app, thinking it would be a short, trivial project. It turns out that UTC offsets don't even stay consistent within time zones throughout the year due to differences in daylight saving policy.
We've probably all been there at one time or another, thinking that "unix timestamps" are all that's needed to represent and perform date/time calculations.
Along with the recommendation to never invent your own cryptography, you should also never write your own date/time routines. Use well-tested functions in your database or programming-language libraries.
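For example, here's a minimal sketch of the difference (TypeScript, but the same point holds for .NET, your database, or anything else; the timestamp and zone name are just for illustration):

```typescript
// A minimal sketch: lean on the platform's date handling instead of hand-rolled offset math.
const releaseTime = new Date(Date.UTC(2014, 2, 30, 14, 0, 0)); // 2014-03-30 14:00 UTC

// Hand-rolled: "Pacific is UTC-8" baked in, silently an hour off once DST starts.
const naivePacific = new Date(releaseTime.getTime() - 8 * 60 * 60 * 1000);

// Library: the zone's rules, DST transitions included, are applied for you.
const pacific = releaseTime.toLocaleString("en-US", {
  timeZone: "America/Los_Angeles",
  timeZoneName: "short",
});

console.log(naivePacific.toUTCString()); // claims 06:00, but it's actually 07:00 Pacific that day
console.log(pacific);                    // e.g. "3/30/2014, 7:00:00 AM PDT"
```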
At my first job I inherited a project from a senior dev who had decided to implement his own date logic instead of using what .NET provides.
Anyway, come Feb 29th, 2008, I got a call that some users were having issues. After not too long I figured out it was caused by the leap year: his logic didn't account for leap years at all. So I called them back and told them it would fix itself by tomorrow. I never actually bothered to fix the bug because a) I thought it was unlikely that the POS system would still be in use in four years, and b) the code was so bad in all areas that trying to fix anything had a good chance of breaking something else.
Let's just say that job was very character building.
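For what it's worth, leap years are a nice illustration of why the "don't roll your own" advice above holds. A rough sketch (hypothetical TypeScript, obviously not the original .NET code):

```typescript
// The classic hand-rolled traps, versus just asking the date library.
const naiveIsLeap = (year: number): boolean => year % 4 === 0; // wrong for 1900, 2100, ...

// Full Gregorian rule, if you really must write it yourself.
const isLeap = (year: number): boolean =>
  (year % 4 === 0 && year % 100 !== 0) || year % 400 === 0;

// Or let the library decide: Feb 29 rolls over to Mar 1 in non-leap years.
const libraryIsLeap = (year: number): boolean => new Date(year, 1, 29).getDate() === 29;

console.log(naiveIsLeap(1900), isLeap(1900), libraryIsLeap(1900)); // true false false
console.log(naiveIsLeap(2008), isLeap(2008), libraryIsLeap(2008)); // true true true
```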
There was a comment in a piece of code I was looking through once which read, "Except here universal time is actually Pacific Time, maybe they will get the prime meridian moved."
Much of the code assumed that all times were in PST and the commenter had apparently been incorporating data sources from other time zones into the code. I thought it was pretty funny.
Except I have the unholy, irrational, evil daylight saving time in my area, and I can't remember when the clocks go forward or back, so I can't tell whether my UTC offset is currently right or not.
UTC most certainly does not have DST. Many people use GMT as a synonym for UTC, and in that sense, it doesn't either. Britain goes to "BST" (British Summer Time) during the summer. From Wikipedia[1]:
> Greenwich Mean Time (GMT) originally referred to the mean solar time at the Royal Observatory in Greenwich, London, which later became adopted as a global time standard. It is for the most part the same as Coordinated Universal Time (UTC), and when this is viewed as a time zone, the name Greenwich Mean Time is especially used by bodies connected with the United Kingdom, such as the BBC World Service, the Royal Navy, the Met Office and others particularly in Arab countries, such as the Middle East Broadcasting Center and OSN. It is the term in common use in the United Kingdom and countries of the Commonwealth, including Australia, South Africa, India, Pakistan and Malaysia, and many other countries in the Eastern Hemisphere.
> Before the introduction of UTC on 1 January 1972, Greenwich Mean Time (also known as Zulu time) was the same as Universal Time (UT), a standard astronomical concept used in many technical fields.
> In the United Kingdom, GMT is the official time during winter; during summer British Summer Time (BST) is used. GMT is the same as Western European Time.
Three-letter timezone abbreviations are ambiguous, though. For example, Australia's east coast timezone is "EST"/"EDT" just like in the US. If you want to present a time in a non-UTC timezone, you really need to use syntax like "UTC-0700".
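Something like this, for example (a hypothetical TypeScript helper, purely to illustrate the format):

```typescript
// Render a Date with an explicit numeric offset ("UTC-0700") instead of an
// ambiguous abbreviation like "EST". Hypothetical helper for illustration.
function formatWithOffset(d: Date): string {
  const offsetMin = -d.getTimezoneOffset(); // JS reports UTC minus local, so flip the sign
  const sign = offsetMin >= 0 ? "+" : "-";
  const abs = Math.abs(offsetMin);
  const pad = (n: number) => String(n).padStart(2, "0");
  const local =
    `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())} ` +
    `${pad(d.getHours())}:${pad(d.getMinutes())}`;
  return `${local} UTC${sign}${pad(Math.floor(abs / 60))}${pad(abs % 60)}`;
}

console.log(formatWithOffset(new Date())); // e.g. "2014-03-30 07:00 UTC-0700"
```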
It's bad customer service to require a user to crack open a terminal in order to understand your announcements. UTC is coordinated universal time. It's a standard. Just use it.
Sadly, the user's TZ isn't available to JavaScript. This is why so many sites have you input that as part of your preferences.
You can get the local time with JS, and you can get UTC, and from those two you can narrow it down (you know the current UTC offset, but that's it), but you're not going to get a single answer.
But you're just converting one time to another. If you know their reported UTC time and their reported offset to local, isn't that all you need? For greater precision, ask for their geolocation or use the IP address to narrow down the possibilities, though the offset obviously helps.
Now that I think about it, you could auto-suggest based on offset and embed a list of locations per option if they needed to pick between daylight savings or not, for instance. There's room for an interesting widget or service, but it would need to be updated almost as much as a US sales tax calculation widget ;-)
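A rough sketch of that auto-suggest idea (hypothetical TypeScript; the offset-to-zone table is made up and nowhere near complete, a real one needs the tz database):

```typescript
// Guess candidate zones from the browser's current UTC offset. You only learn
// the offset at this moment, not the zone, so the best you can do is suggest
// and let the user pick. Tiny made-up sample table; a real one needs tzdata.
const zonesByOffsetMinutes: Record<number, string[]> = {
  [-420]: ["America/Los_Angeles (in summer)", "America/Phoenix (no DST)"],
  [0]:    ["Europe/London (in winter)", "Africa/Abidjan", "UTC"],
  [60]:   ["Europe/London (in summer)", "Europe/Berlin (in winter)"],
};

function suggestTimeZones(): string[] {
  const offset = -new Date().getTimezoneOffset(); // minutes east of UTC
  return zonesByOffsetMinutes[offset] ?? [`somewhere at UTC offset ${offset / 60}h`];
}

console.log(suggestTimeZones()); // the widget would present these as options
```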
There are other ambiguities - try Ireland, Israel, and India for IST [1]. It turns out those abbreviations aren't part of an official standard anyway, afaict, so it's best to avoid them altogether.
You ever sit there and wonder who the person is on the other end of the attack? Someone sitting there, I guess with not much on that day, decides to command their army of infected bots to attack github.
Why github I wonder? Perhaps it provides a challenging target. Perhaps github is used as a testing ground for a more profitable future attack.
We often get technical writeups after a DDoS attack; however, we very rarely get a writeup surmising the motive behind it. I can't believe every attack is simply driven by 'because they can'.
There's a good piece of journalism (EDIT: see the reply to this comment for the link) in which someone who was being DDoSed went into the various black-market forums where the sort of people who would DDoS hang out, to see if they could find their attacker.
What they found was that, at one forum, there was a convention where new, as-yet-untrusted sellers of DDoSing-as-a-Service are expected to take down some big, technically-respected target (e.g. GitHub) to prove their mettle, before anyone would hire them. And conveniently-enough, some new DDoSaaS seller was advertising right then, telling everyone to "look and see that _________ is down. That was me! Hire me now!"
It further turned out that the site-owner doxxed the account, found them to be a thirteen-year-old(!), and called their parents to tell them what their child was doing on the internet.
Github is a valuable target if attackers are trying to get access to private repositories. A lot of organizations have their entire code base on Github.
I believe if you're doing that to get at something, you use the DDoS to hide your true intentions. E.g. you keep the DDoS at a level that slows the site way down, so the admins are paying attention to that while you go after what you need. It's like blowing up the building across the street during a bank robbery: you distract the people looking for you so that you can get away.
If you can get a crash, you can possibly find a weakness. If a system relies on GitHub to work properly, maybe you can find a vulnerability that shows up when GitHub is down. Maybe a MITM (of your target, not GitHub itself) is easier to perform when GitHub is slowed down and takes minutes to respond to requests.
There are various possibilities; nevertheless, bringing down GitHub is surely a juicy objective.
It just isn't possible to say anything specific about an anonymous person's motivation. Only if they had a name and a face could you talk with any certainty.
Perhaps some developer wasn't going to make his deadline and wanted to take down the network to give himself more time? Who knows...
As people leverage the deployment API more and more for production use-cases, an attack on github might come to represent an attack on the ability of MANY products to push changes and respond to their OWN attacks in a comfortable manner...
Ha! This is good satire, but for me personally, github going down isn't a version control problem nearly as much as it is a project collaboration problem. I can't go review pull requests and discuss issues when github is down, but I can still do all the traditional version control activities. Github is so much more than distributed version control. If somebody started doing the non-version-control things that github does in a distributed fashion, I would be very interested in taking a look.
Actually, there are quite a large number of researchers who have concluded that the underlying architecture of the internet itself needs to be more distributed. They call it data-oriented or name-based networking, content-centric networking, etc. The hardest part about switching over to a fully distributed internet is maintaining existing business models. Either someone will figure out and successfully market a business-friendly universal data/computing distribution system/network, or we will see free ones pop up that start to supplant more and more centralized services that are associated with particular domain names on the traditional internet. We are also eventually likely to see very strong pushback against ISPs that overcharge for business internet. Well, at least in a sane and just world all of this would come true shortly.
Call me naive, but I fail to imagine why someone would want to DoS GitHub.
I mean, if you're into this, it's certainly fun to launch DOS attacks against large "evil" things such as government services, large corps and Micro$oft becoz w1ndoz sux0rz, but... Github? Why?
Can you think of an easier way to negatively impact productivity at nearly every tech startup? There might not be a reason other than simple bullying. In fact, I think these attackers might be the same ones behind that 2048 game.
Others mentioned GitHub competitors, but I think it could be anyone's competitors. I.e. your competitor knows you're using GitHub to handle your continuous deployment (or whatever reasons), and by bringing GitHub down, they bring your whole development environment down.
That's just a (dumb) hypothesis. The cost of doing so is probably not worth it.
Git is federated, as in Jabber. A branch has a place where it lives. If you do something with a particular branch, you need access to the place where it lives but not to anywhere else.
Monotone is distributed, as in Usenet. A branch may only exist on certain servers, but it is not logically tied to any one server (or set of servers). There is a global namespace for branches.
I think other systems (hg) tend to be more like the Git model, because permissions handling is far simpler.
Yup, I kind of had this same realization once, because I didn't want to sign up for private repos after I'd left the company that had paid for the GitHub subscription...
Hard to recreate the logic now, though O.o
But it was something in the vein of: "if I learn to set up git correctly, I shouldn't need a unique root, so what is it then... a backup with visualization? I don't need that for a private repo; I can use offline tools."
Competition would be my first guess. Or an active employee somewhere that doesn't want to use github: "See, it's offline, we should roll out our own solution/keep using what we have". Or just testing out botnet on an appropriately sized target. Or anything else really. There are dozens of valid reasons.
Considering that so many software companies use Github for version control, and have it tied into their deployment workflow, it might make sense for Company A (not dependent on Github) to DOS in order to hurt Company B (dependent on Github).
In fact, some software companies run GitHub on their private servers (https://enterprise.github.com/), which prevents them from being affected by a DoS on GitHub's servers. Granted, these companies probably made the decision to use their private servers for other performance and security reasons as well, but avoiding DoS could very well be one of them.
It is the same thing I've wondered. What has github done to anybody? Is it for the L0lZ? Is it because they happen to host somebody's gaming code, or IRC code that they feel should be taken offline?
Though when you think about it, taking out github is a very effective way to basically kill the productivity of many dev shops. At the place I'm working now, we are basically dead in the water while github is down (can't do deployments, can't merge pull requests, can't run automated testing, etc).
Because, although some DoS attacks follow a 'moral' code (according to whoever's morals are relevant at the time), not every DoS attack does, in the same way that some people steal from those who have less, some people vandalise state property, and some launch wars based on lies. In fact, I'd go as far as to say that 'bad' things that happen probably don't have a justifiable moral purpose underpinning them more often than not, and that goes for DoS attacks too.
Github hosts a bunch of full websites too. It's not terribly uncommon for an entire hosting provider to be attacked simply because someone doesn't like what one user posted.
This is getting as pernicious as horse thieving in the old west. We should do what they eventually did about it: concentrated law enforcement. (Though the interim plan sounds attractive.)
I'm running GitLab, and the first thing I think when I read about their DDoS is 'how long before they attack us?'. DDoS attacks just cause everyone a lot of pain, and it would be great if nobody needed extensive countermeasures; DDoS operators waste everyone's time and resources.
If the attacks against Github are mostly proving grounds for fledgling DDoSaaS, I would assume write-ups like these only serve to elevate their status as a good proving ground.
Did this article contain anything particularly useful for anyone thinking about DDoS hardening? I didn't find anything. I guess it's not really supposed to be a technical article, just a smattering of buzzwords to let you know how hard they try.
The postmortem-half-apology has become quite an art form, as getting it right can actually draw a lot of positive publicity, and getting it wrong can be brutal. But I can definitely see how this post would feel like a pat on the back to whoever launched the attack.
Github downtime (and subsequent postmortems) are a regular feature of the HN front page. The postmortems have come to command their own audience, similar to the CloudFlare reports.
It's actually a pretty bad position Github's being put in. They sit at the crossroads of playing defense against DDoS and trying to dispel or at least ameliorate any blame for the downtime.
My point was, if they have indeed become the internet's DDoS proving ground (as several others were speculating), then while you can see how much effort they're putting into these postmortems, I can see it becoming a vicious cycle.
Then the challenge is, how does Github placate their users without basically pinning a ribbon on the attacker? The funny thing is how the "best practice" checklist for a postmortem (say what happened, say how you thought you were safe, say how something unexpected broke your assumptions, apologize, say what you're doing differently in the future) basically ties their hands.
There are likely many people who have not experienced this form of attack, and many who may not even have been explicitly aware of it; this article may have made them aware of it and of the common strategy for dealing with such attacks. That's good, and since so many developers use GitHub, the article likely reaches a larger audience than those "thinking about DDoS hardening".
What isn't useful is the narcissism of your comment, and the assumption that everything should be targeted at you and people like you.
It appears I not only failed to make my point to you, but may have offended you in the process. Sorry about that, perhaps my reply to asolove clears things up a bit.
I honestly feel bad for the engineers at GitHub for having to deal with stuff like this. GitHub is large, so they are a target, and the specifics of what they do mean that caching is not a straightforward task. I imagine there are a lot more vectors of attack that have not been used yet, and guarding against them is always going to be on a case-by-case basis. In the meantime, when GitHub is having downtime or even badtime, it impacts its users pretty significantly. The private repos I work on are a source of income for GitHub, but if this gets common enough, the people in charge might just move away to a smaller competitor that doesn't have these problems, just so that my time isn't wasted waiting for GitHub to come back up.
You still have your local source code, but GitHub gives you issue tracking, comments, pull requests, and other things you could still use for work - none of which can be cloned locally.
Okay yes, but it probably isn't going to be down for more than a few hours at a time. I'm sure pull requests and issues can wait a few hours vs. actually fixing bugs or writing new code. Right?
If you have a team that large and are relying on cloud services, then you need to have a plan in place for short interruptions like this. If you really have a 60-dev team, you should have knocked out all the SPOFs in the systems that support them - 60-dev teams are millions of dollars a year in salary, even at bargain basement wages. There is not a single serious provider that will guarantee you 100% uptime, because outages, however rare, do happen.
I don't disagree with you, but it's also not that simple. Anyway - I'm simply trying to point out that it is a big deal when a large cloud service like GitHub goes down.
I'm not sure what you mean by "a private GitHub enterprise account", but GitHub has a product called "GitHub Enterprise", which is essentially GitHub in a VM that you can install in your own data center. We have a lot fewer than 60 developers, and we have a GitHub Enterprise installation, partly out of concern about issues like this (and also maybe a little IP protection paranoia).
Again, not so easy when it's a massive multi-team effort - we have a continuous integration system that builds every branch (we use a GitFlow based model) on every commit that gets pushed to GitHub.
You can pull from each other, but since you know GitHub will be up in a few hours and it would take more than that to really coordinate any workflow change, most of the time people just goof off until GitHub is back.
Plus you are forgetting that a lot of automated jobs get triggered on GitHub changes. Many shops kick off all kinds of tests, deployments, and other things based on changes to the GitHub repo.
I find it easy to imagine that if the task you are doing right now requires looking at open issues, it could be a blocker. I depend on an internally controlled issue tracking system, and if my current task is creating or responding to defects and requirements, I'm not going to be happy if that system is down.
One other problem is that if you're using a tool like Capistrano to deploy, you usually set your remote repo to Github. When Github goes down, you can't deploy until it comes back up or you set up your own remote server.
You'd have to hope so! I wanted to use the site, but having it down didn't bother me. But if I depended on it for my day job, I'd want more than just my source code available.
Sort of. When was the last time you cloned a repo remotely from another user's laptop who is sitting hundreds of miles away? Do you even have an account on their laptop to be able to SSH to it? I am no Linus; I don't email patches. I want to grab my coworker's latest pushed changes, and if GitHub is down, I cannot do that. That sucks.
Also, our deploy process is to do a `git pull` on the remote server, then run the build process, and finally to deploy the built stuff. When GitHub is down, we don't have a process for this.
I agree, both of these things could be avoided by having a different procedure in place, but that would obviate the need for GitHub altogether. Why use it when we can already share code and push it to our servers to be built? The point of GitHub is to provide a nice upstream that you can push to and pull from a la SVN because for most projects that model works really well. git provides all the niceties of local branching, rebasing, etc. while GitHub makes it easy to collaborate.
Having said that, I'd love to hear how others handle this type of challenge.
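One rough approach (not claiming it's what anyone here actually does): keep a second remote on infrastructure you control and have the deploy script fall back to it when the primary is unreachable. A hypothetical TypeScript/Node sketch, assuming remotes named `origin` and `mirror` already exist:

```typescript
// deploy-pull.ts: try the primary remote, fall back to a self-hosted mirror.
import { execSync } from "child_process";

function pullWithFallback(branch: string): void {
  for (const remote of ["origin", "mirror"]) {
    try {
      execSync(`git pull --ff-only ${remote} ${branch}`, { stdio: "inherit" });
      console.log(`pulled ${branch} from ${remote}`);
      return;
    } catch (err) {
      console.warn(`pull from ${remote} failed, trying the next remote...`);
    }
  }
  throw new Error(`could not pull ${branch} from any remote`);
}

pullWithFallback("master");
```

The mirror still has to be kept in sync (e.g. by pushing to both remotes), which is exactly the kind of procedure most of us never bother to set up until GitHub is down.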
It depends on what you mean by "work". In a lot of organizations, github downtime is a blocker for doing a release. Writing and committing code is not the only kind of work.
I'm not sure why someone would attack GitHub. Extortion? But aren't there more valuable targets? Showing off their botnet, perhaps? These attacks seem frequent.
Unfortunately there may not be a "good" reason. It's not hard for me to imagine that someone could do it for kicks, or perhaps to test some new attack vectors against a reasonably hardened target.
> It's not hard for me to imagine that someone could do it for kicks, or perhaps to test some new attack vectors against a reasonably hardened target.
Though when you try out a new attack vector, the community (hopefully) publishes enough details about the attack to help the next target more effectively deal with that particular attack.
If someone is attacking whitehouse.gov or a similar target, I somewhat understand their reasons why (though I don't agree with them). github.com, on the other hand, while a for-profit corporation, is also a valuable bit of Internet infrastructure that makes the world a better place. Attacking targets like github.com, Wikipedia, and others helps no one, and forwards no coherent political agenda.
So I'm going with the "for kicks" / jerkwad theory.
Lots of businesses rely heavily on GitHub for their operations. Sure, Git is distributed, but there's more to GitHub than just Git; think of pull requests and the continuous integration/deployment tools that integrate with the service.
Attacking GitHub means you deny service to many more parties than just GitHub itself. Which, I guess, makes it more valuable than another target.
Just as anti-virus providers are sometimes suspected of creating viruses to help drum up business, I wonder if this could be a case of anti-DDOS service providers either doing some nasty marketing, or, looked at another way, running a protection racket.
Perhaps the attackers are drawn to sites they believe will later publicly document the attack, to learn whatever they can on how the operations team responded.
I always wonder this myself. Were I skilled enough to execute such large-scale attacks against someone or something, I wouldn't think of GitHub as a potential target. GitHub is just some code; it doesn't carry any political or financial weight. But then again, some people get a kick out of ruining stuff, so why not.
GitHub has been targeted by Chinese government hackers before, with a man-in-the-middle attack and blocking of GitHub by the Great Firewall. Maybe they are at it again?
Shameless plug for HackMiami: if anyone is interested in learning how it's done up close, they run frequent talks/meetups locally: http://hackmiami.org/
I read this as 'prilosec.com' and was like "Yeah, if I worked for a really visible organization, I'd probably have an anti-heartburn medication habit."
It is not CloudFlare, but I would be fascinated to learn some more technical detail of these attacks as I work on systems to block this type of Layer 7 attack.
That's not really a valid test. If they're talking about dynamically rerouting traffic through the DDOS protection system, they're almost certainly using someone who starts announcing their IPs, then feeds the clean traffic back to them via a tunnel.
If that's indeed what they're using, the only way to tell would be to look at BGP announcements when they're actually under attack.
That's what I thought they were using, because of this:
"A simple Hubot command can reroute our traffic to their network which can handle terabits per second. They're able to absorb the attack, filter out the malicious traffic, and forward the legitimate traffic on to us for normal processing."
There are lots of articles on HN about DDoS attacks on various websites or online services. Most of the discussion is about the bandwidth used and the technical mechanics of the attack and defense.
This is interesting, but there's little discussion of the economic motivation.
I assume the kind of infrastructure used to launch this attack is not free. I understand people or groups might be using this as a way to further various political agendas or simply for bragging rights. I also understand DDoS attacks might be an extortion tool.
In the former case, wouldn't the attacker try to loudly and publicly claim responsibility? In the latter case, wouldn't the defenders take pride in their "we don't negotiate with extortionists" stance while they're in disclosure mode?
Or maybe this is just some rich guy's private hobby, and he does it for the amusement he gets out of reading about people's reactions when they can't figure out who's responsible?
It seems like the set of rich guys who have the technical skills to do this kind of thing without getting caught would be kinda small. And if they hire people, the bigger their organization gets, the likelier they'll hire a law enforcement plant -- or simply someone with a conscience -- and the game will be up.
Organized crime might be a possibility, but I assume those guys are interested in making money, not just committing crimes and wreaking havoc. So what's the business model that motivates these attacks? If it's extortion, why do the targets feel comfortable revealing the attack, but uncomfortable revealing they're being squeezed for money?
> In addition to managing the capacity of our own network, we've contracted with a leading DDoS mitigation service provider. A simple Hubot command can reroute our traffic to their network which can handle terabits per second. They're able to absorb the attack, filter out the malicious traffic, and forward the legitimate traffic on to us for normal processing.
It is too bad ICMP Source Quench couldn't have been repurposed to help deal with these kinds of attacks. It would be extremely nice to be able to simply send a packet to each host involved in an attack and have them (and optimally routers in between) slow their rate to the target host.
The smaller a service is the easier it is to mitigate such attacks. All kinds of tools that smaller services can use (whitelists, software based filters such as iptables, location based filters and so on) are not available once you cross a certain level of scale. So any simplistic solutions that you might think of for a smaller service will likely simply not be applicable.
Wondering if, for a service like GitHub, it would be possible to set up a whitelist of allowable IP addresses.
If an attack were launched, only that whitelist would be allowed through until the attack was mitigated.
So while certain legitimate traffic would be blocked for sure, people who connect through fixed, whitelisted IP addresses would get through and be able to do what they needed to do.
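To make that concrete, a toy sketch (hypothetical TypeScript; in practice this kind of filtering belongs at the network edge rather than in app code, and the scale problems mentioned elsewhere in the thread still apply):

```typescript
import * as http from "http";

// Toy sketch: while under attack, serve only pre-registered customer IPs.
const whitelist = new Set<string>(["203.0.113.10", "198.51.100.42"]); // example documentation addresses
let underAttack = false; // flipped by ops tooling when mitigation kicks in

const server = http.createServer((req, res) => {
  // Real code would normalize IPv6-mapped addresses and look behind proxies.
  const clientIp = req.socket.remoteAddress ?? "";
  if (underAttack && !whitelist.has(clientIp)) {
    res.writeHead(503, { "Retry-After": "300" });
    res.end("Service temporarily limited to registered addresses\n");
    return;
  }
  res.end("ok\n");
});

server.listen(8080);
```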
Seems like it'd be pretty trivial to circumvent that. Just have your botnet do a few regular old requests to the network a few days before launching. That way the IPs of the botnet members get whitelisted.
For a website of GitHub's scale, I don't think it would be very effective, though maybe it could be helpful in combination with other measures.
No, I'm specifically talking about clients who have signed up entering in their ip address that they access from.
"Just have your botnet do a few regular old requests to the network a few days before launching."
Not talking about "whitelist sites that have made access in the last x days".
For example on HN it would be easy to create a white list. They do it now recognizing new people who signed up and keeping track of activity as well (by points).
You could have people identify the IP address they access from, and additionally limit the whitelist by a certain period of time and level of activity.
The idea is not to be 100% perfect but enough so that if you are a regular user of github from an IP address at your office (as opposed to wifi cafe) you will be able to get through.
This is, by the way, how registries limit access to their systems. It's all whitelisted: you have to pre-identify the IP addresses that you will access the system from.
The whitelist only comes into play when under attack. And for sure yes if you are connecting from a new place you will be blocked. But others will not be blocked and there will be some access for some people.
There are ISPs who change your ip address every 10 seconds, not just on reconnection. This would complicate github too much, I'd rather have 2 hrs downtime every now and then, than have to input my ip address classes from all the locations :D
If the majority of ASNs in the world followed BCP 38, these attacks would be more difficult because the origin would be easily identifiable. Today you can't tell where it's coming from because backwater networks see value in letting their customers emit forged packets however they like. So all you can do is mitigate and wait for them to move on.
BCP 38/RFC 2827 would change the DoS game, but it's been a best practice for longer than most of this audience has been alive and nobody yet gives a shit and/or they are too lazy to automate the implementation. So operators waste their lives cleaning up after bad actor ASNs that they can't even identify. I shouldn't be mitigating 65 Gbps destined for a controversial customer, the attacker should be removed from the Internet before I even notice.
You can tell from my tone that attacks are part of life for me. I'd venture that denials are the second largest problem facing the Internet today, behind the organizational structure of critical systems like DNS and ahead of spam and surveillance. However, there is now a sizable DoS prevention industry so I wouldn't be surprised if BCP 38 drifts into even more obscurity, but that's the cynic typing.
Actually, since this attack wasn't volumetric and was instead attacking GitHub's (TCP-based) applications, they have the rare ability to identify the attacker's drones and possibly hand the list off to someone that can get them shut down. Hopefully GitHub does the right thing here.
Most popular sites have huge lists of compromised machines. You can't really do anything with them though. If you block compromised machines, you'll blow up your support team by people complaining they can't reach your site.
It's not in my interest to "Citizen's Arrest" someone with a pwnt node.
I answered a general question with a general answer, and you understand the point I made as evidenced by your usage of the word "rare," so I struggle to understand your usage of the word "actually" to express disagreement.
Is this really that uncommon? I thought botnets took advantage of compromised machines to perform TCP connections. Otherwise attacking a website would be "trivially" prevented by larger connections.
Direct peering with the eyeball networks helps a lot, as you can use smaller links (and thus, smaller scrubbing devices) on them. I do not believe that it sacrifices network neutrality, really.
Comcast is really in a class of its own regarding one-sided peering policies, but other providers, like Cox for example, are fairly easy to peer with.
I'm quite surprised this happened to GitHub... Sometimes I'm trying to look at some repos, but I apparently click too fast and have to wait before I can do other things. I thought they had DDoS attacks under control.
I find it odd that GitHub can even be subjected to DoS attacks, but it seems it's only HTTP traffic. I also wonder why, or whether, it is even possible to DoS the raw TCP layer of the git protocol.
It's just the most efficient and visible way for us to do it; it's not the only way. Here are a couple of reasons why we like it (a rough sketch of what such a chat command might look like follows the list):
1. It's scripted so you don't have to think about it at 3am.
2. The rest of the team can see it happening in realtime so you don't have to explain what you're doing via a side channel. They can see it happening.
3. It doesn't require specialized knowledge of routing to enable it. If the on-call engineer sees an attack and calls someone for guidance, it's super easy to tell them "type /mitigation enable" for instance.
4. Of course we can run the exact same script or log in to our routers and manually change our BGP announcements if we need to.
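For the curious, a chat command handler along those lines might look roughly like this. This is a hypothetical TypeScript sketch, not GitHub's actual Hubot script; the minimal interfaces stand in for Hubot's real types, and rerouteTraffic() is a placeholder for whatever actually changes the BGP announcements:

```typescript
// Hypothetical chat-ops handler in the spirit of "/mitigation enable".
interface ChatMessage {
  match: RegExpMatchArray;
  send(text: string): void;
}
interface ChatBot {
  respond(pattern: RegExp, handler: (msg: ChatMessage) => void): void;
}

// Placeholder: call the mitigation provider's API or run the routing automation here.
async function rerouteTraffic(enable: boolean): Promise<void> {}

module.exports = (robot: ChatBot) => {
  robot.respond(/mitigation (enable|disable)/i, async (msg) => {
    const enable = msg.match[1].toLowerCase() === "enable";
    msg.send(enable ? "Rerouting traffic to the mitigation provider..." : "Restoring normal routing...");
    await rerouteTraffic(enable);
    // Doing this in chat doubles as the realtime audit trail the whole team can see.
    msg.send(`Mitigation ${enable ? "enabled" : "disabled"}.`);
  });
};
```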
Why wouldn't they do this? They presumably want someone to look at the attack before engaging the protection, and I'm sure not all of their staff is able to make network changes. If they've got it automated to the point where a single command can do it, what does it matter via what method they use?
If they're all in Campfire anyway, there's no overhead here.
Am I the only person who gets slightly annoyed whenever I read "an order of magnitude" and the article doesn't mention whether it's binary or decimal? What do you people think they're talking about? I'm guessing a decimal order of magnitude.
A binary order of magnitude is only a doubling, and doubling traffic isn't exactly a DDoS, just a busy day, and likely within the spec of their current network to handle.
A decimal order of magnitude is a factor of 10. And would likely represent a problem.
Really not hard: you don't use the term 'order of magnitude' in binary, instead opting for 'bit shift' or 'doubling'.
Doesn't matter to me, 2^X and 10^X are close enough to each other for pretty small to large values of X. When I say it, I usually mean something between x5 and x15.
I think avoiding the temptation of false precision is more important. Inconsistent units and the retention of insignificant digits in order to make numbers look bigger (or smaller) drive me up the wall, though.