
Unless you have intimate knowledge of their network topology, know the specifics of where those pinged IPs live in that topology, and know what routes were used to provide DNS results, you can't say that it wasn't a routing issue.

"Routing" is a rather generic term when it comes to large networks, and everything from border routers, firewalls, load balancers, and switches actually perform routing.

Especially (as I've mentioned in another post) when you add fault tolerance / failover configurations to the mix.

Routing failure doesn't have to be an all-or-nothing thing. There are a number of ways in which I can see ICMP echo packets working but other traffic not, especially when you include complexities of source routing, load balancing, failover, etc.
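
To make that concrete, here's a minimal sketch (the hostname is a placeholder, nothing GoDaddy-specific, and the ping flags assume a Linux-style ping binary) of the symptom I mean: a box that answers ICMP echo while a TCP connection to its DNS port times out, because the two kinds of traffic can be handled differently by routers, firewalls, and load balancers along the way:

    # Minimal sketch: compare ICMP reachability with TCP reachability on port 53.
    # The hostname is hypothetical; ping flags assume a Linux-style ping binary.
    import socket
    import subprocess

    HOST = "ns1.example.com"  # placeholder nameserver

    def icmp_reachable(host):
        """True if the host answers a single ICMP echo request."""
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    def tcp_reachable(host, port, timeout=2.0):
        """True if a TCP connection to host:port can be established."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        print("ICMP echo :", icmp_reachable(HOST))
        print("TCP to 53 :", tcp_reachable(HOST, 53))

Seeing True for the first check and False for the second is exactly the "ping works, DNS doesn't" situation: the path ICMP takes and the path everything else takes don't have to fail together.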

Even something as "simple" as a poisoned ARP cache in a single box could screw up the entire internal network and cause the problems they've had, and still be considered a "routing issue".

$0.02




None of that is necessarily incorrect... but per their news release, 'corrupted router data tables' (their words) were the issue. I can't read too much into that, but it still doesn't change the fact that DNS wasn't resolving for clients for a while after they made their Verisign change, yet their own website resolved once that change was made.

You are correct that I don't know the details of their internal network and I never said otherwise, just that the chain of events and their claims don't necessarily match up!


I can imagine that they'd understandably work to get their own site/etc up and running first as the priority, as a manual "hack". After all, it's the main page everyone would be going to for information on what's going on.

After that, coming up with an automated process for migrating what must be a shit-ton of zone information to another system must have taken some time. I have no idea what their specific solution was, but I'm fairly confident it wasn't just a matter of copying over a few zone files. They'd probably have to do SOME sort of ETL (extract / transform / load) process that would take time to develop and test, never mind run.
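
Purely for illustration (I have no idea what GoDaddy actually built), a bare-bones version of that extract / transform / load idea might look like this: pull records out of a naive BIND-style zone file and dump them in whatever flat shape the target system wants. The file layout and CSV output here are assumptions; a real migration would also have to handle $ORIGIN/$TTL directives, SOA serials, relative names, and millions of zones, which is where the real time goes:

    # Bare-bones ETL sketch for zone data; the zone file layout and CSV target
    # format are assumptions, not anything GoDaddy-specific.
    import csv
    import sys

    def extract_records(zone_path):
        """Yield (name, ttl, rtype, rdata) tuples from a very simple zone file."""
        with open(zone_path) as f:
            for line in f:
                line = line.split(";", 1)[0].strip()  # drop comments
                if not line or line.startswith("$"):
                    continue  # skip blanks and $ORIGIN/$TTL directives
                parts = line.split()
                # naive layout assumption: NAME TTL IN TYPE RDATA...
                if len(parts) >= 5 and parts[2] == "IN":
                    name, ttl, _, rtype = parts[:4]
                    yield name, ttl, rtype, " ".join(parts[4:])

    def load_as_csv(records, out_path):
        """The "load" step: write transformed records as CSV for the target system."""
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["name", "ttl", "type", "rdata"])
            writer.writerows(records)

    if __name__ == "__main__":
        load_as_csv(extract_records(sys.argv[1]), sys.argv[2])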

And I can't remember the last time I gave technical information to a PR person who actually got it 100% technically correct. ;)

My intention wasn't to shit on your point or in any way defend GoDaddy and their screwup; I'm just thinking it's a bit unrealistic to try to infer detailed information from a PR release.

In the end, it was technical, they screwed up, and I doubt they'd ever release a proper, detailed post-mortem of what happened.


Heh, yeah. It is a bit difficult to interpret PR speak (and I have had to correct our guy before).

I think perhaps the takeaway here is to not trust what is being said, go with your gut... and move any services off GoDaddy ;). It would be nice if, like Google or Amazon, they would release a real post-mortem. Even if it's an internal 'uh-oh', I trust companies that are willing to admit to mistakes.


"Would be nice if like Google or Amazon they would release a real post-mortem post."

Possible but highly unlikely. Godaddy is "old school", which means they will release as little info as necessary and move on. They aren't interested in the hacker community. Their primary market is SMBs.


I don't see it as defending GoDaddy at all, quite the opposite. I would be more reassured if it was an unexpected massive DDoS which they weren't prepared for but one which they might prepare for in the future.

The way it's described now, it's a weakness in their infrastructure, and I wonder whether it's possible to prevent this from happening again.


"The way it's described now is a weakness in their infrastructure"

Godaddy has plenty to lose by f-ing up. And to my knowledge (as a somewhat small competitor; I'm just pointing that out so my thoughts are taken in context) they have a fairly robust system (anecdotal) for the amount of data they manage. My issues with godaddy (as a competitor) were always on the sell side, the issues of constantly selling you things you don't need etc. Technically I really didn't have any issues with them.


While it could indeed be a routing issue, who's to say that it wasn't caused intentionally by the guy in the tweets? It would be in GoDaddy's interests to cover that up and fix whatever exploit he used to get in, instead of admitting a security breach.



