Hacker News

Systems failing is not evidence of systems not existing.



So why didn't the 'automatic failover' kick in during the outage? Where was it then? I don't see anything about 're-routing traffic' anywhere on the status page [0]

[0] https://status.fastly.com/incidents/vpk0ssybt3bj


We don't know, but the usual scenarios are: the issue impacts the failover mechanism too; the failover mechanism overloads other system components, leading to a cascading failure; or something causes the failover mechanism to think all is fine.
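To illustrate that last scenario, here's a toy sketch (not Fastly's actual architecture; every name here is invented) of a failover controller whose health check shares a dependency with the primary serving path. When that shared dependency is what breaks, the check still reports healthy and no re-routing ever happens:

```python
# Hypothetical example: the health check only probes a shared config
# store, not end-to-end traffic serving -- a classic monitoring blind spot.

def make_health_check(shared_config):
    """Return a check that verifies the config store, not actual serving."""
    def check():
        return shared_config.get("status") == "ok"
    return check

def failover_controller(health_check):
    """Re-route traffic to the backup only if the health check fails."""
    if health_check():
        return "primary"   # controller believes all is fine -> no re-route
    return "backup"        # re-route traffic to the backup

# A bad config push breaks traffic serving, but the config store itself
# still answers "ok", so the check passes and failover never triggers.
shared_config = {"status": "ok"}   # store is up, even though serving is down
chosen = failover_controller(make_health_check(shared_config))
print(chosen)  # -> "primary": traffic stays routed to the broken path
```

The point of the sketch: a failover system can "exist" and still do nothing, because it only acts on the signal it measures, not on the outage customers experience.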


> We don't know...

So, the rarest of cases (our network isn't serving traffic) just happened, their failover system apparently took a snooze, yet according to you 'it exists'.

Tell that to the huge clients that lost sales because of this, when all you have to say is: "wE DoN'T kNoW..."


> Tell that to the huge clients that lost sales because of this, when all you have to say is: "wE DoN'T kNoW..."

Tell these clients that they should've carefully read their contract with Fastly, especially the 'Service Level Agreement' part.


Not the point. They were also told that a failover system would kick in and re-route traffic if there were any issues, but it was nowhere to be seen.

A worldwide outage happened that affected almost all locations and everybody, so the SLA is effectively meaningless in this case. Where was the extra redundancy? Where was the failover system? Why were other companies indirectly affected?

As far as I know, Fastly's status page was itself down during the outage. The fact that the best answer to this is 'we don't know' tells you everything you need to know. Maybe stop victim-blaming in this situation and focus on the main culprit.



