You might have typed the URL incorrectly, for instance. Or (less likely but certainly plausible) we might have coded the URL incorrectly. Or (far less plausible, but theoretically possible, depending on which ill-defined Grand Unifying Theory of physics one subscribes to), some random fluctuation in the space-time continuum might have produced a shatteringly brief but nonetheless real electromagnetic discombobulation which caused this error page to appear.
A part of me is sad about this. The old page functioned just fine and acted as a relic of the old Internet. I thought it was kind of cool how they left it alone, especially since it served its purpose just fine without costing anything. Not that I'm denying it was ugly as all hell :)
Google has really been stepping up their design over the past few months. I really like the subtle tweaks they made to the top bar across all Google pages:
That's true only under the assumption that the UI should be optimized for the signing-out process. It shouldn't; the redesign improves usability by hiding less frequently used options.
I'm a student, and I find Google Docs super useful. I just wish for a more consistent experience. For example, I like to edit with "Compact Controls." Sometimes that feature goes missing, and I'm forced to edit in full screen – which isn't terribly useful, because the controls are completely hidden.
Yeah, though I think they should cut back on using bold font all over the place. That should be used for emphasis, but the Goog wants to use it as the default.
Given the size of Google, I wonder if there was a whole team tasked with this development. A 404 page for a company as big as Google is an awfully big responsibility for just one person :)
To be honest, I prefer the idea of a more intelligent 404 page. You'd think Google would have sufficient horsepower to make a good guess at what you might have been trying to find.
Google's most frequently accessed pages (most prominently, the homepage itself) are designed by a team of pretty high-level engineers who strip bytes to reduce latency and bandwidth requirements. Every change is reviewed with extreme scrutiny. I'm sure the 404 page was designed with similar care.
If you view source, for example, the image is base64 encoded, and they don't even bother closing their tags on the page, because that's more bytes and the browsers don't notice. This was very carefully engineered.
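For anyone curious what that inlining looks like in practice, here's a toy sketch of the data-URI trick (the filename is made up and this isn't Google's actual tooling, just the general idea):

    import base64

    # Read a stand-in image and inline it as a data: URI, the same general
    # trick used on the 404 page so the image needs no extra HTTP request.
    with open("robot.png", "rb") as f:  # hypothetical filename
        encoded = base64.b64encode(f.read()).decode("ascii")

    img_tag = '<img src="data:image/png;base64,%s" alt="">' % encoded
    print(len(img_tag), "bytes of markup for the inlined image")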
Seems weird to have Arial in the font stack, as Arial is the default sans-serif font on Windows. (And Helvetica, which has the same metrics, is the default on OS X.)
Google uses Arial in the font stack across all of their pages, for consistency. I assume the following rationale is at least mostly correct.
If a computer doesn't have Arial as its sans-serif default for some reason, they still don't want the page to look any different whenever the machine is capable of displaying it correctly, because if it looks off, that affects their brand image.
Imagine if you did a Google search and everything looked just a little bit off (and you weren't a developer, so you didn't know why): you might think someone was hacking Google and stealing your information or changing your results. Consistency builds trust, and this is important to them.
Who in the world downvoted this? Please own up. Can you show that gzip does not, in fact, perform well at compressing repeated strings of text such as closing tags?
Wasn't me, but why make gzip do the work when you can do it once, easily, yourself? Sure, gzip can handle it, but their servers could serve the closing tags, and Google strips them anyway. The discrepancy is weird, is all.
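If anyone wants to settle the gzip question empirically, a quick sketch like this shows both sides of the argument (sizes will obviously vary with the input):

    import gzip

    # Toy comparison: repeated markup with and without closing tags.
    with_tags = b"<p>hello</p>" * 200
    without_tags = b"<p>hello" * 200

    print(len(gzip.compress(with_tags)), "compressed bytes with closing tags")
    print(len(gzip.compress(without_tags)), "compressed bytes without them")
    # gzip handles the repetition well either way; whatever small difference
    # remains is the kind of thing the byte-strippers are chasing.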
This was something I was thinking too. I watched a video from Matt Cutts yesterday in which he mentioned they have a "team" (I guess it could be just two people, but it sounds like more) entirely dedicated to parsing 404 pages that return 200 response codes. I too wonder if they had a team just for this, or maybe they have a "usability" team who were tasked with it?
This would be a completely different thing though - parsing 404 error pages that incorrectly return a status 200 OK could cause broken links to appear as duplicate content when crawled. I hadn't thought about this before, but I'm sure it does require a dedicated team to be able to distinguish this kind of content.
This is already shown by the rareness of a Google 404 - you hardly ever get one these days, except when a server crashes for about a second (where my preference would be: "you are the lucky winner who clicked your mouse at the very second our server crashed!").
The logo is probably generic enough that it's used elsewhere, so a user is likely to have it cached. The other image looks bespoke to this page, so it wouldn't be.
I don't understand how it validates - I'd always understood that html, head, [title] and body were required for a complete document. Certainly the draft spec at http://www.w3.org/TR/html5/semantics.html#the-html-element-0 appears to confirm this ...
Fair play to them though, they got a semantically and structurally deficient document to validate - it's like the IE6 of webpages ;0)
That's a strange version of "required". Basically, the elements are not required in a document until parse time, at which point they are inserted following an exhaustively defined algorithm.
It seems bizarre to me that you wouldn't simply define the location of meta elements strictly as being in the head, but instead specify that, should the parser find them elsewhere, they should be wrapped into a head element.
At a brief glance, it looks like one can just drop a meta tag, say, anywhere in the document and the parser has to move it into the head element?
I didn't realise that they were encouraging tag soup; this isn't a part of the spec I've seen before. This sort of complex parsing algorithm wasn't in XHTML 1.x or HTML 4.x, was it?
The complex parsing algorithm wasn't spelled out in excruciating detail, as it is in HTML5; much of it was implied, and left for the parser developers to figure out.
Strictly, the HTML, HEAD, and (BODY|FRAMESET) elements are required in a valid document, but the tags delimiting them are optional. That way, code which manipulates the DOM can always count on a HEAD element being present, and CSS selectors can use 'body' as a root, even if the tags themselves are missing from the source HTML file.
The first actual required tag in an HTML 4 document is <title>, as far as I know. Every HTML document has to have one, and it needs to be opened and closed explicitly. If it's the first thing in the document, it implies an <html><head> before it, and if body content comes after it, that will imply </head><body> as well.
You could put a <meta> tag anywhere before the first body content, and it would still be part of the implied HEAD element. As long as it doesn't come after the (explicit or implicit) </head> tag, it shouldn't cause the document to fail validation.
And no, none of this is valid XHTML. XHTML is always strict, and all opening and closing (or self-closing) tags must be present in the source file.
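A quick way to watch those implied elements appear is to run a fragment through an HTML5 parser - for example the third-party html5lib package in Python, which implements the spec's parsing algorithm (assuming it's installed; the markup below is made up):

    import html5lib

    # No <html>, <head> or <body> tags in the source: just a title, a meta
    # and some body content.
    doc = "<title>Hi</title><meta name=x content=y><p>That's an error."

    tree = html5lib.parse(doc, namespaceHTMLElements=False)
    print(tree.tag)                              # 'html', inserted by the parser
    print([child.tag for child in tree])         # ['head', 'body']
    print([el.tag for el in tree.find("head")])  # title and meta both end up in head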
I don't think the older ones do; however, I'm sure Google gives them a different 404 page (change your User-Agent to IE6 and see how different the search results HTML is).
Which could be another indication of why the Google logo itself is linked (besides it likely being in cache as well). The page degrades well in IE6, imho - the cutesy robot isn't essential to the page; the logo, however, is.
Base64 isn't exactly efficient. (Though it might be more efficient than an additional HTTP request. But if so, why aren't they using it for the homepage logo, say?) I wonder if they're using it as non-useless padding to stop the IE-overrides-too-short-404s behavior.
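Back-of-the-envelope, those two effects pull in opposite directions (the image size below is a guess, and the ~512-byte figure is the commonly reported threshold for IE's friendly 404 pages, not something I've verified against this page):

    import math

    # Base64 turns every 3 raw bytes into 4 ASCII characters, so inlining
    # costs roughly a third more bytes than serving the image separately
    # (before gzip gets a chance to claw some of that back).
    image_bytes = 8000  # hypothetical size of the robot image
    inlined = math.ceil(image_bytes / 3) * 4
    print("%d raw bytes -> %d inlined (~%.0f%% larger)"
          % (image_bytes, inlined, 100.0 * inlined / image_bytes - 100))

    # If the goal is also to pad the body past IE's reported ~512-byte
    # friendly-error threshold, the inlined image does that with room to spare.
    print("well past the ~512 byte mark:", inlined > 512)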
The response says the server is "sffe". It seems to be used for static web hosting on android.com and for serving HTTP error codes for other sites. I'm guessing it's running on the edge servers, but Google hasn't publicized what it is, as far as I could find.
This is nice, but more developed, cutesy ones with animals and such are all the rage. Maybe they should ask the guy at The Oatmeal comic if he'd be willing to help; he did the Tumblr one for free iirc and it's fairly awesome.
Does it matter whether 404 pages are ugly or not? I mean, it's a nice experience for a confused user, but I'd frankly rather it made suggestions as to similar URLs that might exist, where possible.
I thought "That’s all we know." is cute because this is probably the only moment in Google's history to admit not being able to find/ know the reason why as a superpower search engine! =)