The main problem with the plain "no www" record is that there is much less you can do with it in DNS. In particular, you can't make it a CNAME, and you can't delegate its DNS to somebody else without delegating the whole domain.
With www.example.com I can just make www a CNAME to a cloud or CDN vendor. With plain example.com I have to delegate DNS for my whole domain to that vendor (assuming they offer that service) or use an A record and go with one of the three anycast CDNs in the world.
Same if I want to use a GSLB service: I can just make it a CNAME or put in an NS record for my www.example.com. For example.com I'd have to run GSLB myself or put my whole domain under the control of the external vendor.
So all of my work's domains have the smart stuff on www.example.com, and the bare http://example.com is just a 301 to the www version.
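If you want to see that difference concretely, here's a rough sketch with dnspython (just an illustration with placeholder names, not our actual setup): the www name can come back as a CNAME, while the apex can only hand you address records.

    # Rough check: www can be a CNAME (e.g. to a CDN), the apex can't be.
    import dns.resolver  # pip install dnspython

    def show(name):
        try:
            for rr in dns.resolver.resolve(name, "CNAME"):
                print(name, "is a CNAME to", rr.target)
        except dns.resolver.NoAnswer:
            addrs = [rr.address for rr in dns.resolver.resolve(name, "A")]
            print(name, "has A records", addrs)

    show("www.example.com")  # often a CNAME pointing at a CDN hostname
    show("example.com")      # zone apex: A records only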
So, I believe it turns out that you can cheat and just define a non-www name as a CNAME to another non-www name. And that mostly works, kind of, maybe, even though it's a violation of the standards. Except that it'll break under some conditions, but maybe you won't care too much. Or something.
Bleargh. Frankly I've forgotten the headsplitting details of trying to finesse the DNS RFCs. What I do remember is that my company, which does cloud hosting on behalf of many customers, looked into this and determined that for now we should do as you suggest and actually follow the spec: Either point your A record directly at your webserver (or its reverse proxy) or use the www. subdomain as a CNAME. I was left with much sympathy for the folks who designed the standard www. subdomain in the first place: They did what they had to do to work sensibly within the DNS system without having to redesign entire corporate domains around the needs of the web server.
I think one of the main problems for no-www domains is that even though some DNS systems let you assign the root record to a CNAME, doing so breaks MX records: a CNAME can't coexist with any other record type at the same name, so it shadows the MX (and NS/SOA) records the zone apex needs.
Obviously not possible if domain and www.domain point to different hosts where the host for domain does not run a web server. I'm sure there are numerous other reasons.
Honest question, because I've never tried it. I would believe effectively no web browsers honor it correctly, and certainly there's a boatload of non-web-browser code that won't honor it correctly.
Completely agree; the minute I can host my www-less website on S3, I will. Until then, my www.$domain is a CNAME, and $domain is lucky if it gets a machine serving 301s.
I don't care much either way, except that I see some cases where using a 'www' hostname can be advantageous. On television and in commercials, it's a useful marker that what's being listed refers to a website. To the layman who is not familiar with all TLDs, 'example.nu' may look bogus, while 'www.example.nu' is immediately recognizable as referring to a website.
My take: 'www' helps in disambiguation, and sure is nicer than having to prefix all names with 'http://'.
Why is it horrendous? The string is concise, unambiguous, and works when you put it in the location bar. Including the scheme makes sense only when there is a risk of ambiguity, and even Tim Berners-Lee admits that exposing such a technical detail to users wasn't the best design decision.
I believe he said "horrendous" since there's a trend toward Facebook becoming, in some sense, a closed internet. There may come a time when not being a Facebook user means that you cannot see some significant portion of the Web.
Well, for facebook.com/productname that's a commercial decision - is it worth excluding a (probably) significant portion of the internet from viewing our product's site?
Why exclusion? FB Pages are public by default, no login required. Some individuals' comments may be invisible due to their privacy settings, but technically non-members are not excluded per se.
Why? The Internet encompasses a lot of protocols. Why assume every domain name supports HTTP? It's not fair to call them incompetent idiots when you're the one shooting in the dark. Without an unambiguous URL, you're only guessing, so accept the risk.
The problem is, the different services could be split across hosts, not just ports. It seems SRV records provide the only option here but they're not supported by browsers. That leaves www as the only viable option to move web serving into a different machine.
Most web sites sit behind load balancers and stateful firewalls. They have many, many opportunities to do the port translation to make $name and www.$name work the same way.
The theory you espouse, that www was the only viable option to move web service to a new machine, was true in the 90s when we started doing this. That's how the practice came about. It's 2011. It's not an issue anymore.
http:// defaults to port 80 and https:// to port 443, the two ports web servers run on. Regardless of the subdomain, if you have http:// or https:// in a URL you're getting a web server. Having to specify www.whatever.com to reach the web server (i.e. a website) is redundant; the URI takes care of that. Plus, the vast majority of people on the Internet don't understand protocols or why websites start with www. When the concepts of URIs and URLs were developed, it was to deal with the myriad of protocols available on the early Internet; HTTP was just one of many. But now that the World Wide Web is essentially the face of the Internet to the average user, I feel the tech community should adapt to the current situation and not force people to adopt our somewhat outdated standards. Browsers like Chrome just drop the http:// from the URL bar altogether now, and the forward-facing websites of organizations should drop www. too. I understand the DNS implications, but it's trivial to drop the www. at the webserver level.
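By "trivial at the webserver level" I mean something along these lines - a throwaway Python sketch rather than anyone's real config (substitute your actual domain for example.com, and use a proper server in production):

    # Minimal sketch: permanently redirect www.example.com requests to the
    # bare domain, preserving the request path.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class DropWWW(BaseHTTPRequestHandler):
        def do_GET(self):
            host = self.headers.get("Host", "")
            if host.startswith("www."):
                self.send_response(301)
                self.send_header("Location", "http://" + host[4:] + self.path)
                self.end_headers()
            else:
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(b"served from the bare domain\n")

    if __name__ == "__main__":
        HTTPServer(("", 8080), DropWWW).serve_forever()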
Of all the protocols on the Internet, HTTP is the most important user-facing one. Having your primary domain not support it, even on the level of blindly returning 301 to all HTTP requests, is tantamount to boarding up all the doors on your shop and hoping visitors guess you want them to come in via a rope dangling from the skylight.
"Mail servers do not require you to send emails to recipient@mail.domain.com. Likewise, web servers should allow access to their pages though the main domain unless a particular subdomain is required."
Sure, email is like this thanks to MX records. We could have the same for web servers by using SRV records ("_http._tcp.example.net" pointing at your "real" web server[s]) but I have no idea how many browsers support looking up SRV before A.
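If browsers ever did, the client-side lookup would be roughly this (a dnspython sketch; the _http._tcp name is the hypothetical one from above, not something browsers actually query today):

    # Hypothetical: resolve an SRV record for HTTP before falling back to A.
    import dns.resolver  # pip install dnspython

    try:
        answers = dns.resolver.resolve("_http._tcp.example.net", "SRV")
        # Lowest priority wins; weight breaks ties among equal priorities.
        for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
            print("connect to", rr.target, "port", rr.port)
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print("no SRV record, fall back to the plain A/AAAA lookup")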
> I have no idea how many browsers support looking up SRV before A.
Firefox doesn't support it, which I think is rather disappointing. The feature request has been around since 1999. Good SRV record support across browsers would assist with many other issues too.
I've been using these records with XMPP servers for some time now and it works quite well. An added benefit is that you can run services on non-standard ports and make this information available via DNS. Of course if the web supported this, it might make it fun for people stuck behind firewalls/proxies run by administrators who think it's a port 80 world out there.
In the case of Jabber/XMPP it seems to be universally supported by servers now. Most clients should support it as well. Both are required to support it to be XMPP (RFC 6120/6121) compliant, possibly because there was no standard privileged port that could be allocated.
Yeah, that line stuck out to me as particularly silly because, generally, there is only one endpoint for mail traffic, with a unique port per protocol. When it comes to HTTP, there are often numerous services, many of which can't reasonably be called the "world wide web". I guess you could say that www is the default and codify that onto the naked domain, but it seems preferable to just redirect from the naked domain to www in order to disambiguate while simultaneously declaring your default.
Imagine SmithCo, a family run business where John Smith is the founder, president and CEO; Jane Smith is the accountant and CFO; James Smith is the head of sales; Judy Smith runs quality control and manufacturing; Jeb Smith is the janitor. Someone calls and says, "I want to talk to Smith!" Who gets the call?
At least with SRV records, a framework is available for routing based on context. But without them, it doesn't necessarily make sense to establish a default. In a web-centric environment with a single public facing site with low traffic, it might be useful to assign the top level domain an IP address. In more complex environments, it could create as many or more problems than it solves.
The people running this website need to get out of the lab and into the real world. www. is certainly not "deprecated", and declaring it to be so from your ivory tower doesn't change that.
The fact is that it's not important at all. There are plenty of real problems to solve, and interesting questions to ask.
Irrelevant to the argument for or against deprecation; whoever did that website certainly doesn't have the authority to declare parts of the infrastructure of the Internet deprecated.
Nobody has that authority, really. The internet's infrastructure evolves by "rough consensus and running code," as the IETF puts it, and that web site is working on the consensus part of things.
I'm guessing they ran into trouble with the problems mentioned. It doesn't seem to be to Google's benefit to remove the feature unless it turned out to be a bigger headache than it was worth.
Because, really, what's the point of the "world wide" qualifier? Were there other network-oriented webs that the www might have been confused with? No.
I think www can be useful offline. We here all recognize domain.tld as a website, but I suspect that www.domain.tld is more recognizable to non-technical folks. "Visit www.domain.tld for more information." etc.
Agreed - if it's a site aimed at a non-technical audience I put it on www. and 301 domain.tld to www. And the other way round for sites aimed at a more technical audience.
What would happen to wildcard SSL certs, which depend on the main site being at www.example.com as opposed to example.com, which a wildcard cert does not match?
Likewise with restricted cookies: most browsers don't let you restrict a cookie to example.com alone (it ends up applying to all subdomains as well). Restricting it to www.example.com gets around this problem.
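To make the cookie-scoping point concrete, here's a small stdlib-Python illustration (the names are made up): a cookie with an explicit Domain attribute is sent to every subdomain, while a host-only cookie with no Domain attribute stays on the exact host that set it, which is what hosting on www.example.com buys you.

    # Host-only vs domain-wide cookies, using the stdlib cookie module.
    from http.cookies import SimpleCookie

    domain_wide = SimpleCookie()
    domain_wide["session"] = "abc123"
    domain_wide["session"]["domain"] = ".example.com"  # sent to all subdomains

    host_only = SimpleCookie()
    host_only["session"] = "abc123"  # no Domain attribute: stays on the setting host

    print(domain_wide.output())  # Set-Cookie: session=abc123; Domain=.example.com
    print(host_only.output())    # Set-Cookie: session=abc123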
These complaints seem unfounded; the real problem is people not using permanent redirects from example.com to www.example.com.
I wrote up a few pros and cons on using www a while ago. Basically I think using only the bare domain is more pure, but in the long run what you lose is less than what you gain by dividing services across subdomains. Whatever that subdomain may be.
Another benefit of hosting static components on a cookie-free domain is that some proxies might refuse to cache the components that are requested with cookies. On a related note, if you wonder if you should use example.org or www.example.org for your home page, consider the cookie impact. Omitting www leaves you no choice but to write cookies to .example.org, so for performance reasons it’s best to use the www subdomain and write the cookies to that subdomain.*
via http://developer.yahoo.com/performance/rules.html
While designing a webcrawler I found out that a LOT of sites are reachable via www.example.com but not via example.com.
Some browsers just try the www. prefix if they can't connect to or resolve the non-www name. This is why nobody finds out about it, and the internet stays somewhat broken :(
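If you want to check your own domains for this, a quick-and-dirty probe looks something like the following in Python (not the crawler itself, just an illustration; swap in real domains):

    # Check whether a site answers on both the bare name and the www name.
    import urllib.request

    def reachable(host):
        try:
            urllib.request.urlopen("http://" + host + "/", timeout=5)
            return True
        except OSError:  # DNS failure, refused connection, HTTP error, etc.
            return False

    for domain in ["example.com", "example.org"]:
        print(domain, reachable(domain), reachable("www." + domain))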
It wouldn't matter if www.foo.bar and foo.bar both resolved to the same target, but all too often the www. prefix is the only one that works. Get that sorted, then worry about deprecating the prefix.
I'm not sure it is a big deal if you expose both to Google. There is a setting in webmaster tools that lets you explicitly set which you prefer: example.com or www.example.com
I believe Internet Explorer combines the www. and non-www cookies. I remember emailing a site whose shopping cart would "mysteriously" empty when it switched to https://www.site.com if you started on site.com. Their answer was basically "no, we're not broken." It was kind of disappointing.
It looks like they did fix the problem, though: site.com 301s to www, and they now set cookies on .site.com.
The TLDs have long lost all their meaning anyways, so dropping them would be a great idea.
Of course it's not gonna happen (too much money to be made by inventing yet another suffix), but technically the root servers could start resolving all .coms without the suffix tomorrow, except for those that clash with an existing TLD.
For some stupid reason I was using IE6 in class, not portable Chrome. We were told a website to go to, and I was surprised the normally with-it teacher said "www". It turned out it was required; the page didn't load at all without it.
It's one of those stupid corporate/educational vestiges left, like IE6.
But what if your web server is named www.domain.tld?
Speaking personally, I don't even have www.mydomain.tld set up, as I just realized now when I tried to test it. That's ok, in my opinion www is mostly for those people who have learned "When I type something in Internet, it needs to start with 'www'", and my site holds nothing of interest for them :)
Not really, but even if that were the case, so what? Does that magically mean they must have been perfect and it's not possible, in hindsight, to realize that not all decisions were good?