Www. is deprecated - should it be? (no-www.org)
64 points by zengr on April 17, 2011 | 80 comments



The main problem with the plain "no www" record is that there is much less you can do with it in DNS. In particular, you can't make it a CNAME, nor can you delegate its DNS to somebody else without delegating the whole domain.

With www.example.com I can just make www a CNAME to a cloud or CDN vendor. With plain example.com I have to delegate DNS for my whole domain to that vendor (assuming they offer that service) or use an A record and go with one of the three anycast CDNs in the world.

Same if I want to use a GSLB service: I can just make www.example.com a CNAME or give it an NS record. For example.com I'd have to run GSLB myself or put my whole domain under the control of the external vendor.

So at work, all of our domains have the smart stuff on www.example.com, and http://example.com is just a 301 to the www version.
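
A rough sketch of the difference, using Go's resolver (names hypothetical; this just shows what each record type lets you do):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // www.example.com can be a CNAME chasing a CDN/GSLB vendor's name...
        if cname, err := net.LookupCNAME("www.example.com"); err == nil {
            fmt.Println("www resolves via CNAME:", cname)
        }

        // ...but the apex has to carry its own A records, because a CNAME
        // can't coexist with the SOA/NS records that live at the zone root.
        if addrs, err := net.LookupHost("example.com"); err == nil {
            fmt.Println("apex A/AAAA records:", addrs)
        }
    }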


So, I believe it turns out that you can cheat and just define a non-www name as a CNAME to another non-www name. And that mostly works, kind of, maybe, even though it's a violation of the standards. Except that it'll break under some conditions, but maybe you won't care too much. Or something.

Bleargh. Frankly I've forgotten the headsplitting details of trying to finesse the DNS RFCs. What I do remember is that my company, which does cloud hosting on behalf of many customers, looked into this and determined that for now we should do as you suggest and actually follow the spec: Either point your A record directly at your webserver (or its reverse proxy) or use the www. subdomain as a CNAME. I was left with much sympathy for the folks who designed the standard www. subdomain in the first place: They did what they had to do to work sensibly within the DNS system without having to redesign entire corporate domains around the needs of the web server.


I think one of the main problems with no-www domains is that even though some DNS systems let you assign a CNAME to the root record, doing so breaks your MX records.
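
The underlying rule is RFC 1034's: a CNAME can't coexist with any other data at the same name, so an apex CNAME would shadow the MX (and SOA/NS) records there. A rough sketch in Go of the lookup that breaks:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // If example.com were a CNAME, a conforming resolver would follow it
        // for every query type, so these apex MX records could never be
        // served - which is exactly how mail delivery breaks.
        mxs, err := net.LookupMX("example.com")
        if err != nil {
            fmt.Println("MX lookup failed:", err)
            return
        }
        for _, mx := range mxs {
            fmt.Printf("mail for example.com -> %s (pref %d)\n", mx.Host, mx.Pref)
        }
    }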


Yeah, it's coming back to me now. Thanks!


I don't mind the www.; the thing I absolutely hate is sites that require it but don't put up a 301 on the www-less URL.


Obviously not possible if domain and www.domain point to different hosts where the host for domain does not run a web server. I'm sure there are numerous other reasons.


Seems it would be worth it to run a lightweight server on :80 solely to redirect to the www host. Sounds better than "foo.com" just not working.


You don't even need a web server to throw a 301 at everything that comes by saying "GET / HTTP/1.1".
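
Something like this (a Go sketch, www target hypothetical): accept connections and answer every one with a canned 301, without parsing the request at all.

    package main

    import (
        "log"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", ":80")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                continue
            }
            go func(c net.Conn) {
                defer c.Close()
                // Everything gets the same answer: go to the www host.
                c.Write([]byte("HTTP/1.1 301 Moved Permanently\r\n" +
                    "Location: http://www.example.com/\r\n" +
                    "Content-Length: 0\r\n" +
                    "Connection: close\r\n\r\n"))
            }(conn)
        }
    }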


Do SRV records work?

http://www.anta.net/nic/draft-andrews-http-srv-01.shtml

Honest question, because I've never tried it. I would believe effectively no web browsers honor it correctly, and certainly there's a boatload of non-web-browser code that won't honor it correctly.
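
The lookup itself is the easy part. A sketch in Go of what a browser would have to do if it honored that draft (no mainstream browser does, as far as I know):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Per the draft, query _http._tcp.example.com before falling
        // back to a plain A lookup on the name itself.
        cname, srvs, err := net.LookupSRV("http", "tcp", "example.com")
        if err != nil {
            fmt.Println("no SRV record; fall back to an A lookup:", err)
            return
        }
        fmt.Println("resolved via", cname)
        for _, srv := range srvs {
            fmt.Printf("connect to %s:%d (priority %d, weight %d)\n",
                srv.Target, srv.Port, srv.Priority, srv.Weight)
        }
    }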


This is correct, and it is the main reason why you will be hard-pressed to find a massive company whose root domain doesn't redirect to www.


Completely agree; the minute I can host my www-less website on S3, I will. Until then, my www.$domain is a CNAME, and $domain is lucky if it gets a machine serving 301s.


I don't care much either way, except that I see some cases where using a 'www' hostname can be advantageous. In television commercials, it's a useful marker that what's being listed refers to a website. To the layman who is not familiar with all TLDs, 'example.nu' may be bogus, while 'www.example.nu' is immediately recognizable as referring to a website.

My take: 'www' helps in disambiguation, and sure is nicer than having to prefix all names with 'http://'.


It's horrendous, but nowadays many advertisers are directing people to their site with addresses in the form:

facebook.com/productname


Why is it horrendous? The string is concise, unambiguous, and works when you put it in the location bar. Including the scheme makes sense only when there is a risk of ambiguity, and even Tim Berners-Lee admits that exposing such a technical detail to users wasn't the best design decision.


I believe he said horrendous because of the trend toward Facebook becoming, in some sense, a closed internet. There may come a time when not being a Facebook user means you cannot see some significant portion of the Web.


Well, for facebook.com/productname that's a commercial decision: is it worth excluding a (probably) significant portion of the internet from viewing our product's site?


Why exclusion? FB Pages are public by default, no login required. Some individuals' comments may be invisible due to their privacy settings, but technically non-members are not excluded per se.


Just in response to the parent poster stating that Facebook could become a closed internet, requiring an FB login to access content.


Concise? The superfluous inclusion of "facebook" and the slash negates your claim here.


Or try AOL keyword "productname"!


I think this is great. I don't even have to be near a computer to know I have no interest in the product.


Most TV and radio ads just refer to something.com.

I'd say www is only helpful when using an uncommon domain suffix such as .ly.


Every time I enter a www-only site without the www prefix and see an error page, I think "incompetent idiots" for a brief moment before I add the www.


Just this past week, I realized that irs.gov does this, and had exactly that reaction.


Interesting; I find that Firefox, at least, will try www.site.com if site.com returns a connection failure, but not for 400/500 responses.


Why? The Internet encompasses a lot of protocols. Why assume every domain name supports HTTP? It's not fair to call them incompetent idiots when you're the one shooting in the dark. Without an unambiguous URL, you're only guessing, so accept the risk.


Other protocols normally use different ports.

Also, in all the cases where this happens to me, www.$name is the only service being provided - most of the time poorly.


The problem is, the different services could be split across hosts, not just ports. It seems SRV records provide the only option here, but they're not supported by browsers. That leaves www as the only viable option for moving web serving onto a different machine.


Most web sites sit behind load balancers and stateful firewalls. They have many, many opportunities to do the port translation to make $name and www.$name work the same way.

The theory you espouse, that www was the only viable option to move web service to a new machine, was true in the 90s when we started doing this. That's how the practice came about. It's 2011. It's not an issue anymore.


http:// equals port 80 and https:// equals port 443, the two ports web servers run on. Regardless of the subdomain, if you have http:// or https:// in a URL you're getting a web server. Having to specify www.whatever.com to reach the web server (i.e., a website) is redundant; the URI takes care of that. Plus, the vast majority of people on the Internet don't understand protocols or why websites start with www.

When the concepts of URIs and URLs were developed, it was to deal with the myriad of protocols available on the early internet; HTTP was just one of many. But now that the World Wide Web is essentially the face of the Internet to the average user, I feel the tech community should adapt to the current situation and not force people to adopt our somewhat outdated standards. Browsers like Chrome just drop the http:// from the URL bar altogether now, and the forward-facing websites of organizations should drop the www. too. I understand the DNS implications, but it's trivial to drop the www. at the web-server level.
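
And "trivial" really is the right word - a hedged sketch in Go of stripping the www at the web-server level (any httpd's rewrite rules can do the same):

    package main

    import (
        "log"
        "net/http"
        "strings"
    )

    func main() {
        log.Fatal(http.ListenAndServe(":80", http.HandlerFunc(
            func(w http.ResponseWriter, r *http.Request) {
                // 301 www.whatever.com/... to whatever.com/...,
                // preserving the path and query string.
                if strings.HasPrefix(r.Host, "www.") {
                    target := "http://" + strings.TrimPrefix(r.Host, "www.") + r.RequestURI
                    http.Redirect(w, r, target, http.StatusMovedPermanently)
                    return
                }
                w.Write([]byte("hello from the naked domain\n"))
            })))
    }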


Of all the protocols on the Internet, HTTP is the most important user-facing one. Having your primary domain not support it, even on the level of blindly returning 301 to all HTTP requests, is tantamount to boarding up all the doors on your shop and hoping visitors guess you want them to come in via a rope dangling from the skylight.


"Mail servers do not require you to send emails to recipient@mail.domain.com. Likewise, web servers should allow access to their pages though the main domain unless a particular subdomain is required."

Sure, email is like this thanks to MX records. We could have the same for web servers by using SRV records ("_http._tcp.example.net" pointing at your "real" web server[s]) but I have no idea how many browsers support looking up SRV before A.


> I have no idea how many browsers support looking up SRV before A.

Firefox doesn't support it, which I think is rather disappointing. The feature request has been around since 1999. Good SRV record support across browsers would assist with many other issues too.

https://bugzilla.mozilla.org/show_bug.cgi?id=14328


I've been using these records with XMPP servers for some time now and it works quite well. An added benefit is that you can run services on non-standard ports and make this information available via DNS. Of course if the web supported this, it might make it fun for people stuck behind firewalls/proxies run by administrators who think it's a port 80 world out there.

In the case of Jabber/XMPP it seems to be universally supported by servers now. Most clients should support it as well. Both are required to support it to be XMPP (RFC 6120/6121) compliant, possibly because there was no standard privileged port that could be allocated.


Yeah, that line stuck out to me as particularly silly because, generally, there is only one endpoint for mail traffic, with a unique port per protocol. When it comes to HTTP, there are often numerous services, many of which cannot reasonably be called the "world wide web". I guess you could say that www is the default and codify that on the naked domain, but it seems preferable to just redirect from the naked domain to www, disambiguating while simultaneously declaring your default.


What else would you want to use the "raw" domain for?

Or phrased differently, what's the disadvantage of pointing your main domain to the www server?


Imagine SmithCo, a family run business where John Smith is the founder, president and CEO; Jane Smith is the accountant and CFO; James Smith is the head of sales; Judy Smith runs quality control and manufacturing; Jeb Smith is the janitor. Someone calls and says, "I want to talk to Smith!" Who gets the call?

At least with SRV records, a framework is available for routing based on context. Without them, it doesn't necessarily make sense to establish a default. In a web-centric environment with a single public-facing, low-traffic site, it might be useful to assign the naked domain an IP address. In more complex environments, it could create as many problems as it solves, or more.


The people running this website need to get out of the lab and into the real world. www. is certainly not "deprecated", and declaring it to be so from your ivory tower doesn't change that.

The fact is that it's not important at all. There are plenty of real problems to solve, and interesting questions to ask.


Regardless of the argument for or against deprecation, whoever made that website certainly doesn't have the authority to declare parts of the Internet's infrastructure deprecated.


Nobody has that authority, really. The internet's infrastructure evolves by "rough consensus and running code," as the IETF puts it, and that web site is working on the consensus part of things.


Too bad that Google App Engine seems to allow only subdomains, so www seems the only sane choice for now. Or have they fixed that by now?


They probably won't fix that, because DNS does not allow CNAMEs at naked domains (a CNAME can't coexist with the SOA and NS records that must live at the zone apex).


It is actually a feature they removed. In the early days AppEngine had some support for naked domains. I had one.


I'm guessing they ran into the problems mentioned above. It doesn't seem to be to Google's benefit to remove the feature unless it turned out to be a bigger headache than it was worth.


I set up a site there recently and AFAICT "naked domains" are still not accepted.


I love this one, listed as a "competitor": http://www.www.extra-www.org/


Yes, it should be, just because "double-u-double-u-double-u" takes a ridiculously long time to vocalize.


Should've just been "web" instead of www.

Because, really, what's the point of the "world wide" qualifier? Were there other network-oriented webs that the www might have been confused with? No.


World Wide Web sounds more and more like Information Superhighway every year.


That's why sane languages use a different pronunciation.

(By the way, "world-wide-web" is faster to say in English than "double-u-double-u-double-u".)


But somehow more difficult both to pronounce and hear clearly.


I just go with 'dub dub dub…'


Trip-dub!


I think www can be useful offline. We here all recognize domain.tld as a website, but I suspect that www.domain.tld is more recognizable to non-technical folks. "Visit www.domain.tld for more information." etc.


Agreed - if it's a site aimed at a non-technical audience I put it on www. and 301 domain.tld to www. And the other way round for sites aimed at a more technical audience.


What would happen to wildcard SSL certs, which depend on the main site being at www.example.com? A wildcard cert for *.example.com does not match the bare example.com.

Likewise with restricted cookies: most browsers won't let you restrict a cookie to example.com alone (a cookie set there applies to all subdomains as well). Restricting it to www.example.com gets around this problem.

These complaints seem unfounded; the real problem is people not using permanent redirects from example.com to www.example.com.
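
The cookie half of this is easy to see in code. A sketch in Go (names hypothetical): leaving Domain empty yields a host-only cookie, while the dotted form applies to every subdomain.

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // Host-only: sent back only to the exact host that set it,
            // e.g. www.example.com.
            http.SetCookie(w, &http.Cookie{Name: "session", Value: "s3cret"})

            // Domain-scoped: covers example.com and every subdomain -
            // the over-broad behavior described above.
            http.SetCookie(w, &http.Cookie{
                Name:   "prefs",
                Value:  "dark",
                Domain: ".example.com",
            })
            fmt.Fprintln(w, "cookies set")
        })
        http.ListenAndServe(":8080", nil)
    }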


I wrote up a few pros and cons of using www a while ago. Basically, I think using only the naked domain is purer, but in the long run you lose less than you gain by dividing services across subdomains - whatever those subdomains may be.

http://vvv.tobiassjosten.net/internet/using-www-for-your-dom...


"Another benefit of hosting static components on a cookie-free domain is that some proxies might refuse to cache the components that are requested with cookies. On a related note, if you wonder if you should use example.org or www.example.org for your home page, consider the cookie impact. Omitting www leaves you no choice but to write cookies to .example.org, so for performance reasons it's best to use the www subdomain and write the cookies to that subdomain." (via http://developer.yahoo.com/performance/rules.html)

My own takes: http://eapen.in/to-www-or-not-to-www/ http://eapen.in/to-www-or-not-to-www-update/


While designing a web crawler, I found out that a LOT of sites are reachable via www.example.com but not via example.com. Some browsers just try the www. prefix if they can't connect to or resolve the non-www one. This is why nobody finds out about it, and the internet stays somewhat broken :(
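
For illustration, a stripped-down sketch (Go, hosts hypothetical) of that kind of check: probe both names and see which of them actually answer.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        for _, host := range []string{"example.com", "www.example.com"} {
            resp, err := client.Get("http://" + host + "/")
            if err != nil {
                fmt.Println(host, "unreachable:", err)
                continue
            }
            resp.Body.Close()
            fmt.Println(host, "->", resp.Status)
        }
    }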


It wouldn't matter if www.foo.bar and foo.bar both resolved to the same target, but all too often the www. prefix is the only one that works. Get that sorted, then worry about deprecating the prefix.


It doesn't matter much; just take care that you don't serve the same content on two domains.

www.example.com and example.com are different sites to Google and have different cookies as well.


I'm not sure it is a big deal if you expose both to Google. There is a setting in webmaster tools that lets you explicitly set which you prefer: example.com or www.example.com

http://www.google.com/webmasters/

Although, the preferred way is probably to use a redirect.


And a canonical URL in your header.


Just redirect permanently from one to the other.


I believe Internet Explorer combines the www. and non-www. cookies. I remember emailing a site whose shopping cart would "mysteriously" empty when it switched to https://www.site.com if you started on site.com. Their answer was basically "no, we're not broken." It was kind of disappointing.

It looks like they did fix the problem, though: site.com 301s to www, and they now set cookies on .site.com.


One step further: remove '.com' from the end. For those domains that fit.

e.g. http://www.hp.com => hp


Type "hp" into your URL bar

Hit Ctrl+Enter and rejoice in the automatic www.hp.com goodness.


You shouldn't discount his point so easily.

The TLDs have long lost all their meaning anyways, so dropping them would be a great idea.

Of course it's not gonna happen (too much money to be made by inventing yet another suffix), but technically the root-servers could start resolving all .com's without the suffix tomorrow, except for those that clash with an existing TLD.


For some stupid reason I was using IE6 in class, not portable Chrome. We were told a website to go to, and I was surprised that the normally with-it teacher said "www". It turned out to be required; the page didn't load at all without it.

It's one of those stupid corporate/educational vestiges, like IE6 itself.


Yes, it should be, for all the reasons listed in the first post at the bottom of that page.


I believe Tim Berners-Lee himself has said that he regrets introducing the "www." prefix. [citation needed]

Edit: he was, in fact, regretting the double slashes. Thanks for the correction.



Not until you can do load balancing on GAE/AWS without it.


No, it should not - leaving off the www is as annoying as abusing the TLD, e.g. .tv means "I'm in Tuvalu", not "I'm on television lol".


Where were you ten years ago before all of this BS became ubiquitous?


I never liked the WWW. xxx.domain.tld should be a host name, not a service designator.

My HTTP blog is accessible on domain.tld:80, and my telnet comment interface runs on domain.tld:1337 :)

If you're curious about the telnet interface: http://fettemama.org/faq_en.html


But what if your web server is named www.domain.tld?

Speaking personally, I don't even have www.mydomain.tld set up, as I just realized when I tried to test it. That's OK; in my opinion, www is mostly for people who have learned "when I type something on the Internet, it needs to start with 'www'", and my site holds nothing of interest for them :)


Sure it should; there is no value whatsoever in `www` subdomains. It's a relic from a time when admins lacked a bunch of tools and clues.


Yeah, stupid admins. They only built the internet that you take for granted, right?


Not really, but even if that were the case, so what? Does that magically mean they must have been perfect and it's not possible, in hindsight, to realize that not all decisions were good?



