I made a little project taking this idea to an extreme. The idea was that you should be able to generate "meme" image macros simply by typing a URL. That way you could create memes on the fly anywhere you can enter a URL (Slack/Twitter/Facebook/etc.) without having to leave the app. Just type a URL of the form:
If you are at all worried about it going away, you can add:
?host=imgur
to the URL and it will 301 to the image hosted on Imgur. That is, assuming you have more faith in the longevity of Imgur than you do in the longevity of my side project :D
That's really cool. I use something similar (http://memegen.link) in my noddy project "Zen D Trump", which takes his tweets and pastes them on pictures of relaxing things like beaches and waterfalls.
This is a pet peeve of mine. I'm writing some vuejs SPAs, using nuxtjs for server-side rendering (roughly a clone of react's nextjs), and, when appropriate, using the URL as reproducible "what you are currently looking at" state as much as reasonably possible. It's disappointing how many apps these days don't let you middle-click a link to open it in a new tab, destroy your original URL when rewriting navigation history to redirect you to login, etc.
While I've become less than enamored by Microsoft lately, these are difficult decisions to make because every assumption carries a penalty. All abstractions are leaky. That being said, maybe they can do better. I used SharePoint 2007 and was told 2010 and beyond were much better?
I've been playing with drupal for a few weeks and there's the concept of an alias in drupal where everything is /node/{node-id} internally but the user sees the alias every time.
With Angular 2+, you can have routes like that, I believe. E.g. {base-url}/user/keganunderwood can show you the profile page for me, even if you go to it directly.
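I'm not deep into Angular's internals, but the core of this kind of routing can be sketched framework-agnostically. The function below is just an illustration of matching a path against a `:param` pattern, not Angular's actual API:

```javascript
// Minimal route matcher, roughly what a router does with a route
// pattern like '/user/:username'. Returns extracted params, or null.
function matchRoute(pattern, path) {
  const patternParts = pattern.split('/').filter(Boolean);
  const pathParts = path.split('/').filter(Boolean);
  if (patternParts.length !== pathParts.length) return null;
  const params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(':')) {
      // A ':name' segment captures whatever is in the path at that spot.
      params[patternParts[i].slice(1)] = pathParts[i];
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // literal segment mismatch
    }
  }
  return params;
}

console.log(matchRoute('/user/:username', '/user/keganunderwood'));
// → { username: 'keganunderwood' }
```

This is what makes "go to it directly" work: the router resolves the URL to a component and its parameters on initial load, not just on in-app navigation.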
I can think of good reasons for picking each of these, and even good reasons for picking their alternatives. URL design is tricky to get right because realistically you can't change it later.
There are technical reasons to consider, as well (yes, it's DNS again). If you want to receive emails @foo.com, you have to set MX records for foo.com., and that means you can't set a CNAME anymore - you'll have to make do with A/AAAA records.
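For illustration, here's a zone-file fragment showing the conflict (all names are placeholders, and the address is from the 203.0.113.0/24 documentation range):

```dns
; Illustrative foo.com zone fragment
foo.com.      IN  MX     10 mail.foo.com.
foo.com.      IN  A      203.0.113.10
; foo.com.   IN  CNAME  cdn.example.net.   ; INVALID: a CNAME can't coexist
;                                          ; with the MX (or SOA/NS) records
www.foo.com.  IN  CNAME  cdn.example.net.  ; fine on a subdomain
```

Since the zone apex must carry SOA and NS records anyway, a CNAME there is ruled out regardless of MX, which is why CDN targets usually hang off a subdomain like www.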
For a lot of applications this is not an issue, but it does mean an overhead for highly distributed services. You won't see many global/high traffic companies drop the benefits 'www' gives. This is not purely to get the 'old folks' market.
(PS: The subdomain doesn't have to be www., of course - cnn goes with edition.cnn.com., for example.)
Yep. The DNS hierarchy exists for a reason, and if you have a bunch of machines, you want to use it.
Also, I'm a single person, and I have multiple independent web sites. Many of them live under the same domain. Lots of companies have many more. You can play annoying tricks to smush them all into a single name (at least, most of the time), but why?
I think the world is ready to internalize the idea of hierarchical names, at least insofar as understanding they're independent entities grouped under the same name. I haven't seen any studies, but it seems like we're past the 'do I need the www?' point.
All that said, I also like short. The magic comes from knowing when to choose what, and I don't think you get there with rules-of-thumb alone.
I disagree with 2a. No reason why this shouldn't be a path that the user can edit to go one level "up" to /search/product/couches.
That's kind of the point of the article - these are parameters for the programmer, but are UI to the user.
Separating parameters with "&", except for the first key/value pair, which is separated from the URL by "?", is not intuitive. On the other hand, path editing is familiar.
Unless you have only one product that has color, or color is the only thing you can filter your couches by, then no, it's not a hierarchy. There's no notion of "going up".
> No reason why this shouldn't be a path that the user can edit to go one level "up"
While I find it more visually pleasing, this is my issue with this style: traversing up would first land on /search/product/couches/color, which doesn't make sense. I've seen alternatives like /color:red/ before, which avoids the pairing problem but feels odd.
Products aren't hierarchically ordered first by type, then by colour, then by material, etc. These are all attributes that have multiple possible orderings (or no order at all).
Of course, if the site is designed such that it forces you to pick those attributes in a specific order, that URL scheme would make sense (but that would be a poor design for other reasons).
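If you did want key/value filters in the path without pretending they form a hierarchy, the /color:red style mentioned above is easy to parse. A sketch (the function name and shape are made up for illustration):

```javascript
// Split a path into plain hierarchy segments and "matrix"-style
// key:value filter segments, e.g. /search/product/couches/color:red.
function parseFilters(path) {
  const segments = path.split('/').filter(Boolean);
  const filters = {};
  const plain = [];
  for (const seg of segments) {
    const i = seg.indexOf(':');
    if (i > 0) filters[seg.slice(0, i)] = seg.slice(i + 1);
    else plain.push(seg);
  }
  return { plain, filters };
}

console.log(parseFilters('/search/product/couches/color:red'));
// → { plain: ['search', 'product', 'couches'], filters: { color: 'red' } }
```

Because each filter is a self-contained segment, deleting one from the end of the URL removes a whole key/value pair rather than stranding a dangling key.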
Wow, it does feel so much more readable than using an ampersand. I'd still prefer an ampersand because it plays well everywhere, and the kind of people who'd look at a URL and try to understand it are too accustomed to ampersands anyway.
I had several websites where changing the URL scheme caused Google rankings to drop, which cost advertising revenue. I don't see how I can be more honest than that. And frankly, you are not just unfair, you are being rude.
Those are porn websites, so I'm not going to share them. And there are only 2 kinds of redirection, permanent or temporary so it's pretty hard to do it incorrectly.
That's a "slug", it's extremely common and IMO a good thing.
Sure, you can create weird-looking or even misleading URLs that way, but I don't think it's a big problem because 1/ as soon as the page loads the URL gets rewritten to the real title and 2/ it's often very easy to obfuscate links regardless of that. Many platforms allow you to hide your links behind an href with some markup, for instance, so you can make bogus links very easily. Think of something like:
<a href="http://evil.org/">http://google.com</a>
This is very common in spam emails.
You can't even trust the browser's link preview tooltip because it can be overridden in JS. So in general it's a bad idea to blindly trust a URL "from the outside", slug or not.
I really, really wish youtube would do the same thing for instance, it's completely impossible to know what a youtube link is pointing towards. You could argue that they want short URLs but since they already have a "youtu.be" shortening service to make them even shorter it feels a bit redundant.
I'm talking about the "preview" usually at the bottom left of the browser when you hover a link. By using a Javascript event handler on the link you can override what happens.
Google does that, for instance: if you hover over a search result it'll look like a direct link to the website, but if you look at the HTML source it looks something like this:
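Reconstructed from memory, it's roughly this shape (the attribute values and redirect URL parameters here are illustrative, not Google's real markup):

```html
<!-- The visible href points at the target, but a mousedown handler
     swaps it for a Google redirect URL just before the click lands. -->
<a href="https://en.wikipedia.org/wiki/Example"
   onmousedown="this.href='https://www.google.com/url?q=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FExample'">
  Example - Wikipedia
</a>
```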
So even though the href goes to wikipedia in this case if I click the link the browser goes to a google page that then redirects me.
You can see the real URL by right-clicking on the link and then hovering again, it causes the "onmousedown" code to run and replace the href by the real value.
Duckduckgo uses a "click" event handler instead. As far as I can tell, Bing doesn't do anything and links directly to the target website, which is odd. I may be missing something.
I don't know how this works in different frameworks, but in principle, nothing prevents you from arranging it so that /user/6380/hanselman is valid but /user/6380/[somethingelse] 404s.
Edit: but this has the potential to break links if the user changes their display name.
I use the same scheme for blog posts on a website I develop. I solve this problem by looking at the slug and redirecting to the correct one if it is wrong.
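A minimal sketch of that redirect logic, assuming a /post/{id}/{slug} scheme (the data and names here are made up, and `posts` stands in for a database lookup):

```javascript
// Canonical-slug resolution: the numeric id is authoritative, the slug
// is decorative. A wrong or outdated slug 301s to the canonical URL.
const posts = new Map([
  [6380, { slug: 'urls-are-uis', title: 'URLs are UIs' }],
]);

function resolve(id, slug) {
  const post = posts.get(id);
  if (!post) return { status: 404 };
  if (slug !== post.slug) {
    // Permanent redirect, so old links keep working after a title change.
    return { status: 301, location: `/post/${id}/${post.slug}` };
  }
  return { status: 200, post };
}

console.log(resolve(6380, 'urls-are-uis').status); // → 200
console.log(resolve(6380, 'old-title'));
// → { status: 301, location: '/post/6380/urls-are-uis' }
console.log(resolve(9999, 'whatever').status);     // → 404
```

The redirect also means a misleading hand-crafted slug only survives until the page loads, since the browser ends up on the canonical URL.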
"Other people do it," by itself, is not a great justification.
It's yet another way to mislead. Just because other misleading schemes exist doesn't mean this isn't also misleading and potentially bad.
As for "How so" ... I didn't think it through. I'll go with "potentially not good," but equally not thought through. Since the subject of the article is URLs as UI, when you send someone a URL to "look at this", what they see is the URL, and in my example the human readable part is "the site" and "a3n", but what they get is nothing to do with a3n.
I can only intuitively start with "that's misleading," and imagine (but not point out) the possibility of "something bad". Maybe something merely annoying like rick-rolling.
Last time I was localising a service I ended up localising the URLs as well - if URLs should be readable, they should be readable in the localised language too, right? Luckily there's the RouteTranslator gem, which made it trivial: https://github.com/enriclluelles/route_translator/
The fact of the matter is unskilled people are always going to be designing websites. This is OK!
However, it does mean that we should require them to do as little as possible so they have the best chance of getting it right. Asking them to design a second UI right from the start on top of the first HTML/JS one -- and then telling them never to change it(!) -- is a little much.
Instead of URLs, websites should have UUIDs to identify each resource. They should also have metadata describing what that resource is. The metadata (eg "articles/urls-are-uis") should be able to change without breaking links to the resource. Browsers should be intelligent enough so that when you hover over a link you see the metadata, not the UUID.
(This has one downside, which is that if you link to thing X and the UUID is later changed to point to thing Y, it may look like you linked to something you didn't mean to. This can be trivially fixed by including a "what the metadata was when I made the link" field in links along with the UUID.)
EDIT: I'm actually not set on UUIDs specifically, they're a little long. Any random, non-meaningful identifier is fine.
Really I'm just saying that points 2 & 3 of the article are so good we should have made them the default (and perhaps only) option from the start.
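As a sketch of that scheme (IDs shortened for readability, and every name below is invented for illustration):

```javascript
// Stable random IDs plus mutable metadata; links carry a snapshot of
// the metadata at link time so repurposing can be detected.
const resources = new Map([
  ['9f1c', { label: 'articles/urls-are-uis' }],
]);

function makeLink(id) {
  return { id, labelAtLinkTime: resources.get(id).label };
}

function resolve(link) {
  const current = resources.get(link.id);
  return {
    resource: current,
    // Flag if the metadata changed since the link was made.
    labelChanged: current.label !== link.labelAtLinkTime,
  };
}

const link = makeLink('9f1c');
resources.get('9f1c').label = 'articles/renamed'; // metadata changes...
console.log(resolve(link)); // ...but the link still resolves, with labelChanged: true
```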
Yes, I absolutely agree that good URLs are extremely useful, although personally I don't really like the long-string-of-text pattern, especially if it isn't actually significant. A short identifier is useful when sharing them offline: "HN article 14723409" or "YouTube video AQcSFsQyct8" or "forum thread 5705591"
For a blog, news, or other chronological content, I'd like to see a timestamp of some form. If it's rigidly hierarchical content, then a hierarchy (of which I should be able to remove 'subdirectories' to see the parent content) makes sense. Otherwise flat IDs are OK too.
Too bad browser developers seem to love hiding or mutilating them...
The string of text, if you are seriously trying to remember it (which I seriously doubt... do you really try to type those sentence-long title slugs from memory?!), is subject to domain-specific forms of corruption, the same way direct quotes from people or TV characters tend to be: you subtly change the grammar or replace a noun with a synonym you are more familiar with.
Regardless: I am quite serious... do you seriously try to remember and type, from memory, URLs with title slugs?
I doubt he does that; however, I collect some links in a text file, and the ones with text in them are the ones I can identify instantly. The ones with just IDs are totally random as to what they are and usually require a comment accompanying them in the file.
I agree with the article, and I really don't understand how Google Search can be so bad with URLs. They seem to get more and more parameters every time I look, to the point that my browser (Safari) hides the URL just for them.
Google Search is perhaps the definitive example of a site with no interest in human-readable URLs. Google wants you to navigate the web using Google, not by cutting and pasting URLs.
This is all solid advice, but there's something that still bothers me about URLs:
From one side I hear that hypertext should be the engine of application state. This implies that the URL router should control just about everything, and that you should be able to click a link in an email and jump directly to any state of your application. From the other side I hear that web apps can be just as capable as desktop apps, and it's only a matter of time before we'll be using PhotoshopJS in a browser.
What isn't said is that Photoshop has no concept of an "address bar", and it probably will never have one. As with so many things, the best practices for a blog or message board might be completely different from the best practices for a creative application. Could you design a URL format to represent the state of a Photoshop editing session? Would you even want to?
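You could imagine one, at least for shareable view state rather than per-keystroke edits. A sketch, with every name (photoshop.app and the rest) invented: document identity goes in the path, view state in the fragment.

```javascript
// Encode shareable view state (active layer, zoom, ...) into a URL
// fragment, keeping the document identity in the path.
function buildUrl(file, state) {
  const hash = new URLSearchParams(state).toString();
  return `https://photoshop.app/files/${encodeURIComponent(file)}#${hash}`;
}

const url = buildUrl('oceanview.psd', { layer: 'colorcorrect', zoom: '150' });
console.log(url);
// → https://photoshop.app/files/oceanview.psd#layer=colorcorrect&zoom=150
```

The fragment is a natural home for this because it never hits the server: reloading or sharing the URL restores the view without implying the server stores a URL per edit.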
Insisting that a URL change with state is taking it too far. It's not how I understood what Tim Berners-Lee and others were saying at the beginning. I would be interested, however, in articles that espouse this.
If the URL represented state, then it should change as you're filling out an HTML form, at each keystroke. But instead there is one URL for the blank form and one after you click Submit.
A better rule is one URL per "document" or "record." So in your Photoshop example, there would be a different URL per file that you edit (www.photoshop.com/image001.psd) but not per edit. Well, if the app saved versions, then you could append ?v=203. But in general I think it's enough to align URLs to "documents" (like a news story) or "records" (like a particular profile in a contact database).
> Insisting that a URL change with state is taking it too far. It's not how I understood what Tim Berners-Lee and others were saying at the beginning. I would be interested, however, in articles that espouse this.
> A better rule is one URL per "document" or "record." So in your Photoshop example, there would be a different URL per file that you edit (www.photoshop.com/image001.psd) but not per edit. Well, if the app saved versions, then you could append ?v=203. But in general I think it's enough to align URLs to "documents" (like a news story) or "records" (like a particular profile in a contact database).
If you're using non-destructive editing a URL might indeed be helpful for sharing the progress and evolution of a file with others. Something like photoshop.app/jdavis703/file/oceanview.psd/layer/colorcorrect might be really useful.
I can’t remember if it was actually shown or just argued for, but the simplification of the URL in the address bar actually helps identify phishing sites: there is a lot less text to process, and the one character that might have been changed becomes more visible than before :)
For the "easier to read" part I would prefer Chrome's black vs grey url string.
But for phishing... with all the tricks possible with Unicode lookalikes, I wonder if people still do easily visible character swaps like qoogle instead of google?
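For what it's worth, WHATWG URL parsing (in browsers and Node) converts internationalized hostnames to their ASCII punycode form, which at least exposes the Unicode-lookalike trick to software, if not to the eye:

```javascript
// A Cyrillic 'о' (U+043E) hiding in an otherwise Latin hostname.
// URL parsing normalizes IDN hostnames to their punycode ASCII form.
const spoofed = new URL('https://g\u043Eogle.com/');
console.log(spoofed.hostname); // punycode form, starting with "xn--"

const real = new URL('https://google.com/');
console.log(real.hostname); // → google.com
```

That's why many browsers display such domains as xn--... when the label mixes scripts: the punycode looks alarming precisely because the Unicode form looks innocent.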
Proposed what? Great article and I completely agree with it; I'm frustrated perhaps daily by bad URLs (I don't know if I've ever written a lamer description of myself!), but I was very confused by the ending.
I was expecting the author to propose an (informal, but) concrete system for URL UI. I know it's sort of implicit throughout, but a summary would be good.
I was thinking this recently as I tried Reddit's mobile app. I found that navigating subreddits through URL was so natural that I was kind of lost in the app -- and frankly Chrome did a better job of suggesting the content I wanted than the app did.
I hate that people use shorturls on Twitter, especially news sites with paywalls. I check the full URL of articles in my Twitter client before I click them, especially when people share the links without any description.
Since Twitter gives all links the same allotment of characters, it's really frustrating that people insist on sticking to shorturls.
Goes to show how tracking and advertising keeps ruining the Web.
URLs are addresses. For the web, we messed up and URLs became part of the UI. It shouldn't be necessary to use URLs in most cases, though - e.g. imagine using IMAP URLs to read your email.
Anecdote: in the last half year I've been going to meetups, and GraphQL is everywhere. Proponents are coming out of the woodwork and saying all sorts of crazy stuff about how it is better than REST, and the meetup attendees are eating it up without any hint of skepticism.