URLs are UI (hanselman.com)
255 points by smacktoward on July 8, 2017 | 91 comments



I made a little project taking this idea to an extreme. The idea was that you should be able to generate "meme" image macros simply by typing a URL. That way you can create memes on the fly anywhere you can enter a URL (Slack/Twitter/Facebook/etc.) without having to leave the app. Just type a URL of the form:

  http://urlme.me/<search_phrase_for_meme_image>/<top_text>/<bottom_text>.<ext>
http://urlme.me/
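The path-to-meme mapping could be sketched roughly like this in Python (a hypothetical reconstruction for illustration, not the site's actual code):

```python
from urllib.parse import unquote

def parse_meme_path(path: str):
    """Split a urlme.me-style path into (search phrase, top, bottom, extension)."""
    parts = path.strip("/").split("/")
    if len(parts) != 3:
        raise ValueError("expected /<search_phrase>/<top_text>/<bottom_text>.<ext>")
    # Percent-decode each segment; underscores stand in for spaces.
    search, top, bottom = (unquote(p).replace("_", " ") for p in parts)
    if "." in bottom:
        # The image extension hangs off the final segment.
        bottom, _, ext = bottom.rpartition(".")
    else:
        ext = ""
    return (search, top, bottom, ext)

print(parse_meme_path("/simply/one_can't_simply/fix_the_web.jpg"))
# → ('simply', "one can't simply", 'fix the web', 'jpg')
```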


Brilliant!

http://urlme.me/simply/one_can't_simply/fix_the_web

..but your example might lead the way.

Also, I hope this stays up for a long time to come.


If you are at all worried about it going away, you can add:

    ?host=imgur
to the URL, and it will 301 to the image hosted on Imgur. That is assuming you have more faith in the longevity of Imgur than in the longevity of my side project :D




That's really cool. I use something similar (http://memegen.link) in my noddy project "Zen D Trump", which takes his tweets and pastes them on pictures of relaxing things like beaches and waterfalls.

https://github.com/sammoorhouse/zendtrump/blob/master/zendtr...

@ZenDTrump


And this is why URLs as input don't work:

http://urlme.me/philosoraptor/what%20if%20urls%20were%20actu...?

Where's my question mark? ;)

Nice idea and nice execution though; too bad the constraints work against it.

EDIT: Aaaaaand the UI broke on HN too!


http://urlme.me/philosoraptor/what%20if%20urls%20were%20actu...

FTFY.

But yeah, having to URL-encode stuff is annoying
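For reference, Python's standard library handles the encoding; note how a trailing "?" must become %3F to survive in the path, which is exactly what tripped up the philosoraptor link:

```python
from urllib.parse import quote, unquote

# An unencoded "?" would start the query string and vanish from the path.
raw = "what if urls were actually readable?"
encoded = quote(raw, safe="")
print(encoded)   # what%20if%20urls%20were%20actually%20readable%3F

# Decoding round-trips back to the original text.
assert unquote(encoded) == raw
```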


With slightly different server-side handling, that could work.

HN would still remove the ? though.


This project alone is more convincing than the entire linked article. Needless to say, I love it!


Facebook's Phabricator software has exactly this built in, but I think it's a relatively little-known feature.

The only difference is that you need to upload a selection of base images and assign shortnames to them, instead of entering a general search phrase.


The 'x' (#cancel-copy) does nothing. You have to focus/blur #link to go back.


Ah, thanks for catching that. I've created an issue here: https://github.com/captbaritone/urlmeme/issues/13


Wow, I love this. My girlfriend and I LOVE sending memes to each other. This will give me major points :-)


The only problem here is that now I can make you appear to say anything I want you to say.

http://urlme.me/success/I,%20captbaritone/lick%20goats.jpg

This can get much nastier fast.


You mean without that website you wouldn't know how to put some random text on an image?


No, I can force you to host words and it looks like you said them.


It's a meme generator. Nobody's going to believe that the person who operates the site "said" anything on there.

Next are you going to upload an image to imgur that "looks like" the creator of imgur is saying something bad?


Imgur has responsibility for all of the content it hosts.


Well, only in that they shouldn't be hosting illegal content (e.g. CP). But beyond that, nothing they host can be construed as being "said" by imgur.


See also the pit of despair known as SharePoint.

example.com/sites/MySite/MyList.aspx

can be functionally the same as

example.com/sites/MySite/MyList.aspx?GUIDGUIDGUIDGUIDGUIDGUIDGUIDGUIDGUID=GUIDGUIDGUIDGUIDGUIDGUIDGUIDGUIDGUID&GUIDGUIDGUIDGUIDGUIDGUIDGUIDGUIDGUID=GUIDGUIDGUIDGUIDGUIDGUIDGUIDGUIDGUID&GUIDGUIDGUIDGUIDGUIDGUIDGUIDGUIDGUID=GUIDGUIDGUIDGUIDGUIDGUIDGUIDGUIDGUID&GUIDGUIDGUIDGUIDGUIDGUIDGUIDGUIDGUID=GUIDGUIDGUIDGUIDGUIDGUIDGUIDGUIDGUID

Why, SharePoint, why?

Or even worse, Angular SPAs

example.com/page#nice_page_you_have_there/shame_if_you_tried_to_share_it

As a tester, I always check:

1. What happens if I go to "example.com", "http://example.com", "https://example.com"

2. Sharing URLs - at least you should get a landing page, but I've seen 404s from shared URLs


This is a pet peeve of mine. I'm writing some Vue.js SPAs, using Nuxt.js for server-side rendering (probably a clone of React's Next.js) and, when appropriate, using the URL for reproducible "what you are currently looking at" state as much as reasonably possible. It's disappointing how often apps these days won't let you middle-click a link to open it in a new tab, destroy your original URL when rewriting navigation history to redirect you to login, etc.


While I've become less than enamored by Microsoft lately, these are difficult decisions to make because every assumption carries a penalty. All abstractions are leaky. That being said, maybe they can do better. I used SharePoint 2007 and was told 2010 and beyond were much better?

I've been playing with Drupal for a few weeks, and there's the concept of an alias in Drupal, where everything is /node/{node-id} internally but the user always sees the alias.

As for Angular 2+, you can have routes, I believe. Like {base-url}/user/keganunderwood can show you the profile page for me, even if you go to it directly.


Which of the following URLs are clearer?

    1a. https://foo.com/
    1b. https://www.foo.com/

    2a. https://foo.com/search/products/couches/color/red
    2b. https://foo.com/search?products=couches&color=red

    3a. /posts?recent
    3b. /posts?recent=
    3c. /posts?recent=true

    4a. /post/1234/products-couches-furniture-red-cheap-recent-ikea-tags
    4b. /post/1234/
I can think of good reasons for picking each of these, and even good reasons for picking their alternatives. URL design is tricky to get right because realistically you can't change it later.


> 1a. https://foo.com/
> 1b. https://www.foo.com/

www for old folks, non www for all the rest.

> 2a. https://foo.com/search/products/couches/color/red
> 2b. https://foo.com/search?products=couches&color=red

https://foo.com/search?products=couches&color=red

Those are parameters, not a path. The first one only has value as an SEO optimization.

> 3a. /posts?recent
> 3b. /posts?recent=
> 3c. /posts?recent=true

/posts/recent

> 4a. /post/1234/products-couches-furniture-red-cheap-recent-ikea-tags
> 4b. /post/1234/

Both should work on your website, and 4b should redirect to 4a.


I disagree with 1 as a blanket statement.

There are technical reasons to consider, as well (yes, it's DNS again). If you want to receive emails @foo.com, you have to set MX records for foo.com., and that means you can't set a CNAME anymore - you'll have to make do with A/AAAA records.

For a lot of applications this is not an issue, but it does mean an overhead for highly distributed services. You won't see many global/high traffic companies drop the benefits 'www' gives. This is not purely to get the 'old folks' market.

(PS: The subdomain doesn't have to be www., of course - cnn goes with edition.cnn.com., for example.)


Yep. The DNS hierarchy exists for a reason, and if you have a bunch of machines, you want to use it.

Also, I'm a single person, and I have multiple independent web sites. Many of them live under the same domain. Lots of companies have many more. You can play annoying tricks to smush them all into a single name (at least, most of the time), but why?

I think the world is ready to internalize the idea of hierarchical names, at least insofar as understanding they're independent entities grouped under the same name. I haven't seen any studies, but it seems like we're past the 'do I need the "www"?' point.

All that said, I also like short. The magic comes from knowing when to choose what, and I don't think you get there with rules-of-thumb alone.


I disagree with 2a. No reason why this shouldn't be a path that the user can edit to go one level "up" to /search/product/couches.

That's kind of the point of the article - these are parameters for the programmer, but are UI to the user.

Separating parameters with "&", except for the first key-value pair, which is separated from the path by "?", is not intuitive. On the other hand, path editing is familiar.


Unless you have only one product that has color, or color is the only thing you can filter your couches by, then no, it's not a hierarchy. There is no notion of "going up".


> No reason why this shouldn't be a path that the user can edit to go one level "up"

While I find it more visually pleasing, this is my issue with that style: traversing up would first land on /search/product/couches/color, which doesn't make sense. I've seen alternatives like /color:red/ before, which avoids the pairing problem but feels odd.


And it implies you only have product and color. What about size, price, date, description, rating, etc.? It is a "search" URL, after all.


Products aren't hierarchically ordered first by type, then by colour, then by material, etc. These are all attributes with multiple possible orderings (or no order at all).

Of course, if the site is designed such that it forces you to pick those attributes in a specific order, that URL scheme would make sense (but that would be a poor design for other reasons).


I'd be happy with any of these. What I'm meeting with my team next week to discuss is establishing some conventions to avoid stuff like this:

4c. /post/_level/modular/1234--Products--Couches%20red+cheap%20ikeaTags%28OLD%29.aspx


I think this is more user friendly.

    https://foo.com/search?products=couches;color=red;show_recent;
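On a recent Python (the `separator` argument to `parse_qsl` was added in the 2021 security releases), such a semicolon-delimited query string parses cleanly; a small sketch:

```python
from urllib.parse import parse_qsl

query = "products=couches;color=red;show_recent;"
# separator switches from the default "&" to ";"; keep_blank_values keeps
# the valueless "show_recent" flag instead of silently dropping it.
params = dict(parse_qsl(query, keep_blank_values=True, separator=";"))
print(params)   # {'products': 'couches', 'color': 'red', 'show_recent': ''}
```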


Wow, it does feel much more readable than using an ampersand. I'd still prefer the ampersand, though, because it plays well everywhere, and the kind of people who'd look at a URL and try to understand it are too accustomed to the ampersand anyway.


Of course you can change it later, that's what 301s are for.


If you make money with it, 301s are going to hurt the wallet for some months.


You obviously have no idea what you're talking about.


I had several websites where changing the URL schemas resulted in Google rankings dropping, resulting in advertising revenue loss. I don't see how I can be more honest than that. And frankly, you are not just unfair, you are being rude.


You changed them without redirecting properly.

Actually, tell me a few of these websites.


Those are porn websites, so I'm not going to share them. And there are only two kinds of redirection, permanent or temporary, so it's pretty hard to do it incorrectly.


> I love Stack Overflow's URLs. Here's an example: https://stackoverflow.com/users/6380/scott-hanselman

> The only thing that matters there is the 6380. Try it https://stackoverflow.com/users/6380 or https://stackoverflow.com/users/6380/fancy-pants also works. SO will even support this! http://stackoverflow.com/u/6380.

This works too: https://stackoverflow.com/users/6380/a3n

That doesn't seem right, and on a different site could even be dangerous.


That's a "slug", it's extremely common and IMO a good thing.

Sure, you can create weird-looking or even misleading URLs that way, but I don't think it's a big problem, because 1/ as soon as the page loads the URL gets rewritten to the real title and 2/ it's often very easy to obfuscate links regardless. Many platforms let you hide your links behind an href with some markup, for instance, so you can make bogus links very easily. Think of something like:

    <a href="http://evil.org/">http://google.com</a>
This is very common in spam emails.

You can't even trust the browser's link preview tooltip, because it can be overridden in JS. So in general it's a bad idea to blindly trust a URL "from the outside", slug or not.

I really, really wish YouTube would do the same thing, for instance; it's completely impossible to know what a YouTube link is pointing to. You could argue that they want short URLs, but since they already have a "youtu.be" shortening service to make them even shorter, it feels a bit redundant.


> You can't even trust the browser's link preview tooltip because it can be overridden in JS.

What!? Just tooltip, or status bar also?


I'm talking about the "preview" usually shown at the bottom left of the browser when you hover over a link. By using a JavaScript event handler on the link, you can override what happens.

Google does that, for instance: if you hover over a search result it'll look like a direct link to the website, but if you look at the HTML source it's something like this:

    <a href="https://en.wiktionary.org/wiki/test" onmousedown="return rwt(this,'','','','2','AFQjCNHdfeYp_b4PzYbkDh9qequUqhrOQw','','0ahUKEwjmkPK0qfrUAhVD2hoKHSc9DG0QFggwMAE','','',event)">test - Wiktionary</a>
So even though the href goes to Wiktionary in this case, if I click the link the browser goes to a Google page that then redirects me.

You can see the real URL by right-clicking the link and then hovering over it again; the right-click causes the "onmousedown" code to run and replace the href with the real value.

DuckDuckGo uses a "click" event handler instead. As far as I can tell, Bing doesn't do anything and links directly to the target website, which is odd. I may be missing something.


I don't know how this works in different frameworks, but in principle, nothing prevents you from arranging it so that /user/6380/hanselman is valid but /user/6380/[somethingelse] 404s.

Edit: but this has the potential to break links if the user changes their display name.


I use the same scheme for blog posts on a website I develop. I solve this problem by looking at the slug and redirecting to the correct one if it is wrong.
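That redirect-to-canonical-slug approach might look like this (a framework-agnostic Python sketch; `POSTS` and `slugify` are made up for illustration):

```python
import re

POSTS = {1234: "Products: Couches & Furniture"}  # hypothetical post store

def slugify(title: str) -> str:
    """Lower-case and collapse runs of non-alphanumerics into single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def resolve(post_id: int, slug: str):
    """Map /post/<id>/<slug> to the page, or 301 to the canonical URL."""
    canonical = slugify(POSTS[post_id])
    if slug == canonical:
        return ("200", f"/post/{post_id}/{canonical}")
    # Slug is stale or wrong: serve a permanent redirect instead of a 404.
    return ("301", f"/post/{post_id}/{canonical}")

print(resolve(1234, "stale-slug"))
# → ('301', '/post/1234/products-couches-furniture')
```

Only the numeric ID carries identity, so renaming the post never breaks old links; they just bounce through one redirect.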


Dangerous? How so?

This scheme is used on other sites, too.


"Other people do it," by itself, is not a great justification.

It's yet another way to mislead. Just because other misleading schemes exist doesn't mean this isn't also misleading and potentially bad.

As for "How so" ... I didn't think it through. I'll go with "potentially not good," but equally not thought through. Since the subject of the article is URLs as UI, when you send someone a URL to "look at this", what they see is the URL, and in my example the human readable part is "the site" and "a3n", but what they get is nothing to do with a3n.

I can only intuitively start with "that's misleading," and imagine (but not point out) the possibility of "something bad". Maybe something merely annoying like rick-rolling.


Last time I was localising a service, I ended up localising the URLs as well; if URLs should be readable, they should be readable in the localised language too, right? Luckily, there's the RouteTranslator gem, which made it trivial: https://github.com/enriclluelles/route_translator/


Absolutely true, and also a problem.

The fact of the matter is unskilled people are always going to be designing websites. This is OK!

However, it does mean that we should require them to do as little as possible so they have the best chance of getting it right. Asking them to design a second UI right from the start on top of the first HTML/JS one -- and then telling them never to change it(!) -- is a little much.

Instead of URLs, websites should have UUIDs to identify each resource. They should also have metadata describing what that resource is. The metadata (eg "articles/urls-are-uis") should be able to change without breaking links to the resource. Browsers should be intelligent enough so that when you hover over a link you see the metadata, not the UUID.

(This has one downside, which is that if you link to thing X and the UUID is later changed to point to thing Y, it may look like you linked to something you didn't mean to. This can be trivially fixed by including a "what the metadata was when I made the link" field in the link along with the UUID.)

EDIT: I'm actually not set on UUIDs specifically, they're a little long. Any random, non-meaningful identifier is fine.

Really I'm just saying that points 2 & 3 of the article are so good we should have made them the default (and perhaps only) option from the start.
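The proposed scheme (stable random ID, mutable metadata, metadata snapshotted into each link) can be sketched in a few lines of Python; all names here are illustrative:

```python
import uuid

# Hypothetical resource table: the UUID is the permanent identity;
# the human-readable metadata can change without breaking links.
resources = {}

def publish(metadata, body):
    rid = str(uuid.uuid4())
    resources[rid] = {"metadata": metadata, "body": body}
    return rid

def make_link(rid):
    # Record the metadata at link time, so a later rename or repointing
    # of the UUID is detectable by comparing the two values.
    return {"id": rid, "metadata_then": resources[rid]["metadata"]}

rid = publish("articles/urls-are-uis", "...")
link = make_link(rid)
resources[rid]["metadata"] = "essays/urls-are-ui"   # rename: old links still resolve
print(link["metadata_then"], "->", resources[link["id"]]["metadata"])
```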


Yes, I absolutely agree that good URLs are extremely useful, although personally I don't really like the long-string-of-text pattern, especially if it isn't actually significant. A short identifier is useful when sharing them offline: "HN article 14723409" or "YouTube video AQcSFsQyct8" or "forum thread 5705591"

For a blog, news, or other chronological content, I'd like to see a timestamp of some form. If it's rigidly hierarchical content, then a hierarchy (of which I should be able to remove 'subdirectories' to see the parent content) makes sense. Otherwise flat IDs are OK too.

Too bad browser developers seem to love hiding or mutilating them...


A string of text is much easier to remember than a bunch of digits. Especially if there are more than 4 digits in a row


The string of text, if you are seriously trying to remember it (which I seriously doubt... do you really try to type in those sentence-long title slugs from memory?!) is subject to domain-specific forms of corruption, in the same way direct quotes from people or TV characters tend to be: you subtly change the grammar or replace a noun with a synonym with which you are more familiar.

Regardless: I am quite serious... do you seriously try to remember and type, from memory, URLs with title slugs?


I doubt he does that; however, I collect some links in a text file, and the ones with text in them are the ones I can identify instantly. The ones with just IDs are totally opaque as to what they are and usually require an accompanying comment in the file.


I agree with the article, and I really don't understand how Google Search can be so bad with URLs. They seem to get more and more parameters every time I look, to the point that my browser (Safari) hides the URL just for them.


Google Search is perhaps the definitive example of a site with no interest in human-readable URLs. Google wants you to navigate the web using Google, not by cutting and pasting URLs.


Plus it would be a bit ironic if the Google Search page itself cared for SEO :)


I switched to DuckDuckGo, but for a while I had a browser extension that would fix Google URLs for me. Recently I added another one for Twitter links.


This is all solid advice, but there's something that still bothers me about URLs:

From one side I hear that hypertext should be the engine of application state. This implies that the URL router should control just about everything, and that you should be able to click a link in an email and jump directly to any state of your application. From the other side I hear that web apps can be just as capable as desktop apps, and it's only a matter of time before we'll be using PhotoshopJS in a browser.

What isn't said is that Photoshop has no concept of an "address bar", and it probably will never have one. As with so many things, the best practices for a blog or message board might be completely different from the best practices for a creative application. Could you design a URL format to represent the state of a Photoshop editing session? Would you even want to?

Something's gotta give.


Insisting that a URL change with state is taking it too far. It's not how I understood what Tim Berners-Lee and others were saying at the beginning. I would be interested, however, in articles that espouse this.

If the URL represented state, then it should change as you're filling out an HTML form, at each keystroke. But instead there is one URL for the blank form and one after you click Submit.

A better rule is one URL per "document" or "record." So in your Photoshop example, there would be a different URL per file that you edit (www.photoshop.com/image001.psd) but not per edit. Well, if the app saved versions, then you could append ?v=203. But in general I think it's enough to align URLs to "documents" (like a news story) or "records" (like a particular profile in a contact database).


> Insisting that a URL change with state is taking it too far. It's not how I understood what Tim Berners-Lee and others were saying at the beginning. I would be interested, however, in articles that espouse this.

I'm referring to HATEOAS, the final and most demanding "stage" of RESTful API design: http://timelessrepo.com/haters-gonna-hateoas
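For anyone unfamiliar with the term: in a HATEOAS response the server embeds the legal next actions as links, so clients follow URLs instead of constructing them. An illustrative shape (not any specific API):

```python
import json

# An order resource that advertises its currently legal state transitions
# as links, rather than the client hard-coding URL patterns.
order = {
    "id": 42,
    "status": "unpaid",
    "_links": {
        "self":   {"href": "/orders/42"},
        "pay":    {"href": "/orders/42/payment"},
        "cancel": {"href": "/orders/42/cancel"},
    },
}
print(json.dumps(order["_links"]["pay"]))   # {"href": "/orders/42/payment"}
```

Once the order is paid, the server would simply omit the "pay" and "cancel" links, and a conforming client would stop offering those actions.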

> A better rule is one URL per "document" or "record." So in your Photoshop example, there would be a different URL per file that you edit (www.photoshop.com/image001.psd) but not per edit. Well, if the app saved versions, then you could append ?v=203. But in general I think it's enough to align URLs to "documents" (like a news story) or "records" (like a particular profile in a contact database).

This sounds like a good idea.


If you're using non-destructive editing a URL might indeed be helpful for sharing the progress and evolution of a file with others. Something like photoshop.app/jdavis703/file/oceanview.psd/layer/colorcorrect might be really useful.


That's a really interesting idea.


This URL I've memorized and revisit regularly is a bite-sized marvel of semantic clarity:

https://www.wellsfargo.com/mortgage/rates


It passes the 'parent exists' test too.

https://www.wellsfargo.com/mortgage


URLs are also inherently useful state managers, with powerful time traveling histories, when used well. cough cough


First thing with new Safari: Change default setting for URL bar to actually show URL. Because UI.


I can’t remember if it was actually shown or just argued for, but the simplification of the URL in the address bar actually helps identify phishing sites, as there is a lot less text to process, and the one character that might have been changed becomes more visible than before :)


For the "easier to read" part I would prefer Chrome's black vs grey url string.

But for phishing... with all the tricks possible with Unicode lookalikes, I wonder if people still bother with easily visible character swaps, like qoogle instead of google?


I first realized that URIs are UI when I saw my first billboard with a URI on it; it must have been around 2000 or so.


> I proposed this.

Proposed what? Great article, and I completely agree with it; I'm frustrated perhaps daily by bad URLs (I don't know if I've ever written a lamer description of myself!), but I was very confused by the ending.

I was expecting the author to propose an (informal but) concrete system for URL UI. I know it's sort of implicit throughout, but a summary would be good.


Made me think of how (and perhaps it's an anti-pattern) I find myself using React Router to manage state between components...


> I love Stack Overflow's URLs. Here's an example: https://stackoverflow.com/users/6380/scott-hanselman

While it is nice UI, it's probably SEO first. This ensures keywords in the URL while making it safe to mangle.


I was thinking this recently as I tried Reddit's mobile app. I found that navigating subreddits through URL was so natural that I was kind of lost in the app -- and frankly Chrome did a better job of suggesting the content I wanted than the app did.


I hate that people use shorturls on Twitter, especially news sites with paywalls. I check the full URL of articles in my Twitter client before I click them, especially when people share the links without any description.

Since Twitter gives all links the same allotment of characters, it's really frustrating that people insist on sticking to shorturls.

Goes to show how tracking and advertising keeps ruining the Web.


> Since Twitter gives all links the same allotment of characters

Does it? The documentation I've read doesn't suggest this, unless you're including Twitter's own wrapping of links with its own shortURL t.co


When I type in a URL, the character counter doesn't go down as I extend the URL. So there is no upside to doing it for me.


Weird, that doesn't happen for me.

http://imgur.com/a/2auu0


URLs are addresses. For the web, we messed up and URLs became part of the UI. It shouldn't be necessary to use URLs in most cases, though; e.g. imagine having to use IMAP URLs to read your email.


Man, I'd love to be able to link to individual emails from my calendar.

Having entries like the following in my calendar app would be amazing:

"Company event on <date>, for details see linked email"


If we remembered there was more to the "web" than HTTP, this would be a thing.

I remember Tim had a diagram that showed "links" between all different systems, all addressable by URI/URL.


URLs are a very important part of things for me. I spend just as much time figuring out how I want to structure my URLs as I do my database.


URLs are a programming language.


At some point here the collective web development world is going to realize what it lost when it tossed REST/HATEOAS out in its rush to appear modern.


When did this happen, and what frameworks did the throwing out?


Anecdote: in the last half year, and GraphQL. Proponents are coming out of the woodwork saying all sorts of crazy stuff about how it is better than REST, and the meetup attendees are eating it up without any hint of skepticism.


Thank you very much. I will investigate.


I know this is nitpicky but the author's website doesn't follow their own advice:

https://www.hanselman.com/blog/URLsAreUI.aspx

https://www.hanselman.com/blog/URLsAreUI --> 404


Yep. 17 year old blog. Technical debt. It's on Github and it's on my list. I also specifically say it in the post. ;)


The article does mention that.



