This is something rsync.net (the service) does well. For example, I haven't seen any other service with a "CEO page"[1]. Maybe it's something common and I just haven't noticed in any other service because it was not as discoverable.
I don't know if it's really useful since I don't think I've ever needed to forward anything to any CEO, but I'm not even a customer and this page is the first thing that came to mind when reading the article.
You'd think that OpenAI, by now, had technical writers on payroll. Well, according to LinkedIn, they don't. It's not that surprising, then, that their documentation is in such a sorry state. Why they haven't hired specialized roles for documentation is beyond me; they either think they're irrelevant, or they ruthlessly prioritize growth over docs. Whatever the reason, they're hurting themselves.
Hiring technical writers would be admitting to human supremacy in the technical writing space. Better optics to use GPT-generated docs regardless of impact to engineers and users.
I literally built this concept into my new project last night, because I wanted to add evidence to the story about the Google Gemini App moderating yt-dlp, by showing that the Gemini API does not. Also to enable a funnel to the project and the other business-y reasons Simon outlines.
The other thing I did was use localStorage to keep a list of the public chats you've visited, so that when you come back you can see the chats you've already read. It also makes them easier to find again than hunting down wherever you originally got the link from (like scrolling back through a text thread).
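In case anyone wants to copy the idea, it's only a few lines. A rough sketch (the storage key and field names are mine, not necessarily what the project uses):

    // remember which public chats this browser has opened, newest first
    function rememberVisit(chatId: string, title: string): void {
      const key = "visitedChats"; // hypothetical storage key
      const visits: { chatId: string; title: string; at: number }[] =
        JSON.parse(localStorage.getItem(key) ?? "[]");
      const updated = [
        { chatId, title, at: Date.now() },
        ...visits.filter((v) => v.chatId !== chatId), // de-duplicate repeat visits
      ].slice(0, 100); // cap the history
      localStorage.setItem(key, JSON.stringify(updated));
    }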
And there are probably a few whole generations already that don't even know that link rot affects all links, not only links to "internal" URLs (like, say, a Discord image).
So there are artists who just link to some social media website, not considering that accounts can be suspended, usernames can change, etc.
Similarly, there are also developers who "link to"[1] dependencies without considering that repositories might disappear (together with the source code for that dependency's version, if nobody backed it up), a package's version might be removed from registries, online documentation for a dependency could disappear (ugh), etc.
[1]: Just adding name+version to whatever manifest file and forgetting about it forever. Maybe adding a cache (not even a proper mirror, much less any self-sufficient way to build the dependency in case of disaster).
To mitigate link rot I always include a title along with the URL. This is especially important for URLs with opaque IDs, like YouTube's, instead of slugs. If you visit a link to a pulled YouTube video, you're left not knowing the title to search for elsewhere.
This doesn’t always work because many websites neglect page titles. I’ve always wanted to ask a wide range of web developers why they neglect titles. Why?
The fact that people use the HTML title attribute for everything except the proper title of the thing being pointed to is sort of perturbing. Even for Google SERPs and here on HN, it would be useful to have access to the full title instead of a truncated one, but neither site is an exception to the tradition of non-use.
You could probably do some fun compression scheme where you provide just enough information for a fixed LLM version to regenerate a page that satisfies its author's goals.
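Purely as a thought experiment, the "compressed" page might be nothing more than a small record like this (every field name and the model id are invented):

    // everything needed to (re)generate the page from a pinned model
    interface PageSpec {
      model: string;      // a pinned model version, so regeneration stays stable
      prompt: string;     // the author's goals, constraints, and key facts
      temperature: 0;     // no sampling randomness
      checksum?: string;  // optional hash of a known-good rendering, to detect drift
    }

    const aboutPage: PageSpec = {
      model: "example-llm-2024-01", // hypothetical model id
      prompt: "One-page bio for ACME Corp: founded 2019, sells rubber ducks, contact at /contact.",
      temperature: 0,
    };
    // a renderer would feed `prompt` to `model` and compare the result against `checksum`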
Your idea seems fun if people can, one by one, describe what the page used to look like and/or crawl the web for clues. We can have our "under reconstruction" banners back.
I don't think there is a browser extension, but there is a site called websim AI that creates fake pages in real time using LLMs, and it honestly works surprisingly well.
Love simonw's blogs. I've been reading his posts about using the Python interpreter since September last year. He mentioned he started doing it on his own to generate whole applications on the go.
I'm still not clear on how to do it, though. I expected some easy option or button, and didn't find it.
I admit I could have spent more effort trying to figure it out, but a handy link/tutorial would have helped me keep doing it instead of pretty much ignoring GPT most of the time.
I absolutely loathed a lot of Microsoft products for this simple thing. VSTS/VSO/DevOps/whatever name they have now, SharePoint, etc. were absolutely atrocious at this. Here is a deep link that's 700 characters long, with a couple dozen base64 query strings and a nonsensical path. "What's the problem? Can you use a URL shortening service? URLs are long, nothing we can do about that." Fuck me.
Back in 2014, my team had an internal tool to view the state of resources in our system. All resources and their states were stored in a SQL database. Yet the web app they developed was a SPA (before client-side routers and stuff were invented), and it never updated its URL or supported deep linking. Whenever you wanted to send someone an email or an IM about an issue with a specific resource, you had to tell them "go to X tool, search for Y, click on Z -> W -> M -> O -> K, then you'll see the issue there". I found that so fucking infuriating. Why can't I just use an https://X.com/Y/Z/W/M/O/K link to share that deeply nested state? When I brought it up multiple times I was always told "it's not a priority and it's not that big of a deal".
One time we were given two weeks to work on whatever we thought needed fixing. I decided to build an alternative that supported deep linking. But I also decided that all deep links should accept an `/api/` prefix that just returned the content in JSON format. It was such a hit with everyone in the team/company that usage of tool X almost vanished overnight, even though my tool was much more rudimentary and didn't have all the features that tool had. Nonetheless, it turns out most people just wanted an easy way to share links rather than a "really powerful SPA that lets you dig down and investigate things".
A month later, the team that worked on tool X announced in a huge email to the whole company that they now supported deep links. Yet they thought the simple feature of returning JSON data on the `/api/` prefix was irrelevant. Five years later, my tool's UI became obsolete, but the actual service was promoted to a "vital internal service" because so many other teams had built automation around the `/api/` prefix URLs, and that team had to take over the code and maintain it.
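The `/api/` trick really is tiny to implement, which is part of why it's such a shame it gets skipped. A rough sketch with Node's built-in http module (the paths and data are made up for illustration):

    import { createServer } from "node:http";

    // toy lookup of resource state, keyed by the deep-link path
    const resources: Record<string, { name: string; state: string }> = {
      "/Y/Z/W/M/O/K": { name: "K", state: "healthy" },
    };

    createServer((req, res) => {
      const url = req.url ?? "/";
      const isApi = url.startsWith("/api/");
      const path = isApi ? url.slice("/api".length) : url; // same path, two representations
      const resource = resources[path];
      if (!resource) {
        res.writeHead(404);
        res.end("not found");
      } else if (isApi) {
        // machine-readable view of the exact same deep link
        res.writeHead(200, { "content-type": "application/json" });
        res.end(JSON.stringify(resource));
      } else {
        // human-readable view
        res.writeHead(200, { "content-type": "text/html" });
        res.end(`<h1>${resource.name}</h1><p>State: ${resource.state}</p>`);
      }
    }).listen(8080);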
I've run into that kind of situation too. I've learned that in an office environment, people are often content using a tool and following the established procedure, and don't consider that it could be better -- even if you ask them! Until you show them something better...
Good job :) Hope you at least got some recognition out of your efforts
One reason React Server Components make me uncomfortable (they do have their merits) is that they encourage commingling of API and presentation. And we all know that presentation layers always fail to design for some user/use case you just cannot yet foresee.
It stands for Single Page Application. It's a web application that works by loading a big chunk of JavaScript and using that to render every "page" of the application, rather than providing links to different pages for different parts of the app.
Think Trello (SPA) compared to Hacker News.
These days well written SPAs can use the HTML5 history API to provide proper URLs to different parts of the application, so linking and bookmarks still work.
Historically this hasn't always been the case, and even today poorly written SPAs may fail to implement proper linkable URLs.
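The pattern is roughly this (a minimal sketch; the render function is just a placeholder):

    // give each "page" of the SPA a real, shareable URL without a full reload
    function navigate(path: string): void {
      history.pushState({}, "", path); // update the address bar
      render(path);                    // draw the matching view
    }

    // back/forward buttons
    window.addEventListener("popstate", () => render(location.pathname));

    // direct visits and shared links land on the right view too
    render(location.pathname);

    function render(path: string): void {
      // placeholder: look up the view for `path` and draw it
      document.body.textContent = `Rendering view for ${path}`;
    }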
Simon is often the very first person to write up a developing story in AI that most people close to the matter already know about, but he does it fast, links to all the relevant facts, and makes it accessible for the people further away from the story to discuss.
URLs are the cornerstone of the web: a precise, universal (hopefully), long-lasting (hopefully) way of referencing articles and other resources. It's always frustrating to see people fail to appreciate their brilliance, e.g. "search for this on YouTube" rather than just pasting a link into a message. Giving a write-up a permanent home on the web can certainly help give it visibility, and help the author avoid writing up the same ideas again.
Related classic essay: Cool URIs don't change. [0][1]
Two under-utilized properties of URLs are also that:
- there's a near-infinite supply of them
- they support forward declaration
Together, the practical upshot is this: suppose you're having a conversation with someone, or responding during the Q&A of a talk or whatever, and you want to be able to say, "Yeah, we thought about that, and we have some information about it on our site—just visit acmeinitiative.example.com/skub," except that you haven't actually written the /skub article yet. That doesn't preclude you from saying, in the moment (i.e. live), that /skub is, effective immediately, the designated handle for such an article: it's where the article will appear once you do write it, and it's how any interested party should retrieve it once it does appear—whenever that is. (The same goes for articles published by other people/organizations and other third-party resources that you want to reference—just mint a URL from your namespace on-the-fly, and then, whenever you get a chance, set up a redirect to whatever it is you wanted to link to.)
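Mechanically, honoring a slug you've promised before the content exists can be as simple as a table of reserved paths with a holding page as the fallback. A sketch (the /skub example is from above; everything else is invented):

    import { createServer } from "node:http";

    // paths promised out loud, mapped to where they point once they exist;
    // null means "reserved, content still coming"
    const promised: Record<string, string | null> = {
      "/skub": null,                                   // article not written yet
      "/roadmap": "https://example.com/notes/roadmap", // later: redirect to its real home
    };

    createServer((req, res) => {
      const target = promised[req.url ?? ""];
      if (target === undefined) {
        res.writeHead(404);
        res.end("not found");
      } else if (target === null) {
        res.writeHead(200, { "content-type": "text/plain" });
        res.end("Reserved. The write-up will live at this URL once it exists.");
      } else {
        res.writeHead(302, { location: target });
        res.end();
      }
    }).listen(8080);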
There are so many recordings (podcast episodes, etc.) that I've listened to involving smart, technical people who definitely control their own domains but don't think to take advantage of this. Usually they sort of mumble some description that you might be able to use to find whatever they're talking about, or they manage to get only half the words in the title wrong while trying to recall it for the host, and then you and every other interested listener have to individually squander time and attention to track it down. It results in a huge waste of collective energy.
The UX is not good enough yet. We would have to 1) show people who follow the link that it's reserved, 2) offer good enough ways to be notified once the link comes online, and 3) know how likely the link is to actually work in the future, based on prior commitment.
I think the idea is that if you were recording a podcast it wouldn’t be live (I know the parent used the word live but I think they meant “live” during recording or in conversation while the episode is being created), so you are free to make references to soon to be declared URLs.
You just have to make sure you have populated the content at the location you are referencing before you upload or publish your episode for your listeners.
...and URLs being forever cuts both ways. Want to reorganize your domain's structure? Better set up 301s forwarding the old addresses, forever, and hope there's no overlap (or design to avoid one) between the old schema and the new.
Transfer of ownership? Well, all those hyperlinks from other sites don't know about it.
This is not a criticism of URLs per se, but I find it troubling that the mere act of visiting one is something that we have to warn people away from. It's like:
> Don't eat moldy food, or else you'll get sick
Ok, that sounds like good advice, and easy to follow.
> Don't look in the direction of moldy food, or else you'll get sick
That's unreasonable. What kind of madman wrote the rules for this universe?!
When I try to imagine a better way, it usually disallows the referents of a URL-equivalent from changing after they're created--that way trust bestowed once can be reused without prompting the user a second time for the "same" thing.
For that reason, I'm not a fan of placeholder URL's like you're describing. The instability of URLs to me feels like more bug than feature.
I don't know what you're talking about at the end, but you're definitely applying an inconsistent (double) standard at at least one point in your comment. The status quo is one where the reference is either completely unresolvable, or the referent accessible only after some effort that would have been better avoided. Forward-declared identifiers don't exacerbate any of these issues. Meanwhile, the set of things ameliorated by them is non-empty.
Basically I'd rather be using cryptographic hashes of the page instead of URLs so that if I trust the hash then I have an obvious mechanism for determining if I should trust the payload that the hash refers to.
Stability like that would severely limit the places that a malicious payload could hide, and it would enable users to compare notes about what is or is not trustworthy.
If the identifier can exist before its referent, then any such verify-the-payload-given-the-id activity becomes much more complicated because we now have to wonder if we're getting different versions of the page for the same identifier (e.g. like when airlines present different prices based on which browser you've used even though you used the same link in each. I'm trying to dream up a web where that's not possible).
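The verification half of that idea is already cheap in a browser; it's the addressing and distribution that isn't. A sketch of the check, assuming the identifier you were handed is a SHA-256 hex digest:

    // fetch a payload and refuse it unless it matches the hash it was addressed by
    async function fetchVerified(url: string, expectedSha256Hex: string): Promise<Uint8Array> {
      const bytes = new Uint8Array(await (await fetch(url)).arrayBuffer());
      const digest = await crypto.subtle.digest("SHA-256", bytes);
      const hex = [...new Uint8Array(digest)]
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
      if (hex !== expectedSha256Hex) {
        throw new Error("payload does not match its identifier; refusing to trust it");
      }
      return bytes; // the same identifier always means the same bytes, or this throws
    }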
Hashes are not memorable. I can’t give my gramma a hash, but I can give her example.com/recipe123. If she trusts example.com, or me, she should be able to trust the content. If she does neither, a hash will not save her because she needs to have seen the content first to make a decision to trust it.
It's fairly common to create "link" objects which have both a human readable component and a URL component. Not much would be lost if the URL got less readable, we'd just have to be more diligent about associating a human readable string with the link. This could be done automatically if the content happens to provide its own "name" field. Otherwise you'd just have to give your link a name.
Links which are displaying an ad-hoc name can show up in one color. Links which display whatever the content names itself can show up in another color. We can have different fonts for whether people you trust have flagged the content as trustworthy or whether they've flagged it as malicious. Nobody needs to see the hash itself. But none of that works if a link might resolve to different content at different times.
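As a data shape that's roughly the following (all names here are mine, just to make the idea concrete):

    // a link that separates "what to call it" from "what it is"
    interface NamedLink {
      name: string;                            // ad-hoc label, or the content's self-declared name
      nameSource: "ad-hoc" | "self-declared";
      hash: string;                            // content address; never changes for a given payload
      verdicts: ("trusted" | "malicious")[];   // flags from people you trust
    }

    function linkColor(link: NamedLink): string {
      if (link.verdicts.includes("malicious")) return "red";
      if (link.verdicts.includes("trusted")) return "green";
      return link.nameSource === "ad-hoc" ? "blue" : "purple";
    }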
As far as needing to see the content before you know you can trust it... There's no harm in fetching malicious data and taking a peek at it. Just don't act on it.
If we train people to not even look at the threats then they're not going to have a feel for what threats actually look like.
I think that is too much infrastructure for just a link. I think links are fine the way they are today. There are other problems with the web, but it's not links. Also, sometimes taking a look is the same as acting on it; clicking a link in Outlook is often acting on it, if it's some script or whatever it is that people do in Outlook.
If it's content addressed you can gossip content between users, so I'd say it's far less infrastructure because you don't need servers.
But I know what you mean, it's a lot for the user to manage. I'm just looking for something drastic to change because the web as it is makes me feel like a rat in a maze. I'm trying to figure out how to leave notes on the walls for the other rats.
> That's unreasonable. What kind of madman wrote the rules for this universe?!
My chain of thought went: "QR code containing a bobby drop tables! What would a human version of this be? Viral memes, in the original sense, that cause psychic damage? Oh wait, photosensitive epilepsy is a thing."
As a podcaster I feel this, but 1) it's hard to look up precise references without interrupting the conversation flow (I'm optimistic LLMs will help here), and 2) some people would rather tell you to search their name, because that helps The Algorithm.
- FB blocks many sites for sharing copyrighted content (even random blogs)
- Reddit blocks all .ru domains, many archival sites, Telegram links, etc., etc.
- Twitter blocked some blogging platforms
- also, many smaller sites block Discord links (which is justified)
Hopefully this will motivate people to leave them.
Yeah, this is so frustrating. The contortions people have to go through on Instagram, TikTok, LinkedIn and now increasingly Twitter to work around the "algorithm" punishing or forbidding links is infuriating.
"Link in bio" culture is the reason companies like Linktree even exist! And good for them, they're providing a sadly necessary service.
Likewise it's difficult to link to content within those platforms.
But the reason they don't like external links is that if users can't easily follow them, they're more likely to shrug and keep scrolling instead of going off to do something else. That means marginally more ad impressions shown.
Link-in-bio services would be a lot more useful if they accepted a link from the reader—the link of the item that referred them there because the platform it was on didn't allow direct-linking to off-site pages—and then returned the link that the author intended to convey there but was prevented from posting.
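In other words, a reverse lookup keyed by the post that sent you there, something like this (all of it hypothetical):

    // from "the post that wasn't allowed to contain the link" to "the link the author meant"
    const outboundFor: Record<string, string> = {
      "https://social.example/@author/post/12345": "https://example.com/the-essay-the-post-was-about",
    };

    function resolveReferrer(postUrl: string): string | undefined {
      return outboundFor[postUrl]; // the link-in-bio page would send the reader here
    }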
1. It triggers my "AOL Keyword" yuck response immediately.
2. It completely ignores the concept of search bubbles. The results you and I get when searching the same term can be wildly different.
3. URLs and hyperlinks are right there. Instead of trying to make me do extra work you can just link me directly to a thing. That way I can see your exact reference instead of wading through a bunch of reaction videos to the video you wanted me to see.
FWIW I’ve heard ad spots on NPR where a brand says “search for “my financial adviser” and click on Some Brand”. Obviously trying to bump up their rankings by increasing CTR for that term in Google Search. They don’t even need to say “Google it” because they know most people already will.
Cybersecurity is a big topic, and issues of trust aren't exclusive to URLs. Search engines are often manipulated into showing malicious pages high in their listings.
If you're communicating with someone you trust, it's better if they send you the URL directly.
It's easy enough to tell someone to click on https://youtu.be/dQw4w9WgXcQ, but how do you transfer that URL verbally, over a phone call or some other voice-only medium like a podcast, without resorting to an equally hard-to-memorize URL shortener?
For podcasts the answer is to use the show-notes feature to post the URL.
For phone calls the answer is to send a text message.
If you're communicating by audio and have no textual 'side-channel' then yes things are more awkward, unless it's a simple and memorable URL (e.g. example.com).
In Bitcoin, they have devised a way to transform a private key into twelve words. I don't know what that technique is called or where on GitHub it is, but there is for sure a way for a YT URL to be made into words.
You got me curious, so I looked into this. It's called BIP39[1]. I made a quick proof of concept to generate six-word phrases from a YouTube URL using the same wordlist[2].
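For anyone curious why it comes out to six words: an 11-character YouTube video id is 66 bits of base64url, which maps exactly onto six words from a 2048-word list (11 bits per word). A rough sketch of the encoding half (no BIP39 checksum; you pass in the wordlist yourself):

    const B64URL = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";

    // encode an 11-character YouTube video id (e.g. "dQw4w9WgXcQ") into six words,
    // given a 2048-word list such as the BIP39 English wordlist (11 bits per word)
    function videoIdToWords(id: string, wordlist: string[]): string[] {
      // interpret the 11 base64url characters as a 66-bit integer
      let n = 0n;
      for (const ch of id) {
        n = (n << 6n) | BigInt(B64URL.indexOf(ch));
      }
      // split into six 11-bit indices, most significant first
      const words: string[] = [];
      for (let i = 5; i >= 0; i--) {
        words.push(wordlist[Number((n >> BigInt(i * 11)) & 2047n)]);
      }
      return words;
    }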
you can make an url shortener that uses short phrases; the s/key word list represents 11 bits per word, so two-word phrases like ode-beam, halo-cham, or jail-heal cover the first two million urls. in my own password generator http://canonical.org/~kragen/sw/netbook-misc-devel/bitwords.... i use a custom '12-bit words of 5 letters or less' list which does 12 bits per word, so phrases like acute-doc, cups-forms, or crypt-swap cover your first 16 million shortened urls. these options also give you some degree of error correction
using an url shortener has the advantage that it takes you to the thing the podcast wanted to take you to instead of what a search engine chose to sell you
Do you speak this way? I notice a lot of "an" usage online which is not in line with how people speak, e.g. "an horoscope". "An url" is likewise not reflective of normal English pronunciation.
Well, twelve words can certainly be transferred verbally, even though the generated words are not that memorable.
Encoding a link into a more memorable scheme could be done through a URL service which parses a web page with an LLM, generates some tags, and creates custom routing using the tags. Rails, or a more modern tool like Actix-web, can do that easily.
For example, I asked Llama-8B to suggest tags for this HN thread using the title and the first two comments, and it suggested: web, URLs, hyperlinks, online-identity, permanence, flexibility, referencing, resource-management, web-architecture.
> help the author avoid writing up the same ideas again.
Oh, how I wish!
Except that many sites and services are hostile to this because it encroaches on their "attention territory".
If as an author you link to an idea you already carefully expressed elsewhere as a blog post or book, the comment gets put down or censored for "promoting".
More often on HN now, to avoid punishment, I just copy/paste my original writing rather than give the reader a link to explore more deeply.
There's clearly a gap between what we preach as good "academic" ways of spreading information and ideas, and the reality/practice in systems that control expression.
Point taken, but even in the context of Hacker News, if your write-up exists as a blog post with its own URL it can be submitted for discussion in a thread of its own, linking to the blog post.
From the point of view of a Hacker News reader, it's easier if you copy+paste the relevant text from your blog post directly into your comment. None of us are in the habit of following every link. Even if you do have to copy+paste in this way, at least the text you're copying from has a permanent home.
The taboo against 'promoting' is also there for a reason. Sometimes people really are motivated by bumping the hit-count on their page, rather than by contributing to the discussion.
Things are certainly worse on the major 'silo' websites that are engineered to try to prevent people navigating away from their domain or equivalent mobile app.
> Point taken, but even in the context of Hacker News, if your write-up exists as a blog post with its own URL it can be submitted for discussion in a thread of its own, linking to the blog post.
That's a bit like being in the thrall of an intense and interesting conversation and saying "sorry, I have to go mail you the documents". It breaks the flow and defeats the purpose of a technology that was designed to overcome exactly that pitfall.
> From the point of view of a Hacker News reader, it's easier if you copy+paste the relevant text from your blog post directly into your comment.
True. And I do that often enough as well. In addition I want to give the reader a genuinely interesting link (which itself contains further well-researched links to explore the topic). Again, that's what we built this technology for.
> The taboo against 'promoting' is also there for a reason. Sometimes people really are motivated by bumping the hit-count on their page, rather than by contributing to the discussion.
Understood. And sometimes people aren't motivated by hits. Crucially, there's no mechanism for distinguishing the two, and so conversation is stifled out of fear.
> Things are certainly worse on the major 'silo' websites that are engineered to try to prevent people navigating away from their domain or equivalent mobile app.
They certainly are, but do we want to emulate that and let HN become the same?
[1]: https://www.rsync.net/products/ceopage.html