Remember when this was a standard feature of social websites?
If you want to read a person's updates without signing up for Twitter, or without visiting the site, just use the RSS feed to subscribe. If your page is public, anyone with a feed reader can subscribe to your feed and see your latest updates in their feed reader, even if they don't have a Twitter account. The feed contains the information you see on the page, but in a special format for easy aggregation. (2011)
https://web.archive.org/web/20110228080118/http://support.tw...
A news aggregator is the best way to keep track of all the feeds you care about. Facebook will generate an RSS Feed that you can save to your bookmarks folder and view in your browser. You will now have an auto-updating RSS Feed that alerts you of important things on Facebook involving your account. (2008)
https://web.archive.org/web/20080908073106/http://www.facebo...
Still works on reddit, though it wouldn't surprise me if they removed it at this point; they seem to be abandoning the minimalist, clear, and open parts of their interface.
If I remember right, they’ve already said (or at least implied; it really seems like they had absolutely no implementation plans beyond “we want money now”) that RSS and JSON access to posts and feeds will be removed at the same time the API changes take effect.
With interest rates rising and the endless funding for web “startups” (Reddit’s almost 20 years old, it’s comical that they still have no path to profitability while also burning cash on unpopular NFT and AI experiments) going away, the metric most platforms are shooting for has gone from “number of users per month” to “number of interactions we can put ads on per month.”
This of course triggered the usual enshittification tactics: cut off all access to content outside of heavily tracked and ad-ridden apps and (maybe) web "platforms", kill off things like RSS feeds or API access that create goodwill and build community support but don't directly have places to run ads, redesign the interface to deemphasize individual discussions in favor of TikTok-style algorithmic feeds and lots more ad space, and even alter the image host so that what were formerly direct image links now return a "light" web page with extra branding and (you guessed it) ad slots.
The problem is that forums just aren't profitable on their own, but rather than figure out other paid services they could offer that could serve as a companion to a forum (seriously, Reddit is in the perfect position to launch a Patreon clone, and has even discussed doing it in the past), they seem convinced that they need to follow in the footsteps of Facebook and Twitter and destroy their core product in favor of an algorithmic social media hellscape while chasing the latest tech buzzword (as I said earlier, they dumped millions of dollars into NFTs and crypto over the past couple years, and these API changes are supposedly to try and get money out of OpenAI; neither of those plans is likely to be successful in the long run).
Well, that would save me some time I suppose; I'm not about to keep refreshing r/talesfromtechsupport to see if anything interesting pops up. Now if only they'd kill old.reddit, I'd finally be completely free.
The cash-grabbing part is weird, though; initially you just had some ads and Reddit Gold to offset the server costs. Were they running on venture capital money that has run out, or have they bloated the company with so many managers that their expenses have exploded?
It's not that profitability is the focus; it's just that it's in style these days to sacrifice user freedom and utility for it, where that was unthinkable before.
Note that every subreddit and every Youtube channel is also an RSS feed. I have to imagine that some rogue engineers snuck that functionality in over a decade ago and that it's simply escaped the notice of any PMs or bean counters since then.
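For the curious, the feed URL shapes look like this (the subreddit is the one mentioned elsewhere in the thread; the channel ID is a placeholder):

```
https://www.reddit.com/r/talesfromtechsupport/.rss
https://www.youtube.com/feeds/videos.xml?channel_id=UCxxxxxxxxxxxxxxxxxxxxxx
```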
> I have to imagine that some rogue engineers snuck that functionality in over a decade ago and that it's simply escaped the notice of any PMs or bean counters since then.
Nothing really rogue about it. Google was big on RSS initially as a way to subscribe to content across the web. They had their own RSS reader, after all.
Yet, Chrome added an experimental "Follow" feature in 2021 for RSS subscriptions [1]. Edge did the same calling it "Follow Creators" that mostly focuses on YouTube but also uses RSS [2].
It probably doesn't cost them much to keep the RSS feed going, and it's a way to keep a subset of people coming back to watch videos that otherwise wouldn't.
For instance, I dislike having to open YouTube to see what's new (or get notified by email) so I end up missing a lot of new videos from a handful of channels I consistently watch. Once I discovered the RSS feed for videos, I added them to NetNewsWire on the Mac, and it's been a much nicer way to see what's new.
So for people like me, who would otherwise probably never watch new content, the RSS feed offers another way for YouTube to get me back on the site. That being said, I hope they don't kill it.
It very likely costs a bit if you think of it like a bean counter, in terms of "lost ad views." Though in fairness, if you're using RSS to browse Reddit, and you're not using an ad-blocker, you sound like you've all but gone out of your way to view their ads.
I would think the opposite -- YouTube makes most of their money from advertising on videos, so the driving goal is to get users to watch as many videos as possible. The algorithm they use is a big part of that, but RSS feeds are probably a pretty good way to get people who only watch a handful of channels back to the site (as the RSS feeds are just links to the video pages, without video embeds).
That's a valid take. Though not all videos are equally ad-ridden; I suspect if I were to start using RSS to subscribe to channels, I'd be favoring those that don't make much use of them. But yes, it's not so cut and dried.
Thank you SO MUCH for pointing this out: I never knew about YouTube being a feed. I will make a YouTube video demonstrating this so those who use RSS readers will be able to monitor my channel.
Every YouTube playlist has an RSS feed too. Unfortunately they always show the top fifteen items and there’s seemingly no way to reverse that, making the feeds useless when creators add new episodes to the bottom of the playlist instead of the top (which is fairly common).
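The playlist feed follows the same pattern as the channel one, with a placeholder ID:

```
https://www.youtube.com/feeds/videos.xml?playlist_id=PLxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```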
Handy if you parse that into something like SQLite and use that as input for something like `yt-dlp` for channels you want to follow but which are generally unwatchable "live" due to, e.g., a 15 minute video having a near-2 minute sponsor segment and 5 inserted ad-breaks.
Or if you only want certain uploads from a channel, e.g. "A Closer Look" from "Late Night With Seth Meyers", it's simple to filter the RSS and only watch/fetch those.
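A minimal sketch of that pipeline, assuming the feedparser library, a yt-dlp binary on PATH, and a placeholder channel ID (none of which the parent comment prescribes):

```python
# Sketch: filter a YouTube channel's RSS feed by title and hand matches to yt-dlp.
import sqlite3
import subprocess

import feedparser

FEED = "https://www.youtube.com/feeds/videos.xml?channel_id=UCxxxxxxxxxxxxxxxxxxxxxx"
WANTED = "A Closer Look"  # only fetch uploads whose title contains this

db = sqlite3.connect("seen.db")
db.execute("CREATE TABLE IF NOT EXISTS seen (video_url TEXT PRIMARY KEY)")

for entry in feedparser.parse(FEED).entries:
    if WANTED.lower() not in entry.title.lower():
        continue
    # Skip videos we've already fetched on a previous run.
    if db.execute("SELECT 1 FROM seen WHERE video_url = ?", (entry.link,)).fetchone():
        continue
    # --sponsorblock-remove cuts sponsor segments; drop the flag if unwanted.
    subprocess.run(["yt-dlp", "--sponsorblock-remove", "sponsor", entry.link], check=True)
    db.execute("INSERT INTO seen (video_url) VALUES (?)", (entry.link,))
    db.commit()
```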
Very interesting to learn this, and sad to know that it could probably rot and degrade over time. I wonder if YouTube clients like NewPipe use those RSS feeds.
This is literally the way I used Reddit for pretty much the entire time I did. I don't get how anyone tolerates the actual site.
And it's how I use Lemmy now that I've switched over.
YouTube... that may be handy for. But I think the channels I follow regularly like that also publish as a podcast, and most of the time I'm fine with the audio-only versions.
And it's a good thing too, since post-version-3 mastadon instances do not serve HTML text or images. It's all just a JavaScript application. The RSS feed is the only way to actually access the text of posts without executing an arbitrary application. It'd be nice if there were something like Twitter's Nitter, but for mastadon(s).
Mastodon (with an o; a common mistake) has an API, which most alternatives (Pleroma, GoToSocial, etc.) also implement, at least partly. Several frontends, including extremely lightweight clients, exist for this API.
Mastodon furthermore implements the whole server-server ActivityPub standard, which can be used for some actions, like following someone, just fine. This is a very well described standard.
Mastodon doesn't implement the ActivityPub server-client standard; instead it has the aforementioned home-brewed "REST" API. Which, IMO, is a shame, and has caused all clients (mobile, web, etc.) to rely on this nonstandard API.
Edit: the point being: there are several ways to get content from Mastodon, depending on use case and needs. You really don't need to scrape or parse or load the JS client.
> Mastodon doesn't implement ActivityPub server-client standard
To be fair to Mastodon, the main developer did give a reason for this.
> The ActivityPub Client-to-Server spec assumes a thin server and thick client. By which I mean, the client, like an e-mail app, has to download and manage most data from the server. From my understanding that's the only way to make anything like search or username autocomplete or even a notifications tab to work with the C2S at all. In my experience, app developers are generally not excited to do that kind of legwork, and we're entering the kind of P2P territory which comes with its own challenges like the ease of hopping from one device to another, or the fact that to have the same functionality in iOS, Android, and Web, you would need to recreate the heavy-lifting fundamental boilerplate in each separately. For that reason, I am not particularly interested in the C2S part of ActivityPub.
It seems odd to make some distinction between HTML or RSS and a JSON endpoint. If anything, the JSON endpoint is actually more human readable than the HTML or RSS output, you just have an "arbitrary application" that happens to understand HTML and RSS to display it to you. All of them contain the exact same data, just represented in different ways.
Firefox has a nice UI for browsing JSON, but Chrome will just give you a text dump. Ironically, Firefox also used to have a nice UI for browsing RSS feeds, but they removed it a while ago.
I've tried reading the JSON too but typically remember to append .rss to a user profile URL more quickly than I remember or type /api/v1/.../statuses?exclude_replies=true etc (or .json to the profile url if that instance supports that).
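For reference, the two URL shapes being compared, with a made-up instance and account name:

```
# Append .rss to the profile URL:
https://mastodon.example/@alice.rss

# Or go through the API: look up the account ID, then fetch its statuses.
https://mastodon.example/api/v1/accounts/lookup?acct=alice
https://mastodon.example/api/v1/accounts/<id>/statuses?exclude_replies=true
```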
Both are vastly inferior and tedious ways, requiring me to type a new URL. It is not controversial to say the primary purpose of a web browser is to display HTML. Most URLs resolve to HTML pages. It's been this way for 30 years. Only in the last 10 has this radical new application view of the web been creating walled gardens on top of the old document-model web. And that's mostly for commercial/profit-motive reasons, so it's sad to see non-profit-motive software cargo-culting SPA-only dev.
It also means post v3 mastadon is inherently incompatible with indieweb standards like webmention since there is no page to check for URLs.
> Ironically, Firefox also used to have a nice UI for browsing RSS feeds, but they removed it a while ago.
IIRC, (some) browsers used to have a feature that allowed you to browse a web page that had XML content, such as http://a.com/b.xml.
You could expand and collapse nodes. Or maybe I remember it as being for XML, but it was actually for RSS. After all, RSS is an XML format or application - is that the right term?
Thanks! While far more tedious than just being able to see HTML in a browser when going to a URL (expected behavior for the last 30 years), zygolophodon does seem like the least worst option. I've now installed it on my newer machines, which have Python 3.6. For older computers I guess I'll keep appending .rss to the user profile URL and finding the relevant post by eye (or just ignore mastadon URLs).
The JS requirement still annoys me. Mastodon is a whole lot faster than Twitter, but it still can't compete with Nitter and that's just rather silly.
I'm fine with things like pagination and submitting forms being broken without Javascript, but at least make it possible to view a linked thread.
I've been running a Lemmy server of my own for a bit and I have to say, Mastodon just seems so incredibly wasteful. Lemmy's UI may be quite barebones in comparison, but the server component runs a lot more efficiently while also providing actual HTML when you ask for content.
The interesting thing about this is that it's evident that many people don't use spell check. I find it hard to relate, as personally I can't resist resolving those red underlines.
There are many, many third-party clients that can make use of the Mastodon API and ActivityPub. There are plain web clients, such as Brutaldon, that probably have what you want.
Modern web browsers are JavaScript engines that also render stuff. I understood the resistance to running JavaScript 20 years ago, but today, I rank that up there with buying a cell phone but refusing to run 3rd-party apps. The web is JavaScript now.
I have JavaScript disabled by default and enable it on my bookmarks and when necessary on random new websites I visit.
I have no problem turning on JS for a web app. I understand Google Maps just couldn't really work how it's expected to without JS. Same for web email clients, or games, or office suites.
But if your "app" is just a progressively-enhanced list of posts (e.g., a blog), there is no justification for forced JS usage. Sorry Mastodon.
Running JS is a privilege for those with the computing power (or battery capacity) to spare, and turns the browser into the biggest risk vector on whatever machine that browser is installed on.
Or requiring it just to render images. I've actually seen some sites that work that way. And these were simple company pages or personal portfolio sites, so there's no reason why the images could not be static(ally served).
In other news, the blog of the RSS Advisory Board, inactive since 2014, started getting updates again :) It was a pleasant surprise seeing new articles from them pop up in my feed reader again after such a long time.
I noticed recently that I get more engagement on Mastodon than Twitter. It shifted in the last couple of months, as I have been posting the same content to both.
Similar, but I think that is natural at this point.
My main account has 100x as many followers on Twitter, but accumulated over many years, and I suspect a fairly substantial number are bots and/or accounts that are now rarely in use, combined with the algo pushing a culture of just following huge numbers of people which may never surface for you in the feed anyway.
If we trust the view counts on Twitter, the average tweet gets seen by just a tiny fraction of my followers.
On Mastodon, meanwhile, a very substantial proportion of my followers actively engage, and about half have shown "signs of life" (post, like, boost) in the last week or so.
That said, this will change as accounts age and slowly get abandoned there too, so it will take a long time before we see if there's a qualitative difference.
I think so, because on Twitter you can always get some randomness due to the algorithm, and someone might reply if your tweet appeared in the "For You" tab. On Mastodon this "randomness" is far less likely to happen unless you use hashtags and someone finds the toot through the hashtag.
Quality of interactions aside, the randomness of Twitter is one of the things I miss on Mastodon.
I was so happy to see that Nitter kept working after they killed the API. Now I use their RSS feeds to keep up with my favorite accounts since I’m not on Twitter that much anymore.
Mastodon instances have RSS feeds where each item links to its original location on its original instance, not the instance where you got the feed. This is useless for engagement, because you have to jump through hoops to reply to a post.
The issue states that there isn't an API for it. What's needed isn't an API to resolve a foreign toot to the local one, but for the Mastodon instance to make the local feeds available in RSS form in the first place. There is no logical reason that you can see local toots on some hashtag topics in the web UI, or an app, but not in RSS.
Does not seem to be the case for users on instances that run Akkoma or Pleroma, and probably other non-Mastodon implementations. E.g. https://social.kernel.org/torvalds.rss
Thanks for pointing this out to me, I'm sure I'm not the only one who didn't know this. I recently wanted to import my own Akkoma RSS feed into my website, and I tried searching around the Akkoma documentation and didn't see any mentions of feeds.
Your URL is slightly off. If your feed reader supports auto-discovery (most do) you can just paste in https://social.kernel.org/torvalds (the HTML UI) and it will find the feed for you.
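Auto-discovery is just a scan of the page's head for alternate-format feed links; a rough sketch of what a reader does under the hood (requests and BeautifulSoup are my choices here, not anything the parent comment prescribes):

```python
# Sketch of RSS/Atom auto-discovery: fetch an HTML page and collect any
# <link rel="alternate"> tags in it that advertise a feed.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

def discover_feeds(page_url: str) -> list[str]:
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [
        urljoin(page_url, link["href"])  # hrefs may be relative
        for link in soup.find_all("link", rel="alternate")
        if link.get("type") in FEED_TYPES and link.get("href")
    ]

# Paste in the profile URL; the reader finds the feed for you.
print(discover_feeds("https://social.kernel.org/torvalds"))
```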
Well, if you have the build automated, you can simply use a Markdown editor and commit the file into Git. It's not an experience like WordPress, but I would say it's not that bad.
If you have the time, I suggest doing some research on the many available open source static site generators. One might meet your needs. If not quite, well, that's the beauty of open source ... you can modify or get someone to modify it.
If not, find some friendly teenage techie with free time on their hands, get them to do it for you, and pay them some amount for it.
Yeah, well, I really don't wanna maintain, or pay someone else to maintain, a persistence layer when I already have an iPhone and pay for an iCloud subscription.
So your idea is an iPhone app that builds the site and uploads it via FTP/S3? That’s a neat idea but it’s pretty niche. I’m not sure if there’s a big enough market to make it viable.
Lists don’t, though. It is easy enough to get that going if you use the API (which I did, obviously), and feels like something the platform could easily add.
(I consume a couple of my lists almost exclusively via RSS, and the result is a bit like following a couple of curated news sites)
Annoyingly, gotosocial appears to gate RSS behind an `enable_rss` flag in the database, accessible only via an (undocumented) extension to the `update_credentials` API call. Tch. That could do with being exposed to the command-line account mangler.
That would require authentication or a keyed RSS feed. This isn't impossible, but it takes some careful consideration and is a possible security issue.
Reddit offers this for some account feeds, using obfuscated, but not authenticated, feeds. If someone gets ahold of the feed URL, however, they have access to private account data.
For Mastodon, a preferred option might be to use a terminal-based / commandline Mastodon client, of which there are several.
From my preliminary investigation of Lemmy, it does seem to have RSS feeds for communities, but I could not find them for individual posts, which is a bit of a shame.
I don't think RSS comes close to scaling. Every single subscriber will poll every single thing every 30 minutes (or whatever). This was a crushing amount of traffic 20 years ago, when the web was much, much smaller…
That's why I chose to do manual polling/clicking for the built-in reader I have for HeyHomepage.com. Just a list of all subscribed feeds that one can click through. My reasoning was that since a subscriber actively and knowingly subscribed to something interesting, they'll probably click it again when they're interested again. No need to automatically poll everything all the time.
Maybe for a future newsletter/magazine function I need to do some automatic polling in the background.
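If that background polling happens, HTTP conditional requests keep it cheap for both sides: an unchanged feed costs a 304 with no body instead of the full document. A minimal sketch (the feed URL is a placeholder and the requests library is my assumption):

```python
# Polite feed poller: echo back ETag / Last-Modified so the server can
# answer "304 Not Modified" instead of re-sending the whole feed.
import time

import requests

FEED = "https://example.com/feed.xml"  # placeholder
etag = last_modified = None

while True:
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    resp = requests.get(FEED, headers=headers, timeout=10)
    if resp.status_code == 200:
        etag = resp.headers.get("ETag")
        last_modified = resp.headers.get("Last-Modified")
        print(f"feed changed ({len(resp.content)} bytes)")  # parse it here
    elif resp.status_code == 304:
        print("not modified; no body transferred")
    time.sleep(30 * 60)  # the "every 30 minutes (or whatever)" from above
```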