Why has this become so complicated? I use Newsboat https://newsboat.org/ to read RSS, and follow about 100 very interesting authors that way.
I just put the RSS URLs into a text file and the self-contained 1.7 MB program does the rest. Somehow it gets by without using a combination of Electron, Mongo, Algolia, Redis, machine learning (!), and Sendgrid.
Maybe the comparison with a text-based RSS reader isn't fair and Winds does some crazy cool stuff, but it's hard to see what that is exactly.
It all depends on what you are doing with RSS. If you subscribe to feeds that barf out 1000 new articles every hour, then a proper filtering system is a basic necessity, and machine learning can help there. But I don't know whether this reader uses it for that.
Reading the GitHub page, it also seems kinda questionable why they use multiple databases and services. That gives the impression that self-hosting this reader will be a pain. I guess I'll stay with Tiny Tiny RSS (https://tt-rss.org/) for the moment.
As you can see here: https://imgur.com/a/Ikff2LX Newsboat handles Hebrew as it's supposed to. That is fetching the article and sending the output to the terminal. It is, after all, a terminal program and therefore only displays text through the terminal. You can set the text width, though. So I suspect Newsboat supports Hebrew, Arabic, Chinese, and other non-Latin scripts of any decent size. The rest all depends on how you configure your terminal and which font you use.
What's even simpler is to punch the URLs in ASCII onto cards and put them in the "RSS" deck; then every Monday morning you press play on the program tape, which loads the URLs from the card deck and, after a few hours, prints all new articles on the mainframe's printer.
N.B. this isn't fully open source because it relies on the Stream API: https://getstream.io/
They have a free quota of 3 million feed updates (not sure if that's total; I'm guessing per month?). Important to keep in mind: the app may be open source, but they can pull the plug on it by shutting down the web service, so you can't self-host it independently.
It might be nice, not saying otherwise, but, for example, something like NewsBlur.com is open source and hosted, so you can pay a yearly fee for the service while having the peace of mind that you can fork it and self-host it yourself should the product die, which is what many of us want after Google pulled the plug on Reader ;-)
NewsBlur is a great project. Winds relies on Stream, Algolia and the Mercury API. NewsBlur supports Mercury. You can run either Winds or NewsBlur on your own server.
I wonder if we could do a project together with Newsblur where we list if sites properly support RSS. Similar to how the Python 3 ready sites popped up.
> I love using RSS to follow the programming and tech news I care about. Unfortunately, the number of sites supporting RSS has been in rapid decline over the last few years. The reader ecosystem is slowly degrading as well.
After that introduction I was hoping Winds would be some kind of proxy that creates an RSS feed for sites that don't have one. But well, it's just another RSS reader. Those have actually never gone away; I'm using Inoreader every day.
* Shameless plug *: Our little startup, Feedity - https://feedity.com, helps create custom RSS feeds for any webpage, via an online feed builder and REST API.
Custom feeds can even be created for dynamic content, utilizing Chrome for full-rendering, and many other tweaks & techniques under the hood for seamless & scalable indexing.
For reasons I don't understand, it remains incredibly difficult to scrape a well-organized blog and turn it into something I can consume like RSS or a kindle book.
Note that many/most WordPress RSS feeds aren't that useful because they only show a snippet rather than the entire post. This dreary state of affairs is due in part to the fact that "checking" an RSS just means downloading the entire file containing all posts the author would like to make public, regardless of how many are new to the reader. This insanely unnecessary bandwidth usage penalizes sites that have long (>10) feeds with complete posts. (The single RSS file on my blog takes up the majority of my bandwidth costs.)
For your RSS issue, you can safely truncate your RSS feed to the latest N articles, where N is a random reasonable number you choose (10 is good, because it's so decimal). RSS reader software will know how to deal with it.
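If the blog software can't do the truncation itself, it's trivial to bolt on. A minimal sketch with Python's standard library, assuming an RSS 2.0 feed whose `<item>`s are already ordered newest-first (as most generators emit):

```python
import xml.etree.ElementTree as ET

def truncate_feed(feed_xml, keep=10):
    """Drop all but the newest `keep` <item> elements from an RSS 2.0 feed.

    Assumes items appear newest-first inside <channel>, which is the
    convention virtually all feed generators follow.
    """
    root = ET.fromstring(feed_xml)
    channel = root.find("channel")
    # Remove every item past the first `keep` entries.
    for item in channel.findall("item")[keep:]:
        channel.remove(item)
    return ET.tostring(root, encoding="unicode")
```

Run it over the generated feed before publishing, and readers polling the file only ever pull the latest N posts.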
WRT scraping, it is getting harder because funny websites with no static content, where everything is generated via Angular or Perpendicular or something, are really hard to deal with. Recently my uni switched from an army of WordPress websites to a homegrown AJAX-MVC-Reactive abomination where the links are reimplemented via some funny black magic: the actual link items (which are not anchors, btw) don't have the link targets encoded in them, only an onclick event that somehow knows where to go. And because they just killed all the RSS feeds, I wrote up something to revive them for me via PhantomJS, but since I could not figure out how to find the link targets, I cannot link the RSS items to anywhere but the main announcements page, and I can't add any description from the link target.
RSS should be kept, those who don't know the job they are doing should be replaced.
> For your RSS issue, you can safely truncate your RSS feed to the latest N articles, where N is a random reasonable number you choose (10 is good, because it's so decimal). RSS reader software will know how to deal with it.
But people who don't use the right aggregators will not be able to read past 10 posts. And if everyone was using aggregators, I wouldn't have such a high bandwidth bill.
> WRT scraping, it is getting harder because funny websites w/ no static content.
But sites that can have an RSS feed necessarily deliver their content in a static form. What surprises me is that we don't have a tool for even those cases.
> But people who don't use the right aggregators will not be able to read past 10 posts.
I do not think such aggregators exist, and you can just ignore people using software that does not comply with widespread conventions.
> And if everyone was using aggregators, I wouldn't have such a high bandwidth bill.
I don't understand this sentence. All RSS client software is called aggregator.
> But sites that can have an RSS feed necessarily need to deliver their content in a static form.
Not really. Many websites which are essentially blogs are transforming themselves into single-page web apps. My uni's websites included. Some do it for the $$$, some for reasons I cannot know (jumping on the bandwagon with their minds toggled off).
You can just set your blog software to truncate your feeds to a reasonable number without any worries. And I suggest you look at your logs, because some silly bots might be consuming your bandwidth along with your normal traffic; there are some that like RSS feeds.
> I don't understand this sentence. All RSS client software is called aggregator
I'm trying to distinguish between (1) people using software that directly downloads the RSS feed from my website to their device and (2) people who use services that download the RSS feed to a server, which can then serve a cached copy of the posts to many users. If everyone used (2), then I would only have my RSS file downloaded as many times as there are separate services, which is not very many. So apparently many people are doing (1), and if my feed is short then the blog history they can read is limited to how long they've personally been subscribed (or less, if they need to clear their device and can't re-download my old posts).
> Not really. Many websites which are essentially blogs are transforming themselves into single-page web apps
That's why I specified "But sites that can have an RSS feed...". If you have an RSS feed with useful posts in the feed, then you must be delivering static pages. (If you're just delivering snippets with links to a dynamic page, then there is no way for any service to cache the page either.) So my question is: why isn't there software to scrape the webpages of blogs that offer an RSS feed with complete posts? This would enable me to conveniently read post history going back more than 10 posts.
> And I suggest you look at your logs, because some silly bots might be consuming your bandwith along with your normal traffic; there are some that like RSS feeds.
All the bots put together make up 27% of my bandwidth. It's a lot, but it's not the root cause.
> So apparently many people are doing (1), and if my feed is short then they are limited in the blog history they can read to how long they've personally been subscribed (or less, if they need to clear their device and can't re-download my old posts).
RSS is a means for people to follow new posts from you; to read older posts they are supposed to come to your blog and use your archives.
Yeah, but for my case it means waiting 10 seconds per link, and another 10 seconds when returning to the initial page, which has some 20 entries, and there are more than ten such pages that I need to scrape, so I passed on that...
>This dreary state of affairs is due in part to the fact that "checking" an RSS just means downloading the entire file containing all posts the author would like to make public, regardless of how many are new to the reader. This insanely unnecessary bandwidth usage penalizes sites that have long (>10) feeds with complete posts. (The single RSS file on my blog takes up the majority of my bandwidth costs.)
Still, exceptionally low cost compared to running pretty much any website's massive CSS and JS files. A single image in most cases will take more than the entire RSS feed before compression.
That said, I would like to see a new standard (a new one would be needed [1]) that only gets the difference from what you last read - I think that would really take RSS feeds to new places of usefulness. There's no reason why you couldn't send the server an ID (not timestamp to avoid issues with timezones, clock stretching, forward/backward time setting, etc) of the last request and have it send back everything since (within reason).
You should already be able to implement this using the If-Modified-Since HTTP header. The server only needs to send back articles in the RSS feed that have been added after that date. The If-Modified-Since value is meant to come from the previous response's Last-Modified header (not a timestamp you make up), so that sidesteps timezone issues.
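A minimal sketch of what that conditional fetch looks like from the client side, using Python's standard library (the function name and the returned tuple shape are my own, for illustration):

```python
import urllib.request
from urllib.error import HTTPError

def fetch_if_modified(url, last_modified=None):
    """Fetch a feed only if it changed since `last_modified`.

    `last_modified` must be the Last-Modified header value from the
    previous response, echoed back verbatim. Returns (body, last_modified);
    body is None when the server answered 304 Not Modified.
    """
    req = urllib.request.Request(url)
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read(), resp.headers.get("Last-Modified")
    except HTTPError as err:
        if err.code == 304:  # unchanged since last fetch: nothing to download
            return None, last_modified
        raise
```

A polling reader would persist the Last-Modified value between runs and pass it back on every poll, so unchanged feeds cost a few hundred bytes of headers instead of the whole file.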
> Still, exceptionally low cost compared to running pretty much any website's massive CSS and JS files.
My blog has both RSS and browser readers, but the bandwidth is dominated by the RSS feed. Since the RSS file has no images (just HTML pointing to the images), I think it's more likely that for my Wordpress blog the CSS/JS overhead is just not that much (as opposed to the alternative hypothesis that I have many many times more RSS readers who never end up downloading the images).
Exceptionally low cost per hit, as opposed to overall bandwidth. Overall bandwidth will probably fare worse, as you say, due to the polling nature of RSS. I think in general RSS readers could do a better job of fetching heads and checking whether or not there is a change worth fetching; that would save a tonne of bandwidth.
Also in general, I would be tempted to make an RSS feed more minimalist in terms of content and markup. It should just be a short `<description>` and a link to the main article (which would still allow you to potentially monetize your content or gauge interest more accurately).
>I think it's more likely that for my Wordpress blog the CSS/JS overhead is just not that much
Also, bandwidth is just one resource - potentially each call to a page is a database read, whereas your RSS feed should be static (not sure about the WordPress implementation, but I would hope for static caching with something that doesn't change for long periods of time). I've seen WordPress database lockups with modest amounts of traffic (again, most of the time this could have been easily statically cached - but doesn't appear to be by default).
> Exceptionally low cost per hit, as opposed to overall bandwidth.
I don't understand. I'm telling you that my RSS file literally dominates my bandwidth usage in GB.
> I think in general RSS readers could do a better job of fetching heads and checking whether or not there is a change worth fetching, that would save a tonne of bandwidth.
Yah! Agreed.
> It should just be a short `<description>` and a link to the main article (which would still allow you to potentially monetize your content or gauge interest more accurately).
No, I want people to be able to read it offline. I'm not trying to monetize anything.
> Also, bandwidth is just one resource - potentially each call to a page is a database read
I have a simple website. The bandwidth is the dominant cost.
>> I think in general RSS readers could do a better job of fetching heads and checking whether or not there is a change worth fetching, that would save a tonne of bandwidth.
> Yah! Agreed.
You could also reduce the number of `<item>`s you keep in rotation to a more manageable number.
>> It should just be a short `<description>` and a link to the main article (which would still allow you to potentially monetize your content or gauge interest more accurately).
> No, I want people to be able to read it offline. I'm not trying to monetize anything.
It should still be much more lightweight than its HTML counterpart. There should be almost nothing to it: next to no markup, no styling, no scripts, just a highly compressible piece of data.
I don't understand how you're racking up massive bandwidth. Can you put some numbers to it?
Or you have way more users via rss than through their browser. This could be a good thing as it results in less bandwidth than if they all hit you directly through browser.
That's something both RSS consumers and producers should already support, but don't always: the E-Tag, a standard part of HTTP. However, the E-Tag is all-or-nothing; either it matches, and the entire request is essentially aborted, or it doesn't match, and the entire file is served up.
Passing "the ID of the last piece of content I saw" would allow the server to return just the updated stuff, or abort early like an E-Tag. However, counterintuitively, as is often the way in computer science, I'm not sure it would be that big a win to be able to return partial content. The vast bulk of the win on most blogs will just be the ability to abort at all, provided just fine by E-Tags.
I would say that if your site is getting hammered by HTTP requests for your RSS, do double-check that you've got E-Tags set up and working correctly. It is in the best interests of the big scrapers to support that properly, as they are paying for that bandwidth too. RSS aggregators don't have to get too large before this becomes a top-priority feature request. Unless the feed is literally changing on roughly the same frequency as it is scanned, it shouldn't be the dominant factor in your bandwidth bill.
“This insanely unnecessary bandwidth usage penalizes sites that have long (>10) feeds with complete posts.”
Since RSS is text-only, the sizes are very small and compress well. Considering the average web page is 3M[0], pulling down 10-100k of every post ever doesn't matter. And HTTP takes care of not pulling the same file over and over.
It’s certainly a downside, but completely useable as is, and better than any viable alternative.
As far as standards go, I prefer simple, static file, over something requiring dynamic response.
There’s also nothing stopping the site from limiting the RSS feed to only 5 posts with a link for full.
> Since rss is text only the sizes are very small and compress well. Considering the average web page is 3M[0], pulling down 10-100k of every post ever doesn’t matter.
My web pages are 100k and my RSS feed is 500k.
> It’s certainly a downside, but completely useable as is, and better than any viable alternative.
It doesn't fulfill the need I originally mentioned: making blog archives readable offline.
> As far as standards go, I prefer simple, static file, over something requiring dynamic response.
I agree static is better, but a dynamic response isn't necessary. You could just have a static file that listed all the blog posts, with a link to another static file for each post. This avoids having the user download 10 blog posts each time they want to poll if something new has happened.
Even better: instead of serving the generated feed yourself, point your RSS subdomain to your favourite CDN and only update your files when the site changes. That's likely to save you some money.
They only show a snippet because they can't sell your eyeballs via the RSS feed. They just need to draw you into clicking through to their full site to blast you with ads.
You mean the fact that aggregators like The Old Reader just download the RSS file from the website periodically, and then serve up cached copies to their many users? True, although this is another way of saying that some of the value added by aggregators only exists by virtue of the terrible design of RSS.
It doesn't need code. Just a static text file with a list of publication dates and URLs, each of which points to a static HTML file for each publicly available post. Then each user could check the list and download only what's new, without needing an aggregator. Users would never need to download the same content twice.
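To make the idea concrete, here's a hypothetical sketch. The index format (one `date URL` pair per line, newest first) and the function are invented for illustration; they aren't any existing standard:

```python
def new_urls(index_text, seen):
    """Return post URLs from the archive index that are not yet in `seen`.

    `index_text` is the static index file: one line per post, formatted as
    "<ISO date> <URL>", newest first. `seen` is the set of URLs the client
    has already downloaded, so only genuinely new posts are fetched.
    """
    urls = []
    for line in index_text.splitlines():
        if not line.strip():
            continue
        _date, url = line.split(None, 1)
        if url not in seen:
            urls.append(url)
    return urls
```

A client would fetch this small index on each poll, download only the URLs it hasn't seen, and never re-download the same content twice.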
You can pretty much do that with RSS, if you want. Slightly decorated with some XML stuff, but that'll pretty much gzip right away. Statically, too, if you use the right web server and whatever magic invocations it may require.
You'll find your feed readers aren't particularly impressed, though.
This won't work because my users don't have software to automatically download the new posts, curate them, and save them for offline reading (in contrast to having the posts directly in the RSS file, for which all that does happen). That would require...a new standard, i.e., a replacement for RSS.
No, it would require a new type of consumer. RSS can carry a list of URLs just fine, as a degenerate case of RSS. RSS does not itself mandate that the content of the RSS must be displayed in "some sort of feed reader".
The good news is that means you don't have to wait. You can write that now. It will work on existing RSS feeds, just not quite as optimally for your proposed use case as you might personally like, but it will still work. It will work even better on yours, which will also work in conventional RSS readers.
Now, you might have problems getting "the real page content" from your URLs, but that's a separate problem. (History strongly suggests the large-scale content producers would actively fight you if you try, because you'll probably be trying to strip their ad revenue either deliberately or accidentally as part of what you'd be doing. Which is, after all, the reason why RSS is already not terribly favored by that crowd and why they want you in closed gardens of their own devising... unfortunately getting around this problem is a great deal more difficult than hypothesizing that some sort of new standard could somehow deal with it....)
I'm using "RSS" to refer to the standard practice people actually do. It's not very important that some organization somewhere defined an official RSS standard which in principle is flexible but which in practice is never used other than in a very specific way. (Without an agreement on how to use it more generally, no one can build an offline blog archive reader, and the fact that RSS could in principle be the base is irrelevant.)
If you want to play semantics, I'm fine with rephrasing my complaint as: "We need to build on the flexible super official RSS standard -- which is little more than an XML file -- and actually agree on a way of delivering blog archives for offline reading. RSS in practice does not currently achieve this very simple goal." This is just different words to describe the same thing.
> You can write that now. It will work on existing RSS feeds, just not quite as optimally for your proposed use case as you might personally like, but it will still work.
Huh? Other website owners who would like me to be able to read their archives can modify their RSS file, but we have no agreement on how to do that in a standard way. Likewise, I could modify my RSS file, but since there isn't a standard my readers won't have software to take advantage of it.
> (History strongly suggests the large-scale content producers would actively fight you if you try,...unfortunately getting around this problem is a great deal more difficult than hypothesizing that some sort of new standard could somehow deal with it....)
The blogs I want to read offline do not have ads and do not care about this. I just want a solution that works for this simple problem, not a way to take content from people who don't want to give it to me without attaching ads.
Is the problem really that RSS needs replacement, or can it simply be improved? I think there is quite a bit of room for extension to address some of the concerns expressed here, including accurate meta-tagging for educational resource discovery purposes.
I'm under the impression that piling extensions onto RSS will land us in the XMPP situation, where you have servers and clients speaking different languages.
On the other hand, if you want to say that your client supports this supposed RSS-ng standard they need to support those new features because they're part of the protocol.
This is to say, I don't like protocol extensions, but that's just me :)
That's because rendering JS/CSS/whatever has become the norm. It has become the norm for two reasons: tooling for building "static" sites with this technology has gotten very popular and well-done, and sites with a vested (advertising/lockin) interest in showing their content "their way" have jumped on board.
If I may add mine: https://www.pipes.digital/. It allows creating feeds for sites if the site is structured enough, and the feed can then be manipulated with other blocks, similar to how Yahoo Pipes worked.
Feedback is always welcome (and subscribers even more ;) ), you can also send me a mail (in profile) or follow the way described in the docs to reach me (https://www.pipes.digital/docs#support).
If you don't mind writing a bit of Python glue code, you can also use https://github.com/nblock/feeds (I'm one of the authors). It uses Scrapy under the hood and is supposed to run as a cronjob. It will create full text Atom feeds of whatever data you will feed to it. Spiders (plugins) are usually quite small, especially when there is no login (paywall) involved: https://github.com/nblock/feeds/tree/master/feeds/spiders
Marketing is a powerful force. It's true that RSS has existed for many years, but also consider how many people have grown up without learning about it.
A site should still be able to choose to not produce RSS if they don't want. Encouraging RSS is different than forcing RSS feeds on sites that don't want them.
I think so too. I use RSS for a very minimalist blog and have had good experiences with it so far. It's hard to monetize the format, but it really draws a lot of people to my site.
Torrenting is illegal because you (generally) weren't authorized to make a copy. Stripping commercials isn't, because you're only removing something, or changing the way you watch it. There might be some legal case if you have to make another copy of media in order to alter your consumption, but thank God I'm aware of no case attempting to control your personal use of media.
(The DMCA doesn't count; it's about circumventing things that prevent you from copying, not changing your consumption.)
Just to this point, an example: My parents' VCR had a built-in feature that would automatically fast-forward through commercials when playing back something that it recorded.
I don't understand why you would ever want this to be a desktop app. Try FreshRSS[1], it's awesome. It works well on shared hosting and it runs on SQLite.
TTRSS only gave me trouble. Threw all kinds of strange errors at unexpected times. I don't know how many times it died on me after an upgrade. I eventually gave up and found FreshRSS. Been running (and updating) it over a year, without a single problem.
One of the best things about it is escaping the algorithmically curated feeds.
Every site and service that I use has an RSS feed, except for Twitter. I use https://twitrss.me/ to follow users. If you don't find a feed, sometimes you just have to dig a little. You learn at which URIs the most common CMSes present their Atom/RSS feeds (hello /feed/).
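Much of that digging can be automated: sites that don't link their feed visibly often still expose it via an autodiscovery `<link rel="alternate">` tag in the HTML head. A rough sketch of scanning for those with Python's standard library:

```python
from html.parser import HTMLParser

class FeedLinkFinder(HTMLParser):
    """Collect feed URLs from a page's <link rel="alternate"> tags."""

    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type") in self.FEED_TYPES and "href" in a):
            self.feeds.append(a["href"])

def find_feeds(html):
    """Return all advertised RSS/Atom feed URLs found in the HTML."""
    parser = FeedLinkFinder()
    parser.feed(html)
    return parser.feeds
```

Run it over a page's HTML and you get the feed URLs the site advertises to browsers, even when there's no visible RSS icon; when it returns nothing, falling back on well-known paths like /feed/ is the next step.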
> I don't understand why you would ever want this to be a desktop app.
Perhaps because the vast majority of people in the world have no interest in figuring out web hosting so they can read some news articles. That doesn't seem like a viable way to "bring back RSS".
Exactly. And a lot of people still spend tons of time in what amounts to a standard desktop environment. I switched from cloud audio players to QMMP and have been looking to do the same with RSS.
It seems likely that the average user (that has no interest in figuring out web hosting) will still want to use software/service that works across devices (just like everything else works these days). For those users there's https://newsblur.com/ and similar services.
> I don't understand why you would ever want this to be a desktop app.
I thought you were going to say that it needs to be a website+mobile app for maximum adoption, and I was ready to say that you have a point but full-weight desktop apps still have their place. Then I looked up FreshRSS, and it's an aggregator that you host yourself?
Any answer to the question "how can we get more regular users to adopt a service" that starts "first, they all need to install Apache..." is very doomed.
Isn't Apache installed by default on at least all desktops nowadays? Obviously, people should be expected to tinker with httpd.conf, but something fully automated running on top of Apache isn't beyond imagination - and the installation could even be handled automatically if required.
100% agree with self-hosting an RSS aggregator. It's awesome.
Personally, I gave tt-rss and FreshRSS a shot, but went back to Miniflux[1]. One binary to run (written in Golang) that plugs into Postgres. Easy to set up, but I prefer Miniflux for the same reasons I prefer Hacker News' website: simple and functional.
Since I'm considering switching from TTRSS to something else, may I drill in a bit? (The website is a tad short on details.)
I heavily rely on nested categories in TTRSS (Youtube -> News for news channels for example), while the website says it has categories, can they be nested?
And most importantly, does miniflux handle about 700 feeds well? It would help a lot if I could lower the update rate of some feeds that only update once a month...
No nested categories. They're all flat. Miniflux is ultra minimalist and "opinionated," which I ended up preferring to all the others. YMMV! If you're looking for a ton of features or customization, look elsewhere.
It's also wicked fast, resource-light, and the code is really easy to grok if you want to hack in anything. But, the update rate is a single global setting[1]. I wouldn't be surprised if it could handle far more than 700 feeds... you'll hit bottlenecks from bandwidth or the DB before anything with the app.
>YMMV! If you're looking for a ton of features or customization, look elsewhere.
I don't need much customization, I simply rely on a lot of features for daily convenience. Either way, it seems good enough and I might be able to work around it (or submit a patch if I'm not lazy).
Update rate is merely a concern because I don't want to spam some hosts with repetitive updates for no reason, might be worth another patch.
Way, way too complicated: it needs a set-up and configured Postgres install. Why use an RDBMS when it could have embedded SQLite and avoided all that sysadmin stuff?
Agreed, this seems like a perfect fit for SQLite. It's not like you need massive concurrency, and you would have avoided all this administrative burden. Too bad, that's literally the only reason I chose not to try it. FreshRSS seems very nice too, but is also a hassle to deploy (what with apache/nginx and a bunch of requirements).
Deployment difficulty is why I wrote my own: https://github.com/rcxdude/nobsrss . It's super easy to deploy, but it's super minimal (unlike the way a lot of people seem to use RSS, I just use it for notifications, so all I need is a link to the actual website).
Huh, that's pretty cool, although I'd like it if it marked items as read when I clicked on them. You can easily achieve that by adding a URL route to read an item, and when a user visits that, you mark it as read and redirect to the real URL.
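A framework-agnostic sketch of that click-through idea; the names (`items`, `read_items`, `click_through`) and the `(status, url)` return shape are hypothetical, not taken from nobsrss:

```python
# Minimal model of a "read and redirect" route: the reader links to
# /read/<item_id> instead of the article itself; visiting that route
# marks the item read and answers with an HTTP 302 to the real URL.

items = {}         # item_id -> real article URL (filled in by the fetcher)
read_items = set() # item_ids the user has clicked through

def click_through(item_id):
    """Mark the item as read, then return (status, redirect target)."""
    url = items[item_id]
    read_items.add(item_id)
    return 302, url
```

In a real app this function would sit behind a URL route in whatever web framework the project uses, with the read-state persisted instead of kept in memory.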
Fun fact, Stream is almost entirely written in Go. Winds is based on Node/React. I think it's nice as it enables more people to contribute. (everyone knows JS)
> TTRSS only gave me trouble. Threw all kinds of strange errors at unexpected times. I don't know how many times it died on me after an upgrade.
Counterpoint: I never had any issues with it and updates are just git pull, maybe login with the admin account for db migration, restart the update daemon, no issues at all.
Same here. Some years ago I switched from Thunderbird on several machines to a self-hosted TT-RSS that vastly improved my (professional) RSS feed browsing. Some minor annoyances with the scrolling when the RSS items are long but overall great experience.
> Every and service that I use has an RSS feed, except for Twitter.
I use https://feedbin.com as a RSS backend to any client I can imagine (Reeder on iOS in my case). Feedbin recently introduced a feature that treats and presents twitter searches/users/tags as RSS feed and extracts media and links in tweets. Right along all your other RSS feeds: https://feedbin.com/blog/2018/01/11/feedbin-is-the-best-way-...
https://readkitapp.com/ is still a much better client in my opinion. For now, Cappuccino can't even load the entire article in cases where the feed has only an excerpt.
The TwitRSS thing sounds awesome. It would be pretty easy to set up a web-hosted version of Winds so people can access it however they want. Anything you particularly like about FreshRSS?
I forgot all about Netvibes.com! I remember setting up a personal dashboard with my emails, news, Digg, etc way back in ~2006. It may very well have been my first foray into RSS, now that I think about it...
Two words: News. Blur. I have been using https://newsblur.com for years now (ever since Google Reader died) and am not switching to anything else soon.
Yea, I'm thrilled with NewsBlur. Significantly happier than I even was with Google Reader, plus it's open source, plus the dev is fairly steadily introducing well-thought-out features (like highlighting articles from infrequent posters).
I've also been a happy user of NewsBlur since then. I use the paid option ($36/year), which gives you unlimited RSS feeds. There's also a free option that gives you up to 64 feeds.
They have both a web app and mobile (iOS/Android) apps.
I did exactly the same thing. I tested quite a few services on the downfall of Google Reader and stuck with Newsblur ever since.
I cannot think of any other service that works as smoothly as Newsblur and provides me with exactly what I need; keyboard shortcuts, good mobile clients, open source, decently priced, constant improvements.
Just a friendly PSA that WordPress comes with RSS feeds enabled by default. Since an overwhelming majority of content websites use WordPress, I think it's safe to say RSS is still widely available. Try going to your favorite blog and appending "?feed=rss2" to the end of the URL, like so: https://example.com/?feed=rss2
For years (since the demise of Google Reader) I’ve been using feedly.com (which is free) as the source for my RSS feeds but I never actually use the web client. I use native clients that integrate with Feedly.
> For years (since the demise of Google Reader) I’ve been using feedly.com (which is free) as the source for my RSS feeds but I never actually use the web client. I use native clients that integrate with Feedly.
Interesting, I also migrated to feedly but I do use the web application on the desktop, though the feedly app on my phone. On desktop, the webapp works really well for my purposes — which is mostly going through all updates (all sorted by oldest, "j" for next), and once in a while opening one in a full browser window ("v") or sending the page to instapaper.
I also highly recommend Reeder for both iOS and macOS, it’s absolutely brilliant and it’s an actual native program not more of this JavaScript in a web frame bloat/slothware.
Did RSS ever go away? Almost all of the blogs I follow still have RSS feeds. Maybe I'm just in a bubble. I think the decline in relative importance of RSS is certainly real, but that's more an effect of people getting their information from more centralized sources (like HN).
I don't think many sites removed RSS, but many new sites and services in the last ~5 years, never implemented or served it in the first place. The typical share icons now are FB, Instagram, and Twitter. Email is less common (although usually there's an email newsletter signup). For a while, there wouldn't be link to RSS on the page, but you could still find the .rss or .atom link in the HTML header. But a lot of times that's not there anymore either.
It makes me sad to have to settle for Liking a company's FB page, since it's up to FB whether or not I ever see it.
Youtube is like this. They don't advertise RSS feeds but include them in the page source of YouTube channels. The one or two times I couldn't find the link in the source, I'd just copy the channel id and use the feed URL pattern from another channel. Works like a charm!
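Since the feed URL pattern is the same for every channel, you don't even need to copy it from another channel's source; it can be constructed directly. A small sketch (the endpoint below is the one YouTube currently serves, an observed convention rather than a documented guarantee; the channel id is just an example):

```python
# Construct a YouTube channel's feed URL from its channel id.
# YouTube serves an Atom feed at this endpoint for every channel.
def youtube_feed_url(channel_id: str) -> str:
    return f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"

print(youtube_feed_url("UC_x5XG1OV2P6uZZ5FSM9Ttw"))
```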
A surprising number of services have this. You can even turn your Gmail inbox into an (authenticated) RSS feed[1]. Small warning: it uses an outdated XML feed namespace, but most readers should handle it fine.
> "For a while, there wouldn't be link to RSS on the page, but you could still find the .rss or .atom link in the HTML header. But a lot of times that's not there anymore either."
Most blogs that I visit will highlight the RSS ("Subscribe to this page") icon in Firefox's icon bar. (The RSS icon is not present by default, but you can add it via the Customize dialog.)
For Chrome, there might be extensions that implement similar RSS auto-detection functionality.
I have no idea. I'm still using it and all the blogs I read still use it, so I'm a little confused; I think the death of RSS has been greatly exaggerated. As an aside, I once described one of my worst experiences with food poisoning as "having the winds", so I'm a little biased against the name, I guess.
I can tell from my experience of building https://telescope.surf that a lot of sites are killing RSS or simply not linking it from any of their pages. I had to build crawlers for a lot of them.
Same thoughts here. I actually designed my own template for ATOM feeds for my Hugo generated blog. My blog serves both RSS and ATOM for the home page sections and even various taxonomies. So people interested in Emacs can subscribe to a feed of Emacs related posts only.
Why is this a downloadable app instead of a web app? It's built entirely using web technology. I think RSS belongs in my browser, so I can easily access it from whatever computer I might be using.
This is a great point; however, we wanted to bring the user experience to the desktop. You are more than welcome to submit a PR to make this application web compatible.
Why do you require users to make an account? I personally would rather have all my settings stored locally, and export them if I needed to. An option for this would be great.
Yeah I installed it, saw you need an account, and uninstalled it.
I quite like desktop programs still, but I expect everything to be stored locally and not to need an account. I had hoped for a KeePass style db file that I could sync on Dropbox or something, but the last thing I need is more accounts and my data on some randoms server that could be taken down when they get bored, or run out of money/motivation.
Your comment broke the HN guidelines, so it was properly downvoted. This one too. Could you please (re-)read https://news.ycombinator.com/newsguidelines.html and post civilly and substantively, or not at all? We're hoping for something a bit better here.
Well yeah, of course. I don't see where my initial post broke anything.
It was essentially as meaningful as the OP's reply.
Of course the point of "this app should run in a browser" is very valid. The reply that he can just code it himself is, in my eyes, a passive-aggressive way of saying "no, I said your point is valid, but I actually think it isn't and therefore I'll ignore it anyway."
It's obvious that a well-developed web app should be runnable in a browser, and the developers could have thought about it earlier. They know their project better than anyone, so the effort required for an outsider to understand how this piece of software works is way too high.
"Your app should run on windows. It only runs on [not windows]? Downvote!"
"Your app runs based on technologies that mean you could easily port it to windows. But it doesn't run on windows? Downvote!"
"Your app runs on windows, but it's built on technologies that allow it to run anywhere else; why does it only run on windows? Downvote!"
See the pattern? Saying "I don't like your entire contribution because it doesn't run on a given platform on which I think it COULD run" is counterproductive and at best pointless; at worst rude.
(deleted useful feedback but ultimately neggy comment -- sorry, these guys are launching an app, I don't want to shit on that in a top comment where it's the reader's first experience).
No, it's a complaint about the availability of the app itself. You're not forced to use the App Store and most freeware is offered both in and out of it.
I gave Winds 2.0 a look last week and had the same experience. It took like 5 minutes just to get a download link and it takes me to the App Store where I have to sign-in (after signing in to LastPass and pulling out my phone for 2FA). Then I finally get it installed and immediately get asked to register for yet another account.
It's a pretty app, but the install process was terrible and it's honestly not a great RSS reader. It's a worse version of Flipboard with somehow less control over the content you see.
Is this true if you're a developer? I've stayed away from Macs because you need Xcode to get any sort of dev tools, and Xcode is only available on the App Store (at least this was the case the last time I tried to develop on a Mac).
I meant for providing your apps to users. Apple's software is only available via the App Store, but you can put your app up for download anywhere you want. As an example, Slack can be downloaded from both their site or via the App Store. [1]
Xcode is through the App Store AFAIK, but the CLI tools are available without it, and have been for at least a couple of years now.
Also there are a few other IDEs that do their own thing; it's Objective-C, and that's a language with multiple implementations (though one overwhelmingly popular one, obviously). E.g. AppCode has been around for a while: https://www.jetbrains.com/objc/
I'd like to know why I should download and install this when I can use Feedly. Maybe it's uncool to like Feedly nowadays, but it does everything I want it to do. Before that I used Google Reader until it was retired.
We'll start to link this on the download page. Other users had the same experience. I like the idea of auto updates on the app store. But yeah...
Thanks for the feedback on the onboarding flow. We're changing that to make it optional. RSS parsing is done on the server to allow for future mobile releases.
There is no monetization planned for Winds. It started out as an example app for getstream.io and gradually became more popular, which is kinda cool. Again, we don't intend to make money on Winds. As long as it doesn't get too crazy we'll even keep on allowing free new signups on the hosted version.
Yeah, I'd strongly recommend considering the app store as a high-complexity and high-cost alternate channel. Can be useful for exposure and trust, but the infinite company blog posts about the difficulties of dealing with it / the inflexibility on pricing structures / etc really haven't changed since it was introduced. And sandboxing / IPC limitations / etc get added with OS updates, and that causes a new round of breakages in any app doing anything even remotely non-standard (could be in a lib!).
An important alternate, but probably not a good idea to make it the sole channel.
All these services have free plans either if you have little traffic or if it's open source. So it's definitely possible to self host this (although maybe not completely trivial).
Disclaimer: I work at Stream (but haven't really worked on Winds)
An outage at any one of five services will stop your thing from working, and it won't necessarily be obvious what happened. Now you need to monitor five services, plus the one you actually built. Oh, and the free plans will stop being free once you cross a threshold of use, which is probably about when it becomes important to you.
If each of those services has an outage once a year, you'll get to feel all of them.
Compare to tt-rss: if your own web server and database are running (and there's an internet connection), it's working.
Those are enterprise services that offer at least four nines of availability each, probably more. AWS even advertises eleven nines of durability for S3. Together they should still add up to a combined availability somewhere around 99.95%.
Meanwhile, obtaining anything over 99.9% is usually a challenge for the amateur. Even if you are on call on every day of the year, it's very easy to go to sleep with your phone muted and wake up the next morning to discover the service is down for whatever reason.
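The compounding effect above is easy to check with quick arithmetic, assuming the services fail independently: five dependencies at four nines each land closer to three and a half nines than four.

```python
# Availability of a system that needs ALL of its dependencies up is the
# product of their individual availabilities (assuming independent failures).
def combined_availability(*availabilities: float) -> float:
    product = 1.0
    for a in availabilities:
        product *= a
    return product

five_services = combined_availability(*[0.9999] * 5)
print(f"{five_services:.6f}")  # 0.999500
```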
That's not self-hosting. That's paying a cloud provider for a lot of services I don't need.
To me, self-hosting means I can (theoretically) run everything I need, on my bare-metal -- a cloud provider or cloud service (eg, RDS) then becomes a choice, not a requirement.
Suppose you swapped out every piece with a free, local one -- it'd still have the problem of being way overdesigned for what I'd need. Even if the UI looks decent.
Elfeed, if you are using Emacs. You can view most blogs and lighter websites w/ no problems with Emacs's web browser, EWW. Or you can tell Emacs w/ one variable to open links with an external browser.
I'm not sure what this non-native application is offering me over plenty of Mac (and presumably Linux & Windows) native versions using way less memory and having platform-native UI controls.
RSS's challenge has never been apps, it's support from publishers to keep using open formats.
I'd also say a large part of its problem is the lack of internal consistency in the format itself. First there are the various official RSS versions, then there are all the Atom variants and probably others as well, and then there's the question of how you're supposed to render them correctly. XML as the content format is also pretty much antiquated.
A modern format would no doubt have to be JSON- or YAML-based and have its human-readable content in plain text or markdown, so it'd have to be pretty much readable with a plain text http client, like curl.
So, just let RSS die the slow death it's been going through for good reasons and bring something consistent and straight-forward into its place. Something that you could easily generate and parse from any modern language without specific libraries.
Bringing RSS back is like trying to bring SOAP as a RPC system back; it just won't fly anymore no matter how much hot air you try to pump into it. We know better now and have better ways to do the things.
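For what it's worth, a format along the lines proposed above already exists: JSON Feed. A minimal sketch of generating one with just the stdlib (the field names follow the published JSON Feed 1.1 spec; the URLs and titles are placeholders):

```python
import json

# A minimal JSON Feed (https://jsonfeed.org/) document: plain JSON,
# readable with curl, parseable from any language without feed libraries.
feed = {
    "version": "https://jsonfeed.org/version/1.1",
    "title": "Example Blog",
    "home_page_url": "https://example.com/",
    "feed_url": "https://example.com/feed.json",
    "items": [
        {
            "id": "https://example.com/posts/1",
            "url": "https://example.com/posts/1",
            "title": "Hello, feeds",
            "content_text": "Plain-text body, no XML namespaces required.",
        }
    ],
}

print(json.dumps(feed, indent=2))
```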
It's a circle that right now is spiraling in the wrong direction. I hope that the community can come together and improve the user experience around RSS. This in turn will help show the value of RSS to publishers. The current user experience around RSS isn't always very user friendly. On the other hand, the people who care about RSS are designers, developers and journalists. We should be able to turn the tide.
I switched to inoreader.com when Google Reader went away. Not the best but it works for me, a tad slow to load the first time.
I tried Feedly but gave up when their nginx LB timed out after a minute because my OPML import was a big file. Plus, the interface does not feel like an RSS reader, as somebody else points out.
The thing that bugs me about the Feedly website is that it doesn't really feel like an RSS reader, at least on the free service. Maybe the paid version is (much) more customizable, but I doubt it. It reminds me more of Flipboard.
Edit: It's tailored to marketing people so I think that's why.
You might want to try NewsBlur.com - it's quite similar to how Google Reader used to be. It has a free version (64 RSS feeds) and a paid version (unlimited feeds).
(I remember trying Feedly when Google Reader died, but ended up going with NewsBlur instead.)
I use the Feedly iOS app - never used or even seen the website. What I like about the app, is the "all done!" message at the end of your daily feeds. It's a nice reminder/poke to get off the phone and get on with life.
The whole "RSS is dead -> long live RSS" narrative has been an eye-opening experience for me in terms of practically illustrating just how powerful and influential the news is on technological development. Engineers can apparently be manipulated a lot more easily than I previously thought. (I'd argue we never knew whether or not RSS was dead, because the claims were based on small and skewed samples.)
That said, pleasantly speedy! Nice work. There are a fair number of minor UI glitches (suggestions list frequently flickers when going to/from it, follow/unfollow can sometimes escape bounds of window and cause scrollbars, etc) but overall it feels pretty nice. I'll give it a try for a while.
If you're taking recommendations though:
- It'd be really awesome if I could paste a URL into the search field and have one of the options be "add a new feed" rather than clicking the +.
- And the "featured" section is probably a decent intro to new users (keep it!), but I'm unlikely to ever use it, and it's taking up a LOT of real-estate. Maybe an option to hide it / make it able to scroll away, to give more room for the stuff you already follow below.
Stream is an API for building activity feeds. You use it when you want to follow things in your application. Another use case is notification feeds. RSS readers are a tiny fraction of our customer base, but it's something we care about. API tour is here: https://getstream.io/get_started/
Currently Stream powers the feeds for over 300 million end users, Stackshare did a nice post about our tech:
https://stackshare.io/stream/stream-and-go-news-feeds-for-ov...
I'm surprised rss2email (or something similar) has hardly been mentioned. Getting your RSS in your email makes a surprising amount of sense. You can filter, organize and archive it. You can search it. You can mark it read or put flags on it. You can forward it to friends if that's your thing. And it works everywhere you have an email app.
I also have small scripts that run as cron jobs and scrape a few Twitter feeds and a few subreddits and email new items to me.
Having all my updates in one place I can consume in one place when I want is just great.
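For anyone curious what the feed-to-email plumbing looks like, here's a minimal stdlib-only sketch of the idea (rss2email itself handles far more: Atom, deduplication, HTML bodies, and so on; addresses and URLs here are placeholders):

```python
import xml.etree.ElementTree as ET
from email.message import EmailMessage

def parse_items(xml_text: str):
    """Yield (title, link) pairs from a plain RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    for item in root.iter("item"):
        yield item.findtext("title", ""), item.findtext("link", "")

def item_to_email(title: str, link: str, to_addr: str) -> EmailMessage:
    """Wrap one feed item as an email message, ready for normal filtering."""
    msg = EmailMessage()
    msg["Subject"] = title
    msg["From"] = "rss@localhost"
    msg["To"] = to_addr
    msg.set_content(link)
    return msg

# In a cron job you would fetch the feed with urllib.request, run
# parse_items on the body, skip items already seen, and hand each
# item_to_email result to your own SMTP server via smtplib.
```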
I'm not sure what I'd get out of Winds looking at it.
The thing I most hated about RSS readers was they treated it like email (or even old-style newsgroups), assuming you cared how many articles you hadn't read.
Nope, most of it's news (whether tech or some other kind), and the social media stream kind of access (where you care about what's current, but if you missed it so what) seems a lot more appropriate.
I mean, if you want to support that kind of view, you could certainly just purge anything more than 10 articles old or whatever from the RSS feed from your inbox.
We probably just use these things differently though. Like, if a product I'm interested in gets reviewed I want to see that -- I don't want to have to go search the website later because maybe it was reviewed but I wasn't hovering over my RSS feed the hour or two it was the top story.
Basically, it completely replaces the website's front page for me, and if I have to go there I've failed.
Interesting. I'm on the total opposite end... I'd rather have "email2rss" for turning all the spam/newsletters into a feed. I don't want _any_ of that content entering my email.
I recommend Kill the Newsletter[1]. It gives you an email address that you can send all your subscriptions into, and an RSS feed to consume it.
Couldn't you just filter those newsletters into a folder? I'm not sure how making it RSS is materially different.
Hardly anything I described here goes directly into my inbox, to be clear. That would drive me crazy (I rarely have more than a dozen emails in my inbox, all things that I need to act on somewhat immediately).
Sure. Probably would work well with name+newsletter@example.com type filters.
I just like to keep them separate, I suppose. And, I just naturally prefer to consume news through RSS feeds. Every newsletter or RSS feed I read is ephemeral, disposable, and I probably only ever read 1% of the content (if that). Email (for me) tends to be the near opposite, and anything I can do to keep the noise and cruft out of my email, I'll do.
Never used it before, but I imagine it wouldn't work well with hundreds of updates every day? Plus, centralized feed management is easier with an RSS service like Feedly. I don't want to scour through all my mail for my feeds; as if it weren't hard enough keeping up with mailing lists already.
It would work fine, mail servers and clients are damn comfortable handling those numbers.
I suppose it'd be easier if you never want to have any files to back up, but I already do (emails themselves, git repos, various databases) so storing the rss2email configuration is nothing extra for me.
You can set up filters to put the stuff from RSS in whatever folders you want: by feed, by author, by keyword, just like any other email. I use Sieve with Dovecot/Pigeonhole, but any decent hosted mail service should provide something similar.
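For the Dovecot/Pigeonhole setup mentioned, a minimal Sieve rule might look like the sketch below. The `X-RSS-Feed` header name is an assumption (feed-to-email tools typically stamp messages with an identifying header, but the exact name depends on your pipeline); adjust it to whatever your tool actually emits.

```sieve
require ["fileinto"];

# File items from one particular feed into their own folder;
# everything else falls through to the inbox as usual.
if header :contains "X-RSS-Feed" "example.com/feed.xml" {
    fileinto "RSS/Example";
}
```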
Question: why am I constantly hearing about RSS and never Atom? Isn't Atom newer and supposedly better? Are the two basically interchangeable, or did Atom never catch on enough that people want to revive it?
I think it's a situation sort of like with SSL and TLS, where the former term implies the latter. I mean, it's not rare to find people using the term "SSL Certificates" even on TLS-only setups. Also:
Lack of a good client is not the issue here. RSS needs to either add a standard way to serve ads or some sort of revenue model. Otherwise it doesn’t make sense for most publishers to adopt or maintain it.
So 'we' can block it. NLP for ad detection would be a disastrous problem to tackle. And if users need to spend considerable effort to distinguish ads from actual content, the platform will soon die to angry mobs.
The comment to which I was replying claimed that it was necessary for publishers to adopt RSS. I'm pretty sure they were thinking that, if RSS had some kind of standard way to increase the probability of ads being seen, publishers would adopt it more widely (and that's probably true).
My point is that feed content can already include ads as regular text or images or whatever. Feed readers and aggregators complicate ad-tracking, but not ads themselves (like what newspapers and magazines have used for decades [or centuries]). Some of my favorite feeds are sites with plain ads and the advertisers seem to be targeting the site's audience instead of individuals. That seems to work well enough given that it's existed in its current form for several years now at least.
> And if users need to spend considerable effort to distinguish ads from actual content, the tech-platform will soon die due to angry mobs.
We're probably writing past each other. I think you're imagining a much more widespread adoption of RSS in which this would be a real problem. Or maybe I'm just weird. But I don't see the problem with, e.g. following a feed of someone's Twitter activity that includes an (obvious) ad every n feed items for some suitable value of n. And my imagined world wouldn't require any changes to RSS.
I've been using rss again for a few months, and it's been really nice. It's like twitter without the noise.
The "wow" moment for me was when I discovered that Google Alerts lets you get alerts as an RSS feed (individually for each alert). So content discovery through RSS is now possible (the lack of it was the main reason I started to replace RSS with Twitter). Alerts have two modes: "main articles only", where Google applies some black-box magic to limit the number of items talking about the same thing, and "all articles".
I'm seriously thinking about using this to build my own privacy-oriented clone of Google Feed. I could have a browser extension performing keyword extraction on my web history to detect my interests, then set up Google Alerts on those keywords, and have a local app chew through similar items with TextRank to produce a "news" digest with a list of sources, maybe sorted by Alexa rank. Sounds like quite a job, but worth it.
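The keyword-extraction step doesn't need anything fancy to get started; a naive frequency count over visited page titles already gives a usable signal. A sketch, with a made-up stopword list and sample history (everything here is illustrative, not part of any real extension API):

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for", "on", "how"}

def top_keywords(titles, n=5):
    """Naive interest detection: most frequent non-stopword terms
    across a browsing history's page titles."""
    words = Counter()
    for title in titles:
        for word in re.findall(r"[a-z]+", title.lower()):
            if word not in STOPWORDS and len(word) > 2:
                words[word] += 1
    return [word for word, _ in words.most_common(n)]

history = [
    "How to tune PostgreSQL indexes",
    "PostgreSQL replication explained",
    "Rust ownership in five minutes",
]
print(top_keywords(history))  # 'postgresql' ranks first (2 mentions)
```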
In terms of technology, it's a dog. Beyond the problems associated with malformed XML, and crazy namespaces (I once went through several hundred thousand feeds and found over 100 different tags), the process of polling for new content is inefficient and wasteful. I recently moved my personal blog from one server to another and was amazed at the amount of bot traffic I get - over 5 years after I stopped blogging regularly.
In terms of business sense - why would a publisher ever want to create an RSS feed in the first place? I'm still surprised they bother. RSS feeds don't drive sufficient traffic to justify their existence, and they allow easy copying/republishing. There's zero financial incentive.
I'm a news junkie and loved blogs in their heyday, but those days are over and they aren't coming back.
I recently unfollowed most "media" on my Twitter, and transitioned those sources to RSS / the Feedly app on iOS. Feedly works well enough, my Twitter feed is now much quieter, and I feel good about spending less time on Twitter (for various reasons).
I have found that finding an RSS feed for a website/brand can sometimes be difficult. Feedly does a good job making this easy, but sometimes I need to resort to Google, to find the proper feed.
I also find it rewarding to follow (on Twitter) journalists, instead of the paper/website/org they work for. You get more personal tweets, story follow ups, and can engage them in conversation, unlike most media feeds which are managed by staff uninterested in engaging.
I use Liferea. It worked well six years ago and still works well. I don't need to share my feeds with the "cloud", I don't need to run yet another instance of Chrome, and I don't need yet another copy of node_modules. Pricing: $0/month.
That looks like a nice desktop app, and that's certainly a start.
If other users are anything like me, then to really revive RSS you have to think more multi-device.
Assume the user will want to get to their feed of stuff to read from their phone, tablet, or laptop.
Assume the user will want to listen from their phone most of the time (iphone or android), but maybe their watch while exercising, their car stereo while driving, and through speakers like Echo or Sonos at home.
Get these things to sync up nicely and I bet people wouldn't mind a bit that RSS is underneath somewhere, making it go.
Does this app support RSS syncing services like Feedbin? I don't see any mention of that in the description, which suggests that it doesn't. If so, that's a pretty big missing feature.
* Allows items on that feed (from all the sources) to be earmarked
* At the end of reviewing the unread items of the feed, summarise a 'checkout' of earmarked items
* Sends a payment to each of the publications for the content earmarked
* Compiles the content into a consumable format such as kindle, PDF or download for mobile
I can only think of instapaper/pocket type services that don't allow for payment, or for browsers like Brave that use blockchainy things that seem new and scary. Or are just RSS readers.
Pretty much my thoughts: if you want RSS to live, turn it into the platform for content micropayment. If you just want it to be a better adblocker, don't act surprised if nobody wants to play along.
RSS never died. Many sites (at least those of interest to me) still release feeds. It's just that they are not advertised as much and the RSS logo kind of disappeared.
I’m still looking for a reader that can crawl the web and find articles based on specific keywords and exclude sites of my choice. I want to follow certain topics in a particular niche, but the best solution I've found is setting up a weekly roundup with Google Alerts, which only includes sites that are part of "Google News."
Shameless plug: This is exactly what we built https://contentgems.com for. It is entirely driven by feeds (both RSS/Atom as well as Twitter home timeline). You can filter by content (Lucene query), by feed, by domain (suffix) and a few other parameters.
I didn't find any cooler features in Winds than in other Google Reader clones. I have been using Inoreader (https://www.inoreader.com/) since Google Reader was shut down, and it works very well for me.
For websites I want to follow that don't provide an RSS feed, I wrote a 30-line Python script that runs in a daily cron job and creates an XML feed I can import locally in my reader (newsboat). Easy, and it works great.
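For anyone wanting to do the same, the feed-generation half fits comfortably in a few stdlib lines. A sketch (not the commenter's actual script; the scraping half that produces the (title, link) pairs is omitted, and the URLs are placeholders):

```python
import xml.etree.ElementTree as ET

def build_rss(title: str, link: str, items) -> str:
    """Build a minimal RSS 2.0 document from (title, link) pairs,
    of the kind a daily cron-driven scraper might emit."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    ET.SubElement(channel, "description").text = f"Scraped feed for {title}"
    for item_title, item_link in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = item_title
        ET.SubElement(item, "link").text = item_link
        ET.SubElement(item, "guid").text = item_link  # link doubles as guid
    return ET.tostring(rss, encoding="unicode")

xml = build_rss("Example", "https://example.com",
                [("First post", "https://example.com/1")])
print(xml)
```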
How about everyone gives the creators of Winds some credit instead of bitching about RSS, the interface, the desktop experience, etc. I would imagine they put months of effort into building out such a complex application.
I don't think the problem is a lack of clients or interest from consumers; it's that publishers want more control over the content and how it is consumed (for example, by forcing you to use their own platform).
Winds 2.0 was easy to set up, but it immediately asks you to register an account on first launch. Why? I prefer to store my settings locally or in Dropbox. I uninstalled the app without going any further.
I have far too many feed items to read. I'd like to review ~1,000 items in 20 minutes, as a start. (Review means: Read the headline, possibly read the summary, decide whether to read the full story, do whatever is needed to open full story, move to next item; I'll read the full stories after I've gone through all the feed items.)
* Deduplication: Which might remove ~5% of my feed items
* Grouping: This would be by far the most important feature. I need something to group articles covering the same story (e.g., all the news publications' stories on the big game or big speech or big calamity last night). Then I can quickly choose one and ignore the rest.
* Speed: UI that responds at the speed of thought and maximizes throughput when reviewing feed items. A keyboard interface is essential.
* Complex filters: So I can automatically process items as desired.
* EDIT: Automate handling of broken feeds: On one hand, I want to spend as little time on broken feeds as possible; perhaps the reader can retry for x hours, then try obvious alternative addresses (perhaps Stream can maintain a db of the new addresses, saving each user from redundantly solving the problem themselves), then notify me with an efficient interface for resolving the problem.
* Micropayments: Maybe over-ambitious, but it would be a great way to support the authors. A possibly world-changing feature.
Judging by my billing history, it looks like I signed up the same day it was announced back in 2013. I must be grandfathered in. I'd happily pay $5/mo for Feedbin if the grandfathered plans were removed. In terms of cost per hour of use, it's probably the service I pay the least for.
Not the guy, but one reason to avoid ubuntu snaps is that it is designed not to allow users to have complete control over software updates. With snaps, the developer controls the update process, not the user.
Some people believe that the user should have the final say in what his or her system does. Snap goes against this. It's a great project otherwise.
I'd add to that a general dislike of the "splitter" attitude taken by Ubuntu: Unity vs GNOME, Mir vs Wayland, now Snap vs Flatpak. The latter works well; the former doesn't improve on it in any meaningful way. Why fragment?
Nearly every site I visit on a regular basis, HN, Reddit, YouTube, has RSS feeds. Granted, they're link only feeds, but that's far better than having to create an account just to get an inferior "subscription" experience or having to re-scan over things I've already read and ignored.
I've never used an RSS reader so correct me if I'm wrong, but RSS can basically allow you to read the content without advertising? Sharing is fine, but I think they would prefer sharing in a way that can make money.
I use RSS/Atom to keep up-to-date on the 100+ webcomics I read, and the content in those feeds can vary wildly. Some include the full comic page, some only a thumbnail, some contain an accompanying blog post, others are empty and you need to follow the link to see the post.
If you want to read everything in your feed reader, those "incomplete" feeds are disappointing, but I just use it as a kind of newsletter to get notified of updates. I even use Thunderbird's built-in feed support.
Most feeds only contain "excerpts" - which require you to visit the website to read the entire article. Thankfully, "reader mode" exists. Ads are also entirely possible, but not commonplace.
Same here - I use Feedly less often than I used Twitter when my media feeds were housed there. I also find this same behavior after we recently ditched cable TV. Those TV shows and networks that were "important" to me before are not nearly as much now that I've broken free.