Show HN: Openring, a free and decentralized network of blogs (sr.ht)
351 points by ddevault on June 15, 2019 | 95 comments



This article gives a good rundown of what a webring was.

https://www.hover.com/blog/what-ever-happened-to-webrings/

> A webring contained a list of websites that all contained a similar theme. A single moderator — or Ringmaster — was in charge of approving and adding each website to a webring. Sites participating in a webring would then place the ring’s navigation box at the bottom of their site, which would bring visitors to whichever site was next or previous in the list (depending on which option they selected). If a visitor was on the last site in the list and clicked next, the list would loop back and load the first website in the list, in essence forming a ring of websites.


If this is widely adopted, it would be quite straightforward to crawl and graph a network of all the blogs in the ring. This would be really cool! Search sucks for finding good blogs in my experience.

Is there anything in the output HTML that would make this crawling easier?

RSS seems to persist because it's on by default on almost all blogging platforms. How can we make this an on-by-default option in the main static site generators and CMSes?


This post could just as easily have been from 1999, sadly.


How would you go about getting that information from existing blogs? A great many blogs have 'blogrolls', but scraping that information seems far more time-consuming than it ought to be due to the variations in layout, markup, and so on.


WordPress-based blog proprietors are likely to maintain their blogrolls using WordPress's Links feature. Because this is a standard module that produces more or less the same output across all WP blogs, it is rather easy to parse and scrape.
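
To illustrate: a scraper for that standard output can be quite short. Here is a rough Go sketch, assuming the common markup where the links sit inside an element with a "blogroll" class (themes vary, so treat that selector as an assumption rather than a guarantee):

    package main

    import (
        "fmt"
        "net/http"
        "os"
        "strings"

        "golang.org/x/net/html"
    )

    // hasClass reports whether an element node carries the given CSS class.
    func hasClass(n *html.Node, class string) bool {
        for _, a := range n.Attr {
            if a.Key == "class" && strings.Contains(a.Val, class) {
                return true
            }
        }
        return false
    }

    // printLinks walks the parse tree and prints hrefs found under a "blogroll" container.
    func printLinks(n *html.Node, inRoll bool) {
        if n.Type == html.ElementNode {
            if hasClass(n, "blogroll") {
                inRoll = true
            }
            if inRoll && n.Data == "a" {
                for _, a := range n.Attr {
                    if a.Key == "href" {
                        fmt.Println(a.Val)
                    }
                }
            }
        }
        for c := n.FirstChild; c != nil; c = c.NextSibling {
            printLinks(c, inRoll)
        }
    }

    func main() {
        resp, err := http.Get(os.Args[1]) // e.g. the blog's front page
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        doc, err := html.Parse(resp.Body)
        if err != nil {
            panic(err)
        }
        printLinks(doc, false)
    }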

However, if this Show HN tool is for static-site generator-based blogs, then individual bloggers may have more leeway on how to format their blogrolls.


I'll give that a look. It's been a while since I checked into it but I've been wanting to map a particular blog network for a long time so that sounds very helpful.


It's interesting that none of the comments so far complain about the size of the binary (13MB!). Nodejs and Electron projects get flak for this all the time, but Go somehow gets away with it.


I could definitely stand to strip the binary (saves about 2M), but the main issue is that (1) it's statically linked and (2) I actually pull in a surprising number of dependencies for a ~100 line program. It uses a template parser & renderer; an HTML cleaner; a time formatter; an RSS/Atom parser; TCP, HTTP, and XML implementations; and extra handling for Unicode. Even then, 11M is a bit excessive.

However, the comparison to Electron is unfair. The base Electron package is well over 100M, more than 10x as large, and generally not statically linked. This binary also doesn't hog your RAM and CPU and GPU like Electron does.


While it could and should be slimmed down, I agree that the comparison to electron makes little sense.


I think the larger complaint about Electron is the memory usage, not the disk space usage. Go may have large binaries, but they run fast and efficiently.


Exactly. Who cares about 13MB disk usage in this day and age? But RAM is still too easily exhausted on a workstation with a decent amount of multitasking going on.


Personally, it’s the slowness, not the memory use


This looks like it's intended to be run on a server, and it runs once and exits rather than running continuously.

Criticisms of something that needs a semi-permanent allocation of my laptop's memory (like Slack or Spotify) don't really apply.


Wow, I didn't know Go had large binaries. There's no reason it should, though. Since Go programs are compiled down to native code, if the equivalent of Unix strip (or the JavaScript world's tree shaking, or the JVM world's ProGuard) were run as the last step of compilation, the binary would only contain code that's actually used. Go being a relatively young language, it might be that implementing this just wasn't a high priority for the Go implementers.

Alternatively, it's possible that the executable size is large because it (1) actually contains a lot of in-use code, or (2) contains media files. I know that on Windows it's common for people to make media files part of the .exe file. The Windows Portable Executable (PE) format makes it quite convenient to do this, and the Windows API has functions that make loading files bundled into the .exe as seamless as reading from the file system.


Related: https://golang.org/doc/faq#Why_is_my_trivial_program_such_a_...

Also, there are no media files embedded; the whole project is 130 LOC plus some HTML markup:

https://git.sr.ht/~sircmpwn/openring/tree/master/openring.go


Go binaries tend to be fairly static, relying very little on system libs. Go also makes some size tradeoffs for better load times. And compared to Node.js and Electron... it's still tiny. Not that anyone cares so deeply anymore.


I imagine it's because Go compiles to an actual native binary, whilst Electron wraps a browser and a chunk of JavaScript in a zip file.

You can't really say “Go gets away with it” as though these languages are doing the same thing.


You're right. My observation was just that the discussions often don't go much deeper than "look at this big binary", so I was surprised to find none of that here.


I can't say I've personally ever seen binary size explicitly called out in an Electron slating thread. It's usually memory usage on a typical 8-16GB laptop, or battery drain in the case of frivolous animated bells and whistles.

That's just my anecdata


The next time you notice a particular clichéd complaint missing from a thread, it might be better to not comment on its absence unless you actually want to have the same discussion all over again.


I’ve never seen anybody complain about binary size in Electron. Most complaints are about RAM and CPU usage.


What does everyone use for a feed reader? My old phone used to have a built-in feed reader, and my old browser did too. I looked on Google Play and there was very little to choose from. I'm thinking of creating my own feed reader... Or has “blogging” moved over to “YouTubing” and the occasional podcast?


I had used Thunderbird's RSS reader for ages, but last year I needed to work on two machines concurrently, so I went looking for an alternative and found NewsBlur. https://newsblur.com is awesome, and works on mobile as well. (Unaffiliated, just a happy paying user. It's free software, so you could host it yourself if you want.)


I'm a lifetime subscriber to Feedly (https://feedly.com), which has behaved sanely since I joined. It claims to have mobile apps, but I'm not the target audience for reading blogs on my phone.

There are also quite a few self-hosted options; the most famous one that springs to mind is Tiny Tiny RSS (https://tt-rss.org/), but I'm sure there are folks who have more experience with the alternatives: https://alternativeto.net/software/tiny-tiny-rss/


I'm also using Feedly and I am a happy user. It feels so good to read blogs; much more rewarding than social networks.


Feedly is excellent. If only they added decent feed management to their Android client (unsubscribing from the post level in particular), it would be perfect.


Coming soon


I used to send all my RSS articles to my mailbox. That gives you synchronization between devices, mark-as-read, starring, categorizing, search... All it needs is a script that runs somewhere; it's the simplest thing you can self-host (provided you already have a mailbox).
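
Roughly, the shape of it (a simplified sketch; the gofeed parser here, plus the feed list, addresses, and SMTP details, are stand-ins you'd swap for your own):

    package main

    import (
        "fmt"
        "net/smtp"

        "github.com/mmcdole/gofeed"
    )

    func main() {
        // Placeholder feeds and mail settings; fill in your own.
        feeds := []string{"https://example.org/feed.xml"}
        from, to := "rss@example.org", "me@example.org"
        auth := smtp.PlainAuth("", from, "password", "mail.example.org")

        parser := gofeed.NewParser()
        for _, url := range feeds {
            feed, err := parser.ParseURL(url)
            if err != nil || len(feed.Items) == 0 {
                continue // skip feeds that fail to fetch or parse
            }
            // A real script would remember which items it has already sent;
            // this sketch just mails the newest item from each feed.
            item := feed.Items[0]
            msg := fmt.Sprintf("From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n%s\r\n",
                from, to, item.Title, item.Link)
            smtp.SendMail("mail.example.org:587", auth, from, []string{to}, []byte(msg))
        }
    }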

I don't use it anymore because I don't do RSS anymore. I found the constant browsing to be a bit of a desperate situation, the same way people constantly browse their Facebook account. I'd rather stumble upon a nice article.


Some people like Feedly, but I've always strongly disliked it for reasons I'll get into only if someone wants. (Some of them may not even be current, I haven't really checked the ecosystem since the year Google Reader died.)

For those looking for something closer to the original Google Reader experience with a few extra (and very nice!) features, I would suggest Inoreader, which is what I've been using for many years now. It uses a Freemium model, rather than forcing everyone into an ad-based "free" version.


I also hated Feedly, whose cardinal sin seemed to be that it's not Google Reader. Nowadays I use Newsblur, which is close enough to Google Reader, and I wholeheartedly recommend it.


Newsblur was one of two others that I gave a go after the Google Reader demise (the other being The Old Reader). While I liked it okay, I ended up preferring T.O.R. (and then Inoreader) because at the time the interface was very messy and they had scaling problems with the deluge of new users.

Any of the three were better feed readers than Feedly, which seemed closer to a Google News clone (or more recently, Mozilla's Pocket) that happened to have support for RSS feeds.


I use BazQux. Super simple interface and it has search (Feedly didn't back when I picked one after the Google Reader debacle, and its interface is a bit bloated for me).


https://miniflux.app - Very minimalistic and written in Go, with both hosted and self-hosted options.


I used and recommended it until the Go/Postgres rewrite happened with Miniflux 2. I've switched to Selfoss since then, because Miniflux 2 had more constraints when self-hosting (PHP and no dedicated database was easier).


For me, running one docker-compose up is a lot less work than running a PHP stack, so no complaints there, and it has worked without any issues since its release.


Self-hosted https://freshrss.org/ has been pleasing me for years now. No troubles at all, even when updating. Uses SQLite, which is awesome when migrating.

Used https://tt-rss.org/ before, but had so much trouble after updates. Hit and miss.


I've been using https://tt-rss.org/ for many years now. I'm hosting the backend on a small VPS just for myself. The web client and the Android app are great and the whole stack is rock solid. Highly recommended if you want to read your feeds on multiple devices and keep them in sync.


I went through a handful of RSS readers over the years till I settled on QuiteRSS (https://quiterss.org/). It's a Qt native application with builds for many operating systems. It's still in active development, fast, and has a nice interface.


For aggregation I used feedbin.me, and now Feedly after I pruned my subscription services.

For reading, the Reeder app (iOS, Mac).


https://git.codemadness.org/sfeed/

I really like the simplicity and minimalism of Sfeed. I set up a nightly cronjob that generates an ordinary HTML page from all the RSS feeds that I like to follow.


I used Feedly for quite a while after the Google Reader shutdown, but switched to Inoreader about 3 or 4 months ago. Generally I'm happy with it, although most of the time I access my feeds via the Reeder macOS and iOS apps.


Tiny Tiny RSS as a backend and https://github.com/jeena/feedthemonkey as a desktop frontend.


I'm quite partial to Tiny Tiny RSS, available at tt-rss.org.


feed2exec is pretty useful:

https://feed2exec.readthedocs.io/



Disclaimer: it's your project :P


Looks nice. Is there any reason to use getopt over flag in Go?

How do you find the experience with sr.ht?


I prefer getopt because it's more succinct and more standardized.

I made sr.ht :) I can't give you an unbiased opinion, but naturally I think it's quite nice given that I designed it explicitly to suit my needs. Check out the marketing page for some more details:

https://sourcehut.org


Heya, so I just checked out Sourcehut and despite really wanting to use it (and pay) I decided not to. But because I find the product cool, I figured I should note why. So,

1. My primary projects, for which I need private hosting, contain many "large" files (less than a GB in total, currently). Of course, these files need to be tied to the source code. Currently I'm solving this with Git LFS. Now, I'm not exactly tied to that solution; I just want it to be fairly pain-free and baked in, which Git LFS does a good job of. I'd really look forward to Git LFS support.

2. I know you're working on it, but I have no desire to change my workflow to be email-oriented. I don't intend to debate it; it's just not what I want. PR UIs would be hugely helpful.

#2 is barely an issue at all, because I know you're working on it. If I had Git LFS I would have signed up for $10/mo, as that's well worth it to me.

Appreciate the product, hope it does great things for you. I look forward to using it when/if you implement GitLFS or something like it :)

Thanks! As a potential customer I hope this was at least mildly helpful :)

edit: It should be noted that my GitLFS files are of course binary. 3D modeling, PSDs, that sort of thing.


Hey, thanks for the feedback.

Regarding point 1, I do eventually want to add git lfs support.

Regarding point 2, the only thing I can say is "don't knock it until you've tried it". As someone who's spent thousands of hours each in GitHub, Gerrit, and email, as well as some time in GitLab, Gitea, and Phabricator, I've tried a lot of workflows and email is by far the most efficient. However, in the future I plan on adding web UIs for review and patch submission which are backed by email underneath, so you can use the web or email - whichever you prefer. I think you ought to give email an earnest shot, though.

https://git-send-email.io

https://aerc-mail.org

Also, lots of people use a subset of SourceHut, like the CI service for example, while still hosting git repos on GitHub or GitLab. Because it's modular in design, each piece can be used à la carte or composed freely with other solutions.


Oh, and if there's a way to subscribe / be notified when LFS support is added please let me know :)

For now I'm building around GitLab, possibly migrating to GitHub if I run out of LFS storage on GitLab (GitHub lets you pay for more storage... I don't think I can on GitLab yet).

Appreciate your work :)


It'll definitely be announced on sr.ht-announce:

https://lists.sr.ht/~sircmpwn/sr.ht-announce


Might I ask whether you have any intention to revisit this issue? https://todo.sr.ht/~sircmpwn/todo.sr.ht/176


That was a disappointing response. The overall design is OK, but this really felt out of place. Hopefully he realizes that and stops blaming the “puritanical jerks”, as he himself put it.


Isn't that what the issue says he will do?


Ooh no one tell them about the terrifying hybrid feline/invertebrate that lurks on a different git hosting platform. O, the unprofessionalism!


> Is there any reason to use getopt over flag in Go?

Its parser follows the POSIX utility syntax guidelines and behaves like most POSIX tools. The flag parser on the other hand is simpler, but can lead to some surprises if you are used to getopt-like parsing.
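
A tiny illustration of those surprises, using only the standard library's flag package (the program and flag names here are made up for the example; openring itself uses a getopt package):

    package main

    import (
        "flag"
        "fmt"
    )

    func main() {
        num := flag.Int("n", 3, "number of articles")
        verbose := flag.Bool("v", false, "verbose output")
        flag.Parse()

        // Surprises for people used to getopt-style parsing:
        //   prog page.html -n 5  -> flag stops at the first non-flag argument,
        //                           so -n is never parsed and num stays 3.
        //   prog -nv             -> not "-n -v"; flag sees one unknown flag
        //                           named "nv" and exits with an error.
        //   prog --n 5           -> same as -n 5; flag makes no short/long
        //                           distinction, unlike getopt(3)/getopt_long(3).
        fmt.Println(*num, *verbose, flag.Args())
    }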


SirCmpwn, the project owner here, also owns sr.ht.


What's the use case for something like this ?

Looks a bit like what any RSS reader would provide. Or maybe it's meant to publish links to other blogs on a separate site, which could work if the articles have a somewhat permissive license, I guess.


It allows bloggers who want to keep control over their own platform and content (read: not use Medium) to be part of a bigger network of self-hosted blogs. Like in the good old days.

I for one hope it catches on


Yeah, I remember webrings back in the day and absolutely miss them. The discovery aspect was awesome! You'd come across a great article through a search engine and be curious who the author likes reading.

Most centralized platforms seem to recommend other authors based entirely on the topic. You're reading a post about Rails generators? It'll recommend other articles about the same thing. With webrings it was way more diverse than that. You'd end up on an electronics page and discover all sorts of other fascinating authors doing entirely different things.


It was also, kind of, a key feature of Google Reader: you could follow other readers and not just publishers, which also increased discovery.


How does it solve curation & search?

Edit: I'm asking because I suspect your answer is going to be "it doesn't/that's not the point/who cares?" But people who read blogs will just flock to the centralized services that solve curation & search quite effectively and keep users reading. And then this centralized service will have strong incentives to become _more_ centralized, not less, and decentralized solutions like this one never really gain traction and become functionally irrelevant.


But people who read blogs will just flock to the centralized services that solve curation & search quite effectively and keep users reading.

I don't think that's true. People read blogs by writers they like (they go directly to that writer), or about topics that interest them (they follow links from a website or social account about that topic), or because they're looking for a specific post about a specific thing (e.g. they Googled). None of those things are best served by gathering writers on a single centralised platform. In fact, so long as blog posts are open, the reader probably doesn't care how the posts are published. (Side note: this is the flaw in Medium. No one wants to subscribe to the Netflix of blogs. People might pay to access their favourite writer, but not in the long term.)

Centralized blog platforms serve the writer. They're easy, they often have good tools, and they have an audience, although I doubt that's actually very useful to most writers - just having more readers without caring who they are is pure vanity. You want relevant interested readers if your blog is going to be effective promotion for you.

Ultimately, blog platforms are fine. Writers are a great customer base to have. Just don't kid yourself blog platforms benefit readers. They don't.


It's meant to be used and configured for individual blogs - you provide the sites you're following and it picks out articles. I'd say the selection of blogs you follow is curation enough. It's not meant for use à la Medium.


What does it mean for this to be functionally irrelevant? It fulfills its functional purpose regardless of whether it's popular or not.


You rig it into your own blog. It's designed for use with static site generators. Check out the links on the bottom of my blog posts:

https://drewdevault.com/2019/06/13/My-journey-from-MIT-to-GP...

They're generated by this software.


Oh, OK, thanks for the example. That's nice!


What's the use case for something like this?

Looks like it takes RSS feeds and makes a blogroll. I thought most blogging platforms had that function built-in. Certainly Blogger does (ex: http://www.sloopin.com).

Maybe this is for if you built your own blogging platform and don't have a blogroll plug-in/feature available.


I've never been able to find an answer to this I could make sense of, even back when 'blogrolls' and 'webrings' were actually in popular use.


It's good for people with content that doesn't align with the values of employees of major platforms, e.g. right-wing publishers, the porn publishers of Tumblr, etc. Such people are always at risk of deplatforming, and it's nice for their content to be spread out all over the web to prevent that risk.


ah, what's old is new again. i can't wait to get my 1999 on! =)

the subtitle is a little more informative: "A webring for static site generators." i'm glad to see this kind of movement toward self-deplatforming.

i wonder how comments are both attributable and decentralized? i'm not a fan of comments being blog posts themselves (which would be one solution to this), as they're often "less formal" than a post and should reflect that.


To be fair, lots of people (certainly I) think we took a series of wrong turns on the information superhighway in the 2000s, toward a dangerous level of centralization, so...maybe looking backward is good.


We're doing this in the IndieWeb community, an example: https://jeena.net/comments/1045

The background technology: https://indieweb.org/Webmention

The rather unsuccessful attempt at an indiewebring: https://indieweb.org/indiewebring


TL;DR, because comments to date suggest people don't bother reading the link:

1/ Yes, it's a blogroll.

2/ It's meant for static site generators.

3/ It solves curation and search by you doing the curating yourself, in the form of providing links to RSS feeds of the blogs you like.

Looks really nice. I think I'll hook it up to my generator.
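
For anyone who prefers reading code to prose, the overall shape of such a tool is roughly this (a sketch, not openring's actual ~130 lines; the feed URLs and the template are made up for illustration):

    package main

    import (
        "html/template"
        "os"

        "github.com/mmcdole/gofeed"
    )

    // article is one entry in the rendered blogroll fragment.
    type article struct {
        Source, Title, Link string
    }

    func main() {
        // Illustrative feed list; openring takes its sources as flags instead.
        feeds := []string{
            "https://example.org/alice/feed.xml",
            "https://example.org/bob/feed.xml",
        }

        parser := gofeed.NewParser()
        var articles []article
        for _, url := range feeds {
            feed, err := parser.ParseURL(url)
            if err != nil || len(feed.Items) == 0 {
                continue
            }
            // Only the newest item per feed, to keep the selection diverse.
            it := feed.Items[0]
            articles = append(articles, article{feed.Title, it.Title, it.Link})
        }

        // Render an HTML fragment to stdout; the static site generator then
        // includes the resulting file in its templates. (openring also limits
        // the total number of articles and truncates summaries; omitted here.)
        tmpl := template.Must(template.New("ring").Parse(
            `<ul>{{range .}}<li><a href="{{.Link}}">{{.Title}}</a> ({{.Source}})</li>{{end}}</ul>`))
        tmpl.Execute(os.Stdout, articles)
    }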


To be fair, there's not that much information apart from the technical usage on that link. I read it but it's not that easy to parse what it's for and what it looks like.


The bottom of the post includes an image that shows you exactly what it looks like.


I got that, but if you don't know what a "webring" is I can understand how it's not that obvious at first.

I understood what it's about after reading it in detail but I don't blame people who have questions here. That's all I'm saying.


love it! i always wondered what happened to the good old blogroll


Interesting quote from Brad Enslen about Google's possible role in this:

> Search engines in general but Google in particular: they have warped the way we build websites, many websites used to have a splash or landing page first: “You have reached the Gates of Marlborodor” (complete with MIDI music) and a big Enter button. Search engines decided they didn’t like that so word spread to get rid of them. Rumors spread that large link pages (for surfing) might be considered “link farms” (and yes on SEO sites they were but these things eventually trickle down to little personal site webmasters too) so these started to be phased out. Then the worry was Blogrolls might be considered link farms so they slowly started to be phased out. Then the biggie: when Google deliberately filtered out all the free hosted sites from the SERP’s (they were not removed completely just sent back to page 10 or so of the Google SERP’s) and traffic to Tripod and Geocities plummeted. Why? Because they were taking up space in the first 20 organic returns knocking out corporate and commercial sites and the sites likely to become paying customers were complaining.

Comment is from here: https://www.kickscondor.com/when-the-social-silos-fall/#comm...

Blogrolls are returning here and there. Here's one I saw recently: https://www.gyford.com/phil/writing/2019/06/04/blogroll/


I always liked the concept of webrings as a vehicle of discovery, but in practice it was always terrible. Nobody curated what could or could not be part of a ring so you were absolutely not finding good pages by clicking on the “next site” link.

They have potential, but there are nuances to make them worth using


I liked this, and added it to my site: https://www.jefftk.com/p/openring


Very cool! Since I already have my feeds sitting locally in my feed reader's db, it'd be cool if I could just pipe that out to Openring somehow.


Hmmm, oh, I suppose if I just exposed my feed reader's aggregated feed as an RSS feed over a local HTTP server, I could just point Openring at that! I might try to whip that together this afternoon! (I use newsboat in case anyone else is interested in such a thing.)


Actually, I'm afraid this won't work (but I would be interested in seeing if you could write a patch for it). Openring is designed to only use up to one article from each feed, to keep the set of links diverse.


All RSS readers support importing/exporting feeds in OPML format. Maybe you could support that format as an input to Openring?
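
For what it's worth, the subscription list in an OPML export is just nested <outline> elements whose feeds carry an xmlUrl attribute, so extracting the URLs is a few lines of encoding/xml (a sketch of the idea, not an existing Openring feature):

    package main

    import (
        "encoding/xml"
        "fmt"
        "os"
    )

    // outline mirrors OPML's nested <outline> elements.
    type outline struct {
        XMLURL   string    `xml:"xmlUrl,attr"`
        Children []outline `xml:"outline"`
    }

    type opml struct {
        Outlines []outline `xml:"body>outline"`
    }

    // collect gathers every feed URL in the outline tree.
    func collect(nodes []outline, urls *[]string) {
        for _, o := range nodes {
            if o.XMLURL != "" {
                *urls = append(*urls, o.XMLURL)
            }
            collect(o.Children, urls)
        }
    }

    func main() {
        var doc opml
        if err := xml.NewDecoder(os.Stdin).Decode(&doc); err != nil {
            panic(err)
        }
        var urls []string
        collect(doc.Outlines, &urls)
        for _, u := range urls {
            fmt.Println(u) // each of these could become one of openring's sources
        }
    }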


Patches welcome! I think this would be a great addition.


That makes sense. Sure, I'd definitely be up for trying to write a patch for it. I'll pull down the repo this evening!


Wow this gave me a serious flashback to late 90's websites and all the links to other sites in a network.


This should be a Jekyll plugin as well. Maybe when I get time.


Post this in right-leaning communities. Right-wing thought leaders are really getting hammered by major platforms right now, and they could really drive adoption if they take up your product.


Is it just me, or is this just an automated way to steal other people's content and put it on your blog without their permission?

> ...fetch the latest 3 articles from among your sources... Then you can include this file with your static site generator's normal file include mechanism.


It only fetches the first 256 characters, and your site links through to the original article... I guess if you don't like it you can send me a DMCA request.


Seems like it's only the first 255 characters /pedant


It's just you.



