> A webring contained a list of websites that all contained a similar theme. A single moderator — or Ringmaster — was in charge of approving and adding each website to a webring. Sites participating in a webring would then place the ring’s navigation box at the bottom of their site, which would bring visitors to whichever site was next or previous in the list (depending on which option they selected). If a visitor was on the last site in the list and clicked next, the list would loop back and load the first website in the list, in essence forming a ring of websites.
If this is widely adopted, it would be quite straightforward to crawl and graph a network of all the blogs in the ring. This would be really cool! Search sucks for finding good blogs in my experience.
Is there anything in the output html that would make this crawling easier to do?
RSS seems to persist because it's on by default on almost all blogging platforms. How can we make this an on-by-default option in the main static site generators and CMSes?
How would you go about getting that information on existing blogs? A great many blogs have 'blogrolls' but scraping that information seems far more time-consuming than it ought to be due to the variations in layout etc. etc.
WordPress-based blog proprietors are likely to maintain their blogrolls using WordPress's Links feature. Because this is a standard module that produces more or less the same output across all WP blogs, it is rather easy to parse and scrape (a rough sketch follows below).
However, if this Show HN tool is for static-site generator-based blogs, then individual bloggers may have more leeway on how to format their blogrolls.
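Here's roughly what that scraping could look like, as a sketch in Go: it assumes the blogroll links sit inside a <ul> whose class contains "blogroll" (the markup WordPress's Links module conventionally emits, though that selector is worth verifying against real blogs), and it uses the golang.org/x/net/html parser.

```go
// Rough sketch: pull blogroll links out of a blog's front page.
// Assumes the links live inside a <ul> whose class contains "blogroll";
// treat that selector as an assumption to verify, not a guarantee.
package main

import (
	"fmt"
	"net/http"
	"os"
	"strings"

	"golang.org/x/net/html"
)

// classContains reports whether the node's class attribute contains word.
func classContains(n *html.Node, word string) bool {
	for _, a := range n.Attr {
		if a.Key == "class" && strings.Contains(a.Val, word) {
			return true
		}
	}
	return false
}

// walk collects hrefs of <a> elements found inside a blogroll <ul>.
func walk(n *html.Node, inRoll bool, out *[]string) {
	if n.Type == html.ElementNode {
		if n.Data == "ul" && classContains(n, "blogroll") {
			inRoll = true
		}
		if inRoll && n.Data == "a" {
			for _, a := range n.Attr {
				if a.Key == "href" {
					*out = append(*out, a.Val)
				}
			}
		}
	}
	for c := n.FirstChild; c != nil; c = c.NextSibling {
		walk(c, inRoll, out)
	}
}

func main() {
	resp, err := http.Get(os.Args[1]) // usage: scrape-blogroll https://example-blog.com
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	doc, err := html.Parse(resp.Body)
	if err != nil {
		panic(err)
	}

	var links []string
	walk(doc, false, &links)
	for _, l := range links {
		fmt.Println(l) // each link is an edge in the blog graph
	}
}
```

The same walk would work on the generated openring HTML too, keyed on whatever class the template wraps its links in.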
I'll give that a look. It's been a while since I checked into it but I've been wanting to map a particular blog network for a long time so that sounds very helpful.
It's interesting that none of the comments so far complain about the size of the binary (13MB!). Nodejs and Electron projects get flak for this all the time, but Go somehow gets away with it.
I could definitely stand to strip the binary (saves about 2M), but the main issue is that (1) it's statically linked and (2) I actually pull in a surprising amount of dependencies for a ~100 line program. It uses a template parser & renderer; an HTML cleaner; a time formatter; an RSS/Atom parser; TCP, HTTP, and XML implementations; and extra handling for Unicode. Even then, 11M is a bit excessive.
However, the comparison to Electron is unfair. The base Electron package is well over 100M, more than 10x as large, and generally not statically linked. This binary also doesn't hog your RAM and CPU and GPU like Electron does.
Exactly. Who cares about 13MB disk usage in this day and age? But RAM is still too easily exhausted on a workstation with a decent amount of multitasking going on.
Wow, I didn't know Go had large binaries. There's no reason it should, though. Since Go programs are compiled down to native code, if the equivalent of Unix strip (or the JavaScript world's tree shaking, or the JVM world's ProGuard) were run as the last step of compilation, the binary would only contain code that's actually used. Go being a relatively young language, it might be that implementing this just wasn't a high priority for its implementers.
Alternatively, it's possible that the executable size is large because it: (1) actually contains a lot of in-use code, or (2) contains media files. I know on Windows, it's common for people to make media files part of the .exe file. The Windows Portable Executable (PE) format makes it quite convenient to do this; and the Windows API has functions that make loading up files bundled as part of the .exe file as seamless as reading from the file system.
Go binaries tend to be fairly static, relying very little on system libs. It also makes some size tradeoffs for better load times. Also, compared to NodeJS and Electron... it’s still tiny. Not that anyone cares so deeply anymore.
You're right. My observation was just that the discussions often don't go much deeper than "look at this big binary", so I was surprised to find none of that here.
I can't say I've personally ever seen binary size explicitly called out in an Electron slating thread. It's usually memory usage on a typical 8-16GB laptop, or battery drain in the cases of frivolous animated bells and whistles.
The next time you notice a particular clichéd complaint missing from a thread, it might be better to not comment on its absence unless you actually want to have the same discussion all over again.
What does everyone use for a feed reader?! My old phone used to have a built-in feed reader, and my old browser used to have one too. I looked on Google Play and there was very little to choose from. I'm thinking of creating my own feed reader... Or has "blogging" moved over to "YouTubing" and the occasional podcast?!
I had used Thunderbird's RSS reader for ages, but last year I needed to work on two machines concurrently, so I went looking for an alternative and found NewsBlur - https://newsblur.com is awesome, and works on mobile as well. (Unaffiliated, happy paying user. It is free software, so you could host it yourself if you want.)
I'm a lifetime subscriber to Feedly (https://feedly.com) which has behaved sanely since I joined and claims to have mobile apps but I'm not the target audience for reading blogs on my phone.
Feedly is excellent. If only they added decent feed management to their Android client (unsubscribing from the post level in particular), it would be perfect.
I used to send all the RSS articles to my mailbox. Integrated synchronization between devices, mark as read, starring, categorizing, search... All it needs is a script that runs somewhere; it's the simplest thing you can self-host, provided you already have a mailbox (a rough sketch follows below).
I don't use it anymore because I don't do RSS anymore. I found the constant browsing to be a bit of a desperate situation, the same way people constantly browse their Facebook account. I'd rather stumble upon a nice article.
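For anyone curious, here's roughly what such a script can look like: a minimal sketch in Go, using the third-party gofeed parser plus the standard net/smtp package. The feed URL, addresses, mail server, and credentials are all placeholders, and a real version would track which items it has already mailed.

```go
// Minimal sketch of an RSS-to-email forwarder: fetch a feed and mail each
// item to yourself. Run it from cron; add state tracking so items aren't
// re-sent. All URLs, addresses, and credentials below are placeholders.
package main

import (
	"fmt"
	"log"
	"net/smtp"

	"github.com/mmcdole/gofeed"
)

func main() {
	feed, err := gofeed.NewParser().ParseURL("https://example.com/feed.xml")
	if err != nil {
		log.Fatal(err)
	}

	auth := smtp.PlainAuth("", "me@example.com", "app-password", "mail.example.com")
	for _, item := range feed.Items {
		msg := fmt.Sprintf("From: feeds@example.com\r\nTo: me@example.com\r\n"+
			"Subject: %s\r\n\r\n%s\r\n\r\n%s\r\n",
			item.Title, item.Description, item.Link)
		err := smtp.SendMail("mail.example.com:587", auth,
			"feeds@example.com", []string{"me@example.com"}, []byte(msg))
		if err != nil {
			log.Fatal(err)
		}
	}
}
```

The mailbox then provides the sync, read-state, starring, and search for free.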
Some people like Feedly, but I've always strongly disliked it for reasons I'll get into only if someone wants. (Some of them may not even be current, I haven't really checked the ecosystem since the year Google Reader died.)
For those looking for something closer to the original Google Reader experience with a few extra (and very nice!) features, I would suggest Inoreader, which is what I've been using for many years now. It uses a Freemium model, rather than forcing everyone into an ad-based "free" version.
I also hated Feedly, whose cardinal sin seemed to be that it's not Google Reader. Nowadays I use Newsblur, which is close enough to Google Reader, and I wholeheartedly recommend it.
Newsblur was one of two others that I gave a go after the Google Reader demise (also TheOldReader). While I liked it okay, I ended up preferring T.O.R. (and then Inoreader) to it because the interface (at the time) was very messy and they had scaling problems with the deluge of new users at the time.
Any of the three were better feed readers than Feedly, which seemed closer to a Google News clone (or more recently, Mozilla's Pocket) that happened to have support for RSS feeds.
I use BazQux. Super simple interface and it has search (Feedly didn't back when I picked after the Google Reader debacle, and its interface is a bit bloated to me).
I used and recommended it until the Go/Postgres rewrite happened with Miniflux 2. I've switched to Selfoss since then, because the new version had more constraints when self-hosting (PHP and no separate database was easier).
For me, running one docker-compose up is a lot less work than running a PHP stack, so no complaints there, and it has worked without any issues since its release.
Self hosted https://freshrss.org/ has been pleasing me for years now. No troubles at all, even when updating. Uses SQLite, which is awesome when migrating.
Used https://tt-rss.org/ before, but had so much trouble after updates. Hit and miss.
I've been using https://tt-rss.org/ for many years now. I'm hosting the backend on a small VPS just for myself. The web client and the Android app are great and the whole stack is rock solid. Highly recommended if you want to read your feeds on multiple devices and keep them in sync.
I went through a handful of RSS readers over the years till I settled on QuiteRSS (https://quiterss.org/). It's a Qt native application with builds for many operating systems. It's still in active development, fast, and has a nice interface.
I really like the simplicity and minimalism of Sfeed. I set up a nightly cronjob that generates an ordinary HTML page from all the RSS feeds I like to follow.
I used Feedly for quite a while after the Google Reader shutdown, but switched to Inoreader about 3 or 4 months ago. Generally I'm happy with it, although most of the time I access the feeds via the Reeder macOS and iOS apps.
I prefer getopt because it's more succinct and more standardized.
I made sr.ht :) I can't give you an unbiased opinion, but naturally I think it's quite nice given that I designed it explicitly to suit my needs. Check out the marketing page for some more details.
Heya, so I just checked out Sourcehut and despite really wanting to use it (and pay) I decided not to. But because I find the product cool, I figured I should note why. So,
1. My primary projects, the ones I need private hosting for, contain many “large” files (less than a GB in total, currently). Of course, these files need to be tied to the source code. Currently I'm solving this with Git LFS. Now, I'm not exactly tied to that solution; I just want it to be fairly pain-free and baked in, which Git LFS does a good job of. I super look forward to Git LFS support.
2. I know you're working on it, but I have no desire to change my workflow to be email-oriented. I don't intend to debate it; it's just not what I desire. PR UIs would be hugely helpful.
#2 is barely an issue at all, because I know you're working on it. If I had Git LFS, I would have signed up at $10/mo, as that's well worth it to me.
Appreciate the product, hope it does great things for you. I look forward to using it when/if you implement GitLFS or something like it :)
Thanks! As a potential customer I hope this was at least mildly helpful :)
edit: It should be noted that my GitLFS files are of course binary. 3D modeling, PSDs, that sort of thing.
Regarding point 1, I do eventually want to add git lfs support.
Regarding point 2, the only thing I can say is "don't knock it until you've tried it". As someone who's spent thousands of hours each in GitHub, Gerrit, and email, as well as some time in GitLab, Gitea, and Phabricator, I've tried a lot of workflows and email is by far the most efficient. However, in the future I plan on adding web UIs for review and patch submission which are backed by email underneath, so you can use the web or email - whichever you prefer. I think you ought to give email an earnest shot, though.
Also, lots of people use a subset of SourceHut, like the CI service for example, while still hosting git repos on GitHub or GitLab. Because it's modular in design, each piece can be useful à la carte or composed freely with other solutions.
Oh, and if there's a way to subscribe / be notified when LFS support is added please let me know :)
For now I'm building around GitLab, possibly migrating to GitHub if I run out of LFS storage on GitLab (GitHub lets you pay for more storage... don't think I can on GitLab yet...).
That was a disappointing response. The overall design is ok but this really felt out of place. Hopefully he realizes that and stops blaming the “puritanical jerks” as he himself said.
> Is there any reason to use getopt over flag in Go?
Its parser follows the POSIX utility syntax guidelines and behaves like most POSIX tools. The flag parser, on the other hand, is simpler, but can lead to some surprises if you're used to getopt-like parsing.
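To make the difference concrete, here's a small demo using only the standard library's flag package; the comments point out the behaviors that tend to surprise getopt users:

```go
// Demonstrates where the standard flag package departs from getopt-style
// conventions.
package main

import (
	"flag"
	"fmt"
)

func main() {
	verbose := flag.Bool("v", false, "verbose output")
	name := flag.String("name", "", "an example long option")
	flag.Parse()

	// Surprises for getopt users:
	//   prog file.txt -v   -> -v is NOT parsed; flag stops at the first
	//                         non-flag argument (GNU getopt would keep scanning).
	//   prog -vx           -> error; short options can't be bundled into one token.
	//   prog -name=x       -> accepted; one dash and two dashes are equivalent,
	//                         so there's no real short/long distinction.
	fmt.Println(*verbose, *name, flag.Args())
}
```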
Looks a bit like what any RSS reader would provide. Or maybe it's meant to publish links to other blogs on a separate site, which could work if the articles have a somewhat permissive license, I guess.
It allows bloggers who want to keep control over their own platform and content (read: not use Medium) to be part of a bigger network of self-hosted blogs. Like in the good old days.
Yeah, I remember webrings back in the day and absolutely miss them. The discovery aspect was awesome! You come across a great article through a search engine and are curious who the author likes reading.
Most centralized platforms seem to recommend other authors based entirely on the topic. You’re reading a post about Rails generators? It’ll recommend other articles about the same thing. With webrings it was way more diverse than that. End up on an electronics page and discover all sorts of other fascinating authors doing entirely different things.
Edit: I'm asking because I suspect your answer is going to be "it doesn't/that's not the point/who cares?" But people who read blogs will just flock to the centralized services that solve curation & search quite effectively and keep users reading. And then this centralized service will have strong incentives to become _more_ centralized, not less, and decentralized solutions like this one never really gain traction and become functionally irrelevant.
> But people who read blogs will just flock to the centralized services that solve curation & search quite effectively and keep users reading.
I don't think that's true. People read blogs by writers they like (they go directly to that writer), or about topics that interest them (they follow links from a website or social account about that topic), or because they're looking for a specific post about a specific thing (e.g. they Googled). None of those things are best served by gathering writers on a single centralised platform. In fact, so long as blog posts are open, the reader probably doesn't care how the posts are published. (Side note: this is the flaw in Medium. No one wants to subscribe to the Netflix of blogs. People might pay to access their favourite writer, but not in the long term.)
Centralized blog platforms serve the writer. They're easy, they often have good tools, and they have an audience, although I doubt that's actually very useful to most writers - just having more readers without caring who they are is pure vanity. You want relevant interested readers if your blog is going to be effective promotion for you.
Ultimately, blog platforms are fine. Writers are a great customer base to have. Just don't kid yourself blog platforms benefit readers. They don't.
It's meant to be used and configured for individual blogs - you provide the sites you're following and it picks out articles. I'd say the selection of blogs you follow is curation enough. It's not meant for use à la Medium.
Looks like it takes RSS feeds and makes a blogroll. I thought most blogging platforms had that function built-in. Certainly Blogger does (ex: http://www.sloopin.com).
Maybe this is for if you built your own blogging platform and don't have a blogroll plug-in/feature available.
It's good for people with content that doesn't align with the values of employees of major platforms, e.g. right-wing publishers, the porn publishers of Tumblr, etc. Such people are always at risk of deplatforming and it's nice for their content to be spread out all over the web to prevent that risk.
ah, what's old is new again. i can't wait to get my 1999 on! =)
the subtitle is a little more informative: "A webring for static site generators." i'm glad to see this kind of movement toward self-deplatforming.
i wonder how comments are both attributable and decentralized? i'm not a fan of comments being blog posts themselves (which would be one solution to this), as they're often "less formal" than a post and should reflect that.
To be fair, lots of people (certainly I) think we took a series of wrong turns on the information superhighway in the 2000s, toward a dangerous level of centralization, so...maybe looking backward is good.
To be fair, there's not that much information apart from the technical usage on that link. I read it but it's not that easy to parse what it's for and what it looks like.
Interesting quote from Brad Enslen about Google's possible role in this:
> Search engines in general but Google in particular: they have warped the way we build websites, many websites used to have a splash or landing page first: “You have reached the Gates of Marlborodor” (complete with MIDI music) and a big Enter button. Search engines decided they didn’t like that so word spread to get rid of them. Rumors spread that large link pages (for surfing) might be considered “link farms” (and yes on SEO sites they were but these things eventually trickle down to little personal site webmasters too) so these started to be phased out. Then the worry was Blogrolls might be considered link farms so they slowly started to be phased out. Then the biggie: when Google deliberately filtered out all the free hosted sites from the SERP’s (they were not removed completely just sent back to page 10 or so of the Google SERP’s) and traffic to Tripod and Geocities plummeted. Why? Because they were taking up space in the first 20 organic returns knocking out corporate and commercial sites and the sites likely to become paying customers were complaining.
I always liked the concept of webrings as a vehicle of discovery, but in practice it was always terrible. Nobody curated what could or could not be part of a ring so you were absolutely not finding good pages by clicking on the “next site” link.
They have potential, but there are nuances to making them worth using.
Hmmm, oh, I suppose if I just exposed my feed reader's aggregated feed as an RSS feed over a local HTTP server, I could just point Openring at that! I might try to whip that together this afternoon! (I use newsboat in case anyone else is interested in such a thing.)
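Serving it locally is the easy part; a minimal Go sketch, assuming the reader writes its aggregated feed out to some file (the path below is a placeholder):

```go
// Tiny local server exposing a pre-generated feed file over HTTP, so that
// openring (or anything else) can fetch it from http://localhost:8080/feed.xml.
// The path to the aggregated feed file is a placeholder.
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/feed.xml", func(w http.ResponseWriter, r *http.Request) {
		http.ServeFile(w, r, "/home/me/.newsboat/aggregated.xml")
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
```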
Actually, I'm afraid this won't work (but I would be interested in seeing if you could write a patch for it). Openring is designed to only use up to one article from each feed, to keep the set of links diverse.
Post this in right-leaning communities. Right-wing thought leaders are really getting hammered by major platforms right now and they could really drive adoption if they take up your product.
Is it just me, or is this just an automated way to steal other people's content and put it on your blog without their permission?
> ...fetch the latest 3 articles from among your sources... Then you can include this file with your static site generator's normal file include mechanism.
It only fetches the first 256 characters and links through to the article on your own site... I guess if you don't like it you can send me a DMCA request.
https://www.hover.com/blog/what-ever-happened-to-webrings/