* A blog post is a snapshot: what did the author think at the time they wrote the post? If they change their mind or learn more, they write a new post and link forward and backwards. I know how to write for this environment (write what I think now, try to write things I'll feel glad to have written later) and how to work with things other people have written (consider the date, it's just one person's view).
* A wiki page is unclear. When should it be updated? How much should I trust that it was up-to-date as of the last-updated date vs that just being when someone fixed a typo? A few wikis and sites with wiki-like approaches (Wikipedia, gwern.net) manage to handle this well, but I think it's generally much more difficult and rot-prone.
And a blog could be updated as many times as you want! Suppose you wrote a post with some kind of a guide a few years ago that still gets a lot of views — you can still update it, and even re-publish it on your homepage.
This style is more popular with some media sites — e.g. journal.tinkoff.ru (in Russian) does this with their “instructional” posts — but I think it should work for personal blogs just as well.
I've seen a number of blog posts with eye-catching banners at the top indicating either that the post was updated with new info, or that the post is outdated, with a link to a follow-up entry. Those are always nice to see, since they show not only that the info is up to date, but also that the author cares enough to keep their posts relevant to their audience.
I think in modern times, because you don't see the physical form, you forget that a newspaper, a magazine, an encyclopedia, an academic paper, and a reference book are different for a reason and do not have to be unified; there was never a need to do so. Having a way to publish contemporary articles and building an effective knowledge reference are different goals, and the physical print production was part of the language of intent. A blog with a contemporary note on an obscure Ubuntu upgrade issue is often the best way to describe it, whereas a wiki works as a current reference of what is now the fundamental baseline of relevance. A wiki eschews the most timely details the same way an encyclopedia focuses on the core points. As a writer I need many formats, and as a reader I cherish many formats.
I like the sentiment of this; I feel it very strongly. But I also think this is just another expression of personal-website anxiety. This is the same sentiment that led to the boom in digital gardens, Notion, PKM, etc.
I haven't accepted it yet, but I think people who host their own personal websites need to accept that they're hosting a personal website, and it's going to change over time as they change. People already know this, accept it, and see it as a feature of a personal website. It's necessary as the tech changes, too: a personal website with the latest and greatest tech from 20 years ago renders like garbage in a modern browser.
I really don't mean to sound mean, and I do sincerely empathise and sympathise with the author, because every year or two I have the same revelation that my website hasn't been updated in a while, and it's not my fault, it's just that my platform isn't technically correct and it's too restrictive and _that's_ what's stopping me being consistent on my personal website. But let's be honest: that's a me problem for not updating it or adding to it.
Every year or two for the past 20 years, I'm sure many of us could have written the same "<my current website structure> rots. <The one I had an epiphany about a few days ago> waits".
> a personal website with the latest and greatest tech from 20 years ago renders like garbage in a modern browser
The irony being that one without the "latest and greatest" tech would still render fine - tbh, even in 2005, some of this "latest" tech was recognised as not the "greatest".
Examples? I don't think daringfireball.net has changed that much since 2002, nor blog.codinghorror.com since 2004.
Coding Horror has moved from self-hosted Movable Type to hosted Movable Type to Ghost. The theme has been deliberately kept relatively consistent, but the tech has had significant changes.
That didn't last long. The timing is incredibly funny, as just 4 days after making this post, Coding Horror _did_ redesign, more significantly this time.
Twelve-year-old me's website from 2002 on the Web Archive looks like trash compared to when I made it, because framesets and CSS have evolved since then.
Many of my nerdy friends at the time also had fully Flash-based websites, or at least a Flash animation on the front of their sites.
We also all made liberal use of blink and marquee tags, which might still work in some places but are officially deprecated and unsupported in others. At the time, these sorts of things were considered the latest and greatest. Hell, I remember one maniac playing around with Microsoft Silverlight on their personal site when we were in college. We knew some of it was good, some was bad, etc. None of it survives as intended now, though, unless it was updated.
There aren't many personal websites I remember that have stayed personal sites for over 10 years. But some have, kind of. Take gwern.net or stevepavlina (not a regular reader anymore): both of them - if you squint - look pretty similar to the minimal 2000-2010 style they started with, but the implementation has had to evolve, as even things like CSS have had a bunch of breaking changes over the major versions.
codinghorror uses Discourse for comments, and that didn't come around until 2014, and the only thing on daringfireball that looks like it might be from 10+ years ago is the content: the tech that renders it and the design all hinge on 2010+ technologies and companies.
I love minimal styles and ideals. But if you want to create a personal website that survives time and looks how you expect in a modern browser, even a minimal one, you have 2 options:
1. it looks exactly like some variation of motherfuckingwebsite.com, and even that will eventually vary between browsers
2. it takes some effort to maintain over a period of years
Sorry, I wrote the lengthy response above and I think I've just realised that me and the other person who responded to you misunderstood your comment: we initially assumed you were making only one point, that the latest and greatest tech wouldn't render, but your examples clearly use modern tech from well after 2004, which makes no sense under that reading. On rereading, you're making two points, right? One point is that tech from 2004 can still work (though you didn't provide an example of that), and the other is that the two sites you did provide are examples of "website structures" (e.g. the linear blog) surviving over time, though you didn't actually state that point. Is that right?
Assuming so, you're completely right on your second point, and people who've done that (maintained a blog for years) are the real legends who've conquered personal-website anxiety. They're better than many, including me and OP, who have the issue of feeling like they need to completely rearchitect their website every 2 years into a wiki, or digital garden, or knowledge base, or whatever the latest PKM tech buzzword is. This is why I was saying that there's no need to crap all over e.g. blogs for rotting, as your examples prove blogs can survive for decades. OP is just another response to that feeling of "uuurgggh, I can't quite wrangle my thoughts into a neat, atomic, chronological list of blog posts and people who read blogs will judge my personal website, so I'll tell them it's not a blog".
> There aren't many personal websites I remember that have stayed personal sites for over 10 years. But some have, kind of. Take gwern.net or stevepavlina (not a regular reader anymore): both of them - if you squint - look pretty similar to the minimal 2000-2010 style they started with, but the implementation has had to evolve, as even things like CSS have had a bunch of breaking changes over the major versions.
FWIW, I don't think CSS/JS have been much of a problem in terms of breaking. Before Said Achmiz got involved, while gwern.net was still a fairly simple static website, there were pretty much no serious instances of browsers breaking the CSS/JS. Everything ran fine. (Browsers are good at backwards compatibility, as long as you aren't pushing features too hard, as my site was not.)
The big problem was changing expectations and demands. 2010-era Gwern.net would continue to look fine and render fine in a 2025 web browser... as long as you were on a desktop/laptop. If you were lying in bed at night on a smartphone or a tablet, it would be both blinding and unreadable. This wasn't a problem in 2010, when <5% of my traffic was 'mobile' and mobile was very hard to design for even if you wanted to bother, but it is a problem in 2025, when >50% of my traffic is mobile. You can't write off >50% of your traffic.
People now assume that a website will render reasonably on a smartphone, and they increasingly assume there will be a dark-mode. But, even if you aren't using a legacy website based on Flash or tables, and are creating a clean greenfield HTML5 website, these are actually quite complex, demanding features!
You can get 80% of the way with some relatively simple tweaks, but then the rest can be almost arbitrarily difficult. Note that aspects of dark-mode can be subtle - while most websites have settled on the body class approach which avoids the 'blinding flash of white' problem, they continue to screw up dark-mode by treating it as a binary toggle between light vs dark, instead of a three-way toggle between auto vs light vs dark, and most dark-modes settle for fading out images instead of inverting vs fading them with a heuristic like https://invertornot.com/ . (We're still iterating on both features. Most recently, in November 2024: moving the scrollspy header to the bottom of mobile screens instead of top, and taking another dive into color theory in order to have colored link-icons which look good on mobile dark-mode.)
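For concreteness, here is a minimal sketch of the three-way toggle idea, assuming a body-class approach; the class name, storage key, and structure are illustrative, not gwern.net's actual code:

```typescript
// Three-way color-mode toggle: "auto" defers to the OS preference via the
// standard prefers-color-scheme media query; "light"/"dark" are explicit
// user overrides persisted across visits. (All names here are hypothetical.)
type Mode = "auto" | "light" | "dark";

const osDark = window.matchMedia("(prefers-color-scheme: dark)");

function savedMode(): Mode {
  const m = localStorage.getItem("color-mode");
  return m === "light" || m === "dark" ? m : "auto";
}

function applyMode(mode: Mode): void {
  const dark = mode === "dark" || (mode === "auto" && osDark.matches);
  document.documentElement.classList.toggle("dark-mode", dark);
  localStorage.setItem("color-mode", mode);
}

// Follow live OS theme changes, but only while in "auto" mode.
osDark.addEventListener("change", () => {
  if (savedMode() === "auto") applyMode("auto");
});

// Run this early, before first paint, to avoid the 'blinding flash of
// white' that the body-class approach is meant to prevent.
applyMode(savedMode());
```

The point of the third state is that "auto" keeps tracking the OS setting over time, whereas a binary light/dark switch freezes the user's first choice forever.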
Compared to the burden of features like a mobile mode which is actually good, the regular level of web browser JS/CSS breakage is scarcely noticeable. (It tends to look something like, 'oh no, now in Chrome 123, there's a 1px gap in the dropcaps and the spacing constants have to be adjusted, THANKS GOOGLE'. Annoying, but hardly a major burden.)
Of course, you can quickly strangle yourself with your dependency stack, particularly if you get involved in NPM or you listen to the wrong web gurus, and your website will break overnight, and JS is definitely very fragile and rapidly bitrotting if you define 'JS' as node et al. But if you are being sensible about it and using plain JS (as we do) or well-chosen stable frameworks, of web dev's woes, 'web browsers keep breaking old working CSS/JS' is thankfully far down the list.
The big problems are self-inflicted ecosystem problems like bitrot/bitcreep (https://gwern.net/holy-war#bitrot), and the ever greater complexity expected of websites compared to the old 'preview in the only web browser anyone uses (ie. IE6), make sure it looks fine in the standard monitor dimension 800x600, export from MS Frontpage, ftp up to your shell account, done' days.
You have no idea how much joy it brings me to have something written to me, from you :) Your site does such an amazing job of capturing a particular way of thinking and interacting with the world. I can't articulate what that way is exactly, but it's been an immense comfort and an inspiration to me personally in so many ways since around 2010. I identify as a bit of an overthinker, and I see your site as a testament to what an overthinker can produce if they can wrangle their thoughts into a coherent personal website structure. I'll try not to gush too much, but you are someone I have respected and followed on and off for most of my adult life. Thank you, and please have some warm fuzzy feelings of being appreciated by a random internet stranger. Anyway, back on topic...
I think it depends what you define as breaking. I'm a simple, nerdy, backend/infra kind of person, so CSS issues don't really bother me too much - I like your "what gets used, works" thought: if I can read the text, I'll live with it. But I've met stereotypical designer-type web devs who make the 1px gap in Chrome 123 at least sound like a major burden. I don't know Said, but I assume they sit slightly closer towards the "1px gap _is_ broken" side than you or me, even if they're not a full-blown designer-type.
Regarding changing expectations, and the mobile example, I find it really interesting to read your thoughts on that. I feel like your website is an exception to normal expectations, at least for me. If I find myself on your site on mobile, I just e-mail the link to myself and pick it up on desktop. The information is so dense, stimulating, and interlinked that consuming it on a tiny mobile screen feels like listening to Mozart over a tin can and some string. I think the physical mobile form factor is not good for long-form or cerebral content, but maybe that's my own DeSkToP iS BeStToP bias.
Have you done any analysis on bounce rates based on first visit device? I would guess a slightly higher return rate for people who visit on desktop first, as - to me at least - the archetypal gwern.net desktop website experience is a big part of the charm.
> But if you are being sensible about it and using plain JS (as we do) or well-chosen stable frameworks, of web dev's woes, 'web browsers keep breaking old working CSS/JS' is thankfully far down the list.
I think some of this also depends on motivations for having a website too. If it's to get notes out there, then as long as it can be read it's fine. But if it's to stay up to date and be associated with the latest and greatest web experiences and technologies (or at least be perceived to) you're destined to a life of new-shiny chasing and tech churn. I do wonder how much of that ability to identify "sensible and plain JS" comes from shiny-chasing 10 - 20 years ago and just using the bits that still work from then. What will be the "sensible and plain JS" in 20 years time?
A lot of this, I guess, is also specific to being a dev for a personal site rather than a pro web designer: part of being a modern business is keeping a modern front, so whilst "browsers breaking because of old stuff" is far down the list, "having to switch to the new shiny framework because of a business decision to 'refresh' things" is a lot higher.
But to try to bring this to some sort of conclusion: on a technical/presentation level, HTML/CSS/browser breakage is - or will be - a problem for a lot of people getting started with personal websites, because even experienced nerds can struggle to identify which tech will be around in 5 years' time. You probably don't see it as that much of a problem, because you've solved it with a combination of the 2 options I gave: your site feels like a stylistic ally to motherfuckingwebsite.com (minimalism), and effort has been put in to maintain it over the years (e.g. UI update efforts via Said).
To tie this back to OP about blogs vs wikis: that's a slightly different problem the author faces, IMO. Not quite bit rot, but something higher-level: an anxiety around content structure matching the shape of thought of the individual author, and the perception of that (blog vs wiki vs garden vs PKM buzzword), rather than something technical (React vs Angular vs JS/CSS buzzword).
I think Said is a lot closer to the '1px is broken' side than me, but also that it's not really that important. How many websites have dropcaps at all? They are fun to have, but very optional. And so if you find the maintenance to be a burden, you can simply not do that. And then the core CSS/JS basically never breaks. (All of your problems will come from churn elsewhere, like SSL certificates - a major source of feature creep is the demand that everything be HTTPS now - or OS upgrades or SaaS bitcreep.) So my point is that a simple website is as close to zero-maintenance as makes no difference. It's much more complex than plain text files, but that complexity is widely implemented and stable.
I too am biased towards desktop and have to force myself to make any use of the mobile version, and check for regressions or friction. I find mobile to be incredibly stultifying for any kind of complex work or thinking. Nevertheless! There are a lot of mobile readers (including many people I respect and would like to keep as readers). So, mobile's gotta be good. At least I think we have managed to create about as realistically good a reading experience on mobile as feasible, and serve as a design showcase there too, to shame everyone else.
That sounds plausible, but I haven't attempted to link visitors across devices. I am not sure if Google Analytics even allows that. (The new one is so confusing and hard to use I've largely stopped looking at traffic statistics.) I think it might be hard to interpret such an interaction, though, because mobile users bounce so fast and clearly so distracted & thoughtless & crippled (eg. because they are killing time on a bus), while desktop readers are much more mindful and 'high-quality time', so they are different in many ways beyond what version of the website they see.
Do very many people actually want to show off the latest web-dev gimmick? I'd say not. Almost no one is doing that. Even web devs often have quite plain personal websites or blogs which are not trying to show off their mastery of the latest Nodebuzz.js (best viewed in Chrome 234) feature. When people write about their new Proton static site which uses Netlify to cross-compile from Github pages through AWS Lambdafront functions called on demand with a Kafka load balancer to prerender pages for the wasm.js shim, and it only costs $100/month after optimization to host their blog with 50 pages and 50 views/day (aside from that time they accidentally created 10MB of logs and it cost $1,000, but don't worry, AWS was willing to forgive it), I am pretty sure they are not going through that to impress an employer! They are going through that because it's a lot of gizmos and puzzles to play with and 'number go up', and the end result is irrelevant. They are doing it to watch the gears of the new toy go round, as Norbert Wiener said of the atom bomb makers. Which is fine, as long as you're just getting experience (and not making a new atomic bomb), but it may have the side-effect of naive people thinking that any of that is a good idea, much less necessary.
Managers & designers, on the other hand, do need churn to justify their existence. Look at the hilarious trend of companies commissioning new fonts where you can't see the difference from some Helvetica-like even when overlaying the fonts, accompanied by thousand-word manifestos rhapsodizing over the design process inspired by the cliffs of California and the sensual humanism of Bauhaus (with a cheeky nod to Swiss Modernism, and a dangerously daring serif on the 'g').
Oh, I think I would disagree there. As I was saying, browsers are actually amazingly good at backwards compatibility (perhaps too good). Which makes it easy to decide what browser features to use: if it's Chrome-only, sure, it may be gone in 5 years, but you don't have to be a genius to know "maybe I shouldn't make any major use of a Chrome feature that no other browser wants to support, and I should wait and see". I may have mentioned that we have a rule of thumb: we can use anything with >95% global support on CanIUse.com, and anything else must be backwards-compatible/polyfilled or not used at all. We've hit a handful of things where the browsers betrayed us (I'm looking at you, `srcset` and `ping` attributes), which you can find on https://gwern.net/design-graveyard but you know, even mid-1990s `<blink>`, which was never standardized to begin with, worked until 2013! So I'll put it this way: I don't think you can name 3 examples of reasonably widespread features, which were standardized and had >95% CanIUse.com support as of 5 years ago (2020), which are now either broken or rapidly being removed and have fallen <95%. (The only examples I can find are still exotic: https://en.wikipedia.org/wiki/Cache_manifest_in_HTML5 , browser FTP support, and https://en.wikipedia.org/wiki/HTTP/2_Server_Push . Two of them would have been bad ideas to use in new code in almost any circumstance, and I suppose for personal websites you could've in theory used the cache manifest for an offline-browseable website... maybe. I doubt many people ever did.)
I think there's some degree of anxiety around perfectionism, but also around the aversion to writing (some people find any kind of writing to be very difficult, even when they have just written an essay by talking to you in a chat client), and the fear of pseudonyms being broken or of being canceled. (Zoomers & Alphas seem particularly terrified of 'the dark forest'.)
Wikis are great, but it feels like outside of Wikipedia they've been dying. I wish the Bash Hackers Wiki were still with us, and I regret even more that there are no other large old-school wikis for things like systems software development or other programming languages (or even a newer take on the original WikiWikiWeb). I guess maybe some of it is GitHub Pages or that weird GitHub wiki thing now, but they don't really feel the same.
It's tricky, as a lot of what makes Wikipedia good is an accumulation of policies and practices that force you to use its features. You can't just install MediaWiki or Confluence and expect it to be good.
As an example, orphan articles (ones not linked from any other pages) [1] are categorized and tagged on Wikipedia. There are many editors that spend their time trying to find places to link to orphaned articles, which improves discoverability of that information.
The best programming language-specific Wiki I've found is cppreference.com. They have a very nice stylistic approach to dealing with the different C++ standards.
There is a saying you can't solve a social problem with a technical solution. Wikis are mostly just a thin technical layer to let people create their own solutions to social problems.
That saying said, people often wrongly call technical problems social, being blind to obvious technical flaws that drive users in various directions. They attribute it to some inherent user property when in reality it's a technical nuance of the system that does it. Most problems include a user behavior, but most user behaviors reflect the shape of some existing solution.
I think I understand what you mean, but to make sure I do, it would be really helpful if you could give a practical and specific example. Has there been a case with Wikipedia/the Wikimedia community where this has specifically happened?
I think this can only be seen by someone who's enough of an insider, and I am not that much of a Wikipedia guy.
An example that probably everyone can understand is youtube. Content creators are often begging for like/subscribe and even present graphs that show that only 10% of their regular viewers are subscribed. This seems like a social problem to some, but in reality it’s a technical problem because if you like/subscribe, your feed will drown in “similar” videos of much lower quality and that will haunt you for weeks. And there’s a whole set of other ui/ux issues even if you decide to subscribe. So many people avoid interacting with videos too much because of that.
I know YouTube is a well-beaten horse, but it is a textbook example of what I mean. I could theorize about what's wrong with wikis, but the wiki guys see it much better. My key thought here is "check twice that it's not technical before calling it social", though it may not be the case here.
On the other hand, the entire reason that creators want you to like/subscribe is so you drown in their videos, so you watch more, so they get more ad $$$. If liking/subscribing didn't have that effect, they probably wouldn't beg you to do it.
As an aside, though, only 10% participating isn't unique to YouTube. It's a pretty common rule of thumb that in any online community roughly 90% only lurk, 9% participate in some limited fashion, and 1% become super active: https://en.wikipedia.org/wiki/1%25_rule
Their videos pop up in the feed by simply watching. That’s how 90% stays retained without an explicit connection. What happens after subscribing is two-fold: a subscribed viewer gets more “related” crap from other channels due to shown “interest”, and a channel gets more chances to get recommended to new people. So it’s a growth thing, but not directly linked.
But I agree in general that a problem may be sort of recursive or self-supporting and not always technical. I just often see people enumerating the simplest solutions and defeating them in the same comment, so that nothing gets done or even tried, when real-world systems are always complex, and complexity is hard but at the same time has many potential tipping points. People build empires on default settings alone and lose businesses on a tiny but stable leak in the funnel. Thinking that there are no such important and easily overlooked points in any non-trivial system, and that it's all just about users, is a mistake.
I'm not sure if this is what the parent means, but to me it feels like the tags vs categories debate.
MediaWiki uses the term "category" for its system of tagging pages. This is generally used as a very complex hierarchy with many nested levels and prescribed "correct" categories.
Some people think we should use "tags" instead (e.g. less nesting and more intersecting. Instead of a category "People in Belgium" use a tag "People" and a tag "Belgium" and apply both).
On a technical level the feature requirements are basically the same, but the word tag and the word category have such strong connotations that the mere name strongly supports one social organization over another (imo)
I would say the current lack of editors is a technical issue and not a social one. Most search engines and large-language models discourage readers from going to Wikipedia by hiding the source of the information. The actual technical means the Wikipedia community uses for social communication (IRC, email, Wikitext talk pages) are out of date as well.
We definitely need an official Discord equivalent with wider adoption. I can't effectively use IRC on my phone and computer simultaneously. In terms of editor outreach, I don't have any easy answers. Wiki Education is very good at bringing in editors.
This is another good example of people writing the problem off to social without even trying something more sophisticated than a counter: https://news.ycombinator.com/item?id=42578134
The number of trivial, self-defeating ideas that are for some unclear reason supposed to prove the absence of a solution - with the current solution being a dead-stupid counter - is amazing.
I used to sit next to Nate, the founder of cppreference.com . Humble dude. I had no idea he created it until a coworker mentioned "did you know that Nate created a website about C++". I thought cool, assuming it was some small site. I asked Nate what it's called, he said cppreference.com . I thought wow, that's the C++ website.
Building on your idea, it’s also important to look at how wikis use Mediawiki. Different wikis in the Wikimedia ecosystem apply different sets of patterns appropriate to their size and purpose. Wikivoyage is a useful example of hierarchical information. Wikibooks shows how subpages can be used effectively.
Aside from Mediawiki.org, Ward Cunningham, inventor of the wiki, has written extensively about wiki patterns. He’s also created a reference implementation for a federated wiki that attempts to solve the fork & revision control use cases centralized wikis struggle with. See https://github.com/fedwiki
I feel like that's been helped along culturally by fandom.com buying out various independent wikis and then shittifying them to the point of unusability.
There's an ongoing mini-revolution against Fandom, with the wikis that have the manpower moving away to other platforms like wiki.gg, WeirdGloop, or just self-hosted. Minecraft, WoW, Runescape, LoL, etc.
If only we could have something similar but official for Starfield.
For some very stupid reason, Bethesda decided to launch official modding support with Starfield while their wiki is "temporarily down for maintenance". But it's been like that for at least a year, so it seems unlikely to ever come back; and who the fuck takes down something so vital to a feature you're shipping in a major game anyway, without at least leaving a read-only copy online?
Adding autoplay videos that are tangentially related to the wiki page.
Said videos take about 10 seconds to show up and will cause the screen to scroll to a different point from where you are currently reading
Their privacy policy is exhaustive, and you're opted in by default to sharing your data with Google and 7 other analytics providers
UI noise like "share to Instagram/YouTube/LinkedIn" that probably 0.1% of people even consider using
When you're on a specific game wiki, you care only about that wiki's content, but Fandom places a Fandom navigation bar and side menu there anyway
Their default email strategy sent almost daily updates for random posts on (unrelated) Fandom communities after I made a couple of comments on one specific wiki a decade ago.
Also, on mobile the video ads plus Fandom navigation bar now take up a sticky combined 50% or more of the viewport until you find the close button.
Also, Fandom got its start by taking advantage of the easy, permissive nature of the CC licenses of major early wikis; much of Fandom started as chunks of Wikipedia itself that got excised for "relevance" or other "non-encyclopedic" factors. Even in some of the cases where communities have decided to leave Fandom, Fandom believes it remains its right to scrape the community's new site and not include links back to the "real" home, so Fandom itself becomes an SEO disaster for the community. Even communities that had hoped to protect themselves from such situations by choosing a more restrictive CC license with the NC (Non-Commercial) clause have been struggling to revoke Fandom's access to their content, against Fandom's legal team's position that Fandom's commercial waiver to the content was both retroactive and in perpetuity, and that those wikis would have known that at the time.
Ads, awful layout on desktop, beyond awful on mobile, slow server responses. On the flipside more wikis are actually moving OFF Fandom lately, for instance to wiki.gg
The last time I visited a Fandom wiki on mobile, it autoplayed two videos simultaneously at full volume and required me to scroll through 75% of the page to get to the actual article, which itself was also full of ads. It was so bad that it motivated me to set up my own DNS server with ad blocklists.
Besides the ads and the points sibling posts made: Fandom does a terrible job of making a wiki an actual wiki. Their infoboxes are garbage, the category navigation is awful, and general "wikiness" is entirely removed or hidden. Fandom presents wikis like they're SEO spam sites rather than actual references. Check out Wookieepedia or Memory Alpha on the Wayback Machine sometime and compare them to the current Fandom wikis. Even thinly styled MediaWiki was vastly more navigable than the shit that is Fandom. The footers Fandom adds with advertisements for unrelated other wikis are also ridiculous.
Fandom is SEO spam infested with intrusive and obnoxious ads and is terrible at being an actual wiki. It fails at everything but co-opting community generated content to make money.
Wikis are still doing well in the gaming space, I think (https://oldschool.runescape.wiki to name one; it has a dedicated community of thousands of players behind it).
I agree with you in general though, the GitHub wikis really don't feel the same.
There are a lot of gaming wikis, but unfortunately a lot of them are hosted by Fandom, and it's hard for independent alternatives to compete with Fandom's SEO.
I think one of the best alternatives (that is still free) is actually GitHub Pages. The problem is that the contribution model is not trivially solvable via permissioning - the owner of the GitHub page has to approve the PR changes.
Ideally, there would be a set of GitHub Actions that would auto-merge changes from a list of approved contributors (who would first fork the page, make the edits, and submit a PR). This set of actions would need to be a turn-key setup, without needing much technical knowledge (so that laymen can use it). A sketch of the idea follows.
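To make the auto-merge idea concrete, here is a hypothetical sketch of the script such an action might run on pull_request events. The approved-contributor list, names, and wiring are invented for illustration; the only real API used is GitHub's standard "merge a pull request" REST endpoint:

```typescript
// Hypothetical auto-merge step for a wiki-style GitHub Pages repo.
// A workflow would invoke this with the PR author/number and a repo token.
const APPROVED = new Set(["alice", "bob"]); // maintainer-curated list (made up)

async function maybeMerge(
  owner: string,
  repo: string,
  prNumber: number,
  author: string,
  token: string,
): Promise<void> {
  if (!APPROVED.has(author)) {
    // Not on the list: leave the PR for ordinary manual review.
    console.log(`${author} not approved; PR #${prNumber} left for review`);
    return;
  }
  // GitHub REST API: PUT /repos/{owner}/{repo}/pulls/{pull_number}/merge
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/pulls/${prNumber}/merge`,
    {
      method: "PUT",
      headers: {
        Authorization: `Bearer ${token}`,
        Accept: "application/vnd.github+json",
      },
    },
  );
  if (!res.ok) throw new Error(`merge failed: ${res.status}`);
}
```

The hard part isn't this script; it's packaging the token setup and the approved list so a non-technical wiki maintainer can turn it on.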
I think the default MediaWiki implementation suffers from there being no nice table of contents or landing page. Wikipedia is the same, but many articles have a lot of manually curated "see also" or "other foos from bar" tables of links, which help a lot; without them, it feels to me like the default discoverability and search on MediaWiki are terrible.
So if you are setting up your own wiki, then you need to spend a lot of effort to create and maintain the ToC/landing page and build those interconnections up, and just hope Google indexes everything so you can search, etc. (... and if you have an internal wiki at work then you are basically screwed, because Google won't index that). Otherwise you write your articles and they "disappear" and are not readily found.
At least that was my experience of working with MediaWiki and DokuWiki - the sites always felt "empty" even if you had teams of people contributing articles, because it was hard to see what was there in a meaningful way. And so they always felt moribund and abandoned even if they actually weren't.
Patiently waiting for someone to create an overhauled UX. Potentially one that does some sort of clever LLM-powered hierarchy, so people can browse and peruse page hierarchies sliced and diced by some topic/area of interest. Say I enter "engines" or something as my area of interest: the LLM auto-categorises the pages into a sensible hierarchy that I can view; then I change my term to e.g. "steam engines" and it recalculates the hierarchy and shows something sensible for that, etc.
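Short of an LLM-powered UX, a simpler stopgap for the "empty wiki" problem above is to generate a landing-page index mechanically. A sketch, assuming a self-hosted MediaWiki (the wiki URL is a placeholder; `list=allpages` is a standard Action API query module):

```typescript
// Pull every page title from a MediaWiki instance via its Action API and
// print a plain alphabetical index, one "* [[Title]]" line per page, which
// can be pasted into (or periodically regenerated as) a landing page.
interface AllPagesResponse {
  continue?: { apcontinue: string };
  query: { allpages: { title: string }[] };
}

async function listAllPages(apiUrl: string): Promise<string[]> {
  const titles: string[] = [];
  let apcontinue: string | undefined;
  do {
    const params = new URLSearchParams({
      action: "query", list: "allpages", aplimit: "500", format: "json",
    });
    if (apcontinue) params.set("apcontinue", apcontinue);
    const data = (await (await fetch(`${apiUrl}?${params}`)).json()) as AllPagesResponse;
    titles.push(...data.query.allpages.map((p) => p.title));
    apcontinue = data.continue?.apcontinue; // follow API pagination
  } while (apcontinue);
  return titles.sort();
}

// Usage: the wiki URL below is a placeholder for your own instance.
listAllPages("https://wiki.example.org/w/api.php")
  .then((titles) => titles.forEach((t) => console.log(`* [[${t}]]`)));
```

A flat index is crude next to hand-curated "see also" links, but it at least makes the full contents visible in one place.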
The whole wiki notion is decentralized and free... everyone adding things as needed.
That conflicts with how governments and corporations are known to operate: authoritarian, if not also bureaucratic. Attempts to control wikis make people much less likely to contribute.
If you don't know what Fell Running is, you probably would have little use for it and it wouldn't ever be on your radar -- as would be most things outside the pale of pop culture or your own version of the Roman Empire.
Don't Jira and Confluence just integrate seamlessly? Unless you want some particular feature?
Both systems are able to link to each other. I worked for a company where Jira was configured to take a support ticket, and then link it to a knowledge base article in Confluence based on namespace and topic I think.
When it first started, there were great swaths of stuff that could be written. Some huge fraction of history, art, science, etc. now has pages with some level of detail. The ongoing maintenance for current events is going to take significantly less activity.
And the stuff that isn’t already on Wikipedia generally has too few primary sources to be allowed to be created. So the ongoing edits would be primarily updates to existing content rather than new stuff.
That's complete rubbish. Information rots, whether it sits in a blog or a wiki doesn't matter. A blog can be updated, can show its last updated date, just as a wiki does.
The statement hits home with me because over the past 20 years I have actually gone back and forth between having a wiki as a personal website, and now I'm finally back at a blog again.
I find that markdown + tags is the best way to organize my personal knowledge base that I call a blog. My attempts at using Wikis always felt overkill.
I interpreted the poem less as a literal manifesto against the practicality of blogs and more as a metaphor for how the format impacts us psychologically. Blogs, being chronological and linear, feel a bit disposable—you post and move on—which promotes rot. Wikis, on the other hand, are dynamic and interconnected, inviting ongoing growth and freshness.
I still don't really agree with that either, though. I tried swapping out my simple static blog for a MediaWiki instance and quickly realized why you don't see many people doing that anymore. Maintaining a "complex abyss of ever-evolving thoughts" and actually writing stuff are often mutually exclusive.
Blogs are normally run by a single person (unless they're for a publication); wikis are normally run by a community. Hence someone can discover a wiki and edit it long after its original creators have departed, so long as it wasn't set to private. The best that discoverers of an old blog can do is to write in the comments section. Maybe reblog it and hope that pingbacks (remember those?) get triggered.
The implicit invitation to collaborate is what gives the wiki longevity and the possibility of resurrection.
Yes, it adds value to see an interface. Funny you should mention that specifically, because recently I was on the job market and I created a Hugo blog where each job I applied for was one post. So I could keep track of when I applied, how, interviews, results, and so forth in one markdown file. Then, when it was time to report my job applications to the government so I would get money, I just ran hugo server and looked at them graphically and chronologically.
I'm guessing the main advantage is navigational structure - while I agree that can be overdone, and I practice very minimal navigation on my own 'blog', I think some amount can make the pile of markdown files easier to read.
I'm sure it will come back to bite me in the ass one day, but my personal site is a wiki written in the style of the websites I'd write as a teenager: random updates about things and my life without any overlying theme or brand. I just write what I think, sticking my blog entries under the Blog category and posting things haphazardly otherwise.
Another thing I do that one doesn't do these days, but that you'd have been eager to do in the past, is that I'm public about my life. Funnily enough, it was someone else's comment about Wikipedia deleting their article (which I did manage to recover) that pointed me to a Japanese mathematician. His website filled me with such nostalgia. There were all these stories of his life and things like that.
We used to put things like that on the Internet. The one thing I did miss back then was the ability to make small updates to other people's websites to fix typos and so on. So my website is a wiki (it's just MediaWiki).
It’s been vandalized before by bots but I make nightly backups to R2 so I just dump and restart if things get ugly. Otherwise, it’s been fine.
One thing that might be fun is if someone one day happens upon my site and feels that sensation of looking at someone’s lived life.
Hey I enjoyed reading your site! Also, notably, it made me aware of the word "Bundeshausfrau" which is my new second-favourite German word (after the un-dethroneable "Sitzpinkler").
I like saying "sitzen, nicht spritzen" to amuse myself, which is a close relative. But the funniest thing is that I thought I'd coined Bundeshausfrau in that post, but it's apparently the most obvious choice for expressing the concept in German, because it was used at least a decade ago (and possibly more)! Well, TIL.
Some of us still do. But I think it was always very much a minority even on platforms that encouraged it. See Similarworlds.com, a successor to the Experience Project.
I quite like the "Wiki" on my site. It's half blog, half wiki. The entries can be dated. They are presented chronologically. Navigation metadata is assigned to each entry. But not every entry appears on the blog feed. I can link between entries like [[link]]. Each entry tracks the links to and from it. The best of both worlds I think ^^
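As a rough illustration of how little machinery the link tracking needs, here is a minimal sketch assuming the entries are a folder of markdown files using [[wikilink]] syntax (the real site's storage may well differ):

```typescript
// Build a backlink index over a folder of markdown entries: for each entry,
// record which other entries contain a [[wikilink]] pointing at it.
import { readdirSync, readFileSync } from "node:fs";
import { join, basename } from "node:path";

const WIKILINK = /\[\[([^\]]+)\]\]/g;

function backlinks(dir: string): Map<string, string[]> {
  const index = new Map<string, string[]>();
  for (const file of readdirSync(dir).filter((f) => f.endsWith(".md"))) {
    const source = basename(file, ".md");
    const text = readFileSync(join(dir, file), "utf8");
    for (const match of text.matchAll(WIKILINK)) {
      const target = match[1];
      if (!index.has(target)) index.set(target, []);
      index.get(target)!.push(source);
    }
  }
  return index;
}

// Usage: print "entry <- [entries linking to it]" for the whole site,
// assuming a hypothetical ./entries folder of markdown files.
for (const [target, sources] of backlinks("./entries")) {
  console.log(`${target} <- [${sources.join(", ")}]`);
}
```

Because the whole index can be rebuilt from the files on every run, the hybrid blog/wiki can stay a static site with no database.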
I'm contemplating adding something like that to my https://simonwillison.net site at the moment: most of my content works great as a blog but I like the idea of having some content that's more "page like" which I can continually update - things like a list of currently active projects.
Perhaps the greatest contributor to the philosophy behind my own site is taken from the website of Wendy Carlos. Her site houses a page describing how her website ‘lives’: [1]
> I’m happy to report that this page (like most housework) will never be finished. It is a living document that grows and matures, just like most of real life. It is not a “work in progress”, for this would imply not much intrinsic value until that magic day it is completed.
> A novel is a work of art that, once completed may continue to exist forever in that finished state. An encyclopedia must be published at regular intervals to reflect new information gathered since the day it was published. Periodicals are timely only when first printed, then fall behind the times – get the latest issue to keep up. The technology behind web documents allows us to update information as often as is necessary. In this context, publishing dates become an outdated concept.
> While it is possible to “finish” a web document, the fixed information becomes stagnant, thus abolishing any desire for a return visit. This is something I call a cob-web page.
Lovely poem, but I don't 100% agree with the idea that the wiki possesses some kind of ethereal, spiritual advantage over the blog. The post-SEO internet has been unkind to all forms of online writing, and the wiki has been an equally effective vessel for the proliferation of rot as any other (looking at you, Fandom).
From a practical perspective: blogs may rot, but wikis decay. Larger projects with established community manpower may not struggle to offset the maintenance and complexity that traditional wikis demand. For personal writing, however, the burden of preventing decay falls entirely on the author - and it's not a trivial burden. Like others have mentioned, there seems to be an absence of wiki software that does a great job of mitigating that burden. The few offerings I have tried introduced an inherent complexity and maintenance overhead that significantly detracted from the core activity of writing.
Regardless, I'm hoping that it's just an engineering problem that has yet to be solved instead of an unavoidable characteristic of the medium itself. I would love for the wiki to make a big comeback.
Within my current project, I use a special C dialect. So, I have to write a lot of explanatory text for those who dare to use it.
And even in solo projects, I have been in situations when I had to check my own docs to understand what's going on.
As a result, my project is effectively also a wiki:
The idea is to put motivational and explanatory text into the parallel wiki, while all the API docs stay in the code the normal way. These are seriously different things.
The next step is to unit-test all the code docs. Or, the other way around, to document the tests to make them a joy to read. That is the only way to solve doc rot.
Any time I discover some new piece of coding information, or solve an obscure bug for which there is not a lot of common documentation, I write about it in my personal mediawiki server. Maybe some day when it has more content I'll host it publicly, but for now I find it more useful than a blog or a personal journal
Although I am not fond of the fixed flow of blogs, I have yet to find a "wiki" or similar mind-map-like tool that allows me to write better interconnected, non-linear writing that also works with a static site.
I really love the idea of keeping a wiki, but for the same reasons j3s self-hosts a single binary for the blog, none of the wiki software that exists is particularly appealing to host and it sounds like a relatively large amount of work to build something like this from scratch.
Note that the usage of “blog” in this poem has nothing to do with livejournals and pre-spam personal journal blogs in general; it’s referring to the post-indexing apocalypse brought on by SEO and Google reacting to ‘freshness’.
Counterpoint: I subscribe to many blog RSS feeds because the content comes in a useful format and appropriate cadence. I have never subscribed to a wiki to get regular updates for it.
After the history-making success of Wikipedia, the wiki movement has unfortunately stagnated. Wikinomics [1] is yet another early vision of the digital society that did not come to pass.
The reasons are, as always, quite complex: from the general decline of the public internet due to centralization/enshittification (and now wholesale appropriation), to poor technology choices and missing value propositions that could induce the next wave of adoption and development.
Yet there is still no tangibly better alternative vision for open-source knowledge management, especially of the collaborative kind.
One interesting direction - yet after more than a decade still largely in embryonic phase as far as broad adoption - is wikibase [2]. It runs as an extension of mediawiki and makes it relatively painless to integrate structured data in a semantic web style (e.g. [3] for an example of integrating veris [4] data).
It's not clear whether the wiki era is permanently dead or just waiting for some rain to blossom again.
> One interesting direction - yet after more than a decade still largely in embryonic phase as far as broad adoption - is wikibase [2]. It runs as an extension of mediawiki and makes it relatively painless to integrate structured data in a semantic web style (e.g. [3] for an example of integrating veris [4] data).
I agree a lot with this. As a person who speaks many languages, something that becomes evident very early is that there isn't one "The Web", but many webs: the English web, Spanish web, Portuguese web, and so on.
This is extra noticeable on Wikipedia, since many articles exist in many languages. The drawback is that information is sometimes split across the languages, so someone speaking English, Spanish, and Swedish can sometimes build a more complete picture than readers of just one Wikipedia article, when the data isn't in the other languages' versions.
Enter Wikidata + Wikibase, which make the knowledge itself trans-language: only the definition/value names need to be translated, while the composition itself is language-agnostic.
If the three language versions of this imaginary article could all use Wikidata as a base, they could all share the same knowledge and make sure that people who only speak one of the languages come away with the same understanding.
Basically, Wikidata if successful, will multiply the knowledge on the web!
I don't know what the heck that means (clicking on the article didn't illuminate it for me, sorry) but Wikis that their maintainers have lost interest in become overrun by spam pretty quickly and if they just disable edits instead it's not any more useful than a blog (arguably worse since a blog post is at least intended to be tied to a specific moment in time while Wiki articles are meant to be eternally current).
I'm personally in the top left corner and bottom right corner at the same time, which is sort of funny.
I have used WordPress since 2004-2005, and I've also written a Python static site generator before using Flask + Frozen-Flask[1]. I've also made stops through tools like Sphinx, Hugo, Gatsby, and VitePress[2]. But my personal site continues to run WordPress[3].
I think I'd prefer something like VitePress these days for a technical documentation site. It has a lot going for it for that use case. And it feels built to last.
On true wikis that one can self-host, I recently learned that MediaWiki with a reasonable theme like Citizen[4] is a nice choice for an open-source-powered private wiki. Although I do find the MediaWiki markup language a little cumbersome versus simpler markup languages like reST or Markdown/MyST in the Python community (or GitHub-Flavored Markdown or AsciiDoc supported elsewhere). But MediaWiki has a lot of nice features -- after all, it powers Wikipedia. The theme makes it work properly on mobile, adds a little more structure for the automatic ToC, and makes content editing a bit simpler.
It still isn't nearly as polished as commercial wiki-like software (e.g. Notion) but it's better than open source wikis used to be.
On the subject of the blog post, I think bit-rot or info-rot is the natural order of things. The kind of software you run isn't going to change those facts. And if you're curating knowledge about technical computing subjects (that isn't about durable topics like, say, C and Linux system calls), you should expect exponential decay.
I do find it kind of amusing how many tools and frameworks developers have created for making it easier to edit HTML pages, though. Truly a foundational 21st century problem that deserves a technical solution that can last for decades without itself bit-rotting.
The front-page definitions on Urban Dictionary were all added post-2020, which fits, since that usage of Urban Dictionary is the way the far-more-online members of Gen Z use it.