Cool desktops don’t change (tylercipriani.com)
534 points by thcipriani on June 16, 2022 | 479 comments



I find the vimmaxxing message common enough, but I disagree with it.

Not because of vim vs. nano vs. emacs, etc. It's just that plain text is not enough.

I want to be able to encode more of my memory and context to my notes with a minimum of fuss, and therefore want to embed images (gasp!), tables, hyperlinks, and even file or sound embeds in my documents.

Yes, simple open file formats are better than complex closed formats (and there's something to be said for text over binary formats as well).

No, I don't like that OneNote's .ONE file specification is super complex and therefore only semi-open.

But I'm not about to lock myself into a DEC dumb terminal's 80x24 limitations, where I have to learn some greybeard's key bindings for everything, all for the privilege of losing all my ability to embed multimedia, combined with said gatekeepers tut-tutting me that I should just make a link in my terminal to the local image file, never mind that you can't copy/paste an image into a bash terminal and have the file be saved to that directory (OK, it probably exists, but it's some obscure something).

So I watch Xournal++ very closely, specifically [this issue](https://github.com/xournalpp/xournalpp/issues/937), because it will have the total package: multimedia embeds, ink support, text support, file embeds, all in one file that I can save and sync (via Syncthing) to wherever else I like.

But until then? OneNote desktop, sorry but not that sorry.


> I want to be able to encode more of my memory and context to my notes with a minimum of fuss, and therefore want to embed images (gasp!), tables, hyperlinks, and even file or sound embeds in my documents.

I do this stuff with my notes in Emacs and it's really nice. Emacs is a graphical application and its notetaking facilities are really outstanding. It even supports syntax-highlighted, executable, exportable code snippets, as well as all the basic text formatting, hyperlinks, tables of contents, tables, etc. It's a great tool for developers and sysadmins to use for notetaking.

If you know Emacs isn't for you, that's totally fine, but I wanted to throw this out there for bystanders: when someone suggests building your notetaking workflow around a great text editor, the proposition is not at all to somehow confine oneself to a 1980s terminal.


Same here. I use Emacs to write tons of notes. I rely heavily on tables, hyperlinks, timestamps and tags. Images are also possible, just not part of my workflow. All this is trivial, out of the box and I know it will still work many years down the road.

Even custom hyperlinks and text notes that refer to chunks inside PDFs are easy to do.


Your phrase 'Images are also possible' worries me. But then you go on to say 'All of this is trivial ...', which makes me feel a lot better.

I've had many well-intentioned but ultimately abortive attempts to get on board with Emacs.

So, are images 'possible' or 'trivial'? Not trolling, and I do note that you say it's not part of your workflow.


You cannot embed images in Org documents. Or rather, you can, but you would need to code the support for this yourself. On the other hand, displaying images within Org documents is trivial. You just need to have the image file saved somewhere. Then you insert a link to the image file and, with the default settings, you should be able to see the image rendered inline in the document. You can also generate the image from within the document: you write a ditaa code block, execute it, and the PNG with your UML (or whatever) is inserted or updated.
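For anyone curious, a rough sketch of what that looks like in an Org buffer (the file names are made up and ditaa needs to be configured separately; C-c C-x C-v toggles inline display of linked images, and C-c C-c on the block regenerates the PNG):

  * Meeting notes
  [[./images/whiteboard.png]]
  #+BEGIN_SRC ditaa :file diagram.png
  +--------+     +--------+
  | Client |---->| Server |
  +--------+     +--------+
  #+END_SRC
  #+RESULTS:
  [[file:diagram.png]]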

On the other hand: no, Emacs is not a WYSIWYG word processor. The need to support the terminal as a display backend weighs heavily on the pace of GUI improvements. That's because all interface elements need to be renderable on all backends (not strictly true, but the prejudice against GUI-only elements is significant). So if you hope to be able to easily and conveniently display something like this: https://klibert.pl/statics/vw-doc-view.png then Emacs is emphatically not the best tool for the job.


eehh...

So (out of the box) in Org mode, if you type [[./file/path/to.png]] it will turn into a link. If you hit C-c C-x C-v, it will toggle between showing images as links (which will open in a new Emacs buffer) and showing the images inline.

ALSO you can just drag an image file onto an emacs window and it will display it in a new buffer.

However, getting it so that A) images are displayed automatically when you finish typing the [[path]] and/or B) dragging an image into Emacs results in a new link in Org mode... that's probably possible.


https://github.com/abo-abo/org-download

This is the package you're looking for. Just drag and drop images into the org mode buffer and you're done. I've been using it for years to take lecture / book notes in org mode where I have to constantly take screenshots and embed them in my notes.


that's part B.

Part A is still annoying though. If you have inline images toggled on and you type in a new image link, it still turns into a link instead of an inline image.


That's an Org mode setting; it can be added to the #+STARTUP declaration at the top of the file.
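For reference, the relevant startup keyword at the top of the .org file looks like this (in a reasonably recent Org):

  #+STARTUP: inlineimages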


It's as trivial as images in markdown, but there are config options and extensions to, e.g., paste images straight from the clipboard.


My life and work would be impoverished without emacs.

Thank you Richard


I personally dislike emacs, but say the same.

Why?

vim would not be what it is, without emacs to rage against.


vim would not be what it is without having become a shallow emacs.

Emacs is Emacs because it's extensible. Between vim, neovim and vim9, I'm starting to think people might enjoy the extensibility, seeing as that brings vim up to three scripting languages to write things in.

vim has become what it purported to hate


I don't think Vim ever hated extensibility. Vim is defined by its modality and vocabulary and grammar, which Emacs lacks without extensions like evil.


> I want to be able to encode more of my memory and context to my notes with a minimum of fuss, and therefore want to embed images (gasp!), tables, hyperlinks, and even file or sound embeds in my documents.

Yeah, you will get a lot of Emacs and Emacs-with-Org-mode answers here. To be fair, they're not wrong :) As a complete Emacs newb, start with something like Doom Emacs! It's brilliant for beginners.

>Emacs is a graphical application

Too few ppl know this.


> Too few ppl know this.

Including me until now. I will examine this emacs of which you speak. No promises but I can at least attempt to run it.


ProTip: Not many ppl like YouTube as a "learning platform", but just look for "doom emacs" on YouTube to get the basics. You only need to "learn" a minimal set of keys+concepts to start to get insane value.

Good luck :)


Seconding the Doom recommendation. There is no need to become a guru before you can be productive, or to use Emacs for all your coding just to make good use of it for notes. Doom is a great way to quickly get up and running for notetaking with Org mode.

Get used to the basic workflow for a little while and forgive the monospace-ness while you evaluate the workflow. I like taking notes in Gollum wikis, the wiki format used by GitLab and Github. The project management features enabled by default in Doom will kick in for that. Decide how you feel about those.

If you like the workflow, take a look at org-modern and take a few minutes to work out the proportions for variable pitch fonts. Then you can have Org text render in a nice, proportional font that you like and use monospace only for the embedded source code (including shell languages) blocks.

As an alternative to using git repos to store notes collected into projects, take a look at Org Roam v2 for keeping a topic-based knowledge base that you can work with without thinking much about the filesystem, and which offers mindmapping features.

That stuff works great for me, has required zero substantial elisp knowledge to get working, and has been natural for me to ease into. I hope you find a workflow you like!


So ? :) How did it go ?


As far as I can tell, embedding images into org documents still requires saving them to a directory and linking them. I can't seem to find a way to get it to actually embed the image data into the document, for example like a base64 data URL or something.


This is the only way I've done it, since my org notes are organized into git-based wikis and that's kinda what I expect from wikis.

Directly embedding binary data in Org files would be cool for emailing Org files around, though. It looks like Org does support this kind of embedding on export, if you don't mind exporting to other formats when you send documents around.


Base64 images and URLs are possible with extensions, but simple files are better IMO. Getting URLs asynchronously doesn't work well, and base64 embedding seriously bloats the text file and makes it more difficult to run plain-text tools on it. (E.g., you'll get false positive grep results.)


I have images, audio, and backlinks using UUIDs in my terminal emacs. The image support is the worst, as I have to open the image and then it takes over the screen until I dismiss it, but it’s usable.


Agreed 100%. My core workflow is "snap a picture, draw on it." I deal with hardware, and the bandwidth with which this lets me get physical information into the computer is so far beyond any Markdown-based workflow that I can't go back. I suspect this result would hold for many non-software professions.

For now, I've settled on GoodNotes because it's good enough at PDF Backup, OCR Search, and drawing. I have given solid tries to Evernote, ZoomNotes, Concepts, and Notion, and they each are amazingly better than GoodNotes in at least one major respect, sometimes multiple, but none of them are good enough on those zero-compromise requirements and GoodNotes is. FWIW.

EDIT: Oh, while I'm flamebaiting, I also use my ipad camera to take pictures of screens. Life is too short to schlep screenshots between computers, especially the ancient ones that run scientific equipment (lots of 68000, XP on some recent ones). Printer emulators and java applets can get bent. Runs everywhere, my ass. Moore's Law has given us gigahertz and megapixels, and if I can use them to eliminate painful asinine pointless busywork that's my god-given right and nobody is going to convince me otherwise. Sometimes I even use my ipad camera to take a screenshot of something on my PC and toss it into the mix. Sue me.


Shift+CMD+5, adjust rectangle, CMD+C

Paste on your iPad

Clipboards are synced through iCloud.


KDE Connect does that, well, with text at least: you just copy on your desktop/phone and paste on the other device, because it keeps your clipboard in sync. Now KDE Connect is even on iOS, so basically all major operating systems have cross compatibility via KDE Connect, which is cool.

If I had to share a screenshot from my laptop/phone, it is a matter of taking the screenshot and "sharing" it via KDE Connect. It takes all of 3-5 seconds, and this workflow is now available to everyone.


Nice, I'll have to give it a try!


Windows. Thanks, though.


> I want to be able to encode more of my memory and context to my notes with a minimum of fuss, and therefore want to embed images (gasp!), tables, hyperlinks, and even file or sound embeds in my documents

I swear I'm not shilling for Google Keep, but I've found myself using it more and more over the last couple of years; it hits a sweet spot between "limited enough that I don't get distracted" and "advanced enough that I can still express myself".

The biggest praise is its ubiquity. It's just there when I need it. Interesting article + my in-the-moment thoughts? Google Keep-ed. Want to add a reflection at the end of the day with a picture? Google Keep-ed. Need to remember an address in the car? Toss it in the Keep and open it on my phone.


Notion/Joplin or other note software


Obsidian! #justTryIt


Obsidian changed everything for me. Years of markdown notes and PDFs finally in one place with easy search and synced to my Synology. SVG for the visual stuff. And Vim key bindings. Multiplatform and extendable with plugins. :)


This. The fact that it is just markdown files means I can add images in the app should I need to, and I can use this to add a quick note from the terminal:

  # qn: append a quick, timestamped note to today's journal file.
  # Usage: run qn, type the note, then press Ctrl-D (end of input) to save it.
  function qn ()
  {
    date_today=$(date +%F)
    year=$(date +%Y)    # only used by the commented-out per-year layout below
    month=$(date +%m)
    day=$(date +%d)
    FNAME="$NOTES_DIR/Work/journal/$date_today.md"
    #FNAME="$NOTES_DIR/Work/journal/$year/$date_today.md"
    echo "### $(date +%F' '%T)" >> "$FNAME"
    rlwrap cat >> "$FNAME"    # rlwrap gives readline-style editing while you type
    echo -e "\n" >> "$FNAME"
  }

I've tried a lot, but Obsidian works for me.


I've never seen `rlwrap` before, thank you. I am adapting this to my workflow. I use Org mode, but the idea is still the same.

Thank you!


Plugins are a HUGE plus.


I can't recommend Obsidian enough. My only complaint is that for a fairly straightforward tool its UI performance wasn't amazing on my previous machine (an early 2015 MacBook Pro so pretty old but no slouch) but on my new machine that's not an issue at all.


I totally agree with you. OneNote is fantastic and I would like to use it, but not being able to open the notes in Linux makes me avoid it. Do you know if there is a way of opening OneNote notes in Linux without using the webapp? (I don't even need to edit them!)

In the meantime, I'm using Stylus Labs Write [0], which uses svgz and is a good replacement for OneNote, and a compromise I'm willing to make to not be locked in, but tbh the OneNote app is just so much better.

[0] http://styluslabs.com/


You are correct, I have write3 installed on my phone and need to use it on the desktop too for that same reason.

I keep having dreams of making a linuxnote for OneNote files, since the file format technically has open specs, but as they are super complex and I've barely opened a hex editor before, I just don't know enough about what I'm doing to actually try parsing the bytes of an arbitrary file yet.


> and therefore want to embed images (gasp!), tables, hyperlinks, and even file or sound embeds in my documents.

And exactly which one of them can't I do in asciidoc?


I guess he means in a wysiwyg way. That's all fun and games till you have to open a different editor for multimedia stuff, render the document, and update it "blindly" in the source so everything works...


Doesn't even take until then before it fails for me... wysiwyg formats are traditionally a nightmare for version control systems. All my plaintext notes in Markdown and Asciidoc can be checked in without a problem :-)


I did mean a wysiwyg style yes.

If it is more than step 1 cut, step 2 paste, the tool has too many steps for my workflow, which, like other users have posted, is mostly a braindump of knowledge or context around several tables, links, and images/screenshots, sometimes with digital ink if I'm on a device with an active stylus.

But for producing formal documents, I have checked out asciidoc before and have respect for its capabilities, I just don't consider it in the running for my particular workflow as I see it as a different tool.

I'm not after version control for my notes though I know many others like it.


I wrote a library to parse plain text notes, sowhat [0], which delivers a lot of semantic and structural information. Notes for me solve lots of problems in my routine that wouldn't work with a markdown implementation: tasks, budgets, time management, reading lists, work log, simple calculations that operate on the global notes environment, etc.

Sowhat handles transactional data well; you could implement double-entry bookkeeping, for example, with relative ease. Other things include: links, events, quotes, tasks, formulas and a few organizational elements. I combine these elements in different ways depending on the problem.

[0] https://github.com/tatatap-com/sowhat


Rough, hand-drawn diagrams and spreadsheets are where my text-based note taking system falls apart.


Org mode has a table feature that does most of what I'd do in a spreadsheet anyway.

https://orgmode.org/worg/org-tutorials/org-spreadsheet-intro...
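For anyone who hasn't seen it, here is a small sketch of an Org spreadsheet (the names are made up; the Total column is computed by the #+TBLFM line below the table, and C-c C-c on that line recalculates it):

  | Item    | Qty | Price | Total |
  |---------+-----+-------+-------|
  | Widgets |   4 |  2.50 | 10.00 |
  | Gadgets |   2 |  7.00 | 14.00 |
  #+TBLFM: $4=$2*$3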


The reason I really like text for a lot of note taking is that I find it a lot easier to use than a word processor. Word processors try to be smart about formatting, but never fail to annoy me. I still regularly struggle to edit the text of a link.

But spreadsheet applications tend to be more useful than annoying. Doing tables with formulas in text docs is possible, but it seems more annoying than useful. In some instances I enjoy using Jupyter notebooks.


iOS notes has changed my life. It's available on my work Mac, home Mac, and iPhone, and is always synced. It has everything I need and nothing more.


Beware: there is not an easy migration path out of Apple Notes.

Also sync is not that great for massive datasets: I tried to sync my 22k-ish Evernote notes to Apple Notes and it took days and I never got the same number of notes on all the different devices. If you are currently building your notes you will probably not encounter this issue, though.


I also have almost 20k Evernote notes.

I was considering obsidian, but the devs say that obsidian is designed for a maximum of 20k notes.

So, I'm already on the upper bound, as you are. Probably, obsidian would be slow if I import all my notes.

Evernote, on a Windows 10 desktop, now works fast, at least for me. I think we undervalue this fact; we undervalue how hard it is to make good note-taking apps that can sync and search fast with more than 20k notes. I guess there are users with 60k notes!


Evernote, for all the flak it is getting, is still a remarkable product and I feel like the pricing is fair for what it gives. It ingests everything from everywhere, it indexes and scans everything for a flat fee.

They are currently still catching up to reach feature parity with the old native clients, but at least they are releasing more frequently --which means betting the house on Electron technology is paying off--.

I do not use Evernote any more (finally ended up with plain files in iCloud Drive and EagleFiler to top it off) but I am still following their releases as a paying customer.


Evernote is indeed a great piece of software. They have gone through very difficult times, but now the desktop app is working very well (not the Android client, which is very slow, but improving); it is fast, and despite lacking some old features, it has great new features, like Tasks. I would not be able to work without it. Each time I thought about moving out, I realized I could not, because of some nice feature. So Evernote is great and it is here to stay. My feeling is that those who are happy with Evernote do not write blog posts or tweets about it; it is mostly the haters and the frustrated who are writing out there.


I like Bear.app (MacOS, iOS, iPadOS) even better. Specifically, I love that it's Markdown-based and has support for code blocks with syntax highlighting. Only problem is that it uses iCloud Drive for syncing and my employer has blocked that. :-/

I agree with you, though: Notes is really quite good. I'm not sure how Notes syncs, exactly, but for whatever reason it's not blocked by my employer.


I think I've just decided to drop Bear for Obsidian on Mac. I need that plugin support (hard to leave VSCode line shifting hotkeys), and the code blocks work so much better.


Hmmmm, thanks for giving me something to check out.


Worth mentioning that it's also available on the web at icloud.com. I don't have a Mac, but I have an iPhone and iPad, and I love the Notes app for quick things and lists that I want to remember, and it's accessible from anywhere.


I stopped using it when I needed to work in Windows. I use InkDrop these days which is cross-platform.


Yes, same experience, but I wish there was code formatting in Notes. A lot of my notes are code snippets and Notes fails there. I've recently switched to Obsidian with the iCloud drive backup and trying to get used to it.


Notability will pop your block off


CUA bindings, for the uninitiated (like me!):

https://wiki.c2.com/?CommonUserAccess


What about running WordPress locally instead? It sounds suitable for your needs. A single docker-compose.yml is all it takes.


Terminals have improved a lot since the dumb terminal days. I think your comment would do better without the slight. However, I use CUA keybindings myself.


I guess the tweet about it is a joke, but funny nonetheless. It says to even use Vim for tweets, but then it also says it was sent using an iPhone.


Sounds like HTML to me, or at least the output of it.


Sounds like TempleOS is the choice for you!


> I want to be able to encode more of my memory and context to my notes with a minimum of fuss, and therefore want to embed images (gasp!), tables, hyperlinks, and even file or sound embeds in my documents.

On my scale of highly upvoted uninformed opinion, this ranks high up there, next to Bill Gates microchip vaccines...


> On my scale of highly upvoted uninformed opinion, this ranks high up there, next to Bill Gates microchip vaccines...

Is this comment auto-generated? What do you mean?


Or you could use xterm's vt240 mode (from 1989 or so) and use Emacs or Vim extensions that embed sixel data in text files. Just because you didn't read the manual doesn't mean it can't be done.


There's a million things with manuals to read. I would rather grab something that just works in an obvious way.


This is oddly sad to me. It's like hearing somebody say that you don't need to read a book when you can just watch the movie adaptation. You're missing out on so much depth and growth potential by just settling for the easiest option.

Maybe I'm just a weirdo. I love reading manuals and discovering interesting nuances in the things I use every day. Or picking some new software at random and learning about it just because it's fun and interesting.


It’s oddly sad to me that you (and so many others) see interfaces adopting more intuitive and inclusive idioms and reducing the barriers to entry while becoming even more powerful as regressions.

Software is interesting and fun in different ways to different people; most people find tools accomplishing their goals with as little overhead as possible far more fun than reading technical docs. Not saying you’re wrong to like reading manuals, but it’s not a moral failure to have a cognitive profile, or heck, even a schedule that’s meaningfully hindered by your daily toolbox having reading prerequisites.


They are not inclusive. They are dumbed down, and by dumbing them down they are less powerful. Or at least less powerful than they could have been.

I don't get that allergy to reading. For a long time it has been accepted that you had to read a manual before using something relatively complex, as not everything can be expressed intuitively, and because intuition is not really universal.


> For a long time it has been accepted that you had to read a manual before using something relatively complex, as not everything can be expressed intuitively, and because intuition is not really universal.

Do you know how many pages of software documentation the average person will read in their entire life? Zero. Why? They don't have to, and imposing that requirement yields no benefit. I'm always astonished by the perennial hubris required to glibly assert chemical engineers or historians or teachers or physicists or lawyers or doctors or librarians or literature professors (or even other developers who want to concentrate on other tasks) only avoid software tools with prerequisite study because they're too lazy to read.

The purpose of software is to help people more easily solve their problems. The bigger the barrier to entry— prerequisite study for example— the harder it is for most people. That's acceptable if the complexity is genuinely necessary to afford or augment expert usability in purpose-built software, but it almost universally compensates for ham fisted interface design. Interfaces that stop expert users from doing things efficiently and only afford beginner workflows are also poorly designed. Expert vs dumbed down is a false dichotomy.

For many FOSS developers and users, tolerating counterintuitive interfaces turned into a badge of honor, and that turned into an aesthetic preference. It's not the default state of software, it does exclude non-technologists, and is why FOSS alternatives are still alternatives for every application not specifically designed to be used by technologists, even though they're free. Every single one. Every professional photographer on the planet who relies on Adobe Photoshop would jump for joy if Gimp even came close to being a sufficient replacement. That preference is also why developers don't generally make significant interface decisions in any professionally managed software project.

Not having a technologist's domain knowledge or priorities isn't a contemptible moral failure. Having it doesn't come with design expertise— learning design takes a lot of study and practice, but I'm guessing you have other things you'd prefer to concentrate on learning. Which is fine. Just realize that assuming documentation is sufficient to guide users through software is primarily informed by your not understanding software usability rather than everybody else misgauging the importance of software documentation.


The dumbing down has other reasons, mainly mobile ones. It's sad to see.


Fwiw, there's a pretty decent discussion of the "expert/trained" interface vs the "beginner" interface in Thierry Bardini's book about Douglas Engelbart.

Engelbart assumed people wanted an interface for "trained experts" -- the idea being computer applications were important enough for people to invest time in learning new concepts to make their work more efficient.

Larry Tesler and others argued you should present a beginner's interface based on existing metaphor. Larry eventually went on to work at PARC and Apple, where his work inspired (and frustrated) generations of computer users.

Bardini noted that Tesler (and others at PARC) needed a raison d'être inside the Xerox organization and partially used "building document handling systems for secretaries" to justify their budget. Kay was well known at the time for wanting to use computers in education, so it's no wonder the interface Jobs saw at PARC was one tweaked for beginners.


Interesting anecdote, but a vast oversimplification of beginner and expert needs that was formed in the infancy of computer interfaces.


Not op but I’ve got 25 years of daily computer use under me. I’m tired of having to read manuals and try to figure out the latest popular app. Software should be better these days, more intuitive to use, more interoperability between applications, data should be even more portable.

This isn’t to say I don’t enjoy learning apps anymore, I just have to guard my time more wisely.


Congratulations on the daily use. I think the point of this article is exactly the opposite of having to read new things every day. It is to read the old things once with the insinuation that the skill will translate to other things as well.


I've used Emacs for 30 years, and still do. I know enough keybindings to get by. Under the hood, everything I do is Linux. I love i3wm and Pop Shell. My main rig uses Linux as the base os (which is mostly used as a hypervisor, for security reasons).

Still, as front-end, I find myself mostly working through Windows VMs on top of KVM. I pass through most of the hardware (USB controller, the main GPU) and get a native Windows experience. Most work happens through Chrome, VS Code and Windows Terminal. I use PowerToys to get some kind of tiling window manager.

There are two reasons for this. One is that my workplace has security setups for Windows and Mac, but not linux. I can grant them full control over the VM I use for work, and can use the standard setup for vpn, etc, as well as ability to use all corporate resources with no pain or hacks.

The other is that a lot of things are more painless. Working across multiple monitors requires less setup, scaling 4k "just works", most hardware and front-end software I use behave as-well or better than on linux (particularly webcam + Teams, that I depend on heavily).

Also, as editor, I still use emacs for some things, when I just want to edit some file in a random directory.

But for development work, I've switched to VS code. The ability to ssh into any remote VM, container, etc, and be able to work directly on a remote server, complete with gitlens, debuggers, file browsers, shell access, etc, is a killer app for me. I also love the pane management, which provides the tiling window manager functionality I love from i3/Pop Shell. (Power Toys is ok, but not quite as efficient for me as i3.) Emacs CAN be set up to do those things, but that takes more effort, I feel, and still not the same user experience.


+1. I use emacs as my daily driver. Plus some tools I wrote myself to evaluate C# and JavaScript. Every now and again I hop over to a jetbrains tool.

I was slightly disappointed by i3 and xmonad and built my own WM in lisp.

In the old days we used to write our own tools by extending and combining existing tools. Now people seem to just download Python and use whatever library comes with it after a day or two of experimentation.


Or you could read the manual for one app and use it for a long time.


I don't see the comparison holding water. The difference between a tool that serves a purpose and reading or watching something is that sometimes I just need the tool to get out of my way. The added depth available takes up time and mental resources to learn, potentially far in excess of how regularly I use the tool or how critically I need that functionality.

This is especially true where there's a selection of tools that all work within a similar "market" space for X functionality - some are going to be less complex, some more intuitive for me personally, some fulfill a one-off need and their other functionality is duplicated by another system I know much better (and have put the time into learning).

That shouldn't take away from your enjoyment of reading through minutiae of various systems, but you can't reasonably expect a lot of people to match your enthusiasm for doing so.


Meh. My tools aren't written for other people. They can do whatever they want. But bitching about how a tool they didn't pay for that was originally written for a different purpose requires them to read something is kind of funny.

It's entirely possible there's a community of people out there who don't mind reading about abstractions exported by tools. Especially if it means you'll save time in the long run.

The original post seemed to me to be a mild introduction to the concept that maybe you can get your job done more efficiently with extensible / combinable simple tools rather than waiting for someone to add a feature to Gnome.

It's totally fine if you don't want to do that. If the person in the cube next to you does that, how is this an insult to you?

Maybe you're not the audience for this article.


I prefer both. I like when something works right out of the box. And then I read the manual and find out about all those moments when the defaults were saving me from my foolishness. And then I can unfold the thing's real potential.


The author makes a common statistical error in interpreting the Lindy effect. The Lindy effect proposes, simplified, that the longer something has been around, the longer it will probably stay around still. The author then makes a quick jump and posits that the opposite is also true, which it is not. Just because something has been around for a short time does not mean that its expected lifespan is somehow short. In other words, A implies B does not mean B implies A as well. All things that have been around for a long time had at one point only been around for a short time.


Disagree: even in your formulation, no 'quick jump' is needed.

Your statement A:"The longer something has been around, the longer it will be around" is not the opposite of B:"the shorter something has been around, the shorter it will be around".

Rather they mean the same thing. 'longer' and 'shorter' here are just English language ways of referring to the same time t that an object has been around.

If someone tells you "the longer a distance is, the more time it takes to walk it", that is exactly the same as "the shorter a distance is, the less time it takes to walk it"; there's no logical leap there.

I could conceive of a rule that says "archeological artefacts are likely to be around for a long time", and it'd be a mistake to conclude that this means that non-archeological artefacts will only be around for a short time.

But that doesn't seem to be how the Lindy effect is formulated, either on Wikipedia or on your post, so there doesn't seem to be an error in applying it to new things.


The real meaning of the Lindy effect is:

   the longer something has continued to be around due to being continuously and repeatedly selected from a pool of similar other somethings, the longer it will likely continue to be so.
Because this is a statistical effect (longer lived things are drawn from a pool of things, some of which are not long lived), you cannot invert it trivially.

If there is no selection process, then the Lindy effect is either meaningless, or decomposes to an assertion that the current thing is the only way to do something.


Old things that are still around are generally longer lived things. New things may or may not be longer lived. This means that if you sample old things that are still around, they are more likely on average to be longer lived than new things, because the short lived new things have not been weeded out yet.


It's like how the music of the 80s or whatever decade seems better in retrospect, as you can just listen to the albums which stood the test of time. And you forget about all the trash music.


But one can think of a counterexample where all the new music is very high quality and will stay around for a long time (not saying this is likely to happen). Then the statement "the shorter something has been around, the shorter it will be around" is false.


Ok, so let's discuss your formulation instead:

Why can't we 'invert' it, just because it's statistical effect? Yes, in your formulation, some of the new things in the pool will go on to live a long time, while others will be selected out.

But so what? We are talking about the expected lifetime of an item in the pool, conditioned only on its age. There's no fundamental problem making a statement that this expected lifetime is short for new things, even if some fraction of those new things will last a long time, right?

After all, we don't know that any one individual item that's been around a long time will last a lot longer. We only know we expect it to. Because even long-lived items have finite lifetimes, they'll eventually die (and when they do it'll be really surprising, because they've been around so long; but it will happen eventually).

And so the statement is always talking about expected lifetime, whether for items that have already lasted a long or short time.

(Hence I still don't think there's really any logical 'inversion' here.)


> There's no fundamental problem making a statement that this expected lifetime is short for new things, even if some fraction of those new things will last a long time, right?

Everything that's new in the pool might be better than everything that's old.

All you can say about the old stuff is that it was better (by some metric(s)) than anything it had to compete with so far.

But you can't say anything about the new stuff. Sure, statistically it is likely that it will be some blend of bad, middling and good, but you don't actually know the mix, or which term describes which items, until after the selection process (i.e. time) has taken place.


Absolutely there are real situations where the new stuff is going to last longer than the old stuff, even the old stuff that's been through a selection process. E.g. modern manufacturing techniques have improved overall longevity of all 2022 models.

But I think you are outside the Lindy effect model at that point.

To put this in the example of the original post: The author says visual studio code is expected to last less time than VIM.

You could counter by saying: "hey, maybe, uh, the rise of Product Management as a discipline has meant that modern software overall will have longer lifetimes, and hence it's not fair to guess that VIM will outlive VSCode".

And that'd be a fine position. But imo the right way to frame that isn't "the author did an incorrect logical inversion of the Lindy effect model"; rather it would be "I don't think the Lindy effect model applies to this domain".

(No one is saying the Lindy model is universal.)

I guess you could say you want to apply it only within a given year of software; so, we're happy to look backwards and apply the Lindy model to software written in 2011, but we've no idea how to think about the lifespan of software written in 2022, and aren't allowed to make any inferences from software written before 2022.

That's fine, but that's an additional constraint we've added; it's outside the Lindy model, and, really, we're in "all models are wrong, some are useful" territory here, where I'd ask "is it really useful to throw away all that previous data? Wouldn't it be a better starting point to use the lifetimes of previous years as at least a prior?" And if you grant that, then I think there's no logical error here.


>Everything that's new in the pool might be better than everything that's old.

It "might", but the empirical observation behind the Lindy effect points that this is unlikely (if we take "better" to mean "more fit to live and grow old and still used").

Sure, we haven't seen the new things develop yet. But we have seen that most new things don't survive over time: the things that survive are a small subset of each generation of "new things" (say, vi and emacs, and not one of the 100+ 70s programming editors).


Quite funny that the concept did not appear to apply to Lindy's restaurant itself, which operated from 1921 to 1969, shutting down 5 years after the term was coined in 1964.

https://en.wikipedia.org/wiki/Lindy%27s


If you sampled from a pool of now defunct software projects, or all software projects that were made in some year, you could make some estimate of the survival probability of software projects that are current. But if you can't draw a Kaplan-Meier curve, it's unclear to me how you would assert there exists a survival function that could be inverted.


I'm talking out of ignorance here, so please educate me: if you take into account the black swan theory, both Kaplan-Meier and the Lindy effect can't say anything about anything, no?


The point of the black swan is you can't model it. You can't predict COVID-19. You can't predict Russia renouncing its debt, leading to the collapse of LTCM. You can't predict Hurricane Katrina hitting just so as to push feet of water into Lake Pontchartrain. You can't predict the 1989 Loma Prieta earthquake. You can know everything there is to know about epidemics, finance, weather, and seismology, and you still can't predict those events. So don't. Do your best to build robust systems, look for places with excessive efficiencies, and plan mitigation strategies in advance.


Exactly. Suppose a piece of software remains around for another year with probability p, and for the sake of this example, that p is constant.

Then if the software has been around for one year, the expected value of p is 50%. But if the software has been around for ten years, the expected value of p jumps to 0.5^(1/10) ≈ 93.3%.

In this way, if a piece of software has been around for longer, then it has a greater chance of sticking around. In fact, the expected number of years it has left is indeed equal to the number of years it has already been around, as stated in the article.

In practice this mechanism is more complicated, as all software is influenced by a changing environment, but this same idea is still at the core.


> Then if the software has been around for one year, the expected value of p is 50%. But if the software has been around for ten years, the expected value of p jumps to 0.5^(1/10) ≈ 93.3%.

This reasoning is incorrect. You have to take the distribution of p into account.

In a world where almost every software has p=0.5 the software which has been around for 10 years is likely to have been lucky and not to have higher p.


Makes sense! If you squint just enough, you can also see evolution in there :)


>If someone tells you "the longer a distance is, the more time it takes to walk it", that is exactly the same as "the shorter a distance is, the less time it takes to walk it"; there's no logical leap there.

that's comparing A->B with A->B. comparing A->B with B->A would yield "the less time a distance takes to walk, the shorter that distance must be"


> that's comparing A->B with A->B

Yes, that's the point. feral (your parent comment) has accurately observed that Etheryte (your grandparent comment) has mislabeled A->B as B->A. But feral is completely correct that "the more time something has already been around, the longer its future expected lifespan is" is exactly the same claim as "the less time something has already been around, the shorter its future expected lifespan is". Etheryte is making a pretty bizarre error; he seems to be under the impression that "a < b" is the opposite of "b > a".


The logical contrapositive to A is A': "if something dies off soon, there's a good chance it was a recent fad". A makes sense, and A' makes equal sense, but B is claiming stuff about new tools for which we have no way of guessing the future.


Your logic is wrong. You could go to the Wikipedia page for contraposition and learn about this, rather than arguing in comments.

To use your own example: You have to take a long time to walk a long distance. That doesn't mean you have to take a short time to walk a short distance.


I think you are taking the Lindy effect too seriously.

You basically only know one thing about something, which is how long it has been around. The things that come to life and die that you observe will be, on average, in the middle of their life.

So if something has been around for five years, given no more information, your best bet is that it'll be around for another five years.

One year? Another one year.

Hopefully this explains well why the Lindy effect also says that something that's been around for a shorter period of time is more likely to disappear sooner.
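To make the "expected" framing concrete, here is a minimal simulation sketch (not a formal argument; it assumes lifetimes follow a Pareto distribution with shape 2, the textbook setting where the Lindy effect holds exactly and expected remaining life equals current age):

  import random

  # Draw lifetimes from a Pareto distribution (shape 2, minimum 1), then
  # condition on having survived to a given age and average what's left.
  random.seed(42)
  lifetimes = [random.paretovariate(2) for _ in range(2_000_000)]

  for age in (1, 2, 5, 10):
      survivors = [t for t in lifetimes if t > age]
      mean_remaining = sum(t - age for t in survivors) / len(survivors)
      # Heavy tails make the estimate noisy for larger ages, but the trend is
      # clear: the longer something has already lasted, the longer it has left.
      print(f"age {age:>2}: mean remaining life ~ {mean_remaining:.1f}")

None of this says anything certain about any individual item; conditioning on age just shifts the expectation, which is all the effect claims.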


But this is not the only thing you observe. You can see many more things about those new products than simply the amount of time they have been around.

You can see Notion come out, and go "wow, I find this good and I think it will keep being around and good for a while". Like the GP said, the fact that it is young doesn't mean it is doomed, or even that it is not good. The Lindy effect says nothing in that direction.


> go "wow, I find this good and I think it will keep being around and good for a while".

Sure, you can have a much richer model of the world than the Lindy model provides.

>fact that it is young doesn't mean it is doomed, or even that it is not good. The Lindy effect says nothing in that direction.

No; it says that young things are less likely to last than old things. If the average expectancy is small, sure it doesn't guarantee any one thing is doomed (some aren't) but it absolutely does tell you that most of them aren't going to last long.

Imagine a friend tells you that they know someone who is planning to become a professional rock star when they leave school.

They are probably not going to make it, because most people who try don't. You can't be sure, because some people do become rockstars. But it's not an error to be sceptical of their chances.

If you hear they are still gigging after 3 years, even if they haven't made it yet, you are probably a tiny bit less sceptical.

That all makes sense, right?


The fact that "most of them aren't going to last long" is not enough to deduce that any specific one of them is not going to last long. All things that are old were once new, and survived. Again, the implication does not go that way and cannot be used even as a rule of thumb. It does not hold.

If you tell your rock star friend to go back to school, you are helping him minimize risk. The chances are that he won't make it, but that's not a consequence of the Lindy effect. It's because surviving in the rocking world is difficult as it requires that people pick your music repeatedly over other available bands, which is the requirement for the Lindy effect to apply, not its consequence.

More importantly, if you tell every rocker to stop and go back to school because their chances of succeeding are low, you end up with no rock bands. Because all rock bands have to start out being new.

Believing that most new rockers won't become big is fine. Believing that any rocker that is new won't make it is wrong, in fact you can be certain that some will.


Yes, however we are reasoning probabilistically here.

We're not making absolute statements about either new or old things.

We're not saying any one new thing definitely won't last another 10 years, the same way we're not saying any one old thing will definitely last another 10 years.

The whole discussion is about what things are likely to do.

There's a place for formal logic in discussions of probabilistic models, but it can also confuse people.

> Believing that any rocker that is new won't make it is wrong, in fact you can be certain that some will.

If you were made to bet money repeatedly on whether a new rocker or an old rocker would still be a rocker in 5 years, you would end up with more money if you always picked the old rocker. This means that older rockers last relatively longer and new rockers last relatively shorter ("on average" / "by expectation" / "probabilistically"); that's literally the same statement.

That the underlying mechanism might be gradual removal doesn't change this, and doesn't matter.


You know, fair enough. I understand what Etheryte at the top of this thread is saying: Assuming that "most of the new tools won't be around in 10 years" is correct. Assuming "none of the new tools will be around, only Vi will remain" is incorrect. Betting on other tools make sense, and giving up on all new tools would fulfill the prophecy at everyone's detriment.

I got the sense that the author is trying to say the latter, however what they literally said is more like the former: Visual Studio Code, specifically, is likely not to be around in 30 years. I can't argue with that.


> But this is not the only thing you observe.

Yes it is, in this context.

When you take into account more factors, then sure you can get a better prediction. But then it's not the Lindy effect anymore.

I'm talking about the Lindy effect only. Nothing else.

First sentence on the Wikipedia page:

> The Lindy effect (also known as Lindy's Law[1]) is a theorized phenomenon by which the future life expectancy of some non-perishable things, like a technology or an idea, is proportional to their current age.

Proportional.


But it does, if you assume constant speed. (I almost wrote that in my comment but decided it was too obviously implied to mention.)

So if we assume that's implicit in my example, how does the Wikipedia page on the contrapositive help now? (It doesn't, as my argument isn't relying on any property of it.)


So are you saying Pareto distributions follow linear relationships? Are you sure about that?

I would read up on the topic if I were you. Statistics can be unintuitive sometimes, and basing your argument on supposition can be unreliable. I'm not sure about the correct answer here, but most folks get e.g. the Monty Hall problem incorrect if they rely on their intuition.

For a product at T = 0, why would it be optimal to assume its lifespan would be T = 0 as well, rather than the average age of all products?


> I would read up on the topic if I were you. Statistics can be unintuitive sometimes, and basing your argument on supposition can be unreliable. I'm not sure about the correct answer here, but most folks get e.g. the Monty Hall problem incorrect if they rely on their intuition.

But this is a stupid-obvious problem. The Lindy effect is easy to write out in formalisms:

    age(x) > age(y) ⟶ life_expectancy(x) > life_expectancy(y)
Here's the claim that Etheryte says is the opposite of that:

    age(x) < age(y) ⟶ life_expectancy(x) < life_expectancy(y)
It is hopefully obvious that these two claims are identical, not opposites. But if it isn't, consider that we can rewrite the first one like so:

    age(y) < age(x) ⟶ life_expectancy(y) < life_expectancy(x)
Statistics aren't relevant to the question in any way; it doesn't matter whether they can be unintuitive sometimes.


>saying Pareto distributions follow linear relationships

No; not sure where you got that from.

>For a product at T = 0, why would it be optimal to assume its lifespan would be T = 0 as well, rather than the average age of all products?

I'm not saying it would be. (The original blog might, but that's irrelevant to my posts here.)

If you knew the average lifespan of all products, that would be your best estimate of the lifespan of a new product. (You can't use average age naively without thinking about right censoring.)

If the average lifespan of a product was short, then the average lifespan of a new product would be short. As the product aged, its expected lifespan would increase.

I.e. the longer something has been around the longer it will be around, or, equivalently, the shorter it has been around the shorter it will be around (both obviously talking about expected times). Make sense now?


There is no reason to assume constant speed. The initial statement does not assume constant speed.


I mean, the 'initial statement' we're talking about here was my hypothetical example, right?; just pretend I decided not to leave out "assuming constant speed" after I first wrote it, and we're good? Unless you are referring to a different statement in which case we're all mixed up and have exceeded the carrying capacity of HN threads :)


You have to take a long time to walk a long distance. Regardless of whether your speed is constant.


The statement is A=>B. A implies B e.g. long life, implies longer life remaining.

The moment you negate A, both positive B and negative B satisfy the implication.

I.e. you can't claim shorter life implies shorter life remaining. For that to hold you need equivalency, not implication.

Here is an example. Rain implies streets are wet.

Does no rain imply streets are dry? No. There could be a flood, or street cleaning, or a pipe burst.


You have somehow failed to understand what is being claimed.

The Lindy effect says that entities with longer realized lifespans have longer expected future lifespans.

In other words, if one thing has been around for 5 years, and another thing has been around for 3 years, then we know three things:

1: The first thing's expected future lifespan is f(5) years.

2: The second thing's expected future lifespan is f(3) years.

3: f(5) is greater than f(3).

Etheryte claims, in a gross error, that this does not imply that the expected future lifespan of shorter-lived things is shorter than that of longer-lived things. This is ridiculous; the claim Etheryte denies is a simple restatement of the Lindy effect. For our two objects of ages 5 and 3 years, the "new" claim would tell us the following three things:

1': The expected future lifespan of the first thing is f(5) years.

2': The expected future lifespan of the second thing is f(3) years.

3': f(3) is less than f(5).

But 1' is exactly the same claim as 1, 2' is exactly the same claim as 2, and 3' is exactly the same claim as 3. No claim has been negated, only repeated.


Ok. But then you just proved Lindy effect is at best probabilistic, and most likely survivorship bias.

Because it doesn't hold invariant to time.

It's not hard to make a counterexample. E.g. when Windows 3.11 existed for several years and Microsoft published Windows 95.

If we were to travel back then and use Lindy effect we would get wrong predictions.

Because it's such a simple heuristic it only demonstrates one way effect.


> But then you just proved Lindy effect is at best probabilistic

Obviously? Look at the statement of the Lindy effect:

> The Lindy effect proposes, simplified, that the longer something has been around, the longer it will probably stay around still.

What do you think the word "probably" means?


I salute you, Thaumasiotes, great explanation.


>Does no rain imply streets are dry?

It would, if the only thing that could make the streets wet is rain; ie if we were in a limited model where that was true.

We're talking in the context of a limited model in this thread. My point was that the statements are equivalent in this model.

Alternatively:

Imagine we were talking about the size of a pizza and how big a dinner it will lead to.

A blog says "wider diameter pizzas generally lead to bigger meals. Narrower diameter pizzas generally lead to smaller meals".

The first comment here says "just because wider pizzas implies bigger meals doesn't mean narrower pizzas implies smaller meals, the blog has made a logical error!".

I reply "in the context of roughly round pizzas the two statements are the same".

You then give the example about the rain and streets. Hopefully it's clear that your general logical point isn't relevant to the pizza discussion, where there is a relationship between width and area.

Similarly, in the Lindy model, which we are actually discussing, there is a relationship between age and expected lifetime. That relationship is more complex than the pizza one, which confuses things, but hopefully the pizza example makes it clear why there isn't a general logical error here.


> It would, if the only thing that could make the streets wet is rain; ie if we were in a limited model where that was true

We are in a limited model. Many factors affect software lifespan.

It's a heuristic at best.


I remember a rule about audio speakers which sounds similar to the Lindy effect: speakers with bad distortion (>=10%) are bad-sounding speakers, but the opposite isn't true (e.g. 0.001% distortion can still be a bad speaker).


I believe you are incorrect. According to wikipedia:

> The Lindy effect is a theorized phenomenon by which the future life expectancy of some non-perishable things, like a technology or an idea, is proportional to their current age.

This implies that things that have been around for a short period of time do in fact have a short expected lifespan. You're correct that "A implies B does not mean B implies A as well", but that assumption is not needed.


These statements of "longer" and "shorter" actually refer to probabilities. So it makes sense to state that when something is not "longer", it is "shorter". So I disagree; OP's statements about the Lindy effect make sense.

Also, someone commented about COBOL. Of course no one thinks COBOL "is around"; "being around" also implies statistics about usage. COBOL is declining and almost no one uses it anymore, so it is safe to say it is "almost not around".


There are 220 billion lines of COBAL in production. Much of the most essential banking infrastructure in the world runs on COBAL, and is likely to indefinitely.

https://www.bmc.com/blogs/cobol-trends/#:~:text=According%20....

The only way we stop using COBAL in production this century is to have some kind of apocalypse, or maybe an apotheosis.


Given that the typo was made 3 times (suggesting that it's a misunderstanding, rather than a mistake), hopefully you will take this comment in the educational manner in which it's intended rather than being a snide "gotcha" - it's COBOL, not COBAL.


I think the Lindy effect can work in reverse. Like if you had a collection of 1000 things that are all just invented, then the likelihood is that only a small percentage of them will be around and in use in 50 years. Compared with a collection of 1000 things that have all been in use for 100 years, the percentage of them that will be around in 50 years will be much greater.


You can't apply it endlessly in reverse though. Say you have a piece of software that's been around for 25 years - assuming that it's halfway through its life is reasonable. If something was just released 25 minutes ago, assuming that it will be abandoned in another 25 minutes is preposterous, since products tend to have a minimum amount of time for which they stay relevant. You could use this argument to create an "ultraviolet catastrophe"[1] for the Lindy effect and argue that virtually all software should be abandoned microseconds after being released.

[1]: https://en.wikipedia.org/wiki/Ultraviolet_catastrophe


I'd actually assume most GitHub repos get exactly one commit and are effectively abandoned in actual zero time.


It's a statement about probability. You can't assume that that specific piece of software will be abandoned in 25 minutes. But, if you had to guess which was 'more likely' to be around at some future time, then the Lindy effect would suggest it's the software that's been around for 25 years over the one that was just released.


If software is only 25 minutes old, you probably aren't using it.


Yup, it seems that this 'effect' is simply extrapolating from randomness and averages.

Take any random thing at a random time (with zero knowledge of its actual lifespan), and on average, you are in the middle of its lifespan. Therefore, if Thing-A has existed for 28 years, it is likely to last for another 28; if Thing-B has existed for 6 years, it's likely to exist for another 6, and so on.

It may be somewhat informative for comparisons but not in real life.

You are hiking away from a disaster with all your possessions and life's savings in your backpack, and are now at a muddy riverbank needing to cross. You ask me how deep the river is and I tell you the average depth is 6 inches. That sounds great, but I have most definitely NOT told you that you'll be able to get across without finding a deep spot and having to drop your backpack to survive.

Using this effect to make judgements about product lifetime is similarly uninformative. It is a hint leading to only a possible inference, not data leading to a valid prediction.


With respect to software objects, the most likely reasons for their life to end are:

1. Not sufficiently useful relative to involved costs for most applications (data formats, configuration maintenance, etc.)

2. Disrupted by something that is "10x better" for the purpose (e.g. using a spreadsheet instead of a text editor for 2D, cell-oriented data)

3. Outside forces invading the ecosystem and obsoleting dependencies (new OS, hardware, etc.)

So what the Lindy effect describes in long-lived software is just the software that is relatively cheap to keep around, hard to greatly surpass, and resistant to invasion - which describes a lot of "worse-is-better" software, where the UX kinda sucks and it's a bit too unstructured for any particular application, but not so much that anyone cares to address it in the relevant professional scenarios where it comes up: instead the user just girds themselves to fight it into submission, because the choice is between spending six hours fighting it plus a week debugging it, or two weeks making a Right Thing that is much less compatible.


It's not reversible, inasmuch as your first statement is correct, but it cannot identify which things will be around.


It doesn't claim to be able to pick which specific things will survive, it only gives an estimate of the probability that something will survive over a certain time frame.


That's not right. The Lindy argument holds in that case as well, it's just a different version of the doomsday argument.

The basis for this kind of reasoning is essentially that, if you can assume that you are an 'average user' (and you don't have reason to believe you're especially late or early), your prediction about the longevity of the project is most likely to be correct if you predict somewhere between 1/3x and 3x[1] the project's current lifespan.

That is because if, say, you predicted VsCode will exist a hundred times longer than it already has, that prediction is only true if you are indeed among the first 1% of its users. 99% of VsCode users making that prediction will be wrong.

[1]https://cdn.vox-cdn.com/thumbor/2VfpAbtj-yOq5gHhYIdgrAIdBuw=...
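
To see where the 1/3x - 3x window comes from, here is a minimal simulation sketch. The only assumption is the one above: the moment you look at a project is uniformly distributed over its total lifespan (i.e. you are a typical observer, neither especially early nor especially late).

    import random

    # Copernican / doomsday-style estimate: if the fraction of the lifespan that
    # has already elapsed is uniform on (0, 1), how often does the remaining
    # lifetime fall between 1/3x and 3x the current age?
    n = 1_000_000
    hits = 0
    for _ in range(n):
        elapsed = random.random()            # fraction of the total lifespan already elapsed
        if elapsed == 0.0:
            continue                         # avoid a (vanishingly unlikely) division by zero
        ratio = (1.0 - elapsed) / elapsed    # remaining lifetime / current age
        if 1 / 3 <= ratio <= 3:
            hits += 1

    print(f"P(1/3x <= remaining/age <= 3x) ~ {hits / n:.3f}")   # prints roughly 0.5

About half of all typical observers are right if they predict "somewhere between 1/3 and 3 times the current age"; and only roughly the first 1% of observers would ever see the project last 100x longer, which is the VsCode example above.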


This is correct, let me take a crack at explaining why.

The Lindy effect is named after a restaurant, Lindy's, which ironically closed recently. If we were to take it as invertible, we would have to conclude that a brand-new restaurant, which opened an hour ago, is most likely to shut down one hour from now.

This is an obvious absurdity: if we wish to speculate on the longevity of new things, we can't use the Lindy effect to do so. It's a good heuristic for betting that something will continue, it's a bad heuristic for betting something won't.


Reading through the wiki page, the Lindy effect is not so much about the restaurant itself. It was initially about a comic's ability to stay relevant by not having too many appearances, and was later seemingly co-opted by Mandelbrot and Taleb to mean what it does now. Lindy's law is a strange one for sure.

https://en.wikipedia.org/wiki/Lindy_effect#cite_note-6

https://www.gwern.net/docs/statistics/1964-goldman.pdf


> most likely to shut down one hour from now.

Yeah, but in an hour it will most likely shut down in two hours /s

I really don't get the whole argument over this definition, to me it sounds like a funny observation more than a law of the Universe.


It's useful if you're trying to decide which is going to last longer, Harvard, or Bob's Quality Education Mill est. 2022.

It's not useful if you're trying to decide whether Bob's Education Mill will last longer than the state-funded community college opened in the same year; there are plenty of other heuristics which we might use there, but not this one.


> ... a brand-new restaurant, which opened an hour ago, is most likely to shut down one hour from now.

A new restaurant that opened one hour ago existed for months during its planning stages. That restaurant will probably not survive its first few months of business.


The Lindy effect states that the expected lifetime of a thing is proportional to its current age.

That includes the expectation that something young will be (on average) half way through its lifespan.
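
In symbols, assuming a Lindy proportion of 1 (expected remaining lifetime equal to the current age):

    \mathbb{E}[T \mid T > t] = t + \mathbb{E}[T - t \mid T > t] = t + t = 2t

i.e. at age t a thing is, in expectation, halfway through its total lifespan T.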


Can it also apply to the time until an event which hasn't yet happened? I.e. a system you have never observed crashing before has been running for T hours. So, being on average halfway through its lifespan, it will probably run another T hours without incident.


Not quite.

It works based upon the 100% certainty that whatever you are, at some point you will cease to be.

On average, regardless of what you are, you are in the middle of your life or existence (some very high variance here).

In order to generalise it to non-certain events, such as a program crash, you'd need to remove the certainty assumption and rejig the consequent statistics - it might be doable, but you wouldn't end up with something quite so clean and simple.


I am interested in this topic but never quite understood it. Is there a technical reason why these organizations don't transpile COBOL to C? I imagine building such a compiler wouldn't be that hard, and even if it is, we've had decades to work on it.

Is it more of a process thing (having to change unit tests, code review, hire new people, etc) that prevents this? Or is it just too risky given the important roles of the mainframes?


Methinks you are replying to the wrong comment? But for the fun of it, the large three letter mainframe company made an effort to rewrite their COBOL compiler's backend to use the same JIT compiler they have for Java (check slides 5 and 8): https://www.slideshare.net/MarkStoodley/under-the-hood-of-th...

Such a change would be transparent to the end user. Except for gotchas like not all users having the source code for all of their dependencies. What happens if you want to update your program and you have a dependency on a binary from some vendor who went defunct 35 years ago? You're stuck compiling against whatever artifacts you have, so does transpiling to C still work?


Yes, wrong comment, sorry! Thanks for the link, it was interesting


Wrong comment?


Yes, how embarrassing


I don't see any issue, as long as you consider the time until the system crashes or stops for any other reason rather than the time until the system stops specifically due to a crash.

And note that the Lindy effect is about the expected value (mean). The statement "probably run another T hours without incident" could be interpreted as relating to the median, which is actually a little less than the mean. So the statement should either be "the expected value of the number of hours until the system stops is T", or "the probability of the system running for at least another T hours is a little less than 50%".


Edit: "A little" is inaccurate, the probability of the remaining lifetime being at least T is actually 25%[1]. And all this is assuming a "Lindy proportion" of 1 (which seems reasonable) and that no other information is known.

[1] Calculation: Lindy proportion 1 corresponds to a Pareto distribution with probability distribution function `a / x^(a+1)` where a = 2 and the range of x is [1, ∞). The cumulative distribution function is `1 - 1/x^a`. So the probability of the remaining lifetime being less than or equal to the observed lifetime is CDF(2) = 0.75.
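
If you want to sanity-check this numerically, here is a minimal Monte Carlo sketch. It assumes, as above, a Pareto distribution with shape a = 2 and minimum 1 (which is what Python's `random.paretovariate` samples from), i.e. a thing observed to have survived to age 1:

    import random

    # Total lifetime X ~ Pareto(shape a=2, minimum 1): the thing is age 1 now.
    a, n = 2.0, 1_000_000
    samples = [random.paretovariate(a) for _ in range(n)]        # values in [1, inf)

    mean_remaining = sum(x - 1.0 for x in samples) / n           # E[X - 1], theoretically 1
    p_at_least_T   = sum(x >= 2.0 for x in samples) / n          # P(X >= 2), theoretically 0.25

    print(f"mean remaining lifetime  ~ {mean_remaining:.2f} (expected 1.0)")
    print(f"P(remaining >= observed) ~ {p_at_least_T:.3f} (expected 0.25)")

The mean converges slowly because the distribution is heavy-tailed, but the 25% figure is quite stable.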


Very true but I think what's also implied in the article is that new tools (VS Code) that look very different from old tools (vi/emacs) will probably not last. So, if A is very old and B is very different from A then B is less likely to succeed.

I don't believe that, btw, just guessing at the mindset of the author.


Even that's a fairly bad take.

Visual Studio Code is based on a much older line (visual studio) which dates back to 1997.

It's not nearly so new as it may seem, although I certainly appreciate the refresh from the older, more feature-filled (and feature slowed) visual studio proper.

Not to mention - most of the "new stuff" in visual studio code is really just a nice UI layer that's built using mature and incredibly battle tested tooling - HTML/CSS/JS.


Is VS Code actually based on Visual Studio? I thought it was purely a marketing-based connection like Java/Javascript.


Depends on how you define "based on", I suppose.

It comes from the same parent company, with a lot of interest in solving many of the same challenges. The UI paradigms are obviously related, and if you've ever done any real VS project debugging, you'll find the structure of those configuration files very (very) similar to how tasks work in VSCode.

I don't believe the VSCode codebase ever actually pulled anything from the original visual studio, but the roots of the application clearly come from the same place.


It's not meaningfully based on VS proper beside the name. The main difference is it lives in a very different ecological niche. Microsoft wouldn't have taken up its development if it didn't.


If disruption does happen, it's better to be the one disrupting your own product than let others do it.

Some people, a few for now, are migrating from VS proper to VS Code, including for languages like C++/C#. I am one of them.


I can’t really use VS code for work (PHP support is nonexistent compared to PHPStorm), but I needed to make some changes to a C++ codebase in WSL. VS couldn’t get anything to work, I couldn’t even get it to compile. VS code got compiling and step debugging via gdb working after just a few tweaks to the make file and tasks. Worked like a charm.


Upvoted but disagree. VS Code and VS have totally different lineages and probably not much shared code under the hood. I've used VS for many years (best IDE bar none, in my opinion) and when I started playing with VS Code I couldn't help but think they should have chosen a different name.

I appreciate what VSCode is achieving, though: I can run it on my Linux host and remote into my microcontroller's bare-bones OS for debugging, and everything works nicely together, but it ain't no Visual Studio.


The reason the binary is called "code" is because the name "Visual Studio" was applied at the last minute by marketing - this was told directly to me by one of the program managers involved in the launch.


VS Code is more of an editor (with a nice plugin system) than an IDE, right?

Should have called it something like "Technical Notepad."


Yeah I'd say the lineage is Notepad++ -> Sublime -> Atom -> VS Code


I don't think there's anything in the wording of the Lindy effect that means it's only a one way inference. It says the life expectancy of a thing is proportional to its age. That makes sense to me. If a product is going to die, it's likely to die fast. If a product sticks around five years, it's more likely it'll be around for ten more. It's why startups go under all the time, but you don't hear of a lot of 50 year old companies going out of business.

I think the error is interpreting these things literally. If we did, then if it were 1990 we'd say Windows wouldn't see the next century. Probabilities are attached.


A "logical" error rather than a statistical one :)


I’d never heard of the Lindy effect!

By the author’s logic, COBOL programs still in use today will long outlive Linux. That could even be true.


There are 1000 things that have been around for 1 year.

There are 10 things that have been around for 10 years.

I think it’s safe to say that there is less chance of any individual thing that’s 1 year old being around in 1 year than the things that have already survived for 10.


Well, yes, but that's the outside view, based on knowing very little. If you know more than just the age, often you can get better estimates.


> The Lindy effect proposes, simplified, that the longer something has been around, the longer it will probably stay around still.

I think there is a linguistic ambiguity here. What is meant by "the longer something has been around"? Longer with respect to what?

If you mean to say if A has been around longer than B, then A will stay around longer than B (in expectation), then this law is commutative with respect to A and B and what you specify as "opposite" holds.

If you mean to say if something has been around for t then it will stay around for some f(t) where f is an increasing function, then again what you specify as "opposite" holds.


The converse of "the longer something has been around, the longer it will probably stay around still" is "the longer something will probably stay around (in the future), the longer it will have already been around (from the past)." The change from longer to shorter is not a logical converse.


> In other words, A implies B does not mean B implies A as well.

I think you wanted to write A implies B does not mean "not A" implies "not B" as well. B implies A would be: the longer something will probably stay around, the longer it already has been around.


But a lot more things have only ever been around for a short time. Only a few make it through that "having been around a short time" phase, stay around longer than expected, and become exceptions to the rule.


Well, that was certainly a successful nerd-snipe.


It just says visual studio is statistically more likely to die out - as most software does.


I don't really understand this idea of never taking your hands off the keyboard. Maybe people program differently to me, but most of the time I'm not typing anything. Most of my time is spent thinking. When my thoughts are clear and the problem is solved, then I type. And when I do, it's usually no more than a dozen lines at a time.

I get the impression from these people that they are constantly typing things. In fact, they're typing so much that they can't possibly waste valuable seconds using a mouse. I must be misunderstanding what they mean because that just can't be right.

And what's with the "you can achieve the same thing faster, without breaking your concentration" in regard to using a spell-checker or a calculator or whatever. Are you being serious? I can achieve the same thing faster? I mean how long do you think it takes to check the spelling of a word? Even if I must look it up in a physical dictionary, how long are we talking here?

Guys, seriously, slow down. You're going to burn out. I don't want to judge because I don't know you. Maybe you're a rockstar, but I'd guess that if you're really going this fast, the quality of your code is suffering.


> In fact, they're typing so much that they can't possibly waste valuable seconds using a mouse. I must be misunderstanding what they mean because that just can't be right.

Yes, you are misunderstanding. First, it's not only about typing but generally about doing whatever you are trying to do without unnecessary delays, including navigation between functions, files, windows, etc. Second, it's not just about saving seconds here and there: the end goal is to stay in the flow state, avoiding unnecessary interruptions and context switches. Every time you reach for the mouse and move the cursor or scroll a document, you stop thinking about the problem at hand because you're focusing on the motion, and when you're done with the mouse you waste mental energy resuming your previous train of thought, maybe even forgetting something.

> Even if I must look it up in a physical dictionary, how long are we talking here?

This is actually a good example because it seriously disrupts your thinking by forcing you to pause for some seconds and completely focus on something else. I am sure proof-reading a long document in this way is much more mentally exhausting (and slower, but again speed is not the point).


Your experience is very different from mine!

Using a mouse does not distract me at all. I don't think about it, consciously, any more than I think about the motions my fingers make as I operate the keyboard. I'm not thinking about the tools, I'm thinking about what I'm doing through the tools; my hands move automatically.

I suppose it is like learning a musical instrument. At first you have to learn how to operate the instrument, practicing the motions to build up muscle memory. Then you start learning to play notes through the instrument. Eventually you stop thinking about the instrument, or the notes, because all that has become habit, and you just think about the music you are making; the instrument feels like an extension of your body.

If you have had a long-standing preference to use only the keyboard, and not the mouse, perhaps the mouse feels distracting for you because it is not part of the instrument you have learned to play. Of course this could be a self-perpetuating tendency.


> Every time you reach for the mouse and move the cursor or scroll a document you stop thinking at the problem at hand because [...]

... because I have well over 200 of my own keybindings in my spacemacs dotfile already, and quite a few of them don't do what I need in some particular situation, or some underlying package got broken, or, or, or...

> you're focusing on the motion, and when you're done with the mouse you waste mental energy resuming your previous train of thought, maybe even forgetting something.

That's because it's already half past 3 p.m and except having two or three coffees, I haven't eaten anything yet.

Besides, I highly doubt if I don't get any Nobel- or Turing-Award it's because I don't keep my fingers on the home row.


If you’re thinking, you don’t even need to be at the computer.

I can’t speak for others, but I keep my hands on the keyboard a lot of the time because I’m reading. As a (neo)vim geek, the key is to be really good at moving through the code. Being able to jump from where you are to where you want to be in a file far away without stopping allows you to read the code in a more linear way than it’s written.


It's also about doing less movement with the arm. I just got a big screen and my desire to avoid the mouse increased a lot (increasing the cursor speed mitigates this a bit, however). An external keyboard (one that has a numpad) may force you to spread your arm a bit more than is ideal.

It's also less cognitive load when the keyboard shortcuts are internalized. This may be worth it for operations you do a lot.

> Even if I must look it up in a physical dictionary, how long are we talking here?

This, however, takes an infinite time.


It's not really about speed, it's more about preferences. I'm a keyboard type of guy, I just find using a keyboard nicer, that's it. I don't like having to move windows around with a mouse, I find the keyboard experience better. Same with scrolling, I find using up and down keys much nicer than a wheel as well. It doesn't mean that one way is the right way and the other is wrong, it's like taste I guess.


Magic Mouse continuous scroll experience was a game changer, if I had to go back to basic mice I would prefer keyboard for everything


I have long wondered the same. What are people doing which makes them so concerned with the efficiency of their editors? I spend the great majority of my work time reading and thinking. When it seems that solving a problem will require me to change a great deal of code, that's generally a sign that I haven't done enough thinking yet. On the rare occasion that I do need to write a lot of new code at once, it's not that big a deal to just... type it out!

I have had a few co-workers who seemed to spend a great deal of time typing, but I have not generally been impressed with the quality of their work. In fact, the most incompetent developer I have ever met, an enthusiastic proponent of his favorite editor and its automation features, was also the most prolific, routinely churning out hundreds or even thousands of lines of awful, bloated, bug-ridden code a day.

The field of software development is large and varied, so of course it is possible that there are competent people doing solid work which really does involve a tremendous amount of fiddly editing, thereby justifying the otherwise inexplicable degree of attention given to sophisticated editors; but I cannot imagine what their working lives are like, and I hope I never have to find out.


The whole thing actually goes back to smoking weed. The ‘no mouse ethos’ goes at least as far back as the ratpoison window manager and this historic post: https://www.nongnu.org/ratpoison/inspiration.html Tiling window managers, living in terminals, vi, that Firefox with vi keybinding - all part of this THC cult. I recall some other ideologically foundational work where the author talked about being able to work one handed with this type of setup, in this case I think performance is taking a back seat to joint-smoking ergonomics.


Did you miss the part in the link you posted where he says that he’s joking?


I am similar to you with regard to process and how much I output at a time, but in my case having to fuss with the mouse a whole bunch makes it that much easier for me to lose my train of thought.

That said, I don't optimize heavily against this the way some folks do. I use emacs and org-mode and GNOME (well, whatever the System76 folks are calling their reskinned GNOME desktop :)) which I think provide a nice balance. Emacs lets me switch between files with a couple keystrokes rather than having to dig through 10 tabs, and GNOME I think encourages an alt-tab based workflow. I still use the mouse for most other things though.


It's a running annoyance of Windows that they are always making small changes to the UI that don't really come across as an improvement or a deterioration but that force you to relearn things.

It really drove me crazy when I went from being a Linux partisan to being responsible for quite a few different Windows machines, and on a given day I could be working with anything from Win 98 to Win ME to Win NT to various editions of Win 2000 and XP, and if you had to find something in the UI it would be slightly different in all of those, which was a cognitive load. Contrast that to Linux, where I did it all on the command line and it stayed the same in that time frame.


The trick on Windows is pressing Win+R to open the Run dialog. One key combo works in all Windows versions. It takes a real command or exe, not some simplified shell abomination. After learning a few .msc filenames (e.g. services.msc for Services or devmgmt.msc for Device Manager), you can quickly get to the old advanced config screens. Most common tools haven't changed their names since Win95, e.g. cmd, winword, excel, calc. In fact, I paste and re-cut one-liners in it to strip them of their formatting.

Compare that with the start menu. I type something, and the chosen program changes every time. I type notepad++, it shows the correct program until I type the d, then decides I really want to open edge and search notepad++ on bing. Or it launches an uninstaller instead of the actual program.

Now one of these days ms is going to optimize the win+r experience, so have fun while it lasts.


> Most common tools havent changed their names since win95

The trick is to rely on decades of acquired arcane knowledge about .exe filenames. What a great UI!


>The trick is to rely on decades of acquired arcane knowledge

I hate windows as much as the next person, but to be fair that's kind of how command-line life works on Linux too isn't it?


haha, I did a brief presentation talking about this exact keyboard shortcut (among others) and why it was so useful for Windows admins back in my college or high school computer club. The feedback I got was the presentation was dry, but useful, but that's keyboard shortcuts for you... :D


I am reminded of a study where the hue of the lights in a factory was changed, and the workers reported being happier and more productive. Then the hue was changed again (actually, back to the original), and the workers reported being happier still and more productive!

So what lighting hue was best? Irrelevant, the important thing was that workers perceived that management was paying attention to their wellbeing.

I think something similar is happening in Windows (and Mac!) desktops, where they change small things "for productivity", and the majority of people will think it's an improvement, just because it's different.

But a small subset of us opinionated people will be upset that our carefully tuned habits are disrupted.


> they change small things "for productivity", and the majority of people will think it's an improvement

I feel like I've never talked to a person, techie or not, who didn't agree it was incredibly annoying to have UIs that they're used to change out under their feet.


I think that's why in the anecdote they change lighting hues, and not, for instance, swap the machines or move around the work areas for no reason.

In IT parlance, I guess lighting hues would be equivalent to changing the window manager theme for a lighter/darker one?


I usually don't mind at all; it's a non-issue. Losing features sucks, but a new UI, especially for tools that were legitimately dated, is something I like.


How would you define "legitimately dated"?

I'd say something like the "ribbon toolbar" upgrade of MS Office, which I think improved accessibility of functions, is one of the only examples I can think of. But then I'm not sure if that was just a case of getting used to the new layout.

Windows "Settings" versus the Control Panel is a counter example. It was a downgrade, it still sucks now, years later, but the Control Panel could be argued as 'dated' by some definitions.


> I'd say something like the "ribbon toolbar" upgrade of MS Office, which I think improved accessibility of functions, is one of the only examples I can think of.

I don't think it improves accessibility when I keep having to go hunting in different sections for the right button. (The ribbon in the Recycle Bin makes no sense to me either.)

I'll admit it can be daunting for a new user to see a lot of buttons, but if there is a tooltip or even a tutorial for each button showing where it can be useful, that shouldn't be much of a problem.


Genuinely astonished

For me, the Ribbon is what made Office unusable. I've used MS apps since the MS-DOS and Xenix days. I remember the old MS menus (2 lines, bottom of screen, summoned with Esc) to CUA transition, then the horror of WinWord 1, then the passable WinWord 2, then the actually quite nice WinWord 6.

I still run Word 97 on Ubuntu under WINE. Works perfectly.

But after Office 2007, it is totally unusable for me.


Hehe, It did take a long time to get used to it, by necessity of "where I work upgraded, so no choice but to work with it". So, now it's second nature, and in my brief interludes with LibreOffice I struggle with the layout style and finding where things are.

I did say "But then I'm not sure if that was just a case of getting used to the new layout" - still open to that being the case.

Maybe it's like 12th edition text books, change for the sake of change. In the case of Office software, we need people to keep needing the 'basic' and 'intermediate' training sessions; we need credentials to justifiably decay with time?


The ribbon can be nice but I love that on macOS we still get the menu items. So much easier to find things.


I'm not saying every UI update will be good, just that in a huge amount of cases, updates that actually do improve the UI will be placed in the same bucket as bad updates, just because "muh me no like change".


“dated” is not the issue, rather does it continue to be fit for purpose?


> I am reminded of a study where the hue of lights in a factory was changed, and the workers were reported happier and more productive. Then they changed it again (actually, to the original hue), and the workers reported being happier still and more productive!

> So what lighting hue was best? Irrelevant, the important thing was that workers perceived that management was paying attention to their wellbeing.

I doubt it would work if they changed the place of tools, the layout of the factory, stuff like that.


Or the workers know to mark “yes I am happier” on the surveys lest they get remaindered out.


Related: you just have to spend some time on various tech sites/forums to see that people will complain when something changes in an OS release, but even more so people will complain when the UI does NOT change ("there's nothing new, boring", to paraphrase that whole second category)... Not the same exact individuals of course, but there are loud voices clamoring for change for change's sake (even if they don't know they are doing it).


It's possible to recognize that mechanism and not mind being affected by it. There is nothing wrong with experiencing joy based on 'just' cosmetic changes. Indeed it's a direct effect of what's called 'novelty-seeking' behavior, a character trait associated with lots of positive things for the person and society, and only a few drug habits and untimely deaths in non-FAA-approved aircraft.

Many of the possibly meaningless desktop changes are also meaningless in the sense that they won't affect any workflow, and are therefore benign.


Surprisingly, I find Linux quite guilty of the never-ending GUI papercuts too (it's a bazaar after all).

Sadly, Microsoft is kinda forced to follow the trends; everything moves in all directions (web, phones).


> Sadly Microsoft is kinda forced to follow the trends, everything moves in all direction (web, phones).

Former Microsoftie here. While they are following trends, the actual impetus behind all the little changes you see all the time is that managers and ICs are incentivized to make "impactful" changes if they want good performance reviews. UI changes are a pretty easy way to have "impact". You can say something in your review like, "and X million users used the new taskbar that's in the middle of the screen".


This fucking reeks of Pressure to Publish in academia.


Except that publishing stupid papers does not harm millions of users, but yes.


It harms thousands of researchers who read them and base their further research upon.


The average paper only gets one citation.


If the average type of paper deviates over time, it doesn't matter whether the average number of citations is low - the type (direction) of research that gets cited shifts as well, and ultimately the direction of all research done shifts.


I don't understand how this scheme (in companies or academia) manages to emerge and sustain itself everywhere... surely there's an answer, but I don't know it.


I think the long and short of it is that systematically incentivizing anything is hard.

When organizations are small, you have the resources to evaluate performance on a case by case basis because everyone works with everyone else to some degree. When organizations grow and you have layers of bureaucracy, inevitably some asshole middle manager gives someone a bad review for a bullshit reason (jealousy, racism, dislike, etc).

That's when HR swoops in and starts making processes and standards for everything to shield the company from lawsuits. Crucially, these processes and standards don't necessarily prevent the problem, but just let the company's lawyers argue that they tried to prevent the problem.

Now that you have processes and standards for everything, employees start gaming the system. The goal is no longer "do the thing that makes the company succeed". The company is so big that relatively few individuals have the power to swing the company's fortunes one way or the other. Instead, the goal is now "maximize my career growth".

Then you end up in the nonsense FAANGM situation where individual little teams are putting out crap and making decisions that make the company look dumb. Things that would be easy to ignore if a significant fraction of teams weren't doing it. All because they're doing things that maximize their "impact".

This is hard to correct because you don't want to be too harsh with your teams and hurt morale, and it's not like it's every team. So you try to be flexible, be tolerant of missteps as long as people show improvement, because if you just fired everybody who made boneheaded decisions in the name of "impact", the hit to morale would cause terrible attrition problems. And worse, some of them would sue the company, which was the problem you were trying to avoid in the first place.

s/company/university/g as needed.

It's one of those classic situations where every individual step in the process does at least kind of make sense, but the end result is bonkers.


That's the problem with "impact", they're not asking for "positive impact", and they're not getting it.


You have your choice with Linux where you don't get a choice with proprietary OSes. If you build your desktop from parts you get to keep it as long as you like. I used the same desktop, mostly unchanged for 15 years on Linux without problem. It was basic, Openbox, gkrellm (later conky), xbindkeys, etc. Never had to worry about it changing as Openbox was basically done and the other parts were mostly done as well. Only had to mess with it when I wanted to.

Linux (and FLOSS in general) gives you the choice to have it the way you want it.


Not in these days of systemd and tight integration between everything. Want to fork your window manager? Well you'd better keep it up to date with the graphics server, which needs to stay up to date with the device manager, which needs to stay up to date with the kernel, which needs to stay up to date with... you've pretty much got to either fork the whole OS or live with whatever the big contributors are doing.


Systemd is actually a great tool for building your own desktop. Its user services/units are perfect for long-running user processes. I use it extensively for my Openbox setup: running compton, conky, redshift, xbindkeys, the urxvtd daemon, setting the wallpaper, running the screenlock program, and my calendar reminder all as systemd services.

The only thing really changing right now is Wayland and the switch from window managers to compositors and the related support software. There are starting to be a decent number of options in that regard, with a good crop of lightweight compositors and tools starting to mature.


Is it true? If XFCE decides to do things I don't like, I may not be able to manage a fork.


XFCE is an integrated desktop. You are choosing the desktop and what comes with it, including how the developers decide to change it. If you want control, start with a window manager, or a window-manager-like compositor for Wayland, and build your desktop from there. There are non-desktop versions of pretty much everything you need (notification daemons, keybinding, etc.).


https://www.trinitydesktop.org/ is an example of the community maintaining a continuation of KDE 3.


It only depends on motivation and time dedication. Anyone can be a developer.

Also if you go past the 3 main desktops there are many many window managers, standalone launchers, panels from which you can pick up the pieces you want to build something you like.


The first thing I do with a Linux install is uninstall X windows.

I remember being excited when I saw the first beta test of KDE but it seemed each version got a little bit worse after that and that's been the trajectory of the Linux desktop since 1995 or so.


I don't mind X, but I push i3 or xfce and nothing more. I tried bare-console Linux, but there was too much keyboard-mapping fu and a few web-browsing facilities I didn't care to lose.


How do you surf the web or use GUI programs?


I don't. I surf the web and use GUI programs on Windows or Mac OS. I have a Linux server that runs Jellyfin and my IoT devices.

awk, grep, tail, nginx and all that kind of stuff is fine.

(I tried taking the 1050 card from my media server, which I used for AI training long ago, putting it in another cheap Linux box, and putting SteamOS on it. They claim it can run Windows games, but it won't run anything out of my Steam account, including the games that the Proton database claims work. That's the kind of brokenness-as-the-expected-condition that is endemic to the Linux GUI)

If I found a GUI app worked on Linux I'd be so surprised I'd have to file a feature report with their feature tracker.


No offense intended but it sounds like you don't know what you're doing.

Xorg has worked for decades, a multitude of window managers and desktop environments have been perfectly usable for decades, and a couple of million people (going by Steam HW survey results) seem to be able to run games via proton perfectly fine.


Do you realize that even if the marketshare for desktop linux is minimal, let's assume 0.5%, those are still millions of computers working using what you claim doesn't work under any condition?


Good take! Reminds me of that saying: "If one person tells you you have a tail, ignore it; if many people tell you you have a tail... look behind you" :P


Did you enable Proton in Steam? It's not turned on by default, though I believe that at this point they probably should. It's handled everything I've wanted to run on it.


Yeah, by default Steam only runs games that are made for Linux; you need to enable Proton to open up support for Windows games and use ProtonDB to figure out the overall support for those games. A lot work flawlessly, some have known issues, and some flat-out won't work, usually because of some anti-cheat software.


I think at this stage in the game, if you can't use a linux desktop but consider yourself to be 'in tech' in any way, the problem is probably you.

Doesn't mean you have to think it's perfect, doesn't mean you have to use it for everything, but 'can't use any GUI app on linux' is definitely a you problem.


Asking for a friend: "Is text-p0rn a thing ?"


Windows 11 had me feeling this quite a bit, particularly with settings and volume/network management.

They attempted to make the new Settings panel the 'HQ' for everything, but in the process really buried some things (for technical people) under numerous additional clicks/sub-menus, if it's even there at all anymore. I think they've been addressing the concerns with new updates, but I still find myself floundering sometimes.


There's a bunch of UI/UX regressions in Windows 11. Examples off the top of my head:

- After waking my computer from sleep, my last active window is not active any more. In fact, no window is active. I have to hit alt+tab to grab focus of the window again, or click the window (this probably wasn't caught because most people use their mouse for everything)

- There's new animations for the basic native Windows menus, including the Win+X menu I use for sleeping my computer, shutting down, etc. When you navigate to a sub-menu with the arrow keys, you have to wait for the animation of the sub-menu sliding out before you can interact with it. I've had to slow down my muscle memory to sleep my computer because otherwise I'll hit random other menu items cause the child menu didn't slide out fast enough to keep up with my inputs.

- I usually snap windows around with the Win+arrow keys shortcuts, but they added a new snap layout where sometimes when I do Win+up it brings the window to the top half of the screen. But that's also how you maximize a window, so I don't know how it decides which one it will do. In my experience it seems to be related to your key input speed.

- Sometimes I'll crop and resize images in Paint. They redid a bunch of the UI in Windows 11, and now when I use the resize menu, I can't hit enter to confirm the size I input. Again, seemingly this UI was only tested by people who use the mouse for everything.


Just tried Win+X on Build 22621. Arrow keys and letters work right away. I didn't have to wait or slow down.


One of the best things you can install is Windows PowerToys, and PowerToys Run becomes your workhorse to get to everything really easily: web search, calculator, file search, programs, services, finding settings, finding open windows, timezones, unit conversion, launching things as admin, etc.

https://docs.microsoft.com/en-us/windows/powertoys/run and check the search commands / hotkeys

My start menu / Win+R days / opening a shell to execute some commands are pretty much all over now, and I just use PowerToys Run.


I actually reformatted my computer so that I could downgrade to W10, only because of the missing feature of being able to choose "never combine" on the taskbar. Why make a new taskbar and then make it worse?

Even after using W11 for a few months ("I will get used to it"), I still noticed every day how I felt less productive and annoyed having to hover over the icon and try to find the correct window, vs just having them all laid out in the taskbar.


Don’t fear the Wayland. Wait patiently for the pain points to be worked out, and then reap the reward of a smoother and more robust desktop.

I’ve been running SwayWM for multiple years and it’s been great. There’s not anything I’m aware of that I can’t do in wlroots compositors; I even have Zoom screenshare working on my work computer under SwayWM. Most stuff, like WebRTC, doesn’t even require manual tweaks; just need the right packages installed. And you get the typical Wayland benefits, like great support for heterogeneous DPI, reduced jank, and potentially better robustness. (Depends on compositor for now; but there is a path towards compositor crash recovery, which should make things far better.)

A Wayland compositor comparable to Xmonad will likely arise as a good successor in the future.


> Wait patiently for the pain points to be worked out

Efficient remote desktop. Won't happen, ever. Simply cannot happen.

Blits from offscreen to onscreen surfaces bypass the compositor, so it can't implement them as RDP commands and instead has to push the whole image across the pipe every time.

This is why xorgxrdp is so crazy fast and responsive compared to every single Wayland RDP server.

I use sway (a Wayland compositor) on all my machines. But all of my headless machines run xorgxrdp because there is nothing in the Wayland world that is even close. This is a fundamental problem with Wayland that cannot be fixed until there is a perspective change on the Wayland committees. Sending surface-to-surface-copy commands through the compositor clashes with their worldview, but without it we will never have decent remote display -- even (as they say) as "the task of a higher-layer protocol".


That’s fine, but applications that can efficiently do that are slowly disappearing in favor of direct hardware acceleration, with software rendered offscreen rendering as the fallback, so those days are quite numbered now.

That said, the best way to go in the future is probably going to be something more like waypipe.

edit: People seem skeptical, but…

GTK4: https://discourse.gnome.org/t/gtk4-efficiency-and-performanc...

Qt 5: https://forum.qt.io/topic/67371/x11-forwarding-slow-on-qt-5-...

Electron/Chromium: https://github.com/microsoft/vscode/issues/5243

Even my terminal emulator of choice is hardware accelerated; its only fallback is software rendering.

Hope you can stomach video codecs, because that’s where we’re headed unless you stop updating your software.


Yes, it is really heartbreaking how the Linux desktop is regressing. Instead of polishing the experience and focusing on replacing Xlib with XCB, which would make it possible to have excellent native performance, a huge amount of resources is invested in rewriting everything for Wayland. Now, more than a decade later, things are still not working properly (yes, maybe it works for you), with no credible plan for remote desktop, and toolkits are just getting worse for X11. And then I have to say, from an engineering perspective, I find X to have a vastly superior design (despite all the propaganda otherwise).


Even under Windows, GDI days are over and RDP has to work with the modern toolkits, so nowadays it is h.264 streaming anyway. So basically the same as with Wayland.


Yes, upending the whole stack takes a while. X itself took a long time too. The thing is, it’s not a race. It’s OK for it to take decades.


> so those days are quite numbered now.

Yes, I've been hearing that for 14 years now about several workflows Wayland broke (remote clients, etc.) and it keeps not happening.


Current GNOME has an RDP server built in. It is a bit cumbersome, but it is there, it works, and the cumbersomeness will be improved upon.


RDP is not remotely (no pun intended) the same thing. It solves an entirely different class of problem.


RDP is not that different from networked X11 as you think it is. RDP originally also transmitted GDI commands, just like X11 transmitted X11 commands. RDP can also remote only a single app (it is called RemoteApp), it is not bound to remoting entire desktop. Actually, Wayland apps running under WSLg use RDP underneath to integrate with the Windows desktop.


I'm happy to take bets on it.


> slowly disappearing in favor of direct hardware acceleration

In favor of software rendering, because datacenters mostly don’t have that hardware. X11 is more likely to be replaced by Javascript apps than Wayland apps because even going through a browser to a canvas is a better remoting story than sending entire CPU-rendered video frames to a desktop with a GPU that’s nearly untapped.


It seems you are implying you run X applications in a datacenter and have them piped to display locally. At that point, I have to ask, why? I’m sure you’re not alone, but this is not a common use case in my experience.

It seems like some applications have been adapting their architecture to deal with the remoting use case. VS Code is widely publicized for supporting it, but I personally also use Neovim which is capable of doing similar things, and IIRC even Jetbrains has been working on this. Of course, that covers code editing, but only code editing. So that still leaves a lot out. That said, these solutions have very good UX and portability, so it’s easy to see why people prefer that.

Let’s say the world went Wayland. What do you do about the other stuff?

Well, there’s not nothing that can be done. For one thing, while I do think that the XRender world of abstraction is basically going to die, that doesn’t necessarily mean there’s no way that remoting could be improved. For example… it’s possible that some day, a solution like VirGL could allow hardware acceleration over a waypipe tunnel.

And also, while Xorg is no longer being maintained, XWayland is still in scope for maintenance. So for the foreseeable future, you can still make use of X11 tunneling just fine under Wayland. Is this ideal? Maybe not. However, it’s perfectly practical, and therefore I don’t see why it wouldn’t be a decent solution. I expect mainstream desktops and apps to continue to support X11 apps for years to come, giving little urgency to worry all that much about the remoting case.

As usual with software, the old thing is always deprecated before the new thing is ready. I’m pretty confident that Wayland has made the right architectural choices for the future, and now all that’s left for us is to fill in the blanks.


Been using Wayland for about 6 years now. I'm not sure X11 is ready for use yet. Every time I try it I get terrible screen tearing and can't set the DPI scaling correct for my second monitor. The trackpad also seems to work much worse for some reason.


Har de har har.

This is part of the thing that annoys me about Wayland boosterism: people complaining about the need to fix stuff I can't see, don't care about and honestly think does not matter to 80%+ of people.

Screen tearing? Never seen it. Don't care.

Different DPI on different monitors? Can be fixed on X. Works in Cinnamon. Also GNOME and KDE but I find them unusable.

HiDPI support? Don't care. Don't own a HD monitor. Don't use anything with a 3D card in it. It's for gamers AFAICS. I'm old and grumpy.

It works, there are games, Steam works on Linux now.

Trackpads? Oh, those toy things on cheap plastic laptops? Don't care. I use Thinkpads. I turn the toy pointer thing off.

For me, this all falls under:

Here's a nickel, kid, go buy yourself a console.


You are just old and grumpy. There is a heck of a lot more use for a GPU other than gaming, including content creation, which also benefits from higher resolution and dynamic range displays, though X and Wayland can't handle HDR yet. Graphics libraries like Skia, browsers, the Android emulator and so on all take advantage of your GPU. OpenCL, CUDA, mixed-precision computing, transcoding, ray-tracing are all accelerated with a GPU.


Wayland can handle HDR, and wlroots/sway supports it too.


That can't be true, the Wayland people are currently working on a Weston reference implementation before changing the protocol to support HDR.

https://gitlab.freedesktop.org/wayland/weston/-/issues/467

The DRM KMS API can be used to pass HDR metadata to the monitor, so it's not impossible to write a software to take advantage of that, but I don't know any that does.


What I should have said is Sway properly supports high bit depths and color formats that can support HDR. It still has no protocol for actually negotiating HDR, but I also don’t have HDR displays, so I wouldn’t notice the difference.


> You are just old and grumpy.

Well, I know, that's why I said it.

> There is a heck of a lot more use for a GPU other than gaming

I am aware. I just contend it's mostly minority stuff.

> including content creation

Do it for a living. Have done most of the last 25Y. Make my living doing it.

I do it in Markdown these days. Mostly in an Electron thing that's 100x bigger than it needs to be, but it works. Used to do it in Word. Outline Mode is gold for a writer.

I do it across 3 displays, 2 of 'em quite big ones.

On an Intel integrated GPU. Don't need dedicated silicon for that.

> which also benefits from higher resolution and dynamic range displays

Not for me.

> though X and Wayland can't handle HDR yet.

Don't care. Don't need it. My iMac probably can; I don't care on it, either.

The only thing in this department that my iMac does better is that when I plug in an external screen, the OS figures out its size, aspect ratio, and makes everything the same size across all screens. That is a fantastic feature, benefits everyone -- unlike the stuff you're talking about -- and Wayland can't do it, nor can X, nor can Windows.

> Graphics libraries like Skia

Never heard of it. Don't need it.

> browsers

I have nice smooth GPU-assisted scaling and playback on my cheapo Intel integrated chip, thanks. Less hassle with drivers, too.

> the Android emulator

What Android emulator? When I need Android, I use my phone. It has nothing I want on any of my desktops, ta.

> and so on all take advantage of your GPU.

And a cheap simple integrated one is 100% fine for this.

The Apple Silicon Macs are making this point very well. Their GPU is relatively simple but because of fast memory, shared with the CPU cores, it works better than the phat nVidia and AMD space-heaters.

Big discrete GPUs are a temporary passing phase that'll soon be as dead as spinning hard disks: IOW, a temporary trend only found in some fancy servers.

> OpenCL, CUDA,

Don't need it, never did, almost certainly never will.

> mixed-precision computing, transcoding, ray-tracing are all accelerated with a GPU.

And are also doable on a standard CPU, with less coding effort. Also, all things I've never really needed or wanted.

For clarity:

There are uses for this stuff. But honestly, most of them are ways to try to find ways to use the big fat hot special-purpose silicon designed solely to try to make games render faster. The stuff that is needed by more or less all general-purpose computer owners can be done perfectly fine at more than acceptable performance by small, simple, cheap, electricity-frugal GPUs that can be built into a small part of a multicore 64-bit CPU's die.

And that is the pattern of the future: smaller, simpler, cheaper GPUs, without all the fancy rendering stuff, sharing the die of a CPU. We will look back at the era of big fat 2-slot GPUs with multiple cooling fans and multiple power cables with the same amused derision as we look back at big clunky slow 5.25" mechanical hard disks now.


>Don’t fear the Wayland. Wait patiently for the pain points to be worked out, and then reap the reward of a smoother and more robust desktop.

I hear you, and I know there are good parts in Wayland (I just don't know them yet), but dammit, it feels like the Linux desktop has only in the last few years started to become "nice and stable" and predictably compatible with many things, and now we're starting from scratch :(


Good news: we aren’t really.

When I first started using Linux, graphics drivers were mostly drivers for XFree86. Today, the modern Linux graphics stack with kms/drm is used by most drivers, and now Xorg can use kms+drm instead.

Same for input drivers… but now most people on Xorg are using libinput, which is what Wayland compositors use, too. Libinput handles your mouse, graphics tablet, touchpad, keyboard, etc.

Even Pipewire! When pulseaudio started, a huge obstacle was indeed just buggy audio drivers. Undoubtedly we still have those. But also, it’s a much better situation today than it was. Also, a lot of the ideas already existed in JACK, Pulse, etc.

So while a lot of stuff is indeed newer and still needing to mature, the typical Wayland desktop shares a lot in common with a modern Xorg setup thanks to gradual improvements.


This Wayland vs X11 debate reminds me of that saying: "The last of the old tech is usually better than the first of the new tech".

Sorta like how the state-of-the-art "Ox Cart" was probably better than the very first generation of automobile.

I think that is where Wayland-vs-X11 currently is.

For what it's worth, I love my Ox Cart with its years of patches and bells and whistles that I know just how to kick-and-push to make everything work, even those nvidia-drivers that came, of course, straight from hell!


Wayland has been around for 13 years now. That’s an eternity in software years. It still hasn’t caught up with X11 for many use-cases. There’s clearly some sort of problem (I suspect that it’s not a technical one).


My strategy is to keep the software for a desktop computer stable and of its era forever. If it gets so old that I'm having trouble compiling things because my glib and gcc are too old, then I'll build an entirely new desktop with an up-to-date OS and software. Then I set it up and use it till it can't do new software again. This happens every 5 to 10 years. I never lose ability. I only gain it.

There are many things my 2010 era Core2Duo running Ubuntu 10.04 can do that my fancy new Ryzen desktop with Debian 11 can't and won't ever be able to do. Things I cannot give up because they're too important for my daily life.

>And when Wayland finally happens? Well. I guess I’ll have no choice but to stop using computers forever ¯\_(ツ)_/¯

Wayland isn't going to happen. https://dudemanguy.github.io/blog/posts/2022-06-10-wayland-x...


> Wayland isn't going to happen.

I'd be very disappointed by this because I'm in a mixed DPI environment and I need its support.

Seriously though, I find hostility toward Wayland so weird. I could kind of understand it with systemd, but X seems perpetually stuck in 2005.


> Seriously though, I find hostility toward Wayland so weird.

I felt the same way until I tried to get Chromium working without it turning into a blurry, pixelated XWayland mess. I still run Sway, but the hours I've spent poring over smug "It's working as intended! X is soooo 2005 anyway" posts while troubleshooting my Wayland config have been absolutely infuriating.

I've also had to pause Visual Studio Code updates; while they have contributed a lot to Wayland support in Electron, they sure do break it with regularity.


Did you try

    chromium --enable-features=UseOzonePlatform --ozone-platform=wayland
I’ve seen this work brilliantly on some GPUs and segfault at startup on others.
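If memory serves, newer Chromium builds also have an auto-detect hint; worth double-checking against chrome://flags before relying on it:

    chromium --ozone-platform-hint=auto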


People have been trying to replace X11 since I started using Linux around '96.

The Xorg we use today is so far ahead of the XFree86 days that I find it rather humorous that people are still complaining. Sure, I can see that from a dev point of view it still sucks to work with. But from a user point of view? It does everything I need it to, and more. I can run remote apps. I can run headless X (which is how I was doing headless Chrome way before anyone else). I can run multiple X servers on one machine and switch between them (one for multi-monitor productivity, and a second for single-monitor gaming, since most games don't work great with multiple monitors). xrandr is a thing today. I can dynamically change just about anything on the fly. I haven't touched the Xorg config and modelines in about a decade now. Getting a GPU to work has never been easier. There is just insane flexibility in what you can actually do with X11, so the warts kind of just fade away.
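For anyone who hasn't played with those features, rough sketches of the sort of commands involved (display numbers and geometry are illustrative):

    startx -- :1 vt8                    # second X server on another VT; switch with Ctrl+Alt+F7/F8
    Xvfb :99 -screen 0 1920x1080x24 &   # headless X server
    DISPLAY=:99 chromium &              # run an ordinary browser against it
    ssh -X otherhost xterm              # remote app, local display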


Most folks don’t use these features any longer. For example, I haven’t run a GUI remotely since the late 90s on a SGI cluster. Everything not very local moved to the web.

Autoconfiguration is table stakes as well.


And, right, this is why Wayland adoption is so low: the thought process is always "I don't use this personally so there's no point supporting it". And, I'll be the first to say: they're doing the work, so they get to decide what the product looks like. But this is why nobody uses it.


It’s the default now in many distros. I don’t use it, but not because I care.


Yeah it's kind of weird that people want things to work as well as they do in Xorg. While I completely understand that it works very well for you, the "for you" doesn't actually make it so for me.

While I understand that it's probably my fault for not discovering the same combination of software components and configuration that you did through brute force... Xorg no longer makes me do these sorts of things. Probably because it's not in a hurricane of development where every wheel is reimplemented monthly. You want people to switch to Wayland? Restart development on Xorg and introduce lots of breakage until it's just as much of a pain in the ass as Wayland.

I've tried it as recently as this year. The deal breaker? Toolbar popups in LibreOffice would not show. While I could submit bug reports to hundreds of different projects and wait for each to be marked a duplicate of a closed won't-fix-not-our-bug-good-luck-figuring-out-whose-bug-it-is-because-they-sent-you-back-here, I just don't have the patience I used to have for that. I've got to get something done. So, back to Xorg until some update discards my preference and loads Wayland again.

Yeah, why should I be mad? Just be patient another 13 years for the pain points (which simultaneously don't exist) to get worked out.


> Yeah it's kind of weird that people want things to work as well as they do in Xorg. While I completely understand that it works very well for you, the "for you" doesn't actually make it so for me.

But the converse is also true. Just because it doesn't work for you, it doesn't mean that it's in and of itself bad. Why would you be mad at Wayland if Xorg works for you? In that case it seems like there's really nothing at all to comment on.

However, the fact that there is so much discussion around this leads me to believe that X doesn't work for many people and that seems to be the fault of Xorg, not Wayland, so I don't understand why Wayland comes in for so much of the criticism. Wayland devs identified some problems and set out to fix them, this shouldn't really have any impact on Xorg at all, right? Just continue to use X.

EDIT: And to be clear on a point, it's not that I think Wayland should be immune from criticism. It's missing features and there are probably things that are broken. But the tone of the discussion (and of other comments in this thread) is that Wayland is bad for the community. That its very existence is somehow an affront. That's what I don't really get.


> Seriously though, I find hostility toward Wayland so weird.

I think the hostility towards Wayland is pretty justified. It's a backdoor power-grab by the GNOME foundation and Red Hat, much like Flatpak, Libadwaita, and to a limited extent, systemd. I, like many others, am totally exhausted of GNOME trying to be the center of the desktop universe. Every couple years, they decide to redouble their development efforts on some useless, stopgap tool that ultimately ends up being underdeveloped and redundant. Making matters worse, they announce $PROGRAM to be the next big feature of Linux, and anyone who's refusing to embrace it is a luddite. In reality, most people can't switch to these alternatives because they're niche, and don't provide the same degree of functionality as their favorite Window Manager.

Furthermore, people don't hate the idea of something replacing X; people hate the fact that Wayland has been in development for more than a decade and is still considerably worse than Xorg, with objectively fewer features and less functionality. Adding insult to injury, a majority of these omissions were deliberate choices by the maintainers: the GNOME desktop doesn't need them, therefore everyone else supposedly doesn't either. Take AppIndicator support, for example. Everyone has statusbar icons: Mac and Windows users alike deal with them daily. When developing Wayland, though, AppIndicators were deliberately dropped because GNOME didn't intend to use them. Worse yet, the maintainers refused to even support them in wlroots, their pittance of a cross-platform desktop library.

> I could kind of understand it with systemd, but X seems perpetually stuck in 2005.

X is indeed terrible software, and its functionality is stuck not just in 2005 but in the mid-90s. I really hate Xorg, which makes it even more infuriating that Wayland:

a. Doesn't support my hardware

b. Doesn't support my desktop environment

c. Makes it harder for me to stream my display, take screengrabs, and use my webcam

d. Deliberately removed functionality that I use on a daily basis, forcing everyone adopting Wayland to write their own implementation of a basic feature.

If I didn't know any better, I'd accuse Wayland of being a project deliberately designed to sabotage desktop Linux. It's a project with less ambition than Quartz, and less hardware/software support than X11. It has a weaker security model than the compositor in macOS, and manages to have fewer features than even the compositor in Windows. How is that closer to "the future" than a feature-complete desktop from 30 years ago?

The only truly excellent thing to come out of Wayland was PipeWire. But PipeWire works just fine on Xorg machines too, so I guess we're at an impasse. Wayland fractured the Linux desktop for good, there is no "way forward" anymore.


Yet... it's still the only way I can use mixed DPI displays. I'm sure all of these concerns are justified in one way or the other, but no one's really given me a lot of other options. A quick environmental scan reveals two serious contenders: Xorg and Wayland. Open source precludes the idea of a grand conspiracy, so I'm not sure why no one's working on an alternative if it's as bad as you say.

It's a bit like complaining about factory conditions in China. I'd sure like to buy a computer from somewhere else, but I'm sort of left without many options, so I shrug my shoulders, hold my nose, and hope the market sorts it out. Ironically, this is one of the issues that puts me off upgrading to another HiDPI display, thus perpetuating my need for Wayland...


I'm not taking away any options from you, you're welcome to use whatever tools work for you. You shouldn't take it as a personal attack when someone suggests that one of your tools could use improvement, and by reaching consensus that Wayland needs more features and hardware support we can send a message to the community that work is far from done.

Linux display servers need a lot of work. HDR content is right around the corner, and nobody in the Linux video stack is prepared. Wayland spends too much time twiddling its thumbs and making life hard for the rest of the Linux community, and Xorg's maintainers are gone. If you're going to use Linux, then by all means, use what works for you. That's the benefit of modular OSes! But we still need to push for more active development in this space. If Xorg is dead, then a lot of work needs to be done on Wayland to get it up to speed. If Xorg is not dead, then we need to find someone to fix its longstanding issues.

> It's a bit like complaining about factory conditions in China.

Not really. It's more like complaining about a missing feature, say, thumbnails in the filepicker. At first it seems like such an egregious omission that it had to be a bug. But then people defend it, saying "it's not that big of a deal!" When you try to get people to corroborate your claims, people label it as hate speech. When users contribute code, fixes, patches and solutions, you see them all get turned down.

The goals of commercial interests, Linux software developers and Linux desktop users have never been more at-odds. Without a clear path forwards, we can't expect anything to get done. I think it's okay to beat a drum about this stuff online, because it's completely germane on a subject like this.


> Yet... it's still the only way I can use mixed DPI displays. I'm sure all of these concerns are justified in one way or the other, but no one's really given me a lot of other options.

I believe you, but personally buying my monitors in matched sets is easier than giving up network transparency, and I don't think that's an outrageous position worthy of the scorn that the Wayland folks seem to have for it.

> A quick environmental scan reveals two serious contenders: Xorg and Wayland. Open source precludes the idea of a grand conspiracy, so I'm not sure why no one's working on an alternative if it's as bad as you say.

There's no money in it and it needs a lot of boring grunt work. RedHat may be half-assing Wayland but that's still 0.5 more asses than anyone else is putting into anything.


I don't think Wayland, or even GNOME is much of a conspiracy. The people who spearhead these kinds of consolidating efforts(udev, systemd were similar stories) are always going to fall more into the empire-builder category than most people. But it's open-source: to win the category you have to make a public good. So at the end of the process, you have working software and a standard. The standard has to be at least somewhat better for most people, or it falls into the bucket of dead standards.

But Wayland's definitely a big one, touching really old assumptions around Linux desktops. There is plenty to start fights over. I still can't quite use it for all my apps because some stylus apps behave poorly. So I think I'll be in the latecomer camp. This is not a bad thing for me: it just means I've organized my life around using the tech to get a good result now, rather than putting my energy into developing the tech. I was on Windows for the longest time for the same reason.


This is just a shitty conspiracy theory in your head.


lawl - wayland has already happened.

Also, and more seriously - I think I'm not going to take you at your word on this one:

> There are many things my 2010 era Core2Duo running Ubuntu 10.04 can do that my fancy new Ryzen desktop with Debian 11 can't and won't ever be able to do. Things I cannot give up because they're too important for my daily life.

I'd love to see a real example instead of this pithy line. My strong (STRONG) suspicion is that anything you can do on that old machine can be done on the new one just fine - although you might have to adjust a bit or learn a new tool, and that can be painful and annoying.


Here are 3 concrete examples of everyday tools with special abilities only feasibly installed natively on old linux distros.

1: I use text to speech a lot. For the last 20 years I've used Festival text to speech on Linux. Specifically I've used Festival 1.96 because it supports the best sounding voices, the enhanced Nitech HTS voices. No other voice set, not CMU Arctic or any other, sounds as good. Modern distros don't package 1.96 and I can never get it to compile. The Nitech HTS voices don't work in Festival 2.0. There's no other mature local Linux text to speech software that sounds decent.

Every time I build a new computer I survey both the Festival 2.x voice sets and other Linux TTS software, including any new weird stuff (like Google Tacotron, etc). I've never been able to replicate the functionality of my old Linux install (or get Festival 1.96 compiled on a newer distro). I really would like the same functionality on a newer distro and I spend a lot of time searching and trying. So if you know an answer to this I'd love to hear about it (anyone).

2: It might be possible for me to get GNU Radio 3.6.5 installed on a newer distro, but I laugh at the idea of getting all the out-of-tree modules from that era to compile on a modern distro. It's the norm for GNU Radio modules to be left behind and abilities lost with every minor version.

3: The Python 2.4 software imgSeek, which indexes images and lets me search for them by drawing MS Paintbrush-style colored sketches.


>I use text to speech a lot.

Honest question: for what? Are you disabled?

Sorry, I'm not sure what the PC way is to ask that question. I'm purely fact-finding, and English (and PC) is not my first language.

PS. Not saying you are wrong to use it like that, but just out of interest, would it be possible or worthwhile to just run a VM for that?


Last question: I'm just damn curious and not trying to be a dick. Apart from the Core2Duo and the ancient Linux setup (no judgement):

Are there any other parts of your life (non-computer related) where you're also running super-old tech or versions?

Lol, I'm really really not insulting you, I just want to know (learn). There is much to learn from extreme outliers.


Wayland is at best a work in progress.

And if you consider that they made Wayland the default on GNOME to force adoption, and that users usually go to the Internet seeking advice on how to switch back to X11 on GNOME, we will probably wait a long time for Wayland to happen.


The silent majority is just using wayland without complaining because it just works.


>The silent majority is just using wayland without complaining because it just works.

Huh! That is not a fair argument. The silent majority is by definition impossible (or very hard) to find and confirm. Since, you know... they're silent and all!


I’ve been using it happily for over a year with no real issues. But I do share your concern. I like the variety and choice of WMs on Linux and the Gnome / Wayland crowd do seem to add friction to the other contenders.


> I like the variety and choice of WMs on Linux and the Gnome / Wayland crowd do seem to add friction to the other contenders.

This is a big problem, because the friction they add on purpose hurts users of other WMs/desktops, who don't get their preferred WM/DE working correctly.

And it also hurts other open-source developers, because it makes it look like the only ones doing something right are the Gnome devs, when in fact the other devs are having problems only because of the Gnome devs.


The Gnome devs do not force anyone into anything, especially regarding parts of the Linux desktop they don't develop themselves.

This is just conspiracy theory.


>There are many things my 2010 era Core2Duo running Ubuntu 10.04 can do that my fancy new Ryzen desktop with Debian 11 can't and won't ever be able to do. Things I cannot give up because they're too important for my daily life.

This has me curious: What sorts of tasks are you doing that only work on a machine like this? I love finding new things to do with old hardware, so I'm interested in your insights.


I'm going to guess it's a Thinkpad. x200? t410? Maybe even an x61. People (myself included) still use those, not because they do anything more in particular, but because they feel great to use: nicely weighted, with a stunning[1] keyboard.

[1] ymmv, but personally the keyboard on an x61 is perfection; the screen, however, is very very very far from perfection.


Man I loved my x61. It was like the duracell bunny. It was in my apartment in 2011 when the building burned down. The roof collapsed on it and the only thing I had to fix was the screen backlight. I used that laptop until 2020.


If it's the feeling of the hardware alone then the Thinkpad could be used as a thin client to access a modern, more powerful system. Personally I'm curious about the "many things" the C2D with Ubuntu 10 can do that the new one can't - is it some legacy software that can't run on a newer os, or something else?


I love the point in there that it isn't actually xorg vs wayland, it is xorg vs dbus. Wayland is so deficient that basically everything has to depend on dbus if it doesn't want to use X. This framing clarifies the issue substantially, because whatever people think about wayland, they might have some slightly different opinions about dbus.


> slightly different opinions about dbus

Recently i updated my machine and it would fail to boot because NetworkManager-wait-online.service's invocation of `nm-online -s` would fail even when NM was connected, even when `nm-online` actual liveness check would succeed.

I spent hours reading NM code, wading through auto-generated GObject introspections and their XML bullshit, trying to figure out why org.freedesktop.NetworkManager.startup was true, what the magic numbers in org.freedesktop.NetworkManager.Connection.Active.StateFlags meant, and why my desktop wouldn't boot. I couldn't even find what could cause the state to change before I finally gave up and patched nm-wait-online to just invoke the codepath that did an actual liveness check rather than bumble through a bunch of dbus interfaces.
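For anyone hitting the same thing: the less invasive version of that patch is roughly a systemd drop-in that swaps the -s startup check for the plain liveness check (path and timeout are illustrative; check your unit's actual ExecStart first):

    # /etc/systemd/system/NetworkManager-wait-online.service.d/override.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/nm-online -q --timeout=30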

Gotta say I was missing the old KDE3 dcop after that... IPC is still such a PITA on linux.


Most of the bigger distros have already switched to wayland. It's pretty much inevitable that the rest follow eventually.

https://www.ubuntubuzz.com/2021/10/distros-which-adopted-way...


Well, most distros also ship X and allow you to choose at login. I'd imagine quite a lot of people are still running X since there's always something that does not play nice with Wayland.

My favorite: drag-and-drop from file-roller into nautilus. [1]

[1]: https://gitlab.gnome.org/GNOME/file-roller/-/issues/4


Drag & Drop is an action I had to consciously work to start using when I switched to macOS, a little over a decade ago. Windows and Linux had trained me never to use it, aside from moving files around in Explorer or whatever, because it so often caused crashes, did the wrong thing, or made programs glitch out in weird ways (that last one was mostly Linux). Sure enough, I found a repeatable drag & drop application crash in KDE in my first few minutes of use, last time I poked my head into the desktop Linux world (Ubuntu, in this case) again, a couple years back.


How did Windows train you not to use drag and drop? It's never not worked. Same with OSX. Yet on Linux you have to live without it; asking questions on reddit or discord/irc results in 'use the command line to move the file'... which is a bit absurd.


An example: drag & drop from 7-Zip or WinRAR causes an extraction to a temporary folder, and the files are then copied from there, making it take much longer than doing "extract to folder" directly.


Laughed at the "switch" in Debian being pointed out: GNOME lost its position as the default DE there after the smooth-as-sandpaper transition to the 3rd version. Imo very relatable.


Have you considered pulling your Ubuntu desktop into a VM running on the Ryzen? Let it transcend the mortal shell in which it was born.
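A rough sketch of one way to do that, assuming qemu/KVM on the new box (device names and sizes are illustrative):

    sudo dd if=/dev/sdX of=old-desktop.img bs=4M status=progress   # image the old disk
    qemu-img convert -O qcow2 old-desktop.img old-desktop.qcow2
    qemu-system-x86_64 -enable-kvm -m 4G -smp 2 \
        -drive file=old-desktop.qcow2,format=qcow2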


Your now-immortal VM could run faster while using less power, even with the virtualization overhead.


Wayland doesn't need to be perfect, it just needs to be an improvement over Xorg, which is hampered by its complexity and backward compatibility.


I do not think the criticism of Xorg is really correct. At its core, it is actually a nice and well-designed protocol. It is also rock-solid, works without any problem in all my configurations, and supports remote desktop.


> Wayland doesn't need to be perfect, it just needs to be an improvement over Xorg

Agreed, but it shows no signs of ever achieving that. Yes, it has mixed DPI support, which is not nothing, but that doesn't generally make up for the areas where it doesn't have parity with Xorg and never intends to reach that parity. (Network transparency, screen recording, ...)

> which is hampered by its complexity and backward compatibility

Users don't care about technical elegance. If and when those things start causing user-visible issues with Xorg then Wayland might get a chance.


Wayland is already here, and swaywm is the number one reason why I’m sticking with Linux instead of falling back to MacOS.


Are you not a little worried about running such an old OS that no longer receives security updates? Is it airgapped/offline/isolated?


My desktop has been Wayland/sway for years and years, and it's incredibly stable (I am using Arch btw)


Unless you have an nvidia graphics card in which case you can't use Wayland, or if you need to do screen sharing, or have multiple monitors and want to do screen sharing.


Or want to run a program on one computer and have its GUI display on another (people still act like VNC/Spice is an answer here but it's parsecs behind). Or want to write a client that can tell if it is in view or not. Or...


People who have been choosing nvidia gfx cards to run a Linux desktop over the last 2 decades are the kind of people who tend to accumulate shitty life decisions so they can pose as victims and blame the state and the whole world for being against them.

You can't really raise the nvidia flag in an argument in a Linux desktop discussion unless you want everybody to know you don't really want things to work well.


I learned the hard way. :(


1. Don’t have an nvidia graphics card

2. Screen sharing just works


1. On my desktop I had a Radeon 5700XT: no issues in Wayland except screen sharing. On my laptop I didn't know that Wayland+nvidia sucked. Wow, I opened up Firefox and the whole DE froze for like a minute.

2. Na, it doesn't just work. If you use Firefox, Google Meet cannot detect the screen. If you use Chrome but have multiple monitors, it only detects the primary monitor. It's such a pain when Xorg just works.


What do you mean Wayland isn't going to happen, I've been using it without major problems for years.


You and roughly one-fifth of the Linux desktop userbase. And good for you! I'm really glad it works for your use cases. But when people say "Wayland isn't going to happen" they mean that one-fifth is going to asymptotically approach two-fifths, maybe, over time. Until Wayland has the capabilities of X ca. 2008 it's not going to replace X in actual use.


> I stole the calc bash function from Addy Osmani in 2012 and have used it daily since.

Do yourself a favor

  # ~/.bashrc
  function py {
    python3 -c "from math import *; print($*)"
  }

  $ py 3 + 3
  6
  $ py "sqrt(7)"
  2.6457513110645907 
  $ py pi / 5
  0.6283185307179586


This is very neat at the CLI. However I always keep a terminal tab/pane with ptpython open. Add math, os, sys, datetime, work libs, etc. to your startup file (see below). With pip, it’s incredible.

https://www.assertnotmagic.com/2018/06/30/python-startup-fil...
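For the plain python REPL, the mechanism in that link boils down to something like this (file name and contents are just an example; ptpython has its own config file):

    cat > ~/.pythonstartup <<'EOF'
    from math import *
    import os, sys, datetime
    EOF
    export PYTHONSTARTUP=~/.pythonstartup   # e.g. in ~/.bashrc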


I often forget to quote *, which results in multiplications not working when using the function approach mentioned above. No such issue with the startup approach. Plus I can store the results in variables and play with them (anything more complicated is better suited to LibreOffice Calc).


alias calc='noglob calc-actual-implementation'

Disables globs for that particular command in Zsh.


I write out calculations in an emacs lisp buffer. This lets me adjust them, correct typos, or save the whole thing.


I have a similar one!

    m () {
       node -p "$*"
    }
Not as elegant because it doesn't import all the Math functions, but it serves me for small calculations!
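An untested tweak on the same idea that pulls Math's members into scope so you can write sqrt(7) bare:

    m () {
        node -p "(Object.assign(globalThis, Math), $*)"
    }

    $ m "sqrt(7)"
    2.6457513110645907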


Why not just use bc -l?


(not OP, but I have a similar shortcut)

Mostly because I am familiar with Python and I don't want to learn the syntax for the advanced expressions I will seldom use. c() and s() instead of cos() and sin() are not too bad, but if I have to do a specific rounding/truncation/modulo, I probably know how to do it in Python and I definitely don't know how to do it in bc syntax.

Of course a similar shortcut can be defined for your language of choice, if that's JavaScript/Ruby/R/... then you probably want to use that.
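For comparison, the same call both ways (bc -l's one-letter names and fixed scale take getting used to):

    $ echo "s(1)" | bc -l
    .84147098480789650665
    $ python3 -c "from math import *; print(sin(1))"
    0.8414709848078965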


This is not an argument for unchanging tools. It's an argument for becoming a power user. Pick good, powerful, customizable tools. When they were made is immaterial. Learn them to expert level.

The "innovations" toolmakers try to push on you will almost never be worth the cost of losing all your knowledge, customization to your uses, and muscle memory.


I honestly fail to see how using a different terminal, typing "spell word", looking through a list of suggestions, then entering a number followed by another keypress to copy said word into the clipboard, and finally pasting the result where I actually want it - all provided I actually know and notice the potential spelling mistake - is in any way, shape or form superior to a system that highlights spelling mistakes automatically and presents fixes with a single click.

> Pick good, powerful, customizable tools.

In other words: know ahead of time what you need and how to achieve it efficiently. Discoverability on the command line is atrocious, which might be the reason it feels like such an achievement to finally learn that "tr" stands for "translate or delete characters" or how obscure command line options of grep or cut work to massage text files.

> The "innovations" toolmakers try to push on you will almost never be worth the cost of losing all your knowledge, customization to your uses, and muscle memory.

One could just as well argue that the mental resources and time needed to acquire these specialised skills are better spent to solve actual problems as opposed to do the same thing over and over again manually (otherwise they won't become "muscle memory") using archaic tools designed for completely different systems.

There's another side to the story and I remember vividly that programmers in the late 1980s and early 1990s argued the exact same way when defending their continued use of assembly language versus high level languages.


Shame on the OP. Emacs is 46 this year. As is vi (which is not Vim), but we all know Emacs is much better.

OTOH, ed is 3 years older (Wow! 3 years between ed and Emacs!), and I'm very happy I don't use it today.


Interesting how you only see Vim as Vim but then treat all Emacs implementations as one unit. Emacs back then was probably very different from today's GNU Emacs too, considering it was originally just a set of macros for another editor.


GNU Emacs 1.0 was released in early 1985, making it 6 years “better” than Vim ;-). Multics Emacs is from 1978 and, as a standalone editor written in Lisp, it can be called an ancestor of GNU Emacs and hints towards what its most illustrious descendant would be. You are right in that there are many Emacsen, the same way there is more than one Unix.


> Shame on the OP. Emacs is 46 this year.

In the Vi user's mind, there is no Emacs.


I love ed unsarcastically.


After all, ed is the standard text editor.


The Lindy effect is horseshit. Things that are good tend to be old because of infant mortality: the bad ones died young.

There are plenty of old tools in consistent/present use that are cumbersome wrecks, too. Curating good things and calling them some buzzword is silly.


> The Lindy effect is horseshit. Things that are good tend to be old because of infant death.

The Lindy effect doesn't say that the things that have been around a long time have to be good. It just says that their expected lifespan is longer.

> There are plenty of old tools in consistent/present use

...that are examples of the Lindy effect even though they are cumbersome wrecks. The point is that they are still in consistent/present use.


It's a heuristic, not a rule.


Interesting that the author singles out using a search engine as a calculator as a bad idea. I do use proper calculators sometimes, but what makes Google attractive as one is that it can handle both unit conversions and natural language. "30 milliliters * 50 in ounces" is an example. It's surprisingly flexible in what it recognizes.


I find myself siding with the author of TFA, even if I don't use the tools he uses, but...

... the search engine complaint strikes me as odd. For me, Google doesn't break my flow at all. I find myself searching the web all day long, as part of my job (and basically everything else, as well). My flow is searching the web a lot, typing some code, googling, typing some more, etc. So it's a completely natural extension of my flow to have Google do my calculations and unit conversion for me.

When people talk of futuristic scifi interfaces, they think some disembodied human voice (frequently female) complying with voice commands. But the reality is that Google is already here and I can have it convert between units, do calculations for me, even roll some dice! And I accomplish this just by typing natural language sentences!


I literally have command-line commands that search Google and YT so that I can quickly pull up help on whatever my current issue is.
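Roughly this sort of thing (the function names are mine; jq is only used here for URL-encoding):

    g()  { xdg-open "https://www.google.com/search?q=$(printf '%s' "$*" | jq -sRr @uri)"; }
    yt() { xdg-open "https://www.youtube.com/results?search_query=$(printf '%s' "$*" | jq -sRr @uri)"; }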


Agreed. One thing I do wish Google forex conversion would do is show the exchange rate "both ways".

Say I want to do ZAR (South African Rand) to USD. Google is quick to provide an answer, but somehow I always manage to mess up the order.

I type ZAR to USD in the search box, but what I really want (and should have learned by now) is USD to ZAR.


GNU units may be of interest here.
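It handles the example upthread, e.g. (note you want floz; plain oz is mass, so it won't convert):

    $ units '30 ml * 50' 'floz'
            * 50.721034
            / 0.019715686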


That's what the article would prefer people to use. Google though is usually already present when I am at a computer, its output is easier to read, and its input is more forgiving.


Honestly, I didn’t even see that this is what the article recommended. That having been said, I’ve found myself annoyed at Google’s calculator; it switches to scientific notation before I want it to, and it has trouble with certain inputs that a real calculator does not. So I guess you get your tradeoffs.

That said, I really doubt that a comparable tool to Google’s calculator couldn’t be made, so I hope someone tries to do it. Not because I mind using Google for that (I don’t; I use it at least half of the time) but because a purpose built, configurable, extensible tool could probably go quite a bit further.


The Lindy effect is a predictive tool. Whether something is good or not is much more complicated.

For one thing, switching GUI tools has almost no cost, if that tool doesn't have a significant amount of non-ephemeral user content.

It may be different for people with a strong muscle memory, but I can switch calculators or dictionary apps at any time. Basic GUI apps aren't skills you learn, they're things you get vaguely familiar with, the discoverable UIs guide you even if you don't know what you're doing. The learning time is minutes to days at most.

If I have to learn a new app in 3 years, that's fine. It won't take me much more effort than it probably would to maintain the config for enough Vim plugins to get it to act somewhat like I want it to.

I could probably even switch away from something as big as LibreOffice without trouble, if they used the same file formats and actually gave a reason I might want to switch.

Plus, Android itself is still new, and for most things, Mobile is what really matters to me. Note taking is worthless if I can't access or write down the notes when I think of them or want to check them.

Perhaps if I was doing more advanced programming, more of my notes would be taken at a keyboard?

These simple old tools seem really use case specific. Like, speed of text editing is less critical if most of what you do is interact with modern frameworks, where things might change too fast and the projects might be too big to memorize, and you're relying much more on IDE features to help you, and spending 2x as much time researching as actually coding.


What if you want to take notes that are not just text?

Diagrams, photos, images, equations.

How do you vim your way out of that?

If they are going to use Lindy arguments on note taking, then it should be pen and paper.

If you still want to use technology, then use pen and paper to take notes and take a photo of the page afterwards.


The author is just another young guy trying to get attention - but not realizing that there's a big world out there and not everybody works the same job with the same workflows as he and the people in his bubble do.


This article proposes that the user set up (and remember) a bunch of alternative utilities that are no better, or at least no less laborious, than a Web search.

"I’ve combined XMonad and Chrome to get little floating web apps all over my desktop"

WTF, that is the last thing I'd want. The world (even the Mac world) has finally moved away from the asinine floating-window fad.


Long-lived, stable tools are great things. However, it's also not great to be stuck in your ways and being unwilling to adapt.

> The problem with most notetaking apps is editing text outside Vim breaks my brain.

I see this as an unwillingness to learn. I felt like the tone of the article was of the sort where I was just there to be told that I'm inferior for using a mouse.

Microsoft Word is actually an older program than vim (not older than vi), so obviously the author should switch to Word to take notes instead of vim.


Also - as much as the Vim/Emacs folks love to bash on the mouse... it turns out a pointer is actually an incredibly good tool for doing most of the things you might want to do with one - like select some text, or change cursor position, or quickly and accurately select an item from a list of similar items.

Having grown up with a mouse playing fps/rts games, I don't really get the hate. A mouse is an excellent tool.


On text selection and movement:

- Repositioning the cursor by incrementally searching for the word you want to move to is more accurate, quite fast (3-4 letters is virtually always sufficient to find the word in question), keeps you in the text / document "head", and leaves your hands on the keyboard.

- Yank (y) + movement (w for word, ) for sentence, } for paragraph), or 'yi<modifier>' for copying within delimiters (quotes, braces, angle-brackets, parentheses, etc.), is also amazingly fast.

- Visual mode can be used to select specific blocks of text where necessary.

- If I want to select specific fields from a file, I'll usually either use tools for that purpose (cut, sed, awk), or read in the entire file and edit it down from within vim. The tools-based approach means I can reproduce the task readily if I need to do it repeatedly.

And finally, where I'm dumping text from some GUI app to vim ... it's still usually easier, faster, less error-prone, and more reproducible to grab the entire screenful of text without specifically selecting what I want (e.g., C-A or Cmd-A (Mac)), and then trim to size within vim. Again, vim's tools permit working with text as text rather than with text as a GUI presentation.

I'm not saying I never use the mouse for text selection, but it is far less common than you might think, and much more cumbersome.


Cool! But that doesn't at all touch on my point. I'm not arguing you can't use vim keybinds, I'm saying the mouse is good at what it does.

You might struggle to use a mouse (lots of folks do) but for those of us that grew up using it as a much more accurate pointing tool... I'd wager none of the things you've listed present much of a challenge or thought.

Like this:

>it's still usually easier, faster, less error-prone, and more reproducible to grab the entire screenful of text without specifically selecting what I want (e.g., C-A or Cmd-A (Mac)), and then trim to size within vim.

That's funny - I found it incredibly quick and easy to simply select that portion of your comment with my mouse. I didn't have to trim, or take extra, or think about it. I just selected the content I wanted with the mouse in about a quarter of a second.


1. I've been using computer mice for something north of 30 years. I'm reasonably proficient. I've used vi/vim for roughly as long.[1]

2. Text-based selection and interaction work at the context of text. It's really hard to express how powerful this is if you've not internalised it.

3. On many GUI-based devices, the mouse is no longer the mouse, but the finger. This raises a whole host of additional issues:

- You're now covering up what it is you want to select.

- Your selection is no longer based on a single pixel (the apex of the pointer itself) but a region.

- There's the inherent ambiguity between selection and movement. That is, when, where I'm touching the screen, do I intend to scroll the display and when do I intend to select or interact with elements on it? In over a decade's use of touch devices I've yet to see either hardware or software which isn't subject to this failure, constantly.

- Hardware keyboards are highly effective for text input. (I'm writing this on an Android device with an external keyboard. Touch keyboards ... suck and blow, as the saying goes.)

You might struggle to use vim (lots of folks do), but for those of us that grew up using it as a much more accurate text-manipulation tool ... I'd wager none of the things you've listed present much of a challenge or thought.

The point of vim (or Emacs) is that those keystrokes simply become internalised. I don't think through actions, they simply happen.

In a GUI, I'm constantly fighting the interface.

________________________________

Notes:

1. Along with a whole host of other editors, virtually none of which are currently extant or readily available: Wordstar, Mac Edit, MacWrite, DOS Edit, WordPerfect, AmiPro, the TSO/ISPF editor, VAX EDT and EVE, emacs, multiple generations of MS Word, Notepad, MS Write, the whole StarOffice / OpenOffice / NeoOffice line, etc. Vi/vim's pretty much always present, always works, and has evolved incrementally over the 30+ years I've used it such that I'm never faced with the prospect of discarding accumulated technical-knowledge capital.

Yes, the first few weeks were ugly. Steep learning curve. High payoff function.


> The point of vim (or Emacs) is that those keystrokes simply become internalised. I don't think through actions, they simply happen

I'm not sure; Emacs by default runs as a graphical application and has features that are often easier to use with a mouse, like tab bars or mode line buttons. It feels more at home to me there than in a text terminal, even if the mouse can work with either.


https://gvim.en.softonic.com/

I hear bicycles sometimes come with training wheels.


It's as easy as memorizing a bunch of commands and key chords!


Are you commenting on the Emacs alternative?

In which case I'd agree, if you've internalised that mechanism.


I agree. But on the other hand, I expect you're using a good mouse, configured exactly to your preferences, on your favorite mousing surface, and you have opinions about all these things. Somebody using whatever mouse was cheapest directly on their desk is unlikely to become competitive with an expert keyboard user, who can get away with a cheap rubber dome keyboard.


False on Emacs. A true Emacs user uses both schemes, the keyboard shortcuts and the mouse.


Agreed. I never understood this almost unnatural hate towards the mouse by some vim users.


Is "hatred" their language or yours?

How would you distinguish it from an awareness of tool strengths, and a recognition that, for the specific use in question, the keyboard is generally faster / easier / more reproducible?

Because really, that doesn't sound like hate to me. It sounds like proficiency.

Or is there perhaps a hatred toward proficiency?


Dude - you're currently embodying exactly my point.

You think you're proficient - and hell, for your workflows you probably are.

I'm telling you - I spent two years in Vim (it's a phase). I know how to use it just fine; it just didn't end up mattering that much to my day-to-day life (it turns out I rarely just slam something out on the keyboard non-stop).

To the point where it was basically a wash on performance: a small bump in editing speed, a HUGE loss in hardware compatibility and flexibility. To the point that I eventually stopped, because it made pair programming a complete pain. Hell, I probably still have the old copy of my config & Vundle setup somewhere on GitHub - but it just DOESN'T matter.

You've locked yourself into a world where you seem to think text manipulation is a big part of the day (and who knows, maybe you write novels or do spreadsheet manipulation all day, and it is!). But for me... I spend the vast majority of my time thinking/planning/debugging/discussing/researching/etc... Typing was less than 1% of all the time I spent at a computer. If it turns out I'm actually spending 1% instead of the 0.99999% I would using just vim.... who cares? Just seems to be you.


Again: I've used a wide range of editors. I'm using a graphical one now (the default Chrome/Android edit dialogue). I'm not at all unfamiliar with them.

When I need to do heavy-duty text editing, especially of any kind of a repetitive nature, I transfer contents to a vim session and do the work there. Even on Android. (Thank you Termux, the One App on Android That Does Not Precisely Suck.)

You seem to think I'm making any number of statements I'm not. Specifically, I'm not stating that:

- Mice are useless. They're helpful for focusing the terminal window into which I'm typing. Or playing games. Or using graphics and audio software, often. Or within a web browser (though I also use text-based terminal-mode browsers extensively).

- That everyone should use vim. Emacs users are also permitted to live. And yes, both tools have, as I've noted, steep learning curves, but that comes with a high payoff function. There are a tremendous number of people who simply don't have high technical literacy, or even functional literacy. See my "Tyranny of the Minimum Viable User" (https://old.reddit.com/r/dredmorbius/comments/69wk8y/the_tyr...), and studies of US adult literacy rates (https://nces.ed.gov/pubs2019/2019179/index.asp). Neither of those are elitism, they are an acknowledgement of the cognitive landscape and a realisation that the overwhelming majority of people will not and/or cannot use advanced tools.

- That I don't use other editors / word-processing tools / development tools. I have, and some are listed here: https://news.ycombinator.com/item?id=31771672 But having learned vi / vim, and having had it available on the majority of computing environments I've used over the ensuing 35 years, it is the tool that I, personally, find most useful and efficient. I also don't spend the bulk of my time actually typing text, but when I am, vim most gets out of the way of my actually doing so. Vim's design philosophy is that even when you're in the editor, most of your time is spent not typing but editing (command mode), and that is in fact the default mode of vim. You leave command mode to insert or otherwise add or modify text, then return to where you can read, search, or make other adjustments.

Finally, there's that efficient use of time thing. It turns out that over a 30+ year career the hardest parts aren't learning new tools (and as noted, many people seem to have a strong resistance to actually learning vim), but forgetting the old ones. As I've gone through multiple generations of computing platforms (Commodore PET, CP/M, DOS, Windows 3/95/NT/etc., VMS, MVS, Apple II, Mac Classic, OSX, PalmOS, iOS, Android, ...), what's struck me is how non-durable much of that knowledge has been. Ironically, one of the first computing platforms I'd been exposed to was Unix (on teletypes no less), though it was another decade or so before I was truly using it. And again, that knowledge has built incrementally over decades with very minimal resets.

Vi/vim being amongst those jewels. Your "phase" lasted two years. Mine's lasted 32. Vive la différence.

And, moreover, why does what works for me bother you so much?

The two or three weeks I struggled as a first-year uni student have paid off more than virtually anything else I learned in my years at school. Seriously.


The corollary to that is that unless you go out of your way to create a "cool" desktop for yourself, the desktop you use will change every few years, usually breaking your habits and becoming less and less usable. For example, I've been using Pop!_OS for a while. It has the terrible GNOME flaw that you can't see thumbnails in the filepicker; you can only see a preview of the image you currently have selected. Or you could, a few months ago. Ever since a relatively recent update, I can't even see the preview anymore. They made something that was bad for years even worse. Same thing with Windows. I can't find my way around the new options or settings or whatever that is.

I'm trying to slowly move towards software that doesn't change out from under me. I'm still relatively young, but I don't want to spend my whole life adjusting my habits to new random changes.


And get off my lawn!

Counterexamples - technologies that lasted a long time, then hit a hard dead end.

- NTSC / PAL video.

- Audio on magnetic tape

- Video on magnetic tape

- Manual transmissions in cars

- Daily newspapers

- Mimeograph machines

- Asbestos


Aaaargh, I can not help myself!

* NTSC / PAL video, and audio / video tapes

Better isn't always better. This is a philosophical point, so I'll just give the gist: I believe that advancing something doesn't necessarily make things better for the user. For example, Netflix is by technical measure "better" than going all the way to a video shop with your friend, finding out they don't have what you really wanted, spending ages deciding on what to watch, going all the way back home, and sitting through the film in one sitting. Which experience did you prefer though, and more importantly, which one was better for you as a person?

* Manual transmissions in cars

What? I know in the U.S.A. most people drive automatic cars, but at least in Europe this isn't so. It's not because they're not available, it's just that nobody wants to drive them. Personally I find them boring and toyish, but I can't speak for the reasoning of others.

* Daily newspapers

There was _a lot_ wrong with newspapers back in the day. One huge thing they had going for them though (which nobody realised was even in question at the time) was that they were written by professional journalists, and they had standards. Now my "news" is given to me by half literate emotional reactionists.


Re examples 1 and 3: Your judgment does not change the fact that those are, as GP put it, "technologies that lasted a long time, then hit a hard dead end".


News media hasn't deteriorated the same way everywhere.

I live in Europe and still get most of my daily news from the morning paper - a paper with professional journalists and correspondents with standards. The articles are also available on their (paywalled) web site, but I prefer traditional paper.


> - Manual transmissions in cars

Only in the US; in the EU about 80% of new cars have a manual transmission.


In the U.K. in the 00s and before, automatic cars were really rare. Surprisingly though, over the last 10 years they've increased in popularity and have overtaken manuals for new registrations.

Not aware of any EU stats.

https://www.autocar.co.uk/car-news/new-cars/analysis-are-man...


Why is that surprising? Over the same period they've also demonstrated better fuel economy and performance than manual transmissions, so it makes sense that they're more popular now.


This was driven by the cost of petrol rather than convenience.

An automatic transmission is a non-trivial amount of weight--especially on the small cars present in the EU--and affects gas mileage tremendously.

Driving a manual in heavy stop and go traffic, however, sucks rocks whether you are in LA or Palermo. You are continuously playing the "rolling game" in order to minimize stress on your clutch to avoid burning it up.

I presume these same forces will be the ones that cause the EU to switch to electric cars first.


> An automatic transmission is a non-trivial amount of weight--especially on the small cars present in the EU--and affects gas mileage tremendously.

This is a case of 'citation needed'. It's not like manual gearboxes are weightless. Then there are the likes of CVTs.

What they do add to is manufacturing cost. A simple manual transmission is, well, simple, and has looser tolerances (the driver compensates). Automatic transmissions require more exotic materials - in the past they had to use whale oil, which drove their costs to unsustainable prices in non-US markets.


Probably read from here: https://carfromjapan.com/article/industry-knowledge/percenta...

> Surprisingly, the fuel prices in Europe are quite exorbitant. The automatic cars tend to be heavier and can even lead to a loss in the drivetrain, thus more of fuel consumption. On the other hand, the manual is much more economical and affordable to drive. The technological advancements are making the automatic more fuel efficient though.


Whale oil? Not since GM Type A ATF fluid, 1949. The Buick Hydra-Matic of 1939 used some whale oil, but that was rapidly superseded.


Can’t comment on weight, but on efficiency, automatics have better mpg than manual. Are clutch burns weight related? I’ve not heard of them coming from urban use, only track days. There was a small resistance to young people learning automatic as it would be a limit on their licence but that seems to have gone away in the last decade.


> automatics have better mpg than manual

Only with a modern automatic, the 7-10 gear kind with a smart enough high-end controller, and even that is not a given. It's basically an epicyclic gear train + a torque converter sloshing in oil (basically using fluid as a clutch that never wears out), which has big losses overall (because fluid dynamics); the added gears + smarter control offset that, but automatics are generally less efficient mechanically than a simple manual gear train.

That said, the "automatic" moniker these days covers a whole lot of varying technologies, including CVTs and automated (single or dual, wet or dry) clutch + gear change, which are basically manual gearboxes except the stick and clutch are not mechanically user-operated but driven by servos. And there you get the benefit of the more efficient gearbox plus the smart control (a.k.a. shift to the highest possible gear ASAP when little torque is needed, instead of having the engine stay at a higher RPM, which means more travel, which means more friction).

All things being equal, MPG ratings are usually (there are exceptions) a dead giveaway of the kind of gearbox.

(Oh, and I wouldn't be surprised the least if tomorrow I learned that gearbox control programs would be "tuned" to get good MPG tests with flying colours, the VW way.)

But all of this doesn't really matter: a parallel hybrid (e.g. Prius et al.) basically requires a more complex drivetrain involving an epicyclic gearbox + torque converter (thus auto) (+ IIRC some kind of differential gear train) to mix two mechanical power sources that can rotate at different speeds (including not rotating at all). A serial hybrid is basically an EV, except with a power generator on board, and an EV means either no gearbox or a low-gear-count one that is entirely driven automatically.

So, manual is going to die anyway as we're being pushed towards low emission vehicles.

Also, a factor in the stick being popular in the EU is that in most countries the automatic's product bracket and image used to be luxury/high-end, and thus subject to higher taxes/insurance premiums for some reason, which pushed the market towards the cheaper option.


Either way they’re both going to be gone when everything is electric.


Good to know manual and automatic transmissions will be around for a good long while, then.


Those are, respectively, a transmission standard (1), storage media (2, 3), a power transmission mechanism (4), a publication medium (5), a data replication mechanism (6), and a material with various applications (7).

Of the set, the two that come closest to being interfaces are 4 & 5, the manual gearbox with its shift lever and the daily newspaper. In both cases, the actual interface components --- a rod through which a vehicle's drive mode is selected, and articles written by reporters and published by a specific organisation --- are the parts least changed.

Even the mimeograph's function still remains, though in most cases it's either through a smartphone or tablet (which reproduces text in a mobile manner that can be shared with others) or a printer-copier of some sort, usually functioning on either a xerographic or wet-ink jet process. In the latter case, the output (print on paper) remains.

In the cases of video formats, audio and video storage media, and structural materials, the components of the end result (video streaming, on-demand audio and video playback, and various structural members and fabrics with specific properties) have changed, but their fundamental functions and perceived endpoints are ... largely ... what they were previously. Enough so that someone from an age in which the technologies you mention were in widespread use would recognise the current replacements.

Interfaces do tend to be exceedingly durable. In large part because they address not just mechanism, but human interactions. The former may change rapidly, the latter not so much.


Ahem... everything on your list is still very much around, maybe not that popular anymore (e.g. Asbestos) but still produced and used (even if in a niche)


I like my manual ford f150 1995. Might retire it for a cheap minimal electric someday.


You won't find a manual transmission in any electric car, for better or worse. At best they would make some fake lever in the console that just tells the ECU 'hrm it looks like the driver wants more torque right now, and maybe play some engine rev noises on the stereo'.


Yes. I do not expect a manual transmission electric. That would be silly.


-Manual transmissions in cars

As european I dissent.


I’ve used 4 of those in the last month, including two of them today


The KDE enthusiast would probably note that KDE was originally called the "Kool Desktop Environment". But even the most ardent KDE enthusiast wouldn't argue that it does not change.


I used KDE many years ago and then went the Gnome way in search of "cleanliness". Unfortunately Gnome went a bit too far in the same pursuit and I'm now back at KDE. The bulky KDE of yore is gone and Plasma is great!


That's why you stick with the Trinity Desktop Environment.


I've been using MWM and the same .mwmrc since AIX in 1991. Same palette, same decor, same .Xdefaults.

I've yet to see a GUI (FVWM, KDE, Gnome) that offers something X/Athena/Motif didn't nail 30+ years ago.
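For the curious, the sort of thing that lives in those files (values here are illustrative, not my actual settings):

    ! ~/.Xdefaults
    Mwm*keyboardFocusPolicy:  pointer
    Mwm*useIconBox:           True
    Mwm*background:           #7b8ea0
    Mwm*foreground:           black
    Mwm*fontList:             -*-helvetica-bold-r-normal-*-12-*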


Got a screenie?


https://imgur.com/a/pYKiP2Z

Instead of virtual desktops, I just start another VNC session because I don't like losing everything if one session goes down. macOS already has virtual desktops, so no need to reduplicate the effort.
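Roughly what that looks like (display numbers and geometry are illustrative):

    # ~/.vnc/xstartup on the box running mwm:
    #   #!/bin/sh
    #   exec mwm
    vncserver :1 -geometry 2560x1440    # one session per would-be virtual desktop
    vncserver :2 -geometry 2560x1440
    # if one session dies, the others keep running; connect from macOS with any VNC viewer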


Except for the icons in the lower left corner, it actually looks surprisingly modern. Seeing VSCode and MWM in the same screenshot is oddly fascinating.


Agreed. The X Window system is pretty damn amazing, despite its warts. Now that you say that, I've anthropomorphized MWM into this old-timer, reflecting what kind of programs it has seen run in it over the years, and what it will see in 30+ more years.


jwz has a comment that addresses this:

Look, in the case of all other software, I believe strongly in "release early, release often". Hell, I damned near invented it. But I think history has proven that UI is different than software. The Firefox crew believe otherwise. Good for them, and we'll see.

HN-safe archive link: https://web.archive.org/web/20120511115213/https://www.jwz.o...

Software performance improvements tend to come from hardware (Moore's Law, still-ish), and software algorithm (in the old-school sense of how information is actually processed) improvements. Leaning on the UI for massive performance enhancement is a bit like expecting order-of-magnitude income improvements by increasing your working hours. There's only so much time in a day, and there is only a limited rate at which humans can interact with digitised information --- generally text, images, video, audio, and data.

The Mother of All Demos was fifty years old ... four years ago:

https://news.ycombinator.com/item?id=31676445

And yet, it incorporated very nearly all the basic human-computer interface principles still used today.

Apple's Macintosh has seen two principal variants of its desktop UI in the 38 years of its existence. And the second, OSX / Aqua, is now eight years older than Mac Classic was when OSX was introduced. Apple are highly conservative in UI changes.

I'm not principally an Apple user, or fan. But for my desktop, I use an environment inspired by the Mac's predecessor, NeXT, namely Windowmaker. There's been very little development in years, but the product is stable, and still works even on retina-class displays. The fact that I don't have to go hunting down new interactions every few months or years is a tremendous advantage. And if you want, twm is still a serviceable window manager.

My own tools collection strongly resembles Cipriani's. Applications and tools learned decades ago still provide me regular use. I can do what I intended when I want without being buffeted by constant winds of change and shifting fashions. And quite frankly, it's awesome and a bit of a superpower.


I have always done everything in vi, and it will likely be around in another 30 years. Just recently got an Ubuntu laptop after being on MacBooks since probably 2006 - and FreeBSD on laptops from their first laptop distros in the 90s until then. I can't believe how much I missed a *nix environment, I had forgotten. It's actually beautiful to see it now. Strange to say it, but using a *nix laptop again makes me feel like myself in a way I hadn't felt in a long time. There's something intrinsically hopeful about using linux as your primary environment. As though it's part of the real internet that is still there.

Anyway I actually found myself looking around at Enlightenment as a desktop environment just to see if I could get that old HR Giger theme going again. My .fvwmrc from those days has to be around somewhere as well. Some of these tools don't so much become obsolete as converge on an essential form. Not old, perfect.


>. It's actually beautiful to see it now. Strange to say it, but using a *nix laptop again makes me feel like myself in a way I hadn't felt in a long time. There's something intrinsically hopeful about using linux as your primary environment. As though it's part of the real internet that is still there.

I too enjoy Linux Desktops and MDMA :D (/s /s /s)


macOS is literally a certified UNIX, so I have no idea what could conceivably make it not a "*nix".


I use macOS because it is unix and nominally BSD-like in the sense that it has bash and a package manager. However, I'd argue we use macOS to manage things, whereas working with a FOSS OS comes with a feeling of participation. A bit handwavy and fuzzy, but I'm trying to articulate the appeal.


If nothing ever changed I would probably still be using awesome. But some new distro version (maybe it was Ubuntu 10.04?) broke something (not blaming Ubuntu, it could just as well have been a new awesome version that wasn't backwards compatible) and I snagged a coworker's xmonad config. That xmonad config is still running on my x220 (funny, yes, it's also been 10 years) and has lived through quite a few Debian upgrades, though this time I did invest some time every few years to keep it running. But that's just my travel laptop - on the machines I use productively I've been using i3 for... not sure, probably 7 years? And I think the trigger was again some non-working/incompatible xmonad in the distro I was using at the time (or after getting a new laptop).

So I'm not sure if 3 different tiling WMs in 12 years is good or bad, but apparently I'm open to some change after all, though usually only when pushed.


You should try spectrwm, a "fork" of xmonad not written in Haskell. It effectively replaced i3wm on all my laptops.


On the topic of tiling window managers, has anyone been using Qtile on Wayland extensively? Can't seem to get used to the i3/sway flow and Qtile looks promising in having a very flexible tiling model and being easy to customize, while working across X11 and Wayland.

This talk is from 2011 so that should hopefully make it pass the Lindy threshold ;)

https://www.youtube.com/watch?v=r_8om4dsEmw

EDIT: Wayland support seems to be on the way but there are some outstanding issues: https://github.com/qtile/qtile/discussions/2409


Nice to see Qtile on Wayland. But I'm personally waiting for xmonad on Wayland.

https://github.com/waymonad/waymonad


As a systems engineer who works closely with the command line and Linux for work, I find myself at the opposite end of some of this. As good as Vim/Nano/text editors are, they are simply not the most powerful when dealing with modern data formats. I use Apple Notes on my Mac as the universal notebook. I toss just about anything into it (drag and drop images, links, tables, etc.) and it indexes them smartly. I can even search for text that was in one of the screenshots I stored there. To take it further, I can mark up something on an image in Notes using an Apple Pencil on iPad and continue to take notes on Mac. Tools evolve - if you resist change, you miss out on modern conveniences as well.


I like this but find I'm more productive in Windows. For one reason: Microsoft Office. I have to use it to work with others and with systems that rely on it. For years I struggled with workarounds. The amount of time and effort I wasted is insane.


You could use Google Docs or LibreOffice; both support Office's proprietary file formats.


I've been using PCs since before GUIs, so some of my habits are probably too crusty at this point. However, I don't really mind so much how the graphical shell changes: the command shell in Windows 11 feels like the MS-DOS of my youth, and the Linux shell feels like the old HP-UX or Xenix shells from my youth, so I can get all the basics done quickly.

Having said that, I do think that there's mostly improvement in our GUIs as well, bad ideas that crop up now and then usually disappear quickly.


Plain text is cool and all, but I want to embed rich media, that's how my brain/the world around me works.

Yes, org-mode; but I've tried multiple times to pick up org-mode, and all the concepts and keyboard shortcuts that need to live in my brain just get wearying. Maybe I take on too much too soon, but onboarding onto, say, Obsidian or Logseq just isn't such a cognitive drain.

People use Google/DDG for calculators/weather/etc. because the search engine can work out what their intent is. Remembering a bunch of CLI app names and how their args are parsed, etc., is again more cognitive load. There's no discoverability other than reverse-searching your shell history and trying to remember the thing.

I used to envy people like the author who have this workflow that totally works for them: lots of Linuxes, lots of CLI, some tiling window manager, cool stuff. I'd love to be like this. However, it all involves cognitive load that GUIs, search engines and human-focused design abstract away. So use whatever works for you and ignore what's cool.

I have a Thinkpad X230, the model up from the author, running Silverblue and Gnome. Works great. Things will probably change along the way, that's fine, the cost of managing those changes is still lower than learning to live in a CLI.


I would love to see more guides on how to make vim more like a knowledge/mind notes app like Notion/Roam. I've been on and off looking for something like that.


Look into org-roam, which brings Roam-style linking to Emacs Org mode, and Evil, which provides Vim keybindings and modal editing in Emacs.

Doom emacs has extremely easy ways of installing both of these packages in the setup.


I'm liking vimwiki, for what it's worth. It feels like a pretty small set of changes to vim that make the wiki stuff "just work", and otherwise it's my regular ass plaintext world that I love.


When Notion had a lot of outages in 2020, my entire 200-employee company stopped working for hours at a time.

I didn't even notice, happily using my markdown notes.


Wayland is viable. Time to try Sway!


I've been using Sway, very happy - just prepare to do a fair bit of fiddling :)

A quick tip, imitate this however you see fit -- setting `XDG_CURRENT_DESKTOP`:

    $ cat ~/.config/environment.d/envvars.conf 
    XDG_CURRENT_DESKTOP="${XDG_CURRENT_DESKTOP:-sway}"
... so that `xdg-desktop-portal-wlr` can share your screen by knowing the environment you're in

For whatever reason this isn't set by Sway. This will only set it if another DE hasn't already


>~/.config/environment.d/envvars.conf

Set it by exec'ing from your sway config instead. Otherwise every session under your user will think it's running in sway, even if it isn't.

How to do that is explained in the xdpw wiki.
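
From memory, the wiki's suggestion boils down to a single exec line in your sway config, roughly like this (treat it as a sketch and check the wiki itself for the current form):

    # push these variables into the systemd user environment and dbus activation
    # environment, scoped to this sway session only
    exec dbus-update-activation-environment --systemd WAYLAND_DISPLAY XDG_CURRENT_DESKTOP=sway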

>For whatever reason this isn't set by Sway.

It's because sway generally doesn't do any kind of integration with anything not related to Wayland, which in this case is systemd and dbus. It gives you the tools to do it yourself.

Distros that provide pre-configured packages of sway should ship such a config by default. Eg OpenSUSE has https://github.com/openSUSE/openSUSEway/blob/ee8fe750a843165... + https://github.com/openSUSE/openSUSEway/blob/ee8fe750a843165...


That's fair (RE: setting/sessions). The method I proposed relies on consistency everywhere else (defining that to something), and assumes Sway if not :P

It looks like this (on Fedora) is generally dealt with by the 'sway-systemd' package.

That provides a drop-in config (/etc/sway/config.d/10-systemd-session.conf) that runs a script and prepares the environment for you (including this variable)

    $ dnf whatprovides /etc/sway/config.d/10-systemd-session.conf
    [...]
    sway-systemd-0.2.2-1.fc36.noarch : Systemd integration for Sway session
    Repo        : @System
    Matched from:
    Filename    : /etc/sway/config.d/10-systemd-session.conf
It's rather similar to the openSUSE approach you shared, but seemingly a little more robust: https://github.com/alebastr/sway-systemd/blob/main/src/sessi...

As long as you do this in your Sway config, it should be fine:

   include /etc/sway/config.d/*
I forgot about that adventure until now -- with something like this, you don't need either 'manual declaration' approach really

I initially missed it moving to Sway because I literally copied my i3 config and only changed it slightly


I was so close to switching from X/i3 to wayland/sway but I couldn't get drag and drop from Firefox to mpv to work. Maybe next reinstall.


Well I have three older objects, Slackware, xterm and fvwm, but who's counting :)


Subject and article aside, I love how personal and passionate we all are about our desktops / workflows.

Lol, we're up to 330+ comments and we're mostly debating the correct way to edit text files and where/how those windows should live and behave, etc.

Of course i3 + 'doom emacs' is the correct answer but happy to continue to hear all the other options, even if they are all wrong /s :P


Another candidate for his list, in place of the calc script, is bc, which conveniently is 46 years old :)

https://en.wikipedia.org/wiki/Bc_(programming_language)

I use it even for the most trivial stuff, and it is way more immediate than firing up any X-based graphical calculator.
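
A quick example of the sort of thing I mean:

    $ echo 'scale=10; 355/113' | bc -l
    3.1415929203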


I use the built-in math command in fish[0] on a daily basis. It's always faster and way more comfortable to just type an equation in, instead of waiting for a GUI calculator and possibly clicking operators together. I'd say for such tasks CLI tools always win in the end in terms of user experience and speed.
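
For example (fish's math, typed from memory):

    $ math '2 * (3 + 4.5)'
    15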

0. https://fishshell.com/


This is true, I like apps that stay the same and just work.

The problem is that GitHub encourages the exact opposite: constant code churn, because if people see a project with last commit "2 years ago", they assume it's dead instead of just complete. Why do I know that? Because I have caught myself doing that too.


Cool desktops don't change without the user's consent.

It's why I left first Windows and then Mac.

My computer is my workstation, and I don't want anything changed without me asking.

With FOSS, that's what I get. With Win/Mac, I was at the mercy of the landlords.

Still miss how keyboard-accessible early Windows was out of the box, however.


It boggles the mind that, out of 7 billion people, only one single company is capable of building both aesthetically pleasing and well-functioning laptops (Apple, that is). I wish Lenovo (or someone else) did the right thing and built a Thinkpad-like heavy-duty bento box machine with modern internals.


> well-functioning laptops (Apple that is)

And even they had a ~4 year period (2016-2019) of putting out sub-par, handicapped laptops.

Nowadays, Framework's making some pretty great laptops. Repairable, Linux-friendly, and nicely proportioned. I don't like them sticking a giant logo in the middle of the lid though; it ruins the look a little.


I have a Macbook for work and a personal Xiaomi laptop. I think the quality of the Xiaomi laptop is just as good. Fingerprint reader doesn't work on Linux, but everything else works as well as on the Macbook. If I were to buy a new laptop now, I'd go with Framework because the ease of changing components appeals to me.


On Linux, if you want a calculator, just type python.
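
For instance, a throwaway Python 3 session:

    $ python3 -q
    >>> (1920 * 1080 * 3) / 2**20
    5.9326171875
    >>> from math import sqrt
    >>> sqrt(2)
    1.4142135623730951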


And here I go again, recommending Qalculate! and its CLI friend qalc.

Python is a lousy calculator, unless you want to program it.
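
If anyone wants to see why, try things like these (output omitted since the exact formatting varies a bit between versions):

    $ qalc '25 inch to cm'     # unit conversion
    $ qalc '100 EUR to USD'    # currency, once exchange rates are fetched
    $ qalc 'sqrt(2) * pi'      # plain arithmetic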


Python is good enough for almost everything you throw at it, and it is probably installed. qalc might be better, but it is probably not installed.


I install Julia on my machines for this reason.


I use SpeedCrunch. It launches instantly, so there's not much time saved by using a CLI tool instead.


I'm more of a javascript kind of person, so I prefer the qjscalc command from QuickJS.[1]

[1]: https://bellard.org/quickjs/


Octave is also nice as a calculator
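
e.g. as a one-liner from the shell (typed from memory, so treat it as a sketch):

    $ octave -q --eval 'pi * 0.5^2'
    ans = 0.7854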


    >>> 1/3
    0


    $ python
    Python 3.10.5 (main, Jun  6 2022, 18:49:26) [GCC 12.1.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> 1/3
    0.3333333333333333
If you're going to keep using EOL versions, that's on you.


Blame the Debian maintainers. That was the default python package.


Hmmm, I got

    >>> 1/3
    0.3333333333333333

What version?

    $ python3 --version
    Python 3.9.9
I'd expect your result with `1//3`.
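
For reference, on any Python 3, `/` is true division and `//` is floor division:

    $ python3 -c 'print(1/3, 1//3)'
    0.3333333333333333 0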


or just type bc


Not enough attention is paid to the (perhaps now cargo-culting) cause of this mess, which is that we still have companies that do something like "selling operating systems." This is a stupid idea that should have died a long time ago.


I don't know; in the consumer market only Microsoft still sells an operating system, and it's dirt cheap - you can get Windows licenses for a few bucks. The problem is perhaps rather that they don't really sell operating systems: instead of building a good OS product they sell their users' data, or use the OS to lock customers into their platform and hardware. There does not seem to be a competitive OS market at all (and probably there never was a healthy one to speak of).


Right, I'd argue that "operating system as a for-profit product separable from the computer itself" should never have existed -- any more than e.g. "OS for cars." Bill Gates had a wonderful, very awful idea. :)


That's what I like about macos - no drastic changes in the 10+ years that I've been using it. They've added some stuff - some of it I've absorbed into my workflow, some of it I haven't and it stays out of my way.


If this is a real opinion - I don't think you're doing very much with macOS.

Hell - just the KEXT changes break about 20 different things in my workflow.

Honestly, as someone who uses a mix of all three major OSes (Windows/Linux/Mac), Mac has been the least pleasant for dealing with upgrades - they make as many breaking changes as Linux, and they have dogshite docs.


My machine only has kexts for Parallels (had? they switched to Virtualization Framework a while ago), and never had any issues as long as I was keeping Parallels up to date. ¯\_(ツ)_/¯


Debian with Mate. As familiar as the back of my hand. Dog simple. Can't recall the last time it broke (without me doing something dumb anyway).

(But "pluma"? "Caja"? Wtf names?)

And none of this "eternal improvements" bs. At least nothing visible in the ui or anything. For all intents and purposes it is evolutionarily flat. Which is exactly the way I like it.

(Except, maybe there is something cleverer than just slapping windows on top of each other willy nilly. I know there are alternatives. I have not been driven to explore there much.)

(Also, the Windows OS is flaming garbage. Possibly literally malicious. I don't know how they stay in business.)


> Also, the Windows OS is flaming garbage. Possibly literally malicious. I don't know how they stay in business.

You don't have to be the best, just better than the alternative.

macOS comes with a specialized set of hardware. So good luck tailoring a Mac without it costing as much as an average Ferrari.

Linux is just worse for average user. Not average HN user, mind you. But people who struggle to figure out print screen.

I fought with Linux in the past. It's death by thousand gremlin bites.

Big part of that is lack of drivers, but there are also major fractures in the space: Gnome vs KDE, X.org vs Wayland, etc.


>I fought with Linux in the past. It's death by thousand gremlin bites.

My experience as well. I wanted to set up a small Pi box that I could just keep with me and code on in my downtime. I of course could just use a laptop, but this whole setup fit in a small stethoscope case and would get me more used to Linux.

There's just SO many small things that don't quite work right, and the attitude of the userbase unfortunately tilts towards elitist if not downright hostile.

I still want to learn more and get better with it, but I spend enough of my time at work reading docs and trying to debug; I don't love it when my free time feels almost the same.


Caja -> box in Spanish.

Pluma -> pen. It literally means feather, but you can obviously guess why it ended up with that name.


Haaa, the sweet comfort of having everything set up and humming. The sweet Debian stable way of nothing ever changing.

It may not last forever, as kids, or new jobs or new hobbies come to disrupt the harmony, so enjoy your sweet time.


Why would it change?

The only reason I can see is if they outlaw general computing (including linux) and we're all stuck running Patriot Windows 3000.

And even then it's gonna be underground.


I learned from this that there's a Debian package that gives me definitions from the justly renowned 1913 Webster's, and installed it right away! (The comment on the OA is correct: it's `dict-gcide`.) This is great.
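
For anyone else who wants to try it, the setup is roughly this (the dictd server and the gcide database name are from memory, so double-check):

    $ sudo apt install dictd dict-gcide    # local dict server + the GCIDE (1913 Webster's-derived) database
    $ dict -d gcide serendipity            # query just that dictionary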


This is very true, you get more out of honing a small set of useful tools that continue to operate year after year, than you do chasing the next cool thing in UX.


I suppose I'm hedging my bets by using vim in VSCode.


I am still dumbfounded by the fact that no manufacturer seems to have produced a decent laptop after the x220, which the author mentions. And by decent, I mean with a decent keyboard, and one which doesn't break easily.

What is the deal with all of that thin junk? ... I mean, ok, ok, I know many people like thin laptops. What about all the rest of us? I need to type, damn it!


The Gnome desktop environment must be uncool then, it changes its looks and the location of its buttons every six months.


'units' can also do currency conversions:

  $ units 100EUR USD
  * 105.88302
  / 0.0094443845
In which case it is useful to enable the service to update the exchange rates:

  sudo systemctl enable units-currency-update.timer
(or something like that...)
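
If I remember right, GNU units also bundles an updater script you can run by hand if your distro doesn't ship that timer (it may need root, or an explicit output path, to write currency.units):

    $ sudo units_cur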


Emacs' calc too.


What did he mean with "when Wayland finally happens"? It happened ages ago! :-D


I saw a product someone had created to help preserve flow state. It let you record and play back your last programming work session so you could quickly pick up your prior context.

Anyone recognize this? I can't remember the name of this product.


I don't get it. Is the author literally just flexing about using vim and other basic unix command line tools we've known about for decades? Man, the post quality is really going downhill.


What does it even mean to say that vim is 30 years old? The vim program I use is compiled from source code that is completely different from the vim people used 5 or 10 or 20 years ago.


The problem with bash utils is that they require further context, whereas I can type any problem I have into one Google search box rather than juggling 7 bash utils.


Does calc do something neat that

bc -l

is missing?

It is a funny omission in an article about using old programs -- it is arguably older than vi (not vim).
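
For what it's worth, `-l` loads bc's math library (a() is arctangent, l() is natural log, s()/c() are sine/cosine) and bumps the default scale to 20, e.g.:

    $ echo '4*a(1)' | bc -l    # arctan(1) is pi/4, so this prints pi to 20 places
    3.14159265358979323844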


You can do a lot of this in Spotlight


Lindy effect predicts that Lisp will be running on our civilization's future Dyson sphere.


Me: screenshots => desktop, forget to clean up for 3 years.


IMO he should be using Xaw or Motif applications, the stuff on his desktop is a little too new and pretty. Your desktop should at least be an authentic recreation of the late-80s. Not a convincing act for performative oldness.


WindowMaker enters the chat.


As a calculator, `node -p "15*15"` works well too.


gcide-dict doesn't exist in Debian (anymore?)


The author's `spell` script doesn't work either. No suggestions appear, no definitions, and even entering "0" to get the original text does not copy the word to the clipboard, despite the script claiming that it does. Tested on Debian 11.


I use Notes.


The best desktop is the one I can customize however I want, and then run anywhere, without jumping through hoops. If I can untar some config files in my home dir and install a couple binaries, and have my desktop just appear the way I want it, that is the best desktop. It's not only portable and easier to set up, it's more likely to be both backwards and forwards compatible.

If, on the other hand, a given desktop depends on some bizarro set of 20 different services, interfaces, libraries and apps which can only work in one way on one platform, and it's near-impossible to move the settings somewhere else, and most apps can't even use it, that desktop sucks.

I don't want a cool desktop, I want a desktop that doesn't suck.


What's with the repost the day after you submitted it? https://news.ycombinator.com/item?id=31761636


It may not even necessarily be the submitter themselves that has resubmitted it — I’ve had posts I’ve submitted that have gotten low traction get resubmitted (not by me) hours later (but still with me listed as the submitter). Guessing it’s something the moderation team does when they feel a submission didn’t get enough traction the first time.

Edit: Never mind, I see that’s not the case here after looking at their submission history!


HN ignores dupes when the original got little traction.


Ah. Guess I still have some learning to do with how things work here a little differently than elsewhere.



