Can the mods un-dupe this? That was 2 months ago, and it's generated fruitful discussion on a hacker-relevant matter. Certainly more relevant than the Atlas Obscura "look at this quirky history" articles that get upvoted so much.
I appreciate re-posts when they receive new insightful comments, but 60ish days is perhaps a bit on the short side? However, this re-post now has 300+ comments and has justified its existence.
I remember turning on the computer and waiting for the motherboard company logo to finish flashing. After that was the Windows logo. I also remember waiting for the dial-up modem to connect to the internet and waiting hours for a video to download. Heck, last week I played a game streaming on GeForceNow and I was surprised by how seamless the whole experience was.
Maybe some web apps are inconvenient, but you can easily setup a command line interface or Linux system and move blazing fast.
> you can easily setup a command line interface or Linux system and move blazing fast
Well... not really. Terminal response times are way slower than they used to be in the 80's. Sure, we have greater throughput, but we also have greater latency. https://danluu.com/term-latency/
> Terminal response times are way slower than they used to be in the 80's.
Do you have data / a citation to back up that statement? Most things seem faster to me. As long as the command line responds as fast as I can type, I don't care about it being faster.
That being said, overall speed and what the system can handle certainly does matter in many cases, even if it is _just throughput_. I've recently been running commands to search/process data from the command line:
1. Search through a directory structure with millions of files (find)
2. Filter down to the files containing a certain thing (grep)
3. Pull certain values out of those files based on pattern matching (sed)
4. Filter to files where one value equals another (test)
I can't imagine that would _ever_ finish running on a computer in the 80s. It works just fine on my current computer (albeit taking a minute to run); a rough sketch of the pipeline is below.
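Something along these lines, for the curious (a minimal sketch; the directory, pattern and field names here are placeholders, not the ones I actually used):

    # 1. walk the tree; 2. keep files containing ERROR; 3. pull two numeric
    # fields out of each match with sed; 4. keep files where the values are equal
    find /data -type f -name '*.log' \
      | xargs grep -l 'ERROR' \
      | while read -r f; do
          want=$(sed -n 's/.*expected=\([0-9]*\).*/\1/p' "$f" | head -n 1)
          got=$(sed -n 's/.*actual=\([0-9]*\).*/\1/p' "$f" | head -n 1)
          [ -n "$want" ] && [ -n "$got" ] && [ "$want" -eq "$got" ] && echo "$f"
        done

A pipeline like that is bound by disk and CPU throughput rather than UI latency, which is exactly where modern machines shine.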
The page in question discusses latency in terminals, not how it compares to performance of previous generations.
That being said, it also seems you have to pull in the 99th-percentile behavior for the timings to be an issue at all, and even then it's not horrible; it doesn't get actually "bad" until you consider the worst performance (99.9th percentile) of the worst performer (st). It seems overly picky to me.
Also, almost all performance issues come from features, which nostalgia about the past always ignores. Some specific tasks may have been marginally faster, but you also had orders of magnitude fewer features available, and those features make your overall workflow much faster. None of these people would actually give up their zsh/autocomplete/fzf/custom git-integrated theme for a few ms better latency.
Let's face it, none of those are compelling reasons for more latency when the computing power available has ballooned to its current level.
Adding auto-complete on processors that are literally orders of magnitude faster should have been feasible without getting slower.
Here is another survey which takes it from a different angle.
I should have clarified my original post but it's too late:
I am lamenting primarily the latency introduced by the number of layers involved, both at the hardware and the software level, compared to yesteryear.
Compositors, virtualization, higher level languages, etc. do not come for free; despite explosive increases in computing capability, we also have an explosive increase in the number of running threads and the resources required to maintain them.
Today's terminals have different requirements: fonts should be anti-aliased, flicker should be minimal, and they should be themeable to your preference. A single terminal has to emulate many terminal types (VT100, rxvt, ...) and still needs to perform fast. I worked plenty on text-only and 256-color terminals in years past, and I feel today is still better.
Another case: Google Maps is a bad example. I've used it on tours to explore many cities, not only for turn-by-turn directions. It worked great, although I personally like OpenStreetMap, because I feel one company shouldn't control so many aspects of life and I like its open-source nature.
Indeed, I was able to discover an onsen built by a community in Hokkaido. That started from a lake I spotted just by zooming in and out of the map to explore, and I saw it was built by the Ainu community living there. Turn-by-turn directions then helped quite a lot to reach it, just by dropping a pin there. I submitted a text describing it, and it has since been visited by many people.
I took a journey through treacherous mountains, climbed to 2500 meters, and crossed them via a tiny off-road track designed for motorcycles. That, too, I did based on exploring the map on my phone.
Paper maps didn't provide such convenience. Yes, POS systems are slow, but they face different expectations today. Try integrating an old POS with ten payment networks that authorise transactions within milliseconds. I worked plenty with DOS-based POS systems from 1990 on, and also wrote IBM MQ C++ code for integration. I don't want to go back there.
I think today is a step forward. Like always there will be good and bad as it was in 1980 or 1990. But largely today we do much more with computers than we ever did in those days.
So not everything is perceptually slow on computers. Apple Watch, iphone, androids are responsive and usually fast. Windows 10 for certain tasks might be slow but then overall it’s not that bad either.
As far as exploration goes, you can do the same or even more today than before. I can discover planets, build 3D models in physical form, use a CNC machine on my desktop, and do laser cutting in my home.
I can build a computer like the BBC Micro or a Raspberry Pi, fly a plane by wire and test it with a real physical model plane, or do small-scale farming at home with hydroponics. I can just go on and on about it. All possible thanks to advances in speed and the reduction in size of computing devices and peripherals.
So I feel it's not slow; it's just that the work we did with old computers is very different from what we do with modern ones.
The argument is that a terminal shouldn't be slower given an increase in computing power of several orders of magnitude. Maps has its own scalability challenges, but a terminal doesn't need a live network connection and isn't/shouldn't be maintained by 100 developers who each only know a small bit of the codebase and step on each other's toes.
Given some corrections, we certainly have the capability to build terminal workflows with less latency. So there is nothing wrong with pushing the community forward.
It's not a matter of nostalgia or just being content with the way things are. People used to argue vehemently against the features we enjoy today such as autocomplete and advanced theming, but a few people kept their heads down and made it happen anyway, because it was The Right Thing.
"""
It's caused by conhost.exe excessively hammering the registry to query the following two values:
HKCU\Software\Microsoft\Windows\CurrentVersion\ImmersiveShell\TabletMode
HKCU\Software\Microsoft\Windows\DWM\ColorPrevalence
Having previously reported this issue through Feedback Hub (to no avail), let me offer my observations from debugging this issue:
It's not specific to Dell's XPS series - I've been able to reproduce on any Windows 10 installation from version 1703 and up
It only occurs when the console application writes output that causes the console to scroll
It only occurs when the console application in question is in foreground/focus
Each reg value mentioned above is queried 6 times, per scroll!!!
"""
The solution I use is to minimize the terminal when it's scrolling. When I compile my projects, if I don't minimize the terminal, the build takes forever to finish.
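Another workaround, if minimizing gets tedious: send the build output to a file so the console never has to render the scrolling text at all. A sketch, assuming a make-based build:

    make > build.log 2>&1    # nothing hits the console, so nothing scrolls
    tail -n 20 build.log     # peek at the tail afterwards, or from another shell

You lose the live scroll, but the build shouldn't be throttled by console repainting anymore.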
"Terminal response times are way slower than they used to be in the 80's." In 1983 the top selling computers were the TRS-80, Apple II and IBM PC (among others). I started on those computers. And what you said is crap... straight out BS. Any person from that era should remember what happened when you typed dir (or equivalent) into a folder with a couple hundred items. It was time to go make some coffee while each...item...was...written...to...the...screen. The OP has good points but the start was WEAK because it is factually inaccurate like your comment.
Right, I can recall the first time I saw my friend's parent's 486.
I remember doing 'dir' on a large directory, it scrolled so fast, so fast. I said to my friend, "The next generation will be so fast it will have finished scrolling before you finish typing the command."
Sadly, it was not to be. DOS on a 486 is as fast as computers have ever gotten, perceptually.
That still happens though, and it's not because it actually took 50ms for your computer to read a filename and 500 ms for it to read 10.
It depends on how fast you allow items to be written (pixels per unit of time) on your display and if you force writing of everything or allow line skips. If you want to experiment I think putty has pretty fine-grain control options.
The terminal is not experiencing input latency while the shell runs the dir program. The drawing program is experiencing latency. My comment is oversimplified for brevity, but not factually incorrect.
Consider a VT100. It could only go up to 19200 baud, and couldn't actually handle constant streams at that speed.
However I do agree with the general message of the article. Current UX is a mess; websites are bloated with tons of javascript. There's no reason there should be loading spinners in this day and age with the power we have at our fingertips.
Consider a VT100. It could only go up to 19200 baud…
The VMS manuals suggested that system administrators may want to keep the hard-wired terminals down to 2400 baud and hide the fact from users by turning on "smooth scroll" (which only scrolled about that fast anyway) in order to keep the interrupt load down on the VAX. Something about batching of interrupts on the multiport serial cards.
Yes, there was that, but it also just couldn't handle the full speed continuously. It depended a lot on what kind of commands were sent. Full text was fairly OK, but drawing commands took a while, as far as I remember. In any case, flow control was a must-have in those days: either CTS/RTS or at least XON/XOFF.
There's no reason there should be loading spinners in this day and age...when spinners are almost exclusively used in web apps in order to denote that the web browser is yet to receive the necessary data over YOUR internet connection.
Tell that to my former boss. They wanted me to delay sending responses from the backend, because they liked showing off their custom spinner. The spinner would not be seen at all during normal operation, without the delay in sending responses.
Then why do I still see them with my verified 600/600 connection :)
The problem is just that there's too much javascript running in the browser and it's too poorly written (loading each post individually in Reddit, for example, instead of batching them). Reddit's redesign in particular is really, really bad. At least they offer the old site, but it looks so bad. I'd love to have something looking more modern but without the javascript overload.
...I'm a little confused. My computer still flashes the motherboard manufacturer's logo until the fans are all assessed, and then the Windows logo.
And while my connection is fast, it's a lot less reliable than the one I had 20 years ago, with about three 15-minute disconnects per day on average (although that has more to do with the general shit-quality of the Comcast monopoly than anything else).
Desktop machines may POST at a reasonable speed, but enterprise servers are still damn slow to start. Whether it's the HP servers we deployed a few years back or the more recent UCS servers we are building our new platform on, it takes ages for them to get to booting the OS (ESXi).
Check the BIOS settings; a lot of them let you configure that delay. It's worth keeping it at a few seconds so you can hit the appropriate key to get back into the BIOS, but you can certainly turn it down.
A single longer wait is much better than constant tiny interruptions. I'd gladly wait for 10 minutes for my computer to boot up if it means that I get a seamless experience afterwards.
Of course, your point about connection speeds is entirely right - but I don't think anybody would honestly advocate just entirely going back to 56k. Rather, we really should be trying to answer the question of why using a computer feels much more sluggish (probably a better word to use than slow) than it did back in the day.
Yes, some of it is rose-tinted glasses and nostalgia, but most of what people discuss is worth taking into consideration. Nobody would seriously go and use a 20 year old computer unless they were making a statement of some sort - and that statement is not "20 year old technology is entirely better than what we have today" but rather "why can't modern technology do things this 20 year old device can?"
Most computers weren't that bad, but there was a time when Windows 95/98 and the mainstream hardware it ran on weren't a good match. That's what you remember.
Even then, once the thing did boot up, applications were rather responsive.
> ...you can easily setup a command line interface or Linux system and move blazing fast.
No, you can't. The majority of people use software that is imposed upon them. The majority of work cannot be efficiently expressed in CLIs.
It's the responsibility of software developers to write responsive software and in many cases, they have failed. Of course, with the inefficient web platform, their job is made difficult from the start.
I feel bad for the metaphorical user. They were robbed of their tools.
We get to keep our shell utilities and CLIs, but they were robbed of tiny single-purpose GUI tools.
We get to keep our shell pipelines and perl scripts, but they're getting locked out of their Excel spreadsheets and VBA scripts.
Hell, with how things are going even automation tools like AutoHotKey are going to perish.
"People don't know how to code" or "users aren't tech-savvy enough for data processing and programming" are bullshit statements. They just happen to not code the way Silicon Valley does. And now these users are getting corralled into caged web apps and experiences that give them no flexibility at all and just treat them like children, all the way from the feature set to the presentation.
> Most computers weren't that bad, but there was a time when Windows 95/98 and the mainstream hardware it ran on weren't a good match. That's what you remember.
That's been the issue with every Windows OS. The cheap consumer hardware available at the time of the release was not enough to provide a snappy OS experience.
Unfortunately, hardware frequencies have sort of stalled, so now we're back to square one again, where W10 doesn't run snappily on median consumer hardware, but the hardware isn't getting faster at the same rate it was during the first decade of the 2000s.
I have a Windows 7 machine for some legacy games and software, and it feels much faster on a 6-year old CPU (Haswell) than W10 does on a 1 year old CPU.
In my experience Windows 10 runs well enough on even the shittiest of hardware as long as it has an SSD. I can't tell you how many people (both personally and professionally) have asked me to troubleshoot slow Windows installs, and almost invariably they are running Windows on an HDD. Many times they got it because "well it was the bigger storage size, so that must mean it's better, right?" - In the premium segment that might be true, but in the value segment where most people shop they're still trying to offload inventory of HDDs, I guess.
Mac OS isn't much better, I had a 2011 MacBook Pro w/ the best CPU for that model which was borderline unusable until I put an SSD in it. That was HFS which was designed w/ spinning rust in mind, I couldn't even imagine running APFS on top of spindles for an OS volume. CoW & the features it brings is awesome on flash w/ lots of IOPS, but painful on anything else.
---
That all being said though, Windows 10 is just crap. I'm not convinced there's any amount of hardware you can throw at it that will make it feel as snappy as Windows 7 did, because it's evident to me that the developers just don't care anymore. I initially installed Windows 10 on an i7-6700K w/ one of the fastest consumer NVMe drives and one of the best GPUs available at the time. The start menu lagged perceptibly on a pristine installation. That was before all the "feature updates" shoveled Microsoft's ever increasing amounts of managed code, adverts, and online crap into the start menu.
If the developers gave a shit about UI latency the Start menu wouldn't require internet connectivity[1] and the task bar wouldn't make hundreds of thousands of IO calls.[2] Even if you can come to accept the speed of the Start menu, the functionality just isn't there. It routinely feigns ignorance of plain text files sitting in the root of my "Documents" library, even if I type the exact file name. I would gladly welcome the Windows 7 file index service back into my life.
I agree with the first half of your comment - the move from spinning rust to an SSD is absolutely huge; you have Windows starting in a few seconds, which is a far cry from an HDD.
But the 2nd half of your comment, I disagree with. To me, Windows 10 has the perception of being faster than Windows 7 was with the same hardware - most of my machines have been on Windows 10 for ages, but I actually finally upgraded one of my work laptops just today, and this still holds true for me.
With great support for multiple monitors, great high-DPI support, a better command line, Hyper-V, WSL and more, I actually love Windows 10 - it's a fantastic OS.
> you can easily setup a command line interface or Linux system and move blazing fast.
Maybe the HN crowd can, but not your average or median computer user.
> I remember turning on the computer and waiting for the motherboard company logo to finish flashing
I still have computers that take some time to boot. The thing is, I just rarely power them off and on. Instead the computers go to sleep, which is much faster to come out of.
Try Manjaro w/ Cinnamon. I think it looks pretty good graphically out of the box, and the latest stable release is still blazing fast on my old HP 8440p laptop with an old dual-core i5 from the early 2010s. I don't know, but Cinnamon on Manjaro feels pretty damn snappy to me even on older hardware.
I run xmonad on Debian bullseye with 5.5.3-grsec kernel, it’s definitely snappy. There’s no perceptible input lag, even when testing using the iPhone slo-mo thingy. I mostly use Chromium, electrum, signal-desktop and urxvt.
Everything goes to shit as soon as you install one of those “desktop environments” though.
How come? All I see is loads of unnecessary fluff.
I for one find xmonad & co extremely pleasant to use even if there’s a slight learning curve. I really don’t see what Cinnamon, Gnome & co. have to offer (except to users who aren’t very technically oriented, who’d probably be more productive on OS X or Windows anyway)
On recent distros, I've found MX (mxlinux.org) to be quite responsive. They are now also experimenting with FluxBox ( https://mxlinux.org/blog/mx-fluxbox-2-0-settings-now-availab... ) and that really makes it quite snappy as it is very light on resources.
Another distro I like is "Linux Mint Debian Edition" ( https://linuxmint.com/download_lmde.php ) that feels more "light-weight" than its Ubuntu derived counterpart Linux Mint.
I've been using Linux since... oh, 1999 or 2000, and I've never seen a Linux GUI desktop I'd describe as snappy and responsive. The best I've ever achieved is "not incredibly clunky" and that only by ensuring the graphical interface basically did nothing but run the current application or two I had open. I've seen BeOS, QNX, and (Apple) iOS graphical operating systems/environments that were consistently, remarkably snappy and responsive relative to comparable peers. That's... pretty much the entire list. CDE on a wonderfully powerful Solaris machine is close to making the cut. Never used Amiga but I hear it was good.
Wanna talk about stability, not responsiveness and snappiness, you can add OSX/macOS to that list in addition to the above. It's not great but it's vastly better than anything else not on the above list. I've never, in 20 years, had a Linux GUI system that wasn't fairly crashy. "Oh it was just an app crash" or "oh X just quit but the OS is still up and you can restart it" or "oh some KDE daemon just shit the bed, restart X or do [arcane invocation] to put KDE back in a good state and it'll be fine" yeah OK well it's still a crash and I still lost stuff so it may as well have BSOD'd for all I care. Before I switched to macOS I was kinda blind to how bad it was because it was just normal. Modern Windows seems to be getting better on that front but I only use it for gaming so IDK.
[EDIT] to be fair, if I hadn't experienced BeOS and QNX (on shitty hardware, mind you, even by the standards of the time when I ran them) then super-minimalist Linux GUIs might be what I'd consider "snappy and responsive". Those were just in a whole other league, and really reset my understanding of what was possible (but rarely achieved)
Your point about Linux GUIs is quite true. I've found DEs like Gnome and KDE to be more resource intensive and klunky. Some 8 years back, there was only one distro (I forget the name) that used FluxBox that was blazingly responsive but lacked intuitiveness. (I've heard i3 is also very fast, but I have never used it as I don't like tiling WM).
Before corporates entered the Linux market (e.g., Red Hat and Canonical), Linux actually used to be faster than Windows. Today, I find Ubuntu (and most Ubuntu based distros) to be equal or sometimes worse than Windows on the same hardware. It just feels as "weighed down" as Windows.
And yeah, you are absolutely right about BeOS - it felt revolutionary to run two sessions of a movie player on a 200Mhz powered system with 16 MB ram!
I'd love to know what BeOS did to achieve those magical-feeling results, and why it hasn't been copied by everyone else. I reckon it's gotta have a lot to do with process scheduling and how UI events are dispatched, but if it's mostly those couple things, why isn't every modern OS doing what BeOS did? There must have been trade-offs I guess but from the user's perspective it seemed to be 100% an improvement over everything else—it's not like multimedia took a backseat to user interaction, or parts of the UI lagged so others could feel fast, everything felt smoother, quicker, and more responsive. On single core machines with 1/16 the memory it takes to start Slack, too!
I have to disagree. The comparison you make is unfair. I experienced QNX and BeOS too. But what could you really do with them out of the box, compared to a full-blown Linux distro?
The 'feeling' of snappiness is a matter of carefully chosen defaults IMO. Almost nobody does that anymore. So we have this lowest common denominator, mostly crappy tools to change the defaults from within the GUI, or you have to dive down into the cesspool of myriad config files in different formats and places, and on top of that an explosion of complexity from the subtle interactions of several subsystems.
Regarding the shitty hardware...
The f-ing fastest desktop experience I ever had was somewhere in the late days of KDE 3, across NetBSD, Arch Linux and Gentoo, on a Pentium III @ 933 MHz with 512MB RAM, onboard Intel i815 graphics (edit: with the 'superspecial' 4MB VRAM DIMM-thingy), and some IBM Deathstars.
With the same toned down look across all three systems, and toolkits, instead of the usual teletubbyfication.
What can I say? When your KDE 3 crashed, you were holding it wrong, or it was somehow miscompiled, be it the optimization or some libraries.
For me it was rock solid and lightning fast, at least in my configurations.
The same can be seen nowadays with systemd and pulseaudio.
They can be used if configured right. The question is if one wants to be hassled with that, if it isn't.
shrug I've had Linux desktops across Debian, Mandrake, Gentoo, Ubuntu, and maybe some others. Oh I think Fedora in there somewhere. Gentoo and the first few years of Ubuntu (pre-Pulseaudio, basically, though it's not the only reason it suddenly got worse right around that time) were the only ones that felt like they weren't built on some kind of horrid Jenga tower always on the verge of toppling over, but in Gentoo's case that's only because I placed all the pieces myself. And it's never felt any "faster" or more responsive than Windows unless stripped down to bare bones on the UI side. And God did X used to crash a lot. At least Xorg's mostly better on that front, but then Windows doesn't BSOD a couple times a day anymore either.
[EDIT] oh as for BeOS, it ran a lot of the same junk I ran on Linux just fine. Better, really. It wasn't suitable for a server, but that didn't and doesn't really matter for one's desktop OS experience. Especially these days—can it run a VM (as one supposes any modern BeOS-alike could, because why not)? Cool, then I can run most any deployment target I like on it.
That reminds me of something wrt configuration files.
What always rubbed me the wrong way, for instance in Debian, were the commented-out examples in config files that were inapplicable to the configuration of the running system. I remember this for the boot manager, framebuffer and X (when setting up a flicker-free boot with the SAME video mode as early as possible), networking, and file systems.
Only Mandrake and PCLinuxOS got this right: they had nothing in /etc which wasn't applicable, because it was all generated programmatically by their installer according to the chosen configuration.
That was CLEAN; there were no useless ##commented-out things for stuff the system didn't even have installed.
Unfortunately their package repos at the time were 'clean' (meaning lacking the stuff I needed/wanted) too.
SUSE also tried this with YaST, but got it wrong often, and felt sluggish and bloated.
I can't really remember XFree86 and later Xorg crashing on me, except when pushing its boundaries wrt multi-monitor and hotplugging; I guess that depends on the hardware used and the quality of its drivers?
I wasn't writing software or using computers in 1983. But my use cases in 1990 were pretty close to my personal use cases today. 1990, I think, was the year I wrote my first word processor, so I could write, save, edit, and share documents.
- predictability: it can be non instant but I know what's coming
I had an old pentest distro with Linux 2.4 on a key, and a few recent ones using systemd and a tailored DE. Whenever I used the old 2.4-era distro, maybe I had fewer capabilities overall, maybe a few glitches here and there, but somehow I felt in contact. Everything reacted as-is, throughput felt large (console output, file IO). Even the old sequential boot had a 'charm' to it. Even though systemd is so vastly superior in all aspects... there was something that appealed to me. Weird.
Unfortunately enabling this means everybody within a 15 meter radius knows when my laptop has rebooted because this option also enables a BEEEEP that cannot be disabled or turned down. :(
Can disable the low-battery beeping; cannot disable the POST beep unless you switch to vendor logo mode.
(It's your standard old-style beep - just rendered through stereo speakers that have uncannily decent audio reproduction at that particular frequency for some reason. In a serene, quiet room.)
404 sense not found; please check the garbled and try again
I can still remember some of the text displayed when booting up my family's Win95 beige box because it stayed on screen so long. Now if I'm trying to read an error message when booting even a slow modern computer, I have to take a video and step through it frame by frame.
As the OP who wrote this while drunk two years ago and thinks that about 30-50% of it is objectively wrong, it's extremely funny to me that every five months someone reposts it here and everyone gets in a fight about it again. Many of the details are entirely inaccurate - the thesis is still completely valid, and it's Computer Person Thinking that wants to attack the details while refusing to stand back and look at the overall picture, which is that using computers is now an incredibly messy experience where nothing quite does what you want, nothing can be predicted, nothing can be learned.
I hypothesize that this is because programmers and other various Computer People consider "being on the computer" to be a satisfying goal in itself that sometimes has positive side effects in the real world, while everyone else simply has stockholm syndrome and has long since given up on even imagining a better experience.
That tendency to rathole and nitpick, while discarding the larger point, is increasingly frustrating.
It seems like a lot of this boils down to people getting stuck in a local maximum. If someone isn't used to zooming out to the big picture, are they more likely to solve problems in isolation and end up with layers of dependencies, npm-installing all the things, resulting in an unstable stack and a less predictable ecosystem at large?
Yes, the ratholeing vs. generalizing is an interesting phenomenon. I get the sense that it's largely personality-based. Some people (most computer people) are wired up to get every detail exactly correct and if anything is out of place they break out the tweezers and start tweaking. That's exactly what's required in most engineering work. It can lead to perfecting something that's about to get thrown away.
The other personality deals with abstractions, analogies and top-down thinking. When faced with an issue they'll start by defining goals and values. And it's easy to be blind to the details when you're thinking of the big picture and ask for things counter to the laws of physics.
Even if you're a two-minded kind of person and can deal with both generalities and specifics, it's incredibly hard to deal with both at once and quite a task to switch from the one mindset to the other.
If you were a hammer manufacturer, wouldn't you nitpick if someone said "I feel hammers have gotten worse since the 80's, and handles aren't what they used to be"? To many people, computers are a tool; to many IT people, they are a way of life. It's not about the big picture, it's about proximity: a rant about some minor detail of something you value captures more attention than something you don't care about.
I appreciate your perspective a lot. And I agree that 90% of our use of computers is complete trash, and for those of us in the industry the addiction of being able to understand and maneuver through that trash can overpower the desire to remove the trash entirely.
Studying the history of CS and following it to its roots has been simultaneously invigorating because of the elegance of the fundamental ideas, but also disappointing, because of the line I can trace from the hopes and dreams of those pioneers to the current state of computing.
I'm hopeful that a new wave of software is just over the horizon. Perhaps I am naive, but I'm holding out against pessimism about the future for as long as I can.
I've been working on Google Maps for close to five years now. I somehow missed the previous postings of this thread, but a little hyperbole never hurt anyone :) Your larger point about the inability in our profession to take a step back and consider the overall situation is spot-on. The mindset I most commonly encounter among other engineers when discussing any kind of systemic problems in our practices is a kind of "it's $CURRENT_YEAR, of course things are the best they've ever been and we've solved all known problems. Also what is $KNOWN_SINCE_DECADES_TRIVIAL_SOLUTION_TO_MAJOR_PROBLEM?"
Huh. I did, and thought it was atrocious. Its only benefit at the time was live re-routing: the competition at the time, short of buying an actual GPS, was printing out directions from mapquest and hoping you don’t make a wrong turn.
I recall a few times driving out of state where I’d miss an exit, be unable to figure out my way to where I needed, pull over at a phone booth or public business, and start calling to see who was home and could run mapquest for me to get me back on the road. Don’t miss that.
> the thesis is still completely valid, and it's Computer Person Thinking that wants to attack the details while refusing to stand back and look at the overall picture, which is that using computers is now an incredibly messy experience where nothing quite does what you want, nothing can be predicted, nothing can be learned.
FWIW, I don't care about the details, there are plenty individual examples of bad design today and slower software. I disagree completely with your larger thesis that things have gotten worse, and with the title in particular as a summary of your point. Why do you feel it's completely valid, and what does that mean exactly if you agree that many of the supporting examples are wrong?
And I also disagree with characterizing people who disagree with you as "Computer Person Thinking" and "Stockholm Syndrome". That charge seems doubly ironic, quite hypocritical, given the content of your tweet-storm, but I understand if you're feeling a bit attacked or defensive about this discussion. It's okay that it wasn't all true, and if this causes discussion that seems annoying, at least it's a discussion. We can continue to improve software and hardware and UX, and it's good to discuss it. It's just not true that it's gotten worse, and you don't really need to crap on today's software or the people who disagree with you to prove the point that we still have room for improvement. There's always room for improvement.
Anyway, computing is objectively faster today than in 1983 (I was there), and not by a little, by orders of magnitude, especially if you are fair and compare functionality apples to apples, but even if you only tally wait times for activities that seem similar on the surface but are completely different today (such as in-memory queries vs internet queries).
I don't see much objective data or measured results in your UX argument on the whole. Arguing that function keys are intuitive and that the mouse is useless is pure opinion based on what you like and being used to something, not something that can be shown by any large scale user studies to date, and history has already somewhat demonstrated the opposite, that many people prefer mice navigation to keyboards, and that function key workflows are for experts, not Twitter users, by and large.
A thesis cannot be "completely valid" when 50% of the reasoning behind it is "objectively wrong". Some people would rather point out in which ways your idea is misguided and naive than attempt to force sense out of an idea that's based on wrong information.
This post again with its ridiculous ranting examples.
"This text searching program that searches text against text is way faster and simpler than a fully rendered, interactive map of the entire world with accurate roads and precision at the foot / meter level."
No. Shit, really? Get out of town.
Yes, some very popular apps have bad UX. But some apps have incredible UX. Cherry picking the bad ones while ignoring the good ones to prove a point is boring and not reflective of the actual industry.
These posts fondly remember just the speed, but always seem to forget the frustrations, or re-imagine them to be something we treasured.
Remember autoexec.bat files? Remember endless configuration to get one program working? Remember the computer just throwing its hands up and giving up when you gave it input that wasn't exactly what it expected? Remember hardware compatibility issues and how badly it affected system stability? Remember when building a computer required work and research and took hours? I do, and it wasn't fun then, it was a detriment to doing what you wanted. It wasn't a rite of passage, it was a huge pain in the ass.
So yeah, things are slower now, and I wanna go fast. But I also don't need to spend an entire weekend setting up a PC and a printer for my mom anymore either. I don't need to teach her arcane commands to get basic functionality out of her machine. Plug and play and usability happened, and while things feel slower, computers are now open to a much wider audience and are much more usable now.
These posts always have a slight stench of elitism disguised as disappointment.
Huh. I'd say the examples are perfectly good and on-point. While dealing with autoexec.bat and random BSODs wasn't fun, it's entirely orthogonal to the fact that a DOS-era POS still offers orders of magnitude better UX than current-era browser POSes, or than most web apps for that matter.
It also doesn't change the fact that Google Maps is very bad at being a map. Its entire UI flow is oriented toward giving turn-by-turn directions to people who know where they are and where they are going; it gives almost no affordances for exploration and cross-referencing.
> Remember when building a computer required work and research and took hours.
As someone who builds their own PC every couple of years: it still does. It's actually worse now, due to the number of products on the market and the price segmentation involved. Two PCs ago, I didn't have to use two parts-compatibility tools, several benchmarking sites and a couple of friends, and didn't have to write CSS hacks for electronics stores, just to be able to assemble a cost-effective PC.
> But I also don't need to spend an entire weekend setting up a PC and a printer for my mom anymore either.
You don't? Printer drivers are only slightly less garbage than they were, but now there are also fewer knobs to turn if things go wrong. When my Windows 10 doesn't want to talk to a printer or a Bluetooth headset, all I get to see is a stuck progress bar.
Bottom line: I agree 100% with the author that one of the primary functions of a computer is enabling easy cross-referencing of information. This ability has been degrading over the past decades (arguably for business reasons: the easier it is for people to make sense of information, the harder it is for your sales tactics to work).
> These posts always have a slight stench of elitism disguised as disappointment.
That I don't get. Is it "elitist" now to point out that the (tech) "elite" can actually handle all this bullshit, but it's the regular Joes and Janes that get the short end of the stick?
> It also doesn't change the fact that Google Maps is very bad at being a map. It's entire UI flow is oriented for giving turn-by-turn directions for people who know where they are and where they are going;
As it turns out, that is probably the most popular use case for maps in the world.
Note also that for most smartphone users of Google Maps the use-case is actually much broader than that. The UI flow also totally accounts for users who only know where they are going—thanks to GPS and Google Maps knowing where you are often isn't necessary.
I'm confused by the complaint that the "Maps" app only caters to the 90-percentile use case for maps, but doesn't cover the other uses-cases well.
> I agree 100% with the author that one of the primary functions of a computer is enabling easy cross-referencing of information. This ability has been degrading over the past decades
I just find this not the case at all. For expert-users the tools that existed decades ago are still there and still usable. Or you can craft your own!
For non-expert users the information in the world is orders of magnitude more accessible than it used to be.
> As it turns out, that is probably the most popular use case for maps in the world.
There's a very subtle point the Twitter thread was making here. This use case may be most popular not because it's what the people want, but because it's all that they can easily do. The tools you use shape how you work, and what you can work on.
FWIW, I learned to pay attention when the machine doesn't help me do what I want (it's a good source of ideas for side projects), so I've noticed that I do want a map that works like a map - something I can explore and annotate. I do sometimes resort to screenshotting GMaps, or photographing paper maps in the past, just to have a map on my phone. I've seen non-tech people do that as well. So I can be confident that it's not just me and the Twitter thread's author that want this.
> For expert-users the tools that existed decades ago are still there and still usable. Or you can craft your own!
The Twitter thread's point (and mine as well) is that expert users can and do work around this degradation. It's a frustrating chore, but it's not all that difficult if you can code a bit and have some spare time. It's the experience for the non-expert users that has degraded in a way they can't fix for themselves.
> For non-expert users the information in the world is orders of magnitude more accessible than it used to be.
The way it's accessible, it's almost as if it weren't. Sure, you can easily Google random trivia. But good luck trying to compare things. That's always a pain, and usually involves hoping that someone else made a dedicated tool for similar comparisons on the topic you're interested in, and that the information hardcoded into that tool is current and accurate. Notably, the tools you use for searching have no support for comparing.
> so I've noticed that I do want a map that works like a map - something I can explore and annotate.
I don't doubt that there are use cases for a map that works this way. Even if Google Maps covers 80-90% of the use-cases for mapping, mapping is an absolutely massive domain. 10-20% of use-cases still represents a huge volume.
But it doesn't have to be Google Maps. It actually seems worse to me for one "maps" app to try to handle all possible use-cases for a map.
Why isn't there a separate different tool that handles the use-case you describe?
I guess, going back to the original thesis: what would be the "1983" replication of what Google Maps does, but faster? Or, what would be the "1983" version of the mapping behavior you wanted?
In the thread they say:
> in 1998 if you were planning a trip you might have gotten out a paper road map and put marks on it for interesting locations along the way
I'd argue that this use-case still exists. Paper road maps haven't gone away, so this is still an option. People largely don't use this and prefer Google Maps or other digital mapping tools for most of their problems. Why? If you gave me both the 1998 tools and the 2020 tools, for 95% of the options I'm going to use digital tools to solve it because they let me solve my problems faster and easier. I know this because I have easy access to paper maps and I never touch them. Because they're largely worse at the job.
> There's a very subtle point the Twitter thread was making here. This use case may be most popular not because it's what the people want, but because it's all that they can easily do. The tools you use shape how you work, and what you can work on.
Ultimately, my point above is my response to that. None of the old tools are gone. Paper maps are still available. And yet they have been largely abandoned by the large majority of the population. I agree that there are limitations to our current digital tools, and I hope in 2030 we have tools that do what the article describes. But the 1983 version of the tools are worse for solving problems than the current tools, for most people.
Pretty much all games in the early 80's had [so-called] pixel-perfect scrolling. Each frame showed exactly what was required.
Today it is entirely acceptable for a map to be a jerky, stuttering pile of crap. The same goes for the infinite-scroll implementations. It's preposterous to start loading things after they are needed.
There is a good analogy with making things in the physical world. The professional doesn't start a job before he has everything he needs to do it, the amateur obtains what he needs after he needs them.
Games have a huge advantage that mapping applications don't: a constrained application domain. You can get pixel-perfect scrolling when you constrain the max rate at which the user can pass through the dataset, deny them random access into the dataset, and your dataset isn't trying to represent a space the volume of planet Earth, etc.
There's a huge gulf between the use cases you're comparing here, and I don't believe for one second that loading the Google Maps dataset into Commander Keen's engine would make for a better experience.
(Also, not to be overly pedantic, but "The professional doesn't start a job before he has everything he needs to do it" pretty much classifies all building construction as unprofessional. The professional doesn't magically have everything on-hand, especially bulky or expensive resources; they have a plan for acquiring them at reasonable rates of input and mitigation strategies if that plan can't be followed)
I'll ignore the pedantic part since it was just an analogy; if it doesn't work for you there is little to talk about.
> Games have huge advantages of ....
I have thoughts like that but I consider them "making up excuses". You don't have to see it that way but I can't see someone fix a problem by making up excuses for it to exist. For me it is just like you can always come up with an excuse not to do something.
8 gigabytes of memory / 64 kilobytes of memory = 125 000 times as much memory.
14 gigahertz (4 cores x 3.5 GHz) / 1.023 MHz = 13 685 times as much processor power.
4108 gigahertz (2560 CUDA cores x 1605 MHz) / 2 MHz = 2 054 400 times as much video power.
Can I just call 2 Mhz memory bandwidth 16 Mbit/s?
If so, 500 Mbit / 16 Mbit = 31.25 fold the bandwidth
We are not rendering many layers of colorful animated game content. A map is just a bunch of boring lines. The modern screen however is a lot bigger. I see a glimmer of hope for an excuse!
320x200 px = 64000 px
1920x1080 px = 2073600 px
2073600 / 64000 = 32.4 times the screen size
meh?
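For what it's worth, the ratios above are easy to re-check; a quick sketch of the same back-of-the-envelope math (decimal units, assuming bc is on hand):

    echo "8*10^9 / (64*10^3)"          | bc   # memory: 125000x
    echo "14*10^9 / (1.023*10^6)"      | bc   # CPU:    ~13685x
    echo "(2560*1605*10^6) / (2*10^6)" | bc   # video:  ~2054400x
    echo "(1920*1080) / (320*200)"     | bc   # pixels: ~32x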
We must applaud everyone involved in making all this hardware progress. It truly blows the mind and defies belief. No one could have imagined this.
Then came weee the software people and... and.....
I'm cringing too hard to continue writing this post.
The numbers don't lie, we suck. Let's leave it at that.
I'm still happy with the configuration we have where my map is a little slower than maybe I'd like it to be (though honestly, I just loaded maps.google.com and moused around randomly and... it's fine? Certainly not so slow I'm bothered by it) but the mapping app also can't crash my computer due to the three layers of abstraction it's running on top of. Because that would suck.
If you're curious where the time goes, btw... Most of the visible delay in Google Maps can be seen by popping the browser inspector and watching the network tab. Maps fetches many thin slices of data (over 1,000 in my test), which is a sub-optimal way to do networking that adds a ton of overhead. So if they wanted to improve maps significantly, switching out for one of the other protocols Google has that allows batching over a single long-lived connection and changing the client and server logic to batch more intelligently could do it. I doubt they will because most users are fine with the sub-three-second load times (and engineering time not spent on solving a problem most users don't care about is time spent on solving problems users do care about). You're seeking perfection in a realm where users don't care and claiming the engineers who don't pursue it "suck;" I'd say those engineers are just busy solving the right problem and you're more interested in the wrong problem. By all means, make your mapping application perfect, as long as you understand why the one put out by a company with a thousand irons in the fire didn't.
Also, I think the analogy was great, but you reached the wrong conclusion. ;) That is how large-scale engineering works. Scheduling becomes the dominant phenomenon in end-to-end performance. Games have huge advantages in constraining the scheduling problem. General-purpose apps do not. Hell, to see this in action in a game: Second Life's performance is crap because the whole world is hyper-malleable, so the game engine cannot predict or pre-schedule anything.
Since it is software nothing is set in stone, everything can be changed, we know how to do it really really fast and really really efficiently.
People did incredible things to improve almost everything.
To me this means all of the performance loss is there for no reason. All we need is for people to stop making excuses. I for one know how to get out of the way when better men are trying to get work done.
You are the engineer if not the artist, impress me! Impress the hardware people! Impress the people who know your field. Some attention to consumers wishes is good but Gustave Eiffel didn't build his tower because consumers wanted that from him.
Why would you even have tap water if the well is down the street? A horse and carriage is just fine, it is good enough for what people need. no? What if our doctors measured their effort in "good enough's" and living up to consumer expectation only?
The hardware folk build a warp capable star ship and we are using it to do shopping on the corner store because that was what mum wanted. Of course there is no need to even go to Mars. It's missing the point entirely you see?
> pretty much all games in the early 80's had [so called] pixel perfect scrolling. Each frame showed exactly what was required.
> Today it is entirely acceptable for a map to be a jerky stuttering pile of crap. The same goes for the infinite scroll implementations. Its preposterous to start loading things after they are needed.
This doesn't make any sense to me. In which game from the early 80's could I view accurate map data for any region on the earth, and quickly scroll across the planet without any loading artifacts?
Of course you can manage pixel-perfect scrolling if all of your data is local and fits in memory. That's not anywhere close to the same domain as maps.
> ... I do want a map that works like a map - something I can explore and annotate.
You can do that with Google maps, if you're logged in. You can create a custom map, with multiple places marked, and with annotations. You get a URL for it. And looking at it later, you can zoom in as needed.
>FWIW, I learned to pay attention when the machine doesn't help me do what I want (it's a good source of ideas for side projects), so I've noticed that I do want a map that works like a map - something I can explore and annotate. I do sometimes resort to screenshotting GMaps, or photographing paper maps in the past, just to have a map on my phone.
How is a world where search is convenient and automated but comparison isn't less accessible than a world where neither search nor comparison are convenient and automated?
because it's not practical to write a Google Maps (or whatever) replacement most of the time? It's a ton of work. Sure we can smooth over rough edges but actually implementing lots of missing non-trivial things? Or, making things faster that rely on some 3rd party back end? Usually either not an option, or not an option without recreating the entire front end.
As it turns out, that is probably the most popular use case for maps in the world.
Your sentence starts out like you're stating a fact, but then peters out with "probably."
Do you have data on this?
I'd posit the opposite: That exploration is far more used in online maps.
Aside from Uber drivers, SV types, and wannabe road warrior squinters, nobody uses maps for their daily activities. People know where they're going and they go there without consulting technology. That's why we have traffic jams.
I think this isn't true at all. The vast majority of people I know use maps solely to find something like an ATM/gas station/coffee shop/etc. and then to figure out how to get there or how long it would take to get there.
We don't have data, but the only people that do are Google and they have designed their UX around this use case. And it has become one of the most used tools on Earth. If we are going to play the 'your comment is bad because you don't have data' game, I think the onus is really on you to prove that the company with all the data is getting it wrong.
> Aside from Uber drivers, SV types, and wannabe road warrior squinters, nobody uses maps for their daily activities. People know where they're going and they go there without consulting technology. That's why we have traffic jams.
I might know where I'm going but I don't always know how to get there so I use Google Maps all the time. I don't use it for my daily commute but if I'm going to a friend's or something I'll usually use it.
When I sit in my friends cars we also use it all the time. Often we're going to a restaurant or some other location that we don't often go to.
For exploration my friends pretty much just use Yelp or Google Search. I sometimes but rarely use Google Maps for this because I find that the reviews are usually much lower quality and Google Maps is too slow (I have an old Pixel 2)
I think it depends where you are driving. I live in a large city and there are way more people using maps for their daily activities than you might think. In my parents' neighborhood in the suburbs, and in rural areas, I think you're probably right. If there are only a few ways to get somewhere, you probably aren't using maps.
> Your sentence starts out like you're stating a fact, but then peters out with "probably."
Fair enough. I'd argue that I put the word probably in the first half of the sentence, so I don't see it as "petering out", but fair enough.
I will agree with the critique that I'm making an assumption about mapping use cases, and don't have hard data. I'm happy to be corrected by any real data on the topic.
People use maps for daily activities all the time. Not to find where they're trying to get to, but for how to get somewhere more quickly - ie. public transport directions, or if it's quicker to walk than go by bus/train/tram.
> As it turns out, that is probably the most popular use case for maps in the world.
I don't think fully deciding an entire path was ever the major use of paper maps. But well, now that apps only allow for this usage, I am pretty sure it's the most popular one.
Personally, I very rarely use maps this way. Which makes the Google shit useless for me (worse, actually, because it insists on opening by default, people have now decided they can send a Google marker instead of an address, and Google makes it impossible to retrieve the data from a marker).
Most people I know, faced with the option of turn by turn directions or nothing choose nothing nearly all the time.
Personally, I never use turn-by-turn even to walk around in unknown places, because it loses context. Instead, I stop at the screen that shows the route options; I don't press "Start" to activate the nav. Besides being more useful when walking around, it doesn't eat such ridiculous amounts of battery power that the turn-by-turn nav does for some reason.
That's the major use case for mobile GMaps because it absolutely sucks for any other use case, and thus people can't really use it for any other purpose. Except maybe looking up timetables for a previously known public transport stop (only in certain countries; some have much superior ways to plan public transport travel); planning transport between two stops when you're not standing at one of them right now probably requires a PhD.
Also, if your mapping app is good, people will use it as little as possible. That's certainly my desire as a tourist. So maybe perverse incentives are at play, if the wrong usage metrics are used to improve the app.
Anyway, I'm used to using paper maps from the childhood, so ergonomics of mobile phone apps is really irritating, because of how paper maps work so much better for me most of the time.
I think you’re measuring use-cases by volume (i.e. Monthly Active Users) instead of by mass (i.e. number of man-hours spent engaged with the app’s UI by those users.)
Certainly, a lot of people use Google Maps for turn-by-turn directions. This means that they interact with the actual UI and visible map tiles once, at the beginning of the trip; and then from there on are given timely voice directions every few minutes, which they can maybe contextualize by looking at the map on the screen. Even if you count the time they spend hearing those directions as time spent “interacting with the UI” of Google Maps, it adds up to fewer man-hours than you’d think.
Meanwhile, I believe that there are a much larger number of collective man-hours spent staring at the Google Maps UI—actually poking and prodding at it—by pedestrians navigating unfamiliar cities, or unfamiliar places in their city. Tourists, people with new jobs, people told to meet their friends for dinner somewhere; etc.
And the Google Maps UI (especially the map tiles themselves) is horrible for pedestrians. Half the time you can’t even figure out the name of the road/street you’re standing on; names of arterial roads (like main streets that happen to also be technically highways) only show up at low zoom levels, while names of small streets barely show up at the highest zoom level. And asking Maps to give you a pedestrian or public-transit route to a particular place doesn’t fix this, because GMaps just doesn’t understand what can or cannot be walked through. It thinks public parks are solid obstacles (no roads!) while happily routing you along maintenance paths for subway systems, rail lines, and even airfields. (One time it guided me to walk down the side of an above-grade freeway, outside the concrete side-barriers, squeezing between the barriers and a forest.) And, of course, it still assumes the “entrances” to an address are the car entrances—so, for example, it routes pedestrians to the back alleys behind apartment buildings (because that’s more often where the parking-garage entrance is) rather than the front, where the door is. I don’t live here, Google; I can’t even get into the garage!
The thing is, these are such distinct workflows that there’s no reason for Google Maps to be optimizing for one use-case over the other in the first place. It’s immediately apparent which one you’re attempting by your actions upon opening the app; so why not just offer one experience (and set of map tiles) for people attempting car navigation, and a different experience (and set of map tiles) for people attempting pedestrian wayfinding?
Or, someone could just come out with a wayfinding app for pedestrians that does its own map rendering. There’s already a Transit app with a UI (and map tiles) optimized for transit-takers; why not a Walkthere app with a UI (and map tiles) optimized for pedestrians? :)
> Or, someone could just come out with a wayfinding app for pedestrians that does its own map rendering. There’s already a Transit app with a UI (and map tiles) optimized for transit-takers; why not a Walkthere app with a UI (and map tiles) optimized for pedestrians? :)
This wouldn't really make me happy. It makes more sense to integrate the walking instructions into the Transit app and be good at giving directions for multimodal transport. I need to know if I should get off the bus here, and walk through the park, or wait till three stops later, which leaves me closer as the crow flies but further away overall. The car app doesn't need to work multimodally since it's not normal to drive somewhere, walk 10 minutes, then drive somewhere else.
Google maps is still the best general purpose multimodal transport app I've used, but it could be so much better. I'm in Austria right now and it doesn't know about the Austrian buses. There's an app (OEBB Scotty) from the Austrian rail operator which I assume everyone uses instead.
Honestly, the two use cases have an obvious intersection: a planning/wayfinding session in which I want Google to compute me a route that I want to then inspect, perhaps modify, and then save into my planning session.
The only thing I'd like is for maps to not give directions for the parts that are obvious to you because you've done them a million times, e.g. from your home, turn right to get to the freeway.
> Huh. I'd say the examples are perfectly good and on-point. While dealing with autoexec.bat and random BSODs wasn't fun, it's entirely orthogonal to the fact that a DOS-era POS still offers orders of magnitude better UX than current-era browser POSes, or than most web apps for that matter.
I know of an ERP system that somehow manages to take about 15 seconds to search an inventory of ~100k items. If you export all those items to CSV, with all their attributes (most of which are not searched), the resulting file is about 15 MB.
It is boggling how they managed to implement search this slowly (in a C++ application using MS SQL as the backend). 3.5 GHz computers, performing plain text string search at about 1 MB/s.
It is even more surprising that users feel this is not a completely unreasonable speed.
(They managed to pull this stunt off by completely not using the SQL database in the intended way, i.e. all tables are essentially (id, blob) tuples, where the blob is a zlib compressed piece of custom TLV encoded data. All data access goes through a bunch of stored procedures, which return data in accordance to a sort of "data extraction string". Search works by re-implementing inverted indices in tables of (word, offset, blob), where blob contains a zlib compressed list of matching IDs; again processed by stored procedures. The client then is wisely implemented using MS SQL's flavour of LIMIT queries which effectively cause a classic quadratic slowdown because the database engine literally has no way to fetch result rows n...m except by constructing the entire result set up to m.
Unsurprisingly the developers of this abomination claim to be competent. They also invented a funny data exchange format involving fixed field lengths and ASCII separators - some time in the 2010s.)
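To make that LIMIT/OFFSET slowdown concrete, here is a minimal, hypothetical sketch in Python (my own illustration, not their schema or stored procedures): if every page of k rows is fetched by materializing everything up to offset + k, paging through n rows touches on the order of n²/2k rows in total.

    # Hypothetical model of OFFSET-style paging where the engine can only
    # fetch rows n..m by constructing the entire result set up to m.
    def fetch_page(rows, offset, limit):
        materialized = rows[:offset + limit]      # O(offset + limit) work per page
        return materialized[offset:], len(materialized)

    def paginate_all(rows, page_size):
        total_work, offset = 0, 0
        while offset < len(rows):
            _page, work = fetch_page(rows, offset, page_size)
            total_work += work
            offset += page_size
        return total_work

    rows = list(range(100_000))
    print(paginate_all(rows, 100))    # ~50 million rows touched to page through 100k

Keyset pagination (filtering on the last key seen, rather than an offset) avoids this, but that presumes using the database as a database.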
Tell me about it! A company I recently did maintenance for pays several thousand € each year for possibly the worst, most broken accounting software, despite not actually having an in-house accountant. The reason: one (1!) person in the company uses the software's (barely functioning) inventory feature and refuses to use anything else.
They're currently considering my offer to develop a custom, pixel-identical system and just replace it without telling her. I could probably do it in a week because she only ever interacts with like 4 out of the at least 100 views the app has, but I suspect she'll catch wind of this and stop it. I don't actually know what she does there besides inventory, but she seems to have more power than anyone else below C-level.
There was a time when I would have been interested in how such a system came to be, but now I think I have been around long enough to guess that someone was protecting their job, and then either got fired anyway or became technical-architect-for-life.
What makes Google Maps bad? My computer in 1983 didn't have a map application at all, how can Google Maps possibly be bad compared to that?
And if I had had a map application (I'm sure they existed) it would have taken several minutes to load from cassette tape. That's not faster than Google Maps, either perceptually or objectively.
The article explains all the ways that make Google Maps' UI bad. Your paper map or driver's atlas in 1983 offered better xref functionality. As they do in 2020, if you can still find them.
Sure, hardware progressed in the past 40 years. CPUs are faster, storage is larger, we have a global network. But it's all irrelevant to the point that Google Maps UI is optimized for being a point-by-point nav/ad delivery tool, not a map. That's an absolute statement, not relative to last century's technology.
I have several atlases and IMO they have considerable advantages over Google Maps. As does Google Maps over them. But none of that is relevant to the matter at hand: "Almost everything on computers is perceptually slower than in 1983".
Thus far in this thread nobody has offered any counterpoint to the speed argument.
The first PC we bought at home was a 133MHz Pentium.
My current box is a Ryzen 3700 at 16*3.6GHz.
It doesn't feel like I have that much more power at my disposal doing everyday things. Web browsers aren't 400 times faster today than Internet Explorer was in 1995.
They should be. Even if you account for all the extra stuff that's going on today things should be at least 50 times faster. Why aren't they?
RAM bandwidth and speed, network latency, and the display sound like the most important factors. If that 133MHz Pentium rendered a web page, it did so at 640×400 pixels, right? 16 colours? Or just in text? So it had to process about 4k (if text) or 128k (if graphics). Your current display involves a little more data.
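A quick back-of-the-envelope comparison (my own numbers; the 1920×1080, 32-bit "current display" is an assumed example, not anything stated above):

    # Rough framebuffer sizes, in bytes (illustrative assumptions only).
    text_mode  = 80 * 25 * 2             # 80x25 text, char + attribute byte -> ~4 KB
    gfx_16col  = 640 * 400 * 4 // 8      # 16 colours = 4 bits per pixel     -> ~128 KB
    modern_fhd = 1920 * 1080 * 32 // 8   # 1080p at 32 bits per pixel        -> ~8.3 MB
    print(text_mode, gfx_16col, modern_fhd)   # 4000, 128000, 8294400

Call it roughly 65 times more pixel data per frame than the 16-colour graphics mode.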
RAM access takes about 10ns now, it took longer back then but not very much longer. Your sixteen cores can do an awful lot as long as they don't need to access RAM, and I doubt that you need sixteen cores to render a web page. The cores are fast, but their speed just removes them further from being a bottleneck, it doesn't really speed up display of text like this page.
And then there's the latency — ping times are a little closer to the speed of light, but haven't shrunk by anything close to a factor of 400.
It's also driven by a graphics card with more RAM than I had HDD space back in the day with a dedicated CPU that's also a whole lot faster than 133MHz.
Every piece of hardware is better but software has bloated up to remove those speed gains, except when it comes to things like AAA games where they're still pushing the envelope. That's the only place you can actually tell you've got hot new hardware, because they're the only ones caring enough about performance.
The increase in video RAM requires more of the CPU and GPU. Downloading and displaying a JPEG at today's resolution requires more of everything, not just video RAM.
Anyway, if you come over to the server side you can see other code that performs very differently than it could have back in 1983. Sometimes unimaginably different — how would a service like tineye.com have been implemented a few decades ago?
The point I'm making is that my desktop PC sitting in my office now does have more of everything compared to my 133MHz PC from 1995. Not everything has scaled up at the same pace, sure, but literally every piece of hardware is better now.
People talk about the difference in resolution and color depth? 640x480x16 isn't that much less than 1920x1080x32. My current resolution has 13 times more data than my 1995 one, and my HW can handle refreshing it 120 times per second and filling it with millions of beautiful anti-aliased polygons all interacting with each other with simulated physics and dozens of shaders applied, while calculating AI behaviour, path finding and thousands of RNG rolls, streaming data to and from disk, and syncing everything over a network which is still limited by the speed of light. As long as I play Path of Exile, that is.
Opening desktop software is perceptually the same as in the 90s. From launching to usable state is about the same amount of time, and it's not doing so much more that it can explain why current software takes so long.
If I can play Path of Exile at 120fps it's obviously not an issue of HW scaling or not being able to achieve performance.
Who knows? My hunch is there are two main factors influencing this. The first is that constraints breed creativity. If you know you only have 133MHz on a single CPU, you squeeze as much as possible out of every cycle; on modern CPUs, what's a few thousand cycles between friends?
The second is SDK/framework/etc. bloat, which is probably influenced by the first. With excess cycles you don't care if your tools start to bloat.
I think it's primarily an issue of attitude. If you want to write fast software you'll do it, regardless of the circumstances. It all starts with wanting it.
I worked on a framework in the nineties and did such things as render letters to pixels. Here are some of the optimisations we did then, compared to now:
We used much lower output resolution.
We used integer math instead of floating point, reducing legibility. How paragraphs were wrapped depended on whether we rendered it on this monitor or that, or printed it.
We used prescaled fonts instead of freely scalable fonts for the most important sizes, and font formats that were designed for quick scaling rather than high-quality results. When users bought a new, better monitor they could get worse text appearance, because no longer was there a hand-optimised prescaled font for their most-used font size.
We used fonts with small repertoires. No emoji, often not even € or —, and many users had to make up their minds whether they wanted the ability to type ö or ø long before they started writing.
Those optimisations (and the others — that list is far from complete) cost a lot of time for the people who spent time writing code or manually scaling fonts, and led to worse results for the users.
I think you're the kind of person who wouldn't dream of actually using anything other than antialiased text with freely scalable fonts and subpixel interletter space. You just complain that today's frameworks don't provide the old fast code that you wouldn't use, and think developers are somehow to blame for not wanting to write that code.
Perfectly well? Really? Scrolling around the map or zooming causes you no kind of rendering delays or artefacts? It feels consistently snappy no matter what you do?
Apparently Microsoft Autoroute was first released in 1988, covered several dozen countries, and could be obtained by ordering it. Thus using it for the first time would involve a delay of at least a day in order to order, receive and install the program. After that, starting it should be quick, but I can't tell whether it required inserting the CD into the drive. Even if the application is already installed on the PC and not copy-protected, looking something up doesn't sound obviously faster than opening a web page, typing the name of the location into the search box, and waiting for the result.
And you had to wait for new CDs to arrive by mail whenever roads changed. And I'm not talking "2 day delivery Amazon with UPS updates" here, I'm talking you send an envelope + check into the mail and maybe a month from now you get a CD back.
It didn't actually work that well. The internet is the only real way you can get a Google Maps-like autoroute feature with reasonable update times. Constantly buying new CDs and DVDs on a subscription basis is a no-go for sure. I don't think anyone's 56k modem was fast enough to update the map data.
Even if you bought the CDs for the updated map data, it was only updated every year IIRC. So there were plenty of roads that simply were wrong. It's been a long time since I used it, but Google Maps is better at the actual core feature: having up-to-date maps and up-to-date route information.
Hint: Microsoft wasn't sending around Google cars to build up its database of maps in the 90s. Nor were there public satellite images released by the government to serve as a starting point for map data. (Satellite imagery was pure spycraft. We knew governments could do it, but normal people did NOT have access to that data yet). The maps were simply not as accurate as what we have today, not by a long shot.
--------
Has anyone here criticizing new software actually lived through the 90s? There's a reason that typical people didn't use Microsoft Autoroute and other map programs. Not only was it super expensive, it required computer know-how that wasn't really common yet. And even when you got everything lined up just right, it still had warts.
The only things from the 90s that were unambiguously better than today's stuff were, like... Microsoft Encarta, Chip's Challenge and Space Cadet Pinball. Almost everything else is better today.
With browsers the network latency caps the speed ultimately, no matter how fast a CPU you have. Also HDD/SSDs are very slow compared to the CPU caches. Granted PCs of the previous era also had the same limitation but their processors were not fast enough to run a browser 400 times faster if only the HDD wasn't there.
But other simpler programs should be very much faster. That they perceptually aren't is because (IMO) code size and data size has increased almost exponentially while CPU caches and main memory speeds haven't kept up.
Main memory speeds haven't increased like CPU speeds have, but they're nowhere close to where they were in 1995. You can get CPUs today with larger caches than I had RAM back then, as well.
I know that CPU speed isn't everything and so a 400x speedup is not reasonable to expect. That's why I hedged and said 50x.
Every part of my computer is a lot faster than it was back then and I can barely tell unless I'm using software originating from that era because they wrote their code to run on constrained hardware which means it's flying right now.
It's like we've all gone from driving 80 MPH in an old beater Datsun to driving 30 MPH in a Porsche and just shrug because this is what driving is like now.
There was the BBC Domesday Project, which let you scroll around annotated maps on a BBC Micro. They loaded in near-realtime from LaserDisc. It was fantastically expensive, and also so tightly tied to its technology that it was impossible to emulate for years.
I believe around 2010 the BBC managed to get it on the web, but this seems to have died again.
> Huh. I'd say the examples are perfectly good and on-point. While dealing with autoexec.bat and random BSODs wasn't fun, it's entirely orthogonal to the fact that a DOS-era POS still offers orders of magnitude better UX than current-era browser POSes, or than most web apps for that matter.
What are you talking about? How in the world is a DOS that can only run a single app at a time better than a system that can run dozens of apps at once? How is non-multitasking a better experience? I remember DOS pretty well. I remember trying to configure Expanded Memory vs Extended Memory. Having to wait for dial-up to literally dial up the target machine.
Edit: I didn't realize the poster was talking about Point-of-Sale devices. So the above rant is aimed incorrectly.
> It also doesn't change the fact that Google Maps is very bad at being a map. It's entire UI flow is oriented for giving turn-by-turn directions for people who know where they are and where they are going;
That's exactly what it's made for.
> As someone who builds their own PC every couple years: it still does. It's actually worse now,
No way. Things just work nowadays. Operating systems have generic drivers that work well. It's so much easier now to build a machine than it was years ago. I remember taking days to get something up and running, but now it's minutes. Maybe an hour?
I really hate these "good old days" posts, no matter what the subject. The days of the past weren't better, there were just fewer choices.
> What are you talking about? How in the world is a DOS that can only run a single app at a time better than a system that can run dozens of apps at once? How is non-multitasking a better experience?
The person you were replying to was specifically talking about POS (Point of Sale) systems.
Retail workers use these systems to do repetitive tasks as quickly as possible. Legacy systems tend to be a lot faster (and more keyboard accessible) than modern web-based systems.
It is not uncommon for retail workers to have a standard Windows workstation these days so they can reference things on the company website, but then also shell into a proper POS system.
In no way shape or form does DOS-era UX beat current-era UX (there may be specific programs that do so, but writ large this is incorrect). Getting away from command lines is one of the core reasons that computing exploded. Getting farther away from abstraction with touch is another reason that computing is exploding further.
Command-line systems simply do not map well to a lot of users' mental models. In particular, they suffer from very poor discoverability.
It is true that if you are trained up on a command-line system you can often do specific actions and commands quickly but without training and documentation, it is often quite hard to know what to do and why. Command-line systems also provide feedback that is hard for many people to understand.
Yes, it is true that many current-era systems violate these guidelines as well. This is because UX has become too visual design focused in recent years, but that's a symptom of execs not understanding the true value of design.
> Command-line systems simply do not map well to a lot of users' mental models. In particular, they suffer from very poor discoverability.
I don't want to push too hard into one extreme here, but I believe modern design makes a mistake of being in the other extreme. Namely, it assumes as an axiom that software needs to be fully understandable and discoverable by a random person from the street in 5 minutes. But that only makes sense for the most trivial, toy-like software. "Click to show a small selection of items, click to add it to order, click to pay for order" thing. Anything you want to use to actually produce things requires some mental effort to understand; the more powerful a tool is, the more learning is needed - conversely, the less learning is needed, the less powerful the tool.
It's not like random people can't learn things. Two big examples: videogames and workplace software. If you look at videogames, particularly pretty niche ones (like roguelikes), you'll see people happily learning to use highly optimized and non-discoverable UIs. Some of the stuff you do in 4x or roguelike games rivals the stuff you'd do in an ERP system, except the game UI tends to be much faster and more pleasant to use - because it's optimized for maximum efficiency. As for workplace software, people learn all kinds of garbage UIs because they have no choice but to do so. This is not an argument for garbage UIs, but it's an argument for focusing less on dumbing UIs down, and more on making them efficient (after all, for software used regularly at a job, an inefficient UI is literally wasting people's lives and companies' money at the same time).
> While dealing with autoexec.bat and random BSODs wasn't fun, it's entirely orthogonal to the fact that a DOS-era POS still offers orders of magnitude better UX than current-era browser POSes, or than most web apps for that matter.
These aren't orthogonal in any way. A significant chunk of modern performance hit relative to older Von Neumann architectures is the cross-validation, security models, encapsulation, abstraction, and standardization that make BSODs an extremely unexpected event indicative of your hardware physically malfunctioning that they have become (as opposed to "Oops, Adobe reverse-engineered a standard API call and botched forcing bytes directly into it when bypassing the call library's sanity-checking; time to hard-lock the whole machine because that's the only way we can deal with common errors in this shared-memory, shared-resource computing architecture!").
The time that matters isn't how long it takes to restart the app; it's how many hours of changes just got eaten because the app crashed and the data was either resident in memory only or the crash corrupted the save file (the latter scenario, again, being more common in the past where correctly shunting the right bytes to disk was a dance done between "application" code and "OS toolkit" code, not the responsibility of an isolated kernel that has safeguards against interference in mission-critical tasks).
OTOH, lower runtime performance of modern apps eats into how much people can produce with them - both directly and indirectly, by slowing the feedback loop ever so slightly.
While there are a couple of extra layers of abstraction on our systems that make them safer and more stable, hardware has accelerated far more than enough to compensate. Today's software does not need to be as slow as it is.
In general, people will trade a high-variance cost for a fixed, predictable one, so even if the slower tools are nibbling at our productivity, it's preferable to moving fast and breaking things.
I'm not claiming there's no room for optimization, but 90% of the things that make optimization challenging make the system reliable.
> a DOS-era POS still offers orders of magnitude better UX than current-era browser POSes, or than most web apps for that matter.
As someone who helped with the transition from crappy text-only DOS interfaces on POSes to graphical interfaces on POSes, I have to disagree. Learning those interfaces was terrible. I don't know where the author gets the idea that even a beginner had no trouble learning them. I worked at NCR during the switchover from the old system to the new one that used graphical representations of things and the new system was way easier to use and for beginners to learn. And that's not just something we felt, we measured it. (Interestingly, it was still DOS-based in its first incarnation, but it was entirely graphical.)
For software that's used professionally, it does not matter how easy it is for a beginner to learn. That's an up-front investment that needs to be made once. What matters is the ongoing efficiency and ergonomics of use.
You can never really make a map for "exploring" because what people want to explore is domain specific. Do you want to explore mountains? Do you want to explore museums, do you want to explore beaches?
There is too much stuff in the world to present on a map without absolutely overwhelming people, in the same way that the internet is too vast to "explore". You can't explore the internet from Google Search; you've got to have some vague starting point as a minimum.
> There is too much stuff in the world to present on a map without absolutely overwhelming people
Nevertheless, I think that Google Maps et al. typically show far too little on the screen. My Dad tells me that he still prefers paper maps because he doesn't have to fiddle around with the zoom level to get things to show up. While I'm sure that Google has more data on most places than it could fit in a small window, when I look at my location in Google Maps, it looks very barren: in fact, it seems to prioritize showing me little 3d models of buildings over things that I care about, like street and place names. Paper maps are typically much denser, but I don't think that people 30 years ago were constantly "overwhelmed" by them.
In a world where you can plot a pin of yourself on the map via GPS, the need for street names, building names etc. at higher zoom levels just doesn't matter, because you don't need them to figure out where you actually are; with paper maps you did.
If adding that information to the map serves no need anymore but clutters it up, why do it?
Google Maps' refusal to show major(!) street names is infuriating. I don't need that information to determine where I am, I need it to figure out what street signs I should be looking for so that I can get to where I'm going. And the really infuriating thing is that it shows one random useless street somewhere until I zoom in so far that the street I want is the only thing on the screen.
I fired Google Maps and now use Apple Maps unless I'm looking for a business by name that I think is not mainstream enough to be on Apple Maps.
> offers orders of magnitude better UX than current-era browser
And they also did orders of magnitude less. If you've ever done UX, you would know that it gets exponentially harder as you add more features. People love to complain, but not a single person would realistically give up on all the new features they've got for the slightly better UX in certain edge cases.
Sure. Most popular on-line stores seem to adopt the "material design" pattern, in which an item on a list looks like this:
+---------+ ITEM NAME [-20% OFF][RECOMMENDED]
| A photo |
| of the | Some minimal description. PRICE (FAKE SALE)
| item. | Sometimes something extra. ACTUAL PRICE
| |
+---------+ <some> <store-specific> <icons> [ADD TO CART]
I started to write userstyles that turn them all into:
ITEM NAME Some minimal description ACTUAL PRICE [ADD TO CART]
Sometimes something extra (with a much smaller font).
The userstyles get rid of the photos and bullshit salesy disinformation, and reduce margins, paddings and font sizes. This reduces the size of the line item 5-6x while preserving all the important information; it means I can now fit 20-30 line items on my screen, where originally I was able to fit 4-5. With refined enough search criteria, I can fit all results on the screen and compare them without scrolling.
If it truly was worse at being a map, people would still use physical maps, at least in some scenarios. I have never met a person who still uses a physical map for anything more than a wall decoration.
> That I don't get. Is it "elitist" now to point out that the (tech) "elite" can actually handle all this bullshit
It's easy to critique and complain, and doing so puts someone in a position of superiority. Saying "all the effort and knowledge out there is bullshit because my 1983 terminal searched text faster" implies, in a way, that the person writing it knows better, and thus is elite. OP says it's disguised as disappointment, which I agree with, because the author describes (and disguises) the situation from a frustration point of view.
But I also think that "elitism" could be swapped for "snobbery" in this case.
>But I also don't need to spend an entire weekend setting up a PC and a printer for my mom anymore either.
You still have to waste a lot of time doing this crap. Last time I set up a printer for my mom, it was a PITA: I had to download drivers, install some giant pile of crap software, go through a bunch of screens of garbage, etc. Needless to say, her computer runs Windows 10.
By contrast, when I wanted to print to that same printer with my own laptop (running Linux Mint), it was simple: there was absolutely no extra software to install, and all I had to do was go to my Settings->Printers, search for printers on the network (took a few seconds), notice that it found the (WiFi) printer, verify it was an HP and whatever model, and then it was done.
Things could be much faster and simpler than they are now. They aren't, because it's not profitable for various actors for it to be, and because most users accept this. Most users are happy to have gigabytes of malware and spyware on their systems, wasting CPU cycles reporting their behavior back to Microsoft or their printer vendor or whoever, and that's a big reason things are so much slower than in 1983.
So if I'm being "elitist" to point out that spyware is a bad thing, then I'll happily accept that moniker.
Personally I would recommend Brother every time for a printer. They're not fancy but they are well built and don't require you to install a software suite. My roommate and I are still using the same laser printer from college. It's not wireless, but if we really wanted to we could attach a Pi and set up a printer pool; for now a long cable serves us just fine for the few times we need to print.
+1 on Brother printers. I'm on my second Brother laser printer and It Just Works. I also have a Canon Ink Jet that is mostly harmless. My sister's HP printer, however...
A tangent but still - what does make a printer fancy? Brother has models with every bell and whistle everything else has as well - color LCDs, wifi, airprint. What else is there that would make it fancier?
Some systems require a big software suite or make it very hard to download basic drivers, so you end up with a "fancy" bunch of software - yes, it lets you do a few more things, but not a ton. You can also get printers where you pay by the page now, with full telematics back to the printer owner. The one issue: you stop paying and the printer stops working. It's pretty cool, but the overhead (everything has to be working, including the internet connection, the credit card not being changed, etc.) means it's more brittle.
Thanks, I'll be in the market soon and this is exactly the sort of info I was looking for. Since you mention "the few times we need to print", is it safe to assume that their kit is fine with very very infrequent use? Inkjets really didn't like this, and I don't know much about lasers.
Inkjets use liquid ink, and if the ink dries on the spray nozzle, it's dead. This process takes about a month. If you're lucky, the nozzle is part of the cartridge and you need to spend $100 (or more) on new cartridges. Otherwise you need to buy a new printer. Some printers have a mode where they'll spray a little bit through the nozzle if you haven't used the printer in ~2 weeks, but they need to be plugged in.
Laser printers use dry ink that never... gets more dry. I pulled my Brother out of storage after 2+ years and it worked great.
Toner is also considerably cheaper than inkjet ink, and lasts significantly longer. I haven't bought new toner in 8 years.
Personally, I use a black and white laser printer, and if I really, really need to print in color I'll do it at work. (happens basically never) I recognize not everybody has this luxury, and some people have far more need to print in color than I do.
If your color printing volume is high enough to keep the nozzles in good shape, you're probably still better off with a color laser printer, because toner is so much cheaper than ink. If you don't print in color that much, it's a terrible, terrible idea to buy an inkjet printer.
Sorry, but a bit of a pedantic note here: laser printers do not use dry ink. They don't use ink at all; they use "toner". Toner is really nothing more than microscopic particles of colored plastic. The printer uses electrostatic attraction to put the particles on a sheet of paper in a pattern, and then a "fuser" (a small heater) to melt the plastic so that it binds to the paper, without catching the paper on fire. So toner never goes bad because it's really nothing more than dust.
As for your B&W laser, it used to be that color lasers were horribly expensive so only companies had them. These days, color lasers have gotten pretty cheap, and aren't that much more than the B&W lasers. My Brother was about $200 IIRC. Of course, you can get a small B&W for under $100 now, but still, $200-300 is not budget-breaker for anyone in the IT industry. So even if you don't really need color that much, if you're in the market for a printer, I'd advise just spending the extra money and getting the color model, unless you really want your printer to be small (the color models are usually a lot larger, because of the separate toner cartridges).
I would never advise using an inkjet unless you really need to. They're a terrible deal financially; the only thing they're better at is costing less initially, but the consumables are very expensive and don't last long. They do make sense for some high-end high-volume applications, but those use more industrial-sized printers with continuous-flow ink, not small consumer printers with overpriced ink carts. Honestly, consumer inkjets are probably the biggest scam in all of computing history.
The last printer I had to configure on my Linux machine required finding and downloading a hidden mystery bash script from the manufacturer's website and running it with root permissions. Not exactly "plug and play" or "safe".
Then you got the wrong printer. Some manufacturers do a good job of supporting Linux (HP is probably the best actually; their drivers all ship standard in most distros), others don't bother.
I guess you don't like Apple evangelists either: you can't just buy some random hardware and just expect it to work on your MacBook. It only works that way on Windows because Windows has ~90% of the desktop market, so of course every hardware maker is going to make sure it works on Windows first, and anything else is a distant maybe.
Would you buy an auto part without making sure first that it'll work on your year/make/model of car?
Sure, it was "plug and play", but like every printer, it took some work: you have to download and install a driver package and software suite, because this stuff isn't included in the Windows OS. This simply isn't the case in Linux: for many printers (particularly HPs), the drivers are already there.
Did you run Windows Update? Windows used to ship all drivers with the OS, but since 2018 it doesn't by default, and instead matches drivers to your hardware from the cloud. You still might not have had to run the software suite; Windows might have been able to match and ship your drivers from the cloud.
I'm pretty sure that M$ has a bigger driver database than linux.
I have an HP LaserJet 1018 and it is a pain to set up. Currently, when I need to print, I just turn on the printer and then upload the firmware manually using cat to make it work.
>Last time I set up a printer for my mom, it was a PITA: I had to download drivers, install some giant pile of crap software, go through a bunch of screens of garbage, etc. Needless to say, her computer runs Windows 10.
Weird. Printer drivers usually auto-install for me on Windows 10.
> So if I'm being "elitist" to point out that spyware is a bad thing, then I'll happily accept that moniker.
The article isn't talking about spyware. It's a chuntering rant about "Kids these days" with no analysis of why things are the way they are. Spyware doesn't fit, because it's doing exactly what it's meant to do, and the question of whether it pisses you off is secondary at best as long as you're convinced there's no alternative to the kinds of software which bundles spyware as part of you getting what you pay for.
Today basically every piece of external hardware is compatible with pretty much every computer. Better than that, they are plug and play for the most part. Thanks for reminding me to properly appreciate this.
I'm not convinced by your argument because the examples you cite of things having improved since these times have hardly anything to do with having a text interface feeling sluggish. There's no contradiction here, you could have the best of both world (and if you eschew web technologies you generally get that nowadays).
Are these modern Electron apps chugging along using multi-GB of RAM and perceptible user input latency because they're auto-configuring my printer in the background? Can twitter.com justify making me wait several seconds while it's loading literally megabytes of assets to display a ~100byte text message because it's auto-configuring my graphic card IRQ for me?
No, these apps are bloated because they're easier to program that way. Justifying that with user convenience is a dishonest cop-out. If you rewrote these apps with native, moderately optimized code there's nothing that would make them less user-friendly. On the other hand they'd use an order of magnitude less RAM and would probably be a few orders of magnitude faster.
It's fine to say that developer time is precious and it's fine to do it that way but justifying it citing printer drivers of all things is just absurd.
> Remember endless configuration to get one program working? Remember the computer just throwing its hands up and giving up when you gave it input that wasn't exactly what it expected? Remember hardware compatibility issues and how badly it affected system stability? Remember when building a computer required work and research and took hours?
There are examples, but it's no longer the de facto experience. I have put PCs together from parts since about the 386 era, and there's absolutely no question that it's far smoother now than it ever was before.
Things were getting pretty good for a bit with HDMI, but the USB-C, DisplayPort, HDMI mess is a real shitshow. Sure, I don't have to set IRQs on my ISA attached serial cards, but there are still plenty of things that are really unstable/broken.
Don't forget that there is a sub-shitshow inside USB-C cabling all of their own, as well as DisplayPort enabled USB-C, docks that do some modes but not others, monitors that run 30hz on HDMI but 59hz on DisplayPort....
Just yesterday I couldn't connect my laptop to a new classroom projector with a USB-C input. When I selected "Duplicate Display" it caused my whole system to hang unresponsive. Works fine in other rooms with HDMI to USB-C. I have no idea what is wrong.
My new iPhone XS doesn't connect to my car's bluetooth, while the 4 year old phone I replaced does. My wife's iPhone 8 does. I'm stuck using an audio cable like it's 2003 again.
A new coworker couldn't connect to the internet when he had an Apple Thunderbolt Display connected to his ~2014-2015 MB Pro via Thunderbolt. Mystifying. Even more mystifying was corporate IT going "oh yeah, just don't use it".
That's probably because he had his network connection priorities ordered so that the TB networking had priority and there wasn't a valid route on the connected network to the TB display. That's very likely a fixable problem, as I have encountered similar and resolved it.
You forgot to mention that computers have become at least ten-thousand times faster in the meantime. There really isn't a single objective reason why we should wait for anything at all for longer than a few milliseconds on a modern computing device to happen. I actually think that this is just a sign that things are only optimized until they are "fast enough", and this "fast enough" threshold is a human attribute that doesn't change much over the decades.
PS: the usability problems you mention have been specifically problems of the PC platform in the mid- to late-90s (e.g. the hoops one had to jump through when starting a game on a PC was always extremely bizarre from the perspective of an Amiga or AtariST user, or even C64 user).
I (unlike most, I think) still today don't keep a lot of apps open. This is less about performance, and more some irrational OCD sense of comfort.
And it doesn't improve performance.
I actually think the one thing that has improved performance-wise is concurrency. Our computers can now do many more things at once than before. But all of those things are slower, regardless of whether you're doing 50 or just 1.
> Remember autoexec.bat files? Remember endless configuration to get one program working? Remember the computer just throwing its hands up and giving up when you gave it input that wasn't exactly what it expected? Remember hardware compatibility issues and how badly it affected system stability?
Maybe my memory is bad but I seem to have the same issues today, just in different shapes.
The operating system chooses to update itself in the middle of me typing a document, and then 15 minutes later, while I'm in a conference meeting, the app decides that a new version is immediately required. Then I open up a prompt and the text is barely readable because the screen DPI is somehow out of sync with something else and now the anti-aliasing (?) renders stuff completely broken. According to Google there are 31 ways to solve this, but none of them works on my PC. Then all the Visual Studio addins decide to use a bit too much memory, so in the middle of the debug process the debugger goes on a coffee break and just disappears. In the afternoon I need to update 32 NPM dependencies due to vulnerabilities, and 3 have simply disappeared. Weird.
> Cherry picking the bad ones while ignoring the good ones to prove a point is boring and not reflective of the actual industry.
Can you give us examples of the "good ones" for the "bad" examples he cited.
> These posts fondly remember just the speed, but always seem to forget the frustrations, or re-imagine them to be something we treasured.
No, he is saying that certain paradigms made computers fast in the past, and that instead of adopting them and building on them progressively, we have totally abandoned them. He is not advocating embracing the past, warts and all, but only what was good about it. He is not asking us to ditch Windows or Macs or the web and go back to the DOS / Unix era.
The original Kinetix 3D Studio MAX was NOT as slow as the current AutoDesk products. It had a loading time, but I can live with that.
With the speed of the M.2 SSDs we have today and everything else I really wonder why it got like this.
Maybe it's the transition to protected mode that did all this? Now everything has to be sanitized and copied between address spaces. But then again, Win95 was also protected mode. I don't know... :)
>But I also don't need to spend an entire weekend setting up a PC and a printer for my mom anymore either. I don't need to teach her arcane commands to get basic functionality out of her machine. Plug and play and usability happened
The slowness is orthogonal to all those things that have improved. We can put the fast back just fine while keeping those things.
I don't think you can, not really. Practically all the slowdown you see comes from increasing levels of abstractions, and basically all good abstractions have a performance cost.
Abstractions in general buy you faster development and more reliable/stable software. UI abstractions buy you intuitive and easy to use UX. All of these benefits define the modern era of software that has spawned countless valuable software programs, and it's naive to think you can skip the bill.
I would actually say that the Microsoft Office suite was really good (up until recently, when they started trying to funnel you into their cloud storage). But even with that blemish, the UX is still pretty good. Hitting the Alt key lets you access any menu with just a few keystrokes, and the keys are shown on screen when you hit Alt, so they don't have to be memorized. I can also work a hell of a lot of magic in Excel without ever touching my mouse and without needing to memorize every keyboard shortcut. I wish every app and desktop in every OS followed a similar standard for keyboard shortcuts.
Macros/VBA is still awful, but overall, despite the horrifying computational abuse I put office suite through, it's actually very stable!
And, it's fast enough once it loads. It's still pretty slow to load though.
Pretend the modern web made computers worse and then try Figma in the browser. It's glorious. Granted it's not written in React but it really shows what the web can be. It's BETTER than any desktop equivalent I've tried. It requires zero installation. Plugins are added instantly. It has native collaboration. It's scary fast. I'm not at all nostalgic about the old days.
It's okay. The functionality is great, but that's pretty unrelated to it being a webapp. I often run into slowdowns when using it on my medium spec'ed laptop. It sometimes gets pretty bad. It consistently feels slower than using native desktop apps. The zero installation is definitely a perk, though a minor one. Plugins adding instantly and native collaboration aren't functions of it being a web app.
The trade seems to be a still quite noticeable performance hit vs no installation. It's probably the single best serious web app I've ever used, and I'd still trade it for a high quality desktop app if I could.
In my opinion, people have had their expectations dragged way down by shitty web apps, so Figma feels good. It doesn't beat what tools like Photoshop used to feel like before their UI got rewritten in HTML and whatever fuckery they're doing now.
Yes, it is a rant. Yes, the examples are terrible and there are plenty of counter examples. Yet there is also some truth to what is being said.
There is clearly going to be added complexity to address what we expect of modern computers. It is going to have a negative performance impact. If it isn't carefully planned, it is going to have a negative impact on UX.
Yet there is also a reasonable expectations for things to improve. The extra overhead in hardware and software should, in most cases, be more than compensated by the increase in performance of hardware. The UX should improve as we learn from the past and adapt for the future.
In many cases, we do see tremendous improvements. When a limitation is treated as a problem that needs to be addressed, we have seen vast improvements in hardware and software performance. Arguably, the same applies to UX. (Arguably, because measuring UX is much more opinionated than quantitative.)
Yet so much of what we get is driven by other motivations. If inadequate consideration is given to performance or UX, then the experience will be degraded.
I don't know if autoexec.bat was the most annoying thing from the 90s. (Although it was certainly annoying...)
My example of choice would be ISA, specifically configuring IRQs.
That's why UARTs back in the day were lower-latency than USB is today: your CPU would take an interrupt INSTANTLY as soon as that ISA voltage changed. Meanwhile, USB 2.0 is poll-only, so the CPU effectively only asks the USB device once per millisecond whether "any data has arrived". (I think USB 3.0 has some bidirectional signalling, but I bet most mice are still USB 2.0.)
--------
For the most part, today's systems have far worse latency than the systems of the 80s. But two points:
1. Turns out that absolutely minimal latency wasn't very useful in all circumstances. The mouse updated far quicker in the 80s through the serial port than it does today... but the difference between a microsecond delay from an 80s-style serial port and a modern-style USB millisecond update routine (traversing an incredibly complicated, multilayered protocol) is still imperceptible (rough numbers below).
2. The benefits of the USB stack cannot be overstated. Back in ISA days, you'd have to move jumper pins to configure IRQs. Put two different devices on IRQ5 and your computer WILL NOT BOOT. Today, USB auto-negotiates all those details, so the user only has to plug the damn thing in, and everything autoworks magically. No jumper pins, no IRQ charts, no nothing. It just works. Heck, you don't even have to turn off your computer to add hardware anymore, thanks to the magic of modern USB.
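To put rough numbers on the polling cost (my own back-of-the-envelope sketch; the 125 Hz figure is an assumed typical mouse polling rate, not something stated above): the added latency is at worst one polling interval, and on average about half of one.

    # Added input latency from host polling: worst case is roughly one polling
    # interval, average is roughly half an interval. Illustrative only.
    for hz in (125, 500, 1000):          # assumed common USB mouse polling rates
        interval_ms = 1000 / hz
        print(f"{hz:>4} Hz poll: worst ~{interval_ms:.1f} ms, average ~{interval_ms / 2:.2f} ms")
    # A serial-port interrupt, by contrast, fires within microseconds of the edge.

That is the millisecond-versus-microsecond gap point 1 refers to.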
-------
Adding hardware, like soundcards, new Serial Ports (to support mice/gamepads), parallel ports (printers), etc. etc. before plug-and-play was a nightmare. PCI and USB made life exponentially easier and opened up the world of computers to far more people.
Every now and then when I visit my parents... I find my dad's old drawer of serial-port gender-changers, null modems, baud-rate charts, 7n1 / 8n1 configuration details and say... "thank goodness computers aren't like that anymore".
Pick a web app that can render without JS. Then remove all the JS and it will render 5x faster.
Some aspects of computers got better, sure. But the web is in a super shitty state. I mean, we're here posting on a website that has the bare minimum of design and features. Why is that? Twitter and Reddit are absolutely horrible in terms of usability and performance. Knowing that they could do better makes them even worse. It is an attack on humanity. You can try to find excuses in unrelated things, as noted, but that won't change anything.
I do wish shortcuts on web apps would be more universally implemented and more universally discoverable. I wish browsers had a common hotkey that would show available shortcuts, for example.
The biggest problem with his argument is that, done correctly, a mouse augments the keyboard and does not replace it. If you've ever seen a good print production artist, that person can absolutely FLY around a very visual program like InDesign... without leaving the keyboard.
I second this. Once one learns the keyboard shortcuts and builds the habit of learning shortcuts in general, they won't be encumbered by small visual tasks like aiming at lots of boxes in menus and submenus. Those interactions micro-dent the flow. When I use the keyboard to navigate, I'm not even consciously thinking about which shortcuts to press; it happens naturally. I've also noticed that I don't consciously know the shortcuts and have to think a bit before telling someone which shortcut I use; my fingers remember them.
All this said, the software has to have a good shortcut flow in mind. Not all software offers this benefit.
Web apps need to stop implementing some shortcuts, if not most. It annoys me to no end when a site hijacks Ctrl+F and presents its own crappy search rather than just letting me search the page with Firefox's built-in search, which actually works.
Really? Because two co-workers have told me they spent weekends trying to configure a printer and downgrading a graphics driver to get their graphics working again. Yes, they are engineers (mechanical, not software), but I have had my share of issues like this, and I started programming in 1978 on a Commodore PET. I do prefer lightweight interfaces, and my wife can't get my Maps app working while I am driving because she has a Samsung and I have a BlackBerry (KEYone). Things look prettier, and there are a lot of advertisements, but I relate to a lot of these "cherry picked" rants.
Regarding printers, it depends on the manufacturer. Some printers are easy to set up. At the places I used to work, whenever I went into printer setup and searched the network, the printer would show up very easily.
They say the fastest code is the code not written. I say the best printing is the printing not needed. I don't have a printer since I only need to print things every once in a while (a few times a year). Whenever I need to print, I grab my USB stick, put what I need on it, and drive off to FedEx or UPS to print it out. Point and click, very simple. The cost of a printer hasn't been justified for me, since my printing comes to a couple of cents per year.
Agreed. Comparing UI responsiveness to yesteryear can be an interesting case study - examining precisely why things feel slower, to see whether they can be made better - but more often than not the answer is, "because we traded a bit of efficiency for an enormous wealth of usability, and iterability, and compatibility, that people never could have dreamed of back then".
Rants like this post are willfully ignorant of the benefits that a technological shift can have for millions of actual humans, simply because it offends the author's personal sensibilities.
The endless configuration pain you mention pretty perfectly describes anytime I try to dip into a modern framework and find myself spending hours and hours updating ruby or python or some random node module until all the dependencies finally align and I can write a single productive line of code.
Or when I spend 10 minutes fighting various websites just to get a dumb TV streaming site to log in with the app it’s connected to.
I have to agree with the general premise: despite the phenomenal power of computers today, they generally feel just as irritating as they always have.
For a better comparison, I ran Windows XP in VirtualBox on a Macbook Air last year to interface with some old hardware.
I was surprised that it ran in something like 256 MB of RAM. And everything was super fast and responsive out of the box.
I consider OS X less janky than Windows or Linux these days, but Windows XP blew it out of the water.
Try it -- it's eye opening. The UI does basically 100% of what you can do on OS X, yet it's considerably more responsive on the same hardware, with the handicap of being virtualized, and having fewer resources.
> to a much wider audience and are much more usable now.
Doctors hate their lives because of computers: https://www.newyorker.com/magazine/2018/11/12/why-doctors-ha..., but maybe you'll think the article is too old to be true, or maybe you'll quibble about the headline. Why not believe something true instead?
I was about to say the same thing. I remember I had to recover some files off a diskette, and oh boy it was frustrating. That feeling when you start copying a megabyte-sized file and just sit there staring at the screen for five minutes - we've forgotten how bad it used to be.
> These posts fondly remember just the speed, but always seem to forget the frustrations, or re-imagine them to be something we treasured.
More to the point, I grew up with early Windows XP computers and early 90s Internet, and I don't remember the speed. Maybe I'm just too young and things were magically faster in the 80s? Maybe I was born just slightly before everything took a giant nosedive and became crap?
There are lots of things that annoy me about modern computers, but none of them take upwards of 2-3 minutes to boot any more. I remember loading screens in front of pretty much every app on my computer. A lot of my modern apps I use don't even have loading screens at all. I remember clicking buttons on apps and just kind of waiting, while the entire computer froze, for an operation to complete. Sometimes I'd start to do something complicated and just walk away and grab a snack because I literally couldn't use my computer while it ran.
There were entire joke websites like Zombocom set up to riff on how bad the loading screens were on the web back then. I would wait literally 10-15 minutes for Java apps like Runescape to load on a dial-up connection, despite the fact that the actual game itself played fine over that connection, and the delay was just due to dropping a giant binary that took no intelligent advantage of caching or asset splitting.
I can't imagine waiting 10-15 minutes for anything today.
I got a low-key allowance out of going to other people's houses and defragging their computers. Do you remember when Windows would just get slower over time because there was an arcane instruction you had to run every year or so to tell it to maintain itself?
> On the library computer in 1998 I could retry searches over and over and over until I found what I was looking for because it was quick
Now I have to wait for a huge page to load, wait while the page elements shift all over, GOD FORBID i click on anything while its loading
What library were you going to in 1998? I also did library searches, and they were insanely slow, and prone to the exact same "don't click while it loads" behavior that the author is decrying here. And that's if I was lucky, sometimes the entire search engine would just be a random Java app that completely froze while it was loading results. And forget about giving me the ability to run multiple searches in parallel across multiple tabs. Whatever #!$@X cookie setup or native app they were wired into could never handle that.
The modern database search interfaces I have today are amazing in comparison. I have annoyances, but you couldn't pay me to go back in time. A lot of those interfaces were actively garbage.
Again, maybe I'm just too young and everything took a nosedive before I was born. But even if that's the case, it seems to me that interfaces are rapidly improving from that nosedive, not continuing to slide downwards. The computing world I grew up in was slow.
>I also did library searches, and they were insanely slow, and prone to the exact same "don't click while it loads"
Not the person you asked, but around 2002 my university library's computers ran a DOS-like application: the available commands were printed at the top of the screen, and you could type your search and hit Enter. A few years later it was replaced with a Win98-style GUI. You had to open the app if someone else had closed it, then use the mouse to find the right dropdowns, select the options, type the search term, and click Search. Before, you would type something like "a=John Smith", hit Enter, and it would show all the books by that author.
The problem with us developers is that most of the time we are not heavy users of the applications we create; we write some test projects and simple tests to check the application, but our users might use it many hours a day, and all the little issues add up.
> More to the point, I grew up with early Windows XP computers and early 90s Internet
He is talking about running local software before mainstream internet was a thing.
That is, locally installed software without a single line of network code in them.
MS Word started faster on my dad's Motorola-based Mac with a 7MHz CPU and super-slow spinning rust drives than it does on my current PC with a 4GHz CPU and more L3 cache than the old Mac had RAM altogether.
Even from a premise that the introduction of networking and graphical GUIs was a huge mistake that caused software quality to plummet dramatically, a lot of modern software today is still faster and better designed than the software I had as a kid.
I can maybe accept that I got born at the wrong time and everything before that point was magical and wonderful. I never used early DOS word processors so I can't speak to whether the many features we have today are a worthwhile tradeoff for the startup speed. I'll have to take your word for it.
But if people want to say that we're actively making stuff worse today, they're skipping over a massive chunk of history. If you look at the overall trend of how computers have changed since the 90's, I think literally the worst thing you could say is that we are still recovering from a software quality crash. People either forget or dismiss that a lot of early 90s software was really, really bad -- both in terms of performance and UX.
From the original post:
> amber-screen library computer in 1998: type in two words and hit F3. search results appear instantly. now: type in two words, wait for an AJAX popup. get a throbber for five seconds. oops you pressed a key, your results are erased
I'm still calling bull on this, because I also used 1998 Library computers and a lot of them were garbage. And I've used modern Library search engines, and while they're not great, a good many of them have substantially improved since then. This is a rose-colored view of history based off of individual/anecdotal experiences.
I'm not wildly optimistic about everything in the software ecosystem today. I do wish some things were simpler, I do see systemic problems. But holy crap, HN is so universally cynical about computing right now, and I feel like there's a real loss of perspective. There are tons of reasons to be at least somewhat optimistic about the direction of computing.
If you had used a Mac or an Amiga, you wouldn't have those bad memories. Instead, you would fondly remember an era when software was pristine.
There also was a time when people just started to use C++ and software was much buggier.
I wouldn't even say software today feels much slower across the board, but it is definitely far more wasteful, given the resources.
You can see a lot of software that essentially does the same things as it did 20 years ago, but (relatively) much slower. Try using some older versions of e.g. Adobe software, you'll see how snappy it feels.
> almost everything on computers is perceptually slower than it was in 1983 .... amber-screen library computer in 1998: type in two words and hit F3. search results appear instantly. .... now: type in two words, wait for an AJAX popup. get a throbber for five seconds. oops you pressed a key, your results are erased
So we start with something 15 years AFTER the title as evidence of the "good" times, then make a vague anecdotal reference to something modern. I've seen POOR performances in places, but the majority of experiences I see now are faster than in 1998 and definitely than 1983. Faster AND more convenient.
> And it's worth noting that HYPERTEXT, specifically, is best with a mouse in a lot of cases. Wikipedia would suck on keyboard.
Um....no. It's less convenient than a mouse, but way better than the function key-based commands that the author lists.
I think there is a lot of room available to complain about terrible interfaces today, and in particular how choked everything is by the need to track and add advertising, but there's no actual evidence in this article, and it comes across as a rant and selective nostalgia.
> Most of the 1980-1995 cases could fit the entire datasets in CPU cache and be insanely fast.
They couldn't then. They had to fit it in RAM.
> Most things I query these days are in the gigabytes to terabytes range.
That still is in "fits in RAM on a typical PC" to "fits in SSD on a PC, fits in RAM on a server" range.
There's little excuse for the slowness of the current searching interfaces, even if your data is in gigabytes-to-terabytes. That's where the whole "a bunch of Unix tools on a single server will be an order of magnitude more efficient than your Hadoop cluster" articles came from.
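As a rough illustration (not from the article; the file name and search term here are made up), this is roughly what that single-box streaming approach looks like in TypeScript on Node - the data only ever passes through memory one line at a time:

    // Sketch only: stream a multi-GB newline-delimited file and count matching lines.
    import { createReadStream } from "node:fs";
    import { createInterface } from "node:readline";

    async function countMatches(path: string, needle: string): Promise<number> {
      const lines = createInterface({
        input: createReadStream(path),
        crlfDelay: Infinity, // treat \r\n as a single line break
      });
      let hits = 0;
      for await (const line of lines) {
        if (line.includes(needle)) hits++;
      }
      return hits;
    }

    // Hypothetical usage:
    countMatches("/var/log/requests.log", " 500 ").then((n) => console.log(n));

Swap the includes() for a regex or a running aggregate and you have most of what those "laptop beats the cluster" posts are doing.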
>Um....no. [Wikipedia is] less convenient than a mouse, but way better than the function key-based commands that the author lists.
Okay, but slap on an extension, and Wikipedia is much more usable without a mouse! For example, VimFX/Vimium allow you to click links (and do a bunch of other stuff) from the keyboard, and that makes it tremendously more usable.
Of course, that’s only true because Wikipedia is standards compliant and doesn’t break extensions’ ability to see links, which is not universally true of websites.
How so? The article is based on an assertion that directly counteracts my memories. I remember using computerized library card catalogs in the 80s. I remember using AltaVista in the 90s. I remember using the Up/Down/Left/Right navigation (and full-screen re-renders) in Mapquest pre-google maps. I remember using the cgi-bin version of imdb.com. I can see how promptly things work now. I can see the additional benefits I get now.
If the author makes an assertion that contradicts my experiences, they better provide SOME evidence, or be dismissed (which is presumably NOT their intent).
>I remember using the Up/Down/Left/Right navigation (and full-screen re-renders) in Mapquest pre-google maps.
Web apps sucked then, and they still suck today. The only difference between then and now is the decades of development effort that has been wasted trying to make JavaScript fast enough to mimic old applications.
I used MS Streets & Trips[1] in the same era, and it was awesome. It worked offline (something Google maps only got recently, and it sucks pretty bad at it), the search for POIs along a route was wicked fast, and it supports the author's purported use-case of mapping out a whole trip.
This shit ran blisteringly fast on a Windows 2000 laptop w/ no internet connection & 256MB of RAM. Now I'm expected to have an always on connection & I need gigabytes of RAM just to accomplish the same thing inside a web browser. -- I've missed many a turn because "navigation apps" assume LTE coverage is perfect; which is just not the case for many parts of the midwest. So I'm resigned to using a dedicated GPS unit.
You'll click through results in these catalog apps and there's sometimes seconds of delay because they're doing completely unnecessary shit like querying inventory at peering libraries. It wastes cycles and network round trips trying to answer a question I didn't even ask. (Inventory levels of a given title.) -- Browsing a digital card catalog is literally the perfectly indexed dataset computers dream about, and somehow people have gone and managed to make that excruciatingly slow.
It's gotten so bad I usually just go talk to the librarian if they're not busy, since their interface to the library's inventory typically looks more like the ones referenced in the article.
> How so? The article is based on an assertion that directly counteracts my memories.
Human experience is subjective and one account is not invalidated by another.
> If the author makes an assertion that contradicts my experiences, they better provide SOME evidence, or be dismissed (which is presumably NOT their intent).
Well....yeah, sure. If the author of the piece has no interest in actually supporting their point, and if readers on HN (the audience of my comment) have no interest in finding out that the article is just an opinion restating the title at length, then yeah, it's all inconsequential.
When an area of engineering is new, we make products better than the market will eventually pay for. Over time, we learn how to make them just good enough to reach an equilibrium in the market. This idea is explored in _The World Without Us_ by Alan Weisman. The Roman Colosseum still stands, but buildings today don't need to be that durable; it would cost significantly more to make them that way, so we don't. The same could be said about supersonic travel: we could do it again, but enough of us are happy not to pay the premium, so that product has disappeared.
The problem is, the market equilibrium is usually the worst possible garbage that's still fit enough for purpose that it won't break (on average) before the warranty period ends. Some of the money you save on this isn't really saved, you just pay for it in having to replace the product sooner, or being constantly stressed and irritated with the quality and fragility of your tooling (it's essentially death by a thousand cuts for your psyche).
So there is a reason to add some extra pressures to the market to raise the floor.
Cars today are pretty reliable. Random other stuff seems pretty decent as long as you don't buy the absolute cheapest stuff around. I'm working on a solid wood desk with no nails or screws (it's called a "puzzle" desk because it fits together in a stable configuration without any connectors). I'm looking at a backpack, which I assume is nylon, and it seems pretty tough and has lasted through a few adventures. I'm looking at my metal-framed bookcase, which has fake-wood shelves but it still seems pretty sturdy.
There are certainly some things that seem worse. Furniture is probably worse overall, but that's because good wood is so much more scarce now. If you buy metal/glass furniture then it's fine. There are also some worrisome practices that are pretty widespread, like using plastic-threaded screw holes rather than metal, or just using really cheap metal that can be stripped or cross-threaded easily.
On balance, where are we? I have a feeling that we're basically better off. We remember the old stuff that lasts, and forget about the stuff that breaks (unless it's still celebrated/historical, like the Liberty Bell).
Definitely seems that way, but I don't think it's just an illusion.
The cheapest stuff is absolute garbage, arguably to the point where you could consider it a waste of natural resources (for a few cents here and there, you could make things not break halfway through their first use). What worries me, though, is that price isn't a reliable quality signal anymore - there's profit being made in selling cheap garbage at high prices, because many people are fooled into believing a more expensive good must be quality. Brands are no longer a quality signal either. Even ignoring that the market is currently DDoSed by throwaway "brands", the proper ones don't mean much anymore. For instance, witness any discussion about white goods: next to the people telling you which brands are historically the quality ones, you'll find people reporting that those quality brands have recently started to cheap out on hardware, and their products are no longer reliable or repairable.
Ultimately what I'm trying to say is that the market equilibrium for price-sensitive customers is complete garbage; some companies can survive on less price-sensitive customers, but I see a trend towards converting quality into better margins even there. It's true that we have more of everything now, but it also seems most of those things are short-lived, and increasingly shorter-lived. It would be fine if matter and energy were too cheap to meter, but they aren't - especially not if you factor in the environmental costs.
> The same could be said about supersonic travel. We could do it again, but enough of us are happy to not pay the premium, so that product has disappeared.
There are indeed a lot of curious examples of computers seeming to go slower today than 20,30,40 years ago. But on the whole, I completely disagree with the summary. My laptop today is faster than anything I’ve ever used before today. It boots faster, it loads files faster, it responds faster, it does more simultaneous things, it crashes far less often.
My first computer was an IBM PCjr, and using it was an exercise in patience at all times. Maybe the author never played Ultima III on an IBM PC. It was sooooo slow. The exact same was true of the computers my friends had: C-64, Atari 800, IBM PC. It probably took a full minute to boot. Maybe the author never saw the memory test PCs used to do before even starting to boot, or how slow floppy drives were. My first modem was 110 baud... that's a whopping 15 7-bit bytes per second. Downloading a 40x40 character image (a picture made out of keyboard characters) took a minute and a half. Downloading games routinely took hours. The PCjr hard crashed and needed reboots all the time. Even my later 486 would do the same thing. Rebooting was something you just did constantly, multiple times per hour. Today, almost never.
One thing this article completely avoids acknowledging is the general difference in functionality we have today than with computers in 1983. The database lookups and maps that were faster were faster precisely because they are 7 orders of magnitude smaller data. The article is comparing DBs and maps that fit in the main memory of a computer at the time to databases and maps that are out on the internet and don’t fit on any single computer. It’s amazing that we get the results as fast as we do.
"one of the things that makes me steaming mad is how the entire field of web apps ignores 100% of learned lessons from desktop apps"
Not perhaps 100% but sure, in the modern age of web apps for everything I miss having proper desktop apps.
Even more so, programming web apps is nowhere near as convenient as developing desktop apps in late 90's with tools like Delphi.
I remember that. It was very easy... I sort of have that same feeling with .NET, but I wonder if it's still going to be a layer on top of Windows' C system functions, which might just be wrappers for system calls, which require an expensive context switch.
One big difference between Win95 and now is the security aspect. Another big difference is the tons of services on Windows 10 that are now always there (such as YourPhone). And on Android, some vendors force Facebook onto the device, and it can't easily be removed.
I am stepping away from the web topic, as it is irrelevant here: slow VMs, huge text files, lasagnas of code stacked one on another. The comparison just isn't fair. I do understand that people who never experienced an MFC 6 application can't compare it to the browser, since they know only browser/Electron/...
But just last weekend I installed Windows 7 on a modern i7 computer (with an SSD). Sure, half the drivers didn't work, etc., but it did boot, and everything I launched was instantaneous. The dreaded decompressing stage at the end of installation? 5-10 seconds. Then I installed Windows 10 with Office. I'm sorry I didn't install Office 97 on the Windows 7 machine (I didn't have the installation media), since I installed 365 on the Windows 10 one, so I can't compare the two Offices (and for my usage they are both more than enough). But: Windows 7 was much faster at everything, while Windows 10 offers no major improvement in features.
Bottom line: the software was optimized for the hardware speed of its time, so on new computers it is lightning fast. When the hardware became faster, no one cared to optimize anything, and layers of bloat were added instead. Sure, the production of software is cheaper and everyone can code today ("omg, I need to free allocated memory, the horror").
Instead of improving features and performance, we have optimized for production costs. The major improvements in hardware processing power are lost to more bloat in code and to endless layers of abstractions, virtual machines, wasteful protocols, and languages. It is sad.
I think that if something is possible to do given better hardware, no matter how bloated or useless, developers at large will end up doing it. I don't think RAM usage or performance are concerns for the people who go straight to Electron, because in the present it's easy to develop for and nobody has the right to force them to choose otherwise. Electron isn't going away; the only ways it improves are Chrome getting more performant or computers getting even faster. Either one is nontrivial to work on as an outsider - there are only two significant browser engines in the present age because they've become so complex and accumulated so many features that they require the support of well-funded corporations to maintain, and computer processors can only become so small.
Browsers on the whole will not lose features. The web is ubiquitous, and cutting down the number of features to maintain would break existing webpages. The resulting web standard establishes a hard baseline on the amount of complexity needed to engineer a standards-compliant browser. That in turn means a hard baseline on the amount of processing power required to run it acceptably, save for performance gains that will surely be buried immediately under even more features.
I once tried using Firefox 1.0 and found it was impossible, since it has no understanding of modern security protocols. The web as viewed by Firefox 1.0 in 2003 no longer exists. You can't practically separate the features necessary to use the web, such as new protocol support, from extras like Pocket in the code changes. And trying to do so anyway leads to Pale Moon, where a scant few contributors are responsible for merging in dozens of security patches - which themselves come about due to the growing complexity of the web platform. Numerous people have called using it irresponsible from a security standpoint. In the end, the endless growth of web standards and the code needed to support them has made that a necessary fact.
For all practical purposes, you either use one of the two major browsers or get left behind by coders and organizations that have orders of magnitude more resources than you.
Society and the economy don't reward creators for being content with a modest set of features when there's a glaring increment in performance there to be seized on. It's about growth and innovation: finding new ways to reinvent the wheel and get further away from the hardware, because it's now easy to.
Sometimes I wish we could undo growth. It would be nice if Electron and the like fall out of favor and we have a renaissance of software like it was designed in 2004. But enough developers have decided that the performance tradeoff is worth it, and average users have become desensitized to SPAs and their expectations are low enough that they would rather put up with it all in order to do what they need to instead of demanding better.
I feel like it can only keep growing. That's one thing the collaborative nature of software pushes forward. More features. It's just that some are better at limiting the scope of projects than others.
I explicitly said that I would refrain from the browser, as it is incomparable. The whole browser-based infrastructure is bloated: from the browser itself, which can be treated as a VM, to a bloated, text-based, human-readable language (thank god someone finally pushed WebAssembly - I said it was a necessity in 2001, but everyone was fine hacking together JS for the next ~20 years, and now we have reached the Electron absurdity of 100MB+ applications that run JS on a packaged browser on an operating system that runs on a CPU; the overhead is just incredible), to text-based protocols (HTTP! REST!). Node.js? I won't even start. The waste of CPU power, electricity, hardware, ... is just fantastic.
On the other side, the Android SDK/framework/whatever looks like a bunch of academics pouring their frustrations into one library - if you want to learn all the patterns, just dig in. MFC 6 in 1998 was more organized (https://t1.daumcdn.net/cfile/tistory/11781B244C45903F42 - please try to draw this diagram for Android! No vision, just chaos. Oh, you lose context on orientation change? How nice...). The Android development environment barely caught up to what Windows Mobile had in 2000 when they pulled in IntelliJ (which is a copy of Visual Studio), while for native code they still lag behind. Please, DON'T take my word for any of this - DO check out VS 6.0 and Windows Mobile. Everyone is reinventing everything while ignoring the past and its mistakes, repeating them over and over again. Losing 20 years for the sake of another corporation gaining a monopoly is so crazy that you need to see it with your own eyes.
Cloud. We had mainframes. Enough said. We are moving back to those, for short-term profit for users and long-term profit for providers. I wonder when affordable server hardware will disappear from the market.
While I think the hardware industry is moving forward, software development is racing backwards, or barely catching up with what we had 20 years ago - drowning itself in lasagnas and stacks of different overheads, from overdesigned interfaces to VMs. IoT is still sane (due to the lack of resources to waste), but mainstream software development has just gone crazy.
I do suspect that at some point hardware will stop absorbing the bloat, due to physics limitations, and optimized code will come back - but I wonder who will still remember how to write it? Ah, we will just reinvent it again?!
But never mind the rant of an old fart. He just doesn't understand waiting, at 50% load, on a CPU 1000 times faster than what we had, while megabytes of web-based software load to do what he would implement in 100KB with 0.0015% of the CPU load. It is just him. He doesn't understand the hype.
The author bemoans how everything has gotten (perceptually) slower, and then proceeds to communicate this not through a single letter or blog post, but instead through ninety-five (95) tweets.
The experience of the 80s was really one of slowness; things were so slow that it's unimaginable now. So unimaginable that we no longer remember how slow common actions were around 1983.
Things were so slow that you could play another game while the main one loaded [1]. Namco even patented "playing games during load" by 1987 [2].
As an example, I have fond memories of my C64 loading Hunchback [3] from tape... while I watched "V". A single tape could take an hour if it failed in the middle and you needed retries. All those waiting times were orders of magnitude beyond what a toddler would put up with today.
We were so relieved when the fast loaders appeared [4], or when a 1541 arrived home [5]. And then we were so frustrated that the 1541 managed only 300 bytes/s, against the 300 bits/s of the Datasette.
The scenario in Xenix or PC-DOS loading from 8" floppies was similar.
I'm not arguing that times are better now; I'm arguing that in the 80s we had the best for the time, and those were good times to remember.
As the OP who wrote this while drunk two years ago and thinks that about 30-50% of it is objectively wrong, it's extremely funny to me that every five months someone reposts it here and everyone gets in a fight about it again. Many of the details are wrong - the thesis is still completely valid, and it's Computer Person Thinking that wants to attack the details while refusing to stand back and look at the overall picture, which is that using computers is now an incredibly messy experience where nothing quite does what you want, nothing can be predicted, nothing can be learned.
I hypothesize that this is because programmers and other various Computer People consider "being on the computer" to be a satisfying goal in itself that sometimes has positive side effects in the real world, while everyone else simply has stockholm syndrome.
>I hypothesize that this is because programmers and other various Computer People consider "being on the computer" to be a satisfying goal in itself that sometimes has positive side effects in the real world, while everyone else simply has stockholm syndrome.
Oh believe me, as much as I love being on the computer for the sake of it, I don't enjoy having that screen time utterly wasted by shit software. There's only so many hours in a day after all.
I work w/ ERP software, so I see people still using text-mode UIs on a daily basis (hell I track my time w/ one), and I also support a "modern" ERP that is GUI based and can "run inside a browser." (Which nobody actually does, because it sucks, doesn't support all features, and swallows up tons of keyboard shortcuts.) One of these packages can be run comfortably from a $5 Linux VPS. The other package asks for two _very fast_ SAS storage arrays, 32GB of RAM, minimum of 6 CPU cores, etc. Of course you've gotta license Windows for all of that, which thankfully is not my job. (That or you spin it up in "the cloud" I guess, and pay thousands of dollars annually to rent somebody else's computer, since this software is not "cloud native" at all, no matter what their sales people say.)
I try to leave the software better/faster/more usable than I found it, but it's hard when the upstream vendor is just piling shit on the fire so they can pitch their half-implemented features in the sales brochure, with absolutely no regard for the added operational overhead of the garbage code.
A tweet thread where you complain about AJAX popups and other Javascript-based slow downs seems utterly ironic. Whatever happened to having your own blog? Even better is that these tweets were from 2017.
This makes me yearn for building an ncurses-rooted non-GUI flavour of Ubuntu.
Yes, for a good long while UIs were improving and the last few years I'm fighting most of my computers to stay as usable as they were a decade+ ago. And it's usually a losing battle wherever a terminal isn't available, most painfully on the phone.
Online stores have fewer and fewer useful categories and filters. Ubuntu's take on GNOME is a weak version of their Unity DE and much less efficient on smaller screens. Google Maps, mentioned in the OP, has on top of all these problems also slowed down significantly.
And yes, others have it worse because they don't even have the escape hatches like command line apps and are cast to the webshit wasteland looking for a webapp to unlock their pdf or extract images or whatever.
Joe should be offered incredible power by software that encourages him to use it with purpose-built interfaces optimized for him.
I am upset by the way that computers disenfranchise non-nerds. I wish it was better for me; I wish it WORKED AT ALL for everyone else.
> Yes, for a good long while UIs were improving and the last few years I'm fighting most of my computers to stay as usable as they were a decade+ ago.
I blame the advent of the "mobile era" and the much smaller screens, and differing I/O capabilities, of mobile devices. And the fact that, at some point, everybody decided "we need one UI that is applied for both desktop and mobile, despite them being completely different types of devices." And now UI design is all muddled up and nobody can (or will) optimize for large screens, keyboards, and mice, because "not everybody has a large screen, a keyboard, and a mouse." But on mobile, the pull remains to keep a UI metaphor that is basically a slight variation of the WIMP interface that originated on desktop. In the end, nobody wins.
I wouldn't generalise. Behind the scenes, modern computer systems are of course processing billions of times more instructions than computers of old. Some portions of the UX do have noticeable, sometimes even inordinate, delay. This is usually due to poor choices of algorithms and program logic; but it is also the case that the blazing speed of modern hardware has trained most of us to expect every computation to be almost instant, while some algorithms are still pretty slow no matter how fast the hardware, and are likely to remain that way. The UI in particular, of course, doesn't fall under that algorithmic-complexity constraint, and there is little excuse for its perceptible slowness in recent years other than poor coding and feature creep.
Disagree. It looks prettier, but it's often not more intuitive.
Not 1983, but in Windows 95 all the icons had text underneath them, so you knew what they did.
More recently it took me a couple of years of seeing hamburger menus to make the connection "it's an icon to give you a menu". Those are everywhere but are in no way intuitive.
That may have been true circa 2010, but in the past decade we've seen rich settings menus and compact, information dense interfaces stripped away in favor of "mobile first" designs.
Typically, whenever a website / program I use gets a UI update, the only change is that all padding on buttons, text, etc. has been increased by 50% meaning I need to scroll in order to see a 5-item list! Windows 10 control panel is an example of this phenomenon.
Damn do I hate this. One specific example that pops out in my head is when I upgraded from Sony movie studio 12 to 13, and the only difference between the versions was that every button and menu had been expanded in size to be enormous with tons of whitespace.
Why the fuck would they do that to a desktop program in 2014, it wasn't even for mobile.
And increasingly, everything _with_ a computer too. I literally cannot use the laundry machines in my apartment building without an app I need to get from the Google/Apple app store.
Maybe compared with 1983, but versus 2003 the main positive changes have been greater bandwidth and storage. Everything else useful we have now, we pretty much had then. Some of it was much slower, some of it was much faster. The only OS & environment I've seen that was significantly more intuitive than anything around in the 90s and 2000s is iOS up to version 7, at which point its intuitiveness started going to hell.
A problem with a lot of web apps today is that they are designed by visual designers who put a lot of emphasis on aesthetics and animations to make the app 'stand out' rather than focusing on user experience and keeping it dead plain and simple. This is why I love TUIs: they make use of all the screen real estate, and all the possible actions are laid out on the screen, easy to navigate.
(And all the comments about how much faster computers are today: I interpreted the article to be talking about the ease of UI, in other words how fast the interface lets you do things, not how fast the computer does things.)
I have a private Rails app I've used as a freelancer for 7-8 years for time tracking, invoicing, expenses, and financial reports, and I've tried to make it very keyboard-focused, so that I can enter info without clicking around. I've always wanted to polish it enough to share and charge money for, and I've thought a nice niche would be to double down on the "keyboard productivity" theme. But making that work well in a browser is tricky.
Do you mostly rely on browser-native functionality for keyboard nav/input? That's how my app works today. It's fine for me, but I don't think it's really "enough". There are lots of other navigation moves I wish had keyboard shortcuts. But a lot of taste is required here: usually when people override browser behavior it makes things worse (e.g. scrolling), but there are still things that would be actual improvements.
So a couple years ago I rewrote the Invoices section in React+Redux, hoping that would make it easier to do some custom UI behavior. It made a couple small things better, but it's a lot more code, and it breaks some things the browser used to give me for free. For example if you add a new invoice and see it at the bottom of the list, then you click to a non-React part of the site, then click Back, the new invoice disappears from the list! I'm not even sure how to fix that, unless I want to add a spinner and Ajax call on page load. (Right now I pass the initial state as JSON in the rendered HTML.) Maybe I could use local storage, but then somehow when you click Back I have to decide whether to trust local storage or make an Ajax call. . . .
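One pattern I've been toying with (just a sketch - the storage key and the shape of the state here are invented) is to mirror the invoices list into sessionStorage whenever it changes, then prefer that copy over the server-rendered JSON only when it's newer:

    // Sketch: keep a timestamped snapshot of the invoices list in sessionStorage
    // so a Back navigation can restore what the user last saw without a spinner.
    interface InvoicesSnapshot {
      items: Array<{ id: number; total: number }>; // hypothetical invoice shape
      savedAt: number; // epoch ms when the snapshot was written
    }

    const KEY = "invoices-state"; // hypothetical storage key

    function saveInvoices(items: InvoicesSnapshot["items"]): void {
      const snapshot: InvoicesSnapshot = { items, savedAt: Date.now() };
      sessionStorage.setItem(KEY, JSON.stringify(snapshot));
    }

    // serverRenderedAt comes from the JSON embedded in the page on initial render.
    function loadInvoices(
      serverItems: InvoicesSnapshot["items"],
      serverRenderedAt: number
    ): InvoicesSnapshot["items"] {
      const raw = sessionStorage.getItem(KEY);
      if (!raw) return serverItems;
      const cached: InvoicesSnapshot = JSON.parse(raw);
      // Whichever copy is newer wins; right after a Back navigation that's the cached one.
      return cached.savedAt > serverRenderedAt ? cached.items : serverItems;
    }

You'd call saveInvoices from whatever reducer or effect adds the invoice. It doesn't solve cross-device staleness, but it would cover the Back-button case without an extra Ajax round trip or spinner.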
Anyway, rather than complaining about the article, I'd be more interested in a conversation about how to build webapps with a keyboard focus, so that you can complete entire tasks without touching the mouse. What native browser features would you leverage? What frontend tech would help you? Can you imagine "standard" patterns you could encode in a JS lib, both for DRY code and for a more predictable UI? A "Bootstrap for keyboard-first webapps" would be really cool. In fact Bootstrap already does a lot for accessibility.
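For what it's worth, I don't think the core of such a library needs to be big. Here's a minimal sketch (the shortcut keys and routes are invented) of the usual pattern: one global keydown listener that stays out of the way whenever the user is actually typing in a field:

    // Sketch: single-key shortcuts that are ignored while a form field has focus,
    // which is the main thing that makes global shortcuts tolerable in a CRUD app.
    const shortcuts: Record<string, () => void> = {
      n: () => { window.location.href = "/invoices/new"; }, // hypothetical route
      "/": () => document.querySelector<HTMLInputElement>("#search")?.focus(),
    };

    function isTypingTarget(el: EventTarget | null): boolean {
      if (!(el instanceof HTMLElement)) return false;
      return el.isContentEditable || ["INPUT", "TEXTAREA", "SELECT"].includes(el.tagName);
    }

    document.addEventListener("keydown", (e) => {
      if (isTypingTarget(e.target) || e.metaKey || e.ctrlKey || e.altKey) return;
      const action = shortcuts[e.key];
      if (action) {
        e.preventDefault(); // e.g. keep "/" from triggering Firefox's quick find
        action();
      }
    });

The hard part is exactly the taste question above: which keys to claim, how to make them discoverable, and how to avoid fighting the browser's own bindings.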
I heard people saying this a lot so I decided I'd give it a try. My experience is that this quote is actually, "The web is lightning fast when we don't use most of it." The number of features and websites that just wouldn't work was significant.
However, some sites I visited that were informational only did get a bit faster which for some people with bad Internet connections might make a difference.
This probably already exists, but what I want is an AdBlocker-style plugin that executes a community-curated list of "websites that need JavaScript to work." I can then get the best of both worlds: faster websites with less unnecessary JavaScript, but without crippling a large portion of the Internet for myself.
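The blocking half of that idea looks pretty small, at least on Firefox's extension APIs; the community curation is the hard part. A rough sketch (the allowlist entries are made up, and it assumes a Manifest V2 extension with the webRequest and webRequestBlocking permissions):

    // Sketch: cancel external script requests unless the page they belong to is on the allowlist.
    // `browser` is the WebExtension global (e.g. via webextension-polyfill); typed loosely here.
    declare const browser: any;

    // Hypothetical community-curated list of sites that genuinely need JavaScript.
    const needsJavaScript = new Set(["maps.google.com", "app.slack.com"]);

    browser.webRequest.onBeforeRequest.addListener(
      (details: { url: string; documentUrl?: string }) => {
        // documentUrl is the page that issued the request (a Firefox-specific field);
        // fall back to the script URL itself if it's missing.
        const page = details.documentUrl ?? details.url;
        return { cancel: !needsJavaScript.has(new URL(page).hostname) };
      },
      { urls: ["<all_urls>"], types: ["script"] },
      ["blocking"]
    );

Note this only covers external script files; inline scripts would need a CSP rewrite on the response headers, which is where it stops being a weekend project.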
The best computer application I ever interacted with was an AS/400-era tax filing system.
It was a superb tool in the truest sense of the word: a care-free, lag-free, hand-holding-free thing. You filled out data as it came (the only smart keys you needed were Tab and Shift-Tab), with validation checks and suggestions smarter than the average web app form. Think instant Elm-style typechecking, where you enjoy the computer assisting your work.
I was in my 4th year of college and blown away by how this thing beat anything we'd seen or studied in school, yet it was antique: pre-OO, pre-UML (maybe an ancestor of modeling hygiene, though). I wish I could have a talk with the team that wrote it.
Of course, it was about to be replaced by a web 2.0 variant which was a resource hog, required constant scrolling, and didn't do half of the old work, but was full-colored CSS.
The problem is human memory! People forget all the features and dependencies software has.
The natural direction of the Universe is entropy: as time goes on, software is developed, dependencies break, and features quietly stop working.
When you look into fixing it, you notice that it's this giant hairball of dependencies which nobody can unwind. So the next best thing is to put another layer of abstraction on the hairball and build on that. The price is that things get slower.
My Commodore 64 had a more responsive terminal than my macbook, since it was thin layer over hardware, only one or two abstractions deep.
HTML/CSS and JavaScript are garbage technologies for making responsive UIs; they are so many levels deep, with so many dependencies and constraints.
The way forward is start fresh with something new, but no idea what it is.
Computers of the past were undoubtedly slower on average, both in terms of physical performance and in their UX. But the software of now, compared to the software of then, is slow relative to the hardware it runs on. We used to strive to do more with less; today we strive to do more with more. In the attempt to do more, faster and easier, we've made trade-offs.
This is a major source of dissatisfaction for me as well, especially latency related to keyboard input and program startup. Are there any resources out there specifically intended to help you speed up your Mac in these areas?
It boggles my mind how terrible the web is becoming on lower end connections. I have about 200ms of delay to most servers and about 5Mbit of bandwidth, and many major web pages take over 30 seconds to load if I don't block most of the assets.
It's insane that webpages are now loading megabytes of data across hundreds of http requests before the images even start downloading.
I've completely given up on the mobile web since I can't block things the way I need to. It's just unusable. It's a platform that just flat out doesn't work anymore for lots of people in remote areas.
I think there is a serious culture problem in the web development world.
Web devs are obsessed with the fancy new frameworks that allow you to use even more JavaScript and somehow convince themselves that it's what users want.
And not only do the users get stuck downloading megabytes of JS, but now the client computer has to do all the calculation of putting the page together!
There is a strong aversion to simplicity. A single web server serving HTML is simple and fast and easy. But web devs want "new and interesting", users be damned. So we get huge kubernetes clusters running all sorts of useless junk just to serve huge JS payloads for no benefit.
>Web devs are obsessed with the fancy new frameworks that allow you to use even more JavaScript and somehow convince themselves that it's what users want.
Do they really convince themselves of this, or even care about users? Why would they care about anyone besides their employer, who only has an interest in profit, and doesn't care at all about users aside from how much profit they can extract from them?
>There is a strong aversion to simplicity.
Right, because simplicity doesn't generate profit. Simplicity is a web page that tells the user what they need to know, and no more. But that doesn't generate as much profit as having loads of JS which tracks the user so that information can be sold to marketers.
>But web devs want "new and interesting", users be damned. So we get huge kubernetes clusters
The other factor is skill development. Crafting a simple website with minimal JS doesn't look as good on your resume as working with the latest trendy tools; having Docker and Kubernetes on your resume looks great for future job prospects, so devs have an incentive to use those things everywhere, even if it just overcomplicates the solution and makes things slower.
> I've completely given up on the mobile web since I can't block things the way I need to.
Can't you just use uBlock Origin with Firefox Mobile and run it behind a localhost VPN like Blokada? It's not perfect but it seems enough for me. A different localhost VPN will probably give you even better results, if you customise it to your specific purpose.
Not on iOS. I got one recently for while my Android phone's being repaired, and it was an unwelcome surprise that A) FF (or anything else) can't be set as the default browser instead of Safari; B) 'content blockers' only work in Safari, even though all browsers are forced to use its engine to render anyway.
You shouldn't be trying to set anything besides Safari as your default browser in iOS: Apple doesn't want you to. Your desires are not important if you're using an iOS device, and you're thinking wrongly to be wanting to do anything differently. If you want to do things your own way, Apple products are simply not for you; they've made this abundantly clear for decades now.
I thought I'd try it out while my usual phone's being repaired (plus I'm sceptical it can even be repaired, so that two weeks might turn into four if a replacement isn't forthcoming) - since Android has its issues too.
But yeah, I don't think it's for me. It may be a step forward on first-party privacy, but it's two steps back on third-party. And 20 steps back on 'platform freedom', no changing defaults, no file access for running apps like Syncthing, extremely limited customisation of the home screen (I like a simple alphabetical list of apps, nope, can't have that) or 'control centre' (not possible to put hotspot toggle there? Come on...) etc.
Plus fonts and UI elements feel massive, even on the lowest text size. The Slack 'sidebar' takes up almost the full width, and about five channel names fill the height. (On an SE. Larger models of course have more space, but it's the size of things that irritates me, which presumably wouldn't change; there'd just be a few more of them.)
I usually browse with my phone on wifi with fiber and it's still mostly terrible. There are ads and popups everywhere. It feels like Indiana Jones avoiding traps to get to the golden idol.
This is just totally untrue. Even my Windows 10 computer boots from BIOS screen to login screen in under 8 seconds.
I remember what it was like running Windows in 1994: even after waiting 40 seconds for Windows to start up, I'd wait upwards of an additional 3 minutes before the computer was actually usable. I remember with keen frustration waiting for the churning disk noise to die down as the signal that the computer was now usable. Since the mid 2000s or so this has just never been a problem.
Google Maps is great for A to B. I love it. Use it even if I know the route since traffic can be a pain. It appears fast for me. I'm very much disconnected from what the post says. I've used DOS, Windows 3.1, Windows 95 and so on. Now I'm on Mac and Fedora. Never will look back.
Let's remember here that most operations back then were executed against a cache of data on the local system. That cache is way too big to store locally nowadays, so of course we are going to contact a server to perform the operations, and we will experience network latency that varies. US internet speed has stagnated. Some time in the future, if the typical ISPs don't get their way and we keep getting faster and faster connection speeds, these latency complaints will become irrelevant.
Developers could do better in sending less data in general, but that will take more time. Does product management care much about latency if they are not given the mandate? We seem to all want features and more features, but less of the cruft that follows as a result of development, which is weird. Best way to vote in the end is to just build your own application however you can. Waze's features would have never made it into Google Maps if users were not using them instead of Google Maps.
I use Haiku, which is a fast-responding OS. Yet I was able to double the speed of a text search program by writing a custom version of grep; however, I also knew the kind of searches I run, and got rid of a lot of the matching options that were in the original code.
The result: a far faster program, but with far fewer options available to the user. I think you will find that many of the older systems that were faster were also more limited in the options they offered users.
Even if true, our computers are able to access many, many, many orders of magnitude more information than my computer could in 1983. What my computer can do for me today is probably 1,000 times more valuable to me than what my computer could do in 1983.
If it takes being slower, I'll take that tradeoff in a heartbeat.
(But I also remember it taking up to a couple minutes to load a program from a floppy, so there's also that.)
I wasn't around computing in 1983, but phone and web app latency is still way behind the average early-2000s office or line-of-business app.
Also ironic is the fact that the most significant improvement in responsiveness during the last 20 years was afforded not by better software or even faster CPUs, but by the move to SSDs for the OS, data, and swap space.
I would say it doesn't have to be like this. I have made a whole thing out of stripping complexity out of things, which includes favoring cli tools over gui counterparts, etc, and most of my daily stack is nice and fast and not suffering from this. Webpages with good adblocking and scriptblocking start to seem functional again, but are still the main sticking point since I am doing a whitelist approach (and sometimes forget to commit my rule changes).
If a program is written in Electron, it's an automatic "won't install", for example. The same goes for npm. If one learns to be more discerning in their tooling, they can avoid many of these pitfalls.
I would also say; go on a pid hunt sometime. Does that process really need to be running? No, like really?
I'm old enough to have done applications that were meant to run on IBM 3270 terminals.
Applications that dealt with text and filling out forms were probably better on the 3270-style interface than they are on most web applications, for no good reason. Several times a week I get bitten by a bad form. Forget to put in my Country, hit submit, and the entire form gets erased with an error message, etc.
That being said, for graphical applications his hated mouse is better than what we had in the late 70s. Doing CAD on Tektronix 4010 light-pen terminals wasn't fun. People still went back to T-squares and drafting tables because of this.
People went with HTTP and other shitty models; now we live with the pain. Good for selling new hardware though, so who really cares!
There's a ton that could be done to keep our modern looks while getting old-school performance; the issue is mainly in how we store and subsequently use data.
Tons of people still use DOS-era software for this very reason: with their old programs they can perform the same work 1000 times faster because of how the data is stored and presented. It generally doesn't go through 1000 layers of processing on each click, and a lot of the data is stored as a 'view', not some raw binary blob to be parsed out again on demand.
I wish all websites were as lightweight and fast as HN.
I sorta want to use New Reddit (because I like having smaller windows for multitasking, and responsiveness makes that much more convenient), but boy is it painfully slow even on a 100+ Mbps connection.
Whoa, no need to attack the poor mouse. It has its place, especially for a generation of PC gamers that came up with optical mice. Personally, the muscle memory I've built from two decades and some odd years of PC gaming seems, for some reason or other, to have made navigating UIs with a mouse that much more intuitive to me - to the point where it actually feels really good and extremely proficient. I feel like I'm faster using a mouse than pure keyboard shortcuts, although I do use a combination of keyboard shortcuts and the mouse. I guess it just depends on the task/workflow.
I hate when games do something like have their main menu be a webpage. All I care about is the start button, and it sometimes takes a good 10 seconds just for it to pop up.
One of the things that drew me to Linux in '93 was running 'find /' on the console. This was blindingly fast versus the same on a SparcStation 4 (a far more expensive and generally superior platform).
This was sort of an accident, in that SparcStation consoles were slow partly because they weren't meant to be used. But still, it was impressive as hell at the time.
The only slowness I feel running off an NVMe SSD is iTunes on Windows. For some reason Apple made every basic click in iTunes have a built in delay. Doing almost anything has 300-5000ms of delay. Playing video and seeking ahead or back freezes for about 5 seconds for me, while VLC, MPC-BE or Power DVD are nearly instant for 60GB 4K UHD files played off a NAS.
A use case where I see this issue very often and would love to see pure text based interfaces is healthcare. Half of providers’ and nurses’ time is spent clicking on tiny things in drop down boxes before scanning a barcode, which to me is pretty silly. Hitting F4 or something and entering whatever, and then scanning would make it so much faster.
This is faster in your browser than your whole computer can run the current bloated crap. Click "reset" in the gear menu and watch it reboot. OOoooo so fast.
Oberon OS plus "Gadgets" has never yet been equaled or exceeded by any other UI, anywhere. If you haven't used it you can't know.
Just the separation of selection and (text, not mouse) cursor is so useful, let alone the built-in cut/copy/paste with mouse chords. Don't get me started on the Gadgets system...
But enough about the greatest OS+UI ever made...
(And my computer freezes for a full second when I scrolled down to the broken (on my machine) image. The fuck Firefox Y U so slow!? Junk. It's all junk. Made out of junk, fractal dimensions of over-wrought crap. Redundant computations that burn energy to do nothing but fail to be better than decades-old versions of itself, the fires of a digital hell.)
> When you're all done, you go back to your plotted trip and start laying out the chosen locations and optimizing your path.
> You can do this with a paper map. You can't do this with gmaps. So you just don't do it.
Has this guy never used Google Maps? You can definitely plot out a trip with multiple points and pick specific routes for each leg along the way. I know because I take a lot of road trips.
Anyhow, computers might be perceptually slower but do so, so much more and are more pleasant to use. Being able to save all my documents to the cloud and use them from any browser is massive. Google Maps has changed the way I travel. The fact I can control my TV from my browser while using HN and planning a route on Gmaps at the same time. The fact I can plug anything into my laptop and have it just work (even my Linux laptop!). And if you really, really miss that 80's era text-only interface, well, you can still have that.
Again though, the sheer amount of data you have access to nowadays and the way computers can visualise it is mind-boggling. Staying on the maps theme, the fact you can find any location on earth, get a satellite image of it and a whole bunch of info in it just by dragging a map around is insane, and I grew up in the early internet era.
Some 50-year-old developer is angry because he must use "the web"... I'm 40, I was using PC XTs/ATs in 1988, and no: Clipper, reading from floppy drive A: and rendering it, WordStar and Lotus weren't faster. You are just older.
I found recently that dragging the middle half of the screen on mobile street view now scrolls up and down the street instead of rotating the view. I don't know why companies obsessively break things that worked fine for years.
Maybe a small select group have been. Do you really believe that the resources available in 1983 are even in the same category as the current ones? Come on.
> Do you really believe that the resources available in 1983 are even in the same category as the current ones?
No, I never wrote anything remotely close to suggesting so, if by "in the same category" you mean volume. If you meant quality, well, that is another matter. There is so much low-quality content online; you could argue it makes enough noise to drown out the high-quality content. The first search results are rarely the best.
However, people should stop disseminating distorted realities. People learned things before the Internet. I would even suggest many learned better before the Internet. You did not need a superficial Medium blog post on a subject or a hand-holding tutorial; you reverse engineered the binary. Besides, there have been plenty of books on programming for a long time. I have a great one from '78, The C Programming Language.
Google Maps is a bad example in your rant. I've used it to explore many cities, not only for turn-by-turn directions. It worked great, although I personally like OpenStreetMap, because I feel one company shouldn't control so many aspects of life, and I like its open-source nature.
Indeed, I was able to discover an onsen built by a community in Hokkaido. That started from a lake I spotted just by zooming in and out of the map to explore, and I saw it had been built by the Ainu community living there. After that, the turn-by-turn map helped quite a lot to reach the place, just by clicking and dropping a pin there. I submitted a text describing it, and it has since been visited by many people.
I took a journey through a treacherous mountain range, went up to 2500 meters, and crossed it via a tiny off-road track designed for motorcycles. That too I did based on exploring the map on my phone.
Paper maps didn't provide such convenience. Yes, POS systems are slow, but they face different expectations today. Try integrating an old POS with 10 payment networks that authorise transactions within milliseconds. I worked with DOS-based POS systems from 1990 on, and also wrote IBM MQ C++ integration code. I don't want to go back there.
I think today is a step forward. Like always there will be good and bad as it was in 1980 or 1990. But largely today we do much more with computers than we ever did in those days.
So not everything on computers is perceptually slow. The Apple Watch, iPhones, and Android phones are responsive and usually fast. Windows 10 might be slow for certain tasks, but overall it's not that bad either.
As far as exploration goes, you can do the same or even more today than before. I can discover planets, build 3D models in physical form, use a desktop CNC, and do laser cutting in my home.
I can build a computer like a BBC Micro or a Raspberry Pi, fly a computer using fly-by-wire and test it with a real physical model plane, and do small-scale farming at home using hydroponics. I could go on and on. All of this is possible thanks to advances in speed and reductions in the size of computing devices and peripherals.
So I feel it's not slow; it's just that the work we did with old computers is very different from what we do with modern ones.
Google Maps is actually bloody good at just being a map. It's extremely convenient to zoom in and out, click around, view locations, add them to your route, etc. Yes, you can up the productivity by using more tabs, and that is the standard nowadays because we humans are being trained to be multitasking monsters. Forgive me for ending my read there, because to me it just sounds like a rant from a bitter old person who doesn't understand this brave new world any more.
I hate to be this guy, but there are good points in here, so why is there a newline after every sentence? This is not how written English is supposed to work :(
EDIT: Apparently it's the output of a Twitter thread unroller. My point still stands and, ironically, it's another example of how randomly smashing a bunch of applications together is broadly a degradation of the status quo.
The original content was a series of tweets and the link is a website that summarizes long twitter threads, so each newline indicates a follow-up tweet.
I just wish the author used actual paragraphs. I find a string of 1-2 sentence "paragraphs" to be difficult to read. I glanced through it, thought "he's got a good point here and there but its not THAT big of a deal" and then closed it.
To me, this the major paradigm shift is away from native clients to a model that, at the very least, allows for the entire client to be re-downloaded every time the application is loaded, which is the case in the current web app landscape.
Even if the entire application is stored locally, the focus is frequently on code reuse with systems like React, so that model is still baked into the design.
We talk about function keys being missed, but one of the main issues is that your web browser already makes use of many of those keys. This is not a problem with the particular web app you are using, but with the fact that you are using a web app at all: a client running within your browser, which is itself a native client.
Simply returning to native desktop clients for productivity software would be a huge step in the right direction. There are economic incentives not to accommodate this user need, however. Developers need to offer cloud-based, app-based access to their services in order to remain competitive; for many companies it is built into their SaaS business model. If you develop a productivity application professionally, you would need to believe that a native client (and all the resources and man-hours it consumes) will give you a meaningful competitive edge. The benefits are obvious to developers who have time to reflect on it, and to the users running a POS system, but not to the people several levels above the POS operator who are making the decisions. For them, a cloud system that centralizes access, control, and supervision simply works better.
I had a job recently where I had to enter data for a lot of invoices into an intranet cloud system. The system was slow; fields had bugs that literally caused the text to be entered backwards for some reason (seriously)... and it sucked for me. But for our business partner, who was a majority owner of our main asset, it afforded control, centralization, and remote administration.
I actually agree with the OP aesthetically, but the piece dwells on the POS operator and forgets the hidden users--the POS operator's managers and the business ownership.
I will also say, to those of you who feel anti-JavaScript: an SPA is much better at managing stateful applications than pages dynamically generated by the server, and I'd much prefer to minimize chatter between client and server where possible, which SPA frameworks do afford.
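To make that concrete, here's a minimal sketch of what I mean; it's a toy example of my own, and names like updateQuantity and saveCart are made up for illustration, not any particular framework's API. In the SPA style the state lives on the client, so each edit is applied locally and only the final save crosses the network, whereas a server-rendered flow pays a round trip (and a page re-render) per interaction.

    // Toy TypeScript sketch -- hypothetical names, not a real framework API.
    interface CartItem {
      sku: string;
      quantity: number;
    }

    // SPA style: state is held on the client.
    const cart: CartItem[] = [{ sku: "A-100", quantity: 1 }];

    // Editing a quantity touches only local state -- no network round trip.
    function updateQuantity(sku: string, quantity: number): void {
      const item = cart.find((i) => i.sku === sku);
      if (item) item.quantity = quantity;
    }

    // One request covers an entire batch of edits.
    async function saveCart(): Promise<void> {
      await fetch("/api/cart", {
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(cart),
      });
    }

    // Server-rendered style, for contrast: each quantity change would be a
    // form POST, the server re-renders the page, and the browser reloads it,
    // i.e. one round trip per interaction instead of one per save.

That's the chattiness an SPA lets you avoid, at the cost of shipping the state machine to the browser.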
I think the whole thing could be improved by better support for native-style applications in the browser, possibly running in some "native mode".
The last thing I'll say is that windows are nice for multitasking, but they do represent a significant cognitive load to manage, and you usually can't do it from muscle memory. I remember teaching my grandparents to use AOL around 2000, and I literally had to make exercises for them to establish "object permanence" with windows that were hidden. The interface appeared flat to them, and they didn't intuitively grasp that a window jutting out from behind another was part of a separate interface altogether. They got it eventually, but it is the type of thing that is easy to forget when we live our lives enmeshed in window-based operating systems. But look at phones--they don't have window-like functionality even after all this time, with bigger and better screens, multi-core processors, etc. Why? Because we still consider a phone to be a "device". It's capable of multitasking, but we subdue that aspect of the interface in favor of focus and usability. Terminal applications are similar in that respect, and I think that's why people find them attractive in some contexts.
To anyone who feels strongly about this, I'd say: Develop a beautiful productivity app for native desktop usage. Market it. Make people fall in love with it. A deluge of such apps may promote a reversal of this trend.
An article like this gets posted every so often, and it is normally absolute tripe.
When I was a kid I had a hand-me-down computer. It generally took 20-30 minutes to load a game from cassette. There was no interchangeability between computer manufacturers. I had the choice of BASIC or assembly to program in, and the whole interface was BASIC.
Today I can load up a web browser in a second or two, transfer files between machines with ease (for the most part), and download any language I want to program in within minutes.
I remember programming culture in the 2000s, and it was so stupid. Every programmer was whining 24/7 about 'bloat'; back then they were complaining about Java, though, not Electron. Thank God the whiners never got anything they wanted.
It's even worse now. JavaScript, originally intended for form validation and image preloading, took over. And they decided to compile it (sorry, transpile it), thus making front-end work as tedious as back-end work.
Oh we still exist.
I had this discussion not too long ago with a colleague, about how some slowdown caused by bloat he could easily have avoided was supposedly a very minor thing.
Fixing it would take the guy who made it a day of work, plus another half day to distribute, etc.
He found it really strange that I was so adamant about such a tiny bit of slowdown that probably nobody was bothered by. Then I low-balled the number of people using the software and the number of times they hit that delay every workday, multiplied it out over the past few years, and the total came to months of lost time. Even if only a fraction of that time could have been put to use, the fix would have been worth it.
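To give a rough sense of the arithmetic, here's a back-of-the-envelope sketch; every number in it is invented for illustration, not the actual figures from that conversation.

    // Back-of-the-envelope estimate in TypeScript, with made-up inputs.
    const users = 50;             // people using the software (low-balled)
    const hitsPerDay = 30;        // times each user hits the slow path per workday
    const delaySeconds = 2;       // extra delay per hit caused by the bloat
    const workdaysPerYear = 230;
    const years = 3;

    const totalSeconds = users * hitsPerDay * delaySeconds * workdaysPerYear * years;
    const totalHours = totalSeconds / 3600;
    const personMonths = totalHours / 160; // ~160 working hours per month

    console.log(`${totalHours.toFixed(0)} hours, roughly ${personMonths.toFixed(1)} person-months lost`);
    // With these invented inputs: 2,070,000 seconds ≈ 575 hours ≈ 3.6 person-months.

A day and a half of developer time against even a fraction of that is an easy trade.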