Ask HN: How do you explain the sloppiness of modern software?
157 points by etamponi on Feb 1, 2022 | 242 comments
This is a recurring theme on HN, so I think you all have very good opinions on this topic: why does modern software seem so unpolished, slow, bloated, unprofessional?

Let me provide a (frustrating) example: the last straw for me has been OneDrive. I am using it to select and share photos from my wedding. It is an app written by one of the largest and most ancient software companies in history, so they should know something about making apps. And still:

1) The directory list view keeps "losing" the position at which I am, so every time I share a photo, I have to scroll down to where I left (in a directory with 5000 pictures).

2) If I screenshare using the Google Cast functionality, after a few dozen photos it loses the signal and I have to wait a few minutes before reconnecting. The entire app becomes extremely slow in the meantime.

3) The app in general is inconceivably slow. What is taking so long? I have been viewing the same directory for 2 hours; why is it still so slow to load?

So at this point I am struggling to understand: how come such an app got released? Are the incentives given to developers so at odds with app quality?




I think companies like Google, Apple and Microsoft have realized that QA departments and software quality aren't worth it. People have gotten used to buggy software. At Apple, there's no Steve Jobs that cares about whether things actually work. Releasing new features to get media attention is more profitable than making sure the features actually work. We'll never see something like Snow Leopard again with its "no new features". Internally at these companies, there's also no reason for developers to care about quality. It's not rewarded by the managers.

Additionally, we as developers keep building software using more and more complicated tools that seem fancy and new to us, but are brittle and don't deliver good software in the end. We keep adding more and more layers of abstraction, both on the frontend and backend. Why? To put it on our CV. Things are moving so fast that we're afraid to get left behind. We're at a point where things just keep getting more and more complicated – actually keeping something alive (let alone building new features or making those features work) takes more and more man hours.


> actually keeping something alive (let alone building new features or making those features work) takes more and more man hours.

And the people who made the brittle architectural decisions aren't the ones suffering the fallout increasingly often; with companies shunning internal promotions and substantial raises, more and more developers turn into nomads who switch companies every other year just to keep up with inflation. They rarely if ever get a chance anymore to grow with a single codebase and learn what makes it stable and easy to iterate on long term.


> And the people who made the brittle architectural decisions aren't the ones suffering the fallout increasingly often;

Ugh. Ain’t this the truth. I’m stuck working with Laravel because of this. The two people that pushed for Laravel the most are now gone.


> And the people who made the brittle architectural decisions

Bitrot is unavoidable, unless code is rewritten from scratch. Complexity rises in proportion to the number of links, which is `n(n - 1) / 2`, where n is the number of nodes, or `n²/2` for large n.

It takes significant effort and lots of experience to beat n², to keep code simple. The easiest way to reduce complexity is just to rewrite code from scratch. The harder way is to drop some old nodes (concepts, files, database columns, cases, fields, steps, etc.) when introducing new ones, which may shock somebody who has no formal training in software engineering.
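
To make the parent's point concrete, here is a toy TypeScript sketch of that n(n - 1) / 2 growth (the node counts are arbitrary): every new node can potentially link to every existing one, so links, and with them complexity, grow quadratically.

  // Upper bound on pairwise links between n nodes: n(n - 1) / 2.
  function linkCount(nodes: number): number {
    return (nodes * (nodes - 1)) / 2;
  }

  for (const n of [10, 100, 1000]) {
    console.log(`${n} nodes -> up to ${linkCount(n)} links`);
  }
  // 10 nodes -> up to 45 links
  // 100 nodes -> up to 4950 links
  // 1000 nodes -> up to 499500 links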


This comment made me sad. I have always believed in excellence as an art, something that is good to strive for as a pursuit in itself, and something that makes life more meaningful and beautiful.

Steve Jobs probably understood that.


I have always considered myself to be a craftsman, as opposed to a programmer.

I take pride and joy in the sheer Quality of my work. I am quite aware that most corporations consider the way that I develop software to be “inefficient” (i.e. doesn’t result in inbound cash flow).

But I work surprisingly fast. I often have to pause, in order to let the team catch up. That is what comes from over 30 years of programming experience, where Quality techniques become rote habit.

My approach doesn't always win friends. The people I work with absolutely love the results, but everyone else seems to think that I'm a "snob" (they may have a point).

I’ve learned that it’s best to just keep my opinions to my own work (or the work of others, that I plan to integrate into my work).


For some reason, the term "craftsman" has come to mean "obsessed with quality" in programming circles. I'd just like to point out that most traditional crafts (e.g. tailors, carpenters, metalworkers) both today and in the past are absolutely not obsessed with making heirloom-quality stuff all of the time. As long as the customer is happy and they get a competitive price for their time, they will be happy to take your project on.

Also as any historian will tell you there was a LOT of shitty quality stuff produced back in the day as well. It's just that nobody bothers to keep a wooden cabinet or a woolen greatcoat for multiple centuries unless it is absolutely magnificent. You can still get great quality bespoke furniture/clothing/etc today btw, but don't expect IKEA prices.


I'm not necessarily "obsessed" with Quality, but I have learned that Quality today means less time spent scooping up kitty litter tomorrow.

I also worked for a corporation (Japanese) that is pretty much synonymous with "Quality." It was a religion, there.

In most American companies, the running joke has always been that if you have "Quality" in your job title, your career is over.

At this company, it meant that you were a high priest.


In most American companies, having "Quality" in your title means you're never thanked, it was never your idea despite being mentioned and repeatedly shot down 9+ months ago, you're either tolerated or downright feared, and generally you can spot no shortage of intrigue in the enterprise, but no one wants to acknowledge it.

Amusingly, my career actually started with a Quality bearing title.

Maybe I should try a Japanese company one of these days. See how the real adherents of Deming-san do things.


I suspect that a lot of the other cultural shenanigans would drive you nuts.

For example, we used to joke about "The Japanese 'Yes.'"

This was where we would patiently explain how some command/policy/idea would not work, and the person we explained it to would understand perfectly and say "Yes, you are correct."

Then, they would instruct us to do it, anyway, because some higher-up said "Make it so," it was in the project plan, or it was corporate policy.


...This may sadly be a refreshing change in my book. I think I might be able to handle face saving and climbing the ladder and being repetitive about it. I've honestly gotten kind of used to that.

I'm much more tolerant of the straightforward "Yes, because..." than the American "but I can't".

One is still potentially a productive conversation; the other is a platitude, and exhausting. I will take that under advisement, however.



Yes, and worth emphasizing: this "survivorship bias" phenomenon is common in comparisons of the present to the past. People imagine quality was higher and standards were higher long ago... they weren't. We just don't retain the vast majority of low-quality stuff for centuries; we only retain the top 1% or 0.001%, while the low-quality stuff is forgotten.

Analogously, people sometimes ask why the average perceived quality of classical music written in 1810 or so is so much higher than the quality of music being written today. Well, the real average quality of 1810 music wasn't higher; we just don't bother performing or listening to the bottom 99% of music written in 1810 that was not very good. We only listen to Beethoven and a few others who were at the very top. This creates the inaccurate perception that "most music was really great in 1810, much better than today; standards have slipped and nobody cares about quality music now."


It's just great you find pride and most of all joy in your work. It makes going home after a long day, or enjoying the results, so much better. Lots of companies (business school drones most of the time) simply don't understand it.


He knew. Below is a quote from Isaacson's book.

Fifty years later the fence still surrounds the back and side yards of the house in Mountain View. As Jobs showed it off to me, he caressed the stockade panels and recalled a lesson that his father implanted deeply in him. It was important, his father said, to craft the backs of cabinets and fences properly, even though they were hidden. … In an interview a few years later, after the Macintosh came out, Jobs again reiterated that lesson from his father: “When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood in the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through.”


On the other hand, when push came to shove, an arbitrarily chosen deadline trumped all else. Taking the original Macintosh as an example again, see chapter 7 of Steven Levy's book _Insanely Great_, which describes how the Mac software team scrambled to get a barely working version of MacWrite done in time so the Mac could be announced at Apple's yearly stockholders meeting and available in stores the day after.


> “When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood in the back.”

As a professional programmer and amateur cabinet maker, this is a weird quote. I get the point of what he’s trying to say, but the truth is that all the most beautiful wood goes to the veneer mill to make very expensive, very nice plywood. Plywood is not MDF or particle board and there’s absolutely no reason not to use it for the back of a cabinet.


Did he? To quote Steve from the iPhone 4 antenna issue: "You're holding it wrong."


I think that was just his asshole-businessman side trying to save the company from a PR disaster and billions in lost sales.


You know I am coming to hate the word "just". Mainly because in modern vernacular it is always used to excuse behaviour that would otherwise be described as wrong. Just is the one word that the normalisation of deviance relies upon.

If Steve cared about the user he wouldn't have had to say it, because they would have found that bug in testing. So either they were incompetent and didn't know about it, or they knew about it and said fuck it, the idiots would buy it anyway.


> Steve Jobs that cares about whether things actually work

Did he? At least in my experience, Apple cared about the looks of things and not whether they actually worked flawlessly.


Jobs cared deeply about the details, both UI and UX; it's just that the details he cared about and how he prioritized them might be ones you and I disagree with. He almost never sacrificed form for function, but he certainly did care for both.

That attention to detail was one of the reasons people loved Apple products and why so many Mac-only or truly Mac-native apps exist. Contrast with windows where most "native" apps are either Qt-based or some WPF monstrosity full of custom controls. Microsoft Office has its own UI stack ffs.

That doesn't mean his vision was always good though, or that he was a good designer (he was good at picking things and making sure they fit, but he didn't come up with them himself)... Some decisions are just baffling to me; seriously, how long did Macs stick with one-button mice?


“It just works” used to be a motto. One of Jobs’ most quoted phrases is “design is not just what it looks and feels like; design is how it works”. Snow Leopard was advertised with “zero new features” because the focus was on improving what existed.

There are plenty of other examples, in interviews and stories from internal events, which corroborate he cared deeply about Apple products working well.


Jobs as a QA engineer would be doing smoke tests by "eating his own dog food". He would make sure that at least the fundamental tasks of software/hardware work flawlessly.


While I agree that bugs can be infuriating, the big question is when a feature is "good enough" to be marked as done. You just have to set a goal post somewhere, otherwise we'd still be in the software stone age.


New features have diminishing marginal utility. Everyone using Word notices if there's lag between typing and letters showing up, most people notice if you can't set the font, fewer people notice if it can't set the margins, down to features that almost nobody is aware it has.

Raising the quality bar quite high before you call a feature done is generally the correct call. Yes, your software might not check off every box on some list, but you can ensure the features people care about work well.


That only works if the ones purchasing it care about quality instead of which one checks the most boxes.


Big projects, big turnover of people. Each team develops its own unit. Maybe they don't even see the final product. I wonder how many of them even try to run it once it is done?


I'm not sure how far back you'd consider "modern" but back in 2000, Microsoft Office was unreliable, bloated crap that would occasionally refuse to open, reinstall or be repaired; I had to occasionally resort to reinstalling Windows entirely. Adobe Flash would randomly chew up the entire CPU after playing certain applets, even after the browser closed. Browsers? When they crashed, which they did often on some machines, they had no session restore. Boom.

Sage Payroll across a network - even a 100Mbps switched setup - was laughably slow because it would do I/O a byte at a time.
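
For anyone who has not hit that failure mode, here is a rough Node/TypeScript sketch of the difference (the path and buffer size are made up): each tiny read is a separate request, and on a network share each request pays a round trip, so byte-at-a-time I/O multiplies latency by the file size.

  import { openSync, readSync, closeSync } from 'node:fs';

  // Pathological: one read call (and, over a network share, roughly one
  // round trip) per byte of the file.
  function readByteAtATime(path: string): number {
    const fd = openSync(path, 'r');
    const one = Buffer.alloc(1);
    let total = 0;
    while (readSync(fd, one, 0, 1, null) === 1) total += 1;
    closeSync(fd);
    return total;
  }

  // Sane: 64 KiB per request, i.e. tens of thousands of times fewer requests.
  function readBuffered(path: string): number {
    const fd = openSync(path, 'r');
    const chunk = Buffer.alloc(64 * 1024);
    let total = 0;
    let n = readSync(fd, chunk, 0, chunk.length, null);
    while (n > 0) {
      total += n;
      n = readSync(fd, chunk, 0, chunk.length, null);
    }
    closeSync(fd);
    return total;
  }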

Symantec's antivirus would refuse to update, even on freshly-installed servers. Norton had official instructions on troubleshooting LiveUpdate which included having to unplug the modem to fool it into thinking you were offline.

Windows would hang, crash, run like treacle, and die if you plugged in a printer or inserted a blank CD-R that hadn't been formatted properly.

When iTunes first came out? Oh god, one early version broke the Windows Installer somehow, killing all MSIExec-based software in the same way.

IME, software is just as bad as it always has been. Microsoft still ignore their own best practices, design guidelines and even user consent recommendations. Electron may be new but it's as efficient as software has always been: insultingly, laughably so. Installing printer drivers seems easier, until you realise three months later that it's pointing to an IP address, not a host name, when the printer is given a new one. Forced to set up an online account to scan, or even just to log in to your computer.

I don't even have to work on enterprise software which is why I do not have grey hairs :-)


> back in 2000, Microsoft Office was unreliable, bloated crap

Sadly, it did not stop there. When I worked as a Windows sysadmin / helpdesk monkey, I had to reinstall Office 2013 on several machines because it just stopped working for no apparent reason.

On one machine, an update to Office 2013 broke Autodesk Inventor (by replacing/updating some vbs file, I suspect); reinstalling Inventor kind-of worked, but it caused MS Outlook to crash whenever the user tried to write an email (some might consider this an improvement, but not everyone agrees). Reinstalling Office fixed that without breaking Inventor again.

The worst part - from a helpdesk perspective - was that there was no rhyme or reason to it: this type of problem would pop up randomly on some machines, while other machines with practically identical hardware, OS, and applications installed ran perfectly fine.


I agree with all of that, but this is a misrepresentation:

> Electron may be new but it's as efficient as software has always been: insultingly, laughably so

Electron is definitely bloated, but it's possible to write sufficiently fast¹ software like Visual Studio Code. It can't be put in the same basket ("insulting") as Sage Payroll.

¹="sufficiently" fast, not "blazing" fast


As far as I recall, VSCode had to fork Electron in order to make it acceptably performant. The team tried to use stock electron but the performance was not good. And they're probably going to move away from it because Electron is still slow and terrible.

https://github.com/microsoft/vscode/issues/118308


> The team tried to use stock electron but the performance was not good.

Can you provide references for this fork? If you actually look at the VSC manifest, you'll find that the vanilla "electron" is a dependency.

> And they're probably going to move away from it because Electron is still slow and terrible.

I take it you haven't used Visual Studio Code. As I wrote, it's certainly not the fastest editor, but it's fast enough for professional usage. It's actually faster than (or at least comparable to) other "famous editors".

> https://github.com/microsoft/vscode/issues/118308

This doesn't have any actual technical content; it's essentially a variation of Rewrite It In Rust.


So I know that currently they use electron, but I vaguely recall them working on a fork of electron to replace chromium etc. across all their products (edge, code, others?). I can't seem to find public references to it related to code, it might have been discussed more in regards to the edgehtml/chakra implementation. I may also be mixing things up, that was around 2015 I think.


Much lower memory consumption and startup time cut down to ~50% sound like technical enough reasons to me.


Untruths.

https://github.com/microsoft/vscode/blob/main/package.json -> "electron": "13.5.1",

Referenced github issue seems to have only random people unaffiliated with the project team.


Yes I'm aware that they currently use stock electron, I was referring to earlier development. Maybe like 2015 or so. Could be mistaken, I'm not sure that work was ever published.


I've recently installed Visual Studio for the first time and it uses up 3-5% CPU while doing nothing. It takes an age to load. It uses heaps of RAM with no projects open. I'm not sure if VS is also Electron but it's another black mark against Microsoft either way.

I beg to differ on Electron: I have never seen a single use of it where it doesn't use an extraordinary amount of RAM, and the mere concept of using an embedded web browser for a GUI is simply an unacceptable thought, let alone a realisation of modern-day software. It should never have happened.


> it uses up 3-5% CPU while doing nothing

This is meaningless without context, as editors typically run indexing processes in the background. I used to use $beloved_native_editor, which in some cases ran at 100% single-thread for extended amounts of time (due to indexing).

> It takes an age to load. It uses heaps of RAM with no projects open [...] I have never seen a single use of it where it doesn't use an extraordinary amount of RAM

I've taken some real-world measurements, with the disclaimer there is no universal interpretation of measuring memory:

  >  Private  +   Shared  =  RAM used
  >
  >   8.9 MiB +   9.1 MiB =  18.0 MiB
  >  17.0 MiB +  17.3 MiB =  34.3 MiB
  > 119.2 MiB + 149.4 MiB = 268.6 MiB
  > ---------------------------------
  >                         321.0 MiB
  > =================================


  >  Private  +   Shared  =  RAM used
  >
  > 332.1 MiB + 519.2 MiB = 851.3 MiB
  > ---------------------------------
  >                         851.3 MiB
  > =================================


  >  Private  +   Shared  =  RAM used
  >
  > 124.0 KiB + 176.0 KiB = 300.0 KiB
  >   1.1 MiB +   1.1 MiB =   2.1 MiB
  >  95.5 MiB + 200.2 MiB = 295.6 MiB
  >   1.3 GiB +   1.3 GiB =   2.5 GiB
  > ---------------------------------
  >                           2.8 GiB
  > =================================
Startup times (approximate), before being able to type: 0.3", 1.2", 4.7".

The editors are vanilla installations, and I've opened the directory of a relatively large project. The machine is a modern desktop one.

The editor that people love to hate is not the one you'd expect.


> back in 2000, Microsoft Office was unreliable, bloated crap that would occasionally refuse to open, reinstall or be repaired

I hope you are not describing Office 97 which is widely regarded as the best version of the Office suite. As for being bloated - today's "apps" would like to have a word with you ;)


Office 2000 was not particularly stable, or fast, and if it managed to corrupt a document in some way, it'd always crash trying to open it.


> Office 97 which is widely regarded as the best version

Only in retrospect. I was in high school when it came out, and I vaguely remember one of my classmates griping about it being slow and/or bloated.


Just don't try to use cross-document embedding (like putting an Excel chart in a Word doc) - that would blue-screen my PC fairly reliably.


To this day, MS Outlook still has a Safe Mode with limited functionality that activates when normal Outlook won't boot properly - I think this speaks volumes about their software quality.


I think most apps with plugins have a safe mode.


It still had its complexities but it was markedly more reliable than Office 2000, certainly. It also didn't take three days to install!


Regarding Sage: Sage provide the software for the card reader machines at my local SPAR. Frequently at lunch time, many of the employees on the tills are waiting for card payments to complete, which slows down serving customers.

As for the rest. I don't specifically remember many issues with Windows XP, if the install was kept tidy. Symantec was probably the source of your woes and many people used to use AVG.


When XP came out I remember many people vehemently staying on Windows 2000 since Windows XP just added a bunch of "fisher price bloat". It wasn't until some service pack (SP2?) that people upgraded.


People used to say a lot of nonsense back then. The things that slowed down XP a lot were ClearType (which was off by default) and the animations. If you enabled the Windows Classic theme (I forget what it was called), it was as fast as Windows 2000 and looked exactly the same.

The other problem that people frequently had was that a lot of software used Win 95/98-only hacks which wouldn't run on XP (or anything NT-based).


XP definitely improved with age, and by SP2 was a pretty decent operating system. Other than the fisher-price approach to the GUI and naming schemes I don't have a book of gripes against it :-)


The GUI could be set back to classic Windows quite quickly. There was also the Zune, Royale and Royale Dark (unofficially leaked IIRC) themes that looked nice.

XP 64bit (for x64 machines) however was the best Windows OS I've ever used. Fast and super stable.

People used to complain about Vista being bad. However that was mainly a problem with 32bit Vista. 64bit Vista was again amazing.


Oh yeah, and buying a video game usually entailed an afternoon of troubleshooting to get it running.


Agreed; when you look back at the early 2000s, we might as well view the current age as a renaissance.


There is a parallel in the history of workplace safety; e.g. I picked this one from Google at random, but all these histories tell the same thing:

https://eh.net/encyclopedia/history-of-workplace-safety-in-t...

  The sharp rise in accident costs that resulted from compensation laws and
  tighter employers’ liability initiated the modern concern with work safety
  and initiated the long-term decline in work accidents and injuries.
There was a long history of companies not caring about accidents and deaths from industrial machinery. Companies claimed it was unavoidable, blamed their workers for doing things wrong, etc... Then the law made them care. The unsolvable non-problem turned out to be a solved problem in a few years. We also got some red tape as a side dish, but that's a good deal, all in all.

Software is the same: if organizational leadership felt immediate pain from sloppiness, that sloppiness would disappear quickly. But now we have reached a point where software gives no warranty at all, users have accepted the low quality as a regrettable fact of life, and companies attribute their low productivity to vague, nebulous, unsolvable mysteries instead of sloppy software. So here we are.


> Then the law made them care.

A while back I came across a couple of channels from various government agencies and organizations that document industrial accidents. They were quite interesting to watch, but after watching several of them, I started to wonder whether the companies really do care, with or without the laws.


I assume that they care if and when the fines would overtake the profits. With risk assessment being an entire field unto itself...


It's easy to forget how bad software was back in the '90s. Third-party software on Windows 95 was horrible--crashes due to drivers and such were a constant source of headaches that we just don't deal with at anywhere near the same frequency anymore. You could WinNuke anyone connected to an IRC server. A Web page could blue screen you by embedding c:\con\con in an image. "Microsoft Word crashed" was just a constant thing people had to live with; there was no good way for Microsoft to know about the crashes before telemetry. "Starting Java..." would regularly kill your browser. Over on the Linux side, forgetting to run LILO after recompiling a kernel would render your system unbootable. XFree86 was a nightmare to configure. Etc.

Modern software is in fact more reliable than older software.


> It's easy to forget how bad software was back in the '90s

> Modern software is in fact more reliable than older software.

Putting this as "90s vs now" is a borderline strawman IMO.

I genuinely believe the peak in terms of performance and polish was somewhere in the early-mid 2000s; developers of the time had experienced the 90s and true shortage of CPU time and other resources. The mindset was still alive, even if PCs had gotten very powerful by 2005.

Mid-to-late 2000s mark the start of mass webification and mobilification of software. "Mobile eats the world" and "nobody wants to install software." In my experience it's been downhill from there; software started being developed with a completely different mindset and completely different approach.

During the early phase of this transition, your bottleneck was invariably network I/O; so developers didn't really need to give a crap about CPU cycles or RAM. Lack of UI polish could be blamed on the web being what it is, a document platform coerced into becoming an application delivery platform. Most of the early "app web" applications were truly horrific compared to "native" software, but no matter, the web was considered more accessible and a better platform for developers. And users didn't have to INSTALL SOFTWARE! Woohoo.

Of course the constraints are not the same today, but the original mindset that held until early 2000s is long gone. For example, instead of trying to stick to your platform's long established UI conventions as you would if developing Windows software in 2000, designers nowadays are more concerned about branding and making up their own experience; every "web app" looks and feels and behaves different.

Now as for reliability, I can't say whether the mid 2000s were worse or better than today. 90s software definitely had a lot of issues, but I had no trouble running Windows XP SP2 with months of uptime. The nature of software failures has changed, and today you have less crashes (though they're still there), but on the other hand, I regularly see applications being confused about their state, being stuck trying to do something that evidently doesn't succeed, infinite spinners, "whoops, something went wronk", "we can't serve your request at this time" and "please reload and log in again." If we consider all these to be failures, then I think software in general isn't any more reliable than it used to be. Updates are another interesting phenomenon; back in the day, I wasn't forced to update. I never lost work or had to reboot and restart everything because of updates.


I just want to point out that you're saying Windows ME was the peak of software performance and polish.

And following that - I call complete and utter bollocks.


The GP actually praised Windows XP SP2, not Windows Me. Now one could argue that in some ways, Windows XP was a regression and Windows 2000 was the peak. But not Windows Me; that's widely regarded as the worst of the doomed 9x line.


Sure - and apparently we're all happy to claim that "today is the worst - yesteryear was fantastic!"

And then ignore all the stinking piles of garbage that actually made up the past.

This is nostalgia in action. Those rose-tinted goggles have slipped down over your eyes. You remember "Windows 2000 was great", you don't remember "Windows ME was so bad it was literally (not metaphorically literally - literally literally) unusable, and Microsoft STILL SHIPPED IT!".


> Those rose-tinted goggles have slipped down over your eyes.

I never actually said I agreed with the nostalgia; I was just clarifying the position of the person you were replying to, and what I perceive to be widely held opinions.

> Windows ME was so bad it was literally (not metaphorically literally - literally literally) unusable

Perhaps it was that bad for some people. If so, that's unfortunate. But I ended up with Windows Me sometime in college, and it was usable enough that I used it to develop and ship the first few versions of the first commercial software product that I was hired to develop. So it wasn't that bad. Still, once I could afford a PC with Windows XP, I got one and never looked back.


Windows ME was strange. In some computers it was a modernized Win98, not really better or worse. On others, it caused all kinds of crashes, including total irrecoverable data loss on hard disks.

One of the weirdest examples I know of was a particular series of desktops, sold in the supermarket with ME. I knew two people who bought identical machines in the same week from the same supermarket. One of them worked flawlessly. The other was such a miserable, crash-prone machine. We forced Win98 onto the crasher and all was well again.


Right, I don't really know anyone who seriously used ME. I guess some people who bought prebuilt computers at an unfortunate time might have been stuck with it for some time. The people I know held on to 98 (or 98SE) for as long as it served them, and then switched to 2000 or XP. I also never saw ME in schools, libraries, corporate settings, etc.


Did I really say that? From any given era, you can find gems and garbage.


You could almost even say that the past is just like today! :D

It turns out it's not all roses and bliss. It's basically the same half-baked, mostly unusable, bucket of dirt you dig through to find the diamonds.

But everyone only ever remembers the diamonds.


No-one said it's all roses and bliss. My argument is that mainstream software was, on average, more performant and more polished (and not considerably more buggy or unstable; possibly less buggy due to being less complex) than it is today. I was relatively happy using mostly the same software that everyone else was using too. Today, not so much; I find most mainstream software unbearable, repulsive, slow, full of annoying quirks & rough edges. I'm not happy using it so I don't. For me it's a completely different era.

I don't know if anyone is measuring this objectively, and I assume not. So we might not ever be able to say whether the claim is objectively true or just my subjective experience. But it's not fair to dismiss the whole argument by mischaracterizing it.


My recollection of software from the Windows 95 era is different. The reliability of desktop apps varied depending on the type of software but problems were often related to hardware. Today, hardware is much more reliable and adheres to widely adopted standards.

Faster, more capable hardware hasn't necessarily made modern software better. In many cases the opposite: apps are slow and gobble up computer resources with no restraints.


>Faster, more capable hardware hasn't necessarily made modern software better. In many cases the opposite: apps are slow and gobble up computer resources with no restraints.

Not exactly a new phenomenon: applications always grow to exploit new features offered to them


Yes, that's very true. But in the 90s, both developers and users would not tolerate slow or memory-hungry apps. The hardware and memory constraints meant developers chose languages and tools for that purpose.

Today, developers choose languages and tools that suit their own comfort first. The user's comfort is often a secondary consideration.


We have very different memories of the 90s. I remember it being full of slow, memory hungry and buggy apps.


Your memory of the 90s isn't mine - memory-hungry apps have been a thing for over 50 years, my friend


I switched to Linux because of Windows 95. And getting Linux up and running back then was a real pain. But once it was running, it was way better than those bluescreens of death.

So in my memory it was not the hardware.


Win95 was a mess. However the Win2k I had, as I recall, was stable, solid and performant, esp. considering that it was running on much less than the weakest Raspberry Pi (IIRC I had a 450Mhz Pentium3 machine with 128 MB of RAM and a 5400 rpm HDD).


Win2k was really great. Super-stable and efficient. But we have to ignore the wide open security holes before the first few service packs.


Yep, constantly restarting PCs, having to kill random rogue processes in Task Manager (even explorer for some reason), having the alert come up in internet explorer that a script was taking too long, the list goes on and on.

There have been a lot of improvements in the meantime that have led to better software, amongst others: version control, automated tests, automated deployments and more useful programming languages.


> Over on the Linux side, forgetting to run LILO after recompiling a kernel would render your system unbootable. XFree86 was a nightmare to configure

I'm not sure those are reliability issues.


I wanted free photo editing back in the day so I could learn to be a graphic artist. Six months of messing with Xfree86 and I got a career in programming and sysadmin


I totally disagree with that. However, I was a Mac user during the 90s. In my experience, software was very reliable then. Of course, you had to be careful with extensions and other software that could crash the whole system, but once you got a stable system it remained fast and stable. Weird things only happened after Apple transitioned to OS X -- the first versions of OS X with their spinning beach ball of death really were awful.


The Mac didn't even have memory protection before OSX iirc? I seem to recall that people would have to restart their Mac if something crashed, since it wouldn't be just the application that would crash - it would be the entire OS.


> restart their Mac if something crashed

Perhaps the lack of memory protection and the catastrophic consequences of an app crashing made Mac developers more careful and aware of the importance of stable software.


I think you might have a bit too much nostalgia (or maybe we just had very different experiences), but I remember Mac OS 8/8.6 crashing a lot, with the dreaded bomb icon.

Everyone I knew with an iMac had a toothpick or paper clip in the top handle, due to how often they needed to press the reset hole.

At the time my father used to edit a local newspaper in PageMaker and it was particularly bad.


I don't know why people are downvoting you.

I went to a middle school that was "mac only" in the 90s. They were complete and utter crap.

They crashed constantly. The spinning pizza of death made a near hourly appearance.

We had two machines issued to each classroom, and most times the kids wouldn't even turn them on, they were so bad.


>Everyone I knew with an iMac had a toothpick or paper clip in the top handle, due to how often they needed to press the reset hole.

What kind of Mac were you running?

The straightened paperclip was only there to eject recalcitrant floppy disks (early on), and then recalcitrant CDs later

Restarting the machine was always a menu option or physical button


He said iMac - the G3 iMacs had reset/interrupt holes (see to the right of the Ethernet jack) https://i0.wp.com/sixcolors.com/wp-content/uploads/2020/12/B...


there are a LOT of iMac models

Never heard of someone needing to reset any of them with a paperclip before


Reset holes for paper clips? I can't remember any of my Macs having a reset hole like you describe. In any case, I never had to use one. There was a key combo for resetting the OS.


As someone doing web development in 2001-2003 and still testing stuff on OS 9... No. It crashed all the time in the office. Several different machines and all I did was open a few web pages and sometimes write down some notes.


That's why we got AutoRecovery as a feature! /s


Nearly spit out my coffee at this XD


The answer is that there is low ROI on quality and businesses are hyper-optimised for ROI. Only when customers lose their shit publicly does it affect ROI, and only then will quality improve.

We got into a situation where people are so disempowered by design, have such poor access to routes to complain, and are so demotivated to complain that poor quality is the norm.

If you want this to stop we have to collectively rip a new asshole in every half baked pile of muck out there loudly. Really loudly. Start burning up people’s ROI. Buy an app and it’s shit? Get a refund and then tell everyone everywhere exactly how bad it is.

When I do this I am repeatedly told that I'm negative and this is not a productive attitude, but I disagree and see this as a defence of the improper norm. The first step of quality is acknowledging you have a problem, which needs to be shouted in most companies' faces until they can hear through the fingers in their ears.


> When I do this I am repeatedly told that I'm negative and this is not a productive attitude, but I disagree and see this as a defence of the improper norm. The first step of quality is acknowledging you have a problem, which needs to be shouted in most companies' faces until they can hear through the fingers in their ears.

I don't think you're negative at all, but the "fingers in their ears" is based on a misunderstanding. Companies have their ears wide open and their customers are telling them -- mostly indirectly by voting with their wallet -- that they want quick and dirty features over quality. Your suggestion to punish bad quality would actually help these companies by removing those perverse incentives.


I'm not sure this is true. I think there is an ROI on quality, but the incentives are set up not to value it. The problem is that "quality" - even "performance" - is quite a nebulous, holistic property of the project as a whole. It doesn't fit neatly into a bullet-point of features which all link back to predefined OKRs for the product management team. It's also a nightmare to measure.

One thing I've been banging on about at web teams I work with is to measure the relationship between time to first contentful paint and propensity to buy/click/watch/spend time on site/do whatever the relevant value metric is. For the analytics tooling that teams usually have in place, that means crossing two completely disconnected toolsets: you've got the behavioural analytics tools, which are typically owned by someone else in the business, and then you've got the technical observability tools, which are usually owned by the development team, and marrying the two up can be astonishingly hard to make happen.
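
A minimal sketch of what wiring those two together could look like, assuming the open-source web-vitals library, TypeScript on the front end, and a made-up /analytics endpoint: the point is just that the paint metric and the behavioural event share a session id so someone can join them later.

  import { onFCP } from 'web-vitals';

  // One id shared by the performance event and the behavioural event,
  // so the two datasets can be joined downstream.
  const sessionId = crypto.randomUUID();

  function report(event: string, payload: Record<string, unknown>): void {
    // sendBeacon survives navigation/unload better than fetch for analytics.
    navigator.sendBeacon('/analytics', JSON.stringify({ sessionId, event, ...payload }));
  }

  // Observability side: first contentful paint, in milliseconds.
  onFCP((metric) => report('fcp', { value: metric.value }));

  // Behavioural side: call this from the checkout/click/watch flow.
  export function trackConversion(kind: string, valueCents = 0): void {
    report('conversion', { kind, valueCents });
  }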


"poor access to routes to complain"

this


Software was always terrible, it's just that it was smaller in scope and so the suckiness was more contained. Waterfall had its problems but a strong QA/QC phase did help to avoid the worst issues. Nowadays proper QA is too expensive and pushing updates too cheap, so you get telemetry and permanently half-broken software instead.

However, even worse is that the complexity of software has grown to absurd levels:

- Massive distributed microservice architectures for simple chat apps and news websites.

- Huge distributed teams that need to coordinate rapidly in an Agile environment (which was never meant for teams that size, which is why you have crap like SAFe).

- Team churn, because pay is lousy unless you change jobs every 2 years.

- Bad practices that are still taught as best practices to this day in university, e.g. OO-inheritance hierarchies.

- Terrible foundations: how do you make a native Windows UI again? The web grows at a rate that's impossible to keep up with, but even desktop apps build on it these days because the OS APIs are so lousy (even Apple is starting to crumble there).

- Sheer size of software with shorter time to market, leading to lots of open source code reuse, which is a big pile of code that can contain a big pile of bugs you probably can't do much about. I work on a piece of software that has thousands of dependencies if you count them transitively.

It's all just... too much, it's too hard to keep up and shortcuts are constantly taken (tech debt yada yada). There's good software out there but it tends to be small, focused, slow to evolve and developed by a small team.


Because it doesn’t matter, and none of this is a labor of love anymore.

Quality is a cost, and users don’t generally pay for the marginal value of a less buggy app, they pay for the massive value of the categorical problem being solved.

You sat there for two hours viewing a single directory, clearly making the page faster doesn’t mean you’ll use their service more, so why should they make it snappier?


> users don’t generally pay for the marginal value of a less buggy app

Do they get the choice to?

It's not like I can get a less buggy version of MS Windows by paying an extra $20.

And while many open source projects accept donations, and many enterprise products offer support contracts, it's not like $20 gets you a guaranteed bug fix.

I agree with your point that companies don't always have a good business case to produce quality software - but we can hardly blame customers for not buying things we aren't selling :)


> Do they get the choice to?

Yes. At least somewhat. There's a reason lots of competing products become popular, e.g. Google Sheets taking some of the popularity of Excel. New features like live sync actually matter. And to the extent that users actually care about quality, software with better quality will eventually also win.

I mean, I'm using an iPhone. I'm using a Mac. I'm paying a huge premium over someone on Android and Windows. I do it largely for the quality of the Apple ecosystem.

But yes, there are still problems. The world of products is hard. :)


With services you get a lot of lock-in/exclusives. Two examples are Discord and Netflix. I wouldn't say you have much of a choice if you want to join a popular place to chat or if you want to watch Stranger Things. The user experience doesn't matter so much if there's no other way to do something. This is what I think the person you were replying to means.

On that note, Google Sheets became popular because you can use it in your browser and because of Google Drive which is what locks you in. Sheets is much more open than my examples.


Of course $20 doesn't get you a bug fix, it costs a few orders of magnitude more to actually implement it.

For an open source project, consider what it would cost to hire a contractor to implement the fix and get it to a state where the fix would be accepted by the project.


Quite a bit more than $20, but the option to pay more for Apple products and get a better experience has always been there. I don't want to come across as a shill or anything, of course Apple has made blunders and many people have always been locked into Windows. But it does seem relevant, that despite network effects and lock-in, they've been able to provide an alternative ecosystem all these years primarily by having slightly-less-worse products that people are willing to pay a premium for.


If the customers are still buying it, it can't be that bad, can it? Yeah, I'm a bit sarcastic too, but I'm convinced the reasoning is similar - if it kinda works, throw it at people and move on to building the next shiny facade. Quality is simply not directly translatable to money, so we're living with this half-baked everything. And I'm not blaming the developers; they produce exactly what they're asked to and fix only the bugs they're assigned.


> If the customers are still buying it

In a lot of cases, the customers aren't buying it, it's "free" (the cost being not monetary but elsewhere) and no new entrants can compete with the giant piles of cash the incumbents are sitting on


You do get the choice, in the form of moving to another piece of ostensibly less buggy software or not using the service at all.


You’re not entirely wrong but I would push back a little and say that as a user, in a lot of situations I’m not given an actual choice and need to use a particular piece of software for one reason or another.


I've noticed this in the educational sector. It's technically not that hard to produce good software for it.

It has mainly settled at a local maximum. The software is good enough, money is tight, the priorities are different and procedures are bureaucratic.

Switching to new and better software means first making it a priority. The procedure for larger schools also means getting legal on board, writing an invitation to tender, etc.


Many reasons really. Some others have alluded in other comments:

* Fixing bugs isn't sexy. Constantly pushing new features is. Tech debt therefore isn't managed properly. I am leaving a job right now because of this.

* Problems with performance arise because developers frequently only test the application on their own machine and/or phone, and developers typically have nice hardware that will run the application quickly.

* Mobile applications only get tested in areas where there is a good signal and issues that occur when signal strength and/or bandwidth are poor aren't accounted for.

* Decisions on what gets prioritised are based entirely on user metrics collected in-app, which means anyone who blocks this reporting or doesn't fit neatly into the most common scenarios will be left out in the cold, with their issues de-prioritised and possibly never fixed.

* Tech debt at these companies is insane. Your app probably has a large client- and server-side solution that takes months for someone to become proficient enough to make even the most basic changes. There will be 500 lines in each switch statement block, classes that are 1000s of lines long, etc. This is due to tech debt not being managed and people just hacking to get stuff done to sprint deadlines.

* There will be 1000s of long standing bugs that can only be reproduced on particular devices (which may or may not affect you). Frequently devs might not even be able to get their hands on the device to actually fix said issue.

* I would wager in some cases the info-sec team will have ridiculous hang-ups about plugging a phone into a PC / Mac via USB to debug. Frequently issues will be debugged via archaic or jerry-rigged solutions which seem ridiculous to most freelance / non-corp devs.


Jonathan Blow has interesting ideas and views to share.

Please watch his presentation [Preventing the Collapse of Civilization (1 hr)], which goes into detail on why what you observe is happening.

https://www.youtube.com/watch?v=ZSRHeXYDLko

Edit: Corrected the uppercase spelling of person's name.


In a similar vein Casey Muratori's The Thirty Million Line Problem is also quite good. https://www.youtube.com/watch?v=kZRE7HIO3vk

He also often collaborates with Jonathan Blow.


I saw this years ago and it's really appealing. I still don't understand if the thesis here is true (are things really getting worse or are we just seeing software of the past through rose tinted glasses). But it feels like we could do things much better with better incentives and if we aligned our efforts much better across the industry.

It feels like it could be a possible scenario that we otherwise end up in a situation where software in general become unmaintainable in practice.


For those that enjoyed the video you linked, Joe Armstrong's "The Mess We're In" talk could be interesting as well https://www.youtube.com/watch?v=lKXe3HUG2l4


I second this, this video is amazing. It had a big impact on my thinking the first time I saw it


This is not a new phenomenon. As early as the late nineteen sixties, people were debating the so-called software crisis and arguing how to improve the perceived lack of quality.

The simple answer is that a lot of software is simply good enough. It doesn't have to be perfect under all circumstances. And making it much better than good enough has diminishing returns because it requires non-trivial investments in quality.

In this case, you pushed an app out of its comfort zone and ran into some issues. Simple suggestion, use something more suitable for what you are trying to do and accept that this maybe isn't what this specific thing was designed to do or even do well.


In a nutshell, this is what I'm going with these days: Compositionality failure.

Software is built without thinking about whether it will compose. How it will compose.

The elements that make things compose go by the strange word compositionality: the "laws" that, if followed, make things composable; the properties that things can have to make them composable.

An API is more composable than a stateful mishmash of library functions. Now let's imagine that underneath the API there is a core data structure that has its properties and relationships clearly and usefully defined. Can we automatically generate the API code for it? We should be able to, at least to a significant degree. Then that's more composable. More compositional.
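
A toy TypeScript sketch of that last idea, with all names invented for illustration: if the core data structure is declared once with its properties spelled out, a conventional API surface can be derived from it mechanically instead of being hand-written and drifting out of sync.

  type FieldType = 'string' | 'number' | 'boolean';

  interface EntitySchema {
    name: string;
    fields: Record<string, FieldType>;
  }

  // Derive a conventional REST surface purely from the schema.
  function generateRoutes(schema: EntitySchema): string[] {
    const base = `/${schema.name.toLowerCase()}s`;
    const body = Object.entries(schema.fields)
      .map(([field, type]) => `${field}: ${type}`)
      .join(', ');
    return [
      `GET    ${base}`,
      `GET    ${base}/:id`,
      `POST   ${base}   body: { ${body} }`,
      `PUT    ${base}/:id   body: { ${body} }`,
      `DELETE ${base}/:id`,
    ];
  }

  const photo: EntitySchema = {
    name: 'Photo',
    fields: { title: 'string', sizeBytes: 'number', shared: 'boolean' },
  };

  console.log(generateRoutes(photo).join('\n'));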


You all must've experienced a different past than I did. Sure, there were exceptions, but software in the 80s (lol), 90s, 00s, and 10s was generally slower, buggier, and uglier than what we have now. We're talking minutes to start some programs; sometimes a program just stopped working and you couldn't use it on that computer ever again without reinstalling your OS. Software regularly crashed - like once an hour for Word. Sure, there were some fast programs, but they often did very little or lacked any modern conveniences. Global, fast search basically didn't exist in most programs. Rendering issues all over the place. "Oops, that file you were working on is now permanently corrupted!" was not some bizarre occurrence, but something you regularly prepared for by making copies of your files as you worked.

I think this is similar to video games - people see a pixelated edge in a game and think it is crap, precisely because such glitches rarely happen anymore.

Now if you remove "modern" from the question, and just ask how you explain the sloppiness of software, then all the same answers come back up. Today, where software development feels about 90% gluing stuff together, it's because there is so much stuff out there we can't even be aware of it anymore.

It used to be that a person could be well read on nearly everything (100+ years ago); then you could do so within your field and be aware of everything else; then, for most of the 20th century, you couldn't know everything in your field, but you could mostly know it and be aware of everything else. Sometime about 10 years ago we hit an inflection point, where you can no longer even be aware of everything in your specialty. If you are a database "specialist", you can't even know of all the databases that exist, let alone understand them.

So weird, and so impactful to us and our society.

Anyway, old man rant over.


Software developers tend to think there's something special about our field, something inherently different.

There isn't.

Building products is hard. Building complex products, like software tends to be, is incredibly hard.

If you ever want to read about how terrible pretty much everything is when it comes to design, read "The Design of Everyday Things". It's a classic for a reason. It shows how so many products that people rely on every day are just terribly designed, even something as simple as a door or a remote control.

Software isn't uniquely bad. It's just regular old bad. Incentives are, like in every field, to make money. Up to a certain level quality is hugely important, and after that level it isn't important (for making money).

I mean, the question isn't actually would you want this or that bug fixed or not. It's would you give up other pieces of software that are important to you, so that some bug that most likely doesn't affect you is fixed. Because that's the tradeoff. More quality = less products, because to a large extent it's the same workforce.

And btw, modern software is way better than most older software. Way more features, and largely more stable.


My personal theory is that the demand for IT workers in general (devs, QA, "architects", program managers, ops, sysadmins... etc.) is so damn high relative to supply that our industry attracts and retains large quantities of people who just plain shouldn't be here.

Witness the distressingly high number of devs that fizzbuzz catches out.

I don't know if they are in the wrong industry / career, are just generally incompetent, or could be good and just don't care, but it seems like the percentage of incompetents is rising.


Companies trying to lower wages: “Everybody should code”.

Now everybody does.


Yes, that seems to be the case; there are also so many new entrants that the majority of developers actually have very few years of experience.


Many companies expect devs to be all those now. Except program managers.


If you think this is exclusive to software, you're deluding yourself. We live in a very aggressively profit-seeking society. If it will produce a short-term profit, well and good. If not, then it's not going to be prioritized. Those of us on HN see this phenomenon reflected through software, but look at literally any other facet of modern society and you will notice the same thing.


> It is an app written by one of the largest and most ancient software companies in history, so they should know something about making apps.

I think tradition and collective experience matter very little with the issues you're describing. It is hard to form and keep a team that is both technically excellent and will make sensible UI choices.

Additionally, I don't know if this is true, but I get the feeling that pulling "product" and "UX" into totally separate professions has meant that ownership of the overall quality is now theoretically in the hands of people who can't ensure it. A similar idea to the principal-agent problem, although in reverse.


> I get the feeling that pulling "product" and "UX" into totally separate professions has meant that ownership of the overall quality is now theoretically in the hands of people who can't ensure it.

Of all the answers proposed in this entire thread, this is the one I agree with the most. You put into words exactly what I’ve been thinking for years.

The fundamental problem is the lack of accountability for the product owner.

Consider the film industry: A reputable film director owns the film in the sense that their reputation and career depends on the quality of the final product. They still hire/outsource massive swaths of the work needed to produce a movie, but they are the person held ultimately responsible.

Contrast that with some well-known software like Microsoft Teams. Teams is probably the worst piece of shit software I’ve used in 30 years. Who the fuck owns Teams? Not Satya. I have no idea. I bet no one at Microsoft can answer that question. The answer is probably nobody at all.


> They still hire/outsource massive swaths of the work needed to produce a movie, but they are the person held ultimately responsible.

I sort of wonder whether software engineering is just weird, in the sense that in theory a director could probably do most of the jobs in a movie production, e.g. point a camera, select cast members, acting, set construction. They would do them badly, but they would have some idea of what is involved and what they want.

Software is so iceberg-like that such equivalent direction seems impossible without a technical background.


Software now is too complicated. Too many things are taken for granted. Methods like agile / scrum often force developers to do things a certain way to keep up the impression of speed. There's rarely time to think carefully.


Software development seems very vulnerable to cargo culting. People are not choosing the tools they need and the architectures of their applications based on what technically makes sense, but on what is popular. One could start to suspect that the requirements the customer actually has have no influence on what architecture is chosen...


Yeah, cargo culting, with the majority of developers having not many years of experience and doing resume-driven development.


As both a military officer and a senior JavaScript developer for a Fortune 50 company, I'd say the problem, industry-wide, in this slice of software is weak leadership completely opposed to quality in institutional terms.

Institutionalized quality would include:

* Uniform minimum accepted standards of practice.

* Uniform foundations of minimum accepted product quality.

* Uniform accepted measures and metrics.

* Formal training dedicated to standards and software as a platform, not tools.

* Credentials to certify standards of practice and conformance to ethics therein.

Instead children are left to drive the bus and cry about how hard life is. Anytime this subject comes up there appears to be no adults in the room.


This is an under-rated comment. The industry as a whole lacks discipline. Nobody wants to do things that are "boring".

I've seen some movement in some aspects - I no longer have to fight for introducing automated testing - however this is mostly cargo-culting, which means I have to fight for doing things in a relevant way instead of accepting copy/pasting from Stack Overflow.


My rule of thumb is that the quality of software is inversely proportional to the distance between the user and the programmer. If the programmer is in direct contact with the user you generally get good software. If there are layers and layers of people in between user and programmer, the software gets progressively worse.


I tend to agree, but I wonder if that's less about customer-advocacy and more about layers and layers of business and internal noise that muddy the waters.


Talking about OneDrive-type apps - let's discuss Dropbox. Whose idea was it to limit file revisions - even as a paid subscriber - to 30 days (or whatever it is)? This essentially makes one of the best features of Dropbox (sharing files and not having to create "v2 final final final.doc" copies) useless unless you're on the highest tier. Well, they lost my business pretty quickly after that.


Oh there's many reasons for sloppy software but I wouldn't call it all bad. There's plenty of good software around but of course people generally point at the faults first.

I'd say the top 2 reasons are tied to either adversity in software engineering or simple economics. Devs are pressured to work fast, overtime, in understaffed teams, prioritizing features over quality. Once a project becomes old enough, changes suddenly become much more expensive and issues start getting lost at the bottom of the backlog. Large projects are chock full of tech debt, bad design decisions that used to be good at the time of writing, undocumented features, untested edge cases. IT workers constantly changing jobs means knowledge slowly evaporates. And let's not forget how fast tech becomes legacy and all the associated difficulties of working with old tech.

The economics part is simple. Businesses prioritize profit. If people can live with a few quirks they'll stay there indefinitely. If a company gets X cash for a project they'll maximize the profit by doing as little as possible while getting it done. If a product isn't really making that much profit it goes into maintenance mode, sacrificing quality for a shiny new thing that will hopefully be more profitable. Projects in maintenance mode only fix what's being paid for, and that's the bare necessities.


We saw another fun anecdote illustrating what happens here on HN recently. It was an unnamed MS engineer who explained that the issue with post-Win7 design at Microsoft was that even though the actual programmer engineers _building_ it knew which way north was, the design laid out for them was dictated by Designers who themselves would only ever touch Macs (if they touched any computer at all, not to besmirch their design genius), and would not lower themselves to consider how LEGACY WINDOWS (i.e. Windows from 6 months prior) actually worked - which would anyway be IRRELEVANT, as they were responsible for designing the FUTURE, and you don't do that by looking at the PAST.

And this is how we end up with the world's most successful desktop OS having "title bars that can no longer be used to drag the window", menu bars consisting of ALLCAPS to hint it might possibly be a menu, a start menu that you can no longer browse, where your installed programs never show up or disappear, and where central OS windows no longer follow basic UI guidelines (goodbye incremental keyboard search, and the ability to sort and resize list columns), and hello scrolling lists without scrollbars, and goodbye any indicator of how big this scrolling list is and whether there are further items below to scroll into view. It is telling that Microsoft has not been able to produce a viable GUI framework/toolkit since 2006. Interestingly, WPF still sports 'the worst folder picker in the world', even though Forms has offered an excellent and superior folder picker for around 20-22 years by now.


These stories are such red meat for engineers that I don't know how trustworthy they are. "The engineers are misunderstood geniuses who aren't allowed to do the right thing by Designers/PMs/Management" is just such an incredibly easy story.


Just wanted to vent that, for whatever reason, after about half an hour, let's say one hour, of constant chatting on Google Hangouts the whole interface starts to lag; inputting text certainly "develops" a lag, it becomes very annoying, and oftentimes I have to close and re-open the browser instance just to kind of "reset" the whole thing.

I'm on a Mac Mini from 2018 (I think) with 16GB of memory, on the latest version of Chrome, and I still can't believe Google "bug-regressed" on something as basic as online chatting through a browser. I was able to chat in a Netscape 4 instance on a 5x86 (or similar) more than 20 years ago with no such inconveniences.


The question: are you still using OneDrive? Have you switched to something competing?

Would you be willing to pay more - or pay at all - for a better solution? Or write own software?

People overall are accepting tragically user-hostile software. How many stayed on Windows after it got ads in the start menu?

-----------------

In case of OneDrive following things can happen:

- people escape to superior competing solutions

- someone spots that no one provides a high quality solution, makes their own and earns money on that

- people accept low quality

- OneDrive gets improved

(multiple can happen for multiple groups of people)

-----------------

How much are you willing to pay for cloud storage?

How much are you willing to pay for a high quality search engine without ads? (I would say that 20-50 euro per month may be viable for me if the search quality is noticeably better than Google - I am aware of Kagi but not sure if it actually gives better results)

How much are you willing to pay for open-data navigation that doesn't track you? (in my case I put significant effort into OpenStreetMap)

Note that for many people the answer to all of the above is "nothing at all", so services provided to these people care about stuffing in as many ads as possible rather than about quality.

-----------------

Also, software in many aspects is strictly superior to what was available in the past.


I was thinking about all of this while writing my question. I agree: I can just switch to another app. IF I find a better app - but can you find such a better app at all? I think that my question is more about this (which resonates in many other comments): in general, the incentives given to developers by companies / society do not align with better quality software. On the contrary: quality costs time and money, and it feels like it's not motivated financially.

Still, I think that this is somewhat corrupted. A beautiful, comfortable, durable chair is better than an ugly chair that breaks after 1 year. Still, the ugly chair will probably have a way larger financial margin. Guess which one companies are going to produce?

I guess it's yet another demonstration that no matter how perfect a system is, it won't set people free from morals, from taking their own responsibility to make the world a better place. Beauty and good will always need to be fought for.


In many cases there is a better app, but we still can't use it because of other reasons.

Slack or Teams? I've never heard anybody say they prefer Teams, yet it is used everywhere. Probably because of lower direct costs and because it integrates with all the other Microsoft stuff and Active Directory automatically. Maybe if we could measure the happiness and productivity costs it would turn out to be much more expensive.

WhatsApp or Signal? If you value privacy Signal is definitely better, but if none of your friends is willing to use it that doesn't matter a lot.


> but can you find such a better app at all

In this case - probably yes, and even making own one is feasible.

It is not as bad a case as with mobile OSes, where making your own is infeasible and all the existing ones are terrible. You can only select a flavour of terribleness.

> in general, the incentives given to developers by companies / society does not align with better quality software

Exactly

> Still, probably the ugly chair will have a way larger financial margin. Guess what companies are going to produce?

Ideally there would be both companies producing cheap ones (IKEA) and ones producing masterpieces (a carpenter making one-off furniture on request), and everything in between.

> Beauty and good will always need to be fought for.

++


Alright I guess I'll take the other side of this.

Most of the software I use is great. Absolutely, mindblowingly great. This is in the face of seriously hard challenges with physics and mathematics. I get that you're having some weird troubles with Google Drive, that I've personally never had, but when I sit back and think about just how much software I use I'm astonished that I'm not frustrated more often. I send messages, they get delivered. I take a photo, it has crazy high quality for the size of the lens and sensor. I google a python question, I get an answer. I look for music, it's there ready for me to listen to it.

You have high standards, I get that, but software is hard. It moves so fast and even seemingly simple things are way harder than first glance. I don't know why the edge case you hit was so bad, but in general the GSuite has been great for me. It's always possible it was something weird like a browser extension.

Anyway, what I'm trying to say is that so much of software works well that you may have stopped appreciating it. I try not to forget what computers were like twenty years ago. It was madness.


I agree, this might be the case of negative bias. We don't think about all the software that "just works."

I just had a video call with someone halfway across the world, but got bored, so I pressed the home key on my android so it became a windowed call so I could scroll through Instagram instead. All on a $500 phone.


Move fast and break things. The CI/CD model of delivery means that engineers are encouraged to deliver functionality and polish things only if they are critical. The consequence is that people tend to reuse solutions to intermediary problems no matter whether those solutions are optimal. This accumulates technical debt that nobody really wants to repay because repaying it is not a solution to a business requirement.

Additionally, Moore's law and the well-paid jobs of developers mean that the machines where the code is written are not the machines where it is run, with the difference often being a few generations of processors, graphics cards, and monitors.


The hiring process is bad. Real skills get rejected while leetcode gets a job.


Empathy is the key in my experience so far.

No amount of discipline and techno trickery will get you to a happy user if you lack the capacity to feel what they may be feeling while using your software.

In most large organizations the breakdown is obvious. Too many layers. Opportunity to engage empathically is lost to some twitter support bots. Everyone on the inside is reduced to seeing some polished interior of a mindless corporate automaton.

Steve Jobs is an excellent example of how to cut through this problem in a large org. He reinjected the actual feelings of the end user from the top. Few other corporate leaders do this today.


I think one reason is failure to embrace simple, open standards. The railways worked because, despite essentially infinite choices of track gauge, they picked standard gauge and stuck with it. Is it the perfect gauge? Probably not. But it works well enough and having a single standard gauge is far more valuable than optimising the gauge for each new line.

In modern software, integration wins. Look at MS Teams. Complete shite. A buggy, bloated mess. But it wins because it integrates many different services together. Be honest, every time you have done any kind of integration between two pieces of software there have been hacks. But appealing to standards never works. Instead you are just told to make it work. Those who make it work are rewarded. Those who appeal to standards are sacked. You've basically hacked your train wheels to make them work on dodgy track and then wonder why the train derailed.

It's not all bad. From the TCP layer down we have all agreed to a few standards that work. It's all the stuff above it that breaks.


Teams wins because subscribing to Office means subscribing to Teams.


Yesterday I had exactly the same thoughts: I had to RESTART a Windows PC in order to print a Word document because the driver refused to recognise that there was actually paper in the printer after all. I'd be ashamed to ship stuff with such obvious failings.

My other gripe is how, with increasing screen resolutions, UIs are eating more space totally unnecessarily. That Word print dialogue box is a good example - it must use 1/3 of a 4k screen's real estate only to display a dialogue with a button that is confusing, as it isn't consistent with Windows' overall UI.


In general there are a lot of factors; in the example you give, I would point out that it relates to cargo cult followers of poor summarizations of best practices.

TDD/BDD proposes a world where all things work and remain working when investigated by an automated test. The automated test is built up step by step, happy to run everything from start to finish to look at the one feature, and the feature may remain and continue to get repairs long after it stops making any sense in a flow, with very little observation of how broken it might be in natural use.


At least where I work, the only thing project managers care about is releasing the next big feature for their resume or next promo packet. Making it work well or be well tested is very low on the priorities. In fact, if you time budget that in as an engineer, they'll find someone else to implement it who will estimate half the time.


> This is a recurring theme on HN, so I think you all have very good opinions on this topic: why does modern software seem so unpolished, slow, bloated, unprofessional?

Major software companies rediscovered that users will tolerate crap, so they've adopted processes and mindsets that serve the company at the expense of the user.

Also, in many ways the web sucks compared to native software, and the modern impulse is to web-all-the-things. So what was once a relatively slim desktop app is now some bloated website in a can.

The mean is now some shitty website, or a shitty website in a can. And no matter how good a software company starts out, it's run by businessmen and most of them will push it back towards the mean ("Why are we developing this fast, slim native app? We can reduce duplication and cost by shipping an Electron app that just repackages our web version.").


Cause sadly KR > O

Management is by now mostly "OKR: Objective Key Result" driven.

But the performance review is mostly focused on the KR, as it's designed for measurability; the O is an afterthought.

It should be the other way round. It isn't. By now I quite often (when the company works with OKRs but the quality that comes out product-wise sucks over multiple time periods) recommend trashing the KRs completely and focusing only on the O.


Or make quality one of your KRs.


And then comes the demand for "how do we make this measurable", and it gets measured with inane shit like linter warnings, or maybe something that makes sense like bug reports; but then the incentive is to start clamping down by marking things as not-a-bug rather than improving the software.


No one in the comments mentions complexity.

The complexity of the interactions you described is orders of magnitude beyond anything software in, say, the 80s had to deal with. Just imagine how many technologies you are using to display photos from "somewhere in the cloud" on your TV.

Now we can discuss how much is essential and accidental complexity, what alternatives are there, and whether the tradeoffs are worth it.


> Just imagine how many technologies you are using

Indeed. I weep every time I see how many layers of layers today's software is built upon.


A properly installed Ubuntu on a fairly modern laptop, with i3, firefox, xterm, webstorm/sublime text and VLC for the occasional video works like a breeze. Rarely use more than a few gigs of RAM even when multitasking on multiple projects. Runs for weeks. Never crashes.

Compared to the Windows Millennium of yore with its blue screens every now and then, Visual which would take forever to load, and Firefox which would crash miserably every few hours.

I don't deny your experience but this is just anecdata. The whole tendency of software is to get more reliable over time. Even if the complexity can set reliability back sometimes.


I don't get the random hate for Visual Studio Code on HN, it runs perfectly fine and is super fast. Are there genuinely people who think the 1 second it takes for VSC to load is too slow compared to the 0.75s it takes Sublime to load? (Made up numbers)

Unless you meant Visual Studio, but that wouldn't be a fair example at all.


Dude I am talking about Visual Studio 6.0, 25 years ago !


I suspect it’s the tools, frameworks, and the way the majority of software developers are taught. And also the computers we develop on.

The tools and frameworks add bloat. On some level we do this because the complexity of what we're dealing with gets in the way of the problem we're trying to solve. We add a layer of abstraction to hide the details and the solution becomes tractable.

We were supposed to be able to have our cake and eat it too, but it turns out that a lot of these abstractions aren't free in terms of performance.

The other factor is that a lot of developer tooling is designed for developer convenience. This is nice when you want to try out an idea. But the prototype often becomes the product. It’s nicer to work with, perhaps easier to add features to it, but that trade off is still there: performance.

And we're taught to feel guilty or told we're doing it wrong if we show any concern about performance. I've seen developers accused of the dreaded "premature optimization" sin for making suggestions in PRs to avoid code that would introduce poor performance. Or for suggesting anything performance related.

Lastly, a lot of developers get to work on the latest and greatest hardware. They probably don't spend any time or effort testing on older or low-tier hardware. This leads to designs that are "good enough" on these machines but will be slow as molasses on anything an average consumer would use. There's a highly myopic view about platforms, and lower-end hardware is often not even considered.


There is monetary value in being first and rolling out new features that generate news, which attracts new users. Once a user is hooked, it's more difficult for them to leave. Also, it's a lot easier now to issue a patch/update over the internet instead of sending out CDs etc. There really isn't a good reason to be as bulletproof as possible as there used to be, unfortunately. This is especially true the bigger a company gets; nobody is leaving Apple/Google over a bug.


It is a lot easier to ship a patch, yet it's still not done, even for very basic pieces of software. Think about the dumpster fire called printer drivers, be it HP or Epson, or the entire print spooler concept of Windows. I would be ashamed to put such garbage on my CV, yet it all lives on, and has for decades.


imagine, in a world where there wasn't a concept of clean code but to write code as simply as possible. not having thousands of classes with indirection. what if your code logic was simple functions and a bunch of if statements. not clever right, but it would work. what if your hiring process was not optimizing for algorithm efficiency but that something simply works reliably. imagine a world where the tooling used by software engineers wasn't fragile but simple to use and learn. oh the world would be a wonderful place, but the thing is most people don't know how to craft software. but here we're building software on a house of cards


Culture of the product company.

That is, move fast and break things (cause it works for them) vs let's get this right (else we'll be targeted on HN).

In the case of Google, they don't really have an incentive to make better products. The products are simply a smokescreen for their search / advertising monopoly.

Furthermore, you and your OneDrive, Cast, etc. are more of a source of data to be harvested than a means of bringing joy and satisfaction into your life.


The internet brought with it constant updates, and with that, we have now entered the age of planned obsolescence. Now the majority of users with spending power are forced to upgrade to the latest devices, which has removed the hardware constraints of old. With little to no hardware constraints on both the user and developer end, performance optimization has now taken a back seat in the list of priorities for shipping product.


The extra cost to produce high-quality software is not offset by higher profits from that software. If low-quality software generates 90% of the revenue for 30% of the cost, why bother with that extra work? It makes sense from the management's perspective, however frustrating it is from the user's end.

I face the same thing. I'm locked into using a particular GIS software package for my work, supposedly a central system accessible from any device. However, in practice they have 3 different apps and 2 different web interfaces, none of which have the same feature set. And often I want to use data that's accessible on one of their platforms to do processing that's only available on another of their platforms. And there are backend problems for which I've periodically been sending in support/suggestion tickets for a decade without any fixes. However, their marketing feeds are constantly bragging about new features and integrations. Why? Fixing their broken shit is harder work for less payoff. So they just don't. And they've got the patents, so they don't have to care.


Short answer: features above all. Yes, modern software and technology in general are irrevocably broken and will only get worse. The good news is, they are not essential to our everyday lives… right?


HDMI ARC being out of sync etc etc


In the US, I can get paid very well to do mediocre work. Despite our buggy software, we keep raising more money and our valuation increases. PMs and managers get paid more if they deliver new features. At some point the company will get acquired or fail. Either way, the code will be gone and I'll be at another job.

Why bother caring?


It is dead simple, the sloppiness of the software does not affect its ability to generate money for shareholders. That is all that matters for those in charge. Ship things fast; explain that you'll fix stuff with frequent updates; each update breaks something else; explain fixes are incoming; keep making money.


User resources: time, computer CPU and memory, user network bandwidth, etc. are free for the company. Why care?

In free software, developers are users, so they care. The software is free, but developer-as-user time is not free, so it's better to spend 1 hour once to save 1 minute every time in the future.

In commercial software, developers are not users, so they don't care. The software is paid for, so ROI is important, thus it's better not to spend 100 hours of developer time plus 1 hour of upkeep every month to save 1 minute of user time, unless it's important to crush a competitor.

Commercial producers quadruple prices until users stop buying, drop quality until users stop buying, stuff ads in until users stop buying, shrinkflate until users stop buying, and so on.

Did you switch to a competitor product? No? Then why should the company care? Vote with your money.


Monopolies, Linux and race to the bottom.

Monopolies like Microsoft, Google, Apple for stuff that has no real competition, so the manufacturers have no incentive to polish their products.

Linux for products made by volunteers who work when they can, as much as they can, and have the attitude "you get what you paid for, so don't complain too much". It is totally fair, but it leads to sloppy software.

Race to the bottom, like our MES supplier that fired most of their US based people and hired juniors in India; the quality went downhill, but the lack of better alternatives (again ... monopolies) makes it good for them, financially, for a while. Their CEO gets the bonus for the savings this year, not for the death of the company 5 years later.

EDIT: adding Internet, the way you can ship shitty software that you may partially fix later.


Except that the quality of open source software is usually much higher than that of its commercial counterparts.


The public cloud trend and growing development and deployment stack complexity consume large portions of developer productivity that could go to quality and polish. We are expected to release features in limited timeframes, and if we also have to battle what the cloud providers have been pushing in the form of hundreds of services and a multitude of stacks, or on the frontend understand the dependencies between thousands of packages to gauge the impact of a particular change, then we eat up time that we could've spent on quality. Unless of course there is a dedicated QA team that is responsible for testing - but that's obviously too expensive, and it has drawbacks of its own, as it can introduce the wrong mentality (outsourcing caring about quality to another department).


Software is shit because we (programmers) don't need to do better. Doing anything is not just about doing that thing, it's also a kind of optimization problem, quality being just one of the variables. In software, writing whatever on fast machines and huge abstract frameworks is fast, and gets the job done, which keeps the income flowing, for both the developers and their product managers, managers, etc. The annoyance, or actual grief, of the end user is secondary to many other aspects.

This is what ended up pushing me into the open source / free software world. As long as I need to deal with shitty software, I'd rather deal with the annoyance of not being cared about, than with the dark pattern galore of the proprietary world.


Incentives are pretty much always to push out software as fast as possible.

If you're the one guy who takes twice as long to write code, even if it works really well and needs little maintenance, you'll be the one who doesn't get bonus / gets laid off.


Easy. Management decides to use new tech without giving devs time to learn it; the new tech has huge blind spots making it impossible to do anything serious beyond Hello World; devs need time to figure out how to work around the new tech's limitations, and no such time is ever given to anyone; a management-mandated culture where the dev who closes the ticket first is praised even though that means he skipped almost all the edge cases; product owners and managers lead the way but have no clue about UX or dev - the blind leading the way.


I've worked only once in a decade with real QA Analysts. It was a relief as a software developer.

It's really a hard sell. I feel uncomfortable and I even try to do more testing by myself, but I feel pressure to deliver more features and scrum points rather than polish some corner cases.

People do point out to me the Pareto distribution (capture 80% of the ROI, forget the 20% that has less value and costs a lot), but it's stupid when your service scales to high thousands or millions of users and the chance of edge cases popping up increases.


Ah QA Analyst, the person no one wants to be but everyone wants on their team.


By the current economic model.

Is the food in mass-market fast food quality food? Are mass-produced electronic gadgets and toys of good quality? So on and so forth. The most money is to be made in bringing mass quantities of shiny crap to the market.

Educated clients - the lack thereof. As opposed to the past, where most software was for professional use, these days the software that brings the most bucks is for mass consumption. And it's not very long-lived either; people get bored fast.

As the economy changes consuming habits will change too, I think.


- Open plan offices and presenteeism culture "you're working if you're sitting in an office".

- The industry disincentivises people to stay long at any one job, encouraging loss of institutional knowledge, and discouraging possibilities of fixing issues that might require fixing over a long time period (years).

- Sales-led growth is a double-edged sword: you sell a piece of software without the features existing, which means you might have an impossible timeframe to complete those features without cutting corners.


In my opinion there are two main reasons:

1. Parallel programming is hard: Everybody programs for multiple cores now, but this makes the program architecture much more complicated and dealing with errors much harder. You have all kinds of threads or green threads doing things in the background, each of which may fail in various ways, and you have to deal with this asynchronous command execution somehow. The indeterminacy of such processes is the cause of unexpected behavior that makes an application look unprofessional. As a drastic example (sketched in code after this list), you press a button, it starts doing something, and then shows a spinning wheel of death until an automated timeout is met after 2 minutes.

2. GUI frameworks are much worse today than twenty years ago: For example, web application frameworks consist of layers of different programming languages and document formats (JS, HTML, CSS, frameworks on top of that, and maybe another programming language connecting with all this mess) running in a web browser (or some kind of half-baked emulation of a web browser!). It's a wonder these work at all. Desktop and mobile frameworks not based on web technology are often thin layers on top of SDL nowadays. This means they reinvent everything, and it's very, very hard to do this correctly. Even native OS controls/widgets often have problems, and these have been fine-tuned by Apple and Microsoft for decades. Your multiline edit field behaves weird? That's the reason why. Write your own multiline rich text editor with image support and you'll see why this is hard.
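
To make point 1 concrete, here is a minimal TypeScript sketch of the "spinning wheel of death" failure mode and one way around it. Everything here (loadPhotos, the UI helpers, the 10 second budget) is hypothetical stand-in code, not any real framework API:

    // Hypothetical stand-ins for a real app's UI and backend.
    const showSpinner = () => console.log("spinner on");
    const hideSpinner = () => console.log("spinner off");
    const showError = (msg: string) => console.log("error:", msg);
    const render = (photos: string[]) => console.log(`rendering ${photos.length} photos`);
    const loadPhotos = (): Promise<string[]> =>
      new Promise(resolve => setTimeout(() => resolve(["a.jpg", "b.jpg"]), 500));

    // The sloppy version: if loadPhotos() rejects or simply hangs,
    // nothing ever hides the spinner or tells the user anything.
    function onOpenSloppy(): void {
      showSpinner();
      loadPhotos().then(photos => {
        hideSpinner();
        render(photos);
      }); // rejection is silently swallowed; the spinner spins forever
    }

    // A version that races the request against an explicit timeout
    // and always clears the spinner, whatever happens.
    async function onOpen(): Promise<void> {
      showSpinner();
      let timer: ReturnType<typeof setTimeout> | undefined;
      const timeout = new Promise<never>((_, reject) => {
        timer = setTimeout(() => reject(new Error("timed out after 10s")), 10_000);
      });
      try {
        const photos = await Promise.race([loadPhotos(), timeout]);
        render(photos);
      } catch (err) {
        showError(`could not load photos: ${(err as Error).message}`);
      } finally {
        clearTimeout(timer);
        hideSpinner();
      }
    }

    onOpen();

None of this is hard, but every background task in an app needs this kind of explicit failure path, and that is exactly the work that tends to get skipped.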

In terms of QA, it seems that once native user interfaces are given up, an "anything goes" mentality becomes prevalent. Maybe it's because companies think they compete with web apps (=even worse user interface) rather than native apps.

Tl;dr Programmers have been piling crap on top of crap for two decades now, and if you combine that with parallel programming and connections between multiple layers, it's going to be fragile and error-prone.


> GUI frameworks are much worse today than twenty years ago

GUI and Web frameworks seem to change too quickly to make it worth the while to study them. Despite advances in software engineering elsewhere, frameworks in these two areas have become more complex instead of more simple.

Rapid pace of change, technical debt, proliferation of options even for one language.

The good news is there's plenty of work to be done here to clean things up. The bad news is this isn't something that one can do alone (as much as I like Linux, on the desktop applications lack uniformity much more than on MacOS X or Windows).

The solution? Smart app developers build the business logic as API and try to maximally decouple the UI from the business logic. But this doesn't work well for all kinds of applications: e.g. it's hard to do for a WYSIWYG word processor, whereas it may be very easy for an image format converter.
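
As a minimal sketch of that split, using the image format converter example: the core is a pure function with no knowledge of whatever UI drives it, and the UI layers only translate to and from that API. (The names and types here are made up for illustration; a real conversion would call into an imaging library.)

    // Business logic: pure, UI-agnostic, easy to test.
    interface ConvertRequest {
      srcPath: string;
      targetFormat: "png" | "jpeg" | "webp";
      quality?: number; // 1-100, only meaningful for lossy formats
    }

    type ConvertPlan =
      | { ok: true; outPath: string }
      | { ok: false; error: string };

    function planConversion(req: ConvertRequest): ConvertPlan {
      if (req.quality !== undefined && (req.quality < 1 || req.quality > 100)) {
        return { ok: false, error: "quality must be between 1 and 100" };
      }
      const outPath = req.srcPath.replace(/\.[^.\/\\]+$/, "") + "." + req.targetFormat;
      return { ok: true, outPath };
    }

    // A thin CLI front end; a GUI front end would call the same function.
    function cliMain(argv: string[]): void {
      const plan = planConversion({ srcPath: argv[0], targetFormat: "webp" });
      console.log(plan.ok ? `would write ${plan.outPath}` : `error: ${plan.error}`);
    }

    cliMain(["holiday/IMG_0001.jpeg"]);

The payoff is that planConversion can be unit tested and reused from a CLI, a desktop GUI or a web front end without dragging any UI along.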


> Are the incentives given to developers so at odd with app quality?

Yes.

To expand, as others have said QA is a cost centre, and they often reported things it wasn't clear that users would actually view as a problem.

By cutting that out you get two theoretical benefits:

1. The people actually using your software will be the ones telling you where the problems are. This helps you prioritize work.

2. You don't spend as much money on QA people or dev time on fixes, which means more on feature work.

Maybe a not so obvious side effect of agile development is that it sort of reorganized software development into feature-focused development, meaning developers' bonuses at a lot of companies are often tied to the number of new features they implement, and there's almost no focus on maintenance or fixes.

The same thing happens in politics: The incentive is on shiny new things, rather than maintaining the old things everyone uses constantly and desperately need the repairs. It isn't until bridges start literally falling down that tunes change on this.

Basically, if your performance is measured by how many new features you're able to deliver on time, to timelines set by a manager or executive who also has a bonus tied to how many new features or products their teams were able to deliver in a quarter, you're not going to think about QA at all if you can help it.


Here are some reasons I can think of that impact software quality:

- Profitability prioritized over user experience

- Design prioritized over functionality

- Never-ending push towards new (mostly useless) features

- Poor product specification for new features

- Insufficient involvement from customers/users in the Product life cycle

- Technical team not involved in Product decisions

- Lack of investment to reduce overall technical debt

- Disregard for system architecture and coding conventions

- Deficient onboarding/training of new developers

- Desire to adopt new tools/frameworks instead of proven/established ones


Software is bad because it's good enough. Management doesn't care if it's not optimal. There's always something else that needs attention.


The adage for software is “fast, cheap, good - pick two”, but it has devolved further into “fast, cheap, good - pick fast”. This is obviously a generalisation, but the reality for most software development is that speed to market, pivot and patch trumps everything else. I expect it will get worse before it gets better.


I disagree with the sentiment here. Modern software is much more polished and professional but perhaps more slow.

I remember installing stuff 10-15 years ago; it would constantly fail with weird errors and bugs. Using Linux was a real pain. Everything was much more monopolistic than it is today. You had to use x for that, you had to use y for this. While we still have a lot of issues today, it is vastly better than it used to be.

I haven't had any machine crash with a BSOD on windows for years and on linux everything just kind of works and I have fewer and fewer issues. I can work remotely 100% of the time and do impressive things in the browser, stuff that was simply not possible at all 10-15 years ago.

I remember what a big deal Gmail was when it came out and today we take that kind of service for granted. Even with all the warts modern software has, it is still better than it used to be.


Or you’ve just learned to work around the problems and you’re no longer noticing it.


One word: energy!

As we burned all hydrocarbons we eventually discovered the personal computer and the internet AFTER we decoupled our economy backing from hard assets.

Now that energy, which is just stored sunlight, is running out and so everything else will slow down and have to be optimized.

We are at peak EVERYTHING, both good and bad!

Personally I'm staying on HTTP/1.1, SMTP and DNS, OpenGL (ES) 3, SPI, JavaSE on server, C+ on client (C syntax with C++ compiler), vanilla HTML, .css, .js for GUIs for life.

Windows 7 is still the best Windows. Linux is still the worst desktop.

Hardware is NOT getting better, I'm staying on socket 1151 until they become too expensive to power!

Intel Atom as load balancer, Raspberry 2/4 as file/compute servers, Raspberry 4/Jetson Nano as desktop and Raspberry Pico as mobile communication is the only viable future.

Let's get busy and build a low energy future!


Not sure what you mean by the worst desktop. I guess you mean UI/UX, which, tbh, I stopped caring about a long time ago.

I'm using Linux as my daily driver, usually work from the commandline and have multiple screens which I can cycle through with a keyboard combination.

This way, I'm at least 3x more productive (speed, efficiency, ease of use) than Windows. Most of the time, I'm not even using a mouse.


I use TWM on Atom/NUC/Raspberry 4, and it works ok... BUT the GUI focus, copy/paste and /\ problems are troubling me... WHY?

I said 2 years ago I would switch to the Raspberry 4 as my main driver, but it's still on my side monitor, with the keyboard on the side!!!

I'm waiting for higher KWh prices... probably next winter I'll have to switch!


Why twm? Have you considered using a modern but non-bloated Wayland compositor like sway, using applications that natively support Wayland where possible and Xwayland where needed? I imagine that would make much more efficient use of the GPU that the Raspberry Pi has always had.


Hm, I'm usually not a fan of using new stuff unless it has been stable/working/default for some years... the last I heard of Wayland was NOT good, but that was a couple of years back... The way you should use the GPU is with OpenGL; 2D can get along without hardware acceleration. I don't see how Wayland can be better at using OpenGL, as it is the application that calls the GPU directly without going through the window manager!?


Comfortable office workers and computer users are more aware now than ever of the billions of people in the world less fortunate than them. It just feels gauche, these days, to get too upset if the loading spinner on your diet tracking app freezes for one frame when the animation loops.


BS. You inflict terrible software on one user, you've inflicted it on them all. We spend more of our day dealing with the burden of others' lack of care than we do making things that unequivocally work.

I can't fix that employers are exploitive due to prevailing incentives; I can't get rid of the Jack Welch wannabes, the unethical fintechs or feature mills, the daftness that is the perpetually inflating asset bubble, etc... Yet despite that I try hard on a real regular basis to help people navigate an ungodly complex networked world a smidge easier every day; and all I've gotten out of it is trauma, hate, discontent, and a precious handful of really amazing people that all try to keep each other sane while we watch the world seem to burn down around us.

I never sacrificed standards, nor accepted their sacrifice, and I will continue to pound them into anyone mildly receptive. Until everyone stops seeing software as the latest lottery ticket though, this isn't going to stop. At all.


Well, I mean, old software was also pretty bad; there may be an aspect of rose-tinted spectacles here...

For instance, see Lotus Notes.

There does seem to be an increased tolerance for UI latency in _consumer_ software over the last decade or so, and I'm not totally sure why. Pretty much any electron app is laggier than pretty much any consumer app from the early 21st century, and people seem to be broadly okay with that.

Also, Microsoft may at this point view OneDrive as basically an enterprise thing, and you can get away with practically anything in enterprise software. Without seeing statistics, I would guess that OneDrive is not commonly used for consumer purposes vs the competition; it definitely _feels_ like an also-ran. Microsoft's enterprise offerings have always been pretty awful.


Ultimately: our version of capitalism. Which focuses on growth. Which translates to features. Which rewards proofs of concept. And devalues things like longevity, maturity and stability.


It's the TODO that's the killer. We are constantly adding tech debt, and never paying it back.


The number of times I've come across a TODO I added that I could have just done in 2 minutes instead of leaving future me an annoying note makes me want to slap myself!


But would you be allowed to do that?

Minor renames and similar fixes would be fine, but at work I'd get in hot water sooner or later if I added (more) random fixes to my pull requests.

While this is not the case all the time, a very good reason is that sometimes one person's bug might very well be another person's feature.


>> 3) The app in general is inconceivably slow. What is taking so long? I am viewing the same directory for 2 hours, why is it still so slow to load?

Practically speaking that's probably because nobody has tested the app with more than a dozen pictures in a folder. At a guess?


Have you seen the world we live in? I have seen inside many organizations which on the outside look polished with great brands....underneath politics, discord, and in some areas chaos. The state of software reflects this...


A lot of the time software is unreasonably reliable and resilient compared to how the world usually works. Just look at 99.99999999999999% durability for object storage. I am significantly more likely to die tomorrow than to lose a file.


They release the app just to provide minimum features. Sometimes they have better UX on the webapp than the mobile app.

I think people should create a ticket and ask for support. If nobody reports, they don't know there is a demand for a particular feature or fix.

I saw a page talking about Microsoft updating Teams to make it less bloated [1]. I think they give higher priority to popular products.

[1]: https://tomtalks.blog/microsoft-teams-2-0-will-use-half-the-...


So Microsoft is moving a flagship product with hundreds of developers from one web browser javascript framework to another (node to react)... That seems to be part of the reason of the bloat.

MS could afford to do it right - hire a few dozen elite native developers who can code it up in native C# with 10x the speed and 0.1x the resource consumption like they did for every other MS Office application, rather than this half-baked layering of 3rd party frameworks on top of an interpreted language on top of a web browser...


Sadly they don't care. There are countless reports where the reporting user gets pointed to answers.microsoft.com, and some asshead employee replies telling the user their issue is invalid and they should post on UserVoice instead. For example, OneNote class notebooks can still not delete group folders, ever. A bunch of them get added by default, and can never be deleted. Adding one takes a few clicks and it can never be deleted. This was first reported in 2016 and hundreds of frustrated teachers clicked on "I have this problem too". Some Microsoft employee replied telling people that that's not going to happen, and they should post on UserVoice. As of this year, the entire UserVoice instance with thousands of issue reports and hundreds of comments on each was deleted. Microsoft has used every tool available to tell its users their feedback is not wanted and they should fuck off.


Unrealistic, arbitrary delivery timelines, so that someone looks good on their performance review, lead to cutting corners in development and testing, and finally to accepting non-showstopper defects as release-ready. Oh, and I don't think teams working in Agile helps, unless every team is coordinated with the others so that no one steps on each other's toes (I've unluckily not been able to see this work).

But this is not a modern problem. I saw this from when I started working in software 25 years ago.


Increased cycle budget, developers much less into 'hardcore' stuff (algorithms), move fast and break stuff, automatic updates.

Pick any or all of them as fractional contributors.


Speed. Everything has to move fast now for some reason.

Money. Every dollar of QA is potentially a dollar wasted on making something better than what customers can tolerate.


I think the "some reason" is fairly obvious: if you don't move fast in a competitive area (i.e. where the money is), someone else will and they'll eat your lunch.


Everything being revenue optimized and run by MBA people


5000 pictures is a lot of pictures. Adobe Lightroom struggles with that many.

It seems to me that if somebody said ‘let’s make a program that can easily view 10,000 picture albums on a high-end computer’ it could be done. You would have to think through the data structures and apply the methods used in high-end video games.

It seems to me nobody is taking the problem seriously enough.


I have 130,000 photos in iCloud Photos and the photos app on my Mac and iPhone can scrub back and forth in the history from today back to 1985 completely flawlessly in literally a second with no hitch.

It sounds like OneDrive is a file syncing solution where they hacked on some photo browsing functionality to preview a handful of pictures in a directory and it's not thought through for people who actually want to use it for more than that.


Any particular set of algorithms and data structures is going to scale in a certain way.

Some people might really be happy with having 500 images in a directory.

With 130,000 local thumbnails and the right data structures you'd think you could browse quickly, but it is a lot of thumbnails (might not want to store that much.) It would make a big difference if you had SSD or a hard drive or could afford to store a lot in RAM. If it is based on cloud storage you have an entirely different problem but also different solutions available (e.g. when you log into the application a huge amount of data gets spooled into RAM sequentially so temporarily you are using vastly more resources than would normally be available; this is a lot of why gmail and google analytics are so responsive.) There also are tricks that would make scrolling from the beginning to the end seem really fast but not be so good at scrolling into the middle.
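
A rough sketch of that windowed approach, in TypeScript (fetchThumbnail, the window size and the cache bound are all made up for illustration): only the visible slice plus a prefetch margin is ever materialised, and evictions keep memory flat no matter how far you scroll.

    // Hypothetical thumbnail source: disk, SQLite, or a cloud API.
    type FetchThumbnail = (index: number) => Promise<string>; // returns a data URL

    class ThumbnailWindow {
      // insertion-ordered Map doubles as a simple LRU eviction queue
      private cache = new Map<number, string>();

      constructor(
        private total: number,
        private fetchThumbnail: FetchThumbnail,
        private windowSize = 200,  // thumbnails kept around the viewport
        private maxCached = 2000,  // hard memory bound
      ) {}

      // Called on every scroll event with the first visible index.
      async onScroll(firstVisible: number): Promise<string[]> {
        const start = Math.max(0, firstVisible - this.windowSize / 2);
        const end = Math.min(this.total, firstVisible + this.windowSize);

        // Refresh the LRU position of cache hits, remember the misses.
        const missing: number[] = [];
        for (let i = start; i < end; i++) {
          const hit = this.cache.get(i);
          if (hit !== undefined) {
            this.cache.delete(i);
            this.cache.set(i, hit);
          } else {
            missing.push(i);
          }
        }

        // Fetch only the misses, in parallel.
        const fetched = await Promise.all(missing.map(i => this.fetchThumbnail(i)));
        missing.forEach((i, k) => this.cache.set(i, fetched[k]));

        // Evict least recently used entries so 130,000 photos never
        // means 130,000 thumbnails in RAM.
        while (this.cache.size > this.maxCached) {
          const oldest = this.cache.keys().next().value as number;
          this.cache.delete(oldest);
        }

        const visible: string[] = [];
        for (let i = start; i < end; i++) visible.push(this.cache.get(i)!);
        return visible;
      }
    }

    // Example wiring with a fake fetcher:
    const win = new ThumbnailWindow(130_000, async i => `thumb-${i}`);
    win.onScroll(64_000).then(v => console.log(v.length, "thumbnails ready"));

Whether the backing store is a local SQLite database or a cloud API only changes the latency you have to hide, not the shape of the data structure.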

In my case I am dealing with 50MB RAW files from my camera which are admittedly excessive (most of the pixels are surplus, I shoot pictures that look great on a screen or on a 8.5x11 print but if I zoom in they rarely look 'tack sharp') The scaling of that is a different story than 0.5MB JPEG files.

I remember iPhoto being absolutely awful circa 2010 with 10,000 local files on a magnetic hard drive. You're telling me it is much better w/ cloud storage in 2022.


Fail fast culture.


In the end, you need to decide between features and stability. You can certainly write a very stable, simple app without all the extras that make it complex, stuff like Google Cast. But unless users demand it by paying more for a simple but stable app than a fully-featured app, there is no incentive to invest in stability instead of features.


I see a slippery slope in ML engineering, where to make something “production ready” engineers layer a relatively simple workflow in about 8 different technologies. I sometimes wonder if it’s preferable to just deal with occasional issues as they occur than to anticipate every deployment bug under the Sun.


The good thing is, if one searches for it, one can find really good, stable, fast software from developers who care about bug reports, for most tasks. It's not always easy, it's most of the time more expensive, and you'll probably not get all those shiny features that look good on landing pages. But it exists.


Easy - nowadays value is not created from software itself, but from data lock-in and "capture".


Profit motive and relentless ruthlessness in business by every company exploiting their customers.


It's probably already been written below, but the core problem appears when the software is not being SELECTED by INFORMED receivers. In that case, no 'evolution'/survival-of-the-fittest is happening.

E.g. when you are using an app tied to a service you are already sold on, you cannot select an alternative app to use (because e.g. your fitness-chain only has their own app). Similarly, if you are a gmail user, most users are not informed enough to figure out they could access their mail with an alternative IMAP client.

Good software happens when you can choose between competing alternatives. But the world is currently filling up with siloed monopolies that don't have to compete.

I see something similar with Christmas calendar candles. Every December, supermarkets stock Christmas candles of horrible quality. They get away with it, because every shopper only needs to buy a single calendar candle, and every shopper is an inexperienced Christmas-candle buyer. Meanwhile, the rest of the year, you can buy big candles of the same size, of excellent quality, just without the dates of December marked out. TL;DR: A lot of inexperienced candle buyers exist only in December.


It’s people who code for money not passion and borderline nefarious product managers.


People still haven't gotten used to magic that works. We're still in the snake oil phase of this technology. "The wonder isn't how well the bear is dancing -- it's that the bear can dance at all."


Start with your base assumption first, is software now worse than it used to be?


> The directory list view keeps "losing" the position at which I am, so every time I share a photo, I have to scroll down to where I left (in a directory with 5000 pictures)

It was not tested with folders that large.


no dogfooding

too few developers enjoy their job

managers are incompetent but multiply like flies


Its because of JIRA :P


As others have already pointed out, at large, software is much more reliable today than it used to be. But you also have more of it, some % of which is bad.


Did you pay for that app? If you did pay, did the license allow you to hold the seller accountable for any bugs? No? Ok then.


Ambition. Most software is created too quickly, and the projects are driven by people with more money than wisdom.


> the last straw for me has been OneDrive

Did you ever use Windows 95? Seems like the quality now and then is about the same.


Interesting responses in here.

It's all so vague, it's all too easy to see one's favourite enemy as the culprit.


And if you copy and paste to any of the other apps this firm sells, good luck with the formatting!


Not addressing your question, but your problem: OneDrive is crap, use Dropbox instead.


People get the software they deserve. The market is rewarding that type of software.


Simple answer. Software companies have HR departments.


It's made by giant org charts.


time to market

growing faster than competition

distribution networks

growth hypothesis > value hypothesis

Sufficiently Strong Optimization Destroys All Value


SW in its current state is horrible and unacceptable. I agree. I cannot understand how it is remotely possible that basic intuitive logic fails. I've been in this industry for decades as QA, SW, Pgm Mgr/TPM. And the fault is at all levels. Not just QA.

QA: For the most part QA always wants to do the right things and push, but they are so low in say that it matters very little to none. And having worked at FAANG/M, the QAs are wives/friends/no-industry-experience hires brought in as favors to engineers who couldn't spell QA if their lives depended on it.

ENG: they do what they are told and agreed to, to an absolute minimum. I.e. they don't even bother with boundaries, let alone check corner cases, and don't even bother with basic smoke/regression testing before checking in. They'd rather just break the build and spend 2-3 days on retraction than spend the 2-3 days being thorough, complete and working.

TPM/Prj/Pgm Mgrs: meeting their own badly projected schedules and/or trying to meet unrealistic schedules driven by factors outside of their control.

Prod Mgr: badly researched, designed and defined products. A good product doesn't need to be explained. It should be designed to just work. Thanks Steve Jobs for pushing that mantra. It's like products for babies designed by people who have no concept of a child, or autonomous driving designed by a person who doesn't own a car or drive.

A clear example of what happened to me today. I have a Googl home thingy in every room. As a father to a toddler, having voice control at times is indispensable. Today, I tried to get a mini to stream music, but it tried to stream a video stream, complained that I couldn't on that device, and then proceeded to stream the audio portion anyway. Before I noticed that it had, I clarified my request by saying to stream music. So it did... BUT it was now streaming both the audio portion of the video it said it couldn't do and the new content. I tried everything under the sun to stop one or the other or all of it on every device... NOTHING. It kept playing. I was bathing my kid and had to dry my hands and get to a phone to manually pull up the Home app and kill the streams. OMG. Now my toddler screams one thing and only one thing, because that's all that is screamed in our house: "Hey Googl... STOP!"

There is no question that the idiot's mantra "move fast and break things" is being literally executed everywhere.

Love and hate with Tesla the same. For every update, there are 10 things they break / steps backwards.

I have no solution; it's down to the mindset of next-gen companies to give their users quality and quantity. I loved gadgets, tech and all things shiny shiny, but the disappointment is almost unbearable.

PS> Now that we literally have an army of geriatrics with time and money, someone should create the "Grandparents tested and approved" certification. If they can find what they need to do without intervention, you've got gold.


I reject the premise


Bad programmers.



