Saving Lives (2004) (folklore.org)
340 points by compiler-guy on Aug 21, 2023 | 313 comments



A story an old engineer at Apple told me:

When working on Mac OS 8.x (not sure which point release), they surveyed users, and their number one complaint was boot time. The system took a long time to boot (around 45 seconds on average at the time). They looked into it, but they also asked why people cared about boot times at all: by that point the systems were capable of sleeping, so reboots should have been rare.

They found that people were rebooting because of instability, and far more often than once a day or once a week. While they did improve boot times, they put more effort into making the OS more stable. When the new release shipped, people stopped complaining about boot time, not because it was vastly improved, but because they were rebooting less often.

The moral of the story is to make sure you understand both what your customers are asking for and why your customers are asking for it.


> When working on MacOS 8.x (not sure which point release), they surveyed users […] They found that people were rebooting because of instability, not just once a day or once a week.

That didn’t require a survey. The OS didn’t have memory protection and typically got patched at startup by ten or so different extensions from both Apple and numerous third parties.

The rules for patching were unclear, to say the least (1), so an extension might, for example, have a code path where it allocated memory inside a patch to a system call that might be moving memory around (a no-no, since the Memory Manager wasn't reentrant).

And all of that was code typically compiled with a C compiler of the era, with very, very limited tooling to prevent out-of-bounds memory writes.


Apple's customers had been screaming for better stability for years and Apple repeatedly tried and failed to deliver a meaningful solution. Even MacOS 8 introduced very limited memory protection that didn't help much in most practical cases. In context, it's really a story about an organization's capacity and will to rationalize - this very nearly killed Apple as a business.


> In context, it's really a story about an organization's capacity and will to rationalize - this very nearly killed Apple as a business

What damaged Apple's Mac business in the 1990s might have been due to tunnel vision and self-delusion, but the driving issue was a loss of obvious differentiation vs. cheaper PCs running Windows. They were all beige boxes with a serviceable GUI that ran the same software, and customers didn't see the value in paying Apple's premium prices.

With the return of Steve Jobs, Apple resolved the PC differentiation issue by refocusing on design in both hardware (iMac) and software (OS X); Apple also sidestepped Windows dominance by focusing on non-PC devices such as the iPod, iPhone, and iPad.


That certainly sounds about right. I definitely lost more time to the fact that a Quadra would freeze with high probability during a scan than I ever lost to intentional reboots.


Mac OS X took time to shut down though.

When a friend first showed my wife Mac OS X and went to shut it down she frowned, "That's something I liked about the Mac, it would shut down instantly."

"You'll have to find something else to like about Mac OS," he said.


Well, most of the time it's not like you have to sit beside your (desktop) computer and wait until it has finished shutting down, so I guess that's less of an issue than the startup time...


It's an immutable law of the universe that consumer computers will always take at least 30-45 seconds to boot. If yours is faster, wait a few years... the developers will allow enough regressions to slip in that it'll go back up again.


Every time I’ve had a computer that would boot faster than that, it seems I’d be stuck with a monitor that would take 30 seconds to come on and decide to display something.


Probably because your monitor’s OS is booting


I should start thingsthathaveanOSandshouldnt.net or something.


My Kaypro 1 boots into CP/M right away if the screen is already warmed up.


The hack around this would be to use your smartphone as your computer. Connect a keyboard to it. Do heavy stuff in the cloud.


Or run classic Mac OS 9 in emulation. It's fun to watch it boot up like lightning.


By the time Mac OS 8.0 was released in 1997, Macs all had MMUs and Apple was already working on merging Mac OS and NeXTSTEP; the first iteration was released in 1998 as Rhapsody.


Trust people when they report there's a problem, but don't trust them with the solution.

Otherwise we would get faster horses instead of cars.


If I remember right, in "The Inmates are Running the Asylum", Alan Cooper says there are two golden rules:

* The user is always right.

* The user is not always right.

And then he explains that the first point means the user should be treated as the authority on what their problem is. You can't just tell them they're "doing it wrong" or rationalize away their pain.

The second point is that users are not designers and shouldn't have to be. They'll often come up with ideas for solutions, but you shouldn't take those as what needs to be done.


Violating the first point is really common in programming. If you ask a "stupid" question, you don't get an answer like "here's how to do it, but by the way, you could also do this instead"; people just flame you with "you shouldn't have been doing X".

A good example is FTP. Obviously, for anything requiring any kind of security, use SFTP. But I kid you not, almost all FTP-related questions on the internet get answers like "are you still using that INSECURE protocol in 2020??" without being constructive at all. Even if it's just some random hobby project. Or a legacy system they can't change. Doesn't matter; it's more important to score points from virtue-signaling than to actually help the poster.


> Even if it's just some random hobby project. Or a legacy system they can't change.

Or a modern system.

My brand new image scanner only transfers files wirelessly via HTTP or FTP.

People in places like HN freak out with "Oh noes! Teh securities!" But its wireless connection works as a Wi-Fi access point that only allows one client to connect, and it only stays active for a few minutes.

Not every computer is inside NORAD.


Bill Hader puts it this way for writers/artists/creatives, and I think it applies perfectly here too:

"When people tell you that there is a problem, they're always right.

When they tell you how to fix it, they're always wrong."


I like this, and I think it applies best to creative works, in which the creator is the expert on the characters and storytelling. (A screenwriter needs to know the audience isn't feeling the romantic chemistry between the two leads, but probably doesn't want to hear their steamy fanfic scenes.)

But when designing solutions for people who are the domain experts (and this might even just be the domain expert of how a particular factory line works in practice), the dialogue with them likely includes their ideas for solutions. These ideas don't have to be "right" as-is, but might suggest the right direction, or just be loaded with bits of relevant knowledge that inform whatever the solution ends up being.


Valid! But I think it still works, because when you're designing software for domain experts, they know their domain, but they don't know yours (software design).

So yeah, they'll say why something won't work, but their proposed solution will probably involve another button, another drop-down, a hamburger menu, or another option in the settings.

And then it's your job to figure out if it's the right solution.


That's a nice way to put it!


We'd also get intergalactic ships


> The moral of the story is to make sure you understand both what your customers are asking for and why your customers are asking for it.

One reason engineers enjoy questioning the premise of a difficult feature is to avoid the work entirely. The problem with this is not that engineers are lazy; it's that once the goalposts are moved, the success metrics can be futzed in a way that is ultimately detrimental to users.

Did Apple really resolve the boot times and OS instability, or did an aspiring PM or lead achieve the bare minimum of the goal to claim victory internally?


I must have heard this story and forgotten it, because I used this argument on my team when I ran the group at Blizzard that did installing and downloading and patching. “We have 10 million people downloading and installing this patch, so every extra minute we take is another fraction of a human life we’re spending”. Sure, overly dramatic, and corny, but it helped drive improvements.

The other more important metric I pushed was “speed of light”. When installing from a DVD (yeah, olden times), the “speed of light” there was the rotational speed of the disc and so we should install as close to that speed as possible. Keep improving speed of operations until you butt up against whatever physical limits exist. Time is precious, you don’t get more of it.
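A back-of-the-envelope version of that math, using the 10 million downloads and the one extra minute from the comment above (illustrative numbers only):

    # Back-of-the-envelope: how much aggregate human time does one extra
    # minute cost when 10 million people each sit through it once?
    users = 10_000_000         # downloads, figure taken from the comment above
    extra_minutes_each = 1     # one extra minute per person

    total_minutes = users * extra_minutes_each
    total_years = total_minutes / 60 / 24 / 365

    print(f"{total_minutes:,} minutes is roughly {total_years:.0f} person-years")
    # 10,000,000 minutes is roughly 19 person-years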


I wish more engineers thought this way. As someone who works in infrastructure it’s the story I tell myself to justify/rationalize my place in the world. When I ship big infrastructure performance improvements it’s not about the speed or money saved per se, it’s less CO2 in the atmosphere and more human life (amortized over millions of people) spent on something other than waiting for a computer to respond.

We aren’t doctors saving individuals’ lives but what we can do is give people fractions of their lives back. Some software is used by hundreds of millions or billions of people, so small changes there can save many “lives” worth of time.


Back in the day I was hacking on WoW-related stuff like server emulators, and it was always very noticeable how much care Blizzard put into this kind of stuff. The (iirc) torrent-based patch distribution for WoW etc. was really well done. Kudos, especially in such a high-pressure industry!


That last part is important. I have worked with many engineers whom I would even classify as hard working, but who spent little to no time understanding the hardware they were running on and the possibilities it provided them.

I have heard "that's slow" or "that's good" too many times in performance talks that have completely ignored the underlying machine and what was possible.


Learning about how the CPU cache works is probably the most useful thing you can do if you write anything that's not I/O limited. There are definitely a ton of experienced programmers who don't quite understand how often the CPU is just waiting around for data from RAM.
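A minimal sketch of the effect, assuming NumPy is available: both sums below touch the same number of elements, but the strided one lands on a new cache line for nearly every element, so it spends most of its time waiting on memory rather than computing.

    import time
    import numpy as np

    N = 1_000_000
    contiguous = np.ones(N, dtype=np.float64)          # 8 MB, walked sequentially
    strided = np.ones(N * 16, dtype=np.float64)[::16]  # same element count, spread over 128 MB

    def bench(label, arr):
        t0 = time.perf_counter()
        for _ in range(100):
            arr.sum()
        print(f"{label}: {time.perf_counter() - t0:.3f} s")

    bench("contiguous", contiguous)  # streams whole cache lines, prefetcher-friendly
    bench("strided   ", strided)     # new cache line per element; typically several times slower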


It is a shame that there are not better monitoring tools that surface this. When I use Activity Monitor on macOS, it would be useful to see how much of “% CPU” is just waiting on memory. I know I can drill down with various profilers, but having it more accessible is way overdue.


Instruments?


Digging around in Instruments is the opposite of accessible.

Every OS has always had easy ways to tell if a process is waiting on disk or network (e.g., top, Activity Monitor). The mechanisms for measuring how often a process is waiting on memory exist, but you have to reach for a profiler to use them. We are overdue to have them be more accessible. Think of a column after “% CPU” that shows the percentage of time blocked on memory.


What would you do with that information? You'd need a profiler (and either a copy of your code, or a disassembler) to make it actionable…


I would do the same thing with the information I get from top and Activity Monitor: use that to guide me to what needs investigating.

I am often developing small one-off programs to process data. I then keep some of these running in various workflows for years. Currently, I might notice a process taking an enormous amount of CPU according to top, but it might really be just waiting on memory. Surfacing that would tell me where to spend my time with a profiler.


I’m having a very hard time imagining how you would go from a “percent time waiting on memory” to something productive without doing more work in between. Even assuming you’re dealing with your own, native code, the number tells you almost nothing about where the problem is. The only process I’ve ever seen working is “hmm I have a CPU-bound performance problem (as reported by e.g. Activity Monitor)” → “I used a profiler and the problem is here or it’s spread out” → “I used a specialized tool”.


> The only process I’ve ever seen working is “hmm I have a CPU-bound performance problem (as reported by e.g. Activity Monitor)

I want to be able to do the same for memory bound performance problems.

But the top level tools are stuck in the land of decades ago when CPUs were the bottleneck.


My point is that this isn't how performance work is done. You have to first diagnose that the issue is CPU-bound before it being memory bound can enter the picture. Time spent waiting for memory is accounted the same as any other CPU work, so it goes under that metric.

To make an analogy, this would be like adding a metric for function calls into Activity Monitor and using it to diagnose quadratic performance. You can't just take that number and immediately figure out the problem; you need to go look at the code and see what it's doing first and then go "oh ok this number is too high". The same applies to waiting for memory. What are you going to do with a number that says the program is spending 30% of its time stalled on loads? Is that too high? A good number? You need to analyze it in more detail elsewhere first.


> Time spent waiting for memory is accounted the same as any other CPU work, so it goes under that metric.

Yes. I know. That’s my point. Tools exist to dig deeper and are not surfaced better.

I do performance work often. I simply stated that it is a shame that the highest level tools do not show us valuable information.

You are free to accept the status quo.


You’re really just making a case for firing up a profiler more often. That’s fine, I do that a lot. But what you’re looking for has no meaning outside of that context.


I would like to fire up a profiler less often.


Instruments is not nearly good enough for any serious performance work. Instruments only tells me what percent of time is spent in which part of the code. This is fine for a first pass, but it doesn’t tell me _why_ something is slow. I really need a V-Tune-like profiler on macOS.


I’ve used it professionally and generally been happy with it. What are you missing from it?


I’ve tried to use it professionally, but always end up switching to my x86 desktop to profile my code, just so I can use V-Tune.

It’s missing any kind of deeper statistics such as memory bandwidth, cache misses, branch mispredictions, etc. I think fundamentally Apple is geared towards application development, whereas I’m working on more HPC-like things.


Have you tried using the performance counters? They've been useful in my experience, although I don't touch them often. Instruments is definitely not geared towards this since most application developers rarely need to do profiling at this level, but it has some level of this built in when you need it.


It’s only useful once you understand how algorithmic complexity works, how to profile your code, and how your language runtime does things. Before that, your CPU cache is largely opaque, and trying to peer into it is probably counterproductive.


Okay, you've made me want to learn about it. Where do I start? What concepts do I need to understand? Any reading recommendations?


Haven't read through it, but I suspect this would be a good place to start: https://cpu.land/

HN Discussion: https://news.ycombinator.com/item?id=36823605


Drepper's "What every programmer should know about memory", though you mightn't find it all interesting. https://gwern.net/doc/cs/hardware/2007-drepper.pdf


Having lived through the early horrors of WoW patches and updates, I have nothing but praise for how WoW works today in terms of updates and distribution.

My favorite feature is how it supports incremental loading. WoW is a huge game, but you can start playing with a fraction of the assets. It will play in a downgraded way, with placeholders and lower-quality assets, as well as skipping entire areas completely.

You can reinstall from scratch and be up and playing in minutes. It’s one of the hidden joys of the platform that players mostly take for granted, but I appreciate the no doubt legion effort involved to pull this off, to change the wheels on a moving train, and to deliver a just uncountable amount of data with little drama and great performance.

So kudos to you for whatever your contribution was to making such a core facility to the system so painless for the end user.


> Time is precious, you don’t get more of it.

In this particular example, the time saved on the download will go towards the noble cause of ... playing video games? Is that so much better use of time than the wait for it to download?


That’s assuming people play more when the download is faster.

And to answer your question: for everybody involved it’s better yes.


Steve Jobs would always make up stuff ("reality distortion field") to motivate and push people. One of his famous stories that I found very funny --

According to Mike Slade, he was working at Microsoft around 1990, and Jobs was trying to recruit him to NeXT. (Bear in mind that Microsoft was only a few years from launching its mega-hit Windows 95, while NeXT was struggling to sell computers.)

During a conversation, Jobs told Slade he would find his talents wasted in Seattle. In contrast, Jobs called Silicon Valley a hub of excitement and activity where Slade could blossom.

Jobs then launched into a spontaneous, impassioned speech. He described Palo Alto, California, as a “special place” and likened it to Florence during the Italian Renaissance. There was so much talent in the area, Jobs said, that you could walk down the street and bump into a scholar one moment, an astronaut the next.

Jobs’ off-the-cuff description of the place bowled over Slade. It was a twist on Jobs’ famous pitch to Pepsi CEO John Sculley. (Jobs asked whether Sculley wanted to sell sugar water his whole life or join Apple and change the world.)

After the talk, Slade agreed to pack up his stuff and move to Palo Alto.

Jump forward a year, and Slade and his wife were eating in Il Fornaio, an Italian chain restaurant with a location on University Avenue in Palo Alto.

“We were sitting there, in early ’91, and I’m reading the menu,” Slade recalled. “And on the back of the menu at Il Fornaio it says, ‘Palo Alto is like Florence in the Renaissance…’ And it goes through the whole spiel! The fucking guy sold me a line from the menu! From a chain restaurant!! Bad ad copy from Il Fornaio, which was his favorite restaurant, right? Such a shameless bullshitter!”

https://www.cultofmac.com/573753/how-jobs-poached-a-microsof...


It's really funny when you think about how underwhelming Palo Alto is too.


In harmony with Silicon Valley, a nearby university, and a popular fruit-named computer company, Palo Alto cultivates an arrogance and narcissism that I would not describe as "underwhelming".

So I'm completely unsurprised that Il Fornaio's pitch would resonate with Jobs and that he would quote it verbatim.

Be that as it may, a number of cool things - and successful companies - have in fact come out of Palo Alto. And you can meet some interesting people there. I believe you can still catch Pac-4 football games there as well.

Il Fornaio and Palo Alto might also have been better in 1990, when Jobs quoted his pitch.


Different strokes for different folks. I quite like PA in relation to other cities on the peninsula. It's upscale, has a good mix of cuisines and fanciness/expense for them, has interesting, differing types of shops to explore on a few different streets so it isn't just one main strip like Castro in Mountain View or Laurel in San Carlos, and is generally safe and clean.

I'm not sure what would make it more appealing to you, but it may be that you're just seeking a different vibe or are at a different point in your life where you may not value some of those things the same.


...but can you bump into an astronaut walking down a street?


I mean, the food is pretty mediocre, especially when you look at the prices (except Bevri and Zadna, which are delicious), the nightlife is basically nonexistent, the shops are eh, there are cars everywhere downtown, and the general amenities for kids (playgrounds, parks, cool places to walk and bike around) are subpar. I don’t see how it can be worth paying $2k a sq ft to live there, imo.


Sushi Shin is a Michelin starred sushi restaurant. We have decent ramen options, we have Zareen's and Broadway Masala for Indian, we have Redwood Bistro which has some authentic sichuan and iDumpling for XLB. We have good beer and decent brewpub options, and a few good brunch spots. There's several other decent food options as well. Agreed things are overpriced at many, but that's the peninsula as a whole.

If you're at the stage of your life where you have small children, or are not someone who does a ton of going out, nightlife is irrelevant.

Agreed it could have more shops, it's very one dimensional right now.

Every downtown has cars everywhere, and the strip of restaurants on Broadway is notable in that it is still blocked off for foot traffic and is wonderful.

I have no idea what your idea of above average is for parks and playgrounds. We have a Magical Bridge, Maddux and Stafford parks and others. We have Stulsaft and other options.

I'd be really curious where you're getting $2k/sqft.


Oren's Hummus is pretty good ¯\_(ツ)_/¯


Radiohead even wrote a song about it.


Man, Steve Jobs' Palo Alto must have been a truly special place. The only memorable thing I encountered in Palo Alto's street (while working there a few years ago) was the overwhelming stench of urine in the underpass beneath the Caltrain Station.


The Smell of Palo Alto! You know it too?

More seriously, there really is an absurd shortage of clean, available, public restrooms that are open 24 hours. It's a huge issue in the Bay Area and SF especially, but it's bad in many US cities.

At some point we seem to have decided that clean streets (and train stations) are not worth the changes in regulatory requirements and funding that would be required. Palo Alto actually has a pleasant and quaint waiting room from its Southern Pacific days - but of course it's always closed, especially post-pandemic.

I was shocked recently though when I was in a BART station that had a public restroom that was actually open and maintained. Seems like a good idea considering how often escalators are "closed for maintenance."


Seems kind of apocryphal. You mean to tell me a smart professional engineer working at one of the biggest and most prestigious (at the time) companies of the world is going to quit that job, uproot his life, and move to an entirely different state, just from a single "Trust me, Bro, it's awesome" endorsement from a potential employer? I'd have wanted to at least fly down there, look at a few apartments, visit the office, and so on, before making that kind of commitment. It makes a cool story, but there must have been more to it.


Steve Jobs, whatever else you want to say about him, had charisma. It's a big part of why he was successful. So that's kind of the point. He had an ability to take a message like "trust me bro, it's awesome" and say it in a way that it would resonate, and that ability was most of the secret sauce of being Steve Jobs.


Eh, at least this short story did not say that. What it stated is that the 'hook' line that got him was pulled from a menu. Not that this guy didn't at least go to Palo Alto first and make sure it wasn't a total shithole.


> Bad ad copy from Il Fornaio, which was his favorite restaurant, right?

Funny story, but I find it hard to believe Il Fornaio, with its mediocre Italian fare, was Jobs' favorite restaurant.

This is the restaurant we'd go to when all other options were booked or it was too late to drive further.


Just because he is great at business doesn’t mean he has great taste in Italian restaurants


I'd actually argue there's more evidence of Jobs having good taste in general than being good at business.


Do they sell fruit?


It’s a funny story, but… yeah, the early 90s was a special time in Silicon Valley. It was THE center of the computing world. And you really did just randomly bump into amazing people at Fry’s or restaurants or bars or whatever. I don’t think younger people understand how much around them today, when it comes to technology, can trace its roots to 90s South Bay and Peninsula.


I object to the idea that San Francisco, with its yuppie tech culture, was truly comparable to Florence in the Renaissance. The Renaissance produced works of culture and art in addition to the technological advances. In that regard, Seattle produced the best music of the decade and would be an equal contender to the title.



This is my new favorite Jobs story.


He Keyser Soze'd him


> "reality distortion field"

Huh - I was about to post "I just listened to that podcast!" and bro out with you about Dan Carlin and Hardcore History, but it now occurs to me, and Googling confirms, that "reality distortion field" in that podcast was probably a reference to a known saying about Steve Jobs rather than an original thought.


It was a well known term, and even Apple fans would refer to the RDF (often as an excuse why something hadn’t turned out to be amazing as rumor had it).


He sounds like a sociopath. I could believe him gaslighting Wozniak out of the money he should've paid him for the Atari gig.


As time goes on that seems to be how he’s remembered more and more. A weird psycho.


Programmers and engineers have to apply this thinking holistically. The totality of waiting for slow software is enormous. Performance needs to be given a higher priority by more development teams.

I don't tend to consciously sum all of the time I spend waiting on slow software and slow services. But waiting on slow software impacts my subconscious in the moment, making me feel uncomfortable and frustrated with the system, as if it is antagonistic. If I do spend any time consciously thinking about it, I feel disdain for the engineers and project leaders who believed that what they had produced was good enough to ship.

With the processing capacity of modern computers, waiting for hundreds of milliseconds for trivial requests, or much longer for only modestly-complex requests is evidence of gross negligence on the part of the programmers.


I commented elsewhere about ADHD. So here is a story about a myself I won't name. O_o

My Thursday night girlfriend wanted me to clean up an old MacBook. Just a few steps, unlinking accounts tied to hardware, figuring out how to remove a firmware key some other me must have set, a clean install, updates, etc.

It took me 6 months, because several steps or restarts took more than single digit seconds ... and my work was a siren.

After many aborted attempts, I put it on my desk next to my keyboard. It only took 30 minutes, spread across 6 hours. Victory!

If someone chained my hands to the laptop it would have gone faster, but the suffering incurred by the forced observation of blank screens, status bars and busy balls would have been unimaginable.


You have a different girlfriend every night of the week?


That would be something! But sounds complicated.

I have a committed girlfriend, and another girlfriend who just wants a man for Thursday nights. I do what I can!


Yep. Computers should wait for people, not the other way around (unless it's a long running batch job).


It's a pretty good point: ordinary computers could boot from cold in under 30 seconds on 5400 rpm spinning rust, so why can't they boot in under 1 second on the latest and greatest NVMe SSDs?


Complexity. Size.

Windows 95 was about 50MB installed with most features.

Windows 2000 fit on a CD for the install.

Current Windows 10 installers won't even fit on a single layer DVD anymore, and forget doing the install with a FAT32 USB stick (some older UEFIs won't handle exFAT yet).

The fastest computer I've ever used, perceptually, was a dual Pentium 3 866 with Rambus, booting XP (probably SP1 or so) on a 15k U320 SCSI disk. The thing was telepathic.


The P3 era was really a golden age. Clock speeds were still rapidly doubling, you could get SMP but most people didn't so everything had to optimize single-threaded perf, and likewise "normal" memory spanned 32MB to 512MB so you could really keep multiple programs' full working sets ready at once.


I would've said the P4 era, with hyperthreading opening the door to multi-core programming paradigms. Clock speeds have mostly capped out around 5 GHz since that era.


I’d rather call that the Athlon era. P4s ran like (literal) hot garbage; Athlons absolutely crushed them.


If we're going back to that time period... Alpha.

How I coveted my roommate's machine that had one in it.


Athlon was amazing.


Recently, I was able to get an NVMe SSD into an old Dell (i5-4590) using a modified BIOS and a PCIe adapter card. It booted into a fresh Win 10 install in seconds.

I think it's the old problem where the more crap windows accumulates, the longer it takes to boot.


Icons used to be 32x32 monochrome with a mask. Now they're 512x512 in 48-bit color. System fonts used to have ~200 characters, now they have tens of thousands.

Extrapolate to everything else and it becomes pretty clear. There's just so much more to load.


The math indicates otherwise. As another user pointed out, a 9.54 MHz Tandy 1000 RL could load MS-DOS in 2.2 seconds with 512 KB of very, very slow RAM and a very slow 20 MB drive.

Even factoring in 100x more resource usage for a 2023 computer to deliver all the features expected, it definitely should be way under 2.2 seconds.


You've got to go way more than 100x. An 80x24 character screen used 2K of memory. Running two 4K monitors today uses 50MB of memory.

That's 25,000x more usage of memory for the interface alone.
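The arithmetic behind those figures, assuming one byte per character cell and 24-bit color at 3840x2160:

    # Text-mode screen: 80x24 character cells at roughly one byte each
    text_mode_bytes = 80 * 24                 # 1,920 bytes, about 2 KB

    # Two 4K framebuffers at 24-bit color (3 bytes per pixel)
    two_4k_bytes = 2 * 3840 * 2160 * 3        # 49,766,400 bytes, about 50 MB

    print(f"{two_4k_bytes / text_mode_bytes:,.0f}x more display memory")  # ~25,920x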


100x total system resource demands.

Display memory usage made nowhere near a straight multiplier of a difference even in 1989, as demonstrated by the 1000RLX vs 1000RL, which you would have known if you followed the link in the other comment and watched the video.

You can verify this yourself by hooking up a VGA resolution display, the same as the 1989 Tandy 1000RLX shown, to a modern desktop computer with VGA out and it doesn't reduce boot times by any significant amount.


It was just one example.

Total resource demands are still way, way, way over 100x. Data speeds. Peripheral inputs. Storage. Basically everything.

And no, of course a computer doesn't boot at a different speed depending on display size. It's about the assets and code that fill displays of that size -- all the graphics you've got to load, all the code that has to draw the antialiasing and transparencies and shadows and subpixel font ligatures and everything else.

Same way the code for dealing with storage capabilities is way more than 100x as complex. For peripherals. Etc etc etc.


If you don't fully understand the topic, the smart choice would be to re-examine your own assumptions.

Do you understand how VGA displays or graphics rendering work? Or how computers boot up?

A modern linux system, RHEL, Debian, etc..., isn't going to try to load 4K graphics on a single connected VGA display, especially if you use it without any third party video drivers or adaptors that support 4K video out.

Many motherboards, even in 2023, have a direct VGA out port that it defaults to, reliably. Which is what this typically refers to.

If you're still worried, there's always the option to manually verify the installed files and boot sequence to confirm that it isn't attempting to force it.


I think you're misunderstanding me. This has nothing to do with VGA. I was using 4K screens as just one example of the many, many dimensions of growth.

It's simply the point that there's so much more to load during booting. Your contention that computers only use 100x more resources than in 1984, and should therefore boot in "way under 2.2 seconds" is way, way off.

Computers use way, way, way more than 100x resources compared to two decades ago. Hence, booting still takes a bit of time. It's pretty simple.


You appear to have lost track of the conversation?

> Icons used to be 32x32 monochrome with a mask. Now they're 512x512 in 48-bit color. System fonts used to have ~200 characters, now they have tens of thousands. Extrapolate to everything else and it becomes pretty clear. There's just so much more to load.

If so, let me spell it out step by step. That was the initial reply to me. Therefore...

> I was using 4K screens as just one example of the many, many dimensions of growth.

This example is likely close to meaningless, as elaborated on previously.

Hence why I suggested reviewing whether '32x32 monochrome' or '512x512 in 48 bit color', etc., has any observable effect. With the help of a VGA display showing, presumably, graphics roughly corresponding to the first, another display corresponding to the second, and so on.

If you want to discuss something later on, then it should be in your interest to resolve the first claim as soon as possible in your favour.

For example, if you disagree and still think resolution makes a noticeable difference, then show that convincingly, especially as it's a positive claim, which HN readers tend to treat more critically.

It really seems a bit odd to try to skip that discussion and then claim it 'has nothing to do with VGA' which can only reduce the possible avenues to prove your credibility.

i.e. You are the one who raised the possible "to do" regarding lower resolutions. The reason why I started discussing 'VGA' at all was because of that comment.


My Windows 11 PC boots in about 20 seconds. Over half of that time is the POST. Once that's done, I see the Windows login in about 5-10 seconds. It's fast enough that I don't really notice.


My NUC boots Ubuntu in 3 seconds flat, including POST.


How is this possible? Is it super new?


There were two things I had to do to shave off the last few seconds; the most beneficial was disabling all the unnecessary peripherals in the BIOS. When I looked at the Ubuntu boot log, it said it spent 1.7 seconds uploading firmware to the Bluetooth controller, which at that point was like 95% of the post-POST boot time, and not needing that, I just turned it off in the BIOS.


Mostly a matter of software not being written to make use of the SSD's capabilities. You need parallelism or prefetching to keep the I/O queues non-empty. If you have a single-threaded workload that interleaves blocking I/O with CPU work, and the I/O patterns are not amenable to readahead, the SSD will be mostly idle. Similarly, anything calling fsync or performing other file system operations that trigger synchronous writes on the critical path will stall the entire boot process. Due to caching, writes are fast no matter the medium, as long as you don't demand instant durability.
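A minimal sketch of the parallelism point in Python, assuming a hypothetical directory of files to read: the thread pool keeps several requests in flight (roughly what an NVMe drive needs to approach its rated throughput), while the sequential loop leaves the queue empty between requests. For a fair comparison, run each variant against a cold page cache.

    import pathlib
    import time
    from concurrent.futures import ThreadPoolExecutor

    # "some_dir" is a hypothetical directory full of files to read
    files = [p for p in pathlib.Path("some_dir").glob("*") if p.is_file()]

    def read_sequential():
        # one request in flight at a time; the drive idles between reads
        return sum(len(p.read_bytes()) for p in files)

    def read_parallel(workers=16):
        # many requests in flight; worker threads release the GIL during file I/O
        with ThreadPoolExecutor(workers) as pool:
            return sum(pool.map(lambda p: len(p.read_bytes()), files))

    for fn in (read_sequential, read_parallel):
        t0 = time.perf_counter()
        nbytes = fn()
        print(f"{fn.__name__}: {nbytes / 1e6:.1f} MB in {time.perf_counter() - t0:.2f} s")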


They do, on the same workload. But if you look at the virtual memory breakdown, the vast majority of pages are non-executable data pages. I just did a rough check with Firefox, and the executable pages are ~200 MiB compared to ~2 GiB of private+shared pages. So it's not so much the code; it's all the data: the graphics, dictionaries, icons, fonts, textures, cached data, etc., etc.


Not my recollection, TBH. Yes, my VIC-20 or C64 turned on to immediate usability, but it had no spinning media or real operating system. My Atari ST took quite a few seconds to spin the floppy and dump to the desktop. My next computer in the early 90s, a 486/50 running Linux, would I think seem interminably slow to me now; Linux booted faster than DOS/Win3.1, but we're still talking a big chunk of time.

Honestly, things are much faster now than they used to be.

Plus I can shut my laptop lid, use basically no power, and come back to my session as-is almost instantly. That's new and way better than the 80s and 90s. Then you either had to leave the machine on or suffer slow cold boots.


RISC OS, the operating system that ran on the first few generations of ARM CPUs in the 1980s and early 1990s, was stored on ROM chips. It booted in a few seconds, to a real OS with a GUI etc.

https://youtu.be/5M6OIOIND-0?t=1278 — I think about 12 seconds, of which 3-4 is waiting for two hard drives to spin up.


Atari ST also booted from ROM. But it also expected a floppy disk to be in the drive, to check for auto boot programs, etc. So that slowed the boot. If there was no floppy, it would hang for a while waiting for one, even. Poor choice.


In RISC OS that was optional. There was a setting[1] in NVRAM which set whether or not to look for a boot device, and what that boot device was (floppy disc, hard disc, network).

I don't remember what happened if you configured it to look for extra boot files on a floppy disc, but the drive was empty. I think it would give up very quickly (1-2 seconds), as it was a normal way to load a program on the earlier BBC computers — insert the program disc, which would be bootable, and press the key combination (Shift+Break) to reset.

"Podules" (expansion cards) could also map extra modules into the OS from their own ROM, usually the required device drivers for the card.

[1] https://www.riscosopen.org/wiki/documentation/show/*Configur...


On the ST you could hit ESC to make it skip the floppy check, I believe.


Which is great and fast - until you want/need to upgrade the OS. Security hole, too bad, that is baked into ROM and can't be fixed...


RISC OS could selectively replace parts of the ROM (in RAM) with new code/data, for upgrades, new device drivers and so on.

(I think some viruses loaded themselves with this mechanism. And virus checkers.)

https://www.riscosopen.org/wiki/documentation/show/File%20fo...


Sure, but every time you need to do that, boot time suffers, so what was the point?


In practise I don't remember this being a big deal. At some point I remember helping my dad upgrade us from RISC OS 3.something to 3.11, by replacing the ROM chips, but patches to the OS loaded into RAM were unusual.

The OS in ROM was 2MiB, and looking at some module files intended for potential loading at boot time I have in an emulator, they are around 5-40kiB.

The computers typically had 2 or 4MiB RAM, so there isn't space to replace a significant amount of the OS anyway. (1MiB or 8MiB was possible, but unusual.)


Wow, that's a large ROM. Even in their latest, 68030 based models, Atari never shipped a ROM bigger than I think 512kB (in the TT).

If you wanted the full multitasking modern version of the OS (MultiTOS), you loaded parts of it from disk. Or ran SysV Unix (or NetBSD or Linux, later).

Then again, binary sizes would be smaller on a CISC 68k machine I'd expect.


Some of it's shown later in the long video I posted if you want to look, but the ROM includes the multitasking OS, GUI, BASIC interpreter, text editor, graphics editor, vector drawing editor, calculator, clock/alarm clock, font manager and four fonts, network stack (though not IP), RAM filing system, audio support.


On those old eight bits it wasn’t immediate either. You had to wait for the memory to all get zeroed out and for the CRT capacitors to charge.


Because part of the code these days is written in languages like Java, Python, etc., at least some software runs slower.

Add to that that people think that because machines are faster, they don't need to optimize anything.


Not every part of the boot process is bottlenecked by disk I/O.


Every part it's bottlenecked by is similarly exponentially improved from the olden days, though.


One of the slowest parts of boot-up is memory checking, where the speed has increased exponentially, but so has the size.


Maybe it's the single slowest individual item, but it's very far from being a significant fraction of boot time. And the capacity really hasn't kept up the way speed has. My desktop has 24GB of DDR3 1600 and manages to post in under 2 seconds. And that's pretty old by today's standards. Mid level modern hardware runs at least a circle or two around this system in terms of speed, but in terms of capacity it's still right in line with a higher end system today. Maybe I'm atypical but my boot time is dominated by my OS spinning itself up, by a long shot.


I suppose it depends. My AMD DDR5 machine spends most of its boot time on memory training. Once that's done it's only a few seconds into the OS. (I know I could enable fast boot to skip that most of the time, but I rarely reboot and would rather have the guaranteed stability.)



That is speedy, 2.2 seconds on a 9.54 MHz Tandy 1000RL!

The ~4 seconds to boot up to a GUI desktop is actually even more impressive: https://youtu.be/JIEPqD4luG8?si=9gVtFCIxFYma1erC&t=556

My top of the line i9-9880H Macbook Pro from 2019, with PCIe 4.0 NVMe speeds, needs over 20 seconds to boot up in comparison...


I had a Tandy 1000 TL/2; it had a Tandy-specific MS-DOS 3.3 with a DeskMate setup in ROM, and it booted pretty darn fast, but you had to give that up if you wanted to boot a newer DOS. A newer MS-DOS still booted quickly, and there wasn't much to the BIOS before it hit the drives, but you couldn't run DeskMate on standard MS-DOS.


I realize at the time it wasn't very easy for most people but a computer that often receives upgrades via ROM is the Atari ST. I (sort of) recently upgraded my 1040. I bet one could produce DOS 6.22 replacement ROMs for the Tandy!


God I wish I could run MacOS 8.1, Windows 95 OSR 2.1 or Windows 2000 SP4 on modern hardware. Especially with some tweaking that removes artificial animation delays that stuff should be flying so ungodly fast.


My fresh PC with 64GB of DDR5 takes a minute to get through POST.


DDR5 memory has this thing where it needs to be "trained" to figure out the best settings for a particular memory/motherboard combination.

Maybe your PC "trains" the memory every boot instead of just the first one.

https://www.crucial.com/support/articles-faq-memory/ddr5-mem...


I'm skeptical about whether that's designed behavior, or if this article is in that category of "stories manufacturers write to trick people into not RMA'ing obviously defective product batches" ("A small number of DDR5 systems...").

It's not only the multiple 15-minute RAM boot times (!?) that are worrying: it's that I'd have zero visibility into what underlying cause is responsible for these "small number" of events, and what other symptoms could develop later on (outside the RMA window). I couldn't just take the manufacturer's reassurance at face value.


> This essentially involves measuring lengths of wires from memory controller to individual DRAM chips. The idea there is that it is impossible to make them well-enough matched for the frequencies involved, so the deliberate difference is compensated for in logic and software (also it saves space on PCB of both motherboard and the DIMMs themselves).

https://www.systemverilog.io/design/ddr4-initialization-and-...

https://www.youtube.com/watch?v=_U3-hST9YBg


Check bios for fast boot setting perhaps?


My PC takes about 5 seconds to boot to be usable.


Most likely because that ordinary computer of that time wasn't trying to bring up any network devices.

Simply put, strip down an OS to the same feature set of that ancient computer and the modern OS will be a lot faster. Some of the networkless VMs I mess with boot in a second or two, but you see we've abstracted most of the hardware away. So, mostly the problem is a hardware one.


Windows 8.1 boots up in under 30 seconds on a 2012 mid-range Thinkpad X220 with the stock HDD.


If you have to wait on a computer, it's not fast enough.

Steve's argument here is widely used in the industry. It's almost emotional blackmail (fail and be a killer) but classic nonetheless.


> It's almost emotional blackmail (fail and be a killer) but classic nonetheless.

I read it much more as inspiring people to consider that they have an impact on people's lives.

It's strikingly easy to blame the user for slow software, or blame the PM or Org for pushing features and speed of development over speed of the product.

Steve's mantra here is that software performance has a material impact on daily lives. Pointing something out is not emotional blackmail.


> Pointing something out is not emotional blackmail.

True, but disingenuously implying that something like slow boot times costs lives is.


Not costing lives. But saving lifetimes across a population.


"So if you make it boot ten seconds faster, you've saved a dozen lives."

That's emotional blackmail. The implication is failing to do that will cost a dozen lives. It's also incorrect. Making it boot ten seconds faster saves zero lives.


I don't know how to understand it for you.

You could make claims like that about anything. You could equally claim "Have a nice day" is a threatening remark.

You have to be spectactularly stupid to think that not making software faster is going to cost people their life.

But it could, across a population, cost entire lifetimes worth of time.

So yes, it's important to consider your software's cost to humanity.

The issue here is that you're rejecting a clear explanation of your externalities.

Stop rationalising the externalisation of your costs; they still exist even if you don't enjoy being confronted with it.


> The issue here is that you're rejecting a clear explanation of your externalities.

No, the issue here is Jobs making a legitimate point in the most emotionally manipulative way he could think of.


Honestly I couldn't possibly disagree more.

Pointing out that you're stepping on toes is not emotional blackmail.

Emotional blackmail is either direct:

“If you hang out with your buddies tonight, I will pack my bags and leave you.”

or indirect:

“I don't even spend time with my friends because I want to spend more time with you."

What Jobs is doing is pointing out a fact.

That it hurts you this much informs me more of your developmental attitude than it does about Jobs being an asshole. Which he was, but not for this.


I'll go out on a limb and suggest the clever people at Apple were aware making ill-performing operating system software isn't going to literally kill people.


Of course. I'm not asserting otherwise. I'm also not disagreeing with the underlying point. What I'm disagreeing with is engaging in highly manipulative emotional language -- which isn't even technically correct -- to make it.


I can easily imagine a situation where life support hardware needs to reboot, and taking too long to do so would be life threatening.


Wasting people's time. That's a good enough reason.


Sure, I agree. I just take issue with the framing. It's highly manipulative.


What is a life if not time well spent?


This dovetails with another Jobs story -

> After the iPad launch, Jobs supposedly walked into a meeting with the Mac team, carrying an iPad. He woke up the iPad, which happened instantaneously. Then he woke up a Mac, which took a while to come out of sleep. Then he asked something like, “Why doesn't this do that?”

Without the iPad there to show it was possible there would have been arguments about memory speed and disk speed etc. And faster Mac sleep/wake put pressure on Windows to up their game.


If this is valid, how about the countless animations everywhere in UIs today that waste time for no other reason than looking pretty the first hundred times? The application switcher on a phone I use has a switch time of 0.5s-1s with animations, practically instant without.


There's a real UX benefit to it, is why. Things instantly changing to entirely different layouts take time to process visually; if things lerp to their new positions, that processing time is cut down to the length of the animation, which is usually around a quarter of a second, not half or a whole one. It might get in the way of speedrunners and power users (feel free to disable them), but you're not the target audience. The target is the average user who doesn't have every UI nook and cranny burned into muscle memory.
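For readers unfamiliar with the term, "lerp" is just linear interpolation between the old and new position over the animation's duration; a minimal sketch with an illustrative easing curve (not any particular toolkit's API):

    def lerp(start, end, t):
        """Linear interpolation: t=0 gives start, t=1 gives end."""
        return start + (end - start) * t

    def ease_out(t):
        """Decelerating curve so the element settles gently instead of stopping abruptly."""
        return 1 - (1 - t) ** 2

    def position_at(elapsed_s, duration_s, old_pos, new_pos):
        t = min(elapsed_s / duration_s, 1.0)   # clamp once the animation is done
        return lerp(old_pos, new_pos, ease_out(t))

    # A roughly quarter-second transition, sampled every 50 ms
    for ms in range(0, 300, 50):
        print(ms, round(position_at(ms / 1000, 0.25, 0, 100), 1))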


It's a nice theory but it only works if the animations are smooth and designed to improve understandability. The vast majority of UI animations are pure visual flourishes that take twice as long as they should and don't make any kind of sense spatially or physically or improve the user's understanding of what's happening at all. There's a lot of cargo cult UI design out there.

And what's worse is that most of the animations either don't start at the initial state of the UI or don't finish at the final state, or perform so badly that they hardly show any frames in between, so you have the worst of both worlds: abrupt, jerky transitions and wasted time.

UI transitions that make spatial sense, are fast enough, are fluid, and don't slow down typical use of the UI are rare unicorns.


I unfortunately 100% agree. While an amount of whimsy should be everywhere, animation shouldn't be used as just eye candy. Like every other aspect of UI design, it has to be used with purpose and care. And yeah, that's way rarer than it should be.


Funny. I've had people hovering over my shoulder comment how my PC is so much faster than theirs when it was actually an RDP session to another PC, which seems to disable almost all window animations by default.


Not all animations are useless. Actually, any useless animation has no place in the UI.

- Some animations can be overlapped with time-consuming tasks to keep the user engaged but waiting at the same time. I think iOS does that when switching to an app that was swapped out to disk. Loading takes time, so the animation compensates for some of the delay while the app is resuming. If there were no animation, the user could think they didn't perform the action correctly and might be inclined to repeat it, causing frustration.

- Some animations are necessary to orient the user in the UI flow. For example, the minimization animation moves the window to the icon the user needs to click in order to restore the app. The animation also helps the user differentiate between the close and minimize operations.

- Some animations are necessary to give the user proper feedback while keeping things responsive. One example would be the spring animation you get at the end of a list when scrolling on a touch screen. If there were no spring animation there, the user would have no way to know whether that was the end of the list or the touch screen had stopped working.


A lot of software also puts in some kind of input delay/rate limiting for no apparent reason.

Video game console system UIs and some game menus seem to be really bad about this for some reason.


Cheap phones have terrible frame rate so they have to make the animations long to appear smooth.

Imagine a short 200 ms animation at 25 fps: it only gives you 5 frames. It's going to look janky and tacky. Make it 1000 ms and it looks smooth and nice, except it's hopeless to use.

(Unpopular?) solution: get an iPhone. Their app switcher works as fast as your finger moves, with no problem delivering a consistent 60 fps.


The phone in question has smooth animations. They're just very slow as if to show how smooth and cool they are. It's also from a very well-known brand. I could double animation speed with developer settings but even with this I felt like animations were too slow.

The solution was to disable animations. However, this sometimes breaks things. For a few months this broke multi-tasking (two apps on the same screen), but the vendor fixed it (though they reworked multi-tasking at the same time in a way that prevents switching one of the apps, so it became almost useless for me anyway).

Unfortunately I like having the ability to install what I want too much to get an iPhone but I understand how people valuing stability may prefer it.


I used to think there was something wrong with _two_ of my Plasma installs because everything felt generally sluggish. It wasn't unusably slow, just enough to notice it.

Turns out it's because the animation speed was so low (default). I doubled it and everything feels 1,000x better.


If you are on iPhone, you can switch

Settings -> Accessibility -> Motion -> Reduce Motion

The Android a11y menu probably has something similar. Try it out and see if you like it more.


You can do it on a per-app basis. I turn it off globally and then turn it back on for the Home Screen, Books, and a couple of other apps.


Windows 11 takes about 12 minutes to boot from an HDD. Imagine trying to boot it from an FDD.

Installing Windows 11 and then waiting for all the updates to install on a HDD takes about 8 days.


Come on now, HDDs aren't that bad.

https://www.youtube.com/watch?v=MpNagBwWlNk


Well that is not at all what happens on my machine, which is a Core i9 13900K with 128GiB of memory. It just grinds and grinds and grinds for ages.


Why do you use a HDD? Of course, I ask in jest, but I'm also a bit curious.


I have a very excellent SSD which I removed from that system because I am certain that the Windows installer would fuck it up, and I did not want the hassle of trying to fix it. So I pulled it out of the box to keep it from harm, and the only media I had at hand at that moment was a WD SATA HDD. I thought it would be slow, not kill-me-now slow.

I do not "use an HDD" of course. It was improvisational.


The same reason anyone has always used a HDD? … they're dirt cheap, compared to SSDs.

I'd consider hybrid being the best cost option, with a small SSD backing frequently used data, like the OS. But there's more complexity in that setup. I'm also a Linux user, and boot times don't bother me.


> they're dirt cheap, compared to SSDs.

If you need very large drives (4TB+), maybe, but 1-2TB SSDs are so cheap nowadays. 2TB SSDs today are cheaper than 2TB HDDs from 10 years ago, and the price discrepancy is quite narrow unless you're looking at 4TB+ drives.

I don't even bother looking at HDDs for my own computers anymore unless I need bulk storage for videos or something.


Dirt cheap in which measure?

I was at Microcenter and some 1TB (rather questionable) NVMe drives were $30 on special. Going to be difficult to get cheaper than that.

Now, lets turn your equation around. What is the cost per IOPS of your HDD versus SSD? HDDs start to get expensive very fast in that measure.


In terms of $/B.

Yes, HDDs are slower than SSDs. If that axis matters to you, you'd use an SSD, particularly NVMe. (Which is sort of implied by the hybrid setup I suggest.) If storage capacity matters, HDDs. You can see this reflected in market prices, though it does look like SSDs are surprisingly cheap these days, comparatively. Historically this has not been the case. (I wonder if economies of scale are now working against HDDs suddenly, or what? There's no reason for them to cost the same or more than an SSD — the market would collapse. Although I swear market pricing for many components hasn't made a lot of sense, recently… i.e., RAM has seemed horrendously expensive.)


There's some sort of big SSD price drop in the past 3 months. I dunno what that's about, but I did upgrade a machine, so that's nice.

There does definitely seem to be a pricing mechanic in that hard drives never really scaled down in minimum unit cost; the basic parts of a hard drive still cost real money, so if you can do 2TB per platter, and a top of the line drive has 10 platters, a single platter 2TB drive costs a lot more than 10% of the top of the line drive. OTOH, flash controllers aren't that expensive and/or the cost of the controller scales with the capacity, so SSD prices tend to be more linear with capacity.

If you need a lot of space, $/B means a lot, but if you just need an ample amount of space, $/device is more important, and SSD drives have hit the point where an ample amount of space is available for less than any hard drive.


The most cost-efficient option is to have one cheap SSD for booting and a handful of apps that need the speed, and then use an HDD for storage. It's been that way for 10+ years.


The slowness you see with NVMe isn't in boot anymore; instead it's in the BIOS. As memory gets faster, it takes longer for the motherboard to train to hit those XMP targets, especially with memory still super far away from the CPU. For me, rebooting involves ~20 seconds of staring at a blank screen while the motherboard does memory training/initialization on 6000 MHz RAM.


… that video really doesn't sell Windows very well. My Linux laptop boots ~40s faster.


I fucked up the partitions on my 2017 iMac with a Fusion drive a short while ago trying to create a dual boot system, and ever since, my Mac was slow.

I think from beginning of start-up to a somewhat usable system was maybe 5 minutes? Quite long either way.

But just last weekend I got sick of the slowness and found there's a 'diskutil resetFusion' [0] command that restores the partitions to the default. So I ran this command, reinstalled the OS, and now my iMac is pretty speedy again. Not great, mind you, but way better than before.

Lesson learned: dual boot on a Fusion drive is a bad idea.

---

[0]: https://support.apple.com/en-us/HT207584


Not seeing those boot times, but I rarely reboot. I usually reboot my W10 box once every few months or so. Our IT department commissions our Windows PCs in about an hour. Something seems very very wrong here, but I'm not an IT expert.


If you're not rebooting your W10 box every month, then every time you do reboot you're doing Windows updates.


True, but our IT only lets important/critical updates through, so it's not really a big burden.


They must have a lot of "Telemetry" to collect on you.


I thought telemetry was supposed to improve the experience. Not make it worse?


It improves "the" experience; not "your" experience. ;)


> Windows 11 takes about 12 minutes to boot from an HDD.

Mine doesn't. It takes 3-4 minutes (which can easily feel like an hour).


This cannot be true.


I have a W10 install on a 7200rpm HDD and I believe it.


Can confirm the boot time with Win10 on HDD. Can't argue about updates, took about a couple of hours, def. not days.


Try to remove Windows/Microsoft from your life; Microsoft is no longer decent.

We need to migrate to Linux.


Wow. That is outrageous.


I only know this because I needed to use a utility from Asus to update the Intel ME, and it only runs under Windows. I naively assumed it would not be that much trouble to throw a hard disk that was lying around into that PC and install Windows thereupon.


Also false.


I'm kind of surprised Windows 11 allows you to install to or run from something that isn't an SSD. Windows 7 ran just fine on spinners, but Windows 10 is pretty bad; I'm not surprised Windows 11 is worse, but they really should just disallow it.


Maybe they shouldn't have such resource-creep that REQUIRES an SSD. Maybe they WOULDN'T if there weren't mountains of bloatware and telemetry bs.


I mean, that would be great; but if nobody is holding the line on resource-creep, as is obviously the case, and nobody is testing if releases are acceptable on HDDs, as is obviously the case, they should just change the published requirements to reflect reality.


> they should just change the published requirements to reflect reality.

Marketing won't let them so long as it would piss off PC OEMs that still ship crappy systems and want to use the Windows logo.


Shouldn't be a hard sell for OEMs; the official specs say 64 GB is enough storage for Windows 11, and I can get a 128 GB SSD for $15 retail, whereas the lowest-priced hard drive I can find is $25 retail (500 GB, but 3.5"), so if you're a cheap PC OEM, putting in a crappy, tiny SSD saves money. And the only systems without SSDs I saw on BestBuy were refurbished machines shipping with Windows 10.


Why shouldn't they? People don't buy operating systems to be slimmed down...

If you got a computer and it didn't come with all the needed drivers and a web browser, along with most of the functionality needed to print, you'd most likely wonder what decade it came from. All that stuff I listed, even without the telemetry, is still going to run like dog shit on an HDD.

I honestly think users are forgetting just how badly fragmented hard drives of yore used to run, and those same spinning disks are not any faster these days. Even an OS cut down to barely do anything still took longer to boot from a hard drive than my current computer takes to get all the way to a browser on an SSD.


Yeah, hard drives are never going to be great (although 15k rpm drives aren't too bad), but IMHO the real thing that makes perf awful is that Windows 10 (and I assume 11 has gotten worse) can't ever seem to stop writing to the disk. Those writes seem to interrupt reads enough that you can never get good sustained read speeds, so loading anything is painful.

I'm not going to set up a system to test, because it's too painful, but I'm now idly wondering if you could set the checkbox on a hard drive for "Turn off Windows write-cache buffer flushing on the device", and if that would help. Doing an aggregated write of a couple MB once a minute would probably work better than doing a few KB every second. Of course, at great risk of data loss, but YOLO. (A smidge of research seems to indicate this setting is about asking the device to pretty-please flush its internal write cache, so it might help a bit, but probably not very much; maybe there's a knob somewhere to tune the system file cache.)


Well... I remember some press and discussion about InterBase (whose open-source fork became Firebird) - and its storage/self-healing recovery model being critical for some scenarios "back in the day". Some quotes:

"AFATDS includes 935,000 lines of Ada code, running on an HP RISC Workstation and the Army's Light Weight Computer Units," according to John Williams, spokesman for Magnavox Electronic Systems Company, the prime contractor on the project. "We needed to work with a single database that could scale and operate across Unix and PC platforms. The product also had to install quickly and provide high availability without monopolizing our systems resources."

"Decision support of this nature requires a modular and flexible architecture that would support both distributed processing and distributed databases. That's why we chose InterBase. It out performed the competition and convinced us that it would be reliable in life and death situations."

The exact nature of the discussion was that in some situations, the firing of the main weapon in certain tanks would generate an internal EMP event, so systems would reboot - they had to have extremely fast reboot and recovery times... so they could fire again...


If only he knew how many millions of lives would be lost indefinitely scrolling on a small sheet of glass.


Errr my biggest shock and awe moment was Guild Wars 2. A bit after launch I was playing and an update came in. "Please restart the client now after patching"

Okay... Let's click that button!!!!

Game... shuts down... downloads an update... patches... starts up... loads me back into where I was.

All this in... 1 minute flat! Baldur's Gate 3 can't do that on today's hardware with an SSD, a much faster processor, and 4x the RAM, compared to a game from over a decade ago on significantly crappier hardware.

That's what solidified to me that the game was rock-freaken-solid.


Sort of related: https://www.hillelwayne.com/post/performance-matters/. Well, not really - that one is about saving lives with performant software, but more literally.


I wonder what he would have said about the 20 minutes when you can't use the computer and the 1+ GB download it takes to update a state-of-the-art mac from macOS 13.5 to 13.5.1 that has one (1) bug fix ("macOS Ventura 13.5.1 fixes an issue in System Settings that prevents location permissions from appearing.")

https://news.ycombinator.com/item?id=37206660

I miss having Steve running Apple.


Coming from the patching experience on various Linux distros (and even Windows), I really want to know what Apple is doing under the hood with macOS updates. Their point updates are multiple gigabytes and often take 20-40 minutes to install. My Arch system updates itself in a couple minutes (even if I haven't updated for a month) and there's no "unusable" phase of the upgrade process, other than a normal reboot for kernel updates.


A few years ago, they moved the OS to a “sealed system volume” — basically, the entire OS is stored on an immutable disk image, signed and verified with a Merkle-tree sort of structure. This has a few advantages: malware cannot modify the OS, you can’t brick your system by accidentally deleting OS files, updates are far more robust (they don’t have to change files on your root filesystem), and the OS can be stored unencrypted meaning you can boot the system without requiring the user’s password first. (And of course, there’s an opt-out if you really want to modify OS files.)

The big downside is that installing an update means you have to rebuild and re-sign the entire OS image, which takes forever. When they first introduced this model, I was surprised at this: I expected they could generate the new OS image in the background, while you’re still using the computer, then just swap over to the new image with a single reboot, instead of requiring a ton of downtime. I think they might finally be doing this with macOS 14/iOS 17 — I’ve been running the betas for both and noticed restarting to install updates has become far, far faster — like maybe a minute or two.
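
Not Apple's actual implementation, just a minimal sketch of the general Merkle-tree idea mentioned above (hash the pieces, hash pairs upward, sign only the root), to illustrate why a single signed root hash can vouch for an entire volume:

  import hashlib

  def h(data: bytes) -> bytes:
      return hashlib.sha256(data).digest()

  def merkle_root(blocks: list[bytes]) -> bytes:
      # Hash every block, then repeatedly hash adjacent pairs until one root remains.
      level = [h(b) for b in blocks]
      while len(level) > 1:
          if len(level) % 2:
              level.append(level[-1])  # duplicate the last node on odd counts
          level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
      return level[0]

  # The vendor signs the root; at boot, re-deriving the root from the on-disk
  # blocks and comparing it against the signed value detects any tampering.
  volume = [b"kernel", b"frameworks", b"apps"]
  trusted_root = merkle_root(volume)                   # published/signed by the vendor
  assert merkle_root(volume) == trusted_root           # untouched volume verifies
  assert merkle_root([b"kernel", b"malware", b"apps"]) != trusted_root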


> I’ve been running the betas for both and noticed restarting to install updates has become far, far faster — like maybe a minute or two.

Nice! (And thanks for the backgrounder. It's the first time I've seen this explanation on HN.)


Doesn’t Fedora Silverblue follow the same model? I haven’t used it much, but I remember upgrades there being faster, not slower.


I assume he would say the same things I would hear him say in meetings where the installer team would show him the latest versions of the application. A special memory comes from the time when the installer progress bar started going in reverse. The installer and mail teams received a lot of abuse. It took a special person to stay motivated given all of the challenges they faced and the feedback they got from SJ.


As a customer (my personal computer/display/phone/etc spend with Apple over the past 20 years or so: $25k+): I would prefer having someone in charge who can tell/understand/sense and say that something is clearly not good enough, and then actually get it solved. Tim Cook does not strike me as that kind of guy.

The abuse isn't required though.


(These extremely slow updates thing has been going on for many years now. So many lifetimes wasted.)


I recommend reading “Revolution in the valley” by Andy Hertzfeld, who is also the author of this story. The book is a compilation of all stories from folklore.org including more interesting details about development of the Macintosh.


Back when most everybody ran Connectix RAM Doubler and Connectix Speed Doubler on their Macs (which actually worked!), I was praying for Connectix to release a Boot Doubler that would make every other boot instant!

https://en.wikipedia.org/wiki/Connectix

https://apple.fandom.com/wiki/RAM_Doubler

https://apple.fandom.com/wiki/Speed_Doubler

https://68kmla.org/bb/index.php?threads/connectix-speed-doub...

https://www.betaarchive.com/forum/viewtopic.php?t=31852

https://news.ycombinator.com/item?id=21768641


This is also a great argument for power saving. Shave a watt or two of consumption from your mass-market device or application, and suddenly you've saved hundreds of megawatt-hours over the years.
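
A quick back-of-the-envelope sketch with made-up but plausible numbers (a hypothetical fleet of 100,000 devices, 1 W saved each, 8 hours of use a day):

  devices = 100_000        # hypothetical fleet size
  watts_saved = 1          # W shaved off each device
  hours_per_day = 8        # assumed daily usage

  wh_per_year = devices * watts_saved * hours_per_day * 365
  print(wh_per_year / 1e6, "MWh per year")   # 292.0 MWh per year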


It's very fascinating how small amounts of time people take to do/wait for something add up over a huge population.


Reminds me of this scene with Stanley Tucci talking about building a bridge in the movie Margin Call: https://www.youtube.com/watch?v=m8Mc-38C88g


I certainly wish Windows would do something about these endless boot and even worse shutdown times.

Even worse, I want to go home, not wait 30 minutes for updates to install.


I think about this a lot whenever I'm waiting on a long compile. How many lives has complicated template metaprogramming in C++ taken?


I was reading this while compiling


Nicely played on the double meaning of "save". Couldn't be done in every language.


It’s not Steve. It’s the engineers who care about saving lives. I have tried to pitch the idea of saving lives to different people. Many of them think it’s nonsense to care about other people’s business.


Ah, funny, I just shared this link in a comment a couple weeks ago: https://news.ycombinator.com/item?id=37053941

It comes up so frequently in conversations around software/computers because it reflects a really empathetic mindset that I feel is becoming more and more rare...


There are about half a million minutes in a year, so 50 million seconds is a year and two thirds. At the rate of saving 50 million seconds a day, in a year you'll have saved around 608 years—which is only a dozen lifetimes if a lifetime is around 50 years. Still, that's a pretty close approximation for an off-the-cuff guess.
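
For what it's worth, the figures check out under that same approximation:

  seconds_saved_per_day = 50_000_000
  seconds_per_year = 500_000 * 60            # the "half a million minutes" year

  years_per_day = seconds_saved_per_day / seconds_per_year
  print(round(years_per_day, 2))             # 1.67 years of waiting saved per day
  print(round(years_per_day * 365))          # 608 years saved per year
  print(round(years_per_day * 365 / 50))     # 12 fifty-year lifetimes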


I'm sure he'd have planned or thought about this beforehand.

Steve's famous "computers are a bicycle for the mind" line was refined over a long period of time and countless interviews. We only hear about the one time he perfected it, where it made an impression. Many other instances are on YouTube; in one you can see him trying out different alternative lines.


The problem is that while it's an impressively close approximation for an off-the-cuff guess (at least if we charitably translate his "dozens"/"dozen" to 12), to the extent it was pre-planned it's a terrible approximation. ~50 years as a lifetime is the right order of magnitude (and thus a very good result for a guess), but is too far off to be any good if precalculated.


In a comment here: "We have 10 million people downloading and installing this patch, so every minute extra we take is another fraction of a human life we’re spending."

Following the logic of an eye for an eye, the failure of developers to remove long waits in mass-market software products should be punishable by... death ?


It probably saved the careers of thousands of ADHD people, who just wanted to start working before they got distracted!


This is good bullshit because it’s close to the truth. Not quite a dozen lives but order of magnitude right:

  (50,000,000 seconds saved per day) ÷ (60 seconds / minute) ÷ (525,600 minutes / year) ≈ 1.6 years saved per day
  (1.6 years saved per day) × (365 days / year) ≈ 580 years saved per year


I never understood why people calculate time savings like this. Similarly, for a developer, one 5-hour block doesn't yield the same productivity/results as five 1-hour blocks, due to "context switching overhead", for example.

Claiming you saved a couple of lifetimes when all you can gain is a couple of seconds is so misleading.


I wish someone at Microsoft would do this for O365. Losing 2-5 seconds any time I click a link is painful.


If we took all the lives lost to waiting for Jira, we’d have the time to speed up things like Jira.


I remember my MacBook booting up lightning fast in 2013 (Leopard? Earlier? Dunno). Those days are gone.


My M1 MBP on Monterey boots in a few seconds.


I didn't know this before, but it's cool that back then Apple's directory explorer was already called 'Finder' and the name hasn't changed since.


This story really drives home the Bill Burr joke about Steve Jobs' real legacy being all about screaming at engineers to do stuff.


What Andy giveth, Bill taketh away.


Just load an addicting easy game during boot process. Then users won’t even notice :)


When our website was down due to maintenance, we used to run JSTetris on the error page, so people would stay on the page, and they would get redirected to the web site as soon as the maintenance was over.

Some people even complained that they shouldn't be redirected automatically because they'd lost their progress :)


Games on the Commodore 64 started doing that in the early 80s. Loading from the cassette could take ages, so the devs would put in something like space invaders or missile command to entertain while the user waits for the main event.


Namco had the patent on that.


I was hoping it would explain what they did to speed it up.


Nostalgia this and that... in 1983 I had a calculator and flew to Venus...


This reminds me of the “XY Problem” framing [0], a concept that has been very helpful over the years when communicating with customers about feature requests.

Many people can imagine how they’d solve an immediate problem, but never pause to examine whether or not this solution is ideal, or generalizes beyond a specific situation.

Another phrase that comes to mind is “fall in love with the problem, not the solution”. If you understand the problem space deeply, either many solutions can emerge, or one solution emerges as clearly the best place to focus.

In my years as a product manager, it surprised me how many PMs don’t think this way, and just tack on feature after feature, convinced this is the best thing for the customer, when often the thing they need is not something they know how to ask for.

- [0] http://xyproblem.info/


> This reminds me of the “XY Problem” framing [0], a concept that has been very helpful over the years when communicating with customers about feature requests.

It also ruined stackoverflow, since replies which ignore the question and assume that the OP really meant something else end up being so much easier to write/vote on than an actual answer.


Many, many things are wrong with stackoverflow. Insisting that every discussion be factual and opinion-free pushes you deep into the McNamara fallacy of believing that things that cannot easily be quantified don't matter.

It's a site I sometimes use but dislike intensely.


I think the absolute hardest thing to get information on is "I have XYZ problem, I am aware of solutions A, B, and C. What is the best solution among these, what are the trade-offs between them, and what solutions am I not aware of?". Now, this is just a truly difficult question, but Stack Overflow solves that problem by forbidding such questions, which is understandable, but I think also a shame. At one point in time, I thought maybe Quora would try to fill this gap, but they went off in some other direction that I never understood. Most other "social" things (reddit, etc.) are discussion rather than Q&A. Or they are blog posts, where the focus is usually on solution A, with solutions B and C presented only for contrast, because solution A is what motivated the author to write the post.

I kind of want Wirecutter, but for technologies.


Yes, that would actually be much more useful to me than what stackoverflow is. A vast number of the questions found there can be easily answered by RTFM and/or doing some direct experimentation. The harder ones would be more useful.


Yeah. I think it's also why chatgpt (and copilot, etc.) actually did turn out to be a strong SO competitor, because it actually can do a pretty good job on these factual questions.

But unfortunately it's pretty bad at this other kind of judgment-based compare-and-contrast question. It's especially bad at the "what other solutions am I not aware of?" part, because it isn't kept up to date.


Isn't Bard kept up to date?


Good point, maybe I should try using that more.


You seem to be trying to replace a basic peer-review of an engineering design that typically involves a paid team with advice from poorly-known, pseudonymous strangers with reputation crowd-sourced from a web site's user-rating system.

Frankly, I think that's asking a bit much. If you want a high-quality peer review of design proposals to bounce ideas off of others and discuss tradeoffs, you need a team. Maybe something like a meetup group or mailing list for a specific technology, programming language, or industry sector. But it goes beyond one-off Q&A, and I can also understand why Stack Overflow, with a goal of becoming a repository of perpetually useful knowledge that is general enough to be useful for anyone into the indefinite future, does not want to host such project-specific discussions.

Why not just develop in the open and collaborate explicitly with other parties also working on the same project? What you're asking for sounds close to something like the various special interest groups and public discussion of improvement proposals you see in things like the Python programming language or Kubernetes, or discussion on LWN about specific challenges the Linux kernel team faces.


I don't think that's asking a bit much.

If it were, then there'd be no reason to prohibit such questions... people wouldn't ask them, because they would never be answered. The only reason to prohibit them is because they would get attention/answers where none was desired.

The trouble with StackOverflow, is that what the users want and need does not match what the owners want. The owners want something monetizable, something that can look elegant and beautiful (hence the PR release a couple of years ago where they were positioning it as some "encyclopedia of computer science" or whatever). They figured out that the users could be denied what they want, while still (slowly) creating what the owners themselves wanted.

> and I can also understand why Stack Overflow, with a goal of becoming a repository of perpetually useful knowledge that is general enough to be useful for anyone into the indefinite future, does not want to host such project-specific discussions.

I'm not sure I'd characterize them as wanting that, but if they did... how would that be at all useful to anyone except CS undergrads trying to get someone to do their homework for them? Literally nothing of what people ask there day to day will be generally useful into the indefinite future. What do you want to ask that will be useful 40 years from now? Neither anything language-specific nor anything domain-specific will be relevant to anyone but a historian. Even the cutting-edge stuff today will have long since been wrapped up into some black-box library that everyone will use without understanding it.

If you were correct, SO could never be anything more than some useless little dumpster where the same 5 people whine on about the quickest sort algorithm.


All true, to which I'd add:

It's like giving a big sales force a financial incentive: you have to be careful because they'll just game it, relentlessly, all day long. They won't care about your corporate priorities -- just getting that incentive money.

On SO, people get "reputation points." Those "same 5 people" game that system like salespeople winning that prize. You answered a question? They don't want you as a competitor, so they downvote you. You don't like their answer? Too bad, you don't have enough reputation points to downvote them.

To pick another analogy: they're like high school cheerleaders voting on who can become one of them.


> You seem to be trying to replace a basic peer-review of an engineering design that typically involves a paid team with advice from poorly-known, pseudonymous strangers with reputation crowd-sourced from a web site's user-rating system.

Yes that's a correct interpretation! In fact, I almost wrote something like "even though this is forbidden on SO, it's one of the most common kinds of conversations within and across teams in the workplace in my experience".

I also agree with pretty much everything else you said about how these discussions do happen in project-specific ways on mailing lists and such.

But I disagree that a generic version of this couldn't exist. (Maybe it can't anymore though, because everyone will try to do it solely with AI now because that's what's hot.) I think before the existence of Stack Overflow, you could have said all these same things about the kind of questions that appeared there. Prior to having a good place for getting factual answers from "poorly-known, pseudonymous strangers", it was necessary to get those answers from professors or teammates or consultants or independent research, just like it still is for these more subjective questions.

I definitely agree that this is a more difficult genre than factual Q&A, and I don't begrudge SO their choice in what content to focus on, but I think it was a choice to go that route, not an inevitability.


This is the most techie social media site I use, and I see constant complaints about the other techie social media site, StackOverflow. Why doesn’t someone test the theory and come up with some competition?

I think this is normally an unreasonable ask (when we’re complaining about, like, cars, clearly that’s not in this site’s aggregate wheelhouse). But I mean this is a website about start-ups, full of techie web-devs complaining about a website that they all use.


I think it’s because Stack Overflow is so close. It does 80% of what you need with a bunch of major warts.

But nobody wants to duplicate the 80% just to solve the warts.


> Why doesn’t someone test the theory and come up with some competition?

ChatGPT has entered the chat.


I don't see that as a bad thing. Stack Overflow only wants to focus on questions that have a verifiable answer. Other types of questions still matter, they just don't matter on Stack Overflow.


Like anything, it needs to be applied appropriately, and I agree that blindly redirecting every request to this framing is not helpful.

But the number of times that it is helpful has been pretty high for me over the years. This probably depends a lot on the customer’s own ability to comprehend the true nature of the problem. I worked in the enterprise/B2B space, where a significant number of requests came from people not technical enough to fully know what to ask without some deeper exploration.


Agreed. But sometimes, especially if you know your problem domain, it feels like asking "how do I keep water out of my basement" and all the answers are "simply rebuild your house at the top of the hill."


It's a matter of vastly different costs, in that case: the solution to the modified problem costs much more to solve than the originally stated problem. The trick is avoiding such a large gap, hopefully with a breakeven that comes in the foreseeable future, if not immediately.

For example: how do I repair water damage on my ceiling in a way that's quick enough to do it after every storm? You mean how do I repair my roof so I only have to repair the ceiling one more time? It's more upfront cost to do both now, but the breakeven is only a small handful of storms away, which is palatable enough to get serious consideration. If the breakeven was (for some reason, hypothetically) 20 years away, actually figuring out how to make quick work of repeated ceiling repairs might be more desirable.


Also when, for example, someone suggests a strategy that is useful in scenario X, but because it can be problematic in scenario Y, they get a bunch of replies warning them about that - even though they had no intention of advocating applying it in scenario Y. That’s also a kind of XYing - “oh don’t do that, it’s bad if you’re trying to Y…” when we’re not, we are trying to X.

For example, when someone says they think the XY problem model is a useful framing when evaluating customer feature requests in product design, they are talking about using it in scenario X.

But inevitably they will attract a bunch of replies telling them how bad it is to apply the XY problem approach when answering questions in a technical Q&A forum. That would be scenario Y.


"You keep mentioning XY problem, but you really meant the AB problem, and that answer is ......"

That's it in a nutshell. And concur with this de-framing non-answer as one of the leading causes of bad StackOverflow solutions.


Stack overflow started out with a lot of Microsoft ecosystem people, eg. Joel Spolsky. I worked at Microsoft in 2008 and this kind of de-framing was a bit of a corporate cultural obsession there at that time. You'd report a bug internally and PMs would ask you what you were really trying to do ... It was frustrating when you wanted people to just fix their shit. Instead people would universally treat you like you didn't know what you were doing and really meant to ask something else. I saw this trait a lot on SO around the same time.


Apparently I was too subtle so let me put a lampshade on it.

The replies to the post which said that the XY problem approach is useful in product development, which are talking about XY reframing being a problem on stackoverflow are XY reframing the parent post.

They are doing exactly what they decry.

The smell of irony is apparently not as thick in the air as I thought it was.


For what it’s worth, I saw what you did there and appreciated/enjoyed it.


Even if you know that the strategy is problematic in scenario Y, other viewers of the reply may not; you are only one of the many potential consumers of the response. Isn't it useful to flag the potential gotchas of a given approach for a naive reader?

I feel like many of the complaints from Stack Overflow users come down to this: in many users' minds, the site is a Q&A forum, while the SO team wants it to be an authoritative repository of technical knowledge.


Most people ask how to make some absurd hack when there is an easy and proper way to solve their problem.


Sometimes what you think is an absurd hack is still what I want to do after having thoroughly considered all other options. It's infuriating in those cases to end up on a Stack Overflow question where someone wanted to do exactly what I want to do, and the only answers are redirecting them to other solutions that I've already considered and ruled out.


The huge majority of people asking questions on SO are noobs and most likely haven't thoroughly considered all other options.

If they did, they should say so in the question.

The majority of people answering questions are also noobs, and this should be taken into account. Experts in their domain don't need SO, and so don't go there at all.

When I was writing my thesis, years ago, SO was already basically useless to me because nobody could solve any of the problems that I was encountering then.

I just use it when I can't be bothered to look up stuff in the documentation, but I see it mostly as a resource for people who are learning or are very early in their career.


This feels like you're saying "SO is and should remain useless". SO is populated mostly by inexperienced developers because the community is hostile towards experienced developers, and part of that hostility is this XY-ing. I also don't find SO to be useful, but I wish it were.

> If they did, they should say so in the question.

You're operating on the assumption that the purpose of SO is for an asker to get an answer. The extremely strict duplicates policy suggests the opposite: the goals and motives of the original asker are essentially irrelevant, because only one similarly-phrased question is allowed ever. If their question doesn't get answered this time, it will never be allowed an answer.


It is very useful for noobs.

Once you become an expert you spend much less time on SO because you just don't need it. So what are the chances that when you do need it, another expert in that specific thing (of which there are maybe 20 in the world) will also at the same time need to use SO, and stumble upon your question?

It's like using tinder in a sparsely populated area.


There’s a more general concept of perception here that is worth thinking about.

Users can get awfully confused by generic, misleading or overly technical error messages. So they call/write you and confuse you even more.

“There is something wrong about X.” Where X is some misinterpreted partial of a message. This only gets cleared up if you let them walk you through what happened step by step and/or examine logs etc.

Error messages are an important part of a UI. No matter if they’re user errors or internal errors.

There are always errors that you don’t foresee and just need to display generic messages for. But even then there should be a very clear, short(!) description and an unmistakable call to action.


Ugh, not 15 minutes before this I was testing a new, yet-to-be-released version of my company's software. And while testing I got an error message like

"Cannot do X with upload"

Number one this is a behavior change and should not have been changed in the first place.

But number two, all the error had to say was "Cannot do X with upload because application is set to Y"

The first one generates a support ticket; the second one gives a legitimate reason why the failure occurred and what they can do about it.


The XY problem:

> This leads to enormous amounts of wasted time and energy, both on the part of people asking for help, and on the part of those providing help.

This is not really true though.

The time spent to answer is not wasted. There are people searching for it via Google, e.g. how to get the last N characters from a variable, and they will find the correct answer.

The time spent by the asker is never wasted. I sometimes know that this is not directly the thing I want to solve, or how I stumbled upon this question. Still, it's a question I have because I'm curious and I just want to know. So, in any case, the person asking for help will learn something.

And all other people on the Internet who stumble upon the question are likely searching for exactly the answer to this exact question, so they get some good value out of it. Or even if not, it likely will have references to what they are interested in. Those other people are ignored here.


Unfortunately the XY problem is now mostly used by know-it-alls trying to show off. At least in my experience.

If you ever find a question that you think is an XY problem, answer X first and then say "did you want Y?".

The worst possible answer is "you should be asking Y".


"Here's the answer to what I wish you asked..."


Politicians do it all the time: "Answer the question you wish you were asked, not the question you were actually asked." And reporters are pretty bad at taking this on.


> reporters are pretty bad at taking this on.

The format of a typical press conference is designed to make it hard for a reporter to follow up when the politician dodges their question, because the politician usually moves on to the next reporter. If they ever get a chance to ask a follow-up, it's after the original context is long gone from anyone's working memory.


If reporters really wanted an answer to the question, the next reporter to be called on could just press for an answer to the previous question. But they don’t; in a press conference situation, the goal of reporters is to be seen, so their fame goes up, and to avoid antagonizing the host, since if they do, they won’t be invited to the next press conference.


Eh, that's part of it, but it's also that the next reporter already knew which question they wanted to ask. They probably didn't pay that much attention to the answer to the previous question because they were busy formulating their own question.


> And reporters are pretty bad at taking this on.

If they do, they won’t get the interview next time.


While I agree that it’s not useful if people are using this to show off, I’d prefer to deal with a few know-it-alls if it means that better product decisions are being made, and dev teams are spending less time building things that customers can’t use or didn’t even want.

The way I see it, there are failure modes with both extremes. I’d prefer the failure mode that involves some occasional annoyance over the failure mode that results in significant amounts of wasted code/effort, and a return to the XY framing anyway when things go wrong.

Ideally, people who are using this find a balance, and can recognize the difference between an obviously straight-forward request and something that needs deeper exploration.

It’s not perfect, but I think it’s a better default.


> answer X first and then say "did you want Y?".

That's a surefire way to cause your suggestion of Y to get ignored and proliferate the bad practice of X.

It's not anyone's responsibility to explain how to do things in a way that they believe is wrong.


If they don't want to explain how to do things in a way they disagree with, then the appropriate response is to not say anything at all.

The current culture on SO is to flood questions with "don't do X, do Y", then upvote those answers. The result is that questions look answered but actually aren't, so the questions stay unanswered. When I come along months or years later having already considered all options, I don't want to have my time wasted by a question that perfectly matches my goal but was never answered because it got drowned in alternative approaches that I already ruled out.


> The current culture on SO is to flood questions with "don't do X, do Y", then upvote those answers. The result is that questions look answered but actually aren't, so the questions stay unanswered.

I think this is the #1 reason why SO isn't a great resource for me.


Isn't it the question author who gets to choose when an answer is satisfactory or not on SO? If a question is full of answers that aren't marked as satisfactory, then there's still an opportunity for someone to come in and get the points by providing a different one. What more can they do, ban people from trying to provide alternative solutions? Surely that is going to create much more harm than good.


> Isn't it the question author who gets to choose when an answer is satisfactory or not on SO?

This would be a fine policy if SO didn't also make a huge stink about duplicate questions. As is, there's one canonical copy of each similarly-phrased question, and a re-ask that says "but for real, I actually want to do it this way" is going to get shut down as a duplicate.

> If a question is full of answers that aren't marked as satisfactory, then there's still an opportunity for someone to come in and get the points by providing a different one.

The system rewards being one of the first responders, not the one who actually answers the question. This is especially true now that they've updated the system to place the highest-voted answer first rather than the accepted answer.

> What more can they do, ban people from trying to provide alternative solutions? Surely that is going to create much more harm than good.

I don't know that there's anything the company can do, since it's pretty clear that they've lost control of most aspects of the culture.


Fair enough, I totally agree that SO moderators are way too overbearing when it comes to duplicates.


Okay, but I've been in plenty of conversations where I ask "I read in a book that we should be doing X, how are people doing X?"[1], and the answers I got, _from a community that included the book author_, were "first, make sure you're doing A, B and C."[2] When in fact I am doing that already. Do I really have to preface every question with "I promise I'm not the idiot you assume I am"?

1: "This book says to monitor ML systems for distribution shifts; what tools are people using to store that data and monitor for changes?" 2: "Make sure you're monitoring normal SRE statistics like request failure rate"


> Do I have to really preface every question with "i promise i'm not the idiot you assume I am?"

Yes, first of all I do think it's up to the person looking for help to fully elaborate their situation in such a way that makes it clear why the X/Y problem doesn't apply to them, since other people with similar issues who stumble upon your thread might not realize that you have that additional context, and the answer is just as much for them as it is for you (if not moreso, since you're just one person).

Secondly, even if you did fully elaborate your situation, it may be that there are people interested in trying to help who don't know the answer to X but do know the answer to Y, and by answering Y they are still providing more value than not answering at all. There's nothing about answering Y that prevents X from being answered by someone else.


> other people with similar issues who stumble upon your thread might not realize that you have that additional context, and the answer is just as much for them as it is for you

IMO, this is what books are for: advice for some form of large common denominators. And if I cite a book, I think it's fair game to assume I am familiar with its contents. And if you encounter my question and haven't read the book, I would hope you benefit just by knowing it exists and maybe even read it.

> Secondly, even if you did fully elaborate your situation, it may be that there are people interested in trying to help who don't know the answer to X but do know the answer to Y, and by answering Y they are still providing more value than not answering at all.

I mean, the longer a thread on slack is, the fewer people bother reading it all. And I have to read it as well before I know it's not actually helping me.


It's not anyone's responsibility to answer at all.


Agreed! Which is why I think it's especially disrespectful to criticize people making honest efforts to help as being "know-it-alls trying to show off" in cases where their idea of the ideal kind of help is different than what the original poster had in mind.


It's frequently NOT an honest effort to help. It's just "well that's a stupid question, let me show you how I know more..."

When you really are trying to help and you think it's an XY you can answer politely by actually answering their question and then saying "but you may want to do this instead". Try it.


Indeed, a good answer to X will make clear why Y is the better option in most cases. But it's a thin line to tread between subtly implying that X is bad, and saying "only idiots do X, anyway here's how an idiot would do X".


You suggested that in your previous comment, and I explained already why I don't think that's a good idea: it's liable to cause your alternative suggestion to get ignored and proliferate bad practices.

If someone has a genuine desire to help, then they also inherently have an interest in making sure people don't continue down paths which are likely to lead to more problems in the end. Otherwise, you might end up spending more time supporting the follow-on issues created due to the misapplications of your own advice than you spent providing the support in the first place, which would not be an efficient way of helping.


> it's liable to cause your alternative suggestion to get ignored and proliferate bad practices.

I just don't think that is the case. On the rare occasion that somebody has answered like this to me I've just read their answer and thought "oh right that makes more sense I'll do that".


For all that it is rightfully derided, it is this aspect of "user story phrasing" that I find valuable. If you can politely ask stakeholders to state their problem in the form "As a _____ I want to _____ so that I can ______", then you find out the 'why' as filled in on the last blank. And then you can use that 'why' to figure out the best actions to take, being careful that you still scratch the itch that the middle blank in the story brought up.


We detached this subthread from https://news.ycombinator.com/item?id=37212534.


@dang, I was pretty surprised to see my reply turn into such a huge subthread.

When you say you detached this subthread, isn’t this essentially removing it from the discussion entirely?

It wasn’t my intent to derail the discussion, but I’m also trying to make sure I understand what just happened here and what you mean by detached.


You wrote a reply to a comment that took the discussion off on a tangent. That's fine (usually), but the threads sometimes make more sense if the tangents start on their own top-level comment, which is what happened: your comment branched off a branch of the tree, was lopped off by Dan and grafted to the trunk.


Ahh, I didn’t realize it was now a top level comment, but I understand now. Thanks for clarifying. Wanted to make sure I’m adjusting my behavior if necessary.


I can't speak for the mods but I think you're fine in this case.


I suppose this makes TV advertisers worse than Hitler


Correct.


Not if the ads are targeting the wrong person.


I think about this every time I see a cookie banner. It’s a 1-2 second delay that plays out millions if not billions of times per day. How many lifetimes have been wasted since those were forced into existence by GDPR?


Cookie banners were present well before GDPR, and they are not mandated by law.

You can avoid the cookie banner in two ways: 1. Do not use tracking cookies (or other tracking tools); or 2. Ask for consent in a non-intrusive way, e.g., directly in the page itself.

We know that no company wants to remove tracking cookies because they need to "improve the service". However, there is no reason not to use solution 2. The only reason is to annoy the user: a dark pattern to push users into accepting cookies.


GDPR does not force those (In fact most of them are illegal according to GDPR). Every site could avoid those banners by just not tracking visitors.


Most of them were not forced into existence by the GDPR.


https://www.youtube.com/watch?v=m8Mc-38C88g

A similar (fictional) sentiment from Margin Call


I like arguments like this because it's a reminder that details matter. I clearly see them as the manipulation they are, but I do like them nonetheless.

I remember watching a story about asylum seekers who had to use Skype to dial in to get an appointment. At one point, one of them says to the camera "I often dream about the call music." I would be surprised if the call music isn't (at this point at least) configurable in some way, but it's still humbling to realize that a minor thing like a loader or sound file can represent the entire product to someone at a very stressful time in their life.



