
> Some of my most popular posts are throwaway quips and memes that went viral on social media. One of my life’s crowning achievements is this: [witty, throwaway, quip tweet].

> In contrast, some of the work I put weeks or months into essentially lost the SEO game and gets nearly zero traffic ... Even though I don’t write for money, there is an immense pressure to produce clickbait — even if simply to add “hey, since you’re here, check out this serious thing”.

This will be different for different people, but I've noticed a moderately strong negative correlation between how much effort I put into something and how much engagement it gets (this seems likely to be different for people who apply their effort to generating engagement). The highest-engagement content of mine tends to be offhand social media comments I make without thinking. Something like https://danluu.com/ftc-google-antitrust/, which summarizes 300+ pages of FTC memos, is lucky to get 10% of the traffic of a throwaway comment and is more likely to get < 0.1% of the traffic of a high-engagement throwaway comment. Of course there's a direct effect, in that a thoughtless joke appeals to a larger number of people than a deep dive into anything, but algorithmic feeds really magnify this effect because they'll cause the thoughtless joke to be shown to orders of magnitude more people, so something with a 10x difference in appeal will end up with, say, a 1000x difference in traffic on average, and even more in the tail.

I don't think this is unique to tech content either. For example, I see this with YouTube channels as well — in every genre or niche that I follow, the most informative content has fairly low reach and the highest engagement content leans heavily on entertainment value and isn't very informative.


My experience on the internet suggests that high-signal information usually has very low memetic fitness. All the good sources of information I've found have been buried away and I've come across them serendipitously.

Not particularly surprising, though: entertainment is the lowest common denominator, so it's much easier for that kind of stuff to spread. High-signal information is the complete opposite: very few people can actually tell if it's valuable, and it's not particularly shareable.

To be fair, most people aren't really looking for super high-signal info anyway. Closer to the minimal amount of information I need, presented in an easily digestible way, or looking for infotainment around something I'm interested in!


> To be fair, most people aren't really looking for super high-signal info anyway.

This is the charitable interpretation. It's not that people are necessarily dumb or that "the algorithm" is trying to appeal to the lowest common denominator. It's that the vast majority of the time, people are "engaging" with the Internet in order to seek out entertainment in the first place.

The world wide web began its life as the promise of unlimited information at our fingertips. But most people who need to engage in "serious" academic research on a topic go directly to specialized sources that they are already familiar with and trust. Even on the Internet, websites like StackOverflow come to mind when it comes to software development. But more often than not I'm going back to textbooks, be it for algorithms or design patterns or what have you.

When I'm taking time to engage with the web, even here right now on Hacker News, it's because I'm taking a break. I'm not engaged in any specific productive endeavour at the moment. I'm looking to fill my time and unwind until I'm back to work and chores.

I used to think about this when I considered why there is so much hate and outrage on social media. My theory is that people are on social media because they're on a break, or they just got home, or they just got their kids to sleep and all they want to do is look at cute cat gifs or watch TikTok videos or whatever. Then the news feed shoves a bunch of stuff in their face that they disagree with or find contrarian, because they're most likely to engage with it, and the fact that they're tired and not in the mood to have a well reasoned conversation goes a long way towards triggering that outburst.

It's a cliche truism to say that the algorithms are giving people what they want. I think what people want most of the time is entertainment and "easy reading/watching." And so there is far more of that than anything else.


Saw this recently as the 'Toilet theory of the internet.' [0]

[0] https://news.ycombinator.com/item?id=40409671


Yeah I agree. That was what the last bit of my comment was supposed to be about, but I unintentionally underplayed it a lot.


Somewhat related is the type of format that some people like to glibly dismiss as clickbait--how-tos, 5 takeaways from $EVENT, etc. Whether they deliver or not, there's at least a promise of practical advice or easily digestible information that doesn't involve working through 2,000 or 3,000 words to figure out what the key points are. I may appreciate a good New Yorker article but sometimes I just want some highlights.


Yeah 100%. We're all in that mode sometimes.


any high-signal sources in particular?



Thanks for the list!


Anything that comes from an academic (.edu / .ac.uk / your local equivalent) domain, Youtube channels of academic institutions (the less professionally recorded the video, the better), the Wikipedia references for a particular topic.



You just want some to look at? Off the top of my head: danluu, prog21, lcamtuf (of course). I had more, but I forgot them. Because they have low memetic fitness.

Hacker News, compared to Reddit.


> Closer to the minimal amount of information I need

It makes me wonder what this 'need' is that can be filled by memetic drivel.


Meme: contagious idea.

It is easy to remember and it produces a positive reaction in the viewer. Quality of information, even veracity, is an afterthought.


If I had to guess, "memetic drivel" gives off the same subconscious signals as safe, healthy communities.

Demonstrating that the speaker is at ease, receives broad support from listeners, and can say things without fear of reprisal.


That's very interesting food for thought, thanks!


Eric Helms, an exercise physiology researcher out of New Zealand who runs a reasonably well-known but certainly not viral Internet coaching practice and podcast, both for weightlifting, made a comment to this effect on a recent episode. If he fixates on engagement metrics, he can see that the lowest-effort cheap shit is what gets the clicks and the eyeballs. Books he has published push nowhere near that volume, and he can only coach a few people at a time.

But what kind of engagement are you looking for? One of his coaching clients is a two-time world champion. Think about some of history's great teachers. Jaime Escalante directly engaged with, what, a few hundred, maybe a thousand math students in his entire 40-year career? Sure, but he deeply impacted these people, and in some cases totally changed their lives for the better. Do you want to briefly amuse a billion people for a few seconds each, or produce world champions and paths from the ghetto to the middle class?


I agree and it can be very frustrating. You put a lot of effort into something and get almost no feedback/traffic (I also don’t write for money or make money from my blog, but it’s nice to see people read it) and then you can write a pithy comment or post that doesn’t dig into the nuance and it will blow up.

Some of my highest upvoted HN comments are 1-2 sentences but when I’ve responded or posted something well thought out or detailed it sometimes gets no votes/replies.

I try to “forget” this phenomenon because I don’t want to trend towards one-liners optimizing for engagement but every time a throwaway comment blows up I get somewhat frustrated that the well-reasoned reply/comment I wrote the previous day/hour/minute was ignored.

Thankfully I’ve had longer comments/posts gain traction so I know it does happen. I wrote a detailed blog post [0] on the Kroger (grocery store) app and posted it to HN but it got no upvotes (maybe 1-2) and didn’t get any comments. Thankfully, and this was news to me, the HN mods will occasionally take a post they liked that didn’t get attention and throw it on the front page to give it some more visibility (and then it can sink or swim on its own). They did that for this post and I got to participate in some enjoyable discussions on the topic.

That’s really what drives me to write (which I do rarely), the resulting discussion/feedback, well at least it’s a close second to just getting my ideas down on “paper” which forces me to think about them in new/interesting ways. More than once I’ve gone into writing a blog post thinking a certain way then I’ve changed or altered my thinking after putting my initial thoughts on the page.

[0] https://joshstrange.com/2024/02/11/krogers-digital-struggle/


I remember that post, thanks for sharing!

Probably half of my posts that hit the front page got there via the second chance pool. It feels like dang is single-handedly turning back the tides of the internet by hand-picking links.


+1 - I've spent months on some videos—research, testing, sometimes travelling to different places to get better data and video to use. Then I spend a few hours on others. Some of the 'big' ones do well, sure, but it's nowhere near proportional to the amount of time worked.

I still do those projects because personally I don't feel like I'm doing as much good when I whip up a video in less than a day from concept to posting. I try to at least have something interesting/educational in each video, even if it's just a Gist or a new GitHub project someone can fork.

That extra work doesn't result in any extra reward/revenue, but at least it keeps me motivated.


It’s what I call the comedian's valley. Naturally funny people are off the cuff and effortless. If they try stand-up, it takes a long time to marry up elite performance professionalism with natural talent. For a long time there's an uncanny awkwardness to it. The same goes for other forms of memetic influence.


I think the dichotomy here is a false one. There is plenty of well-researched, in-depth content that does well on social media. The trick is that it’s also presented in an accessible way that provides value at every step, even the “superficial glance” one.

The mistake many of the “I spent hours researching this piece” creators make is that they don't package that content in an accessible way. Instead it's a huge block of text without any coherent organization or entry point. Ergo it's not surprising that it does worse.


> The mistake many of the “I spent hours researching this piece” creators make is that they don't package that content in an accessible way. Instead it's a huge block of text without any coherent organization or entry point. Ergo it's not surprising that it does worse.

It's like a lot of open source software. Tons of high-quality effort put together to make great things, but very little time put toward packaging it to be appealing to the masses. Which is fine if you're okay with that, just don't be surprised when the average person isn't setting up a toolchain to compile your project from source or editing config files to change settings that aren't accessible in-program.

It's possible hour+ blog posts don't have much of a market, but I know super in-depth, informative, hour+ videos have some form of a market. At the very least Summoning Salt does something right in their videos to get millions of people to watch hour long historical video game videos.


> The mistake many of the “I spent hours researching this piece” creators make is that they don't package that content in an accessible way. Instead it's a huge block of text without any coherent organization or entry point. Ergo it's not surprising that it does worse.

I honestly think this is what draws me to the documentary/podcast/video essay genre so much. I have a hard time concentrating on reading non-fiction, but take the exact same material and deliver it via someone with decent charisma and the willingness to construct it into a narrative story, and I'll watch a 4-hour deconstruction of a TV show I don't even care about.


My gut says - and this is very much gut - that throwaway content/posts probably tend to be comedic, and sometimes we just strike comedy gold. One good, funny sentence can ripple very quickly.


> The highest-engagement content of mine tends to be offhand social media comments I make without thinking.

I think this goes deeper than just comments. I've noticed a similar phenomenon with artists posting their work. Some of their most engaging posts tend to be pencil sketches, rather than their most polished pieces.

I think we tend to be more impressed with things that seem like they were achieved effortlessly, in general. If I was to guess, I'd say it might have something to do with our brains craving energy efficiency, and rewarding us for discovering someone who's more efficient than we are at some task.



Michael Goldhaber - People have limited attention to give anything, but unlimited capacity to receive attention.

The Attention Economy has been inflated by exploiting the above asymmetry.

Bottom line: social media is not designed to optimize the allocation of limited global human attention. It does a fine job of squandering it. And people are beginning to notice.


Social media destroys the value of intellect because it strategically limits the initial exposure of posts in an unfairly tiered manner based on popularity. There is no real way to reach others now unless you cheat or pay for ads.

There is no real logical explanation for its utility anymore, as it serves as a casino for scams and ads. It's really no longer a forum for delivering important information and developing reputation, in my opinion, because the popularity garnered on these platforms can easily be bought, forged, plagiarized, and sold on the black market, and often goes to whoever is most controversial.

Even the people getting attention these days know their popularity only lasts for seconds at a time. There is nothing durable, and most social media clout is also not very memorable in the long term. Eventually (hopefully) real-life interactions will become more important again, after the sheen of tech manipulation on art and news wears off.


Doesn't this kind of make sense? Stuff with high personal resonance, by virtue of being personal, has a specific audience. That sounds trite, but I at least tend to underrate how much the stuff I really like is just a weirdly shaped key fitting into a weirdly shaped lock somewhere in my brain.


> the most informative content has fairly low reach and the highest engagement content leans heavily on entertainment value

Same observation here. I think it's because these channels are optimized for people looking to be entertained, not looking to be informed. There's an impedance mismatch between high value content and what people want while doomscrolling on the couch at 8pm or while on a coffee break at work.


I think this is more of a quantity vs. quality thing. You can make 100 low-effort posts in the time it takes to make 1 high-effort post. Even if there's a 1% chance of a low-effort post getting traction, compared to the 10-20% chance of a high-quality post getting traction, on a net basis the low-quality posts will end up being more popular.
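
To put rough numbers on that (a back-of-the-envelope sketch; the probabilities are just the guesses from the comment above, not measured data):

    # expected "hits" per unit of writing time, using the comment's numbers
    low_effort_posts = 100        # posts you can write in one unit of time
    p_low = 0.01                  # chance a low-effort post gets traction
    p_high = 0.15                 # midpoint of the 10-20% range

    expected_low = low_effort_posts * p_low   # 1.0 expected viral posts
    expected_high = 1 * p_high                # 0.15 expected viral posts
    print(expected_low / expected_high)       # ~6.7x in favor of low effort

So even granting the high-effort post a 15x better per-post hit rate, sheer quantity wins by a wide margin.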


Yeap, just +1'ing this too.

One other axis of engagement is "topical relevance" -- and I think that does have some overlap with the axis of "effort put in". Meaning: putting a TON of effort into a long-form piece tends to relate to some original thought or framing you have. But a lot of people are explicitly looking for something, even if that something is an entertaining throwaway meme comment.

If you go too heavily down the "flesh out topic of deep personal interest", you can end up too far away from the "topic everyone wants to talk about on the internet today" stuff.

Sadly (or not!), I take great enjoyment fleshing out topics of deep personal interest, even when they have limited relevance to the topic du jour. If it were different, perhaps we'd be journalists or more mainstream authors.


On a positive other hand though, I think some of that (engagement with the low-effort joke or whatever) only comes because of respect earnt with the higher effort content.

For example, I read the Money Stuff newsletter and follow its author Matt Levine on Twitter as a result; I'd be much more likely to 'engage' on Twitter if he posted some joke (for other readers: a cartoon of him leaning forward into an email client after a holiday, say) than I would be to reply by email with a well-thought-out, in-depth response with additional information or a correction on some technical detail, even if I worked in the industry and had that information. But I'm only following him on Twitter because of the newsletter.


I think it has to do with the desire to consume organic content.

We currently live in an SEO-optimised hell where everything is monetized. Years of wading through this swill have subconsciously made us seek out spontaneous, unplanned interactions and content, which seem more humane.

Thoughtless social media comments, quips, and many of the funny tweets are prime examples of stuff that is done for the fun of it, without any "ulterior motive" in mind, and refreshing to come across.


I actually am beginning to worry about this from a scientific publishing point of view. Yeah, I know there are editorial board meetings where articles are discussed for publication, but given that the content is increasingly being pushed out through algorithmic channels (Google Scholar, social media, PubMed search, ResearchSquare), I have to wonder to what extent choices are made to optimise for the channels. What metrics are editorial decisions measured by? Does channel performance factor in?


Scientists appear to optimise for citations because that's how they're "measured" against others. The quality and innovation of the research almost doesn't matter if it won't get citations, so you must publish something around what other people are working on, not what you believe has the best chance of progress. To get citations, you also need to "play SEO" on those research search engines, of course (which is why every research paper uses as many buzzwords as it can fit), or make sure you have mutual agreements with "friends" to cite each other in every possible publication. Most heads of departments require everyone to cite their work in everything they publish. It's a wonder that with such an idiotic system (ironically coming from our brightest educational institutions) science still manages to make any progress at all.


I think it was probably the best you could do before the Internet.


I see this even on enthusiast discussion forums and subreddits where high value content is encouraged.

Very often someone will post some thoughtful, high value post in a thread that gets at least a handful of positive reactions. But if someone quotes it with some silly quip that's 5 words or less, it invariably gets 3-4x the response.

Yeah, sometimes it's a tl;dr situation, but it seems common enough even with just a few sentences.


Same goes for programming projects. My most popular projects are always ones that were very quick to make. If I spend months making an app, it always seems like no one cares. If I spend a few days making an app, people will use it.

Perhaps it instead is the simplicity of the idea that resonates with the most people rather than the complexity of the content. Perhaps this isn't a bad thing, either. After all, simplicity is the ultimate sophistication.


I wonder though if there is a long term benefit.

E.g. people might pay attention to you because of your reputation. Your reputation might be based on high-effort posts (over the long term) even if they get less attention. The low-effort posts might get more direct attention, but only because of your reputation, which is indirectly built by the high-effort posts that far fewer people read.

Just a theory. I wonder if people who are actually internet famous would agree or not (since I am not).


There is definitely a long-term societal benefit to doing both kinds of work. Often work seen by only a few people (which might gather only a handful of citations, or none at all) inspires a later generation's breakthroughs at the frontier of human understanding. (For example yesterday there was a post about Hermann Grassmann and his pioneering work in linear algebra which got almost no positive feedback at the time but 150 years later is considered foundational for whole fields of study.) On the other hand making very popular material that helps a large number of people to slightly better understand old well-known ideas can still have huge benefit.


I am more internet infamous, but my one viral success has impacted my life in more ways than I could explain. But that one success took 5 years of my life.


I am only a little Internet famous, but this matches my experience. But I also self-host and stay away from poisonous platforms.


This is why the stock market (specifically the Nasdaq), real estate, and tech jobs are so great for building wealth: none of this unpredictability of having to rely on user/reader engagement or guessing the whims of reader or publisher tastes. For investing, being successful is as easy as parking your money and watching it grow. The creator/engagement economy has vastly more losers relative to winners, which makes it impractical.


It's not just unpredictability. It's that outside of doing explicitly commercial work (and often even then), the average and certainly the median income for a great deal of creative work is really low.


It might be that things that take more effort to produce also take more effort to consume. When readers face a choice between a well-researched 100 page article versus a short (maybe low effort) quip of 100 characters, they are more likely to view the latter.


Funny how it happens like that. Maurice Ravel's most famous piece, Bolero, was written as a simple warmup for his orchestra.


Predicting what people want to consume is hard, especially if # of impressions is your success measure. More broadly, I have been endlessly surprised by how users use my products and what in particular they liked. You do tend to get better at feeling this out from seeing people interacting with your product, but you stand to be bewildered, forever. If only you saw in what circumstances people read your posts!

At the same time, some might say it's about the area under the curve. If 10 folks get their minds blown by an in-depth treatment of some curious topic, that's roughly the same amount of utils as 1e5 impressions on some silly quip, if you ask me.

I, for one, am perpetually grateful to lcamtuf. I have been looking up to him for like 15 years, and he has shaped me profoundly by showing what level of focus, productivity, and insight is possible. You wouldn't think that someone's life trajectory can get changed by a super detailed CNC lore write-up, but here we are, years later (: Thanks!!

Also, if you're reading this lcamtuf, I would like to put one vote in favour of re-instating the American essays. Pls don't pull a Kafka on us! I did read your "choosing how to be remembered" post, but still, the fact that you took the American essays down feeds right into the topic of this current post. I found them positively entertaining and insightful.


Finally? If I look at the last N mastodon.social links that have >= 15 points on HN besides this one, they are:

  Porting games to Linux pays for itself
  People go to Stack Overflow because the docs and error messages are garbage 
  Hallucinated CVE against Curl: someone asked Bard to find a vulnerability
  Curl on track for non-experimental HTTP/3 support 
  Jeff Johnson: "Passkeys are a lie and contradiction." 
  Swift was not given the chance to prove itself as a language 
  The only planet where 100% of Linux systems have working audio is Mars 
  Bevy Needs Sponsorship
That's the entire first page of results. On page 2, we have:

  Things that didn't age well 
  Google Maps is a critical dependency for nutrition facts on mcdonalds.com 
  Google Search Is Over
  Downloading a video should be “fair use” as recording a song from the radio 
  A list of recent hostile moves by Google's Chrome team 
  One thing that has changed in the professional game industry is that 
  SunOS 4.1.4 says it can't possibly be the year 2023 
  Mozilla should call for the removal of Google from W3C because of WEI 
  How Stuff Works replaced writers with GPT-generated content and laid off editors 
  A fun new feature we are working on in systemd: userspace-only reboot 
  Mastodon's active user base has increased by 110K over the last day 
  Mastodon has reached 13M accounts
It looks like about 10% "not finally" content and 90% "finally" content. You can see the same thing by clicking "mastodon.social" at the top of this page.


good stuff


If you click through the link in that sentence to https://danluu.com/empirical-pl/ or read the study itself, you'll see that the paper doesn't support the claims made in the abstract at all.

It used automatic classification that's obviously wrong. Table 1 gives a list of "top" projects for each language and many of them are simply misclassified.

> ... the "top three" TypeScript projects are bitcoin, litecoin, and qBittorrent). These are C++ projects. So the intermediate result appears to not be that TypeScript is reliable, but that projects mis-identified as TypeScript are reliable. Those projects are reliable because Qt translation files are identified as TypeScript and it turns out that, per line of code, giant dumps of config files from another project don't cause a lot of bugs. It's like saying that a project has few bugs per line of code because it has a giant README. This is the most blatant classification error, but it's far from the only one.

> For example, of what they call the "top three" perl projects, one is showdown, a javascript project, and one is rails-dev-box, a shell script and a vagrant file used to launch a Rails dev environment. Without knowing anything about the latter project, one might expect it's not a perl project from its name, rails-dev-box, which correctly indicates that it's a rails related project.

There are other major problems with the study, but that one is sufficient to make the results invalid.
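
To make the failure mode concrete, here's a toy illustration (my own, not the study's actual tooling): Qt stores translations in .ts XML files, the same extension TypeScript uses, so anything that keys on file extensions alone can happily call a C++ project a TypeScript one:

    from collections import Counter
    from os.path import splitext

    # naive extension-based "language detection"
    EXTENSION_MAP = {".ts": "TypeScript", ".cpp": "C++", ".pl": "Perl"}

    # a hypothetical C++ project that ships Qt translation files
    files = ["src/wallet.cpp",
             "translations/app_de.ts",
             "translations/app_fr.ts",
             "translations/app_ja.ts"]

    counts = Counter(EXTENSION_MAP.get(splitext(f)[1], "other") for f in files)
    print(counts.most_common(1))  # [('TypeScript', 3)] -- the C++ project "wins" as TypeScript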


Are you the author of the meta analysis? If so, thanks for your work on that.

But I was not commenting on the quality of the studies. I did not think the author of "The Epistemology of Software Quality" was either.


If you can suggest an edit that will fit within HN's title length limit that conveys the sentiment of the entire tweet, please feel free to do so. Appending enough of the missing text to convey anything useful violates the limit, but maybe the title could be compressed in another way.

I don't think I can edit the title anymore, but one of the mods can edit it.


s/pay medical costs out of pocket/front medical costs/

Shorter and as I understand it more correct.


If you look at what Tog says was actually tested, every single test he describes is completely bogus. There's more detail on this in https://danluu.com/keyboard-v-mouse/, but briefly, a test he describes is:

> the author typed a paragraph and then had to replace every “e” with a “|”, either using cursor keys or the mouse. The author found that the average time for using cursor keys was 99.43 seconds and the average time for the mouse was 50.22 seconds

Sure, keyboard-only users who do that exact task who literally use the arrow keys plus backspace are slower than mouse users, but since keyboard-only users don't do bulk search and replace by navigating to each relevant character with arrow keys, that's a meaningless benchmark.


So they never heard of ctrl-R;e;<tab>;|;<enter>? Even without vim this is like a 5 second task.

Perhaps the key takeaway is that Apple optimizes interfaces for people who don't know how to use a computer.


Note that the research was conducted before 1989. "People who don't know how to use a computer" was approximately everyone.



    :%s/e/|/g
99 seconds? More like 3 seconds.


No kidding. They spent $50 million to be laughed at by anyone who's heard of sed or vim? Yikes.


As pointed out in another comment ( https://news.ycombinator.com/item?id=28068986 ), the people who know sed and vim are likely to be power users, which are a really small part of the populace.

Sure, there are text replace tools within most programs, but that doesn't mean that even half of the people using computers will know how to use them, or will ever bother to learn, when there are alternatives available. To that end, it makes sense to optimize for either of the easiest approaches, be it either using basic keyboard commands or the mouse.

That's also why GUIs are more popular for personal computing, as opposed to CLIs, or TUIs. Of course, CLIs could also be improved ( think tar vs docker UX, which can be achieved with something like https://typer.tiangolo.com/ ) and TUIs are also somewhat underused (nmtui, ncdu and many others are great pieces of software).

As for the actual amount of money spent - that is indeed a "yikes"; how does someone even spend so much?


> Doesn't seem to be any discussion of Nuhfer's findings . . .

That's an interesting result, but I think it would be pretty surprising if your link was discussed since your link is from 2017 and the post is from 2010 and doesn't appear to have a recent update.


That explains it.


> the numbers are pretty much business as usual . . . going purely by deaths the situation is mostly normal

The article notes that, through July 25, there were 235,610 excess U.S. deaths. The link you gave shows that excess deaths started March 28th, for a period of ~119 days.

Wikipedia says that the U.S. had 291,557 combat deaths during WWII, which was over a ~1366 day period of U.S. combat involvement if we count the period from Pearl Harbor until the end of the war. The U.S. is about 2.46x as populous now, so adjusting for that, the ratio of the rate of excess death over the period studied vs. U.S. combat deaths in WWII is (235610 / (119 * 2.46)) / (291557 / 1366) = 3.77.

Since you're calling 3.77x the population-adjusted rate of WWII combat deaths "mostly normal", I would be curious to know what rate of death you would consider abnormal.

On a non-population-adjusted basis, which is arguably the correct measure if we want to think about waging a war that's the equivalent of WWII, the death rate over the period studied would've been equivalent to the U.S. combat deaths from waging 9.28 simultaneous WWIIs. Personally, I wouldn't consider simultaneously engaging in nine wars the size of WWII to be business as usual.
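
For anyone who wants to check the arithmetic, here it is spelled out (same numbers as above):

    excess_deaths = 235_610   # U.S. excess deaths through July 25
    excess_days = 119         # ~days since excess deaths began March 28
    pop_ratio = 2.46          # U.S. population now vs. WWII era
    ww2_deaths = 291_557      # U.S. WWII combat deaths
    ww2_days = 1366           # Pearl Harbor through the end of the war

    # population-adjusted ratio of daily death rates
    print((excess_deaths / (excess_days * pop_ratio)) / (ww2_deaths / ww2_days))  # ~3.77

    # non-population-adjusted: "how many simultaneous WWIIs"
    print((excess_deaths / excess_days) / (ww2_deaths / ww2_days))  # ~9.28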


Is WW2 generally perceived in America as having caused many deaths among American military-aged males? I'm sure we can point to D-Day and other isolated events as being outstandingly bloody, but does this extend to the whole conflict?

I'd be particularly interested in comparing the above ratio for Vietnam, or to German or Russian (combat & non-combat) WW2 deaths.


Having grown up in the US in the 1980s and 1990s, I can say the American Revolution and WWII both loom large in the mythology of American exceptionalism, so I think the sacrifices of the soldiers weigh psychologically disproportionately.

I think most college-educated Americans are aware that the Soviets lost nearly endless waves of soldiers against the Nazis. I also think most Americans are largely indifferent to the numbers of German rank-and-file soldiers killed, as if sympathy for the enemy dead would somehow diminish admiration for the heroes who defeated the Nazis.

On a side note, I have a Scottish friend who still swears up and down that Americans weren't involved in D-Day. I tried pointing him at Wikipedia entries for Utah and Omaha beaches, but I get the impression he believes it's just American propaganda that the US had much involvement in WWII before the Nazis were on the retreat.


> I would be curious to know what rate of death you would consider abnormal.

Well, clearly this is an abnormal number of deaths. Pretty much every week is statistically Very Unlikely.

But it is 20% higher than the background rate. Quitting a job as a day labourer and going into roofing is something like a 400% increase in death rate. People voluntarily subject themselves to some pretty outsized risks.

> Wikipedia says that the U.S. had 291,557 combat deaths during WWII...

That is about a decade's worth of car accidents. So on the one hand, horrific. On the other hand, the raw number of deaths is not the biggest issue at play. It didn't stop Americans marching over to the other side of the world to fight people.


> Since you're calling 3.77x the population-adjusted rate of WWII combat deaths "mostly normal"

This is an example of how statistics can mislead. Why not select the worst 119 days of WW2 and compare them to 119 days of covid? After all, most of the 1366 days of WW2 didn't involve actual fighting. Also, are we including definitive covid deaths or "covid-related deaths"? Because if we included "WW2-related deaths" and not just combat deaths, then I suspect that ratio would plummet towards 0.

> I would be curious to know what rate of death you would consider abnormal.

I guess it depends, but a spike in deaths every now and then is normal even if it looks abnormal. It may sound counterintuitive, but that's just life.

Your statistical analysis is intentionally dishonest. You cherry-picked the time frame and example to fit your agenda. How about trying another set of data: take the US combat deaths on D-Day (2,500 deaths), extrapolate that to 119 days, population adjust, and enjoy. Funny how that makes covid look like a walk in the park, huh? But if I did that, it would be just as intellectually dishonest as your example.

Also, almost all of the combat deaths in WW2 were young, healthy men, whereas covid seems to afflict predominantly older people - almost all of them 45 and over. So not quite apples to apples, right?


And what happens if you repeat these calculations over a different period? There is a "harvesting" theory which claims that COVID causes some deaths to happen sooner. So it is a possibility that there were many more deaths in April 2020, compared to April 2019, but there will be way fewer in April 2021. A somewhat similar "dry tinder" theory is sometimes applied to Sweden: they had fewer excess deaths during the flu season for the last couple of years and got hit hard by COVID, claiming lives spared by the flu.


Well, Covid-19 causes all deaths to happen sooner than they would without it; the important question is how much sooner, and early research shows it's much sooner (like 10 years too early in the median case, which would mean the harvesting theory is wrong).

https://wellcomeopenresearch.org/articles/5-75


This article makes the reasonable point that the web is likely getting faster for people with cutting edge devices. For example, at one point they say

> Someone who used a Galaxy S4 in 2013 and now uses a Galaxy S10 will have seen their CPU processing power go up by a factor of 5. Let's assume that browsers have become 4x more efficient since then. If we naively multiply these numbers we get an overall 20x improvement.

> Since 2013, JavaScript page weight has increased 3.7x from 107KB to 392KB. Maybe minification and compression have improved a bit as well, so more JavaScript code now fits into fewer bytes. Let's round the multiple up to 6x. Let's pretend that JavaScript page weight is proportional to JavaScript execution time.

> We'd still end up with a 3.3x performance improvement.

But then the author concludes

> The web is slowly getting faster

Which ignores a pretty large fraction of users. A part of the article acknowledges that this all depends on the device, etc., but this is ignored in the conclusion!

Let's say that, as a first approximation, the first set of quotes is correct. I think most developers who look at user experience with respect to latency or performance today (or even ten years ago) would agree that we should not only consider the average but also look at the tail. If we do so, we see that device age is increasing at the median and in the tail, quite drastically in the tail even if we "only" look at p75 device age: https://danluu.com/android-updates/.

If we consider a user who's still using a 2013 Galaxy S4 and ask "does your phone feel 5 times faster than it did in 2013?", based on some js benchmarks improving by 5x, I think they'll laugh in our face. I've used a couple of Android devices that I tried to keep up to date (to the extent that's possible on Android) and each one became unbearably slow after taking some big Android update. Those updates probably included improvements in the Android Runtime as well as V8, and yet the net effect was not positive. I don't think I'm alone in this -- if you read any forum where people discuss taking updates for their phones, one of the most common complaints is that a previously usable phone became unusable due to performance degradations caused by the update.

Sure, my personal user experience is OK on my daily-driver phone because I have a very fast phone and I'm often using it from fast wifi. But the experience is terrible if I take a road trip across the U.S., and it's terrible anywhere in the U.S. with an old phone. I don't think we should just write off the experiences of people with old phones or who live in places where they can't get high-speed internet, even if life is good for people like me when I'm at home on my 1Gb connection. When I looked at this with respect to bandwidth and latency (inspired by a road trip where I found every website from a major tech company to be unusable, excluding a few Google properties), I found that, on a slow connection like you get in many places in the U.S., websites can easily take more than 1 minute to load in a controlled benchmark: https://danluu.com/web-bloat/. My experience in real life (where I probably had higher variance in latency, packet loss, and effective bandwidth) was that many websites simply wouldn't load.

One thing this post looks at is the 75%-ile onLoad time. When I travel through the U.S. on the interstate (major thoroughfares which will, in general, have better connectivity than analogous places off of major thoroughfares), most pages are so slow that they don't even load at all, so those attempts aren't counted in the statistics! I don't dispute that things are getting faster for the median user or even the 75%-ile slow user if you measure that in a specific way, but there are plenty of users whose experiences are getting worse who won't even be counted in the stat that's in the post, because their experience is too slow to even get counted.
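
If you want a rough feel for this without taking a road trip, one crude approach (just a sketch, not the methodology from the benchmark above; the URL is a placeholder and the bandwidth cap is an assumption) is to cap the read rate while fetching a page:

    # crude bandwidth-capped fetch: this only measures transfer of the
    # top-level HTML, not full page load with JS/render, so it understates
    # real-world pain on a slow link
    import time
    import urllib.request

    URL = "https://example.com/"   # placeholder; substitute a page to test
    CAP = 50_000                   # assumed cap, bytes/sec (~slow rural link)

    start = time.time()
    total = 0
    with urllib.request.urlopen(URL) as resp:
        while True:
            chunk = resp.read(16_384)
            if not chunk:
                break
            total += len(chunk)
            # sleep so the average read rate never exceeds the cap
            behind = total / CAP - (time.time() - start)
            if behind > 0:
                time.sleep(behind)

    print(f"{total} bytes in {time.time() - start:.1f}s at ~{CAP // 1000} KB/s")

Browser devtools' network throttling is more realistic since it also throttles subresources and measures render time, but even this understated number is often sobering.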


>Sure, my personal user experience is OK on my daily-driver phone because I have a very fast phone and I'm often using it from fast wifi. But the experience is terrible if I take a road trip across the U.S., and it's terrible anywhere in the U.S. with an old phone.

This is a pet peeve of mine: in my (perverted) mind, the developers/programmers (not only web-related) who usually work on and use high-performance hardware and connections (which is fine, it is their work, they deserve the best of the best) should have a low-performance setup (simulated in a VM, or simply some oldish hardware with a limited amount of RAM and a slowish processor) where they test what they release to the public for interaction/responsiveness/etc.

In my experience, bar professional developers or programmers and a few designers, architects, engineers, etc., the only people with high-end hardware are gamers; all the rest (both at home and in the office), for different reasons, tend to have relatively underpowered machines.


It's possible to use a bloom filter variant of this for text search (for example, Bing does this, see https://danluu.com/bitfunnel-sigir.pdf for details).

If you wanted a very small bundle to use with a static site, I don't think it's obvious that a bloom filter variant is a bad approach.

I mean, yeah, this thing that the author said "made for a fun hour of Saturday night hacking" is probably not the optimal solution, but that would've been true whether or not the author chose to build something based on bloom filters or an inverted index.
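
For anyone unfamiliar with the idea, here's a toy sketch of per-document bloom filters used for search (my illustration, not the linked author's code, and BitFunnel itself is far more sophisticated):

    import hashlib

    class Bloom:
        def __init__(self, size_bits=1024, num_hashes=3):
            self.size, self.num_hashes, self.bits = size_bits, num_hashes, 0

        def _positions(self, term):
            # derive num_hashes bit positions from salted hashes of the term
            for i in range(self.num_hashes):
                h = hashlib.sha256(f"{i}:{term}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.size

        def add(self, term):
            for p in self._positions(term):
                self.bits |= 1 << p

        def might_contain(self, term):
            # false positives are possible; false negatives are not
            return all(self.bits >> p & 1 for p in self._positions(term))

    # one small filter per document; a static site can ship these as the "index"
    docs = {"a.html": "static site search with bloom filters",
            "b.html": "an inverted index is the classic approach"}
    index = {}
    for name, text in docs.items():
        f = Bloom()
        for term in text.lower().split():
            f.add(term)
        index[name] = f

    print([n for n, f in index.items() if f.might_contain("bloom")])
    # expected: ['a.html'] (false positives possible, but unlikely at this size)

The appeal for a static site is that the filters are tiny and fixed-size, at the cost of occasional false-positive results you have to tolerate or post-filter.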


> The biggest problem is people want the benefits of freely downloadable software but mainly aren't prepared to give anything back. Go and assist with emacs or some other project.

I don't really buy this line of argument in general, but I think Josh is a particularly poor candidate to pull rank on because they don't contribute to open source. I don't know Josh, but I recognize his name from his open source contributions.

If you don't recognize Josh's name, you can get an idea of some of what he's done from his website, which is linked in his bio: https://joshtriplett.org/

> I work on Linux, primarily on the RCU subsystem and on Sparse-related code. I maintain the rcutorture test module.

> I co-maintain the X C Binding (XCB). I developed the XML-XCB format to describe the X Window System protocol. I also work on other Xorg projects on Freedesktop.org.

> I maintained the Sparse semantic parser and static analysis tool for C for several years, before passing it on to Christopher Li.

> I maintain several packages in the Debian project.


Consider me very regretful indeed. Most people who moan seem to have nothing to contribute; well, I got it very wrong this time. Apologies to @JoshTriplett if you're reading this, and I'll check out your stuff tomorrow.

Ah shit, I recognise your name too. I've just been spanked by Dan Luu. This has not been a good night.


I appreciate the sentiment. I would gently suggest focusing the regret on the message rather than the recipient; it wouldn't have been better if written to a novice user.


Heh. My website is drastically outdated (by a decade), but thank you nonetheless.

