I anticipated this response of yours. It sounds like a great gotcha on its surface, until you realize that every fast-spreading respiratory virus works like that. It's happened with every flu pandemic, including the recent H1N1.
SARS-CoV-2 appears to be an incredible spreader and a poor killer. What that means is that the costs of perpetually trying to avoid infection are very high, whereas the benefits (avoided mortality) are quite low.
Case in point, SARS-CoV-2 exhibits a fairly high degree of pre-symptomatic spread. I believe this is almost certainly explained by the reported interferon-mediated early-course immunosuppression (read: in the early days of infection it keeps your immune system from reacting strongly, meaning that unlike many other diseases there is a period where you carry enough viral load to spread it yet don't show symptoms). Also note that it does not exhibit asymptomatic spread, which would be even "worse," since pre-symptomatic spread only gives you a window of maybe days whereas asymptomatic spread is by definition the whole disease course.
Now factor in that many people are either completely asymptomatic or paucisymptomatic (few symptoms). For those people, who may even be the majority of cases, it's such a non-event that most never realize they've had it. Some will have more of an actual cold, and some will have symptoms comparable to a run-of-the-mill flu. A small fraction of those infected will go on to experience increasingly severe symptoms, probably comparable to a typical SARS-1 infection (since SARS-1 is quite nasty), culminating in the worst cases in the need for invasive ventilation, at which point death is incredibly difficult to avoid. (This is obviously an area of active research, but it appears that the severe form of the disease is related to a state of immune dysregulation where pathological cytokine release syndrome, tissue damage from widespread neutrophil infiltration, etc. wreak havoc.)
So, we have a virus that spreads incredibly well, yet is overall incredibly mild, and has very well-defined populations who are at real risk of severe outcomes. That is precisely the type of virus that is a horrible candidate for lockdown, which damages the entire society in the attempt to prevent what is perceived as a greater threat (but is actually not, in my opinion).
So, as a society we saw a papercut and chopped off our hand. Oops.
I am happy to provide sources for pre-symptomatic spread, interferon-mediated immunosuppression, etc, but first I wanted to make sure that you were here to engage in good faith dialogue; i.e. whether I can convince you or not, you are actually willing to read (or try to read) the papers. I'm a bit scarred by numerous times (here and elsewhere) where I've invested a bunch of time into detailed posts and then quickly realized that the person on the other end was never serious about addressing the problem of SARS-2 but instead was there to toe the party line and reinforce their pre-existing conclusions.
> let's call a spade a spade: it's not a "goof," it's a dangerous and deceptive business practice
There's a very odd tendency for people to engage with corporate PR packages in the same way they engage in interpersonal interactions. In the abstract, sure, they get that it's a crafted artifact meant to maximize profits, but in the immediate sense... they act as though the words have any intrinsic meaning at all rather than "white noise that maximizes likelihood of profit, while ideally not instigating litigation or regulation."
It's not unique to Tesla. It's every single time a major corp. issues a significant public statement, as though it's some sort of earnest missive from the founder rather than a PR-crafted artifact vetted by legal, compliance, and probably the COO and CMO, if not a board member or two.
Corps are profit-maximizing engines. They are not your buddies. They are not speaking from the heart. They're not even spinning something that started off as something from the heart. They are designing cognitive drone strikes meant to optimize public reception of current business practices.
Search "daniel lemire" "AVX-512":

[1] Vectorizing random number generators for greater speed: PCG and xorshift128+ (AVX-512 edition)

[2] AVX-512: when and how to use these new instructions

[3] The dangers of AVX-512 throttling: a 3% impact
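For reference, here's a scalar sketch of the xorshift128+ generator those articles vectorize, in plain Python (this uses the 23/17/26 shift constants from Vigna's original description; it's an illustration of the scalar algorithm only, not the AVX-512 version Lemire benchmarks):

```python
MASK = (1 << 64) - 1  # emulate 64-bit unsigned wraparound

class XorShift128Plus:
    """Scalar xorshift128+: 128 bits of state, 64-bit outputs."""

    def __init__(self, s0, s1):
        # The all-zero state is the one invalid seed for xorshift generators.
        assert (s0 | s1) != 0, "state must not be all-zero"
        self.s0, self.s1 = s0 & MASK, s1 & MASK

    def next_u64(self):
        x, y = self.s0, self.s1
        self.s0 = y
        x = (x ^ (x << 23)) & MASK               # left-shift/xor step
        self.s1 = x ^ y ^ (x >> 17) ^ (y >> 26)  # mix both halves of state
        return (self.s1 + y) & MASK              # the "+" in xorshift128+
```

The vectorized versions in the articles run several independent copies of this state in AVX-512 lanes, producing 8 outputs per step.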
Sweden is right to be concerned about its economy. It's not just about skipping a few haircuts -- a shattered economy kills people as surely as coronavirus does. Every shuttered manufacturing plant has a body count associated with it and its own case mortality rate.
I don't know how many people died as a direct result of the Great Depression, but based on what I've been told by relatives the suffering was considerable, with deaths due to lack of food, homelessness and despair. I do know that the Nazi party would never have come to power in Germany without the world economic collapse. They were a minor party on the wane in the late 20s, only to get their second wind in the elections following the crash. It's also possible the Japanese militarists would never have come to power without the Great Depression, for more complex reasons. WWII was the result, at the cost of 45-70 million lives. Economic catastrophes are not just about money -- they have life and death consequences. I wish more people had taken this into consideration before they carpet bombed the US economy to combat the pandemic. I guess they figured that no matter what, the US was safe from the kind of civil unrest that gripped the Weimar republic. A strongman taking control of the US? Fascists and antifascists battling in the streets? That could never happen here.
The standard for journalism should be stricter than for legal documents. In legal documents, it’s okay to edit the quotes to emphasize the point you want, or edit out throat clearing. But you must accurately represent the context. That’s not what happened with the Paul Graham quote. The author edited a reference to specific people (applicants to YC) and made it into a general reference to women. You would not edit out an antecedent reference like that in a legal document, because it changes the literal meaning. (It doesn’t matter if you think the substance of the meaning remains the same. You don’t get to make that call. If you want to argue that the more specific point is logically equivalent to the more general point, you’re welcome to make that argument.)
Just thinking off the top of my head, the media ran with a quote from Steve Mnuchin that made it seem like he was saying that the $1,200 stimulus checks should last people 10 weeks: https://www.businessinsider.com/mnuchin-criticized-seemingly.... If you actually play the whole video, the parts everyone quoted were actually tens of seconds apart. Mnuchin describes the $2 trillion package of reforms, including many months of adding $600 per week to unemployment benefits. A reporter asked “how long is $1,200 supposed to last people?” This was a deceptive question to begin with: The $1,200 was a stimulus, not a replacement for lost income. It’s not supposed to “last” any amount of time. That’s what the massively increased unemployment benefits were for. So Mnuchin refocused the question on the whole $2 trillion package. He spent several seconds doing that, which almost all the mainstream media sources edited out. Then he finally said that the whole package was supposed to last 10 weeks. Then the story blew up that he said the $1,200 was supposed to last 10 weeks: https://www.themarysue.com/steve-mnuchin-wants-us-to-survive.... AOC tweeted “how much is your rent, $10?” Again—the $2,600 per month in extra unemployment benefits was supposed to pay the rent. The $1,200 was stimulus.
Then there is a bigger issue of how journalists frame facts and what context they leave out. When New York City was seeing 750 deaths per day, the media was showing beach goers celebrating spring break in Florida. Now, they’re showing “spikes in cases” in Florida and Texas, even though death rates in Massachusetts, Illinois, New Jersey, etc., remain higher than in Florida and Texas (which are bigger states). Quick: without looking it up, what’s the ratio of COVID-19 deaths in New York to Texas? Florida?
To pick another example: news stories about healthcare routinely mention that Western European countries have universal healthcare. Then they talk about wealth taxes and taxes on the rich. Have you ever read a story in the NYT talking about how European countries pay for universal healthcare? (Regressive taxes like payroll taxes and VAT.) Isn’t it weird to talk about healthcare systems and taxes, and reference Europe to talk about the benefits they offer, but ignore Europe when talking about the taxes they levy to pay for those benefits?
Along those same lines, journalists always reference Europe when it's favorable to a liberal narrative, but ignore comparisons to Europe when it's favorable to a conservative narrative. So Europe always comes up in the context of healthcare and gun control. But it never comes up when talking about, say, corporate tax cuts (countries like Sweden, Canada, France, the UK, etc. have massively cut corporate taxes in the last 30 years). Reporters will write an article where they talk about how Warren or Sanders want universal healthcare, like Sweden has. Then they'll talk about Warren's wealth tax. But now they've forgotten Sweden—which got rid of its wealth tax. I've read so many articles that say "we should have universal healthcare, like every other developed country" or "we should have gun control, like every other developed country." I have never seen "we should have a 20% VAT like every other developed country."
Journalists also mentioned Europe early in Trump’s presidency when talking about how great it was that Germany, under CDU leadership, took so many refugees in 2015. Did any US media report on the follow up, where the leader of the dominant CDU, Merkel’s successor, said: “We have made it clear that we will do everything we can to ensure that 2015 won’t ever be repeated. We must make clear that we have learnt our lesson.”
Oh, another example. During COVID-19, the media made a big deal about how Trump wouldn’t issue a national shutdown order. And they talked about Germany’s testing. But they never mentioned that Germany never issued a national shutdown order (just like us, Germany left those decisions to the states).
The result of this is that Americans are completely without their bearings. They have no idea of the context in which our political debates take place. I had a Facebook friend recently say, while ranting about Trump, that Angela Merkel was a “progressive.” Merkel leads the Christian Democratic Union. She has championed massive corporate tax cuts in Germany during her tenure. She opposes gay marriage, and supports Germany’s restrictive abortion system (where abortion is technically still illegal, though not prosecuted under 12 weeks, and the abortion rate is 70% lower than in the US). Her successor in the CDU is anti-abortion period. Merkel is the most powerful woman in the world, and doesn’t call herself a “feminist” because of the optics to her conservative base. I wondered what could possibly have made him think Merkel was a “progressive” but then I realized it was because the American media only ever mentions Merkel as a foil to Trump. Merkel hates Trump, progressives hate Trump, so Merkel must be a progressive.
Another example. We have seen a lot of stories recently about the tragic black-white gap in maternal mortality rate. The maternal mortality rate among black women is 2.5x as high as for white women: https://www.nbcnews.com/health/womens-health/u-s-finally-has...
Invariably, media coverage of the issue starts talking about universal healthcare. But have you ever seen a US media report mention that the gap is twice as big in the UK, which has universal healthcare? https://www.bbc.com/news/uk-england-47115305. There, black mothers are five times as likely to die in childbirth. In reality, the black maternal mortality rate in the US is the same or slightly lower than in the UK, despite the lack of universal healthcare. In fact, by talking about universal healthcare without bothering to check if universal healthcare actually reduces black maternal mortality, journalists actively divert the reader away from thinking about real solutions to the problem.
> "We have made it clear that we will do everything we can to ensure that 2015 [when nearly 1 million refugees entered Germany] won't ever be repeated," Ms Kramp-Karrenbauer said on Monday. "We must make clear that we have learnt our lesson."

It's not that the media doesn't sprinkle in facts for context. They do, when it's to contradict a conservative viewpoint. They will never do that extra research to contradict a liberal viewpoint. So you never see a reporter confront someone who says "we need to spend more on education" with the fact that we spend more on education than almost every country in the OECD.
People who read the NYT like to think they have “the facts” on their side. You don’t know all the facts, because the NYT doesn’t tell you.
Whatever the specific case here, I think we're hitting some sort of epistemological point with after-the-fact data analysis. Pseudoscience, basically. I realise that the term is inflammatory but ultimately it comes down to some fundamentals.
There's a reason why science as a philosophy places so much value on experimentation that makes predictions and tests them. Testability is fundamental. Untestable theories are not just wrong, they're nonsensical to science. Take Bostrom's "simulation hypothesis." Is it pseudoscientific? It depends: is it testable? Is a simulated universe different in a testable way? A theoretically testable simulation hypothesis makes predictions; one that isn't, doesn't. A simulation which is 100% unknowable cannot be a scientific theory. This doesn't strictly mean that it isn't true, it just means we can't gain scientific knowledge of it. If it's masquerading as a scientific theory, then it is pseudoscience.
On the "softer" level, there's a reason why science as a cultural, intellectual institution places so much value on eliminating observer effects. Scientists' own bias has always been a major detriment to science. Often in the history of science, generations have had to die off before a bias could be shed and science could advance in some area.
Basically and practically, the "double blind" designs of experiments are the heart of this. People's objectivity is not to be trusted, not even scientists, not even a little.
Economics, sociology and such have shifted from one flavour of pseudoscience to another. Back in Popper's day, they would create big untestable "theories" that explained everything and nothing. Today, they make micro-theories after they analyze the data.
I think this may be peaking though. The way machines do "data analysis" is starting to shift the way we think on the margins. Machines make predictions, but not theories. They don't care which way causality runs.
> You say that as though people aren't susceptible to influence form malicious actors spouting propaganda
People are sheep and require benevolent shepherds: a viewpoint that has been successfully instituted throughout history. Who shall this aristocracy be? Those with the most potential liability and the deepest pockets? Sure. I submit that you believe in plutocracy, even if you profess not to, by your own circumspect explanations.
Unsurprisingly, this has been par for the US since always (more or less).
But is it necessarily a bad thing? I think whether the outcomes are good or bad is going to be highly subjective and debated ad infinitum. I, for one, welcome our new Internet overlord.
There are two interesting articles on this topic, I strongly recommend everyone to read them. Gwern's article has interesting observations about politics, subculture, Unicode, and programming languages. David Perell examines the effect of the Internet in a bigger framework of world's politics, education, and commerce.
My perspective on the issue is that, under the previous centralized system, the entire nation was subject to identical propaganda. The effect of the system can already be seen in the extensive propaganda during the Mexican–American War of the 1840s, nearly 200 years ago.
> You furnish the pictures, and I'll furnish the war.
And during World War II, and later the Cold War, the power of state propaganda reached its height; I'd say this system is responsible for massive thought manipulation and the greatest violence in human history. But there are good sides as well: information authority, good writing, and strong consensus.
> We have had Edward R. Murrow talking straight at us and gripping the whole nation's attention, we have had Thomas Paine standing in the street, telling us common sense that changes our lives, we've had shots heard round the world, revelations shocking the whole nation at once. - said Shii, an early and influential 4chan moderator and a major contributor to the English Wikipedia.
Then came the Internet revolution. Sure, under this system, the absolute notion of truth has deteriorated. There would be Holocaust deniers, anti-vaccine activists, flat-Earthers, moon-landing truthers, foreign propagandists, wild populism, among other groups - heck, even a Facebook poster can start an absurd national hate movement (see my comment at https://news.ycombinator.com/item?id=20012564). But simultaneously it's a feature as well: the same system gave voice and self-determination to those who didn't have it - not in a totally egalitarian manner, but at least a positive contribution - and brought a democratization of communication, which ended, or greatly reduced, the power of centralized propaganda.
> Getting your views on not just politics, but also physics, biology, economics and who-to-burn-at-the-stake from your local religious official. This was the default for most of human history. I’d much rather have flat earthers than the Spanish inquisition, thank you very much. - An author's response to the concern of Fake News and disinformation, who is working on replicating GPT-2.
It disintegrates physical and national barriers and identities. Nationalistic sentiment has seen better days: it used to be "the country I live in now is the best country in the world for people like me; I would be terribly unhappy if I was exiled," and now it's something like, "Why, what's so special about the USA? It's not particularly economically or politically free, it's not the only civilized English-speaking country, it's not the wealthiest..."
And subcultures rule, which in a sense liberates individuals by giving one the option to opt out. I, for one, welcome our new Internet overlord. As gwern said,
> If I’m a programmer, I don’t need to be competing with 7 billion people, and the few hundred billionaires, for self-esteem. I can just consider the computing community. Better yet, I might only have to consider the functional programming community, or perhaps just the Haskell programming community. Or to take another example: if I decide to commit to the English Wikipedia subculture, as it were, instead of American culture, I am no longer mentally dealing with 300 million competitors and threats; I am dealing with just a few thousand. It is a more manageable tribe. It’s closer to the Dunbar number, which still applies online. Even if I’m on the bottom of the Wikipedia heap, that’s fine. As long as I know where I am! I don’t have to be a rich elite to be happy; a master craftsman is content, and “a cat may look at a king”.
> Leaving a culture, and joining a subculture, is a way for the monkey mind to cope with the modern world.
But from another perspective, it's also harmful in some ways. It was once possible to leave or escape one's tribe by physically moving, but now it's an iron cage that is nearly impossible to escape. It may intensify the world's geopolitical conflicts, as everything has been balkanized. On the other hand, perhaps society can be better off by learning to operate without a strong consensus.
--
To start fresh and frame the topic differently, we can use decentralized systems as an analogy: the traditional mass media is like the Certificate Authority model. It guarantees absolute truth about whether a public key is real by providing a central consensus, but on the other hand, when things go wrong, the entire public key infrastructure is vulnerable to rogue actors, especially state actors. In comparison, we have the PGP web-of-trust model - although the actual implementation turned out to be a total failure due to legacy code and design issues, the ideas remain valid - which derives trust not from an authority but from the collective opinion of a group of people in a community you know. The good thing is that anyone is free to make their own judgements, and the system is resistant to a central rogue actor; the bad thing is that there is only localized consensus, not centralized consensus - you cannot surf the web using this model.
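As a toy illustration of the web-of-trust idea - all names, the `signatures` shape, and the hop limit here are made up for the sketch; real PGP trust computation is considerably more involved:

```python
def is_trusted(key, my_keys, signatures, depth=2):
    """Toy web-of-trust check.

    `key` is trusted if I hold it directly (`my_keys`), or if some signer
    whom I trust within `depth` hops has vouched for it.
    `signatures` maps signer -> set of keys that signer has signed.
    """
    if key in my_keys:
        return True
    if depth == 0:
        return False
    # Trust flows along signature edges, one hop at a time.
    return any(
        is_trusted(signer, my_keys, signatures, depth - 1)
        for signer, signed in signatures.items()
        if key in signed
    )
```

Note there is no authority anywhere in this picture: whether `key` is trusted depends entirely on which keys *you* start from, which is exactly why consensus stays localized.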
Recently, there was a chaotic argument online about how to scale a Bitcoin-like decentralized P2P protocol. Two authors proposed the DCS theorem as an analogy to the CAP theorem. It's not really a theorem or a research paper - I'd say it's just a personal opinion, but a somewhat interesting one [0]. Basically, it says a decentralized system cannot simultaneously satisfy three properties: (1) Decentralization, (2) Consensus, and (3) Scale. A traditional bank is C & S: it has global consensus and no scaling difficulty, but it's centralized. The original Bitcoin achieves D & C: decentralization and global consensus - everyone knows every transaction by running a blockchain - but at an extremely high cost. Layer-2 solutions, on the other hand, bypass the blockchain and thus achieve D & S: decentralization and scale, by avoiding broadcasting transactions to the blockchain and using P2P communication instead, abolishing global consensus. Using the same line of thinking, I think this conclusion can be applied to a lot of other systems, not only blockchains: the Certificate Authority model is C & S, and the web of trust is D & S. The mass media is C & S; the Internet is D & S. Doing D & C requires everyone to know everything, which is simply not the Internet.
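To lay the classification out in one place - this just restates the claims above as a table, the labels are mine:

```python
# DCS claim: each system gets at most two of
# D(ecentralization), C(onsensus), S(cale).
DCS_PROPERTIES = {
    "traditional bank":         {"C", "S"},
    "original Bitcoin":         {"D", "C"},
    "layer-2 payment channels": {"D", "S"},
    "certificate authorities":  {"C", "S"},
    "PGP web of trust":         {"D", "S"},
    "mass media":               {"C", "S"},
    "the Internet":             {"D", "S"},
}

def has_all_three(props):
    """True iff a system claims D, C, and S at once (the DCS claim says none can)."""
    return props >= {"D", "C", "S"}
```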
Anyway, what I'm trying to say is that the pros and cons are inherent in each paradigm, you cannot both have your cake and eat it, and whether the outcomes are good or bad is going to be highly subjective.
And back to the issue of mass-media bias: one possible idea is to impose some limitations on the freedom of the press for the mass media, as a counterweight to the freedom of speech. In an age when everyone can say anything online, perhaps the mass media establishment should be held to a higher standard of journalism, serving as a reliable reference source.
My hot-take as a smaller game designer is that I am skeptical that global competitive ladders are good design. I admit that's fringe, and I'm not going to fight people over it. But I can't imagine myself ever designing a system like that for any of my games.
First, I think these systems ignore technical realities, and design has to pay attention to real-world constraints. Blocking cheaters is insanely hard, and almost no real-time games that I know of do it at what I believe is an acceptable level.
It's a nice idea that I don't think is technically supported, in the same way that it would be a nice idea if my photorealistic MMO didn't have loading screens anywhere and loaded everything instantly. It's not good design to spend a bunch of effort hinging your game design on something that's not technically possible for you to do.
Second, I think these systems ignore player incentives. I think that global competitive ranking encourages the worst of playerbases, that skill is an arbitrary mechanism to optimize on instead of something like how close each match was, or player-reported satisfaction levels, or variety of play-styles, or any other of a dozen other metrics.
Third, caring about bots radically shrinks your design space. You can't have exploits, players have to have a "right" way to play, modding has to be more limited. It shrinks the game not only in the sense that it makes your systems more complicated -- it fundamentally shrinks what your game can do as a game, and how your game can evolve with the rest of the medium.
Finally, not only is skill not a particularly rewarding metric to optimize when what you really want is for your players to have fun, most of the problems with automation are explicitly a problem only for competitive matchmaking. If you're optimizing for player-reported satisfaction, you probably don't care if people are botting your game, because successful bots will be optimizing for creating good matches instead of just for winning[0].
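A minimal sketch of what "optimize for good matches instead of ranking" might look like - purely hypothetical, and `predicted_strength` stands in for whatever internal estimate a matchmaker keeps; nothing here is ever surfaced to players as a ladder:

```python
def make_close_matches(players, predicted_strength):
    """Pair players to minimize the strength gap within each match.

    players: list of player ids.
    predicted_strength: dict mapping player id -> internal float estimate.
    Returns a list of pairs; with an odd count, the last player sits out.
    """
    # Sorting by estimated strength and pairing neighbors gives the
    # closest matches without ever producing a global ranking to game.
    ordered = sorted(players, key=lambda p: predicted_strength[p])
    return [(ordered[i], ordered[i + 1]) for i in range(0, len(ordered) - 1, 2)]
```

The point of the sketch: a bot that plays superhumanly just gets matched against other superhuman bots, so it stops degrading anyone else's experience.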
Personal plug, with my current project, Loop Thesis, LAN play is the only multiplayer. I only want people to play with friends, and I'm not trying to build some kind of community or network. That decision has simplified my architecture so much, the game is so much more modder friendly, the networking code is so much simpler, everything is nicer, and more stable, and more player friendly, because I don't have to care about bots and cheaters.
Future games that I make, even if they include global matchmaking, will not have global competitive rankings. My personal take now has become that 90% of the time these systems are a design antipattern.
It's counterproductive. I want players playing my game to have fun. I have to ban bots because they make the game not fun. The reason the bots make the game not fun is because all of my design hinges fun on giving players an optimization problem that I explicitly don't want them to solve in creative ways. The bots are solving the optimization problem too well. So getting rid of that optimization problem, getting rid of the rankings, makes so many other design/engineering problems go away. The bots aren't the problem, the optimization problem is the problem.
And yet, technology stacks aren't exempt from social and economic dynamics. Technology stacks are tools, means to a wide variety of ends. And as such, if they are freely available, people can and will use them for whatever purpose or need they see fit.
As such, Linux is just as much a product and a service as it is a technology, tied to very real markets and niches. And with that come the exact same concerns you'd find in any other market.
Your observation is totally correct: there are tons of distributions and there is plenty of choice and people don't care.
But that choice is far from granted, and it's not an argument that deprecates questions about moral responsibility: Are companies providing a service/product entirely responsible for the consequences? Or are customers entirely responsible for their own choices and behavior? Or is it both?
Let's use an analogy to demonstrate. Bread is a great example.
The vast majority of people eat bread daily, so there's huge demand. But the market has consolidated in some places, and many people end up buying bread from retail chains. Sadly, such bread is a processed food and contains many ingredients that aren't that healthy, such as emulsifiers (E-numbers) or loads of sugar.
You could argue that people still have a choice: you could go to an artisanal baker, or you could bake your own bread - the recipe isn't that difficult and requires few raw ingredients. So why aren't people doing that when they know retail chain bread isn't good for them? Three reasons: artisanal bread is expensive; it's a different shop from a centralized retail shop; and making decent bread takes time and skill. Consumers simply weigh costs and benefits: buying cheap bread is far more convenient in the short run, even though the long-term consequences may cost more.
Okay, so how does that hold up with Linux? Same thing.
Ubuntu is huge because it was successful in its marketing early on, and it did a bang-up job of creating an effective onboarding experience for new Linux users. You can argue about their design choices over the past 15 years, but in the end, they have been ahead of the competition when it comes to tapping into a new generation of users.
As time went on, the Ubuntu community spawned a huge volume of documentation: tutorials, videos, stackoverflow posts,... Google "how to install sound drivers in linux" and your first hit will be a tutorial for Ubuntu. Trying to find the same information for lesser-known distributions might be just a tad harder.
And that's what makes all the difference. Convenience. If you spend an evening trying to fix your damn audio after first installing Slackware, and all you find is Ubuntu tutorials, you will likely just switch over to... Ubuntu. Especially if you aren't particularly interested in the design philosophy of Slackware, or who these Canonical people are anyway. You simply want to listen to your favorite Spotify playlist, right?
So, the issue here is that both retail chains and Canonical understand that the vast majority of people simply want convenience and they will easily choose it over principle or their own self-interest. And so, it's extremely tempting to exploit that.
The question then becomes whether it's morally right to tap into that and take more from your customers or users than what they are bargaining for, especially when they aren't really aware you're doing it. Either by using ingredients that aren't prohibited but are clearly unhealthy, or by using techniques to gain more insight into their behavior. Especially when everyone sells them on "Linux doesn't do that, unlike Apple, Google or Microsoft."
And so, ever more people will keep choosing Ubuntu over other distributions, even when they do have a free choice.
The mere existence of a choice isn't a defining factor at all, it's everything else: available tutorials, number of users, onboarding experience, how quick they can get things working and so on.
This is where the discussion does turn political. The public space is regulated through laws for many reasons. But such rules and laws aren't static. They can change through political discussion and ever-changing power balances. And so, consumer protection laws exist not just to protect individual consumers, but also to ensure that large, unintended second-order effects of new products and services don't cause shocks that might derail an arguably stable society.
If retail chains are free to add more and more unhealthy ingredients to their food products because that drives costs down and attracts more customers, the societal cost will be an increasing number of people who suffer from health problems linked to consuming those products. Not because people don't care about their health, but because convenience pushes them that way and the alternatives are simply too costly.
By the same token, if companies are free to build in all kinds of telemetry-sending scripts, that won't stop people from using those products, as they prove to be really convenient. But the societal cost will be that private enterprises are able to derive very accurate profiles about who we are and what we want, and use those in ways that don't align with the interests of the users of their software.
Well, it's the Marine Corps. It ain't a rose garden.[1][2]
>>>full of backstabbing jerks
Actually quite the contrary. If anything we are well-known for our solidarity.[3] Assuming there aren't issues due to rank/the command hierarchy, most personal disagreements are handled face-to-face. People try to stiff-arm work responsibilities off to other offices but I think that is endemic to any ridiculously large and sluggish bureaucratic organization.
>>> Am I misunderstand you?
I think so.
It's a work culture that is optimized for conditions of 100% stress, where people's lives are at stake. We have a communication system and culture that is built to support the worst conditions, and honed over 225+ years. Even though we spend 99% of our time at, say, 10% stress levels in an office environment, it's easier to "dial down" our methods rather than "dial up" something less robust.
NVC strikes me as the sort of methodology designed and tested for a 5% stress office environment. If you attempted to employ it in the most critical communications scenario (arguably a contested beach assault), it would be an utter failure. Based on the OP's link, Steps #2 and #4 are the biggest failure points IMO.
"How do you feel about NOT retreating away from your objective?" "What are your thoughts on possibly doing a frontal attack on that machinegun nest?"
We would end the day all face-down in pools of our own blood trying to communicate like that.
Now, one might be inclined to retort that since our organization has a selection process that weeds out those who can't handle 100% stress, there is an inherent bias in the viability of our methods for integrating with the regular civilian workforce. But the counter-examples are our on-site contractors, some of whom have NO military experience. Most of them still integrate just fine. Not all, though:
IT Contractor A: A direct speaker. About 40yo. Also a jiu-jitsu purple belt. Kind of a "quiet badass". Fits in well.
IT Contractor B: A smarmy, weaselly character. Late 20's. Can't handle people speaking harshly.
These two had an exchange that essentially went like this:
Contractor A: "Hey I need X. And it's a time-sensitive priority."
Contractor B: "Could you possibly ask nicer next time?"
Contractor A: "Could you possibly do your job next time?"
Then Contractor A came back to the office and shared the story, to which pretty much everyone, from the 19yo Lance Corporals to the 35+ guys, reacted: "OMFG, sometimes I wanna throat-punch that dude (B). Ask nicer? WTF? Like people are gonna be asking nice for things if Chinese ballistic missiles are ever raining down on us? Does he not know where he is? If he can't handle it he should go back to hiding in an office in Northern Virginia."
We would go on to have numerous problems with Contractor B, most of them stemming from his "feelings". I'm not gonna go on a rant about direct-vs-indirect counseling methods and additional leadership anecdotes/case studies, though. Hopefully that added some clarity.
So, as a society we saw a papercut and chopped off our hand. Oops.
I am happy to provide sources for pre-symptomatic spread, interferon-mediated immunosuppression, etc., but first I wanted to make sure that you were here to engage in good-faith dialogue; i.e., whether I can convince you or not, you are actually willing to read (or try to read) the papers. I'm a bit scarred by the numerous times (here and elsewhere) when I've invested a bunch of time into detailed posts and then quickly realized that the person on the other end was never serious about addressing the problem of SARS-2, but was instead there to toe the party line and reinforce their pre-existing conclusions.