Stanford apologizes after vaccine allocation leaves out medical residents (npr.org)
270 points by potench on Dec 19, 2020 | 235 comments


> "[A]lgorithms are made by people and the results ... were reviewed multiple times by people,"

You got that right. The way they talk about "an algorithm did it" makes it seem as if they think that somehow explains it, like there was only one algorithm possible, handed down from god or something.

We'll be seeing more and more of this of course. "We can't be blamed, it was an algorithm! We can't be blamed for trusting in the algorithm, because everyone knows algorithms are objective, right?"

I'm not sure "laypeople" realize that, especially in this particular case, "algorithm" is just a fancy word for "formula". Right, you MADE the formula, and it was wrong.


"Machine learning is like money laundering for bias"

https://twitter.com/pinboard/status/744595961217835008


"To err is human, but to really foul things up you need a computer."

- Senator Soaper

https://quoteinvestigator.com/2010/12/07/foul-computer/#:~:t....


There's no way they used machine learning here. My guess is it was some simple rules like older doctors go first. I wouldn't even call it an algorithm. I think "algorithm" is a weaselly way for the administrator to make it sound more complicated than it was.


> My guess is it was some simple rules like older doctors go first

No need to guess.

"It used an algorithm that assigned each person a crude risk score, taking into account factors such as age, job description and the number of coronavirus cases that had been detected in their hospital department. That resulted in personnel like environmental services workers, food service workers and older employees being shuttled to the front of the line.

Residents, who are early in their careers and tend to be young, rotate throughout the hospital to train with various teams of physicians, making them difficult to place in a designated unit."

https://www.nytimes.com/2020/12/18/world/covid-stanford-heal...
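To make "crude risk score" concrete: a minimal sketch of how such a formula might look. The weights and fields here are entirely made up for illustration; Stanford's actual formula wasn't published in this form.

```python
# Hypothetical sketch of a "crude risk score" like the one described.
# The weights and fields are invented -- this is not Stanford's formula.

def risk_score(age, job_exposure, dept_case_count):
    """Higher score = earlier in the vaccine line."""
    score = 0.0
    score += age * 0.5              # older employees rank higher
    score += job_exposure * 10.0    # e.g., food service, environmental services
    score += dept_case_count * 2.0  # detected cases in the person's department
    return score

# A 65-year-old food-service worker outranks a 28-year-old resident whose
# rotations leave them with no single department to count cases in.
print(risk_score(65, job_exposure=3, dept_case_count=5))  # 72.5
print(risk_score(28, job_exposure=2, dept_case_count=0))  # 34.0
```

A score like this quietly encodes policy choices in its weights, which is exactly what the rest of the thread ends up arguing about.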


I guess they do deserve some credit for prioritizing food service workers and such who are in high-contact situations, not just medical workers!

In addition to having a bug for residents without designated units (which again, how did they not test it on a sample set including residents?), it sounds like they maybe weighted age (or department?) too much compared to job description. It doesn't seem like an older physician who is working from home should be ranked higher than nearly anyone working every day in the hospital, regardless of age or whether other people in their department got sick already.

It seems like amount of contact with patients should have been weighted far higher than anything else (patients for elective procedures are required to get a virus test first, so amount of contact with untested patients like in ER, or covid+ patients like those who need to care for them, even higher). But I'm not a doctor or medical ethicist. If Stanford has a reason for not doing that, they can, well, explain it, if they want to look better. It makes me figure it was just not done very carefully.

It seems to me that the formula was just not very good. While it's technically an "algorithm", sure, I think most of us in the field would just call that a "formula", not an algorithm. I think they use the word "algorithm" precisely because to the layperson it seems more mysterious, more sophisticated, more complicated, harder to question, less obvious that it was just something humans did, possibly not very carefully.


I really don't think you can call that an algorithm; it's a set of human-defined rules. You could call it a formula, but an algorithm implies a series of steps, which are not present here.


Since algorithm is defined as "a process or set of rules to be followed in calculations or other problem-solving operations", how is it not an algorithm?


Going by that definition, the rules here are not defining the steps to be followed to do the calculation. The rules here are defining the answer that they want, not the steps necessary to figure it out.


As computer scientists and adjacent, we have a whole taxonomy for algorithms. It should be noted that "sort by salary" is an algorithm.


In popular media "algorithm" has taken on a new meaning, probably because the word itself sounds mysterious.

I can't fully define the new meaning, but it's like the YouTube video recommendation algorithm, which is more like a whole system than a single algorithm in the original sense, or the Facebook algorithm for ranking items on your feed: these big-data, machine-learning, opaque models.


If the set of rules is defined by humans and then those rules are coded into a SQL statement, is it really an algorithm? If the humans define the rules and later realize the rules are bad, it's not the sorting algorithm that caused the problem.


I've got to be honest, I answered yes to both your rhetorical "are these algorithms?" questions, which I'm guessing was not your intention. It would help if you gave some examples of what is missing in each case (in your eyes) that disqualifies them.

With the SQL example:

I don't think a set of rules is disqualified from being called an algorithm if it's implemented using some other tool or process, because we do this all the time: when programming, we split up implementations into functions, or we could have used the standard library's sort function too -- I would still consider it an algorithm no matter how the "sort the results" step ended up being implemented.

If the result is wrong, it is not necessarily because the implementation of SQL's ORDER BY is incorrect (it could be, but that's unlikely for a popular SQL implementation), and if you know that the rules are incorrect then I agree, I definitely wouldn't blame the sorting algorithm (at least initially, although it's possible it's also wrong).
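For what it's worth, the SQL version of "human-defined rules" is tiny. Here's a toy sketch using an in-memory SQLite database; the table, columns, and weights are invented for illustration:

```python
# Sketch of "human-defined rules coded into a SQL statement", using an
# in-memory SQLite database. The table, columns, and weights are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (name TEXT, age INTEGER, dept_cases INTEGER)")
conn.executemany("INSERT INTO staff VALUES (?, ?, ?)",
                 [("older attending", 64, 0),
                  ("resident", 28, 0),
                  ("food service", 45, 6)])

# The "rules" are just a computed score plus ORDER BY; the sorting step
# itself is delegated to SQLite's implementation.
rows = conn.execute("""
    SELECT name, age * 0.5 + dept_cases * 2.0 AS score
    FROM staff
    ORDER BY score DESC
""").fetchall()
print([name for name, score in rows])
```

If the ranking comes out wrong, the bug is almost certainly in the score expression (the rules), not in SQLite's ORDER BY.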


Of course it wasn't using machine learning; there wasn't anything to learn here.

But the point that machine learning has become an ideal package for existing biases is still worth making: it lets mistakes involving values, such as racism, be presented as merely technical mistakes that can't be helped, and that allows regular algorithms to get the same kind of pass.


I am not a native speaker and I don't think I understand the quote. What is "money laundering for bias" supposed to mean?


Money laundering is a crime in which one takes money or goods obtained through criminal activities (e.g., theft) and sends it through some process, usually a cover like a business, to make it appear legitimate. More generally, it refers to the process by which money or resources obtained through ethically questionable means are passed through layers of other activities so as to obscure their original source.

https://en.wikipedia.org/wiki/Money_laundering

So the idea is, you have some bias in some process (racial, religious, whatever). You set up some algorithm that relies on the bias to make predictions or classifications. Now you can say it's not you that's biased, it's just the algorithm. The algorithm is some process by which your bias is "made legitimate".


> You set up some algorithm that relies on the bias to make predictions or classifications. Now you can say it's not you that's biased, it's just the algorithm.

It's not that you set up the algorithm to rely on the bias - ML trains on data produced by a biased system and ends up building a model containing the same biases - you don't need to "set up" anything - if you're fine with the existing biases you can "launder" them through ML.


Right -- I didn't mean to imply that... I just meant that there's some predictive information in the bias the algorithm can take advantage of.


[flagged]


We've banned this account and a bunch of related accounts for abusing HN and breaking the site guidelines. If you do it again we will ban your main account as well.

https://news.ycombinator.com/newsguidelines.html


To add to the other answers, machine learning based on biased datasets results in biased models. For example, Word2Vec can be used to make analogies. Having it determine "Man is to woman as king is to ___." results in "queen", which is reasonable. On the other hand, determining "Man is to computer programmer as woman is to ___." results in "homemaker". The algorithm wasn't deliberately designed to have sexist bias in it, but there was implicit bias in the dataset from which it learned.

https://www.technologyreview.com/2016/07/27/158634/how-vecto...

As another example, predictive policing tries to place police in places with higher crime rates. Those crime rates are determined by looking at past history of police reports and arrests. That past history has human bias already in it, with disproportionately higher arrest rates in places with racial minorities. The effect of the predictive policing is to justify overpolicing of minorities.

https://en.wikipedia.org/wiki/Predictive_policing#Criticisms


The analogy thing didn’t entirely hold up: the original demonstration was constrained not to return the same word in the prompt, so “man is to woman as doctor is to ____” had to return something that’s close to, but not the same as “doctor” in the embedding space. Hence, it returns nurse.

Ironically, this makes the original point nearly as well: we need to evaluate the hell out of machine learning systems to make sure that they’re doing what we think they are and that they’re not keying off something else instead, especially something biased. To date, the field has been...not great about this.
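The exclusion constraint is easy to see with toy vectors. This sketch uses hand-made 2-D "embeddings" (not the real word2vec model), just to show how banning the input words forces a different nearest neighbor:

```python
# Toy illustration of the exclusion constraint: hand-made 2-D "embeddings",
# not real word2vec vectors.
import math

vecs = {
    "man":    (1.0, 0.0),
    "woman":  (0.0, 1.0),
    "doctor": (1.0, 0.9),
    "nurse":  (0.3, 1.0),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def analogy(a, b, c, exclude):
    # b - a + c, then nearest neighbor by cosine, skipping excluded words
    target = tuple(vb - va + vc for va, vb, vc in zip(vecs[a], vecs[b], vecs[c]))
    candidates = [w for w in vecs if w not in exclude]
    return max(candidates, key=lambda w: cosine(vecs[w], target))

# man : woman :: doctor : ?   (target = woman - man + doctor = (0.0, 1.9))
print(analogy("man", "woman", "doctor", exclude=set()))                      # 'woman'
print(analogy("man", "woman", "doctor", exclude={"man", "woman", "doctor"}))  # 'nurse'
```

With these toy vectors, the unconstrained nearest neighbor to the analogy target is one of the input words itself; only after excluding the inputs does "nurse" come out, which is exactly the artifact described above.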


I wonder what it would return for "Woman is to doctor as man is to ___". I'd bet money on the result not being "nurse".


You can monkey around with it here: http://bionlp-www.utu.fi/wv_demo/ (choose the English Google News model, but this may not be exactly the same set/model as the original report).

Man is to Woman as Doctor is to ___ gives 1) gynecologist 2) nurse 3) doctors 4) physician 5) pediatrician

Woman is to Man as Doctor is to ___ gives: 1) physician 2) doctors 3) surgeon 4) dentist 5) cardiologist

These are just generally near "Doctor" though: the ten nearest terms are physician, doctors, gynecologist, surgeon, dentist, pediatrician, pharmacist, neurologist, cardiologist, and nurse.

Some gender differences may persist (nurse is #2 for `woman` but #68 for `man`), but it's also near `woman` generally, and you could imagine it gets a bit of a boost from the verb sense ("to feed a baby") being attached exclusively to women too.

Anyway, my point is not that there's no bias (there certainly can be--seed GPT-3 with a prompt about Muslims) but that one should be wary of thinking they know what the model is doing.


It highly depends on the corpus and you can check for yourself.

I'm being lazy, but the results for Gutenberg books you can check online at http://labs.statsbiblioteket.dk/dsc/

- man is to woman as doctor is to reprovingly (nurse is the first noun, in position 4)

- woman is to man as doctor is to snodgrass (after a couple of nonsense/rare words)

The most important thing this teaches us is that big corpora (bigger than PG) are essential for this method.


How would you expect the computer programmer analogy to be completed? I legitimately don’t know what a reasonable answer would be.


The takeaway is the model has bias. It’s up to humans to decide what to do with that information.

Depending on the input “homemaker” may be a technically reasonable output.

Should we use such a model to suggest career paths to high school students? Or should we reevaluate the methodology?


Who cares about ML when we already do that:

https://www.slideshare.net/yuyomajadero/jobs-occupations-pro...


The danger is in thinking ML can magically see past human biases and/or is not subject to its own types of bias.


Seems like an overhyped danger. Does anyone actually think that? You would have to not even know that the purpose of a language model is to model human language. But if that's the level of your understanding, why would you select a language model to use as a career guidance oracle? Such a negligent career advisor would be just as likely to use those children's posters with pictures of female nurses and male doctors.

I'd like to add that there's also a danger in people trying too hard to avoid bias and losing important information. For example, a man who's a homemaker instead of a programmer is a less attractive partner for a woman. So such an occupation might harm both his quality of life and that of his partner. There is some useful information encoded in cultural bias. Even if that information turns out to be entirely socially constructed, it still has real harmful effects on real humans who go against it.


> Does anyone actually think that [ML can magically see past human biases and/or is not subject to its own types of bias]?

I know some people who, fairly strongly, believe that ML is the future of non-biased decision making, yes.

> a man who's a homemaker instead of a programmer is a less attractive partner for a woman

That's definitely going to be [CITATION NEEDED].


> For example, a man who's a homemaker instead of a programmer is a less attractive partner for a woman.

This is much too sweeping a generalization to have a place here. You might say that to you, a man who is a homemaker is less attractive than a programmer, but don't speak for everybody else.


Hmmm... A homemaker must balance a myriad of factors in order to create an environment within which a group of people can productively live and work. The man/woman link is the more humorous, seeing as programming was originally considered women's work.


Two independent things might best be analogized as A:B as C:B. So also computer programmer.


Except that they're not independent and that looks like a fact which the model discovered.


The model should be able to tell the difference between population demographics in a particular country and a definition. Demographically, they're not independent. Definitionally, they are entirely independent.


> On the other hand, determining "Man is to computer programmer as woman is to ___." results in "homemaker"

Who are we to decide for the woman what she should feel about it? Is bias absolute or relative, objective or subjective? Is there a one true policy for dealing with bias or can there be one?


Since computers are unbiased, the idea is that machine learning is also unbiased. However, that's not true and, if anything, it actually reinforces a lot of bias.

I've never done any, but my understanding is that machine learning is just correlation. It's good at figuring out "what", but not "why". Consider training an algorithm to recognize horses by feeding it millions of pictures of horses. Eventually, the algorithm "learns" what a horse is, but its definition of a horse is based on the inputs it was given by a human.

So now, consider the scenario where the millions of pictures of horses were all brown. If you give the algorithm a picture of a white horse, it'll tell you it's not a horse. If you give it a picture of a brown donkey, it might think it's a horse because it's learned to put too much emphasis on the color brown.

If that algorithm becomes relied on to define a horse, "the system" will insist there are no white horses even though you can walk outside and see them plain as day.

Now, apply the same kind of idea and feed an algorithm mugshots of all criminals. It's going to develop the same bias and tell you that a black person is more likely to be a criminal than a white person. There's no nuance. The inputs used to train the AI were tainted by decades of systematic discrimination, but the AI doesn't know that.

Of course you could try to take that input bias into account, but the whole sales pitch of machine learning is that you feed it tons of data and it gives you an objective result. As far as I know, no one is trying to quantify, and correct, the biases in the inputs.

The phrase "money laundering for bias" means the machine learning algorithms are used to reinforce incorrect opinions and assumptions, because they give the excuse that an "objective" computer used cold hard data to draw the same conclusion.

Machine learning is one of the scariest parts of tech right now because it's the equivalent of an extremely stupid person that only understands correlation and not causality and the systems being built are going to be making a lot of decisions at scale.


> Now, apply the same kind of idea and feed an algorithm mugshots of all criminals. It's going to develop the same bias and tell you that a black person is more likely to be a criminal than a white person.

No. Nobody training an ML system to detect criminals would train it only with pictures of criminals. And if you somehow did, it wouldn't determine that black people are more likely to be criminals, but that humans are more likely to be criminals than say ducks or fire engines.

Yes, ML models can end up reflecting prejudices in their training data, but this description is incorrect, reductionist and unhelpful.


How would you even start to build a ML training dataset that isn't biased by the discriminatory policing the US has seen for the last 50 years?


I share your concerns. I am of the opinion that we shouldn't be using machine learning algorithms to make decisions of consequence in people's lives without filtering those decisions through human operators. You can use machine learning to assist these human operators and empower them to do much more work than without the technology. But at the end of the day, you still have someone to hold responsible, and you still have checks on the ML doing something profoundly unethical.


> You can use machine learning to assist these human operators and empower them to do much more work than without the technology. But at the end of the day, you still have someone to hold responsible, and you still have checks on the ML doing something profoundly unethical.

I would, but the people in charge won't. Some of my family members MUST chat with a crappy bot before they can get support from their mobile phone provider and that's in Canada where we pay an astronomical amount of money for our phones/plans.

A penny saved is a penny earned, even if it costs someone else a dollar.


The problem is we actually conflate two related, but not identical, concepts when we talk about "bias".

The first is statistical bias - feeding algorithms training data that is somehow unrepresentative of the "real world" (or more specifically, of the actual class of data for the intended use case). As an example, if you apply facial recognition to Caucasian faces when the model was solely trained on Chinese faces, you're going to have a bad time. The problem was that your data was "biased": you actually wanted a model that recognizes "human faces", but you trained on the biased subset of "Chinese faces".

The second is the ethical/political notion of "bias" against individuals; more concretely, the idea that a society is "just" when people are judged as individuals, and not prejudged based on their gender/skin colour/etc. In this respect, when we say "we should not be biased against men", we really mean that "an individual man should not be treated any differently from a woman, even though men are overwhelmingly perpetrators (and victims) of homicide".

The complication is that reality is inherently imbalanced/biased. Society can be chopped up into a lot of sub-views that skew towards particular demographics. Some are relatively harmless. "OnlyFans content creators" aren't 50-50 men-women, and men aren't charging the same as women either. Some are not - "murderers" are mostly men, black men are overrepresented in the "criminal" group, and so on.

This raises some obvious questions:

1) Why is this the case? Is this the result of systemic discrimination? Historical oppression? Innate preference? Cultural pressure to conform?

2) If you can answer (1), how does that influence your view of what a "just" society is? For example, do you consider it to be "unjust" to be wary of men (and men only) to protect yourself from random physical violence when you're out and about?

3) Does everyone share your view on what a "just" society is?

4) How do these answers dictate what you should "do" about it? As a voter? As an ML practitioner? As a CEO?

I'm not going to delve further into these questions, because they cause a lot of contention and deserve more time/consideration than I can justify right now in a HN post.

The main reason I decided to comment is that I've seen too much debate that tries to steamroll people into accepting conclusions without considering or answering these questions. Even worse, some people are actively trying to silence others who simply want to discuss these questions, rather than swallowing their conclusions uncritically.

(This is not levelled at you, by the way, your comment just presented an opportunity to lay out my thoughts.)

I don't think anyone can meaningfully discuss the (ethical) concept of "bias" without first laying out a very comprehensive perspective of "society" that touches on all of these points.

There was a joint paper from Google, Facebook (and possibly others) about 2 years ago that I thought handled this exceptionally well. The authors addressed many of these questions honestly and objectively, and most importantly, acknowledged the potential for disagreement.


Money laundering hides where the money comes from.

Biases are subjective assumptions you have about the world and people.

So this quote is saying that machine learning hides the source of bias.


“Money laundering” is taking criminal money (maybe marked currency or something) and trading it out for “clean” currency that can’t be tracked.

Imagine a dataset about healthcare outcomes. You want to know whether airlifting a patient is good or bad, so you train a machine learning model to predict mortality given airlifting a patient versus not airlifting them. Turns out a lot more of the airlifted patients died, and the model picks up on that, deciding that airlifting is dangerous.

Obviously, we’re airlifting the patients who are in the most dire circumstances, and that’s why they die more, but maybe the model doesn’t have the context/circumstance variables to see that, or maybe it’s just regularized and thinks those context variables are noise. So the bias of “airlift = bad” gets stuck in the model, but then people defend it as mathematically precise so it can’t be biased like a person can. They say that the model just reflects reality, and that calling the model or data wrong is blindly rejecting reality for the sake of political correctness.

It’s worse with really human stuff like recidivism because the context variables that might be useful (like the patient’s dire circumstances) tend to be very human and complex and they’re unlikely to be captured in a simple form. Even if they were, interpreting them might require human level AI. By analogy, it is frequently impossible (or just too hard to be worth it) for current technology’s ML to tell from the data we feed it that the patients being airlifted were most near death to begin with.

So you end up with an algorithm biased against airlifting patients getting defended as mathematically bulletproof.
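The airlift confounding is easy to reproduce with made-up counts. This is plain Simpson's-paradox arithmetic; every number below is invented for illustration:

```python
# Made-up numbers illustrating the airlift confounding described above.
# Each row: (airlifted?, severe?, deaths, total patients)
cohorts = [
    (True,  True,  40, 100),   # airlifted patients are mostly severe cases
    (True,  False,  1,  10),
    (False, True,  30,  50),   # the few severe cases kept on the ground fare worse
    (False, False, 10, 500),
]

def rate(rows):
    deaths = sum(r[2] for r in rows)
    total = sum(r[3] for r in rows)
    return deaths / total

airlifted     = [r for r in cohorts if r[0]]
not_airlifted = [r for r in cohorts if not r[0]]

# Ignoring severity, airlifting looks dangerous:
print(round(rate(airlifted), 2), round(rate(not_airlifted), 2))   # 0.37 0.07

# Conditioning on severity, airlifting looks protective for severe cases:
severe_air    = [r for r in cohorts if r[0] and r[1]]
severe_ground = [r for r in cohorts if not r[0] and r[1]]
print(rate(severe_air), rate(severe_ground))                      # 0.4 0.6
```

A model trained without the severity variable can only see the first comparison, so "airlift = bad" is the pattern it learns.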


Early machine learning / deep learning algorithms have shown bias towards some results or behaviors due to problems in the training dataset. People have tried to remedy these problems with better training sets, but I'm not aware whether the bias has been completely eliminated.

"Money laundering for bias" in this context can be translated as "Finding a plausible excuse/cover for a specific bias (your computer/model says the same!)".


Before machine learning, a system (such as resume screening) that was biased was the fault of the rules and people implementing them. With machine learning, people can blame the “The Algorithm”, a numerical black box that can’t be inspected or interviewed. The system and people now claim they are blameless because “the computer can’t be biased“, even though it was trained on data produced by the previous biased system.
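A toy illustration of that laundering step: the simplest possible "model" fit to past screening decisions just memorizes the old system's bias. The data and group names below are invented:

```python
# Toy illustration: a "model" fit to past biased screening decisions
# reproduces the bias. Data and group names are invented.
from collections import Counter

past_decisions = [
    ("group_a", "hire"), ("group_a", "hire"), ("group_a", "reject"),
    ("group_b", "reject"), ("group_b", "reject"), ("group_b", "hire"),
]

# "Training": memorize the majority outcome per group -- the simplest
# possible model, and it encodes exactly the old system's bias.
majority = {}
for group in {g for g, _ in past_decisions}:
    outcomes = Counter(d for g, d in past_decisions if g == group)
    majority[group] = outcomes.most_common(1)[0][0]

print(majority)  # the old bias, now rebranded as "The Algorithm"
```

Real ML models are vastly more sophisticated than a majority vote, but when the labels themselves come from a biased process, the mechanism is the same.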


Concretely, imagine you have a bank, and you decide you don’t want to give loans to black people.

You employ a machine learning algorithm, feed it some data that shows black people in poverty have high risk of default on loans and avoid giving it data that shows otherwise. Train your neural network hard.

Now when people ask why you don't give out many loans to black people, throw your hands up and say “I would, but our advanced machine learning algorithms say these people are high risk; we’re not racist, we’re just following the results.”


In the same vein: “Criticism is prejudice made plausible.” - H.L. Mencken


This is gold XD


I think most people are misunderstanding the likely root cause.

> An algorithm was used to assign its first allotment of the vaccine. The algorithm prioritized health care workers at highest risk for COVID infections, along with factors like age and the location or unit where they work in the hospital. Residents did not have an assigned location, and along with their typically young age, they were dropped low on the priority list.

Whoever coded up the algorithm for personal COVID-risk probably forgot to take care of null-states for the location input. Residents had their location set to null, and were therefore prioritized lower than they should've been.

It's possible that the admins purposefully created this null-state bug so as to have a reasonable fallback story in case they got caught, but per Occam's Razor, I think it's much more likely it was a dumb, honest mistake.
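A sketch of what that null-state bug might look like. The field names, weights, and numbers are all hypothetical; the actual code wasn't published:

```python
# Hypothetical sketch of the suspected null-state bug: a missing location
# quietly contributes zero to the score instead of being handled explicitly
# or raising an error. Weights and data are invented.

dept_case_counts = {"ER": 12, "ICU": 8, "food service": 6}

def priority(age, location):
    # Residents rotate, so location is None -- .get() silently maps the
    # missing key to 0 and they sink to the bottom of the list.
    location_risk = dept_case_counts.get(location, 0)
    return age * 0.5 + location_risk * 2.0

print(priority(60, "ER"))    # 54.0 -- senior staff with a unit: high priority
print(priority(28, None))    # 14.0 -- resident with no unit: near the bottom
```

Nothing crashes and nothing looks wrong in the code, which is exactly why a spot check of the output list, not just the formula, was needed.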


I don't think I was assuming anything in particular about the intents.

It's a rather important thing to check on actual data, with residents included in the sample, isn't it? They just never bothered to test the algorithm, didn't even review the results it spit out, before delivering the results as a plan? (It is quite possible indeed they didn't; I'm not saying that is a coverup for some other motive, I'm saying that is incompetent.) What was the QA process like? Who was involved in it? From the article, apparently not any department heads or medical staff in general.

That's not a competent design process for something so important, is it?

That they never bothered to see how "the algorithm" treated residents demonstrates that they didn't give a shit about residents, not necessarily that they were intentionally screwing them.

The administration doesn't say much about how the algorithm was designed, they just say "oh, it was an algorithm." I think many people believe that "algorithms are objective" and they are trying to play that.


Well, we're all agreed that this was a dumb mistake that could've been easily avoided. And, the admins should own up to it and implement a QA process to avoid a second occurrence.

That said, I think the reason this story is getting so much attention is because of the assumed selfish intent. And, that assumption is probably wrong.


This is being painted as a technical error, but the only actual mistake was a miscalculation of the amount of blowback they would receive.


You may be misunderstanding what this is a plan for. All Stanford medical staff are expected to be vaccinated over the coming weeks; the allocation in question is for deciding who's getting vaccinated today, with the first available shipment. It does seem like this particular error could have been caught easily, but conducting a thorough, laborious review process rather than getting shots in arms ASAP would have been an even worse decision.


It wouldn't have required a laborious review process. A quick skim through the list, looking at 20 or so random recipients near the top, middle, and bottom, would have quickly shown that frontline workers were clustered near the end. You would have seen that some people were WFH, but at the top.

This wasn't some random HR benefits process that they screwed up. It was a triage process. A medical triage process. That's what they should be good at.


From the article, it sounds like they did notice, but didn't have time to revise the plan before vaccinations started today. And again, I'm not claiming that this was necessarily the right decision, but I can't get behind the idea that it's some transparently evil decision made by uncaring administrators.


It's a software bug, by an organization that doesn't normally produce software.

They've agreed it's wrong, and intend to fix it.

Sounds like a very average Thursday to me!


"Designed"? They had a meeting and discussed what criteria they should use, someone documented that, and someone else wrote some Excel.


You were there?


No. But I have been in a great many meetings where decisions were "designed."


What makes you believe your experience is generalizable to other situations?


It sounds like the administration wanted a scapegoat on which to hang any blame around vaccine allocation. Surely whichever allocation they came up with would have axes along which it could be criticized.

Enter some poor analyst working with a pile of messy data and “The Algorithm”.

If machine learning can be “money laundering for bias”, then “algorithms” can be money laundering for responsibility.


The article addresses this:

> While leadership is pointing to an error in an algorithm meant to ensure equity and justice, our understanding is this error was identified on Tuesday and a decision was made not to revise the vaccine allocation scheme before its release today


What does that even mean? Surely the utility function being optimised by the algorithm is quality-adjusted life years saved by the vaccine? Surely they do not value different people's lives differently based on their group identity, but then what does that statement mean?


The issue is not the algorithm but that the issues with it were identified in the days before the allotments were finalized.

Meaning: administrators saw that the results were dumb, and didn't compensate.


I don't like the use of the word algorithm there. I would call it rules or formulas. Algorithm makes it sound like something more complicated when it was probably just a SQL query.


In general, blaming the algorithm is an indirect way of blaming software engineers.

From Facebook to the VW emissions scandal to Stanford’s vaccine allocation, people love narratives that blame engineers. I suspect this trend will only get worse as the general public realizes that engineering is now a highly compensated profession, similar to how lawyers are the butt of so many jokes.


Engineers have bosses, and they generally do what their bosses tell them to do.

So the responsibility should lie, as usual, with the people at the top, who set the direction for the company, not with low-level peons like engineers, who might be highly compensated, but who don't set the direction for the company nor make the ultimate decisions regarding these algorithms.

But part of the tragedy of organizations is that responsibility tends to be diffused, so it's really hard to ultimately blame any one person.

There was a documentary (maybe called "The Corporation" or something) which showed some protestors outside some CEO's home, and the CEO's wife went out with some tea and cookies or something and invited the protestors inside their home to have a chat, and the CEO talked to the protestors and told them how helpless he himself was, as he was just part of the system with relatively limited ability to change it.

I'm not sure I buy that, and not sure the protestors did either, as the CEO still has enormous power. At the very least the CEO has the ear of the board of directors, and quite a lot of leeway as to how to run the business. They might not be able to change it all, but they can change a lot. Still, there's no denying that especially in a large organization no one person knows everything that's going on and can be accountable for absolutely everything, but leadership still exists and still is ultimately responsible.


"and the CEO talked to the protestors and told them how helpless he himself was, as he was just part of the system with relatively limited ability to change it. I'm not sure I buy that, and not sure the protestors did either, as the CEO still has enormous power."

Precisely. Nobody should accept that kind of obvious nonsense.

People are put into positions to make decisions. So make better decisions.


Very few people are reading this as "some engineer wrote an opaque piece of software that made this decision." They're reading it, based on what the article says, as "we came up with a set of prioritization criteria which were flawed."


I don’t think people necessarily love narratives that blame engineers. Why would they?

Some specific people in positions of responsibility that screw up badly see blaming engineers/tech as an easy way out of their responsibilities.

Similar outcome, different thing.


Yes, people love stories like the space pen vs the pencil. The supposedly educated engineer getting shown up by the clever average joe with common sense. There's always someone smarter than you, but see well actually they're not that smart because they're not street smart.


> Yes, people love stories like the space pen vs the pencil. The supposedly educated engineer getting shown up by the clever average joe with common sense. There's always someone smarter than you, but see well actually they're not that smart because they're not street smart.

And yet that classic example is also an example of people really not understanding the nuance of larger systems; in that specific case, there were concerns about graphite creating electrical issues in the ship's electrical systems.


Surprise twist: the engineers aren't all stupid.


I think it is similar to how certain people always feel they are being ripped off by their car mechanic. They were told one thing by marketing/sales and then reality turns out to be more nuanced.


Except that people are quite often ripped off by car mechanics. Whenever there is an information asymmetry some proportion of the individuals with more knowledge will take advantage of it. I know nothing about cars, so when a mechanic tells me my car needs a $1000 repair, I have to take them at their word or get a second opinion, which is usually more effort than I'm prepared to expend. That sort of imbalance is irresistible to some people in all walks of life, including auto-repair, sales, and engineering.


Not engineers broadly or specifically, but I do think many people find a bit of schadenfreude when an overly confident expert is found to be wrong.


The twist with software engineering though is that the overly confident people tend to be the non-experts, who then blame the engineers for having poor communication skills.


The "algorithm" was most likely a simple optimization model run on a spreadsheet.

Nobody's blaming software programmers (or "engineers")


I'm not even sure this formula is WRONG. It's just not great for morale.

Residents are generally under 34 and in good health. In the USA, only ~2,400 people under 34 have died from COVID--and that is mostly people with comorbidities.

In comparison about ~250,000 people over 55 have died.

You'd obviously have to account for life-years lost, risk of exposure [1], etc. I wouldn't be shocked if giving the vaccine to a 25 year old resident is a sub-optimal choice.

[1]Based on my MiL's experience (an ER doc), the risk of infection at a hospital is a lot lower than it used to be because they have PPE now. She's more worried about catching it from her son who works in an office. That said, they need to take into account risk of exposure and risk that the residents spread it to others.


I mean a surface level analysis would say that you've missed a critical datum if you're saying patient-facing staff should be left at risk of being incapacitated from their duties for up to a month.


That should definitely be a factor considered, especially in any area where a doctor shortage could occur. But a month sounds like a long time for <35-year-olds in good health.


14 days is the quarantine period to show symptoms. It is not how long the infection takes to run its course if you actually have it, which is generally 3-4 weeks presuming things go well, which they much more frequently do not.


I mean, the hospitals are full. Staff are catching covid every day. The staff are getting patients sick, and they're getting their families sick. The hospitals are shifting workers who don't normally work with Covid patients to start working half days with covid patients, ensuring as many people as possible get exposed.

There's simply no excuse for not ensuring everybody in the hospitals, then in clinics, gets the vaccine before anyone else in the offices and working from home.


Which makes them more likely to have low level symptoms while simultaneously spreading it further. This policy just doesn't make sense.


“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”

- Dune


Select * From employees order by salary desc


Small nit-pick: select * from employees order by total_compensation desc;

Otherwise C Level folks who take a $1 salary but receive tons of options would be put at the bottom.


This is the same algorithm many companies use for benefits programs


The most charitable interpretation is that they're admitting they failed, but saying where they failed is in basic due diligence of looking at the output of running the algorithm. And that they're claiming that, if they had done so, they wouldn't have been OK with it either.

It's sort of like saying, "That's definitely a bug. It was unacceptable for me to put my code into production without ever testing it, and for that I take the blame. But I assure you that I did not intentionally write code designed to delete everyone's data."

Obviously this may or may not be what happened. It may even be what they're pretending happened.


I do think people understand that. Blaming the algorithm is a way of saying that we agreed on the rules by which vaccines would be assigned, that they were fair, and that we can’t just go back and change them because we don’t like the results when the rules are applied. At that point, why even have a formal process for assigning vaccines, when all we’re doing is coming up with a post-hoc justification for a particular outcome? If we already know the outcome we want, then just use that!


Fwiw, the administration said: “”” We are writing to acknowledge the significant concerns expressed by our community regarding the development and execution of our vaccine distribution plan. We take complete responsibility and profusely apologize to all of you. We fully recognize we should have acted more swiftly to address the errors that resulted in an outcome we did not anticipate “””

The chief resident sent an email explaining the nature of the error. The chief resident is, per my limited understanding, a resident from the previous year who stays on in a leadership role. Not obvious to me that someone in that position explaining the root cause is problematic.


Re-read it; the letter from the chief residents (who are residents in their final year of training, in some departments it’s everyone in their last year, in others selected leaders) was _protesting_ the allocation. The letter explaining the error was from hospital leadership, which does not include the chief residents.

The email sent by a chief resident was simply passing on the administration’s explanation.


Chief residents are not always in their final year of training. In some cases, like pediatrics, it's a subsequent leadership year.


Ah, true. All my friends are surgeons =)


Standing on the shoulders of Giants, I would like to suggest the term 'Distributed Bureaucracy' to reflect when an algorithm you didn't even know existed determines your allocation of resources.


And the thing here is, vaccine distribution in a single institution clearly wouldn't need an elaborate algorithm. But yeah, creating an algorithm and then throwing up your hands to say "who knew!" is very convenient.


I'm more curious why there was an algorithm involved at all. Given the number of doses they had, the doses should have immediately been given to as many staff in first-responder units (ER, ICU, etc.) as would take them. My wife works at a hospital as a nurse, and this is what they have done for the first phase of vaccination. After that I can see a need for an algorithm to determine who is most in need, but there is no excuse for the administrators for what happened at Stanford.


Sounds very similar to “don’t blame the decision maker, because they were just following a moral/legal principle.” Except that, bizarrely, “principles” are usually regarded in high esteem even by people who vehemently disagree with the principles and the decisions excused by them. How often do you hear things (especially in US politics) like “well at least he/she is a principled politician, that’s so rare these days.”


My pet peeve is when people present averages with no care. If the average is the result they want to see, they assume everything is fine under the hood.

There are so many lessons in why that is terrible, like the fighter-cockpit design history, and yet it's super common for people to stop all inquiry when the average shows them what they hope is true across all subgroups.


It's probably not even a real algorithm and just some janky excel spreadsheet.


Also “model” is a synonym for “guess”


With you right up until the end, where you say formulas are made by people. In many NN setups, the “formula” is pretty much a self-trained black box, so in those cases people can only be blamed for the input data that went into training the system.


I disagree: that’s exactly the “bias laundering” discussed above.

The people/organization deploying a system are responsible for evaluating the system and how it might fail. You can’t just throw up your hands and go “Well, we had that data and this network, so... it is what it is.” You could have fit a different model, or not fit one at all.


That's a bit like saying "I didn't burn the food. The stove burned the food. I just put the food on the stove".


The labor of medical residents is something hospital systems exploit during normal times, but that exploitation has severely deepened during the pandemic. At the hospital my partner works at, respiratory therapists and nurses got a $10k bonus for working during COVID; the residents got nothing despite working insane hours in ICU, routinely working more than the legally mandated 90 hours per week. Just because doctors earn more later in their careers does not excuse the level of labor exploitation they are subject to during residency.

Stanford is not the only hospital system to restrict access to the vaccine from frontline residents. I can name 3 other local hospital systems in my city that have vaccinated administrative & C-suite/VP level staff before doctors, nurses, and other frontline employees. If vaccine allocation is getting messed up this early on within these closed systems, I can't help but think the next 2-3 phases will go awry as well--what checks are in place to ensure these vaccines get distributed to grocery store workers before people who are willing to pay more to get it early?


> I can name 3 other local hospital systems in my city that have vaccinated administrative & C-suite/VP level staff

I don't understand why society is putting up with this. Right now if you're not in a daily COVID-facing role (i.e. an actual front line medical worker) or in a nursing home you should not be getting the shot. This makes my blood boil. There should have been laws passed regarding ordering of the distribution with criminal penalties for line jumpers like this.

There's an article in our local paper with a happy picture of one of our state's congressional representatives (a healthy 34-year-old!) getting the shot. Like WTF? There's doctors and nurses who are treating covid patients who can't get it yet. Why the heck does Congress get priority over them?

Not only do these people have no shame, half of them even have the nerve to brag about it to the rest of us plebes who will have to wait months or more to get it.


And all us plebs do is comment on their actions. If they face no consequences, why would they stop screwing us?


Now you see why social media is so powerful. People just bitch and do nothing about this stuff, it's the perfect tool for pacification. Right now I'm bitching but next I'm going to go play video games, so you can see how it works even when you know it's there.


Yep,

Stanford residents acted. Health care workers at those other hospitals are apparently silent. That makes the difference.


Did it make a difference? They apologized after using many of the vaccines and promised "to make a change". The promise is incredibly vague and won't likely come into effect until after most of the vaccines have been given out. It means nothing


There is a massive pushback against taking the vaccine going on right now. People coming up with a litany of different reasons not to take it. One of those reasons is a lack of trust in the government. If that smiling 34 year old has a lot of constituents telling him they are scared to take it because it might not be safe it becomes his duty to stand up and take it as early as possible with a smile on his face.

What ever happened to starting out with assuming good intentions?


There's been plenty of healthcare workers splashed all over the news taking it already. More importantly, we're a long long way away from the general public even getting to decide to take it or not. If hesitation is still a concern in March/April, then congress could have done the PR stunt at that point.

Right now there's a huge shortage of vaccines compared to the number of people who are both eligible and willing. No need to worry about the unwilling at this point since there's not enough to go around anyway.

Congress took it because they think they're more important than the rest of us, and apparently they think they're even more important than the frontline workers who are still waiting.


Congress is legally mandated to get it first, a policy that has been in place since Eisenhower:

https://en.wikipedia.org/wiki/United_States_federal_governme...

Of course it is unlikely that congress would ever change that law...


Ehh, I completely and truly agree with you/feel the same way, but because I'm feeling a bit cynical right now, I have to say - the congressional representative getting it can be passed off as encouragement for his constituents to willingly get vaccinated in a time where there's quite a vocal bit of people who think Bill Gates is going to be swimming through their veins in a tiny submarine if they get it.

Now - could he have just vocally stated "I am your congressional rep - I have full trust in the vaccine and encourage you all to receive it as I wait for my time in the line of priority"

Yes, yes he could've.


I agree that healthcare workers, firefighters and police should be the first ones to get the shots - and directly after them, politicians should, and that should be mandatory. Get the shot or lose your seat.

There are large parts of the population who say they won't get vaccinated because they're afraid that politicians exploit them as "guinea pigs" (especially the PoC community has a really bad history, e.g. Tuskegee syphilis study). Time to turn the usual situation around.


I like your list. I really hope to see a program where 'front-line workers' (i.e. grocery store workers/etc as opposed to 'first responders') are given priority -and- financial assistance to receive the vaccine if they so choose to.


> -and- financial assistance to receive the vaccine if they so choose to.

A decent government should fund vaccinations out of taxpayer money. It's simply way more cost-effective than having people around who want but can't afford vaccination and then society has to pay many orders of magnitude more for treating the illness...


My wife is one of these trainees left out by this "algorithm." She's an intensive care fellow and was caught up in this mess yesterday. It's really hard to witness, considering how much I worry about her every day she's on service.

My understanding is the training is capped at 80 hours/week, but they average that over the month. They definitely work more than that some weeks, but it's usually because it's not scheduled hours; it's shifts that run long because of emergencies or codes or whatever. When she was a resident and had to write patient notes, that definitely took extra hours each night after her shifts. It's brutal.

And for what it's worth, not all doctors make great money after training. In pediatrics, salaries are generally half what adult doctors make. For a lot of subspecialties, that's on par with average software engineering salaries, even after about 14 years of education and training.


small quibble - the limit is supposed to be 80hrs/wk

I thought about a top level comment to link this, but we are naming and shaming programs that exploit residents during this time.

https://docs.google.com/spreadsheets/d/1ZgEKvTr1lvTLHsREeill...


I can name 3 other local hospital systems in my city that have vaccinated administrative & C-suite/VP level staff before doctors, nurses, and other frontline employees.

What's stopping you?


Is there a medical union?


Evidence like this suggests that medical leadership for the most part cares very little about house officers (residents), and that is pretty universal.

Our institution built a new billion dollar hospital and did not include call rooms.

Great job.



If you visit this page with an ad blocker enabled, nothing renders. If you disable your ad blocker on the page, well, god help you:

https://ibb.co/PskWz4d

A couple alternative links for reading:

- https://outline.com/udkxSE

- https://pastebin.com/5MQE91k3


Seemed fine with uBlock Origin


I'm using AdGuard on iOS/macOS with an aggressive set of content blockers. Not sure which rule is blanking the entire page, but there's so many ads and other crap on that page I don't care to debug it.

When I load with content blockers disabled, Safari reports 17 (!) trackers prevented.


Short summary for people who don't want to read it: author says that call rooms were for functionality and that they believe that call room quality and medical learning quality are non-correlated. Author does not discuss absence of call rooms.

Only adding that summary because I assumed, given context of GP, that absence of call rooms would have been discussed. If you're like me, hope I've saved you the time.


Sorry, I should've been clear that it was opinion about what call rooms are for (since I could only guess, and, according to the article, they aren't like the lounges I've occasionally seen on TV).


All good. Link appreciated anyway.

I think it would have helped if you had described why you felt the link was worth sharing but I intend that feedback as gently as possible and not as a rebuke of any sort.


Exactly. Residents in the United States are deeply in debt and can't quit, and the resulting quality of working conditions is sadly predictable: https://jakeseliger.com/2012/10/20/why-you-should-become-a-n...


Not surprising, unfortunately. Our "call room" in one hospital during residency was a utility closet. There are mandates of what to do, but hospitals routinely ignore them. Who will report problems? No resident wants to have their program lose accreditation.

Our program director (in psychiatry) always pointed out that psychiatry had one of the highest rates of people going over the 80 hour-per-week limit in the health system. It was much more uncommon for us than for most of our colleagues, but we tended to report it on the rare occasion it happened. My surgical colleagues who went over the limit were asked to meet with their program director when it happened, and it was much easier just to "round down."

Glad I did a residency, but mine was relatively easy. I probably wouldn't have made it through some programs.


The great thing about an algorithm is that when it gives you results that benefit you, you can accept them and then pass the buck on to it when people get outraged:

"According to an email sent by a chief resident to other residents, Stanford's leaders explained that an algorithm was used to assign its first allotment of the vaccine. The algorithm was said to have prioritized those health care workers at highest risk for COVID infections, along with factors like age and the location or unit where they work in the hospital. Residents apparently did not have an assigned location, and along with their typically young age, they were dropped low on the priority list."


An algorithm is clearly a virtual scapegoat. They’ll need to interview those who were involved in coding it. Dig deeper to find out why.


The people “coding it” are probably following requirements usually set by HR and upper management. Not sure about Stanford hospitals, but in most orgs HR is usually the lead department when it comes to COVID-19 protocols.


The quoted explanation sounds pretty complete to me. They used "assigned location" to determine who was working with COVID patients, but residents don't have an "assigned location", so they were incorrectly passed over even though they do in practice work with COVID patients. Maybe that doesn't completely exonerate the administrators from blame, because someone should have sanity checked the plan, but it also seems like a very understandable error to make when you're rushing.
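For illustration, this failure mode is easy to reproduce in a toy sketch. Everything below (names, weights, numbers) is invented; the article only says the score combined factors like age, job description, and per-department case counts:

```python
# Toy reconstruction of the failure mode (all names, weights, and
# numbers are invented for illustration).

# Per-department COVID case counts, used as a proxy for exposure risk.
CASES_BY_DEPT = {"ICU": 40, "ER": 55, "Food Service": 25}

def risk_score(age, dept):
    # Residents rotate and have no assigned department, so the lookup
    # silently contributes zero exposure risk -- the bug in a nutshell.
    exposure = CASES_BY_DEPT.get(dept, 0)
    age_risk = max(0, age - 30)  # crude age weighting
    return exposure + age_risk

staff = [
    ("food service worker", 62, "Food Service"),
    ("ER attending", 55, "ER"),
    ("resident (rotates, no unit)", 28, None),
]

# Highest score first; the resident lands dead last despite daily
# patient contact, because the missing unit zeroes out exposure.
ranked = sorted(staff, key=lambda s: risk_score(s[1], s[2]), reverse=True)
```

A single manual pass over `ranked` would have surfaced the problem immediately, which is the point being made in this thread.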


But... why were they rushing? It's not like the vaccine appeared out of thin air on Friday. Its arrival was known for weeks/months. The staff and where they work is known. The error itself is understandable but the conditions leading to it and its propagation are not.

The further, unforgivable sin is that the error was brought to their attention internally on Tuesday, the administrators didn't bother to fix it, and then when it became more widely known to the hospital staff Thursday afternoon they still did not act to fix it. And then Friday, suddenly they're in the news, they decide they're going to fix it?

Disclosure: Am a physician at Stanford.


It's an understandable error only if residents and their safety aren't at the front of your mind. Perhaps if a resident had been closely involved, they would have been able to tell the developers that they don't have anything in the location field.

It's not like Stanford didn't know a limited supply vaccine was coming several months ago, there was time to figure this stuff out.

And it would have been, if it had been a high enough priority.


Not everything can be high priority. Stanford administrators several months ago would have been working on securing adequate supplies and staffing for the expected fall surge; that's a much more important problem, both to the system as a whole and to the residents, than who gets vaccinated today vs. next week.


I've got to be honest with you: this thing can totally be solved with a human pass. In fact, give me the list, your employment rolls, and I'll fix this for you for free in a couple of hours at max.

Government workers are particularly vulnerable to this sort of failure mode: assuming everything is hard. This is not that hard. It's pretty easy. But big company workers are also usually susceptible to this. They don't get things done very fast so they're used to things taking a long time to do and assume that even trivialities must take a couple of days.

But that's why companies like Tesla and SpaceX succeed to these peoples' surprise. They look at these companies and say "They can't do that. This is hard. It takes maybe a half century of engineering."

Turns out lots of things, even if they're hard, can be done pretty fast. But not by those who've been damaged by their own slowness not being punished.


If you gave Tesla or SpaceX the task of managing a vaccine rollout, they wouldn't see this as a problem to fix, because they wouldn't construct a public, detailed priority list cross-checked for equity across every subgroup. They'd just reach out to the N people that seem best at a quick glance, say "hey come get your vaccine", and repeat as new shipments come in until everyone's vaccinated. That bias towards doing things, rather than going to laborious efforts checking plans against everyone who might be affected, is exactly why they can do things so fast.


The article says they realized the error Tuesday and did nothing about it till the residents staged a protest. It very much seems like they were hoping it would blow over.


They had time to build the algorithm but no time to sanity-check the selection?


The whole mess has demonstrated why ethical AI research is important. It's so easy for people to disregard why and how biases exist in their systems when it doesn't affect them personally.


If Stanford truly used ML to generate the algorithm, there’s really no back-tracing to find where the decision went wrong besides looking through datasets. There’s your perfect scapegoat.


What would be important are criminal charges for gross negligence while distributing vaccines to medical professionals in the middle of a fucking pandemic. In the best case prioritizing the administration was unintentional, which means nobody ran even basic sanity checks on the output until the staff noticed that they got almost nothing. Add in a murder charge for anyone in the area dying of COVID until the case is over.


What happened had nothing to do with ethical AI research. People in power put themselves first. This has happened before computers as well.


Exactly. There could be a cottage industry using evolutionary algorithms in which the fitness function to optimize is the benefit of the interested party. "This new algorithm indicates that the best way to improve the economy is to increase the compensation of Senators 3X".


I hope they can be pushed to publish the algorithms so we can understand exactly how this happened.


Welp, apparently Stanford is shit at algorithmic design


Are you saying that the algorithm shouldn’t consider age and other risk factors? What in the world do you want them to do, give it to babies because they are cute?


If those babies work with thousands of potential COVID patients and your father gets sick and goes to a hospital, you will understand it.


Washington University in St. Louis is currently under fire for the exact same "oversight."


This sounds way too convenient - basically the modern day equivalent of the "dog did it".

So we're to believe that rogue computer code screwed up? I.e., nobody in charge messed up?

Why, again, are those with the largest paychecks given the largest slack?


Another messed up thing I've heard is that many hospitals are prioritizing vaccines for WFH Administrators over staff that are actually working at hospitals..

It's a weird system. I asked someone I know who is doing her residency how many times she's been COVID tested this year... just once. Apparently, as long as you're not working on the COVID floor, there isn't really a requirement at her hospital.


Since this algorithm is obviously flawed this way, is it likely to be flawed with respect to the timing of the second dose?

Every report I’ve seen from experiment subjects (who may have received the placebo) indicates that the second dose sucked for 2-3 days, and was much worse than the first.

If you give all your, for example, ICU staff the second dose at the same time, and then they can’t work for 2 days, how do you staff that?

Did you plan for this, or will it be a surprise that “The Algorithm” missed?


This is easy to resolve by staggering doses, which I know at least some other hospitals are doing.


It is, but are they actually planning to do this?

The key is that you need to stagger them now - maybe over a 10 day period to be safe (plan for 2 days off, so 20% out of action with 10 days, try to use “weekend” time for this)

But in the middle of a pandemic, even 20% is a lot of lost capacity

And then some people need to be in the day 8, 9 and 10 groups so won’t “get the vaccine first”

So it’s actually not quite as simple as just spreading them out over a few days and hoping for the best.
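Back-of-the-envelope sketch of that arithmetic (the 100-person headcount is assumed; the 2-day downtime and ~20% cap come from the numbers in this comment):

```python
# Toy stagger plan: with 2 days of post-dose downtime per person,
# cap concurrent absences at 20% of staff.
staff_count = 100        # assumed headcount
downtime_days = 2        # expected days out after the second dose
max_out_fraction = 0.20  # tolerable fraction of staff out at once

# Daily cohort size such that `downtime_days` overlapping cohorts
# stay under the cap.
cohort = int(staff_count * max_out_fraction / downtime_days)

# Ceiling division: days needed to get everyone dosed.
days_needed = -(-staff_count // cohort)
```

With these numbers you get 10 people per day over 10 days, which also makes plain why someone has to end up in the day-8, 9, and 10 groups.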


Seems like very typical behavior for university and hospital administrators.


> that resulted in an outcome we did not anticipate," they wrote.

I bet the "outcome" they're talking about is the _protest_, not the allocation.


I can't really get upset until I understand who got it instead of the residents.


The residents who didn't get it: those working 80+ hrs/week in the emergency room, in the ICU, intubating patients, operating on patients, etc etc etc

Some people I know of who got it instead: Radiologists who primarily work from home. Administrative roles with relatively low (or no) patient interaction.

Not saying those people shouldn't get vaccinated -- just that if you were asked to come up with a priority list, it wouldn't be this.

Disclosure: am a physician at Stanford.


Thanks for the info, that stinks for sure. I hope they get it right now that they are revisiting the issue. I have a soft spot for that hospital, my first child was born there.


They are trying.

For what it's worth, faculty across multiple departments (including those radiologists) stepped up and declined their scheduled vaccine appointments until residents get vaccinated first.

It's a big place made up of mostly smart (and good) people. A lot of outrage is at the administration's poor planning, failure to respond quickly to the problem, and subsequent excuse-making.


C-Suite and WFH admins - the low risk, safe group. The system forgot front-line infantry (again)


Not “The system”, but the people in charge forgot them.


"The system" IS the people in charge


Source? The article irritatingly doesn't say.


SfGate and NYT articles mentioned execs and admins.

I can't find direct links on mobile


This is going to be how it goes broadly, at a national and international level.

I guarantee you that you'll see access for the wealthy much sooner than you'll see access for the folks at highest risk. I don't know how to help with this, but I'd much rather see farm workers, grocery store clerks, the homeless, bus drivers, etc. get access after we take care of the medical staff (who are exposed to patients) and the elderly (and others in extremely risky environments.)

Yet I'm sure that's not how this will go.

How long before testing and vaccination becomes an employment perk at a FAANG company?


Residency requirements are nothing more than a form of hazing. The doctors are half-asleep while making important decisions. How can this not be outrageously unacceptable?


No, Stanford is sorry that they were called out, but not sorry for what they did - or they wouldn't have done it in the first place.


I have one question: was the "algorithm" an excel scorecard? I bet it was


If this pandemic has shown anything it’s that the poor and front line is throwaway.

Inequality in the US is getting more and more extreme. This shit will not be tolerated much longer, riots will get worse, people will get angrier.


Trying times accentuate dividing lines...


Under a capitalist system, resources are allocated to those most willing to pay for it.

Under a socialist system, resources are allocated to those with the most political power.

Under an anarchist system, resources are allocated to those with the most firepower.

The vaccine is being distributed under a socialist system. Nobody should be surprised at the results.


I don't think the "capitalist" system would really have any effectiveness here. Think about people like me: I could easily afford to pay whatever it would cost to be vaccinated. But, it's largely pointless: I work at home and don't go outside except to get some groceries now and again. If I did get COVID-19, I'm at a very low risk of dying. I'm probably not going to spread it to anyone either; I live alone and would just stay in bed for the two weeks I was contagious. And, if I did die, it wouldn't really matter to society -- I'm not writing software to save people from COVID-19. Thus, if the goal is to minimize the spread of COVID-19, there is no value to letting me enter the highest bid for the vaccine. By the time my money feeds back into the system to increase production, everyone will already be vaccinated. Naively applying a simplistic economic model from elementary school doesn't get results here.

Capitalism doesn't help in the case that the article is discussing, either. Older doctors tend to be paid more than younger doctors, and are also at higher risk of COVID-19 without taking into account how many COVID-19 patients they see. If we just charged a high price for the COVID-19 vaccine, the older doctors would easily be able to afford it and get vaccinated and the residents wouldn't be able to afford it -- exactly what's being complained about in the article. You don't get paid for each COVID patient you treat (caring for dying pandemic victims is not a high-margin business), so using ability to pay as the means of distributing the vaccine doesn't maximize its value to society. The problem that the designers of the distribution model faced was balancing risk from old age with risk from exposure to patients with COVID-19. They balanced it wrong, and over-weighted old age and under-weighted daily exposure to people sick with COVID-19.

It's not about capitalism, socialism, or anarchy. It's about getting back to normal as quickly as possible.


> I don't think the "capitalist" system would really have any effectiveness here.

But then you go on to explain how it would be effective:

> Think about people like me: I could easily afford to pay whatever it would cost to be vaccinated. But, it's largely pointless: I work at home and don't go outside except to get some groceries now and again. If I did get COVID-19, I'm at a very low risk of dying. I'm probably not going to spread it to anyone either; I live alone and would just stay in bed for the two weeks I was contagious.

I.e., you wouldn't find it worthwhile to pay whatever it might cost for an early dose (the LA Times reports people offering up to $25,000). Distribution by willingness to pay sorts by those who are both able to pay and who find it worth the cost.


Under a capitalist system, those with the most resources have the most political power, and the most firepower.

Under a socialist system, those with the most political power have the most resources and the most firepower.

Under an anarchist system, those with the most firepower have the most resources and the most political power.


It may have just been a software bug, but which of those three options would have been your preference? Or some fourth option?


TFW you think capitalism can survive without the state


i don't think walter is an ancap


So then is the US not capitalist? Because I see Congress getting vaccines before essential workers

(And it’s not because they’re paying more)


Confirmed, I am not an ancap.


A socialist system adamantly defending their intellectual property to prevent poor countries from developing generics? Sounds pretty rare.


didn't know stanford was located in a socialist country, or does it have an anti-capitalism force field around it?


While Stanford is a private entity, it acts and is structured like a public one. Even its full time employees aren’t at will like the private sector at large.

We can argue about whether or not Stanford is “socialist”, but I don’t feel he’s wrong that Stanford allocated vaccines according to political power. He also doesn’t judge any system. He’s just giving an observation


The newspapers are full of reports of special interest groups jockeying for priority. It's a test of how much political pull each has, rather than their need. (Of course, each will frame it in terms of their need.)

Powerful politicians are already getting vaccinated.

It's a virtual certainty that those offering $25,000 for a dose are going to find someone in the distribution system providing it and pocketing the money.


The distribution of vaccines is under the control of the government. There are none for sale, and in fact those in charge have adamantly said they wouldn't be selling vaccines.

https://www.latimes.com/california/story/2020-12-18/wealthy-...

So yeah, socialism :-)


Algorithm? They probably mean a HR spreadsheet.


Here’s another algorithm screwup: https://www.who.int/news/item/14-12-2020-who-information-not...

Oh, wait, no more need for false PCR tests, we have vaccines now. Forget testing, they’re wrong anyway, everyone get in line for a dose of nothing.

The good news appearing in the media recently is the high percentage of people all over the world who are against being vaccinated; it means the human reason algorithm is still top notch.


Let's see: first they give themselves what they want. Only after, and only if there is any outcry, do they apologize without any consequences.

Yeah, I think management learnt something /s.

We're always going to keep getting more of this behavior unless and until we start holding management fully accountable for this type of insider dealing. And accountable means being immediately fired, no golden handcuffs, and a full clawback of any issued bonuses and stock options.


What if management is the judge, jury and executioner? Who will bell the cat?


Well, if all the people that actually produce the value (that is the workers) walk out on strike until the cat is belled, things would move.

Of course, that requires that strikes be legal. In the USA, that is frequently not the case.


I don't see how free will has anything to do with this. The algorithm is clearly calling the shots. (pun intended)


Gross oversight: Veterinarians are not in the first wave of vaccinations, despite being both essential and healthcare workers. The mega corps that own most Vet clinics don't care about them, they have continued operating apace. Meanwhile, corona could be all over the dogs and cats brought in, very few owners sanitize their indoor pets before Vet visits.


Neither are farm workers, grocery store workers, utility workers, and many other people whose work and service are used by the vast majority of the country every single day. Medical workers do not make up a large percentage of COVID-19 deaths, less than 1% in the US. I don't know what the numbers are for other types of essential workers, but I doubt it's a lot different, since the majority of deaths fall largely on people 55+ years old.

Another factor that should be accounted for, but I've not seen in any official policy, is the people who've contracted the virus and recovered. While there is research showing that the human immune response to SARS-COV-2 may persist for years[0], even if that's not true we do know that it lasts for 6+ months as shown by the small amount of confirmed reinfections[1]. So put those people at the end of the line.

We really should write down the objective and then work out the way to achieve that objective. If the objective is to preserve the maximum number of life years, then the criteria should heavily bias toward people 55+ and people with comorbidities. Being a front line health care worker is very honorable and should carry a lot of respect, but it may not be the truly relevant criterion for maximizing the preservation of life.

0. https://bgr.com/2020/11/18/coronavirus-immunity-years-antibo... 1. https://www.forbes.com/sites/joshuacohen/2020/11/16/though-r...


I’ve been tracking the news and the priority of administering the vaccine has been a debate in itself. If ML can truly find out the obvious and identify key areas to target, we can avoid a bunch of unnecessary finger pointing. What may seem obvious to humans may be totally wrong.


Nobody is catching that residents are almost universally 25-35 years old, which puts them in an extremely low risk category for serious illness.

Which sucks, but is 100% the most efficient way to prevent deaths. Giving the vaccines to older workers first is correct.


The 25–35 age group is not "extremely low risk" for a serious COVID-19 infection. The risk is lower than older age groups, but it's also substantially increased due to their higher degree of direct patient interaction compared to Stanford's executives and remote employees who did receive the vaccine. Also due to residents' high degree of patient interaction, they are at high risk of spreading COVID-19 to other patients and healthcare workers.

Your opinion is poorly thought out.


Why is Stanford in a position to choose allocations in the first place? I thought the government was going to handle allocations.


The federal government has allocated to states, and pretty much left it up to the states how to allocate internally. With some vague sort of guidelines.

Most states seem to be deciding to prioritize healthcare workers (especially those taking care of covid patients) either entirely or among other groups. So the way you do that is delivering to hospitals makes sense. CA presumably allocated a certain amount to Stanford Medicine, as a large hospital system. I don't know how CA decided what hospitals to give to in what amounts, but it certainly wouldn't shock me if political influence were part of it, why wouldn't it be, what sort of guidelines or transparency is there to stop it? There is not really any written procedure for how to decide which hospitals to get how much in most states, it's just... happening.

Then Stanford was clearly left entirely on their own to decide how to allocate internally.

This is far too important and too politically fraught (political in terms of POWER, in terms of distributing a resource for which demand far exceeds supply for something that can literally be life-and-death) -- to have left it up for everyone to just make up their own prioritization rules and medical ethics determinations on the fly, in a heterogeneous way. It means people will make terrible mistakes, and the powerful will take advantage to have their needs prioritized.

It is ridiculous there aren't much more clear/specific guidelines, and in some cases enforceable policies/regulations, from the federal government. It's like nobody's driving the bus here.


The CDC is creating literature assuming "Promoting Justice" and racial criteria are triage factors for vaccine distribution, sufficient to prioritize essential workers (listed examples are mostly government employees) over at-risk and elderly populations. Even though they admit their own models show that would result in slightly more deaths. With that in mind, perhaps decentralization is preferable.

https://www.cdc.gov/vaccines/acip/meetings/downloads/slides-...

https://www.cdc.gov/mmwr/volumes/69/wr/mm6947e3.htm


> It is ridiculous there aren't much more clear/specific guidelines, and in some cases enforceable policies/regulations, from the federal government. It's like nobody's driving the bus here.

Why is this not California's fault? States were each permitted to establish their own procedures, which somewhat makes sense given the challenging distribution requirements of the Pfizer vaccine. Montana has significantly different challenges than Rhode Island in that sense. Most states that I know of have established clear guidelines saying who gets it and when - I assume California is the same.

Seems like California is the governmental entity that failed to exercise proper oversight and/or requirements specification here.


At all levels, sure.

Most states you know of have established clear guidelines saying who gets it and when? Including specifics on which medical staff within a hospital system would get it? Not just "health care staff" or "first responders" (everyone in the Stanford Medicine system is already that, right? This is about who within that group gets it).

Please back that up by showing me such clear guidelines from a few states. It should be easy to find this, if indeed most states have done this, presumably in a very transparent way for something so important and contentious, right?

I don't believe most states have.

(It doesn't make it easier that the federal government told states how much they'd get and then REDUCED it, and in general is only committing to telling states how much they'll get a week in advance.)


New York's plan: https://www.nbcnewyork.com/news/coronavirus/ny-unveils-draft...

Phase 1 says "Healthcare workers in patient care settings"

Tennessee's: https://www.tn.gov/content/dam/tn/health/documents/cedep/nov...

Phase 1 says "hospital/free-standing emergency department staff with direct patient exposure and/or exposure to potentially infectious materials"

Other states have similar wording. You'd really have to twist yourself into a knot to convince yourself that a work-from-home administrator falls into the categories specified above. Shame on any state who didn't include wording like that - there was nothing stopping them from putting some common-sense wording in their plans. Beyond the written rules, you'd also have to be a selfish idiot to think that just because you're related to a healthcare company that you should get it this week if you're working from home. If I were in that kind of role, shame would be enough to stop me but as we've seen the elite often have no shame.

(edit) California's own plan [0] says Phase 1-A includes "paid and unpaid persons serving in healthcare settings who have the potential for direct or indirect exposure to patients and infectious materials and are unable to work from home". So if Stanford was really vaccinating admins who are working from home, then it seems like they violated state guidelines and should be punished appropriately.

[0] https://www.cdph.ca.gov/Programs/CID/DCDC/CDPH%20Document%20...


the hospital employs somewhere around 30k people. even if they had all 60k doses (two per person) they can't vaccinate everyone at the same time due to potential side effects, so a schedule is needed to avoid an entire department being sick at the same time.

my wife works for the hospital but has been working from home most of the time since covid started. she's low on the priority list because of this, and rightfully so. her co-workers that are treating the (sometimes infected) patients should get access first, if they want it.


They have a huge hospital and medical program.


[flagged]


“Residents are doctors in training, who have graduated from medical school.”

It’s because you didn’t read the article. Residents are doctors in training, not just people living there - and they are interacting with sick people.


I’m fully aware of what a resident is, which is why I assumed that nearly all of them are young. Mostly they are under 30 because they just finished medical school.


One of the main risk factors is exposure.

Guess who is walking the halls filled with COVID-infected patients, and who is WFH, only having to leave the house once every two weeks for a grocery run?


People working from home should obviously not be higher priority than residents.

Unless they are people who could be seeing patients if they got vaccinated.


> Why should residents who are young and very low risk be a high priority?

Maybe they shouldn't be top priority, but if they're dealing directly with covid patients they sure as heck should have higher priority than a VP or administrator that's been working from home this whole time.


They are at high risk of infection because they are exposed to sick people as part of their job caring for the sick... how is that not obvious?


> How on earth do you expect them to prioritize these vaccines if not based on age and other risk factors?

On their required interactions with many infected people, causing them to be risky for spreading the disease and causing labor shortages in the hospital if they are found to be infected.

It makes no more sense to administer the vaccine to an elderly hospital administrator working from home than it does to give it to an elderly professional of any other kind working from home. They should get the vaccine eventually, but they should not be among the first when vaccines are rationed.


Most likely residents are young, aren't they? So if they prioritized the people most likely to be affected by the disease and only had 5,000 shots, I am not surprised that no resident made it onto the list.


The problem is that the doctors who are doing their job completely remotely are getting the vaccine ahead of the people who have to work 90 hours a week in overcrowded hospitals.

While age does matter, ability to control exposure and risk of spreading the disease within the hospital should be at least as important, in my opinion.


My Resident daughter-in-law was scheduled to be vaccinated. Was sick on the day. Turns out, she had the virus! So no vaccination needed any more. One way to resolve it.


In all likelihood, this was an innocent mistake (i.e., not done with malice)... but it's hard not to be cynical about it, especially when we consider the context.

Medical residents:

1) are at high risk, overworked, stressed out, and underpaid;

2) are regularly being called "heroes" in slick posters and PR campaigns; and

3) have just been left out of the initial vaccine allocation.

The combination of 1), 2), and 3) seems almost "engineered" to induce psychological and emotional breakdown in the very people who least deserve it. Horrible.


> In all likelihood, this was an innocent mistake

From the article,

> While leadership is pointing to an error in an algorithm meant to ensure equity and justice, our understanding is this error was identified on Tuesday and a decision was made not to revise the vaccine allocation scheme before its release today," they wrote.

The residents' point here is that it ceases to be an innocent mistake when it is pointed out to you, and you deliberately decide to do nothing about it.


Do you actually think this was done with malice?

I doubt it. Per Hanlon's razor, we should never attribute to malice that which is adequately explained by (bureaucratic) stupidity.[a]

[a] https://en.wikipedia.org/wiki/Hanlon's_razor


It should be no surprise that, when confronted with this error (if it is one, POSIWID, etc. etc.), admin staff insisted on preserving the erroneous state. Should America ever find herself with single-payer healthcare during shortages, she will find herself encountering this same situation once again. She will then react with outrage and surprise, learning not that centralization sets these incentives, but falsely doubling down in the belief that this repeated failure of centralization is evidence that the wrong kind of people were at the centre: not very different from communists who argue that "it just wasn't done right by the right people".


> According to an email sent by a chief resident to other residents, Stanford's leaders explained that an algorithm was used to assign its first allotment of the vaccine. The algorithm was said to have prioritized those health care workers at highest risk for COVID infections, along with factors like age and the location or unit where they work in the hospital. Residents apparently did not have an assigned location, and along with their typically young age, they were dropped low on the priority list.

So, basically, whoever coded up the algorithm for personal COVID-risk forgot to take care of null-states for the location input.

So much for all the outrage over selfish intent.

Sure, it's possible that the admins purposefully created this glitch so as to have a reasonable fallback story in the case they got caught, but per Occam's Razor, I think it's much more likely it was a dumb, honest mistake.
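The reported failure mode can be sketched roughly like this (a hypothetical reconstruction; the weights, field names, and case counts are invented for illustration, since Stanford's actual formula wasn't published):

```python
# Hypothetical sketch of the reported bug: a risk score built from age
# and assigned unit, where a missing unit silently contributes zero
# exposure points instead of raising an error or being handled specially.
# All numbers here are made up for illustration.

UNIT_CASE_COUNTS = {"ICU": 40, "ER": 35, "Radiology": 5}  # invented data

def risk_score(age, unit):
    age_points = age / 10  # older -> higher priority
    # Residents rotate through units, so `unit` is None for them...
    exposure_points = UNIT_CASE_COUNTS.get(unit, 0)  # ...and this becomes 0
    return age_points + exposure_points

# A 62-year-old with a nominal low-exposure unit outranks a 28-year-old
# resident who actually treats COVID patients daily.
admin = risk_score(62, "Radiology")
resident = risk_score(28, None)
assert admin > resident
```

The fix is equally simple in this sketch: treat a missing unit as a validation failure (or assign rotating staff the exposure of the units they rotate through) rather than letting it default to zero.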


The outrage stems from the fact that they doubled down on it after the problem was pointed out, this is what caused things to escalate to the level it did, which became a PR nightmare and resulted in them reversing their position. Heads should roll over this, not because there was a mistake in the algorithm, but because the leadership cannot be trusted to make correct decisions in the face of obvious errors.


> The outrage stems from the fact that they doubled down on it after the problem was pointed out

This might be the reason you're outraged, but most outraged people here don't even know the administration doubled down on their position. They're outraged due to assumed selfish intent. I think this assumption is wrong.


Had they not doubled-down, and instead corrected the allocation error, we wouldn't even be reading about this.

So, I'm not sure that it makes a difference if some people—hearing about their screw-up—are angry for the wrong reason.

To be clear, I'm with you that this probably wasn't a mustache-twirling attempt to divert vaccinations from the frontline staff. But I don't think that it matters much, given the medical expertise we would expect from an organization like this.


There are no dumb honest mistakes when it comes to putting people’s lives at risk. Engineering and algorithmic errors like this and the Boeing 737 max debacle are absolutely worth being outraged over.


Agreed that society should be more critical of mistakes that put people's lives at risk. But, this decision scores fairly low by that measure. If I had to guess, 0.00001 residents would have died due to this error. There are many more attention-worthy mistakes to be critical of if this is the main concern.

I think the real reason this story is getting attention is because people are assuming selfish intent. I think this assumption is probably wrong.


Residents are also frontline workers who would be some of the most effective in stopping the spread of the virus once vaccinated. Second and third order effects should have been considered in a well crafted algorithm. And your conjecture that it was not selfish hasn’t been proven, either.


Medical professionals should not be in the early rounds of vaccines.

If the vaccines have a delayed negative effect you cripple health care.

Vaccines are great once they've had years of testing, but risk management would tell you to play it safe with critical staff. If the vaccine disabled 20% of medical professionals in January how many people would die as a consequence?


The possibility that a vaccine that has not shown any serious side effects until now will disable 20% of the people getting it is extremely low. And you make the typical mistake of forgetting to weigh vaccine risks against the risks of not vaccinating.

Is it possible that a completely unknown side effect that hasn't shown up until now will suddenly appear? Yes. Is it likely? No, absolutely not.

Is it possible that a mutated strain of Covid-19 will disable 20% of medical personnel? Yes. Is it likely? No. But still: it's much more likely than a vaccine that has shown no serious adverse events so far doing this.


That's why we run trials. Over 20,000 people got this vaccine months ago, and have been carefully followed since. If it was going to wipe out 20% of people one month after injection we'd know already.


Are you confusing medical trials with the vaccine rollout? The first vaccine was given March 16th.


This is not how any of this works...


There is incredible pressure to give out vaccine to those with the tallest soapbox from which they can harass decision makers. And of course, said decision makers aren't exactly known for their backbone when handling conflict, so we get situations like this. Administrators often don't even know who in the hospitals spends the most time around sick patients. Entire front-line departments are forgotten, primarily because they don't have any political power or representation.

To see such a highly regarded institution fail this very simple test is another example of how the human creature needs more time to evolve. It's not just a mistake, this is a pattern playing out all over. Anyone who spends time in these environs would not be much surprised, but when the best hospitals still can't take care of their most vulnerable staff because of pernicious political maneuvering, the contrast between perception and reality is all the more stark.



