Heart surgeons refuse difficult operations to avoid poor mortality ratings (telegraph.co.uk)
434 points by lxm on April 21, 2018 | 263 comments



It's Goodhart's Law in action, this time with deadly consequences:

"When a measure becomes a target, it ceases to be a good measure."[a]

[a] https://en.wikipedia.org/wiki/Goodhart%27s_law


When I was fresh out of school I worked in a call center for a while. One of the most important metrics (and everything you did was measured) was the average length of each call. So it was common practice, when you got cases that were basically going to be long calls or ones you probably couldn't actually resolve (e.g., "I want the service back on but cannot pay"), to deliberately upset the customers so that they would demand a supervisor and you could transfer them.


I worked in a call center, as well, but they didn’t prioritize call length as a metric/KPI. Their number one KPI was “first call resolution”, meaning even if it took an hour as long as the problem was resolved the call was considered a success. I feel like it was a much healthier way to measure call center performance.


One support team I used to work with prioritized "first touch resolution" for email cases as well, so the agents would answer every case with a 16-page DIY flowchart listing every single possible problem and its resolution. KPIs are hard...


Maybe I’m weird but that’s exactly the kind of response I would want.


No, that's documentation, and it should come with the product or, in the modern era, be available online.


My experience is most customers can't read past the first line of a response. The idea that they can get through a 16-page flowchart seems unlikely to me. And I do sell to a mostly technical audience.


That sounds great!

I'm reminded of https://xkcd.com/810/


Getting sent the manual doesn't sound that great to me.


It sounds pretty good to me if it inspires them to write a manual that answers the questions support gets. Most products come with a "manual" that's useless for troubleshooting.


This is systems thinking in action. It reduces failure demand[0].

[0] https://en.wikipedia.org/wiki/Failure_demand


We rotated through all of them and our pay was tied substantially to them, which I guess also created an incentive to make it hard to get good metrics all the time on the part of someone else in the organization.


I moved from a call center where length was a KPI to one where it was not, and it made a world of difference in the pressure/health of the office environment (and thus better outcomes for customers).

It is far better when call length is not a KPI. I would never go back.

IMO, if call length is a KPI, it's a key indicator that the company isn't willing to hire enough agents and is trying to apply pressure to shorten calls for purposes of coverage.


This is how my call center measured performance until upper management came in and shut it down. Now it’s 800s, or you’re out.


I'd also expect a bunch of "I'm going to transfer you... oops, pressed the wrong button and it hung up!" As a customer, I've had a fair number of calls to support mysteriously hang up, and I wonder how many of those are metrics-driven...


There was recently a story about a 911 operator being jailed for hanging up on thousands of calls.[1]

If someone could get away with it at a 911 call center, it could happen anywhere... and might happen more often than we think.

[1] - https://www.sfgate.com/news/article/911-dispatcher-sentenced...


That Houston lady that did that didn’t care about metrics, she literally just couldn’t be bothered. “Ain’t nobody got time for that.”


I have to imagine there are differences between a television customer service line and emergency dispatch anyway.


Yeah, eventually they started measuring transfers as a metric, which indeed encouraged that behavior.

Of course some people would just say the issue was solved, falsely (which, you know, it was a little easier to get caught so it was higher-risk, in addition to being probably less moral), and there was also the X factor of how much you liked the customer, considering the extreme amount of verbal abuse the job entailed.


Does this mean that I can threaten to not talk to their supervisor?


Ooh, good tactic. Mention something like ‘I’ve got all day and I’m not gonna hang up until we get this solved properly.’


I mean, in general, a hostile tactic like this is likely to get the agent to do as little as he can to help you while staying within the rules. Plus in many cases the front-line agent doesn't have the permissions to do what you need anyway and has to transfer you to a "supervisor" (usually not actually one) or a retention guy to resolve your issue anyway.


I’ve even heard of some centers that are allowed to hang up on you if you try that.


The call center I used to work for expected us to give one warning for such hostile behavior, and then simply hang up if the customer kept being abusive and/or threatening.

In general, it's simply a bad idea to be rude to the person you expect to provide assistance.


We didn't have that, but people had figured out a way to use the hardware to hang up and bypass the monitoring and would just use it.


As a corollary, this is along the lines of one of my favorite bits of wisdom from Poor Charlie's Almanac.

If I recall correctly, Munger considers incentives to be the single most important concept to properly understand in order to drive successful business (and arguably life) outcomes.

Not surprisingly, incentives are also chronically underestimated or outright ignored, even in situations where there's a strong, profit-driven incentive (#meta) to get them right.

> From all business, my favorite case on incentives is Federal Express. The heart and soul of their system – which creates the integrity of the product – is having all their airplanes come to one place in the middle of the night and shift all the packages from plane to plane. If there are delays, the whole operation can’t deliver a product full of integrity to Federal Express customers. And it was always screwed up. They could never get it done on time. They tried everything – moral suasion, threats, you name it. And nothing worked. Finally, somebody got the idea to pay all these people not so much an hour, but so much a shift and when it’s all done, they can all go home. Well, their problems cleared up overnight.


For a different view, see "Punished by Rewards: The Trouble with Gold Stars, Incentive Plans, A's, Praise, and Other Bribes".[1]

It's about extrinsic vs intrinsic motivation. Extrinsic motivation comes from outside of the person and is what's typically used by organizations to try to motivate people: salary, praise, bonuses, etc. Intrinsic motivation comes from within.

[1] - https://www.amazon.com/Punished-Rewards-Trouble-Incentive-Pr...


Luckily in most businesses incentives are used by the people designing the incentives to

1) cost as little as possible to the business

2) "optimize" their own pay/career

(There was a study saying that over 80% of sales people would deny their employer a million-dollar contract if it netted them $500 personally, so compared to that this is nothing)

Which makes sure that the incentives are not aligned with the business goals.

Now I guarantee that the best way to destroy intrinsic motivation, bar none, is to provide extrinsic motivation for things that aren't aligned with the business' goals.


I think this is frequently short-sighted. Surgeons are intrinsically motivated to be surgeons and help people, and they are trained to think clinically (pun somewhat intended) about risk-reward. They realize that trying to help high-risk patients will ultimately result in them being able to help fewer patients.


It also kind of incentivizes the alternative as well: if you aren’t an amazing surgeon, you could work to ensure you exclusively take “hard” cases so you can claim all failures are due to patient difficulty rather than lack of competence (at least competence to deal with that difficulty). Then you can dismiss any claims you’re incompetent with “I have a high mortality rate because I’m willing to risk my reputation to save people who have been abandoned”...


I've listened to a physician make this argument (for a different metric, not mortalities). He sounded as though he expected everyone in the audience to disbelieve him, enough so that I wondered why he even tried.

So yes, could.


That only really applies when you're measuring side effects though, and not the actual thing intended.

In this case they should have had a metric that took into account the estimated prior probability of a good outcome given an ordinary surgeon.
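One common shape for such a metric (a sketch only, not necessarily how any real ratings body computes it) is an observed/expected ratio: each case gets a pre-operative predicted risk from a model (cardiac surgery has models like EuroSCORE), and the surgeon's actual deaths are compared against the deaths an ordinary surgeon would be expected to have on the same case mix. The numbers below are made up for illustration:

```python
# Hypothetical sketch of risk-adjusted mortality as an observed/expected ratio.
# Each case is (predicted_risk, died), where predicted_risk comes from some
# pre-op model and died is 0 or 1.

def observed_expected_ratio(cases):
    """Ratio of actual deaths to the deaths predicted for this case mix."""
    observed = sum(died for _, died in cases)
    expected = sum(risk for risk, _ in cases)
    return observed / expected  # below 1.0 means better than expected

# A surgeon taking hard cases: high predicted risks, some deaths.
hard = [(0.30, 1), (0.25, 0), (0.40, 0), (0.35, 1)]
# A surgeon cherry-picking easy cases: low risks, no deaths.
easy = [(0.02, 0), (0.01, 0), (0.03, 0), (0.02, 0)]

print(observed_expected_ratio(hard))  # 2 deaths vs 1.3 expected
print(observed_expected_ratio(easy))  # 0 deaths vs 0.08 expected
```

On raw mortality the cherry-picker looks far better; on the ratio the hard-case surgeon's record is interpreted against the risk of the cases actually taken. (As noted downthread, this just moves the gaming to the risk estimates themselves.)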


The problem is that in many cases, including this, you cannot directly measure the intended thing in any meaningful way. Your example is just another indirection which is still certainly somewhat playable and in addition involves estimation from insufficient input data (also known as guessing ;)).


Mortality rate is a side effect.

Survival is the intended effect, but that's much more of a binary thing.


Ironically applied to not-good hearts


"These doctors are vile dogs. If I was a physician I would take all the hard patients, heal them all, and be a hero of the people."

Yeah, if you were given one operation where the patient has a 100% chance of dying without it, but an 80% chance of you killing them if you operate and a 20% chance of them living after, maybe you'd do this once and get lucky. But try doing this twice and you have a 96% chance of killing a patient; three times makes a 99.2% chance of killing at least one patient. This probability quickly approaches 100%. If you kill a patient you get dragged into court, branded as a murderer by the opposing lawyer, and your career which you have spent your youth, your 20s and half a million dollars training for is down the drain. Taking risky cases, if that's not your niche, is guaranteed to have you out of a job and in debt from legal fees.
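The compounding is just independent-trial arithmetic, assuming each operation carries the same per-case death probability:

```python
# Chance of losing at least one patient after n operations, each with
# probability p_death of the patient dying.
def p_at_least_one_death(p_death, n_ops):
    return 1 - (1 - p_death) ** n_ops

for n in (1, 2, 3, 10):
    print(n, p_at_least_one_death(0.8, n))
```

With p_death = 0.8 this gives 0.96 at two operations and 0.992 at three, matching the figures above, and is effectively 1.0 by ten.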

To suggest that "bad hearts" are involved in this necessary calculation, is incredibly naive.


People die in surgery all the time, it's not nearly as big a deal for the surgeon as you're suggesting. ~5% chance of death within 30 days per major heart surgery means the average surgeon is losing several people in an average year.


I am pretty sure that the parent was referring to the fact that this article is about heart surgeries. Bad heart is used in the same manner as a bad knee or a bad tooth. It refers to the patient not the doctor.

But aside from that, I completely disagree with your comment. The approach you are defending, even if legal, is completely unethical and immoral. It should be something we chastise, not condone.


Well I think you're an idiot. There's only one viable "approach", determined by the laws of probability I described above. You can try to shame people into doing actions that would ultimately get them fired and sued, but you're only going to look like a self-righteous asshole and nothing will happen.


Hey, this is a little too ad hom...


Sorry. Touches a nerve with me when people moralize what they do not understand.


Yet another reason (among many) the US is such a backwards country.


In another sense, 'Goodhart' seems like a splendid cosmic coincidence. Instead of obsessing over targets, try to perceive the whole system. And who can do that? Good-hearted individuals.


The version I've heard might be a corollary: if you measure it, it will improve.

The subtext is that the system won't improve; rather, something in the system will be sacrificed to improve the metric. The end result (usually quality) won't improve.


At best it probably only identifies the people to replace.


Which is often another misguided response: https://deming.org/explore/red-bead-experiment


The measure can help you identify those willing workers by how they react to it though.


Some guy wrote a taxonomy of Goodhart's Law cases. This would be the "Adversarial Goodhart" case: "When you optimize for a proxy, you provide an incentive for adversaries to correlate their goal with your proxy, thus destroying the correlation with your goal."

https://www.lesserwrong.com/posts/EbFABnst8LsidYs5Y/goodhart...


The first example on that page, "Regressional Goodhart", is totally wrong. If U measures V plus some noise X, assuming V and X form a bivariate normal distribution, then the conditional expectation of V is maximized by selecting the greatest U. In fact, the conditional expectation is linear in U.

Not sure how much stock I can put into the rest of the article given this. Goodhart's Law can seemingly only apply if the variables are highly non-normal, or are negatively correlated (e.g. worse doctors are more likely to refuse the difficult operations than good doctors).


> The first example on that page, "Regressional Goodhart", is totally wrong.

I don't think your statements contradict anything that it says. For reference, here are all the sentences in the "Quick Reference" part, which I'm assuming is the part you read.

Regressional Goodhart - When selecting for a proxy measure, you select not only for the true goal, but also for the difference between the proxy and the goal.

Model: When U is equal to V+X, where X is some noise, a point with a large U value will likely have a large V value, but also a large X value.

Thus, when U is large, you can expect V to be predictably smaller than U.

Example: height is correlated with basketball ability, and does actually directly help, but the best player is only 6'3", and a random 7' person in their 20s would probably not be as good

Is any particular one of these sentences false?

> If U measures V plus some noise X, assuming V and X form a bivariate normal distribution, then the conditional expectation of V is maximized by selecting the greatest U.

The word "maximized" carries some assumptions. If the only thing you can do is select based on U, then that statement is correct. However, if you had some means of selecting directly on V, then it's extremely likely that this would do better than selecting on U.

Suppose you're selecting the top 10 people. Suppose X ranges 0-10 chosen by a die roll, and V ranges 0-10, and it happens there are ten people with V=10, a hundred people with V=9, and a thousand with V=8 (and a lot more with lower Vs). On average, you'll have ten V=9s who score U=19, ten V=9s and a hundred V=8s who score U=18, and so on; in order for selecting on U to perform as well as selecting on V, every one of the V=10s would have to roll X=9 or X=10, which is exceedingly unlikely.

It is true that taking people with high U scores yields people with higher Vs than taking people at random, and it is further true that taking the U=19s will give you better results than taking the U=18s or U=12s. It is simultaneously true that, when you take the U=19s, you'll be getting people whose X was 9 or 10, much higher than if you selected people at random or if you selected directly for high V. The first two sentences from the text state exactly this.

(One consequence of this observation is, e.g., if you're doing admissions based on some test score, and you're considering raising the required score by ∆U, you should know the effect will be to raise average V and to raise average X, with ∆V=∆U-∆X, and if ∆X is large, you may be disappointed in the results. This is simple regression to the mean.)
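The dice example can be simulated directly; a quick sketch (population sizes as above, plus ten thousand V=7s as the "lot more with lower Vs"):

```python
import random

# V is ability (more people at lower levels), X is an independent 0-10 die
# roll, U = V + X is the observed score.
random.seed(1)
population = [10] * 10 + [9] * 100 + [8] * 1000 + [7] * 10_000
people = [(v + random.randint(0, 10), v) for v in population]

# Take the top 10 by score U.
top10 = sorted(people, reverse=True)[:10]
mean_v = sum(v for _, v in top10) / 10
mean_x = sum(u - v for u, v in top10) / 10
print(mean_v, mean_x)
# Selecting on U beats selecting at random (population mean V is ~7.1),
# but the winners' X is inflated -- far above the average roll of 5 --
# so mean V falls short of the best possible 10.
```

The selected group's high scores are partly real ability and partly lucky rolls, which is exactly the decomposition the quoted sentences describe.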

I imagine you know all these concepts; I think you're interpreting the text as a stronger statement than it is.


On a less dire area of work, I’m starting to turn away from risky jobs/startups. It seems like there are more and more newcomers in tech these days, and they’ll reject you for not staying at gigs longer. If I get the impression a future manager will be inexperienced, or that they’d failed to build a team before, I hesitate now. It’s a little frustrating, but that’s just how it is.


Apologies for the silly comment in advance - but how ironic that "Goodhart" is applied in this case to heart surgeons.


I prefer to think that this happens when you pick a measure that ignores key aspects of the results. For example, would the results be the same if the mortality rate was complexity-weighted?


Thanks. A few years ago I asked a question on what this was called.


Came here to say this.


I'm not so sure Goodhart's Law is correct.

For example, if I target a high win percentage, is win percentage, then, not a good measure?

Mortality rates are a pretty good measure for surgeons.

The problem here is that they are inflating their measure by cherry picking.

edit:

Some measures should be targeted, the problem here is that it is done in an non genuine way


Goodhart's law deals with proxies - i.e. measures not of the direct target, which is hard to measure, but of something that is easy to measure and that you consider related to the actual target. If you need wins and measure wins, wins are not a proxy and thus Goodhart's law does not apply. But if, say, it were hard for you to count wins in soccer for some reason, and you measured time spent with the ball per player instead, thinking that if players own the ball all the time, they're surely going to win eventually - you'd get a team engaged in pointless passing around instead of actually scoring goals.

For surgeons, the direct target is for the patient to get better - or at least live longer and with higher quality of life. The proxy is the measured mortality of a specific surgeon. It is not the direct target - the surgeon can achieve zero mortality by not doing any surgery at all, but his patients would die or suffer from the lack of treatment.


Pro tip for those targeting a high win percentage. If you win your first game (beginners luck?), then stop playing and never play again.

That's what the article is describing, surgeons won't even play the game, they won't even attempt to help certain people because it might hurt their "win percentage".


another pro tip: compete only against weaker opponents (this is common in Boxing).


Wins likely come from systems with fixed rules that don't reward innovation, so winning by the rules of a system doesn't necessarily mean a positive outcome. An example might be the system that led to the housing crisis. Certainly a few people who gamed things the right way "won" big, but only at the expense of millions of homeowners.

Mortality rates are one measure of many that can tell a story, but they're like incarceration rates in that they've become a perverse incentive that exacerbates an issue. If we say that putting more people in prison is a good thing, that's predicated on an assumption that only those that are being put in prisons are people worthy of being there, whose freedom is more costly to society than their imprisonment. When a district is rewarded for putting people in prisons, what types of things might you expect to happen to the legal systems and populations in those places? Would prosecutors then be incentivized to trump up charges and force people into serving prison time unnecessarily to make themselves look like bigger "winners"? Might police make more frivolous arrests or even stir up trouble in communities to paint a picture of rampant criminality so they can look to be "tough on crime"? Wouldn't you expect to see an all-causes decrease in long term crime?

The problem isn't just cherry picking, but if people can game a system built around limited and gameable measures, then it's going to encourage min-maxing for profit.


the problem here is that mortality rate is not correlated/normalized with the patient risk factor. the problem lies in the performance indicator itself, not strictly in people trying to optimize for what they are asked for. it is likely inhumane to refuse patients with slightly above average risk - even if there are some considerations to be had, like having replacement organs go to waste, but that is already covered by the donor list queues - but given that people can find themselves out of work for having bad mortality rates, optimizing for the performance indicator is the only logical conclusion.

also, normalizing for the patient risk will likely just lead to people overstating patient risk, because once you reduce performance to an index, people will focus on what's measured before on what the intended goal for the measurement is.


Physicians already have good reasons to overstate the patient's severity. It's part of the payment formula used by CMS. But there are plenty of incentives not to lie such as jail time.


I'm not an expert in cardiac medicine or anything, but couldn't patient risk be determined by objective factors like patient age and history of previous heart issues?


This is like asking why software project estimation can't just be done by looking at the size of the project to be built.


I don't see how the analogy works. The entire point is you have a medical record with objective characteristics that can be tied to the risk of any given procedure.


My entire point is that isn't true at all - you have a medical record with some objective characteristics, but it is not possible to calculate the risk of a procedure based on that information. Just like in software, we have a project requirements document with many objective characteristics and we are unable to calculate the time required based on those characteristics.


So what is the risk calculation based on? Presumably the doctors refusing to do procedures they deem high-risk are not deciding completely at random. The risk calculation doesn't have to be perfectly accurate, and it seems to me as though heart procedures are better defined than software projects.

(And besides that, although software estimation is notoriously inaccurate, it is still done and written into contracts all the time, because it serves a useful purpose that outweighs its inaccuracy)


Another way of looking at it is that the patients who are refused treatment count as losses. So the current system is not actually maximizing wins, it's maximizing win percentage for doctors.


Ya, I went down this line of thinking too. It does seem like this too might be a poor incentive, because now doctors would operate on everyone to try and claim wins, even if the likelihood is exceedingly low. There’s probably some measure like “healthy six months from now” that incentivizes them to try in situations with high mortality where operating is the only chance to improve things but keeps them from operating if the risk is too high.


Goodhart is talking about life, not a self-contained system like a game.

>Some measures should be targeted, the problem here is that it is done in an non genuine way

The "not in a genuine way" is exactly what he's talking about.


Your question was the same question I had. The wikipedia entry didn't seem clear to me for what the law is and is not.

I disagree with the downvotes your comment's received.


Mortality rate doesn't account for the outcomes of surgeries which are declined due to risk.


In your scenario, the equivalent for win percentage would be trying to set an easier schedule.


Low mortality is a good damned measure.

I don't want to be operated on by some yahoo surgeon who unconditionally operates regardless of risk and has a high mortality track record as a result.


Then maybe mortality rate should be attributed to doctors not operating as well. Low mortality is a good metric on patients, not doctors.

But on the other hand, I don't want to be treated by a doctor who cares more about his sellable stats than saving lives.


Maybe it's a good metric for hospitals. There are other pressures discouraging hospitals from rejecting patients altogether, or such is my impression, anyway.


You say that as if you had a choice. The point of the article is that you don't. That yahoo doctor is the only one willing to operate on you. The rest -- presumably the smart ones -- are playing the game to win, so they'll sit out this turn, because you're a bad bet.


The point is that if you are already dying then you would prefer the surgery instead of doing nothing. The doctors seem to have no problem with actually doing the work but don't like their stats ruined, which is rather disconcerting given the consequences.

Your comment also shows that patients are just as much of an issue by focusing on the metric more than the cases involved.


If your choice is a high-risk operation or death, no-one wants to operate on you because of this rule.


You are committing a common fallacy - when evaluating risk, you are considering only one of the alternatives and forgetting the risk of the other. Yes, nobody wants to be operated on by a yahoo. But mortality is not a good measure of yahoo-ness, because in some cases not operating carries larger risk of death than operating. And if a surgeon decides to operate in this case, the decision is right even if resulting risk is high - because the alternative is even higher risk. Considering only mortality rate of operations ignores this. It is a variation of survivorship bias, only on the opposite side.


You presumably want the metric (operation, mortality rate); it's useless to consider mortality across the board, because that assumes all activities are equally risky. Otherwise, you end up like this, just entirely removing the option of high-risk activities, regardless of the potential benefit.

Essentially, the doctor has decided that he doesn't want to take the risk of killing you, even if you're willing to accept it. Why should that be the doctor's choice to make, and why is he being given this additional risk, when the benefit/loss should clearly be yours?


A doctor might have a 5% mortality track record for a surgery that generally has a 20% mortality risk. I feel like you are pushing it to an extreme with this yahoo doctor example.


It becomes a bad measure because once it's a target people start to manipulate it in order to hit the target. In this case if a surgeon refuses to take anything but easy surgeries their stats will look amazing but it says very little about their actual abilities.


but he'll become good at accurately measuring risk, so if he agrees to operate on you, you can be certain that your case is simple and your odds of dying on the table are slim.


Some doctors have high mortality records because they specialize in treating deadly diseases. And you'd rather be treated by a doctor who has only ever treated non-fatal ones? Well, that'll work if you never have anything worse than a common cold.


When the option is "or die" I think you'd change your tune.


I'm so happy that this system of 'likes' works for some people.


So if it's difficult we shouldn't try?


Finger ----------------> Moon

^ You're looking at this.


The advent of public reporting of outcomes after intervention for myocardial infarction (heart attack) led to an increase in mortality in Massachusetts and New York.[1]

1 = https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4368858/


A better way to make this claim would be to plot the mortality rates for the states over time and show that the rates in those places requiring reporting begin to diverge as the new policy is implemented.


They effectively did that: they compared public reporting states with other geographically adjacent states.


There's nothing like Boston or NYC in the control states. If the effect is as big as they seem to think--a 21% increase in mortality--that should be visible on a plot of deaths over time.


It was not heart surgery, but after being diagnosed with lung cancer, my mother had one of her lungs removed by a surgeon who assured us he'd be able to get all the cancer. She had a miserable, painful recovery that I would not wish on anybody. Within a week or two of her recovering to the point where she could walk short distances, the cancer was found in her other lung and went on to kill her within a few more months. She may have gained several months of life from this operation, but it was a painful few months without dignity.

It was 20 years ago, so my memory is fuzzy, but I don't remember ever being presented with anything but certainty from the surgeon that he could cure her. I think that patients and surgeons need to have a better idea of the realistic chance of success from a major surgery in order to make an informed decision. Perhaps if surgeons at least have to worry about their own numbers, they might hesitate before attempting cases like my mother's.


Sadly, it seems that your mother would not have had a better outcome with or without the surgery - if the doctor had said "I'm 75% sure that the cancer is contained in the one lung, so removing the lung will remove the cancer", would that have changed the decision to have the surgery? What if it was 50%? What is the threshold where someone with a fatal illness will accept the risk for a possible cure?


Perhaps the patient should have the right to decide what that threshold is for themselves rather than having the information hidden from them.


It's difficult to judge from a twenty year old story, but certainly these days patients do not have any information hidden from them. I had a family member with cancer, and the doctors were very clear that an operation might get all the cancer, but that they could not guarantee anything.

They left the decision in our hands, and in many ways that's not better - how on earth are we supposed to decide? They also can't tell you with any certainty what the after-effects will be because no-one knows.


It’s a known Freakonomics effect that doctors treat themselves much less in case of cancer. They don’t want to just prolong their life if it’s only going to be painful longer, because they know the probabilities and the outcomes better.


> because they know the probabilities and the outcomes better.

Probabilities and outcomes, or the process? Patients do get pretty good estimates and information these days. They don't have a good idea what chemo feels like though and what people think about the side effects. Doctors get that from seeing their patients.


What information? Apparently the doctor told them he could do it. If he really believed in himself, then he told the patient everything she needed to know. Remember, the other option is "go die". So assigning a percentage chance on the successful outcome of a surgery is utterly meaningless when the alternative is a 100% "go die".


respectfully, this isn't the way to conceptualize any medical therapy. it's never "this will save you" or "go die", it's always "you will eventually die, this might forestall it, we're never sure, and there's a big list of potential drawbacks."

a person who can be helped by a medical intervention will die no matter what, but if they get an organ transplant or a pacemaker or continuous dialysis or a cancer removed, they may live a little bit longer, as far as scientists observe, although sometimes with significant difficulties. immunosuppression and chemotherapy both deactivate certain organ systems.

no one can see the future. cancer treatments seem to help some people and be useless for others, and it might all come down to something like, "uh cancer stem cell #13491340 was not destroyed, and there was metastasis." the idea with tumor excision is usually to prevent there from being so many potentially-metastatic stem cells.


That's not true at all. Would you want to have surgery done if it had a guaranteed 3 month recovery period of incredible pain, a 5% chance of curing you, and a 95% chance of doing nothing, when your alternative is "nothing will hurt, but in 12 months you'll drop dead"?


Not really. A treatment can decrease quality of life to the point where you don't want to live, or perhaps bankrupt your family for a small chance of success. This logic only holds if you assign infinite value to "live at any cost"


Realistically, you're going to die either way. We know that for sure. So what's really important is some notion of probability of quality time.


Ultimately, it's about responsibility: is the patient allowed to take responsibility for the decision, or does the doctor take that opportunity away by not presenting the information (or presenting it in a way that doesn't allow the patient to actually decide)? This may sound odd, but I've seen deep emotional scarring in friends and family who went through major surgeries, even when the operations themselves had medically "good" outcomes... and all because the time and care wasn't taken to give the patient autonomy over their own treatment.

Making a decision vs having one made for you are very different things, even if the outcome of the decision is the same.


I do not intend to diminish the suffering your mother has gone through or, in consequence, your mental agony. I myself have watched a close relative suffer. Treatment that just prolongs the agony is hellish and pointless.

Having said that, I just wanted to add a small message of hope for those in similar situations. At least in the case of lung cancer, the situation is vastly improved now. The variants of lung cancer that are most prevalent now can be managed with oral medication alone, and patients can recover to go on and lead normal lives. This is from experience with another close relative who suffered terribly from lung cancer (mainly because diagnosing lung cancer is very hard, as the symptoms show up very late and can be mistaken for other issues) and has now miraculously recovered and is back to his normal routine.


I've taken care of numerous people living with their lung cancer. It becomes difficult when they develop another cancer, e.g. breast. Do you do axillary dissection or just radiate the axilla? Anyway, melanoma, lymphoma and lung CA (and breast) have seen remarkable improvements in the last 20 years.

When I was an undergraduate I worked in the lab of Dr. Robert Bast to pad my resume - he went on to head cancer research at MD Anderson. He's the doctor that discovered the CA-125 ovarian tumor marker. Anyway, 26 years ago he told me he thought we'd have a cure for cancer in the next 10 years. I think he's off by 40-50, but we are making strides



I always tell myself that modern medicine is a gamble. No medicine at all is a death sentence though.

I work with software and we gotta give estimates in hours (because the managers didn't like complexity numbers), you know, like add a dropdown to the search page or whatnot, and I say yeah sure, it's simple, one hour... and then I end up in prehistoric spaghetti land for two days.


if it's any consolation, I'm a surgeon, and in my town, another surgeon's wife was diagnosed with lung cancer right after his last weekend of call. She underwent surgery / chemo / radiation and died within 4 months and later (he kept working bc his plans of taking time to see the world with his wife were ruined) he told me that she would have been better off just going on hospice bc she suffered. Hindsight, unfortunately, is 20-20.

Cardiac surgeons in the US have had their own database for >20 years, and vascular surgeons do too. General surgeons have gotten into it in the last 10 years or so.

https://www.facs.org/quality-programs/acs-nsqip

I'm the NSQIP champion for my hospital system and go to monthly meetings to review our system's data. There's a lot of controversy with the data. We show individuals their data as compared to their anonymized peer data. Recently, the credentialling committee wanted access to the data, and that is being discussed, but I'm against it because the data only samples your outcomes, not every case, so it can be biased (although as you get more cases sampled, hopefully, the pattern established becomes more relevant). Anyway, it can be humbling.

Moreover, all deaths / OR take backs / readmissions get reviewed by a hospital committee and in the 15 years I've been in practice, there have been 4 surgeons in town that have lost their privileges to operate at the hospital.


I guess I'm just shocked that the surgeon could offer that level of assurance about something like that.


Unbelievable! The five year survival rate for lung cancer is only 18%! Even when the cancer is localized the five year survival rate is only 55%![1]

It's irresponsible for the doctor to not give a reasonable prognosis!

[1]http://www.lung.org/lung-health-and-diseases/lung-disease-lo...


Maybe lung cancer wasn't as understood 20 years ago as today.


we should find this person and sue them for what I'm sure is a 100% accurate retelling of a 20-year-old story on hacker news.


> I guess I'm just shocked that the surgeon could offer that level of assurance about something like that

I'm not. That's been my experience with pretty much every surgeon, and even a fair amount of the non-surgical doctors. Letting patients see uncertainty and indecision is not part of the medical school culture, as near as I can tell.

It behooves anyone with serious medical issues to do independent research. (God help you if you don't already know how to dig deep into literature outside your own field.)


Ultimately, we need to compare apples to apples by building rating systems that account for the difficulty of treating patients with various ailments, at various ages, with various complications.

Think of it in terms of competitive diving, gymnastics, or snowboarding: Your scores depend upon the difficulty of the moves you attempted combined with your performance of those specific moves. The more difficult the move, the more you're compensated for even attempting it when it comes to scoring.

The trick then is to prevent physicians from gaming the system by exaggerating the difficulty of the patients they're dealing with. You'd need to use as many objective metrics as possible and possibly some system of having different physicians assess ratings than the ones performing the procedures - maybe in some kind of double-blind fashion to prevent any kind of coordination strategy.


> some system of having different physicians assess ratings than the ones performing the procedures - maybe in some kind of double-blind fashion to prevent any kind of coordination strategy.

That would simply result in an unwritten rule where everyone gives everyone else good ratings.


The "objective metrics" part is key. Medical records are generally written in a way that weaves a standard narrative of the doctor as either a hero/savior or a valiant defender of a lost cause.

Doctors only get investigated when derogatory facts accumulate to an extent that this "standard narrative" collapses. It's not really an objective or scientific standard.


The article itself alludes to a 'EuroSCORE' that purports to rate the risk of mortality beforehand. It seems like the mortality ratings should be risk-adjusted. A double-blind system as you suggest could work like this: First, the rating system would be opt-in on the part of the participating surgeons. It seems likely that patients would prefer rated surgeons, so there would be an incentive to participate but it would not be mandatory. Those who belonged to the rating body would be required to do anonymous risk assessment on some number of cases per month. They wouldn't know who the patients are or who is treating them, they would only get anonymized records. Their risk assessments would be calibrated in the short run against other assessments, and in the long run against outcomes. These calibrated risk assessments would be used to adjust the score of the treating surgeon on the outcome, so that they would be penalized less for mortality in patients with a very high assessed risk.
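To make that concrete, here's a minimal sketch (made-up numbers, nothing like a real EuroSCORE model) of the kind of risk adjustment this implies: score each surgeon by observed deaths divided by the deaths expected from the anonymous risk assessments, rather than by raw mortality:

```python
# Hypothetical risk-adjusted mortality score: compare a surgeon's
# observed deaths against the deaths expected from pre-assessed risk.
def observed_expected_ratio(cases):
    """cases: list of (assessed_risk, died) pairs, risk in [0, 1].
    Ratio < 1 means fewer deaths than the assessors predicted."""
    observed = sum(1 for _, died in cases if died)
    expected = sum(risk for risk, _ in cases)
    return observed / expected if expected else float("nan")

# Surgeon A: 100 easy cases (1% assessed risk each), one death.
easy = [(0.01, False)] * 99 + [(0.01, True)]
# Surgeon B: 100 hard cases (25% assessed risk each), 20 deaths.
hard = [(0.25, False)] * 80 + [(0.25, True)] * 20

print(observed_expected_ratio(easy))  # about 1.0: exactly as expected
print(observed_expected_ratio(hard))  # 0.8: better than expected
```

On raw mortality B looks twenty times worse than A; on the risk-adjusted ratio B comes out ahead, which is the whole point of calibrating against assessed risk.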


This topic was discussed in the last EconTalk [1] podcast featuring the author of a recent book “The Tyranny of Metrics” [2]. I think one of the author’s interest insights is the distinction between metrics for diagnostics and those for incentives.

1. http://www.econtalk.org/archives/2018/04/_what_is_the_ap.htm...

2. https://www.amazon.com/Tyranny-Metrics-Jerry-Z-Muller-ebook/...


It's what I hate about the legal system. Prosecutors don't care about justice, they care about a record of putting the most people away.


In Australia, we have a neurosurgeon Charlie Teo that is well known for operating on "inoperable" brain tumours. While he has a lot of respect in the general community, to many of his surgical peers he is seen as a reckless maverick. I would think his stats on average are bad, but because he will tackle outliers he has won the support of the people that matter - his patients.


66% of 115 specialists polled did not refuse a difficult operation.

33% thought about how it would impact their statistics, or put more generously, thought hard about the probability of the patient surviving presumably quite serious surgery.

Yes mortality is a blunt statistic for a surgeon, but perhaps giving people pause for thought about the odds of success is no bad thing.

Counterpoint: would you rather not know how many cases your surgeon has done that year, and the issues they have had?


A large portion of the people undergoing heart surgeries still face a high risk of death if they don't have the surgery (with the surgery bringing the chance of a longer, more comfortable life).

Here's why your hot take isn't the best take:

“About 30 percent of them said they had turned patients down for surgery even when they knew full well that surgery was in their best interest.”


One problem is that fee-for-service incentives strongly distort what people think they "know full well."


Correct me if I am wrong, but don't doctors in the UK get paid a salary instead of the fee-for-service model prevalent in the US and Canada?


For NHS hospital doctors (including heart surgeons) that's true. General practitioners may be salaried or equity partners (the system is largely by contract to doctors' offices rather than directly employed). There is also a small percentage of non-NHS private work, though this is mainly in routine elective work rather than high-risk heart surgery.


Here's the converse of that, though: why do we think it's okay to expect a team of doctors and nurses to be okay with almost assuredly killing someone during surgery? They're not robots, "it's their job" is not an acceptable answer.

Even if the surgery is in a patient's best interest, if the odds of killing them are all but guaranteed then it's most definitely not a matter of looking at the patient and rationalising it with "they will die otherwise anyway". It's not just about the patient.

People who make this argument seem to forget that there's also an entire team of medical professionals that your rationalisation says should be okay with going into a surgery knowing they are almost guaranteed to kill this patient. They have the stats, the stats say "this person will die under the knife", many more lives are affected in this decision than just the patient.

So expecting them to just do the surgery instead of going "No. This will kill the patient, I don't want that on me and my team" is very far from an okay attitude towards fellow human beings, and leads to terrible medical practices.


I don't see where the article says "the odds of killing them are all but guaranteed", so I don't really appreciate you stuffing that meaning into my comment. I highly doubt that surgeries at that level of risk are actually in the medical interests of the patient (the standard I used).

Are you speaking from further knowledge of the statistics of the cases where surgery is refused? If so, why not drop that instead of the lecture?


But it does not have to be that extreme. Let us take a made-up example with two categories of patients. In one, the patients are healthy enough that a positive outcome of surgery is almost guaranteed; almost all survive. If you operate on people like that you will have excellent stats.

However, let us add someone who is really sick and will most likely die within 6 months if they don't get surgery. They are in bad shape, so surgery has a 25% risk of killing them. Most patients would probably want to take that gamble, but for a surgeon who frequently operates on patients like that, it would mean their stats going from almost perfect to pretty bad. You don't want surgery from a guy where 10% of his patients die from heart surgery when you can get another guy where only 0.1% die.
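A quick back-of-the-envelope sketch (made-up numbers, nothing like real surgical data) of how badly the raw stat punishes the surgeon who takes the hard cases:

```python
# Raw mortality rate mixes case difficulty into the surgeon's score.
def mortality_rate(deaths, total):
    return deaths / total

# Surgeon who only takes healthy patients: 1 death in 1000 cases.
cherry_picker = mortality_rate(1, 1000)       # 0.1%
# Equally skilled surgeon who also takes 100 high-risk patients
# (25% risk each, so ~25 extra expected deaths): ~26 deaths in 1100.
takes_hard_cases = mortality_rate(26, 1100)   # ~2.4%

print(f"{cherry_picker:.1%} vs {takes_hard_cases:.1%}")
# The second surgeon looks ~24x worse on the raw stat, despite being
# the only one offering the risky patients a realistic chance.
```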


If their operations were peer reviewed for oversight purposes (assuming surgeons can or would be willing to self-regulate without bias), so the results have confirmed qualitative meaning associated with them, then yes, I'd like to know their history. Their full history.


Don't doctors and surgeons in the US disclose the potential errors and things that might go wrong? I have had several ops in the last 18 months in the UK and they always go through what can go wrong and tell you the odds.


There's generally a consent process but the patient is often in no position to decline or comprehend what they're signing. In the case of heart surgery, or even a catheterization procedure, they may not be conscious to consent.


Well, when I had my ops (in the UK) they always went through what could go wrong and the odds; for my major one there was a long list of possible outcomes, which the surgeon (cheerfully) ended on a 1/5000 chance of death.


If they're misleading stats then I'd rather not know them, since I might do something stupid as a result.


Athletes do this. For example, players won't attempt end-of-quarter half-court shots since that will lower their percentages, or QBs will take sacks to avoid lowering their QB ratings. What they do in sports is keep advanced stats. Instead of just giving a pure mortality rating, have a mortality rating by surgery type.


Maybe in other sports this happens, but I seriously doubt QBs engage in that. First of all, taking a sack is not pleasant. In the extreme it can take you out of the game, or land you on the IR (injured reserve). Drew Bledsoe was replaced by Tom Brady after such a sack, and was never the star he had been before. Second, the number of sacks a QB takes is a statistic in itself. Third, the top stats for a QB are, in this order: touchdowns, interceptions, passing yards, and completion percentage. If a QB takes a sack to avoid an interception, I'm 100% happy with that. If he takes a sack so that his completion percentage is higher by 0.01%, I would not be, but then completion percentage is reported at the 0.1% level of precision. There is simply too much physical pain in taking a sack for a QB to trade that for an imperceptible gain in a pretty unimportant stat.


some QBs like aaron rodgers will take a sack instead of throwing the ball away.

rodgers has had a 50-sack season twice in his career and has a career sack rate of 6.96. this is reflected in his interception rate (which is the best in the nfl). he just prefers taking a sack over attempting a pass that could be intercepted -- and everybody looks at those int/td numbers and doesn't look much at sack numbers (because sacks are usually not the qb's fault).


>For example players won't attempt end of the quarter half court shots since that will lower their percentages

Durant is particularly bad at this. In an AMA on Reddit, Daryl Morey said he tells his players they keep their own internal stats (most teams do) and ignore heaves. The idea is they’ll be more likely to take them knowing the team won’t hold it against them when negotiating a contract.


Then they just dodge the hard cases of each specific type of surgery. It's probably still an improvement though on just avoiding a surgery type altogether.


Then you track the deferral rates.


Then they start operating on people who don't really need it.


...which punishes doctors for serving high-risk population.


Guarantee QBs are doing everything possible to avoid being sacked, and would gladly give up rating points in exchange for not being hit.


I could say you're wrong, but then it would simply be my opinion vs your opinion, and the discussion would end there.


One could break out the quarter half court shots as a separate statistic.


At this point it’s very likely all teams do. The stats they’re using are nothing like what was printed on the back of basketball cards when we were kids.


Shameless plug - I wrote an essay recently on how any metric becomes a vanity metric http://dimitarsimeonov.com/2018/03/22/the-vanity-metric-para...

Some excerpts:

- having clear, well defined metrics is the single largest driver of progress within a company

- Any metric sufficiently optimized becomes a vanity metric

- Building products is not science. What makes a good product is usually dependent on so many factors that are subject to change and evolve.

- Over time, a product no longer stands to die if these metrics degrade.

- According to Edward Tufte, people and institutions cannot keep their own score

Hope someone finds it useful!


Econtalk had a good interview this month on this general phenomenon discussing the book "The Tyranny of Metrics"

http://www.econtalk.org/archives/2018/04/jerry_muller_on.htm...


Why don't we keep track of refusals as well?


Not a bad idea, per se, but then which is better - the doctor with 0% refusals and 20% deaths, or the one with 20% refusals and 10% deaths?


Include the outcomes of the refused cases.

Note that this also works to improve stats on docs who choose hard cases.


You compare it to the average death rate of a surgery.


The incorrect assumption your readers are making is that heart surgery is a good thing in all cases. America is the leader in futile medical procedures: it may well be a good thing to have fewer risky procedures on patients who were more likely to be harmed. "Primum non nocere" may be advanced!


In a related vein, do surgical procedures involve any type of quality control, where the work is independently audited? If so, I wasn't aware. Quality control is such an important cogwheel of processes in many other industries, yet I don't hear much talk of it in relation to the medical profession.


In the US, if the surgery is in hospital:

- Well-regarded hospitals will vet surgeons before granting privileges.

- Average hospitals give out privileges fairly easily if there are no actions against a person's license.

- There is a "collegial" review of big screw-ups that carry major reputational risk.

- As with any fee-for-service firm, "rainmakers" are highly sought after and get away with more.

- Privileges are difficult to take away once granted. (Have stronger legal protections than academic tenure in some states.)

If the surgery is in an ambulatory surgery center or doctor's office:

Basically anything goes. Ice-pick lobotomies, tonsillectomy mills, boob job factories in strip malls....all have happened in recent US history. About as well-regulated as traveling carnivals.


I was thinking of something more akin to a routine inspection program carried out by independent personnel, applicable to all procedures, not just the reviews necessitated by screw-ups.


As far as a performance evaluation that directly scores a surgeon's judgment and whether the procedures s/he is performing are actually beneficial...in the absence of a big screw-up or complaint, I don't believe there is any forum to do that after training is complete.


There are process checklist reviews (http://qualitysafety.bmj.com/content/23/4/299) and medical review panels (https://mrp.ky.gov/Pages/index.aspx).


Yes, they have processes and analysis for quality purposes.

You can see one example here

https://www.facs.org/advocacy/quality/phases


Yes, at least in the UK, we have a series of national clinical audit programmes.


it's been said a few times on /r/medicine: surgeons have their lawyer in mind when operating. They're skilled, but they first avoid anything that could damage their own life irreversibly. Of course to an extent this goal aligns with the patient's, but not always, which is a bit sad on the ideal side.


Can someone really sue their surgeon in the US?


Absolutely. You can sue for pretty much anything. Winning is another story, but doctors don't generally make sympathetic defendants. Juries know they have money and that they pay huge insurance fees to cover these cases. Of course, after losing a big case where the insurance company pays out millions of dollars, they likely become uninsurable and unemployable. But hey, the money is good until you hit the anti-lottery.

My father was an anesthesiologist (who retired in good standing), and I remember him telling me about a case he read about involving a Caesarean section. The surgeon was using an electrocautery pen to sear closed the ends of blood vessels. The surgeon set down the pen on the surgical cart, lifted the baby out, set the baby on the cart, and the baby's heel touched the pen. Now the baby has a small scar on the bottom of his/her heel for life. The parents sued the surgeon, the cart nurse overseeing the cart, and the anesthesiologist, who was close to the surgical cart. All 3 defendants settled out of court, since babies almost always win jury cases against rich doctors, regardless of merit.

The litigant's lawyer would almost certainly bring up a below median survival rate as evidence of a pattern of gross incompetence.

Also, of course, in the U.S., I believe only governments are immune to lawsuits (sovereign immunity) without their consent. The courts have consistently ruled that many rights, including the right to sue, cannot be legally waived by contract. In a similar way, if you sell yourself into slavery by entering into a contract that waives your right against unlawful detention, that contract cannot be legally enforced.

My grandfather was an anesthesiologist. My dad was an anesthesiologist (and did well enough on the MCAT to go to med school after 3 years of undergrad... practiced medicine without an undergrad degree... med school is too competitive today to do such a thing). I was a good student: I graduated from MIT. However, my brother and I saw all of the BS and stress (anesthesiologists have a high suicide rate) and constantly rotating sleep/work shifts and both went into engineering.


If he really had a pattern of gross incompetence, wouldn't there have been some other process to stop him doing that? Are courts really that incompetent themselves that they can't understand such simple statistical errors as using survival rate to measure competence?


It's not about gross incompetence; it's about a jury's perception of incompetence, colored by their perception of the doctor and their perception of the grieving family members. If it comes to a jury trial, the court relies on the jury to make legally factual findings, so the courts end up relying upon the statistical abilities of the median juror, colored by the aforementioned biases.

There are medical boards to remove incompetent doctors, but those influence jury trials primarily through submissions of findings as evidence for consideration by juries. A good lawyer will portray medical boards as a bunch of doctors biased against passing judgement on fellow doctors.


If you undergo a procedure whereby you could die, it doesn't mean every cadaver is a paycheck. It means that if your death is a consequence of negligence or malfeasance, your family has a recourse.

This is entirely reasonable, and I cannot imagine how you could have a medical system where you can't.

What do you do if your surgeon operates drunk and kills you? Hope that a bunch of other doctors don't close ranks and protect their own at your expense? Why would anyone have any faith in their fellow man. People are terrible.


> What do you do if your surgeon operates drunk and kills you?

Can you see that this introduces a level of recklessness that isn't present in the parent's question? The parent is asking about normal death by medical error. That's likely to have complex causes and is rarely as simple as "the doctor was negligent".

But assuming that a doctor does kill someone: most relatives don't want a payout. They want to know that the mistake won't happen again; that the people and the organisation have learnt from the death; and they want an explanation of what went wrong, along with an apology.


I can't recall what countries these guys were from.


Easily solvable. Have a person whose job it is to rank how likely a person is to live. He gets paid based on how accurate he is.

Doctors would then be ranked as an offset from this previous mortality prediction.
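Sketching that idea (hypothetical scoring choices: a Brier score for the risk-ranker's accuracy, observed-minus-predicted deaths as the surgeon's offset):

```python
# Score the risk assessor with a Brier score (lower = more accurate),
# and the surgeon as observed deaths minus predicted deaths.
def brier_score(predictions, outcomes):
    """Mean squared error of predicted death probability vs outcome."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

def surgeon_offset(predictions, outcomes):
    """Negative = fewer deaths than the assessor predicted."""
    return sum(outcomes) - sum(predictions)

preds = [0.9, 0.5, 0.1, 0.3]   # assessor's predicted death risk per case
deaths = [1, 0, 0, 0]          # 1 = patient died

print(brier_score(preds, deaths))     # ~0.09: the assessor's accuracy
print(surgeon_offset(preds, deaths))  # ~-0.8: beat expectations
```

Paying the assessor on the Brier score rewards honest calibration, and ranking the surgeon on the offset means taking a predicted-90%-risk case and losing the patient doesn't hurt their score much.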


This is effectively how the Federal Reserve is scored.

It targets inflation and unemployment goals.

Goals are measured by the Department of Labor's Bureau of Labor Statistics (BLS).


This incentivizes the doctor to maximize the appearance of risk in the medical record.


The one doing the estimate should obviously not be the same doctor doing the surgery.


The one doing the estimate will do so based off the records generated by the doctor doing the surgery.


Not for referrals.


The only way to get reasonably accurate results with this is to have two surgeons do completely separate diagnostics in a double-blind where neither they nor the patient knows which one is going to be performing the surgery. That seems unworkable at scale.


Surgeons don't do diagnostics, as a general rule.

Internists, pathologists, triage, GPs, and trauma do.


This exact topic was discussed in the 1968 novel A Case of Need by Jeffrey Hudson. It was his first book, written while he was a medical student at Harvard. The novel raised a stir, as it was critical of Harvard faculty and in some ways the practice of medicine in general. The book eventually won an Edgar Award.

Some may know Jeffrey Hudson better by his given name, Michael Crichton.

https://en.m.wikipedia.org/wiki/A_Case_of_Need


You cannot compare crude outcomes. You have to adjust for so called "case-mix". For mortality after cardiac surgery, the risk can be calculated using the EuroSCORE II. http://www.euroscore.org/calc.html And even then, often the numbers for individual surgeons are too small to draw conclusions.


That means there's an opportunity / gap in the market.

Demand for risky operations by motivated buyers, should lead to some doctors taking on the role / label of risk-takers / explorers.

Since they can get the customers that other, risk-averse doctors are turning away, and pull in rare-treatment seekers on their own.


The only way this changes is removing the choice from the surgeon doing the cutting.

Sounds a bit dramatic, but if they think action is genuinely inappropriate, they should make the case to their colleagues, or a Multi-Disciplinary Team, and let them have the final say. Together they should be able to enforce the Royal College and NICE guidelines for operating, and the CQC (the main monitoring body) should be able to work out, both from data and on-the-ground inspectors, whether hospitals are doing as they should.

I realise that's pretty UK specific but the ACS should be able to achieve something similar.


Relatedly, I've met many people who had a surgery where they used to live and, after moving, other surgeons won't touch them. They have to travel back to where they used to live to get care. Seems really unethical.


Reminds me of why some devs will not touch critical pieces of code: the next time a bug comes up in that module, git log shows they were the last editors and they will be asked to take a look (even if the bug is unrelated to them).


We chose to have our daughter delivered at a hospital that is renowned for its neonatal ICU. It's obviously a great place to give birth, but if you look on paper, their c-section and other intervention rates are above average. Which makes sense when you understand that they are equipped to handle the highest risk pregnancies.

A similar phenomenon occurs in education. I taught high school in Baltimore, where the vast majority of my students were high risk. Baltimore has a few renowned high schools, but by and large, they serve the easiest students.


There was a counterintuitive study of obstetricians, which I cannot find right now, showing that in one state over many (10?) years the ones who did the most C-sections were the best doctors. When researchers looked at the underlying data, they realized these doctors treated the patients who had the worst predicted outcomes and beat the national odds. I think they theorized that these doctors had the best ability to choose the optimal method of birth and were not afraid to choose a C-section when it was necessary.


There's a very strong relationship between experience (number of procedures performed) and outcomes. High volume is almost always better.

(This may of course favour unnecessary ops.)


Yes, I think this was particularly against the grain because mothers are commonly told to look for a doctor/practice who does not resort to a C-section right away, and judge this by the number of C-sections a doctor/practice performs.


Hospital rather than doc rates are far more useful here.

Look to c-sec vs. vag deliveries.


This is the classic problem where people use averages as a metric, but ignore the fact that variations are not distributed evenly.

I remember looking at the economics of a clinic for our product. Someone said "let's just give them a 5% discount, as on average they will break even."

That works if you look at how much they make on average, but it ignores the fact that every clinic has a different mix of insurance companies and their own economics are all different.


My mother went two years with a horrible stomach condition and couldn't get anyone to treat it until she wound up in the hospital and they were legally required to treat it. It was for exactly that reason: risky surgery looks bad, even if it's necessary. When there aren't a lot of doctors who can perform the surgery in question to begin with, say only two in your state, then you might just die.


This isn't surprising because humans will do what they're incentivized to do (speaking mainly in a professional setting here).

Might be effective to add a measure to the transparency, something to the effect of "likelihood to perform life-saving but risky procedures." A high win-loss record with a lower risk threshold would then be weighted lower than a so-so win-loss record with a higher risk threshold.


1/3 of heart surgeons violate medical ethics. That shows how flimsy and easily corrupted doctors are. That they don't really care about doing what's best for their patients' health as soon as an incentive makes that inconvenient. I hope some more data will reveal who has statistical anomalies in who they're turning away so the frauds can be identified.


Fertility clinics are subject to the same pressures. They need to keep their success rates up so they avoid cases with poor prognoses.


We bundle up too many conflicting jobs into the role of the doctor. Rating the difficulty and status of an incoming patient should be completely separate from the job of treating them, and all judgments about doctor effectiveness should be judged relative to those assessments.

(Credit: alerted to this misincentive by Yudkowsky’s recent book Inadequate Equilibria.)


Would it be possible to determine a "risk" factor and count that in the mortality ratings, or is this too simple?


There is probably already some adjustment for risk - anyone can see that a smoker in his 60s will have higher mortality than a non-smoker in his 30s.

The problem might be that the statistician doing the risk adjustment doesn't have 'skin in the game', so his/her assessment of risk is worse than the surgeon's. (Although if that were the case I would expect the statistician to be just as likely to overestimate the risk as to underestimate it, yet I never hear about surgeons favouring the difficult cases because they think they can improve on the expected mortality rate in such cases.)


There might be several decent ways (at least as a next step) to avoid this particular way of gaming the KPIs:

- use an assessed upfront difficulty (if available in measurable form) to adjust the outcome rating

- add an additional KPI for % of refused operations (perhaps again adjusted for difficulty)
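As a rough illustration of those two KPIs, here's a minimal sketch. All field names and the weighting scheme are invented for the example; a real system would need a validated difficulty assessment.

```python
# Hypothetical sketch of the two KPIs described above: a difficulty-adjusted
# outcome rating plus a refusal-rate KPI. Field names are invented.

def adjusted_kpis(cases):
    """cases: list of dicts with keys 'accepted' (bool), 'survived' (bool,
    or None if refused), and 'difficulty' (assessed upfront, 0..1)."""
    accepted = [c for c in cases if c["accepted"]]
    refused = [c for c in cases if not c["accepted"]]

    # Weight each death by how easy the case was: losing an easy case
    # counts heavily, losing a near-impossible one counts little.
    weighted_losses = sum(1 - c["difficulty"] for c in accepted if not c["survived"])
    weighted_total = sum(1 - c["difficulty"] for c in accepted) or 1.0
    adjusted_mortality = weighted_losses / weighted_total

    # Refusals weighted inversely by difficulty: turning away hard cases is
    # expected; turning away easy ones is what should raise a flag.
    refusal_score = sum(1 - c["difficulty"] for c in refused) / (len(cases) or 1)

    return adjusted_mortality, refusal_score
```

With that weighting, a surgeon who only ever loses very hard cases scores close to zero adjusted mortality, while one who refuses easy patients accumulates a refusal score.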


What would you prefer?

A surgeon who expects a poor outcome of your surgery:

a) decides not to operate on you?

b) decides to operate on you nonetheless?

I am in camp a


I'm definitely in camp B. 90% chance of dying on the table beats a 100% chance of dying without the surgery, every single time.


Quality of life matters in most of these marginal cases. If they are going to give you the Konno procedure (scrape extra tissue out of a thickened ventricle), there's tons of things that can go wrong OTHER than death. Death is assured, either way, for everyone.


It depends what the outcome is without the surgery. If I'm dead without it, I want them to operate regardless. If there's only a chance of death, or the surgery is aiming to improve quality of life (rather than prevent [imminent] death), it's more of a judgement call.


Metrics do drive behavior. Like, in the IT space, percentage of successful change requests. Encouraged you to power though even if things look bad. Because aborting is a 100% chance of a ding to the metric, while taking the risk is something less.

The stakes are lower, of course, but the idea is similar.


Alternatively, you have a good idea of what your mortality rate will be if a surgeon picks you for a case.


I firmly believe that this is why my mother-in-law died. She needed surgery to fix a hernia that basically left her unable to eat at all. If she did not have the surgery she was going to die. The surgeon stopped the surgery because the 'risk' was too high.

48 hours later she was dead.


You don't die in 48h from not eating. So many elements missing there, this is such a terrible depiction. I hope anyone passing by will have a bit of critical thinking.


They need to add a difficulty correlated metric to the statistics. Maybe BMI and age? I am sure IVF clinics wouldn't offer the procedure to anyone over 35 if the rather detailed statistics they are required to report didn't also include the patient's age.


Rather than try to figure out some classification criteria, use a predictions market.

Have surgeons predict their own chance of success. Preferably, gather multiple second++ opinions for each case.

The difficulty of the surgery is based on some combination of the different surgeons' predictions.

A surgeon's stats would then be based on his successful and unsuccessful surgeries, weighted against the difficulty.

Using this method, not only can you still rank the surgeons based on the successes of their surgeries, but also on the accuracy of their predictions, which will be in larger sample size and also perhaps a better indicator of their mastery of medical knowledge.
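Something like this could be sketched in a few lines. This is a toy version of the scheme above with invented data shapes; `difficulty`, `weighted_score`, and `calibration` are names I made up, and a real ranking would need far more care (sample sizes, strategic prediction, etc.).

```python
# Toy sketch: each case carries the probability-of-success predictions
# gathered from several surgeons, plus the actual outcome.

from statistics import mean

def difficulty(predictions):
    # Consensus difficulty: 1 minus the mean predicted chance of success.
    return 1 - mean(predictions)

def weighted_score(cases):
    """cases: list of (predictions, success) pairs for one surgeon.
    A success on a hard case earns more than on an easy one; a failure
    on an easy case costs more than on a hard one."""
    score = 0.0
    for predictions, success in cases:
        d = difficulty(predictions)
        score += d if success else -(1 - d)
    return score

def calibration(cases, own_predictions):
    # Brier score of the surgeon's own predictions: lower is better.
    return mean((p - (1.0 if success else 0.0)) ** 2
                for (_, success), p in zip(cases, own_predictions))
```

The Brier score is the standard way to grade probabilistic forecasts, so the "accuracy of their predictions" part falls out naturally.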


I imagine it’s extremely difficult to come up with a metric that captures everything about the case. BMI, age, tumor type, tumor shape, history of kidney disease, diabetes, where do you stop?


You don't need a metric which captures everything about the case - you just need a statistical model whose risk assessment is at least as accurate as the surgeon's assessment. This is not as difficult as it sounds - in Chapter 21 of Thinking Fast and Slow, Kahneman makes a strong case that simple algorithms are very often better than expert clinical judgement.
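The usual shape of such a model is an observed/expected (O/E) mortality ratio: predict each patient's risk from a handful of factors, sum the predictions, and compare against actual deaths. The sketch below uses a toy logistic model with invented coefficients; real cardiac risk scores fit theirs to large patient registries.

```python
# Minimal sketch of risk-adjusted mortality via an observed/expected ratio.
# The coefficients are made up for illustration only.

import math

def expected_risk(age, smoker, diabetic):
    # Toy logistic model: log-odds of death from a few patient factors.
    logit = -6.0 + 0.05 * age + 0.7 * smoker + 0.5 * diabetic
    return 1 / (1 + math.exp(-logit))

def oe_ratio(patients):
    """patients: list of (age, smoker, diabetic, died) tuples. An O/E ratio
    near 1 means deaths match what the case mix predicts; below 1 means
    better than expected, even if raw mortality looks high."""
    observed = sum(died for *_, died in patients)
    expected = sum(expected_risk(a, s, d) for a, s, d, _ in patients)
    return observed / expected
```

A surgeon taking only hard cases would then be judged against a high expected count rather than against the raw average.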


Can there be positive self selection and specialization of the surgeons? Those who don’t believe they can probably operate successfully don’t do that at all and the others get more insensitive.


If you’re a patient, wouldn’t this still mean starting with the lowest-mortality surgeon? If they take your case, you’re least likely to die. If not, you have a better appraisal of your odds.


I'm not so sure. That might mean going with a surgeon where you are at the top end of their "risk profile", rather than a more skilled surgeon who has higher mortality rates due to routinely taking more difficult cases. I'm not sure you can generalise the mortality rate of all patients to your specific chance of death (a more skilled surgeon would likely be better for you, c.p.).


That strategy may very well end up skipping (well, delaying selection of) a surgeon who routinely takes much harder cases than yours. I don't know enough about heart surgery to know if practice at more difficult procedures has any carryover to less difficult procedures, but it's conceivable. Your strategy might make sense, but it's not obviously optimal.


One possibility is to offer the surgeons more money for difficult surgeries, and more money for a successful outcome, to balance out the reputational risk to the surgeon.


How do you estimate the difficulty in a way that does not involve wrong incentives on the part of the surgeon or whoever is doing the estimation when money is involved?


A colleague of mine was interviewed by a new hospital heart program. He was asked what his mortality rate was. He replied, what do you want it to be?


Check out the comments over on /r/medicine and you will likely be horrified. It is career over everything. There was even a thread this week about how you should NEVER correct someone above you (med students should never correct a resident and a resident should never correct an attending).

That mentality is terrible for everyone and everything, except the physicians' careers.

I don't know if we need to select for better people when admitting them to medical school or if the culture is just completely fucked.


If you think about what it takes to get into med school then it seems pretty natural that you're going to get hyper competitive surgeons in the end. It starts before med school. You get assigned 10 unobtainable books that only the library has. Finals week comes around, all the books are missing even though they have to stay in the library. Turns out your classmates hid them inside the library so they could have a competitive advantage and get a higher grade on the curve.

You already devoted probably half your life to becoming a surgeon, why would you screw it up and lose so much ground so easily?


What decade are you describing? In my medical school, all of the required material aside from our anatomy textbook was given to us as a PDF or a printout.


This was within the last 7 years. I don't doubt each school is different. This was at UW in Seattle.


Hyper competitive, conformist surgeons.


Yes, the system is horrendous and broken. There should be complete oversight and full transparency for self-regulation purposes, including ongoing learning and education, and lawyers should get involved once the self-regulation is found to be inadequate - for example, a surgeon who repeatedly makes the same mistakes and hasn't bothered to learn or cared to improve their skills should be prevented from continuing.

From my own struggles trying to find healing for the chronic pain I have, I have realized that doctors/professionals are selected for their memorization and not for their critical thinking skills.


Or maybe this attitude you have, that doctors should be sued for vague ideas like lack of "critical thinking skills" is exactly the reason why doctors are incredibly risk averse and the system is messed up to begin with.


This is one thing that still amazes me to this day.

It's like doctors stop thinking after they get their degree.

Whenever I've visited a doc within the past few years I have noticed that instead of actually getting to know someone as a patient, many act as if they are a quick reference book.

This is especially horrifying when you consider that there is a non-trivial portion of the population that exists outside of the medical literature.

Every one of the best Doctors I've ever had has ALWAYS been a researcher at heart. They observe, take notes, treat what they can now, and come back later and can clue me in on the latest research they've come across.

Nowadays, I'm usually making Doctors aware of advancements in their field... Scary stuff.


> Nowadays, I'm usually making Doctors aware of advancements in their field... Scary stuff.

I don't find that to be too surprising. I can be hyper-aware of my own ailments, but keeping up with an entire field is more difficult. Furthermore, treating to research runs the risk of using treatments where the outcomes haven't been replicated, or there are long-term complications. Unless you've exhausted more conventional treatments/the treatment guidelines, there's something of an advantage of not being right at the bleeding edge.


>I don't find that to be too surprising. I can be hyper-aware of my own ailments, but keeping up with an entire field is more difficult.

I do not find the above to be sufficient justification to practice in ANY field without trying to stay aware of the state-of-the-art.

Yes, in medicine in particular, there is a justifiable bias toward applying more conservative treatments first. "Hear hoof beats, think horse first, not Zebra," is the axiom I believe is most frequently ground into new physicians.

The problem comes in when a physician becomes so conditioned to conservative treatments working that they become blind to the extremes. Running with the Zebra analogy, just because you hear hoof beats, you see a horse-like silhouette, and in this lighting you can't tell the color, does not mean you should IGNORE the possibility of Zebra.

As a practicing physician, you are DEPENDED upon to be the layman's gateway to the entirety of the collective medical knowledge base we've been able to accrue, verify, and are in the process of verifying. You owe it to your patients, just as an Engineer owes it to the public, to become as well acquainted with the medical literature as it pertains to them as possible.

I may be being a bit normative here, but an M.D. should be seen as a commitment to spending your life on the pursuit of the means by which to provide the highest quality of care for your body of patients. That at times means being ready to help a patient access and navigate the less charted waters of the state-of-the-art if necessary.

I agree completely that state-of-the-art should not be the first hammer pulled from the toolbox, but one should not actively avoid it either. At the end of the day, data doesn't get generated out of thin air, and if the patient understands and accepts the risks after you have provided them the best guidance you can, you have done all that can be expected of you, have you not?


> I do not find the above to be sufficient justification to practice in ANY field without trying to stay aware of the state-of-the-art.

It's not a justification of not _trying_ to stay aware of the state of the art, but you always (potentially) stay ahead if you are focused on a smaller sub-field. I doubt it's possible to be completely aware of every single advancement for every condition you might encounter as it happens, nor is it suggesting that you should ignore the chance of zebra - for me, the best doctors have been those who will say "_I_ don't know; I will consult others or the literature", recognising that it's not possible for a single doctor to have exhaustive knowledge.


> Nowadays, I'm usually making Doctors aware of advancements in their field... Scary stuff.

Are you talking about your cases? Because keeping on top of the narrow scope that's currently relevant to you is a different level of time investment than keeping on top of everything happening in the field, after already learning a large part of it, and keeping in mind current treatment guidelines, which lag behind research.


Alright, I'll concede that one.

I admit, things become a lot simpler once you limit your scope to issues immediately relevant to you as a patient.

However, I do still believe that when made aware of something new from a patient, a good physician should actively tackle the topic if only to be able to give a realistic assessment of where it falls on the risk/outcome spectrum.

NOTE: This may be assuming an above average level of patient investment. I in no way condone wasting a physician's time by having them wade through quackery. Nobody has time for that.


There were similar articles for developers on Hacker News and on /r/programming over the weekend.


It’s a tough profession populated with type-a people, paid by insurance companies and beholden to malpractice insurers as well.

As with anything in modern society, bean counters rule and usually drive awful decision making.


Such practices were pervasive in other industries too, including air crews. Thankfully, this has been changing, and shown marked improvement in some cases.


To be fair, you wouldn't want heart surgeons to shrug and take on operations they didn't feel ready for, either.


Can we get a (2016) on the thread title?


The subjective is always where there is value.

I wish there were surgeons that would accept these challenges and colleagues would objectively assign a degree of difficulty score.

And a site that aggregated such scores. I really don’t care how “friendly” a doctor is - did they get the diagnosis right? Did they improve longevity? Did they improve quality of life?

Not easy to score but what everyone wants to know.


Not sure why this is being downvoted. It’s a legitimate request. I worked for a novel medical device company and we had a lot of surgeons on staff.

While not a product we’d ever build, there was always discussion amongst the doctors of how to better grade physicians, and specifically surgeons. The concern was that a lot, and I mean a LOT, of surgeons were bad at their job. They were always proposing different systems to evaluate surgeons, very close to what you described.


> ... and colleagues would objectively assign a degree of difficulty score.

He asks for a level of "objectivity" that you just can't get. What do you think medicine is? Like repairing toasters? "Why don't you simply assign some objective numbers" - it's so easy! No it is not, unless you only count a set of basic and routine procedures. Even things like a broken bone vary greatly - and then add the additional variance of the people having those broken bones.

You can sort of get some objectivity when you do statistics of lots and lots of patients in many places over long periods - but the problem OP wants is to be able to make statements about individuals (doctors). Being able to say something about a population is very different from saying something about an individual within the population. "Statistics" about a single doctor are and will be full of randomness.

Besides, if they did what I quoted from the OP's statement above, what would probably happen is that everybody declares everything as "very difficult" (or just below), expecting colleagues to do the same for them. In Germany private insurance has a multiplier on the amounts they can charge for something. Guess what: Everything is now declared as "very difficult" (with some short fuzzy explanation), multiplier 2.3. Almost nothing is charged at the "normal difficulty" rate.

> Did they improve longevity? Did they improve quality of life?

Given that each doctor is highly specialized, and that on the other hand your GP, the one who treats you long-term, can't actually do much when there is anything serious, those numbers are mostly out of the control of individual doctors. You would have to get such numbers system-wide, not per doctor. Medicine is supplied to patients throughout their life (incl. late life) by a network, not by an individual. There also is a huge impact of one's own life choices (e.g. obesity, smoking), and no, you can't just correct for them - not individually (for populations we can). All those numbers would look terrible for doctors who treat more of the worse cases. It would lead exactly to what the article is about.

I'm not arguing against trying to measure surgeon outcomes - I'm against trying to use those measures for things they are not good for.


The feedback would have to be anonymous. It’s the only way to get honest scoring.


What you're describing is commonly referred to as "propensity weighting." It is a good first step, though most of us think it still doesn't do a sufficient job of accounting for the risk of difficult cases.


Does the NHS provide enough free market choice that anybody could take advantage of a score even if they wanted to?


You can choose where to get treatment and who treats you; the main issue is whether you get the timeline you'd prefer. Generally, patients will choose a clinical center (to the extent that some hospitals have stopped offering specific treatments because their stats were never good enough - e.g. you need to do a certain number of children's heart ops per year to keep your stats up). If you have the option to go private, either through your insurance or the NHS, then obviously there's another layer of choice.

Most of the private surgeons are also NHS, anyway.


It provides some. "Choose and Book" was introduced in 2004, and converted into e-referral in 2014, and it means you get a choice for most elective surgery.

https://www.nhs.uk/NHSEngland/appointment-booking/Pages/abou...

https://www.england.nhs.uk/2014/05/choose-and-book/

Most people don't have enough information to make a sensible choice; and they don't understand the information they've got.


Yes, my father is currently undergoing treatment for cancer (involving multiple surgeries) and had issues with a surgeon. He was given the option to choose a different surgeon or even transfer to another hospital entirely.


If you have private insurance, you can choose your own surgeon.


Should “NHS” be added to the title? This seems to only apply to British heart surgeons.


I recommend listening to the new EconTalk episode about the tyranny of metrics


You would think that they could adjust these figures by patient age and condition.


Sounds a lot like police work in Japan (they love their stats).


Goodhart's Law again!


Some doctors probably refuse patients that left bad reviews at competing offices...


"I don't appreciate the service they provide and they're terrible people !!!"

"Okay, we won't serve you if you're just going to trash us."

"EVIL HOW DARE YOU!"


I meant if you left a bad review somewhere else...

Of course someone should not go back where they don't like to go....


That only works if you have more than one choice available.


The shitty US health/dental insurance system makes it hard, but I would still be surprised if most people don't have a choice of more than one doctor...


It's sad to hear this when it's totally unnecessary.

Heart disease has been a solved problem for decades. Just compare the rates of disease for vegans with the general population.


> Just compare the rates of disease for vegans with the general population.

"Disease" is such a broad category that it's sort of meaningless here.

Maybe veganism is healthier than a diet including animal products, but the degree of its impact can be stated only after removing the many other environmental factors that might be contributing to lower disease rates among vegans, including income and location.


Yes, lots of things are always involved, including air quality etc., as I think I saw on HN just the other day.

I find it interesting, though, that the right type of vegan diet will basically make you hypertension- and heart-attack-proof, and that it may also reverse heart disease (the only diet to do so). Especially when you look at mortality and the costs of healthcare, this really needs to be more generally recognized.


> I find it interesting though that the right type of vegan diet will basically make you hypertension and heart attack proof

A diet making someone heart-attack-proof is a pretty strong claim. Have a citation?

I think the only thing one can say with near-certainty about a well balanced vegan diet is that it has much lower environmental externalities (i.e carbon footprint, water usage) than a meat-based diet, while being no worse for you than any other comparably healthy diet.

That alone is a great reason to be a vegan, without having to rely on possibly hyperbolic claims of health benefits.


I wouldn't make blanket statements about a vegan diet because there are several types of diets that are compatible with a vegan lifestyle. I can eat Oreos and peanut butter all day and still be vegan.

You must be referring to a whole food plant-based diet, which has been shown effective in the prevention and reversal of heart disease[1][2].

[1] https://www.ornish.com/wp-content/uploads/Intensive-lifestyl... [2] http://dresselstyn.com/JFP_06307_Article1.pdf


Yes, you are correct.


What are the rates?



> Researchers studied five different diets to see which lowered the risk of heart failure: convenience (fast food and pasta), plant-based, sweets, Southern (sugary beverages and fried foods), and alcohol/salads.

What kind of study design is this? It seems like they compared veganism with 4 garbage diets. Seems like all we can extract from this is plant-based is better than eating trash.



