Stanford president resigns over manipulated research, will retract 3 papers (stanforddaily.com)
1507 points by dralley on July 19, 2023 | 777 comments



The student-run newspaper broke the story and relentlessly pursued it.

Stanford president’s research under investigation for scientific misconduct, University admits ‘mistakes’

https://stanforddaily.com/2022/11/29/stanford-presidents-res...

Stanford president dodges research misconduct questions

https://stanforddaily.com/2023/04/25/stanford-president-dodg...

Internal review found ‘falsified data’ in Stanford President’s Alzheimer’s research, colleagues allege

https://stanforddaily.com/2023/02/17/internal-review-found-f...

The reporter, Theo Baker, is a freshman.

https://stanforddaily.com/author/tabaker/


Good for Theo for writing this up, but it was Elizabeth Bik who got the ball rolling in discovering and investigating the fraud.


"Writing this up"? It's likely true that Dr. Bik got the ball running (I believe she had posted about the anomalous figures months/years in the past), but there's a big difference between noticing fraudulent signs in published research, versus pursuing that lead into an investigation that attempts to identify the actual decision makers.

Marc Tessier-Lavigne said all along that the bad/faked data was the work of other sloppy/bad actors. It was only through the reporting from Theo Baker and his colleagues that the public learned of MTL's negligence and culpability.


I don't want to take away from Theo Baker's work here. The articles are well written and well researched. But ultimately they are simply reporting on other people's actions and formal investigations. I don't see any evidence that these articles were a driving force in this set of events.


Well, technically, nearly all news reporting is "reporting on other people's actions" — e.g. Seymour Hersh didn't witness My Lai nor did he even find out about it on his own: Lt. Calley Jr. had already been investigated by the military and charged with mass murder of civilians; Hersh got interested after an anti-war lawyer tipped him off [0].

> I don't see any evidence that these articles were a driving force in this set of events.

One day after Baker published his investigation in the Daily [1], Stanford announced an investigation into MTL's research. You think that's a coincidence, or that the university was pressured to act that quickly b/c of the Daily simply reporting that The EMBO Journal was looking into allegations about a 2008 paper that MTL was third author on? No need to speculate; the university explicitly credits the Daily's story [2]:

> According to spokesperson Dee Mostofi, the University will “assess the allegations presented in the Stanford Daily, consistent with its normal rigorous approach by which allegations of research misconduct are reviewed and investigated.”

It is true that Baker's first story led with the news that "A prominent research journal [EMBO] has confirmed to The Daily that it is reviewing a paper co-authored by [MTL]". But that's not the meat of the story, and most definitely not a catalyst for the university to announce an investigation. EMBO itself told the Daily that it was too early in its review to comment:

> When asked about the specific allegations and the timeframe of the investigation, Batista wrote, “We currently cannot comment more substantively before we have a better assessment of the full situation.”

> It is unclear how long the review will take, and allegations can go without a response from the journal or authors for years. But an investigation could carry serious consequences for Tessier-Lavigne even if he is absolved of the alleged manipulation or other direct wrongdoing.

[0] https://www.newyorker.com/magazine/2015/03/30/the-scene-of-t...

[1] https://stanforddaily.com/2022/11/29/stanford-presidents-res...

[2] https://stanforddaily.com/2022/11/30/stanford-opens-investig...


Don't you understand it's the IDEA that matters?


In this case the execution and follow-through matters more.

I could be the one to identify that my metropolitan city has a massive homelessness problem. But it's another, much more impressive, achievement to be the one who does the hard work of pursuing and resolving that problem.


You are correct but I think the comment was tongue in cheek.


Could be. Chalk it up to Poe’s Law, then.


Bik is a wonderful resource in science in general. She also runs the Microbiome Digest https://microbiomedigest.com that probably the majority of scientists working on the microbiome use to keep current in our field.



Good, this kind of thing in academia is far too common: cherry-picking results all the way to outright falsifying data. It's a problem at the University of Auckland here in NZ too; my friends who are studying there say that their results are ignored if they are inconvenient. It's disgusting, and an example should be made of people who do this (which is happening here, I guess).


I was just reading an article from an HN story yesterday, which found that around 1/4 of the data published in anesthesiology studies was faked. You know, anesthesiology, the field where giving you the wrong drug or the wrong amount can kill you.

I strongly suspect that a driving factor is guys like this - leaders who reward "positive" results and punish "negative" ones.


I don't think it's specific to academia. It's just that manipulating data is common if it benefits people. Think of the Facebook ad efficiency scandal; it's just one example.


All written by freshman reporter Theo Baker: https://twitter.com/tab_delete/status/

His parents are NYT chief White House correspondent Peter Baker and New Yorker staff writer Susan Glasser.

It’s an interesting story for sure.


I checked his bio, and Baker appears to be a computer science student doing journalism as a hobby.

It makes me wonder how much choice people really have in choosing their paths… if both your parents are journalists or [insert whatever profession], you're likely to pick it up no matter how hard you try not to…


I realize this is not the point of the article; however, you hit on something here that I've always thought about.

I think journalism is a great dual major choice (or maybe just a minor), with whatever it is you want to study, particularly if it's in conjunction with engineering, computer science, physics, biology, finance / accounting etc. Why?

Because Journalists are trained to be good communicators and summarize ideas (a worthwhile skill in most professions) and they are also taught to be ferocious in finding and corroborating information for its "truthiness". Having these skills would give most people an edge in whatever line of work they are in.


I had a colleague whose degree was in comparative literature - we'd walk out of meetings and the rest of us would be talking about the engineering side of what we'd just heard, while he'd go through and enumerate the different things each person had been talking about while using the same words as everyone else. The number of latent conflicts that dude caught before the rest of us got torched made me really appreciate the value of an art degree.


I think it's really an indictment of (software?) engineers' communication skills. So many of us seem to come from a self-taught/introverted background and ignore the importance of the clear communication required to work in a team.

The "pet peeve" thing in particular I catch myself in regularly, where I realize I didn't actually answer someone's question, instead mapping their meaning to my preferred topic. It's interesting how just listening is a skill.

The Communications class I attended early in my career has been incredibly useful.


Spending time in meetings with someone who insists on near-complete terminological clarity from everyone involved ~illuminates just how hard it is to communicate precisely and consistently. (In my case, this person is a CEO who's had past lives in engineering and finance, IIRC.)

Setting aside the communication skills of specific engineers, various stakeholders can still have both wildly and subtly different senses of what they mean by common terms.

For example, I find there's a fair amount of chaos surrounding very common terms like "product" and "content" that tend to mean different things in different systems and to people in different roles/departments.


I've interacted with government and insurance attorneys for various work gigs and some of them have impressed me with their insistence on clarity, too. I find it challenging and fun to communicate that way, albeit I've only ever done it in small 'doses'. (I'd guess it follows a dose-response relationship that veers off toward madness pretty quickly.)

I get a similar kind of kick from observing people communicating technically and precisely to complete a task: launching rockets, performing surgery, controlling air traffic, etc.


Being precise (not vague) with your language is a learned skill. I started practicing it after reading 12 Rules for Life (it's one of them).

It really makes a difference in everyday life. It prevents so many arguments with my wife: we agree on most things, and when we don't, it's usually because we disagree on the meaning of the words we're using. At work I have far less misunderstandings with coworkers, I understand expectations better, and I don't get caught out by people using vague language to hide or gloss over major problems. Last month I was evaluating a contract and it took me a week to get a clear answer about terms. I suspect the people involved subconsciously knew that if they gave a clear answer the contract would be deemed unnecessary and cancelled. It was.


Ugh... I am a very precise communicator, and my wife --- Not so much.

I love her dearly, but I cannot count the number of times I have enraged her by asking her to elaborate or seeking clarification about a pronoun or some other vagueness, asking questions she believes "I should know the answer to because I'm too smart not to know."

We're getting there, though :)


Lol my wife is the same and it drives me nuts. She will say go get me "x". I'm like where? And then she'll say "the closet" or something like that and I'll have to ask "which closet" and so on.


(Taking light-hearted commentary too seriously)

Something that young children have to grow out of is assuming that "if they know it, their parents must know it too", such as where they put (not hid, because "the parents know") the phone or the keys. As a child, they haven't yet developed the concept, obvious once developed, of the individuality of knowledge and experience.

It does feel as if this regresses for long-term partners. The volume of shared experience must blur the boundaries of individual experience, or something.

My wife will often blurt out something totally incomprehensible to anyone but her, but since she's spent the previous 5-10 minutes reading / watching all the context leading up to it, she expects the rest of us (who may have only just strolled into the room) to know it to its core.

A punchline without context is nothing!

I do like the sideways glances my kids give me when it happens though. That's a shared understanding :)


> It does feel as if this regresses for long-term partners. The volume of shared experience must blur the boundaries of individual experience, or something.

This is so true, and painfully so. I was fortunate to have an epiphany a few years ago when I realized I was failing regularly to see my wife as a separate individual and part of the whole of "us."

More accurately, there were times when I probably viewed her merely as an extension of myself. It honestly changed the way I view the world, and I somehow managed to extrapolate that understanding to life itself. It was a glorious dose of ego death, and one I sorely needed.


I've known lots of intelligent adults who assumed their particular knowledge was common knowledge. Most were polite about it and realized their mistake; a few were quite obnoxious and learned nothing.

Anecdotally, I wonder if, or to what degree, this has been affected by increased job specialization, where a person who spends all day working (and perhaps socializing after) with folks who share their particular knowledge and jargon will then err when interacting with folks outside of that bubble.


I can't help but think of this quote from C.S. Lewis' "That Hideous Strength":

> “The cardinal difficulty,” said MacPhee, “in collaboration between the sexes is that women speak a language without nouns. If two men are doing a bit of work, one will say to the other, ‘Put this bowl inside the bigger bowl which you’ll find on the top shelf of the green cupboard.’ The female for this is, ‘Put that in the other one in there.’ And then if you ask them, ‘in where?’ they say, ‘in there, of course.’ There is consequently a phatic hiatus.”


Ironically, given that the discussion is about clarity of language, I had to look up the meaning of “phatic hiatus”; it refers to a pause in conversation. For more on this, see https://english.stackexchange.com/questions/592225/is-the-fo...


I looked up phatic also. Great word: denoting or relating to language used for general purposes of social interaction.


What a sexist comment.


> What a sexist comment.

Indeed. The author seems to imply that men are so bad at understanding contextual clues that communication with them is virtually impossible, almost as if they were primitive machines and not full-fledged human beings capable of observing and thinking. Even worse, when confronted with this uncomfortable truth, instead of learning how to communicate properly, they react with frustration, almost as if they were primitive animals driven by instincts and emotions and not full-fledged human beings capable of learning.

Still, I find it hilarious, even though I am a man myself!

And on a more serious note – why, when presented with two different phenomena (in this case: men and women, but there are many more cases, like "SQL" and "NoSQL", or "Rust" and "Clojure", or "GUI" and "CLI", etc., etc.), do so many people automatically assume that one must be somehow strictly "better" and the other somehow strictly "worse"? Of course, it's sometimes (maybe even often) the case, but neither "sometimes" nor "often" means "always"!


Wholeheartedly agree! We are drowning in false dichotomies and always wanting to know what is "the best", as if there were an absolute measure of goodness. You just can't project a high-dimensional space down to a single dimension without losing information.

I love your rhetorical twist on sexism, which exposes a harmful aspect of stereotypes: the positive stereotype, "all X are good at Y, or naturals at Z".

We both know people who are dogmatically over-precise in their language; it has its uses, but becomes exhausting after a while, especially where it's not needed. Here comes the generalization, but overly pedantic forms of communication come from either an environment with high complexity, or a person using another as an extension of their own self, where the one being directed (the assistant) literally has no context, so everything has to be over-explained. When you're grooving, all you need is a look.


The longer I spend on social media, the more I have learned that the only difference between men and women is that men are terrible and women are awesome. Right?


Part of this is sharing of effort. Given a vague request, you can demand clarity or you can go try to work it to your best ability. Guess the most likely closet and go check it. If not there, go check the next one. Etc.

This approach is harder on you… but easier on her. And that is a sort of gift that one can choose to give to a spouse. Allow them to be quick and vague sometimes, and choose to invest effort to pick up some of the slack they drop.

Not all the time, but some of the time.


> At work I have far less misunderstandings with coworkers [...]

You have fewer* misunderstandings ;)


In case your point is that using "less" instead of "fewer" for countable nouns is an example of unclear communication, let me disagree with the point.


Less so (it's more awkward) here than in other cases, but "less" makes it sound as if the misunderstandings were smaller in magnitude rather than in quantity or frequency.


"portal" is a good one. Depending on the speaker it can mean an entire website, or a single link, or just about anything in between.


Or a very specific iframe-like HTML tag!

<portal>

https://developer.mozilla.org/en-US/docs/Web/HTML/Element/po...
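For anyone curious, a minimal sketch of how that experimental element was meant to be embedded, per the MDN page above (the src URL and id are just placeholders, and the element was never broadly supported):

    <!-- hypothetical usage of the experimental <portal> element -->
    <!-- the embedded page shows as a non-interactive preview until activated via script -->
    <portal src="https://example.com" id="preview-portal"></portal>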


Similarly, it's striking how many people in meetings won't actually speak up if they aren't familiar with a term or acronym in context. I'm often surprised when I ask, and then several others mention they weren't sure either.


From the other side of the water, as an art teacher, I can say that I really appreciate the precision of computer/software engineers. Artists/designers can be so "fuzzy", so "hand-wavy", and so prone to mystification and grandification.

I will never forget attending a science/art symposium where the guest speaker was an artist. I quote them verbatim: 'Art can exist without science, but what happens if science tries to exist without art? Hiroshima!' Never in my life have I felt so ashamed of being an artist.


Listening is a skill, but understanding is the bigger one. Actually being able to fit someone else's ideas into a bigger framework, finding points of similarity and conflict is super challenging, and not really taught in engineering.


Cannot agree more. Communication shouldn't be a specialized skill like writing code or designing bridges. It's a universal that is accessible to everyone and should be taught to everyone. It's exactly why colleges have general education.

If someone finds that their team keeps getting in trouble or failing due to mis-communication as a pattern, then something deeper is wrong.


We specifically don’t educate engineers in writing or analysis, and high school preparation varies a lot.

I dual majored in CS and History. Computer Science got me in the door and provided core skillsets, but reading, writing, and analysis skills really made me a better and more effective person.


I don't have the degree but I often notice (or think I notice) people not answering the question that was asked, reframing questions to answer the pet peeve they love to bring up, and people agreeing with each other while sounding like they're arguing with each other such that the conversation never ends. It annoys the hell out of me and it feels quasi-impossible for me to relay to others what is going on.


Yeah, the violent agreement is usually a big tell. I've gotten much better at throwing the flag in meetings to have that conversation - "Hey, when you say X, do you mean <what I'm hearing>? Can you expand on that?". I think people are hesitant to do it out of fear of sounding stupid; I think I'm lucky enough to be far enough into my career that I don't really worry about that anymore.

The "reframe the conversation to the thing I want to talk about" - man, that one's frustrating. I don't have a polite way to stop that one yet. I think some of it is just that we all pick up traumas and trigger words, and you've gotta recognize when someone said "banana" that doesn't actually mean "the thing I slipped on five years ago."


One of the best pieces of advice I got: "stop acting like you're the smartest person in the room, even if you are," so I started acting like the stupidest person in the room. Oftentimes, by asking the dumbest question in the naivest way possible, you can expose a lot of bad ideas.


A thousand times, yes. They're not dumb questions, of course. They're pure, simple, and relevant. They're the strongest questions one can ask. The best.


The Columbo Method


Or the Feynman one.


Oh my god yes. It's also So Much more fun and interesting! You get to learn from people!


Well... it's more that asking an extremely dumb question helps other people in the room know when they're being bullshitted.

"So, I see your business model is to make widgets. But ... uh... I don't really understand these ideas but that's called capital intensive, right? So, you're gonna have a factory, with a bunch of workers... in San Francisco? Won't that be really expensive, and ... uh, I guess that means your profit margin will be small? Could you put your factory in Kansas?"

(paraphrased but essentially accurate question I asked when being pitched by a hardware startup at a major VC that I advised)


> I don't have a polite way to stop that one yet.

Selective doses of being impolite can be extremely effective, especially when you're otherwise very polite.


"A gentleman is one who is never unintentionally rude."

(One of several variants: <https://quoteinvestigator.com/2015/01/21/offense/>)


I think we could benefit from more directness and bluntness of the right kind. To a large degree, what is considered "polite" is conditioned. I don't say absolutely conditioned (there are absolute limits), but cultural conditioning can either blunt perception to the impolite, or oversensitize us so that we interpret normal things as impolite. Gen Z in the US, for example, seems hypersensitive compared to prior generations, though it didn't begin with them. It is not unexpected that correcting someone's bad behavior, even in a normal speech tone, will be seen as "yelling". This is very bad because the ability to receive feedback, let alone survive impropriety, is essential to adulthood. Softness suffocates reason and weakens action, and it softens the person who wants to avoid perturbing the softness of another. Hemming and hawing and hedging, too, is an enemy of clear communication.

But more to the point, I find that asking for clarification is the best tactic in the aforementioned circumstances. That way, you avoid having to make accusations. It removes all pretext for getting defensive and focuses the discussion on the substance and merit rather than the character flaws and lack of speaking skills of the other. If the other person starts to get unjustifiably angry, this reflects poorly on them, not you, so there is no need to feel any guilt. Be honest and never lie. Do not pretend to understand someone just because you think asking for clarification will make you look less competent. Maybe you are less competent, in which case pretending to competence you don't actually have is dishonest and unjust. You also close off the doors to learning. And if you are competent, then there's nothing to worry about. Bullshitters feed off pretense, and honest people are dismayed by it.


> The "reframe the conversation to the thing I want to talk about" - man, that one's frustrating.

The most effective method I've found is making that person responsible for resolving whatever the issue is. Not always possible, but especially when it happens in group settings, some verbal judo can work even if you can't "officially" task them.

(Make sure to memorialize that in an email afterwards, or it will probably retroactively never have happened.)


It's an extremely valuable observational skill. It's also an extremely valuable skill to be able to get everyone in alignment, but much much harder to "git gud" at (so to speak).

A few suggestions for getting more value out of your observations:

* In the moment, particularly if it's heated, you won't make a ton of headway unless you really know the parties involved and know how to frame "I think you agree with each other" well enough to be heard over the argumentative mindset. Instead, pointing it out to each party individually in a later/follow-up discussion can help a lot!

* If you have a good "people person" mentor or manager, just pointing it out to them can often result in positive outcomes, because they can take it on themselves to have the discussions in the background or if you ask for it, mentor you in how to get that across in a well-received way.

* Sometimes when people are arguing with each other in agreement, the issue is semantics: someone (or everyone) has a different take on some word/phrase/name being used. A good tactic is to try and identify where that bit of disagreement is, play dumb (it works best when you're in a "junior" position but can work in any situation), and say something like "wait, sorry to interrupt, but I don't quite get the difference between foobar and barfoo, can you help me understand?" and then, when they explain to you, the neutral third party, they'll come to the realization that they are arguing in agreement after all.

I've been in your shoes before and the above advice helped me get going so I'm passing it along. For me the difficulty in relaying the info came from a couple places:

* I was afraid of speaking out of turn, or looking dumb. It turns out that the "dumb look" I was afraid of is often interpreted as "wow, this guy is asking smart questions", and at worst it's interpreted as "this guy needed a bit of a different explanation to grok it".

* I didn't realize that people don't need to understand that I was seeing them argue in agreement or avoid the question. I just needed to ask my own clarifying questions until everyone got the info/agreement they needed. If they get that I was driving at "arguing in agreement" or if they think I resolved a conflict, it doesn't matter - the goal of "we're all on the same page" was successfully reached.

I've still got a lot to learn in this whole area, but even trying to address those things often helps smooth out the rough bits and is useful. HTH!


I like to call "not being afraid to look dumb" "weaponizing my own stupidity." I'm not the sharpest knife in the drawer, but if you aren't worried about looking like a dope from time-to-time, you can help the actually smart people in the room agree on stuff.


Wholeheartedly resonate with this. In my experience, people love questions (stupid questions, simple questions, doesn't matter); it gives them an opportunity to feel informed and impactful.

If you ask a question and someone is scornful ("you should know that already!"), the first time it's on them, the second time it's on you, and the third time update your resume.


I use dumb questions a lot as a manager - although my hiring intention is to try and make myself the actual dumbest person on the team anyway.

I don't use it so much for the teasing out agreement or highlighting problems angle though - one of my goals is setting the accepted threshold for dumb questions low enough that junior/newer or less confident team members aren't afraid to ask their own "dumb questions". Those questions are usually more important than mine, and teams that don't have that "no question is too dumb" culture are often dysfunctional or heading that way.


"Communicating" is often used to mean "sending" information, but for communication to occur that information must be received.

Yet, sending is easier than receiving, just as generating is easier than parsing.


+1. To riff on this, I play competitive paintball (long story, happy to elaborate).

The key skill in the sport of paintball is communication. Among the highest circles of ability, communication is understood to mean: "receiving information and repeating it until confirmed by the first party."

A lot of wasted effort and needless mistakes can be eliminated by confirming your understanding. Communication is not speaking, it is not listening; it is both at once.


So what you’re saying is, TCP is better than UDP?


> Yet, sending is easier than receiving, just as generating is easier than parsing.

Generating data is easier than generating useful data or easily comprehensible data; time spent generating good data can reduce time spent parsing it.

In a communication context, I don't think sending is necessarily easier than receiving, I think it's that we've ignored the sender's responsibilities and put the obligation on the receiver. (This should be expected when organizations reward the quantity of sent information over the quality of sent information.)


That's very interesting! Would you be able to give an example without being too specific?


It's been a long time and I don't remember specifics, but - we'd regularly be meeting with people from multiple different departments, and it'd be things like two people talking about testing, where it turns out one is talking about unit tests & CI and the other is talking about user testing, and you can go very far into that conversation using all the same words and meaning very different things.


A fairly simple one I've seen is "when will this be done?" - stakeholder means to ask when will it be in production; engineer hears "done" as in "my task is done" and answers about when the PR will land.


Ideally, merging the PR and having it be in production is a 30-60 minute automated CI process.


The former guy is a pretty simple example. My partner refers to it as being a squid: spraying ink and making a getaway.


Can you give an example? What you're saying sounds interesting, but I don't understand. What kind of conflicts did they catch?


I'm a product designer and would never have fallen into this career if not for working for my college paper, The Auburn Plainsman.

A story—My first semester working on the paper I was at the bottom of the food chain as an associate news editor. So it was my job to sit in Auburn City Council meetings and fill my page with a summary of those meetings. I don't know how many of you have sat in small-town council meetings, but all that really happens is they announce which restaurants are granted liquor licenses and table interesting topics indefinitely. It's boring. On occasion there would be a heated debate about installing a speed bump on some neighborhood street, but usually nothing.

I would have an assigned amount of space to fill with city council notes and I never—never—was able to fill it. So I taught myself Photoshop and started creating infographics to take up space. I started a weekly graphic tracking gas price fluctuations and would add several other graphics to fill my section. That's how I got into design.

EDIT: Also, breaking a story about a kid stealing a Tiger Transit (Drunk Bus) to get home from the bar was a crowning journalistic achievement of mine. https://www.theplainsman.com/article/2009/09/tiger-transit-s...


I'm a PM, and the time I spent as a writer and editor of my high school newspaper was probably more useful to my career than anything else I did in high school or college. Learning how to ask the right questions, understand people's perspectives and biases, and take a bunch of related information and turn it into a coherent narrative that keeps people engaged are useful skills in just about everything, and certainly in this job.

Did they find out who stole the bus??


They never found out who stole it! The bus story became somewhat of an urban legend.

And I'm with ya. I learned so much working for the paper. Perhaps one unexpected skill was cold calling. In sales, that fear is an enormously important barrier to cross. Once you do it though, it makes a lot of things in life easier.

For stories I'd have to call people or go find them, frequently when they screwed up, frequently when they did not want to talk to me. Just like the story above—the transit manager did not want to talk to me, but I spent a day and a half hunting him down. He didn't answer my calls, so I went down to where the buses get dispatched from in the afternoon and asked a driver where to find him. That "Somebody just didn't want to wait" quote came from that interaction.

About a year later I started a coupon website. I went door-to-door trying to get local businesses to buy in. That's probably not something I could have done if I hadn't worked for the paper first.


Man, good for you - I hate hate hate hate hate cold calling. I think the job I would want least in life is outbound cold calling. But I totally agree with you - I wouldn't say I ever really developed the skill, but I learned to just shove the anxiety down deep and go do it, because it's gotta be done. That has served me pretty well.


I put a lot of thought into how I would respond to your comment, and I'd just like to say Roll Tide.


Did they ever find the bus burglar?

Also, funny to think there were probably multiple reporters in those city council meetings all trying to figure out their own way of filling space. While not good for people's career prospects, having a single reporter [maybe rotating each year] write once and disseminate to all (AP style) feels more optimal.


> While not good for people's career prospects, having a single reporter [maybe rotating each year] write once and disseminate to all (AP style) feels more optimal.

I find that having a single source of information too often leads to very suboptimal outcomes.


Nowadays we have zero sources of information for these city council meetings, given that the newspaper industry can't afford to cover local news.

Teaming would be better than that.


I'm not sure there would even be anyone to team with. I've maybe been to a couple of town or selectman meetings in the 25 years I've lived in my town. Most are incredibly boring and have zero direct impact on me.


Hence how silly it would be to send multiple reporters and the point of my original comment.


> Because Journalists are trained to be good communicators and summarize ideas (a worthwhile skill in most professions) and they are also taught to be ferocious in finding and corroborating information for its "truthiness".

While that's certainly the ideal of journalism, the field routinely falls pretty short of it, IMHO. Sensationalism and clickbait aren't anything new. Just look up yellow journalism.

Unfortunately, nowadays it feels like the truthiness aspect is just conflated with corporate groupthink. But it's great to see instances like this where a journalist doggedly questions those in power.


The negative outcomes you describe are fairly independent of the training curriculum in school, though. An equivalent would be saying that computer science degrees are worthless because some people become parasites who work in adtech or fintech.


Agreed. Those are all straw men, rather than the natural consequence of a journalism education.


I'd argue that the "evils" of modern journalism are more a consequence of economic failures in their employers than journalist themselves.

As came out in the Dominion v. Fox News discovery, even the most political journalists are still disgusted by the things they are under financial and management pressure to peddle.


Not disgusted enough to leave their job even though many are wealthy to extremely wealthy. Not disgusted enough to stop supporting Republicans.

I can understand someone who works at a newspaper and makes 70k a year, so they have to report on sensational stories, but Tucker Carlson's net worth is roughly $380 million.

The key people at Fox News were shocked and mad at the outrageous claims, but for them the ends justified the means.


Yes, it's been bad for a long time. I recall the "George Bush encounters barcode scanners" story back in the day. YEARS later, working in data-capture technology, I learned the truth. It wasn't a run-of-the-mill supermarket barcode scanning system. But that wouldn't have made as "good" a story, one that matched the paper's preconceived notions.

https://apnews.com/article/61f29d10e27140b0b108d8e12b64b839


Here's a (free gift) link to the original NYTimes story

https://www.nytimes.com/1992/02/05/us/bush-encounters-the-su...

It's certainly one of the more misleading stories I've read in a national paper.


A newspaper is not a single entity. Over its life it will have hundreds or thousands of employees, ranging from good to bad. You can claim that there's a management structure that should know what's going on, but they put trust in their employees even if they are ultimately responsible. If a reporter claims a government source said X but doesn't want to be revealed, they need to make a decision based on that reporter's history. Basically, it's difficult to prevent rogue employees from putting out garbage stories.

------------------------------

Here's a more recent article from the NYT examining the original reporting on the Bush scanner story, showing how it was mostly misleading, along with the paper's response at the time to claims that it was misleading:

https://www.nytimes.com/2018/12/04/us/politics/fact-check-pr... (See EDIT at bottom for pastebin link)

For any single news organization, over a period of time a certain number of articles will be inaccurate, sometimes purposefully. The NYT has been around for 150+ years, and what matters is how they manage their reporters and sources. We also need to look at the positive accomplishments. Shouldn't those be weighed against the negative when making a judgement of trust?

The NYT is probably responsible for more breaking news and investigations than any other paper in US history. Stories we might not have known about without the resources they allocated and their contacts (e.g. government or business sources).

I don't think trust is black and white, and people who claim it is may have an ulterior motive. If you dismiss news organizations based on a few misleading stories, without considering what they have accomplished, you wouldn't trust any news organization. This of course would greatly benefit corrupt governments, businesses, and people.

EDIT: Here's a pastebin link to the section of the 2018 article where they examine the original reporting

https://pastebin.com/VKHyGARK


I gave up on the NYT with the Judith Miller WMD nonsense. The NYT failed the United States during the run-up to the Iraq war. All they had to do was honestly report what they knew.


I remember that and thought George H.W. was a busy guy and probably only got out to the supermarket in Kennebunkport, where they mail the bill at the end of the month.


(it was a then-uncommon combo scanner/scale with error correction)


There are even more insidious influences on how narratives are built and propagated than just lying or not: saying things that are "not even wrong" but redirect the discourse so severely that good-faith discussion breaks down along inflamed, sectarian lines due to the shaping of the zeitgeist around this or that topic.


People do what they must to live. Like many other professions, some people are paid to do things they probably would rather not.


A lot of "yellow journalism" these days comes from editors, who are actually a different profession than journalists, though I think people reasonably don't care about that.

The NYT editors are the ones who write all the headlines like "The economy is great - here's how that's bad news for Biden".


At a party, a coworker's partner (who is in law enforcement) asked what I thought about how to help people upskill to write better reports and such, and without hesitation I said they should try taking a newswriting class (ideally, IMO, on a condensed schedule as in a summer semester).

I'm not sure how common this is, but newswriting (a sophomore-level course) was the weed-out class for all mass comm degrees at the state university I attended. I went into a summer newswriting class with quite a bit of writing experience and it still had an impact on me.

(I double-majored in English + public relations and went on to get an MFA in creative writing. I doubt any 8-week period since elementary school affected my writing as much. It was a great counterweight to the kinds of academic writing styles you tend to pick up in English and philosophy. Caveat: I went into newswriting with a full toolchest; I can't speak to how it would go as a ~beginner.)


>I'm not sure how common this is, but newswriting (a sophomore-level course) was the weed-out class for all mass comm degrees at the state university I attended. I went into a summer newswriting class with quite a bit of writing experience and it still had an impact on me.

Definitely. And reading good writing is also an excellent way to improve one's communication skills.

It doesn't even need to be related to subjects you might be writing about either.

Good novels, well written essays/non-fiction books, etc. can provide examples of good writing and, if one continues to read well written stuff, it will likely rub off.

That's not a substitute for your suggestion (which is a good one), but another way to improve how one communicates in writing.


I have never been a reader. I've read maybe one book on purpose in life. Also, I almost never read the posted article.

I think reading is bad.


Journalism is an excellent minor. Criminal justice and psychology are excellent complements.

- Journalism will teach you who to ask questions of [to achieve the goal of accountability].

- Criminal justice will teach you what questions to ask [to achieve the goal of conviction/correction].

- Psychology will teach you how to ask questions [to achieve the goal of interrogation]. All interrogation and sales techniques are rooted in exploitation of psychology, but some people just have a natural knack for this. In both, the goal is to groom/break you into giving [something] you are inclined to withhold.

Philosophy likely factors in here too but I'm less familiar with that field. People appreciate Ethics about as much as they appreciate someone pulling the fire alarm and yelling racial slurs at evacuees in the parking lot. I've never found much use in naming logical fallacies (IME it's the domain of pseudointellectual internet bullies and pre-law students), but could see it being a way for oneself to reason why you're pursuing something. Self-righteousness substitutes well enough.


The ad hominem fallacy comes up pretty frequently, as people buy into it a lot. Pointing it out doesn't make someone a bully; usually the person saying those things in order to convince others is the one being a bully.


Yes, I study logical fallacies and how to use them. Ad-hominem is a really powerful one.


I majored in journalism. I learned how to write clearly and concisely, and do it on short notice, too. I learned a process for writing, which is something most people don't have. I also learned how to ask good questions, and be skeptical about what organizations and people say when their job or profits depend on it.


Funny, I quit newspaper club because I couldn't tolerate writing articles in that useless newspaper style: irrelevant fact, lede, quote, counterquote, irrelevant speculation end.


I was taught to aggressively look for and cut irrelevant facts. Back in the day newspapers and other printed material had space limitations. If your story didn't fit in the allotted column inches, the copy desk would cut it.

I also worked on a copy desk in college. Ideally a story has the most important information up front, in the inverted pyramid style, so that if an editor has to cut it for space, they can just cut from the bottom and not lose the main story. Given the very limited time the copy desk has to look at all the stories before deadline, writers who failed to lead with the key facts would know very quickly that they needed to work on their stories more.


never heard of this style. afaik the "inverted pyramid" is the commonly-taught method for reporting in journalism schools: https://en.wikipedia.org/wiki/Inverted_pyramid_(journalism)

the whole point of this style is to remove all irrelevant information.


Specifically for newspapers. Less so for magazines. Certainly it was followed fairly rigidly for historical wire service journalism because the newspaper using the copy would (literally) cut the article at a more or less arbitrary point to fill a space (between ads) in the paper. Obviously the constraints don't exist in the same way but writing from most important down to least important still makes sense for a lot of reasons.


It's just a tool in the toolkit.

I dropped my entire comp-sci major when I realized I couldn't bear even the first of two required technical writing courses. It was wringing all of the joy out of something I loved.

Later, newswriting was... maybe not quite "fun", but I did enjoy the challenge of remaining creative within the form while keeping a demanding instructor happy.

Screenwriting was similar. It's not a form I really ~enjoy writing in, but I think learning to write from that perspective also leaves you with something good for the kit.


That seems like a questionable reason to ditch CS to me. Was it really that bad to do the technical writing courses?


Thanks for questioning my reasons (anonymously).

As such things tend to be, it was the last straw. I was an angry, shy, lonely young person. I didn't know it at the time, but I was 1 year in to 6 years of ~writing-my-way-out of the worst two of those.

I came to school thinking I'd study writing or programming. I started with the one likely to pay more to please my dad, but only found one of the intro CS projects remotely engaging.

Yes; we were sitting in technical writing (second semester) discussing what we'd done for the first assignment and why. Through the lens of the only thing that I really found engaging (and cathartic, and joyful) in high school, I could see that my heart wasn't in it. Any of it.

(I would later return to programming through art projects.)


Yes, having done all kinds of styles, I can see wanting to avoid technical writing. It's a highly rigid form, with lots of rules about how to arrange information. Sometimes just the tools are enough to make you want to quit: I'm looking at you, (La)TeX.


That sounds so valuable. Even for other fields, like the parent commenter said.

I’m in engineering, but I have a growing interest in journalism. I’ve already graduated and here in Norway we don’t really do double degrees or minors anyway.

My main interest will remain engineering, but if I want to scratch that journalism itch some day, I wonder if I should go for a part-time degree, or just try amateur journalism. I think I’d favour the former.


Any videos or online courses that you would recommend?


You learn by doing. Find some online pub (or open source community) that will give you decent editorial support and start writing for them.


Journalism is a very large field with a history going back to the 17th century. Your question is roughly analogous to asking to recommend something about computers.


>I think journalism is a great dual major choice (or maybe just a minor), with whatever it is you want to study, particularly if it's in conjunction with engineering, computer science, physics, biology, finance / accounting etc. Why?

Just working for the college paper isn't a bad alternative. Honestly, at this point, having been involved with several college papers and having done a lot of writing is probably way more valuable than any individual engineering class I've ever taken. (Though I certainly wouldn't dismiss what I learned with my engineering degrees in their totality if not in the specifics.)

ADDED: I didn't have a journalism minor per se, but there was a lecturer (he had been a senior editor at Newsweek, etc.) in undergrad who ran what was basically a Friday morning seminar where he brought in all sorts of interesting journo-related guests. It wasn't (for obvious reasons) literally limited to people on campus newspapers. But a 9am Friday slot kept most of the riff-raff out :-) (And every now and then someone else would wander in and wonder how everyone else in the room knew each other.)


My father majored in journalism, despite having a much more mathematically inclined mind and a small-business career. He swears it was the best decision, for similar reasons to those above. I'm not 100% convinced it should be the one and only major, as it was for him, but a dual major/minor does sound great.


Vitalik Buterin is another famous example. He was a journalist (writing for Bitcoin Magazine) before he started Ethereum.

He (or his dad) said that they believed in writing as a way to clarify the mind. It seemed to work for Vitalik.


Writing is for constructing oneself


hi great


hi, great


My wife is a journalist by training, and the skill of expressing information engagingly for a lay audience has made her a killer marketer, fundraiser, philanthropy officer, and now director of philanthropy. She works for a pretty major international aid org.


Not only the truth finding part but also the understanding nuance part. Great journalists excel at identifying important nuance to complex situations and bringing it to light. A very good skill to have as an engineer, especially when moving up the ranks.


Don't we think this probably applies to any of the humanities though? What you described, at least, is the practice of careful, critical research followed by exegesis.


I think it applies most of the time, but some fields and academics (like Judith Butler, to pick a famous example) seem to rejoice in the opposite: complicating their language to make their point more difficult to grasp. So I wouldn't be quick to generalise.

Not trying to come across as partisan by bringing up Butler's name. Here is another academic, Talal Asad, making the same point in an entirely different context that the writing style of academia tends towards unnecessary complexity:

"For some years I have been exercised by this puzzle. How is it that the approach exemplified by Gellner’s paper remains attractive to so many academics in spite of its being demonstrably faulty? Is it perhaps because they are intimidated by a style? We know, of course, that anthropologists, like other academics, learn not merely to use a scholarly language but to fear it, to admire it, to be captivated by it."


I am not sure I quite understand: is the implication here that some disciplines overtly try to stifle understanding by making things harder to grasp? Why would they do that?

It's perfectly fine to read Butler and not understand it, in the same way as it would be for another to read a textbook on quantum mechanics. There seems to be a hidden assumption in what you say that holds that the domain/area-of-discourse Judith Butler is working in is one you should be able to understand without difficulty, but you really have no good reason to feel that. It's really not her job to ELI5 to everyone, or at least, it's not something her peers or publishers care about. And why should they?

Yes, like in STEM, there are false positives: papers and researchers that rise to the top that maybe shouldn't. But that is different from saying entire disciplines are sustained by bad faith.

I can't speak for the Gellner paper, but if you spend some time with Butler's work and understand a little about where she is working, she is a very rewarding writer who is incredibly influential to many, many people. There isn't some conspiracy or shared delusion here. And the fact that she can say things that resonate with so many people in her domain is strong evidence of her general abilities around information comprehension, articulation, and communication.


In retrospect, I think I was too dismissive and hyperbolic in my comment that some fields seem to enjoy having an unnecessarily complicated style, but I think the characterisation of some academic writing in the humanities as having an unnecessarily complex style is ultimately fair.

I know from my own time with the humanities that some adopt that kind of style to make themselves seem more profound and out of a desire to impress others because I was one of them (although I'm happy to have comparatively flattened the ego since then).

There's obviously a diversity of intentions behind why people would adopt a complex style (although I have no doubt the motivation I had is common) and some would be justified but I don't think it's wrong of me to point out that Butler's style is needlessly complicated as an example. When reading her words in particular, I sometimes find myself rephrasing in my head to make her point more digestible to myself (and I can't help feeling that random archaic English texts like Francis Bacon's Of Simulation and Dissimulation would be more readable to most, despite the outdated and unfamiliar language).

That's a sign of needless complexity in my opinion.


> is the implication here that some disciplines overtly try to stifle understanding by making things harder to grasp?

That's how I read it. IME with social science academics, it's accurate.

> Why would they do that?

To create a mystique of knowledge or superiority; to obfuscate controversial points; to prevent mainstream scrutiny; to exclude outsiders from giving criticism.

There are times where a Big Word is more precise than a smaller, common one, and the Big Word is preferred to writing 20 small words.

Then there are times where a Big Word is not more precise than a smaller, common one, and may even be less precise -- but it's chosen because it limits who can comprehend the work.


Listen, beyond any of the culture war you find yourself enlisted within, the sooner you can, as simply a human being, separate out "I do not understand this" from "I do not understand this, there must be something wrong with it," the sooner you will gain deeper understandings of things, make better connections, and honestly just be a happier individual. Like, it's fine, I guess, if you want to spend your life fighting huge swaths of intellectual history and human advancement, but you can't just be like overtly anti-intellectual about it! It's just not good for you, you will be sustained in fear and anger forever.


The point is that people should write more in plain English.


I think there's still a difference here with respect to writing styles and audiences. In most humanities specialties you'll be writing for an academic audience (and depending on your focus, potentially one with a very narrow band of shared knowledge/terminology).

A lot of what you learn there can get in the way when you need to reach a general/lay audience.


Journalism is the same way though. They write highly technical papers for academic audiences. Journalism is just distinct in that there are marginally more jobs where you practice the skills/techniques of the field directly. In the other humanities you can still get that kind of practice through teaching!


Sure, but we aren't discussing how everyone should just go teach journalism or the humanities.


100% agree. All the best engineers I've worked with are decent problem solvers, decent coders, but phenomenal communicators, both written and verbal.

I think this also correlates with being a good teacher.


One of the most knowledgable security people I know is also consistently rated as one of the best presenters in the company.


Interesting! I've always thought engineering and philosophy pair well for the same reason: both build logical reasoning and the ability to communicate.


There's something about the liberal arts and sciences that gives a person skills for life. I wonder if anyone's studied that phenomenon.


I think you're spot on.

I have a journalism + mass communications bachelor's, but only because I dropped out of the CS program -- too much math. Once I had mouths to feed, I couldn't make it work financially as a journalist, so I started freelancing in software and my career has grown from there.

The communication skills, and more critically, the understanding of how information is gathered and disseminated in groups (i.e. mass comms) is by far my biggest strength. That foundational understanding has served my career far more than any of the programming classes I took before switching majors.


I wonder if it works in the opposite direction too. If you dual majored biotech and journalism maybe it'd give you a leg up on writing about biotech; you'd know what to look out for and what's BS.


The problem is that you'd probably be a good journalist writing about biotech but journalism is a pretty awful way to pay the bills these days, not that it was ever all that great.


Yes, but you'd make 3x as much in biotech.


Those are important skills, but does a journalism major really teach you that? Nearly all of my daily interactions with journalism involve endless amounts of agenda-driven spin and consistent use of intentionally divisive and provocative language. Journalists are responsible for some of the least effective communication I see on a daily basis. Perhaps it would be more useful for teaching you how to lie all the time, whilst holding on to a few shreds of plausible deniability.


>and they are also taught to be ferocious in finding and corroborating information for its "truthiness"

And yet most practice the exact opposite.


"And yet most practice the exact opposite"

And yet you just made a sweeping generalization about journalists without any corroborating information.


When did people lose the ability to experience empirical reality, and generalize and draw conclusions from it, as opposed to having everything force-fed in the form of some statistic?

Finding "corroborating information" should be the province of journalists, scientists, the police, etc.

Being able to draw first-level conclusions from their empirical experience of reality (and, in this case, their years of exposure to watching media, reading media, reading about media, and seeing media coverage unfold and be evaluated) is table stakes for being a citizen.

Without that direct experience and the ability to distill it into a general understanding, you're just someone reading statistics and reports which you can't evaluate or corroborate with any of your experience; you might as well be reading about some fictional land.


You said 80% of blah blah, then decry the use of statistics. You just force-fed that to everyone.

> When did people lost the ability to experience empirical reality, and generalize and draw conclusions from it

No one lost this ability; it's just an objectively worse way to come to a conclusion. Making generalizations is bad, and it's weird you would openly propose doing it.

> empirical experience of reality (and, in this case, their years of exposure to watching media, reading media, reading about media, and seeing media coverage unfold and evaluated)

You're simply describing anecdotal evidence, which has little value, especially when alternatives exist and it's being used to reach a conclusion. It's fine to offer up anecdotes, but with supporting information. And what the media does and tells you isn't your own personal experience anyway.

Also, are you watching and observing all media, or just your own selected media, where people tell you something is the way it is?

The issue is that a person's limited ability to observe, together with confirmation bias and faulty memory, leads to wrong conclusions.

> as opposed to having everything force-fed in the form of some statistic?

You're claiming statistics have less value than limited personal observations?

You also used the phrase "force-fed" as a manipulation tactic to make people think whatever you mentioned after it is bad. Like if I said "My mom force-fed us hamburgers for dinner" instead of "My mom made hamburgers for dinner".


>You said 80% of blah blah

Huh? Did you LLM-like hallucinate that part?

>No one lost this ability

You'd be surprised.

>Making generalizations is bad, it's weird you would openly propose doing it.

Making generalizations is the cornerstone of understanding the world, and the basis of science. The alternative is taking each element of larger clusters of things and behaviors as some unique snowflake, and never learning any greater lesson ("missing the forest for the trees").

>You're claiming statistics have less value than limited personal observations?

Merely claiming? This is reality 101. Anything you can directly observe is more real than some third or fourth-hand statistical "knowledge".

>You also used the phrase "forced-fed" as a manipulation tactic to make people think whatever you mentioned after it is bad.

Or, you know, I used it to accurately describe the way statistics are created, manipulated, promoted, and used to paint all kinds of pictures state and private interests want to promote. Which is how they got their place at the worse end of the scale after "lies" and "damned lies".


> Or, you know, I used it to accurately describe the way statistics are created, manipulated, promoted, and used to paint all kind

Another generalization without evidence?

> Merely claiming? This is reality 101. Anything you can directly observe is more real than some third or fourth-hand statistical "knowledge".

Nowhere did I say your observations are false, only that you're using limited observations to reach a conclusion. You're not even documenting your observations if you are simply relying on your memory.

This is literally how racists think. They observe behavior then generalize about a race. However, confirmation bias, media manipulation, and cultural bias muddy your "recorded" observations. I'm not saying you're a racist, but I'm showing that generalizing from your own limited input of the world leads to faulty conclusions.

> Making generalizations is the cornerstone of understanding the world, and the basis of science

No, the scientific method is the cornerstone of science. Observations are only the first step to creating a hypothesis then attempting to disprove it.

Statistics can be wrong but that doesn't mean you should dismiss them.


I mean, have ya looked around lately?


One might say that debugging is a form of investigative journalism, and vice-versa.

Particularly when the error cannot be reproduced/captured on-demand, and you need to develop--and test--a story for how the final state could have been reached.


Having these skills would give most people an edge in whatever line of work you are in.

Absolutely. In the last few years I’ve moved into a role with a lot more interaction with non-engineers. At minimum, being able to politely re-state somebody else’s point in a concise manner is massively useful. Particularly when two people are talking past each other and I’m pretty sure they actually agree.


> Because Journalists are trained to be good communicators and summarize ideas...

Good to remember next time someone whinges about bad science journalism.

Blame the editors, publishers, and owners. The constraints on journalists are ridiculous. While op-eds and columnists are granted soapboxes to blather on.


> they are also taught to be ferocious in finding and corroborating information for its "truthiness"

Agree with the rest of what you said, except for this. Unless you said "truthiness" in quotes instead of truth (without quotes) to indicate this is a failing.

Today's so-called journalists are ideology merchants. Their fitness function is guided by such things as their ideological alignment (or indoctrination), that of the organization they work for (what do I have to say to keep my job?) or whatever it takes to get clicks.

Journalism has not been equated with truth-seeking in a long time. From my perspective, I see it as a disgraceful profession. In other words, if someone says "I am a journalist", I will assume they exist to sell lies and ideology, not to uncover the truth at all.

This is the only profession that enjoys constitutional protection (in the US).

What do they do with that protection? Elevate lies and misinformation to a virtue.

Given that our system of education does not produce people who are able to think critically, what you have are masses who believe what is being repeated by these puppet masters. Collectively and through their actions, they are damaging society in ways we have yet to discover.

I am certain this is not at all what the authors of the US constitution had in mind when they offered that protection.

Perhaps that's along the lines of what you meant when you said "truthiness", which sounds like a way to have a chuckle at the idea of them actually seeking truth at all.


let's apply that extremely broad comment to another group: computer scientists!

> Today's so-called journalists are ideology merchants

Today's so-called computer scientists are distraction merchants

> Their fitness function is guided by such things as their ideological alignment (or indoctrination), that of the organization they work for ... or whatever it takes to get clicks

Their fitness function is guided by such things as their financial incentives (or indoctrination), that of the organization they work for ... or whatever it takes to make money

> Journalism has not been equated with truth-seeking in a long time. From my perspective, I see it as a disgraceful profession.

Computer science has not been equated with technological advancement in a long time. From my perspective, I see it as a disgraceful profession.

> In other words, if someone says "I am a journalist", I will assume they exist to sell lies and ideology, not to uncover the truth at all.

In other words, if someone says "I am a developer", I will assume they exist to sell user data to the highest bidder, not to engage in any kind of technological pursuit.

> Given that our system of education does not produce people who are able to think critically, what you have are masses who believe what is being repeated by these puppet masters. Collectively and through their actions, they are damaging society in ways we have yet to discover.

no changes.


>Journalism has not been equated with truth-seeking in a long time.

Why do you believe it was ever about truth-seeking?

My understanding of journalism specifically - and the flow of information generally - is that those in power have always sought to control it, and they were just as successful in the past, perhaps more so.


> Because Journalists are trained to be good communicators and summarize ideas

I don't see how a freshman is 'trained' in this sense. Could it be that good communicators and people who can summarize ideas naturally gravitate towards journalism?


> they are also taught to be ferocious in finding and corroborating information for its "truthiness".

If they are actually trained in that, there's no evidence of it. Journalists shape their articles to fit their narrative.


>Having these skills would give most people an edge in whatever line of work you are in.

Yes, and you can probably learn 80% of those skills yourself without having to spend years and a small fortune on a formal degree in journalism.


Yes. I also majored in economics and psychology during my CS undergrad, and having to write a ton of essays and submit to journals taught me that communication is a key skill to have.


Also, a bio+journalism dual degree is going to be much better at biology journalism than a straight-up bio major.


I hope to God any decent journalist (if any are left) is looking for the actual truth and not just "truthiness."

You do understand that Stephen Colbert coined that term, right? And that he coined it as a deliberate satire of politicians and journalists who had what we'll call an open relationship with the truth?


In this case it’s probably not so much that the author was forced into journalism, and more that the student was empowered to cover this story without fear of retaliation due to his parents’ large megaphone.

With the sheer volume of scandals coming out of Stanford these days, it wouldn’t shock me if a critical mass of former students start feeling empowered to speak out now as well.


"Hey, your parents are journalists right? I think that guy's works is fishy. How would I report this or investigate further?"

There's definitely an attractive force based on generational expertise. I'd guess more people would be likely to feed them information, and they would innately have more experience (learned growing up) and definitely more access to expertise. Considering someone said the student was in CS, I don't think I'd say they were forced into journalism, but it does definitely relate to the skills and resources at their disposal.


It’s also likely that having grown up surrounded by journalists and people working for newspapers, his education allows him to properly write articles as a freshman. That doesn’t prevent him from learning computer science if that’s what he likes doing.


Yeah if you have insight into why a field is interesting from a younger age, you are more likely to be interested in it yourself as well.

That also goes the other way. My dad was a lawyer, and I know a little more about the law and legal profession than the average joe. However that was enough information to tell me I had no interest in being a lawyer.


My daughter went into an entirely different career path than me but she's still the one to fix her employer's Wifi when it goes down.


What if it really takes two generations to be good at something?

Since 1789 (in France) we have postulated that inheritance is a societal curse, and that everyone must be equal at birth.

However, there are countless examples where sons of doctors make better doctors, sons of journalists make better journalists, and sons of presidents make better administrators of oil companies in war zones (joke intended).

We should still aim so that it is possible to succeed as an orphan, of course, but we should also recognize that the best tricks are learnt during the teenage years, when you ask “Hey dad, how come the board of a company isn’t salaried? Dad, how did you deal with your last board where you had too many naysayers? How does it work when you have to fire an employee?”

Of course I’ve read my share of books by Ben Horowitz, but being the son of such a person gives you a tremendous head start on how to deal with a lot of situations.


> What if it really takes two generations to be good at something?

This really is the recipe for success. The majority of success is intergenerational.

Someone can come from nothing and become wildly successful, it's true. But it's extremely unlikely. With 8 billion people, occasional rags-to-riches stories are going to happen; even if it's a 1-in-100,000,000 chance, that would be about 80 people. These are not the stories to aspire to; they're random anomalies. The stories we should aspire to are the ones of humans setting up future humans for success. Ideally, not even just their children...


> This really is the recipe for success. The majority of success is intergenerational.

I think it's better said that the majority of success is from _iteration_.

In lots of ways, I'm not just a product of my parents' (lack of?) success or their parents' (lack of?) success and so on, but also a product of my iterated local communities and iterated larger society.

> The stories we should aspire to are the ones of humans setting up future humans for success. Ideally, not even just their children...

Be wary of claimed benevolence, for it justifies corpses in the name of the greater good.

History has shown that rising tides lift all boats. We are all the product of millennia of selfish pursuits by bigots, rapists, war profiteers, etc. And yet, the long-term trajectory for the masses is up -- because success iterates not just along family lines, but broadly across society.


It's not just conversations around the dinner table. It's also how you get started vis a vis introductions to the right people, prized starter jobs and educational pathways that may not be widely known (eg, an internship at ___ will set you up for a job later at ___).


Culture is inherited, you’ll get the most influence from the people you grew up around. Kids tend to be either a lot like their parents or try to be nothing like them at all (trying to be the opposite is still a huge influence)


Part of it is that people like what they are good at. Having both parents providing years of mentoring and experience will invariably give their child an edge in that skill set, which leads to a higher chance of embracing it in adulthood.


> a computer science student doing journalism as a hobby

Y'know, sometimes CS gets slow and you gotta go do some award winning journalism to keep busy.


It can definitely be an influence. My parents were both in the life sciences and I was definitely gently pushed in that direction. And I got interested in nascent biomedical engineering.

Then I took organic chemistry.

Switched to pure mechanical engineering. Would probably have liked EECS more when all is said and done and have had more aptitude for it. But, at the end of the day, can't really complain about the circuitous path I took which, like so many in my cohort, ended up in computers anyway.


I dunno, I'm nowhere close to doing worker's compensation claims adjustment even though my dad did it for 40+ years.


Both of my parents are journalists and I'm a big tech ML engineer... so there are alternative paths available :)


Well I managed to not get into law at all.

I do know more about patent law than is probably relevant for a layperson though lol


Genetics certainly plays a heavy hand in terms of what we end up doing occupationally. It's not the whole story, obviously, but it's one of the chief protagonists. There's also the matter of constant reinforcement and exposure to your parents' careers as you grow up. So it's mostly genes (nature) and parenting (nurture). Extrafamilial social interactions provide a lesser, but not insignificant, influence.


Yes, genetic variability, which was largely set in prehistory, definitely "plays a heavy hand" in determining whether someone will enter the workforce in one of several occupations that cluster closely in terms of skill set, yet vary widely in terms of pay and prestige. This guy will probably be a journalist instead of, say, a private investigator, or a novelist, or a technical writer, or a marketer, because it's in his genes. /s


Different occupations require different temperaments, personalities, and so forth. These things are largely determined by genetics. How is any of that controversial, or worthy of your scorn?


>Different occupations require different temperaments, personalities, and so forth.

No, I don't think I agree with that. Widely varying workplace cultures across businesses in the same field or market are driven by widely-varying temperaments, personalities, etc. in each business's labor force. (This is the saving grace of, say, police officers, where many decorated vets have been shown to be bullies and sociopaths; I would hope that there is some variance there.) There isn't just variance in how different companies and even teams do things, but across time; the job of a journalist today is quite different from the job of one at the occupation's inception, and even of one who is now retiring as this young man is beginning his career. In fact, assuming that in-built assets truly vary non-trivially, many retire BECAUSE theirs are incompatible with the shifting needs of a modern worker. That a job's responsibilities change over time would alone seem to falsify the notion that one can be born with a proclivity for it, the same as one's parents.

>How is any of that controversial, or worthy of your scorn?

It's unscientific, Panglossian garbage.


Meh, both my parents practice law and I went into software dev. Had no interest in following in their footsteps.


Both are careers with a foundation of reasoning and logic so the apple didn't fall too far from the tree.


Same, but now I am studying law


Pretty simple to see how this happens.

One cannot isolate oneself from the accumulation of experiences that is one's life.

During this individual's life I'm sure he was often exposed to talk and thought about journalism, but he may also have formed additional interests and ambitions.

This is not about choice per se, but rather a life lived.


Not necessarily - both my parents were doctors but I became an engineer. And no - they were not in it for the money.

Although they worked really hard, they were always passionate about their work and truly loved helping people get better.

But I realized early enough that I did not have the same level of commitment.


My father was a school gym coach and then a flight attendant. My mother was a sales clerk. Neither of them were into computers.

I have no idea about any of their areas either.

So I guess this only applies to prestigious occupations.


Picking it up doesn't mean it's your career path. You learn a lot from your parents, including bits about their careers, but I'd say such learnings add to your career choices rather than dictate them.


There's no inevitability about picking up a career you're familiar with. My dad was a pilot and a professor - yet I'm a computer nerd.


My experience is that there's very little choice if you follow your aptitude.


> It brings me to wonder how much choice people really have in choosing their paths

I believe the correct answer here is "not a lot," but not in the way that you probably think.

I couldn't be more different from my German-immigrant parents. Neither of them were college-educated, my father was the general manager of a moving company before he retired and my mom was a travel agent before she had me. They were both very generally hardworking and I (frankly) only work hard in very specific circumstances (ADHD brain). I excelled academically and was pretty immediately drawn to computers (I'm talking like 1982, with the Commodore PET; I was 10). Neither of my parents were technical, or even that literate, or even that successful (my dad, even as a mere "general manager of a small moving company", was nevertheless the most successful person in his family). Nevertheless they managed to put together enough money to send me off to Cornell, where I got a Psych degree with a "CS minor" (you don't declare minors at Cornell, but let's just say I hounded the CS department for courses I could take; I didn't like the inflexibility of being a CS major, though, and I had messed up a critical calculus course that was a requirement for many of them)

I also did a 4 year stint in the USAF (after a very poor first-year showing at Cornell where I bombed academically due to no study habits, having coasted through HS) wherein I was an aircraft mechanic and pushed computers away as far as I could (this was literally just 2-3 years before the Internet would explode in 1995, and sentiment about people who were really into computers was very much still "big nerd"; I was a late-bloomer ::cough:: virgin ::cough:: and felt the need to push anything "uncool" away from me as much as possible). Despite this overt conscious effort to avoid computers, one day the commander calls me into his office (my immediate reaction was "oh sh--, what did I do?") and proceeds with this spiel:

"Airman Marreck, word has gotten back to me about your giftedness with computers." (Wait, what? And then suddenly, with some horror and trepidation, I remembered flashes of memory: Walking past VT100 terminals that were inop to keyboard input until I couldn't help but set them right. Hearing about someone complaining about some Windows 3.1 issue and helping them. Fixing a formatting issue with printouts of flight records. Helping another person ranked above me with an Excel issue. Etc. Etc. Etc.)

He continued. "I am offering you the opportunity to cross-train into [whatever the USAF's version of software engineer was, I forget]"

My honest thought: This f---ing thing has boomerang'ed back to me despite every effort I've made to avoid it. (Clearly, I let SOME efforts slip through... And truth be told, I was ready to accept it, having felt I matured a bit. And gotten my V-Card stamped, of course.)

I asked "What's the catch?" He says "Extending your enlistment for 2 more years."

I thought "if I'm supposed to do this, then I'm going to do it in the civilian world, and benefit from civilian salaries."

I said "Thank you, but no thanks."

Anyway, the thing you love (and we could have a very deep discussion about where that comes from, because I certainly never consciously chose it) is the thing you will do. I feel I don't really have a choice, since you can't really choose what you love, you just either do or don't.

So... For some at least, there may not be much of a choice. But it may also have nothing to do with their parents.

The closest relative that might have had anything to do with me being into computers is my mother's father, who was an accountant, and could add up a column of numbers just by sliding his finger down them (and that quickly). That is literally the only "analytical" type of person in my entire extended family.


That's a wonderful story (and told well).

> The closest relative that might have had anything to do with me being into computers is my mother's father, who was an accountant, and could add up a column of numbers just by sliding his finger down them (and that quickly). That is literally the only "analytical" type of person in my entire extended family.

How much of that was because relatively few jobs 50/ 75/ 100 years ago required "analytics" as we think of it? For your parents and others of their generation, how many of those jobs were gated behind education (and the English language) they didn't have as kids?

Research has shown that environment ("nurture") has huge impacts on development, but it's also shown that genetic inheritance ("nature") plays a major role, perhaps a bigger one than environment. That doesn't mean it's easily recognizable when comparing someone born in the 1940s versus the 70s versus the 2000s.


> if both your parents are journalists or [insert whatever profession], it’s as if you’re likely to pick it up no matter how hard you try not to…

How likely is it for you to be {Catholic,Muslim,Hindu,etc} if your parents are? What about the food you like? Interests? Etc

The answer is high, but not 100%. I say this as someone who comes from a _very_ religious family (I'm more areligious), the only person who does anything even remotely STEM related, and has a very different diet than my parents. So triple whammy (but probably unsurprising to get both given one).

It's just likely that you are going to be more similar to your parents than not. Not sure why we should expect otherwise. As an example, growing up in my house there was a lot of discussion of politics and business. It is probably no surprise that almost everyone works in sales. But friends who have parents who are professors grew up listening to and discussing academic ideas. Essentially what is happening is that at an early age we're all being trained, and this training is heavily influenced by the teacher models, I mean parents. Sorry, the ML is creeping in ;). I would not be surprised if someone who grew up in a house where both parents are journalists was trained, even if not explicitly, to scrutinize authority, gather evidence, and write it down. Lots of learning is done by imitation.

Is this a bad thing? Probably not. Does it mean you don't have free will? Definitely not, but it does mean heavy influence. I think an important aspect is to recognize this, especially if we are going to have conversations about meritocracy and fairness. Understanding this early learning does give credit to claims of quality through generational experience. The question is how much this matters by the time we reach adults and finish our education (wherever that is). I think the longer the education, the less this matters. But it definitely means there's a bias in prestige of that education (and quality if you believe that prestige strongly correlates with quality). It probably isn't a surprise that kids from academics tend to end up at higher ranked universities than kids from non-academics (we see a strong effect when looking at race or class, but these are most likely not causal variables but confounders). It also means that any "meritocratic" system (personal belief is that these are fools errands) will be substantially benefited by generational experience as children will at a young age learn how to optimize for the metrics (which are not necessarily aligned to the goals of those metrics though, which is why the previous note).

Momentum is one hell of a force. Compounding interest isn't just important for money, but for nearly any form of resource. There are a lot of benefits to momentum, but the problem is that we often don't account for this in our models, and this makes for poor evaluation. I would be very willing to bet that a lot of things people make evaluations on are far more random than they believe. For example, I'd put money on the claim that, were everything else held equal (which makes this hard to measure), high school grades would not significantly affect student outcomes. We can use the studies on students who were admitted through affirmative action to get an indication of this, where we generally see similar results to students admitted normally. There is some noise, but it also isn't surprising that doing poorly in high school might correlate with problems at home, which can cause a higher likelihood of dropout. But generally, this would suggest that you could shuffle all applications, select randomly, and not see a large difference in your students' performance or outcomes post university (my hypothesis). Some of this is already done, given that many top universities have more applicants that are nearly impossible to distinguish than they have seats for, making selection at least nearly arbitrary for these applicants.

But where this may matter in the real world is that prestige definitely correlates with opportunities like internships and work, as companies prioritize recruitment from these locations and prioritize building connections with these professors (who connect students with opportunities). These are probably the places where we could make adjustments toward a fairer system, but that is likely harder than it sounds.

TLDR: heavy influence, but you still got free will. Should make you think though...


[flagged]


We've banned this account for posting abusive and flamebait comments. Please don't create accounts to break HN's rules with.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.


You can tell he's not a real journalist because he wasn't digging dirt and attacking anybody who questioned The Science™ or carrying water for the ruling class.


Theo Baker also became the youngest person to win a Polk Award because of this story, at 18. Very cool!


And because of this, The Stanford Daily became the first independent student newspaper to ever win a Polk Award.


oh wow that's useful context. Congratulations to the Daily.


Despite the lack of name recognition, the Polk Award is a serious award which is well respected among journalists.


Wow! You can't make this up.

It's like one side of the "legacy elite" came around and smacked down a whole different side of the "legacy elite."


That's an odd framing of what happened here, I don't really get what it has to do with "legacy elites". I guess maybe in the sense that his parents were exceptional, but when I think of "legacy elites" I think of multi-generational wealth and power that often has little to do with individual merit (and is probably correlated against merit, in my experience).

It certainly is fascinating though. Like it will be interesting to see what happens if the kid of Ashton Eaton (gold medalist, decathlon) and Brianne Theisen-Eaton (world champion / bronze medalist, heptathlon) decides to dabble in athletics in 10-15 years.

This is pretty much a journalistic version of that scenario; if there was ever someone born to lay waste to a fraudulent Stanford president, surely it was this kid.


Yeah this is exactly what I’m describing

His parents are “elite” and he’s following in their footsteps - hence the “legacy.”

It’s one of the few generational advantages that I’m generally ok with ethically (passing down a trade), so long as it’s not abused.

In this case that advantage seems to be being used for the greater good which is great!


It’s always interesting to watch the semi-elites & elites of HN discuss and rationalize the merits of class society.


Why would you consider two moderately paid journalists part of the "legacy elite", how are what they do comparable - at all - to the president of Stanford University?


There are many yardsticks by which you can measure elite status, other than salary. I respect Dennis Ritchie, but not for his salary.

If you measure status by the yardstick "Number of people in the White House you're on a first-name basis with" then I am reasonably confident that NYT chief White House correspondent Peter Baker scores higher than the president of Stanford University.

None of this detracts from what Theo Baker has achieved, of course.


Interesting. So, by that yardstick, a cook in the White House basement is legacy elite. So is my 20 year old niece. Sorry, but a more successful than average journalist is not part of the "legacy elite", whatever the fuck that means anyway.


The term 'legacy elite' is indeed ambiguous, so let me state things in a different way.

Imagine you're the President of Stanford, and some kid on the student newspaper is bothering you about some decade-old nonsense. You can barely even remember fudging the numbers and besides, everyone was doing it. Student newspapers have been railing against the university since time immemorial, but this guy seems determined to harass you personally. It bothers you and you want it to stop.

You could probably make it stop; almost everyone is guilty of something, and society has seen fit to give you your own police force. And even if you can't find any evidence of pot or drinking or piracy, why shouldn't rules about harassment, bullying and creating a hostile workplace be on your side? Even if that wouldn't hold up in court, probably someone can convince the kid that 'voluntarily' stopping would be in everyone's best interests.

Then you find out the kid's parents both write for major national periodicals. And not occasional contract work or writing the sunday style magazine, they're serious journalists.

You realise this kid has probably already received advice on the lines between journalism and harassment. Probably very well informed advice. And even if his parents don't personally write about rising censorship on college campuses, they undoubtedly know people who do.

You're going to have to do this one by the book.


Exactly, don't know why they are trying to frame the kid as part of some legacy elite. We should judge Theo for his actions (commendable), not who his parents are.


Theo's achievements wouldn't be possible without his parent's legacy to protect him. The average student journalist absolutely could not do the same thing Theo did here.


> two moderately paid journalists

"Peter Eleftherios Baker (born July 2, 1967) is an American journalist and author. He is the chief White House correspondent for The New York Times and a political analyst for MSNBC, and was previously a reporter for The Washington Post for 20 years."

https://en.m.wikipedia.org/wiki/Peter_Baker_(journalist)

I don't know what he's paid, but I doubt it's anywhere near the median for journalists, for US workers, or for the NYTimes.



Why would you classify Baker’s family as “legacy elite”?


Are you serious?

> His parents are NYT chief White House correspondent Peter Baker, and New Yorker staff writer Susan Glasser.


Point is, his dad probably didn't write to the university president (which would be ironic at this point) asking him to admit his deadbeat son as a favor. The student earned his admission the same way anyone does: essays, grades, luck. I'm sure he was helped by the gift from his parents of good writing skills and some doggedness. "Legacy elite" tends to imply there's someone more deserving of his spot.


It's not about admission. Without his parents, he would be removed from school for harassing a lesser faculty member. Like imagine making the same accusations, as a normal person, against your PI. Like he could have direct, first hand evidence of fraud, and it would be career suicide, if it weren't for his parents. Do you see?


> Without his parents, he would be removed from school for harassing a lesser faculty member.

That's conjecture, not fact.

When I was at Georgia Tech (THWG), the administration, via the Diversity Task Force, wanted to quietly change the school fight song (better to ask forgiveness than permission). So, the Technique student paper got the story together and published it before the admin could sink it.

I don't know if anyone at the Technique ever suffered for it; my recollection was that the Diversity Task Force, and the Director of Diversity Stephanie Ray, were pretty hostile to anyone and everyone who opposed their re-education ideas, so I presume there was retribution.

https://web.archive.org/web/20090726171257/http://technique....


Your opinion that the people working at those position are 'legacy elite' is clearly not universal.

I'm not even sure what's 'legacy elite' supposed to mean, other than that it is I suppose negative?


Basically folks have bought into conservative propaganda that white collar journalists are somehow equally “elite” as the ruling owner class.


You're unaware there are people who more easily get into elite schools because of their parents?

Educate yourself. It is a widespread systemic problem often called affirmative action for Whites: https://en.m.wikipedia.org/wiki/Legacy_preferences

"Nepobaby" (as in nepotism baby) or simply "Legacy" is more commonly used.

This skips what is supposed to be a meritocracy and can quite straightforwardly be argued to be perpetuating an oligarchy and plutocracy

So yes, it's negative.


Legacy elite is not mentioned on that site, not sure how I was supposed to know what the poster means.

Anyway sounds pretty rude to claim that a specific student was admitted thanks to his parents without any evidence.

Thank you for your recommendation regarding the topics I need to educate myself on.


I too cannot believe it. Even the most diehard nonparticipating 20-year-old League-of-Legends-addicted coding-bootcamped hustlebro HN reader could fathom how being a senior journalist in the country's maybe #1 and #3 news institutions is like, a big deal.

Since it sounds like the other commenters are really confused about why it matters whose kid he is: like if you were just a regular person, you might be ejected from Stanford for pursuing something like this against a far lesser faculty member. It has nothing to do with admissions, which is such a hustlebro 20 year old POV anyway: to think that the privileges of nepotism are limited to getting admitted to fancy universities and money in a bank account. If only!

Imagine having first hand evidence of your PI doing fraud, which lots of people do by definition, and that is rightly seen as career suicide. Like even the postdoc in the story, who has the evidence, hasn't come clean! He can't just like, stop living the way this student can.

The students who relayed Marc Hauser's fraud to Harvard never went public with their identities. They don't get to win Polk Awards at all. They're not regarded as investigative journalists. This also prevents us from seeing what happened. I can tell you from my experience at least some of those students joined Hauser's prestigious lab for a medical school recommendation, which obviously didn't happen. In fact they may have only exposed him because, he was such an asshole, he refused to write them recs at all, for whatever reason.

Do you know what all those people at Genentech get for exposing this fraud publicly? Nothing. I mean, they certainly don't become eligible for journalism awards. They could very well have provided the first hand evidence, and maybe posted to a PR newswire that has more circulation than the Stanford Daily or whatever, and cause the Stanford president to resign, and they still will absolutely, positively, never win a Polk award. They will just have a blown up career at the end, either way. They have a 100% chance of exposing the truth, and yet a pretty, pretty low chance of ending up better off than they were before they started.

Nevermind research. Think about all the kids at e.g. private schools, younger kids, directly harmed by teachers molesting them, and you know, the kids are the ones who leave, not the teachers, for many decades and many institutions.

So it definitely mattered to be the son of some big deal journalists in New York. Jerry Yang is beholden to journalists in New York. It made this happen.

That said, of all the things to deploy your nepobabiness on, this is a pretty good one, isn't it? Investigative journalism of the finest degree. I don't personally think that a Polk award determines the merit of the investigation; nor does even the publication, clearly. Anyone, everywhere, can be not only be an investigative journalist, but indeed a great investigative journalist. It is a legitimately great story of bringing this guy, who has clearly dodged bullets for a decade, to task.


> how being a senior journalist in the country's maybe #1 and #3 news institutions is like, a big deal.

Ok, so you've established how his parents may be considered elite. You haven't explained the legacy part though. Did his parents get their jobs based on his grandparents merits? Can they trace their lineage back to the Mayflower? If that is irrelevant, can you perhaps explain your conception of "legacy elite" and how it differs from regular elite?

> [they] never went public with their identities[..] They're not regarded as investigative journalists

That's true. In order to be considered a journalist you have to go public with your findings. Otherwise, you may be an investigator but not a journalist.


I like how you write, and I agree. You have panache.


> His parents are NYT chief White House correspondent Peter Baker, and New Yorker staff writer Susan Glasser

wow, his parents maybe can learn from this! How to doggedly pursue allegations against the occupant of the White House, without fear or favor, with a healthy dose of Grabthar's Hammer (never give up, never surrender)


He was interviewed by one of the local TV stations today: https://abc7news.com/stanford-theo-baker-university-presiden...



He would have gotten away with it except for those meddling kids.


Give him a Scooby snack.


Go Cardinal.

The Daily can be considered a real newspaper.


Obviously a sad and kind of disturbing story, but the meta-story is incredible.


Ok, I know this is a bit of a stretch, but bear with me: I think we are seeing something similar to the doping scandal that took down Lance Armstrong and many others a while back.

There was at least one year's Tour de France in which all but one participant were later found to have been doping. In other words, you didn't get to that level of that competitive activity, if you didn't cheat, because it wasn't possible to outperform people with such a significant advantage. It actually became a contest, not just of bicycling, but of doping, because getting away with it was quite difficult and it took years for the world of cycling to get good enough countermeasures to shift the advantage to non-doping.

Academic research has, for decades now, been a very competitive, high-stakes endeavor. Many fields have more people trying to work in them than there are spots (i.e. grants and endowed chairs) available. If you have twice as many aspirants as slots available, and "only" 25% of them cheat (in this case, fudge their data to get more interesting results), then you get something like 50% of your field filled with fraud.
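
To make that arithmetic concrete, here's a minimal sketch. The 2:1 aspirant ratio, the 25% cheating rate, and the assumption that fudged results reliably beat honest ones are all just illustrative assumptions, not claims about any real field:

    # Toy model, not data: assumes fudged results reliably out-compete honest ones.
    aspirants = 200
    slots = 100                       # twice as many aspirants as slots
    cheaters = int(0.25 * aspirants)  # 50 aspirants fudge their data

    # If inflated results win the competition for scarce slots, cheaters fill
    # slots first and honest aspirants split whatever remains.
    cheater_slots = min(cheaters, slots)
    print(f"share of the field that cheated: {cheater_slots / slots:.0%}")  # -> 50%

Relax the assumption that cheating always wins and the share drops, but the direction of the effect stays the same.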

Moreover, the closer they are to the top, the higher the likelihood that they are "doping".

I live in Austin, TX, and I remember when nearly all of Lance Armstrong's competitors had been busted, but he had not yet. I said to other people, "well, that's it then, he must have been doping. You can't win the Tour de France 7 years in a row against doping opponents, if you're clean. Either doping doesn't work, or he was doping."

I recall several people disagreeing, convinced that he was clean. He wasn't clean.


Universities are in the business of creating/discovering truth and shipping it. A Positivist review quickly shows that things like math are easy to prove true. As you work your way down to the humanities it becomes increasingly difficult. And competitive.

Shipping truth that isn’t powerful isn’t as attractive to your customers as truth that is. So you’re incentivized to develop truth that is. And you do this by hacking the accepted standards in knowledge pursuit by starting with a conclusion and working backwards. You tell a story based on the evidence that’s useful.

Now it’s “the science” or at the very least “there’s studies” and this is useful to both the customers (NGOs, journals, activists, lobbyists, media, anyone that wants to influence policy) and the university (attracts money, reputation and status) and the people shipping it (tenure, book deals, speaking fees).

It’s not a conspiracy. It’s just simple incentives. The poor guy who spends his time figuring out what the truth is not, or who ships truth that isn’t immediately useful to the customers, is looking for a new job after his grant dries up.


Exactly. In other words, there is a huge difference between science as "pursuit of knowledge through thoroughly applied scientific method" and science as "product of academic process".

Incentive structures make sure that the latter is dominant in published science.

Even if there was no intentional fudging of data, results and interpretations by scientist teams, the results of science as a whole could still be heavily influenced over time by simple biases in the selection of grant applications. For example, if you only fund research of the type "thing X good" and dismiss all applications trying to prove "thing X bad", you will eventually gather enough evidence for X to appear good, regardless of how objectively "good" it is.
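
A minimal sketch of that selection effect, assuming a purely hypothetical "thing X" whose true effect is zero and a funder/publication filter that only keeps positive, "significant" findings (all parameters invented for illustration):

    # Toy simulation: true effect of X is zero, but only studies that come out
    # positive and "significant" survive the funding/publication filter.
    import numpy as np

    rng = np.random.default_rng(1)
    n_studies, n_per_study = 1000, 50

    surviving = []
    for _ in range(n_studies):
        sample = rng.normal(loc=0.0, scale=1.0, size=n_per_study)  # no real effect
        est = sample.mean()
        se = sample.std(ddof=1) / np.sqrt(n_per_study)
        if est / se > 1.96:          # crude one-sided "X is good" cutoff
            surviving.append(est)

    print(f"studies surviving the filter: {len(surviving)} / {n_studies}")
    print(f"apparent effect of X in the surviving literature: {np.mean(surviving):.2f}")

The surviving "literature" shows a consistent positive effect for X even though the true effect is zero.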


The other level of problem is that truth in reality means consensus. You can argue about what truth is in reality, but functionally it means consensus.

So it's not even about what is or is not true, or what is recognized as useful in the short term, it's that what defines truth at any given time is what is popularly believed to be true.

This sounds very flaky but it has real consequences because the incentives aren't just to find positive results, it's to find positive results that are likely to be accepted by the majority of the academic field. This kind of positive feedback loop is bound to lead to pressure because science almost by definition has to get some things incorrect — it's how progress is made. So people are actively punished for going against the grain, or exploring areas where our knowledge is limited and therefore prone to lots of negative results because we just don't know.


Perceived truth is consensus. But greater truth, reality itself, remains true even if not a single person is able to perceive it. You can't actually modify reality with wishful thinking.


For these scientific papers, at a minimum truth==reproducibility.

For humanities and social sciences, it could mean consensus, but in hard science it is different. You are doing experiments that produce data. If someone else repeats the experiment, they should get the same data.

In this case, no one could reproduce the results of the papers in question.

I wonder how widespread this has become. How many scientists, driven to publish, know that it is too expensive to reproduce their experiment?


> The other level of problem is that truth in reality means consensus

That’s the job of the prestigious media. They manufacture consent among the masses. They propagate right-think. Again, not a conspiracy but simply incentives.

Anyways, there is objective truth. Not everything can be settled that way though and you’re right in that “truth” can mean consensus in many instances.

The problem with that is that in many things the dissent is tasked with proving claims untrue, because of the consensus, rather than those making claim to knowledge having to prove it is true. There’s a lot of this today.


I (and others before and after me) filed complaints that my advisor at UT Austin was falsifying data. University ignored me and gave him tenure. It is commonplace and admin doesn't care as long as it doesn't affect the endowment.


Are you willing to say who it is?


Yeah I'm also curious


> There was at least one year's Tour de France in which all but one participant were later found to have been doping. In other words, you didn't get to that level of that competitive activity, if you didn't cheat, because it wasn't possible to outperform people with such a significant advantage.

This is not true. Even during the 1904 Tour de France, where 9 people were disqualified because of, among other actions, illegal use of cars or trains [1], 27 riders finished the race.

Tour de France in the modern era has up to 180+ competitors lining up, and there hasn't been a case of 100+ riders being disqualified for doping.

[1] https://en.wikipedia.org/wiki/1904_Tour_de_France#Disqualifi...


Indeed, the claim needs to be adjusted to be correct. One of the worst recent years was 2005: https://en.wikipedia.org/wiki/Doping_at_the_Tour_de_France#2...

There, the claim is that of the top 10 finishers, all but one were either stripped of their result in that race or failed tests/were sanctioned for doping in another race.

(The remaining participant, originally #8 Cadel Evans, was a documented client of Michele Ferrari.)


A one-off consultation prior to his road racing career to see if he had the right kind of fitness for the road. I don't think the association should be used to tarnish Cadel without further evidence.


Yeah, not the whole peloton, but most of Armstrong’s main rivals, right? Like Contador etc. (I know his test was contentious).


Thank you for the correction, I either heard wrong back then, or misremembered. I think the analogy/logic still applies, though. Actually, this suggests even more strongly that the higher their position in a competitive field, the more likely they are doping...


I don't think this kind of fraud is even remotely close in scale to the cycling doping you mention.

My primary reason for believing this is that as a scientist myself, I've never done it, and none of my coauthors has ever proposed to do something like that. I have seen other kinds of fraud (e.g. ghost authorship) but not result manipulation.

But of course, If I were a fraudster myself I would also say that I've never done it :)

So a more general argument: scientific "doping" is just not as straightforward as cycling doping. In cycling, it's clear that if you don't get caught (and have no scruples) you benefit from doping, it makes you stronger.

In science, you first need an idea that looks original/promising, and then you do the experiments and see if the idea works or not. This kind of "doping" will only help if you (1) are good at generating ideas, but (2) these ideas often don't work. At least in my field, this is possible but far from the most common situation to be. And in other situations, there is not much benefit to faking results.


> if you (1) are good at generating ideas

I have been joking with my former boss (who is a professor) about how easy it is to come up with good ideas. We each have five to six One Billion Dollar ideas before breakfast, and another 1-2 while in the shower! /s

Coming up with ideas is _ridiculously_ easy; executing on them is the hard part. If you're a seasoned researcher, it shouldn't be hard to guess at the "meta" of your field, and you're probably aware of promising ideas discussed among your colleagues. What if you faked results "proving" one of these ideas?

And the bar for cheating is _surprisingly_ low. I'm not a researcher myself, but I've experienced how my work is torn apart in peer review. Yet if you're dealing with data, apparently no one makes sure to actually check it before approval.[0][1]

[0] https://youtu.be/d2Tm3Yx4HWI?t=846 (analysis on the original data from the Francesca Gino scandal, which already looks suspicious without statistical analysis)

[1] https://youtu.be/KsSuhP60qnI (faked data from the Hendrik Schoen scandal at Bell Labs, as long as you make outrageous claims no one can verify, you can continue to publish in top journals)


I suppose this is heavily discipline-specific. In my specific subfield of CS, the execution is often a Python program or bunch of scripts that can be (and often are) coded by an undergrad if given enough guidance on what exactly is wanted. And then of course the way of presenting it in the paper, choosing which data to show, etc. but cheating doesn't help much there. The hard part in my view is coming up with what Python program should be written (i.e. coming up with a relevant algorithm, experiment, evaluation methodology, etc.).

I can see how in other fields, e.g. biomedical, execution could be more challenging and ideas more plentiful.

Anyway, I don't think ideas (read: good ones, because obviously anyone can have nonsense ideas) can be a commodity in any field. It is quite widely accepted that, e.g., when applying for an ERC grant (the highest-regarded individual research grants in Europe), having a good idea is as important as the CV if not more. This wouldn't be the case if coming up with good ideas were so trivial.


> having a good idea is as important as the CV if not more

I don't know about the acceptance criteria for ERC grants, but from the peer reviews I've read of one of my professor's papers, it feels like there's a bias towards accepting novelty above all else. My joke about ideas was meant more generally, i.e. that you can have a few billion-dollar ideas, but executing on them is much harder (due to lack of funds, time or skill).

Additionally, depending on the demand of the field, I can imagine it being easier to obtain funding for research. Imagine being in machine learning right now lol.

I wasn't talking about grant acceptance though, but purely about what kind of ideas can get you published and what data you have to fake in order to get there. In the case of Francesca Gino it doesn't sound like her fraudulent research was so costly that she would have had to apply for large grants in order to conduct it.


You make it sound so straightforward and simple.

> and then you do the experiments and see if the idea works or not

The problem is "if the idea works or not" can be very fuzzy. With a little p-hacking, anything can work! If nothing in your original data/experiment supports your hypothesis, you can just cherry pick some part of it that gives "significant" results.


>My primary reason for believing this is that as a scientist myself, I've never done it, and none of my coauthors has ever proposed to do something like that.

Are you sure? Also a) how far along are you and b) how far from actual research work are you at this point?


I'm a full professor and PI of several projects. So sadly more management than actual hands-on research lately, but I've gone through all stages, of course.

And yes, I have more than 100 papers and in their making I haven't fabricated data or seen anyone do so. If there is fake data in any of them, it would be because someone included it behind my back. But I doubt it's the case, as during the course of my career we tried various things that didn't work and we just moved on to other experiments. I've never had a student or collaborator even suggest "let's just edit the data and publish it". The most that we have done is to try to cast negative results in a positive light to make them publishable ("the results are promising...", "the results are mixed but show improvements for some dataset..."), but without touching the data or saying anything factually incorrect.


Doping in sports is so much more complicated than most people think it is.

For starters there’s so many performance enhancing substances, and entire underground industries for developing and producing them. All of these drugs would be banned by the blanket prohibition on unapproved substances, but in practice many of them don’t have any screening procedures, so won’t turn up in any tests.

There’s also the fact that many performance enhancing substances are biologically identical to substances your body is naturally producing (like bio-identical testosterone for example). The only way to test for these is to test the ratios of certain metabolites (with a margin for error to avoid false positives). In practice this basically means any athlete can do steroid cycles, as long as they are only boosting their test levels up to that of a genetically elite outlier.

You also have the therapeutic use exemption, which allows for legitimate therapeutic use of banned substances. Testosterone replacement therapy is a legitimate therapeutic use of steroids, and some sports are known for abusing TUEs for this usecase.

Testing for PEDs is way less precise than a lot of people think it is, and any motivated athlete can get away with PED use to some extent, no matter how much testing you subject them to. There’s good reason to be suspicious of a lot of athletes. One massively suspect category of athlete is all of the ones in/near their 40s who somehow manage to maintain physiques as good as/better than they had in their early 20s.


In my personal, anecdotal experience...

Once you get far enough in ANY sport, you start considering tradeoffs between risking long-term health and improving performance. If you are a competent adult, you can weigh your motivations and consider the risks.

But without fail, in ANY competitive sport, you realise that you have to risk health in order to achieve that next performance tier, there is no other way.

Once you get into that 1-10% (depending on sport) elite level, you realise that there is no way you can compete without PEDs.

I've participated in (taken myself) and have been a witness to this in weightlifting (Olympic and strongman), soccer, cycling and swimming. Starting young. Real young.

Coaches of different sanity levels are trusted to make the right choices and not often questioned by busy or absent parents.

Everyone cheers on when medals are won and records are broken, without consideration for how it happens.

I personally had no idea I was taking hormone modifiers until about 16, it started at 11.

My coach just gave me what he said were vitamins and I never questioned it, he was the closest thing to a father I had, his authority was absolute.

At 16 I found myself in a hospital yellow as a banana on the brink of liver failure, that's when I found out.

Everyone on our team took what the coaches gave. Once in a while we'd see a parent get into a shouting match with one of the coaches about something we didn't understand, and that kid would never come back again.

Some of the guys I trained with went on to become highly recognisable names in their respective sports. One even has a supplement brand. Those I've kept in touch with continue PED use, that is the price of being at the top of the game.


Depending on the substance and quality, the time it is detectable can range from as little as 12 hours to a few days. There is a reason lots of athletes refer to drug screenings as "IQ tests". Only the dumb or careless get caught.


I would be very much interested in reading more about this, do you have any good articles you could point me to?


This paper is a bit out of date, but gives a reasonable overview of the challenges of testing for testosterone supplementation

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2657495/

Paper on challenges of detecting growth hormone

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3830297/

Testing for both of those substances has improved, but still leaves plenty of room for abuse.

Use of "unknown" or the constant stream newly developed substances is naturally not rigorously studied, but here's a couple of articles referencing the challenges they pose.

https://thetruthaboutforensicscience.com/undetectable-perfor...

https://bleacherreport.com/articles/1955231-undetectable-the...

UFC is notorious for testosterone TUEs, but that's mostly because they regulate things differently to most other sports

https://www.espn.com/espn/otl/story/_/id/10500652/therapeuti...

Top flight soccer is another sport that's constantly garnering attention for TUEs, but most of the coverage of that is speculative, because other than high level statistics the TUE process is naturally not very transparent (and most of the recent coverage on the topic has been replaced by controversy relating to trans athletes).


You've hit the nail on the head. This also applies to China's anti-corruption programs. Everyone who achieves any level of political power is corrupt, but framing it as an anti-corruption campaign allows you to target your political enemies.


No need to single out China. This is true of all politics, and I'd guess of academia too; it extends to all large human organizations: Church, School, Government.


You’re making the preposterous assertion that all countries’ politicians are equally corrupt. That needs some extraordinary data to support.


Didn't say equal. Just that all 'large' human organizations tend towards corruption.

Churches, Governments, Academia. Hierarchies form, incentives drift over time and become mismatched with the original intent. The proof is in history: look at Church abuse and the Inquisition, look at grade fixing and admissions scandals, look at Volkswagen tweaking emissions, look at CEOs lying to promote products. Our entire society is based on the ebb and flow of some kind of corruption, or the ebb and flow of law.

Even today, in America, half the population think that the laws themselves are corrupt and should be broken. Judges aren't trusted. So who is corrupt if you don't have any foundation to judge anything against?

I'm totally against China taking Taiwan, but from a 'certain point of view', they have a point. It was a part of China before the war, and the mainland 'won'. So they think of it as unification.

Again, I am totally against China taking Taiwan. I just think it is a bit more nuanced.

Actually, I think we are agreeing.


If there's no need to 'single out' China then there's no need to 'single out' doping in cycling, and no need to 'single out' this one particular Stanford president-researcher, etc. The point I am making is that the effect of your comment is to hand-wave away corruption in Chinese politics by saying "but what about…"


I agree with that. If we hand-wave away all cases, then we never tamp them down. Corruption would continue to grow.

So, we should call out all cases. Even if there are so many cases to cause one to become tired of the fight.

Maybe I was just reacting to the fact that in today's world the lines have blurred enough that it is sometimes hard to tell if you are fighting corruption or if you have become the corruption.

From different points of view both these are correct:

"The US experienced a Coup attempt by a president that then did not suffer any consequences for his un-constitutional actions"

or

"The President, fighting for the American Way against the corrupt deep state government attempted to bypass a fake election to continue his quest to root out evil? "


> It actually became a contest, not just of bicycling, but of doping, because getting away with it was quite difficult and it took years for the world of cycling to get good enough countermeasures to shift the advantage to non-doping.

Not just a contest of hiding doping, but of winning against others who were doping. If everyone is doping, then you could argue it's still a fair playing field, just with a higher skill ceiling.


> If everyone is doping, then you could argue it's still a fair playing field, just with a higher skill ceiling.

No, because now you have at least two contests in parallel: the race event for the cyclist, and the year-round "doping and not getting caught" contest for the medical/management team.

For example, one prominent Olympic-level swimmer was disqualified because they failed to appear for testing multiple times. Why would one do that just to come back after disqualification?


It’s not a fair playing field because there were still people who did not cheat.



As long as they knew others were doping, I would argue it was still a fair playing field. The real advantage is in others not knowing you are illegally enhancing performance.


That's why I said, "If everyone is doping" :)


But that's not what you were actually implying.


There are at least two problems with this argument. First, faking research in a way that won't be detected is hard, especially if you aim for a high impact paper, which will probably be studied by others. That's why we hear a lot about fraud in top tier venues. Lower ones probably have more problematic publications. Second, having fake results won't guarantee you a high quality publication, let alone a position in the future.

I can imagine some kind of first order phase transition in which the number of fraudsters becomes so large that self-correction breaks down and instead they start to endorse each other. But that'd be evident from the outside, since science would stop producing real life results. We're definitely not there yet.


> First, faking research in a way that won't be detected is hard, especially if you aim for a high impact paper, which will probably be studied by others.

Fake is a spectrum. Soft faking is easy and widespread--"p-hacking" is the polite term.

The stakes for faking it are low because most research is low impact, most research is seldom reproduced, and even bad research is rarely retracted. For that which is reproduced there is plausible deniability--"researcher degrees of freedom". Getting a high impact study published is more important than having one retracted, so researchers follow that incentive structure.

> But that'd be evident from the outside, since science will stop producing real life results. We're definitely not there yet.

Have you taken a look at the biosciences, social sciences, and medicine? Basic research hasn't been reliable for decades.


> The stakes for faking it are low because most research is low impact ... Getting a high impact study published

That's contradictory. For low impact studies you can fake whatever you want, because nobody really cares. That probably wouldn't promote your career either, though. Publishing a bunch of organic semiconductor papers in Nature and Science will promote your career, but it won't hold up. That's the point.

> Have you taken a look at the biosciences, social sciences, and medicine? Basic research hasn't been reliable for decades.

Not deeply, but I surely can come up with bio/med research with real life applications that appeared in the last 2-3 decades. Say, mRNA vaccines?


The entire fields of sociology, social psychology, economics, literature, theological studies, etc are all low impact. Even if you get in the best journals, it’s high impact for your career but low impact to society so it will get very little scrutiny apart from the tiny slice of academia you compete with.


Again, I know that academia is very different among fields, but usually high impact for a career ~ high impact for the (sub)field, meaning people will want to look at your data and possibly try to replicate it. Outside scrutiny of science is extremely rare anyway.


> That's contradictory.

What is contradictory in a claim that most research is low impact and thus easier to fake, while the incentives for publishing high impact research skew towards publishing bad/fake papers, since publishing is far more important than having a paper retracted?

> Not deeply, but I surely can come up with bio/med research with real life applications that appeared in the last 2-3 decades. Say, mRNA vaccines?

Nobody said that all research is faked.


Ok, not contradictory; it is just two claims that are unrelated both to each other and to my original claim. Yes, low impact research is easy to fake. Yes, there is an incentive to publish fake high-impact papers. Yes, it is hard to create and publish fake high-impact research.


It doesn't have to be outright fraud. I imagine the vast majority of manipulations are extremely subtle. Intentional selection bias, multiple hypothesis testing without correction, or without reporting the failed hypotheses (p-hacking). Most of these would significantly improve your chances of finding important results and be almost impossible to detect.


For the first argument I can't really speak beyond my field (physics, or specifically condensed matter physics), where I think it's fairly hard to be subtle. Often it's partial data presentation which allows for an inverse Occam's razor [1] (proposing a fancy explanation instead of a simple one), which is why full data publication requirements are important (and seem to be happening more).

In smaller (sub)subfields sometimes fraud becomes large enough to self-sustain, as happened with Majoranas [2,3], but even that eventually self-corrected. [1] https://arxiv.org/abs/2204.08284 [2] https://twitter.com/spinespresso/status/1503352928656138241 [3] https://espressospin.org/2022/11/17/the-fallen-angel-particl...


> without reporting the failed hypotheses (p-hacking).

This one is so subtle, it's probable that many of the people doing it don't even realize they're doing something wrong.


It doesn't even need to be deliberately faked data; more often it's data published without much due process to ensure its validity.

I have had about three people working in research fields use the phrase, off the record of course: "publish or perish".

Get it out the door, try to get the funding, and hopefully, if it works out, solve the issues before they become a problem.


I suppose you could probably also say the same of certain housing markets. If the cost to rent or buy a place is sufficiently high and the system actively prevents the prices ever coming down to a point where people who didn't inherit huge sums or came into that money some other way have a chance, you might as well do whatever you can because the constraints have been laid before you. At a certain point, you're just evening the playing field and surviving.


I know several Tour de France aficionados, who would claim that doping remains rampant, they are just better at hiding it now.


Any competition without an effective referee (from the Tour to Academics to presidential races) will devolve to the lowest level. People always seem surprised by this...


Link to full pdf of the Report issued by Stanford's "special committee:" https://boardoftrustees.stanford.edu/wp-content/uploads/site...

Some extracts:

"There were repeated instances of manipulation of research data and/or subpar scientific practices from different people and in labs run by Dr. Tessier-Lavigne at different institutions"

"At various times when concerns with Dr. Tessier-Lavigne’s papers emerged—in 2001, the early 2010s, 2015-16, and March 2021—Dr.Tessier-Lavigne failed to decisively and forthrightly correct mistakes in the scientific record."

"However, a second theme emerged among some of the interviewees that the same lab culture also tended to reward the “winners” (that is, postdocs who could generate favorable results) and marginalize or diminish the “losers” (that is, postdocs who were unable or struggled to generate such data)"

Considering that Stanford's special committee has every reason to protect Tessier-Lavigne and do damage control, the findings are quite damning.

Good on Theo Baker for continuing to provide a more critical perspective compared to the cushy political speak of the report.


Interesting that Stanford found the 2009 paper "lacked rigor" while Genentech found no one "reported observing or knowing of any fraud, fabrication, or other intentional wrongdoing in the research leading to and reported in the 2009 Nature paper." [0]

So basically the euphemism "lacked rigor" = "wasn't scientific", but since it wasn't "intentional" or "known", no one can be blamed for it? Am I the only one who doesn't really care whether bad science is intentional fraud or not? They should be judged on their science, which was objectively dogshit, not on their morality, which is subjectively dogshit but conveniently can't be judged by the relevant parties because everyone involved got amnesia.

Maybe I'm being too harsh but shadiness in public health really irritates me.

[0] https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&c...


>For the five reviewed papers where Dr. Tessier-Lavigne was a principal author (sometimes referred to as the “primary papers”), the Scientific Panel has concluded that Dr. Tessier-Lavigne did not have actual knowledge of the manipulation of research data that occurred in his lab and was not reckless in failing to identify such manipulation prior to publication.

I've read some of the papers that contain duplicated data; such duplications can be seen as "honest errors" similar to typos (e.g. maybe the file names were too similar?). It would be negligent of MTL to overlook these, even if it would not be fraud.


The issue with academia is much broader and larger than people realize, but I'm happy that (slowly) some of these things are starting to come to light.

Broader, because fraudulent research is only one of the multiple crappy things that a person with no ethics will do to grab a place and keep it. But another common thing is academic abuse towards students, which is another HUGE problem that needs to be addressed. There's a lot of real crimes happening here, extortion, sexual abuse, you name it. Guess who's talking about it? Almost nobody.

Larger, because people tend to believe that these are just a few bad apples while in reality this is pretty much how many large academic institutions operate de facto. If I had to put an estimate on how many of these "bad apples" there actually are, I would say it is as high as 7 out of 10 people involved in academia.

I love science, I've been doing it for about 15 years. That is the reason why I'm very vocal around this subject. This is a swamp that needs to be drained.


It goes even deeper than all that. Schools basically give away undergraduate degrees now, as those who should fail will magically receive passing grades. They cannot afford to fail too many students (relative to other institutions), as that ultimately would damage either their finances or reputation.

Absolutely everyone involved in teaching (professors, instructors, grad students) is complicit in this scam. This endemic condition demonstrates the complete lack of integrity present in these institutions. If they are willing to lie and cheat the system that grants the very entry-level credentials required to get into the upper echelons of academia, it becomes a very safe bet that they are lying and cheating in countless other ways from the comfort of their ivory towers.


>It goes even deeper than all that. Schools basically give away undergraduate degrees now, as those who should fail will magically receive passing grades.

how exactly did you make the leap to undergrads?


> There's a lot of real crimes happening here, extortion, sexual abuse, you name it. Guess who's talking about it? Almost nobody.

And institutions are very good and very fast at sweeping these under the rug, often if not always at the expense of the students.


I hope you get a chance to respond; I just want to clarify the point: "7 out of 10 people in academia are either engaging in highly unethical, borderline and actual criminal acts, or complicit in enabling and/or hiding these acts."

Quite a claim, if that's what you meant. Is this about team members who are considered untouchable due to being brilliant jerks/creeps, too important/connected, or otherwise inconvenient to go after?

The most I've personally heard of is lecturing professors berating the demo team (and students)... he was a brilliant lecturer. Perhaps others hide it better.

I've heard it can work the opposite way, but I imagine it's not evenly applied. People get randomly fired from academia for the smallest of things.


I think the reason for this is that this behavior is self-perpetuating, in the sense that "a gang of crooks" is much more likely to assemble than "a gang of good people".

Whenever "a gang of crooks" takes hold of a Uni, it is extremely difficult to take them out, and people who don't want to comply with their modus operandi get pushed away and removed.

The only time "a gang of crooks" is dethroned is when someone (usually, worse) seizes their power to instill their own.

>People get randomly fired from academia for the smallest of things.

Only those on the outgroup, :^).


Underplayed here is the role of Elisabeth Bik (@microbiomdigest on Twitter). She’s spent years doggedly finding photoshopped images in published research papers, including these ones.


Super fascinating. Do you have any links to this research?


Her site is:

https://scienceintegritydigest.com/about/

https://scienceintegritydigest.com/

She wrote an op-ed in the NYT last year: https://www.nytimes.com/interactive/2022/10/29/opinion/scien...

She says her work has led to 938 papers being retracted.

One of the interesting parts to me is that most of her work seems to be completely manual. She seems to have an eye for seeing photoshop on microscopy images or images of gels. The biggest method of fraud that she seems to find is researchers cutting and pasting from one image to another, for example, the scientist runs a gel but the results don't work out the way they hoped, so they paste in a line from another gel to make the experiment look like it worked.

The uncomfortable part is that this is just one of many ways that you could fabricate or alter research results, and Dr. Bik is only one person finding it in her spare time. Probably a whole lot of fraud goes undetected.


> One of the interesting parts to me is that most of her work seems to be completely manual. She seems to have an eye for seeing photoshop [...]

Brings to mind the skills of the 'super recogniser'. Perhaps her manual work involves similar aptitude.

https://en.wikipedia.org/wiki/Super_recogniser

--

Later edit: Indeed, the NYT article mentions -

"Since childhood, I’ve been “blessed” with what I’m told is a better-than-average ability to spot repeating patterns. It’s a questionable blessing when you’re focused more on the floor tiles than on the person you’re supposed to talk to. However, this ability, combined with my — what some might call obsessive — personality, helped me when hunting duplications in scientific images by eye." - Elisabeth Bik


She also uses a computer to double-check her findings; from the GP link:

> Most duplications were confirmed by ImageTwin.



It is. And it should make you think twice before you do the same photoshopping.

They are watching us.


She wrote a guest essay for us at NYT Opinion last fall, describing and showing examples of her investigative work: https://nyti.ms/43ti6y1

(Gift link, no paywall)


Thanks for the link, and wow! A very worthwhile read. And this paragraph raised the alarm level even more for me:

> Things could be about to get even worse. Artificial intelligence might help detect duplicated data in research, but it can also be used to generate fake data. It is easy nowadays to produce fabricated photos or videos of events that never happened, and A.I.-generated images might have already started to poison the scientific literature. As A.I. technology develops, it will become significantly harder to distinguish fake from real.


Why don't scientists live stream their experiments and have more rigorous verification of methodology?


This is actually one of the reasons I believe some foreign web commerce sites tend to do so well. There's a culture of filming everything. Film the product being made. Film the completed product in packaging. Film the completed product interacting with its environment and humans. It really adds to the "this product is real, and not a scam" impression. Much larger barrier for falsification.


This would be a great tool for other teams to learn new/different methodologies. I guess the reason is competition.

But reproducing studies could be live-streamed without a problem.


One of the things that surprised me was that she was identifying manipulated and duplicated images using her own eyes.

This could be done via software, and might catch more papers than the ~6k out of 100k that she did.

In the software world, there are tools for this, "software composition analysis". I worked at a company that got busted for violating GPL, and as part of settling the suit, all software had to be run through BlackDuck and all warning/issues found by the tool had to be resolved before the software could be released. (NOTE: the software that violated GPL was from an acquisition)
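For what it's worth, here is a very minimal sketch of the kind of automated check that's possible for the image case (Python with OpenCV assumed to be installed; the file names, patch coordinates, and threshold are all made up for illustration). It simply slides a patch from one figure over another and flags near-perfect matches:

    import cv2

    # Hypothetical file names for two published figure panels.
    fig_a = cv2.imread("figure_2a.png", cv2.IMREAD_GRAYSCALE)
    fig_b = cv2.imread("figure_5c.png", cv2.IMREAD_GRAYSCALE)

    # Take a patch from figure A and search for it inside figure B.
    patch = fig_a[100:164, 200:264]   # arbitrary 64x64 region
    result = cv2.matchTemplate(fig_b, patch, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)

    # A near-1.0 correlation between supposedly independent experiments is a
    # red flag worth a human look, not proof of fraud by itself.
    if max_val > 0.98:
        print(f"possible duplicated region at {max_loc} (score {max_val:.3f})")

Tools like ImageTwin presumably do something far more sophisticated (handling rotation, rescaling, compression artifacts), but even a brute-force pass like this over a journal's back catalogue would likely surface cases a single human would never have time to find.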


The microscopy image where she found the duplications demonstrates an absolutely unreal pattern recognition ability. I knew of her, but I hadn't previously seen this essay.


Here's one of many archived links; no paywall, it can't be edited, and it chokes out NYT's telemetry and metrics and such: https://archive.ph/hytwz


The NYT needs to make money to pay its Journalists.

After somebody who presumably works there posts a free link, you see the need to do this?


You can always justify behavior that causes someone else cost. Like the NYT with all their spyware?

Both queries are reasonable ones to make. A spies by default without proper consent, B blocks by default without consent.

Is this optimal? Can we do better?


I recommend you don't read NYT articles if you don't want your IP to be logged.


I recommend you put up convincing evidence that this is all that the NYT logs about visitors, because I don't believe that and it sounds a bit, well, glib to be honest. I don't read the NYT, fwiw.

Moreover, I recommend you take a slightly closer look if you are really holding up the NYT as a bastion of ethical behavior. But at least they stand up for Assange and for journalism not being criminalised, even while doing stenography for the powerful (amongst some decent real reporting, some of it now being criminalized, away from the front page), right? I'm sure they'll balance their reporting about WikiLeaks by quoting someone who doesn't actively hate them any day now.

Anyway, it should be noted that saying what you believe, as you've done, is the right thing, and I endorse it even when it defends the revenue of a company that had $2.31 billion on the books last year and has been a marker of inter-generational wealth and power for 170+ years for the family that owns it. I agree that what that looks like genuinely doesn't matter with regard to what you think is right in any ethical analysis. That goes unsaid too often, imho.




>The NYT needs to make money to pay its Journalists.

Don't threaten me with a good time.

If j*urnos want money, they should choose to start acting in a way that deserves it at some point. In this case, if "The Paper of Record" needs to be regularly recorded from outside to be held to account for shoddy jobs and stitch-ups, exactly what service is it actually providing?


ok then


What is concerning is that there were even more allegations that weren't included because they could not offer anonymity to those providing them:

https://stanforddaily.com/2023/07/19/sources-refused-to-part...


Wikipedia discusses the reliability of academic sources here[0], advising “extreme caution” when using primary research papers, preferring reviews.

This incident is a case in point, and I wish the media wouldn't rush to publicize papers until they have been through much more extensive validation, replication and review. It's especially worrying when primary medical research is enthusiastically rushed into the hands of doctors many years ahead of the systematic reviews that temper it.

[0] https://en.wikipedia.org/wiki/Wikipedia:Reliable_sources#Som...


looks nervously at the arxiv culture in AI...


Hello I work for Hubris Ventures and noticed you had AI in your comment. Are you interested in funding?


Hello I work for Hype Bank and noticed you had venture in your comment. Are you interested in some cheap loans?


I think the culture of publishing code in AI mitigates this relatively well compared to other fields, but it has been getting strained recently.


I don't think there is a lot wrong with doctors or specialists getting hold of individual or preprint papers. Sometimes they're the best you have when you need to make decisions.

I think the problems start when you have a layman trying to understand a topic. You can find all kinds of papers and results with contradicting evidence, so you need foundational knowledge to interpret them.


"doctor" is more lay of a title than many think imo


Doctors vary but they're still the best at what they do, which is deciding on a treatment given limited information.


In my ideal world, they shouldn't be doing that at all. They should be advising on treatments and performing said treatments, not deciding on them.


My understanding is that in the U.S. a patient always has the right to refuse treatment. The only power a Doctor might have is to deny a treatment, but that seems reasonable from a positive/negative rights perspective. But I get it, you and the other poster are trying to seed doubt about vaccines.


you and the other poster are trying to seed doubt about vaccines

WTF? How did you get that from what I wrote? Are you under the impression that individual doctors decide on which vaccines to give to which patient?


> But I get it, you and the other poster are trying to seed doubt about vaccines.

yeah no, not at all


I think a bigger problem (especially if you think ordinary people shouldn't be entitled to their own opinions) is when people advocate for science to have the final say on all matters of public policy. Especially when so many people consider any form of skepticism towards scientific findings to be some sort of heresy.


What else should we depend on, vibes? Even the claim "scientific findings aren't reliable" is a claim that can only be confirmed through science. Science isn't an exclusive institution, everyone is welcome to participate. But a preprint + conspiratorial thinking isn't a substitute for good arguments.


> What else should we depend on, vibes?

Well the purpose of public policy is to serve the will and preferences of the people, so I’d suggest we should rely on our democratic processes to have the final say on all matters of public policy.

> Even the claim "scientific findings aren't reliable" is a claim that can only be confirmed through science.

This is complete nonsense. Anybody can point out the flaws in any claim made by science, or demonstrate a reason for why something might be false. Even a freshman CompSci student with a side interest in journalism. Perhaps scientists will only accept their findings being falsified by appropriately qualified people who follow whatever formal procedures they deem appropriate, but who cares? If I were to read a study that made some wild claims that were barely supported at all by the data gathered from their clearly flawed experiments, then I’m welcome to draw my own opinion that “that study seems like bs to me”, and it’s not necessary for me to have that opinion validated by any scientific process or institution.

> Science isn't an exclusive institution, everyone is welcome to participate.

This is also nonsense. Public discussions on science are extensively gatekept to only allowing the views of “the experts” and people who agree with “the experts”. A non-expert challenging the claims made by an expert is misinformation, and should be fact-checked and flagged with a disclaimer, or removed from the discussion entirely.


> Well the purpose of public policy is to serve the will and preferences of the people, so I’d suggest we should rely on our democratic processes to have the final say on all matters of public policy.

Policy in the U.S. is decided through democracy. But that raises the question: how should voters decide how to vote? Unless your belief is that vibes are valid, then it should be science.

> This is complete nonsense...

And it seems like that is your position. If you think there is widespread systemic error in science, I got bad news for you about vibes.

> This is also nonsense. Public discussions on science are extensively gatekept to only allowing the views of “the experts” and people who agree with “the experts”. A non-expert challenging the claims made by an expert is misinformation, and should be fact-checked and flagged with a disclaimer, or removed from the discussion entirely.

Have you considered that people get fact checked because they say some wild things without good evidence? Do you not care that people can just make things up?


> how should voters decide how to vote? Unless your belief is that vibes are valid, then it should be science.

The questions raised by most political platforms are answered by personal values or personal preferences. These are completely inaccessible to the scientific method, and I doubt you or anybody else takes a scientific approach to voting.

> Have you considered that people get fact checked because they say some wild things without good evidence?

Scientists routinely do this too. The main difference being it’s only socially acceptable to scrutinise somebody if they’re not being venerated as an unassailable expert.


>The questions raised by most political platforms are answered by personal values or personal preferences.

The Democratic and Republican parties make a big deal about health care, housing, taxes, foreign policy, etc. These are also top issues for voters.

>These are completely inaccessible to the scientific method, and I doubt you or anybody else takes a scientific approach to voting.

They can, and they should. If you have a personal opinion on the tax rate, you shouldn't pull that tax rate out of your ass. If you have a preference for a tax rate, you should inform yourself on the matter.

>Scientists routinely do this too. The main difference being it’s only socially acceptable to scrutinise somebody if they’re not being venerated as an unassailable expert.

Now you've entered conspiracy theory land again. People routinely scrutinize scientists.


> According to Jerry Yang, chair of the Stanford Board of Trustees, Tessier-Lavigne will step down “in light of the report and its impact on his ability to lead Stanford.”

Jerry Yang, as in the same Jerry Yang that co-founded Yahoo? Didn't know he was on the board of trustees at Stanford.


He's been Chair since July 2021. At Stanford, Yang has served twice on the board of trustees, the first time from 2005 to 2015. He joined the board again in October 2017 and has served as its vice chair.

Yang & his wife have also given over $75 million to Stanford.


It's a small world up there.


[flagged]


lol is this a dig at Andrew Yang? why discuss hypotheticals. a fraud is a fraud.


Richard Feynman on the subject:

"researchers must avoid fooling themselves, be willing to question and doubt their own theories and their own results, and investigate possible flaws in a theory or an experiment. He recommended that researchers adopt an unusually high level of honesty which is rarely encountered in everyday life, and gave examples from advertising, politics, and psychology to illustrate the everyday dishonesty which should be unacceptable in science. Feynman cautioned,[3]

'We've learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature's phenomena will agree or they'll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven't tried to be very careful in this kind of work. And it's this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in cargo cult science.'"

from https://en.wikipedia.org/wiki/Cargo_cult_science

The frauds and bad scientists will be found out, eventually.


> The frauds and bad scientists will be found out, eventually.

The Stanford president mentioned is 63 years old and had a full academic career before being found out. It might even be that he remains financially well-off (I'm assuming his current role paid well for the last few years).

For every big case such as this one, how many more commit fraud without being noticed?

These quotes address what a scientist should do to do science, not what is practical to have a career in academia. The latter is more relevant in practice, because rigor and integrity don't get you tenure on their own.


Exactly. He stifled the careers of many honest scientists by siphoning money and prestige that should rightfully have been directed elsewhere. The damage is done.


This documentary series comes to mind [1]. In the end it goes over the fallout: all the collective research years wasted and budding careers stunted (including people outright leaving research) because, as students/postdocs trying to replicate the work of a scientist who was considered a potential Nobel laureate, they couldn't possibly question the work they were trying to replicate.

I feel this aspect of research fraud is even worse than the other established scientists who had their research impacted. In the mentioned case, the established scientists were essentially unharmed with a large body of other work to lean on and many having some form of job security, while the PhDs and postdocs suddenly found themselves unable to list most of their experience on their CV, despite the established scientists being the most qualified to verify the fraudster's work. Especially for someone in a leadership position at a university, I wonder how that doesn't eat them up inside.

[1] https://www.youtube.com/watch?v=nfDoml-Db64


> It might even be that he remains financially well-off (I'm assuming his current role paid well for the last few years).

I would say so. From one of the articles:

> Tessier-Lavigne’s salary at Regeneron in 2014 was $1,764,032, according to a previously-unreported class action lawsuit alleging excessive compensation for members of the Compensation Committee, which included Tessier-Lavigne. It was later settled. He earned $1,555,296 from Stanford in 2021 with an additional $700,000 annually as a board director for Regeneron.


> Tessier-Lavigne’s salary at Regeneron

Wait, the company that re-sells "medical waste"? E.g. foreskins, cord blood, placentas? Body parts that should reside with their rightful owners, but that doctors butcher and steal for self-enrichment?


The "The frauds and bad scientists will be found out, eventually" is still relevant. The point is that the priniciples of collaborative peer review should keep the field on track, not that it will rain vengeance down on some miscreant. The papers being retracted is far more important, in the long run, than this person being punshed or not.


But it should also be a reminder to those not in academic fields to take preprints and even published research with a huge grain of salt. Between political pressure, incentives to cheat to get funding and status, and the “infallibility” of people in authority, peer reviewed research is not looking very good to the average Joe. Hopefully it will eventually get straightened out, but it’s looking more like an entrenched ministry of truth.


I've been fascinated by this recently. Could you wind up better off even after getting exposed in a fraud scandal?

We always assume it's a tragic tale, and we're seeing someone crushed into a rock bottom state. It seems in many cases they still end up ahead of where they would be.

I even wondered if that would happen with Theranos. It's possible she's even ahead now of where she would be without fraud. I tend to think a lengthy prison sentence is never worth it, but if she had got off easier that would be a different story.


> The frauds and bad scientists will be found out, eventually.

That's nice, but some frauds could potentially be very, very costly for society. There needs to be some deterrent as well.


There needs to be a heavy deterrent, but there also needs to be room for genuine failure. A lot of research, in academic, public, and private domains alike, is worthless. When you spend months or years on dead-end research it can be a disaster for your career. The temptation to misrepresent or fabricate results is strong in those situations.


That's the problem. Spending years to fail in research shouldn't nuke your career. Finding out that something isn't worth doing is as important as, if not more important than, finding the way to do it.


Biomedical science is not self-correcting on any reasonable time scale. Ten to twenty year errors and distractions are common, and can have a huge direct and indirect cost. Alzheimer’s disease research and the APP mania is a case in point.


APP Mania?


Amyloid precursor protein

The original study on that, from around 2006, appears to have been faked

https://www.science.org/content/article/potential-fabricatio...


As we are also discovering from 40 years of neoclassical economics being a dominant force in policy.


Only if we can question experts. That seems to get you in trouble.


Eventually you will be found out. Unfortunately that could be at 63 (like the Stanford president) after you have had an entire career and tens of millions in compensation. Other times it may be after your death.

Sadly, this is a hard scam to catch and most top researchers may never get enough scrutiny to get caught.


> Eventually you will be found out.

this just isn't true. you're assuming because you only see this every once in a while that it must be rare and policed effectively. it's actually the complete opposite - very common and not policed at all. the much more likely thing is that your paper with fabricated data will never be read. that's as good in the sense that no harm no foul but it still has the effect of padding out people's CVs that don't deserve it. but hey - don't hate the player hate the game is my motto.


Fair enough. I was just quoting the guy above me.


"eventually" might be a while. these fairly obvious (once pointed out) photoshops were just sitting in the open in the top scientific journals in the world for 20 years...


[flagged]


Ok, but can you please not post unsubstantive and/or flamebait comments to HN?

You may not owe Richard Feynman better, but you owe this community better if you're participating in it.

https://news.ycombinator.com/newsguidelines.html


The provost has also stepped down, presumably because she knew the new president would select a new provost. Given that she announced her resignation 10 weeks ago, she must have known that this outcome was the likely one. [1]

1: https://news.stanford.edu/report/2023/05/03/persis-drell-ste...


Wow, I didn't see that one coming. There was something underwhelming about the John L. Hennessy handoff and the new cast of characters that I couldn't put my finger on at the time. The tone of the alumni magazine changed in a direction I wasn't too fond of, but it was too complex to nail down and articulate why at the time.


On the whole, all these scandals in manipulated research have deeply shaken my trust in many of our scientific institutions. It's clear by now this isn't the case of a few bad apples - our scientific institutions are systemically broken in ways that promote spreading fraudulent results as established scientific truth.


As an aside, the phrase "a few bad apples" is actually originally "a few bad apples spoil the barrel", referencing the fact that a bad/overripe apple causes nearby apples to quickly ripen and go bad, which is now known to be due to ripe apples producing ethylene gas, which accelerates the ripening of other nearby apples. The phrase originally meant that one bad thing corrupts and destroys all associated. The discovery of a bad apple actually means everything is already irrevocably destroyed, and is thus a reason for not tolerating even a single bad apple.

A modern metaphor with a somewhat similar meaning to the original is "a fish rots from the head down", pointing out that organizational failures are usually the result of bad leadership. A rotten leadership will quickly result in a rotten organization; therefore, it is important to make sure the leadership of an organization is not rotten. It also points out that low-level failures indicate there are deeper high-level failures: if the line level is screwed up, the leadership is almost certainly just as screwed up. The fix is replacing the rotten leadership with a new one, as lower-level fixes will not fix the rotten head.

Another, more direct equivalent metaphor is a Chinese saying translated as: "One piece of rat poop spoils the pot of soup." That is hopefully self-explanatory. We should probably use it instead of "a few bad apples" as nobody will reverse the meaning of that one.


As an aside to your aside, it's also the case that phrases/words change meaning over time, as one usage grows to dominate another.

In this case, the "a few bad apples are not representative of a group" meaning has grown above the "One bad apple spoils the barrel" meaning, and so the phrase has changed, for better or worse.

Maybe it would be best if everyone used the long version instead of the short one. When you say/write "A few bad apples", the meaning is ambiguous, but if you use the long version, it's not. Problem solved :)


> In this case, the "a few bad apples are not representative of a group" meaning has grown above the "One bad apple spoils the barrel" meaning

Most of the time when I hear the "only a few bad apples, the rest of us are fine" meaning it's coming right from the mouths of badly spoiled apples twisting the meaning of those words and popularizing that usage to suit their agendas.

Generally, I think that there's nothing wrong with pushing back against words and phrases used incorrectly. We get to decide how words are used, and a large part of that decision making process involves social pressure and education. I think it's particularly useful to defend the meaning of words and phrases when they're being deceptively misused and promoted.


> the "a few bad apples are not representative of a group"

I have never heard that phrase; it has always been that the few spoil the whole.

I have heard people say "the proof is in the pudding", which means nothing at all, when the real phrase is "The proof of the pudding is in the tasting".

I'm from England and I speak English, so maybe it hasn't translated well to Americlish.


I think the larger "issue" is that the phrase colloquially means the exact opposite of the original observation, that a bad apple MEANS the bunch is spoiled. It's worse because this changing of the meaning is perpetuated by those same bad apples themselves.

"the proof is in the pudding" is a much more benign change. It's literally just a shortening, but no meaning is lost... if you want the proof, you'll find it in the pudding (implying you should try the pudding to verify your assumptions)


"Literally" is another word where the meaning changed from being the literal opposite of what it was "meant" to originally mean, not sure one is "worse" than the other. It's just change, which will continue to happen.


I would argue that the meaning has never changed. There is just an additional slang variation used by a subset of English speakers. Much like "wicked" was once slang for "good", and how Londoners don't literally ring people using the bones of dogs ("dog and bone" is Cockney rhyming slang, in case the reference doesn't translate).


> the original observation, that a bad apple MEANS the bunch is spoiled

Too few people have enough apple trees in their lives to preserve the meaning.


"It's just a few bad apples" is a common response to police misconduct here in the States, with the attitude of "why are you making such a big deal out of this?"

The original saying, of course, is all about why you have to make a big deal out of this, for reasons that apply to both apples and cops.


> I have heard people say "the proof is in the pudding", which means nothing at all, when the real phrase is "The proof of the pudding is in the tasting".

To be fair, the "real" phrase you give here doesn't make much more sense to me. Even assuming the use of the term "pudding" across the pond to be more than just a fairly niche dessert like it is in America, what does it mean for pudding to have "proof"? Is it some sort of philosophical thing where you don't accept that the pudding exists unless you taste it (which I feel isn't super convincing, since if we're going to have a discussion, we kind of have to accept that each other exists without having similar first-hand "proof", so we might as well accept that pudding exists as well)? I know there's a concept of something called "proofing" in baking, but I'm pretty sure that happens long before people taste the final product.

In general, I don't find most cliches to be particularly profound. "It is what it is" is just a weird way to state an obvious tautology, but somehow it's supposed to convince me that I should just passively accept whatever bad thing is happening? "You can’t teach an old dog new tricks" isn't universally true, but it apparently also is supposed to be a convincing argument in favor of inaction. "You can’t have your cake and eat it too" is probably the most annoying to me, because the only way anyone ever wants to "have" cake is by eating it; no one actually struggles to decide between eating their cake or keeping it around as a decoration or whatever.

There's something about stating something vaguely or ambiguously that seems to make it resonate with people as profound, and I've never been able to understand it. In my experience, thought-terminating cliches are by far the most common kind.


It's "proof" as in to test, like "proofreading". The point being, the real test of how good something is, is to use it (for its intended purpose).

A vaguely similar sentiment to when people say "eating your own dog food" (or words to that effect) to mean testing something by using it themselves. Albeit the pudding proverb doesn't require the prover to be oneself like "dog fooding" does.


This is just an excuse for ignorance and the annoying habit people have of repeating something they heard but don't understand.

I think it's right to correct it because when people misuse this phrase, it isn't gaining a new meaning--it's making it meaningless. Why apples? The comparison to apples adds no information or nuance.

Like when there's a story about police corruption, and someone says "they're just a few bad apples, not all cops are bad." Again, why compare them to apples? Why not just say a few bad cops?

This isn't words/phrases changing meaning, it's losing meaning.


> This isn't words/phrases changing meaning, it's losing meaning.

It is literally not; it still means something, just not the same as it originally meant. This happens all the time, with "literally" being one of the best examples of something that literally means the opposite of what it used to mean.


The problem with this is that it creates ambiguity in communication. Both the old meaning and the new one will circulate together, especially among different demographics, and cause potentially severe misunderstandings.


I don't think the phrase has changed meaning?

> The phrase originally meant that one bad thing corrupts and destroys all associated.

It's saying if you don't remove the bad apple, you will get a lot of bad apples in the future. The presence of a bad apple doesn't imply all of them are already spoiled right now. If you're seeing other apples that still haven't spoiled, it suggests you still have time to do damage control. That seems consistent with how the metaphor is used nowadays.


I think there's a transition phase between the two that people miss out on. I recall hearing "let's not let a few bad apples spoil the bunch" which is an acknowledgement of the original phrase, reworked to implore listeners not to throw it all away. You could say "let's not throw the baby out with the bathwater" but I guess some people are apple enthusiasts?


I would like to point out that "scientific truth" does not really exist, or at least is far from straightforward to define and establish. Basically, you should see each piece of research as evidence for a certain hypothesis, and the more evidence is available, the more that hypothesis is believable.

But the larger issue here is that all public institutions are, by that definition, broken. For example, businesses also won't hesitate to spread falsehoods to sell their stuff, governments will try to convince their people that they are needed through propaganda and policing, and so on.

How do we solve these problems? We have laws to regulate what businesses can't do (nevermind lobbying), and we split governments' responsibility so that no single branch becomes too powerful. In general, we have several independent institutions that keep an eye on each other.

In the case of science, we trust other scientists to replicate and confirm previous findings. It is a self-correcting mechanism, whereby sloppy or fraudulent research is eventually singled out, as happened in this and many other cases.

So I guess the gist of what I want to say is that you're right in not trusting a piece of research just because it was made by a reputable institute; instead, look for solid results that were replicated by independent researchers (and the gold standard here is replication, not peer review).


> businesses also won't hesitate to spread falsehoods to sell their stuff

They do hesitate. It's quite hard to catch businesses openly lying about their own products because, as you observe, there are so many systems and institutions out there trying to catch them: regulators, but also lawyers (class action + ambulance-chasers), politicians, journalists, activists, and consumer research people. Also, you can criticize companies all day and not get banned from social media.

A good example of what happens when someone forgets this is Elizabeth Holmes. Exposed by a journalist, prosecuted, jailed.

Public institutions are quite well insulated in comparison. Journalists virtually never investigate them, preferring to take their word as gospel. There are few crimes on the books that can jail them regardless of what they say or do, they are often allowed to investigate themselves, criticism is often branded misinformation and then banned, and many people automatically discard any accusation of malfeasance on the assumption that, since the institutions claim to be non-profit, corruption is nearly impossible.

> It is a self-correcting mechanism, whereby sloppy or fraudulent research is eventually singled out, as happened in this and many other cases.

It's sadly not self-correcting, far from it. If it were self-correcting then the Stanford president's fraud would have been exposed by other scientists years ago, it wouldn't be so easy to find examples of it, and we wouldn't see editors of famous journals estimate that half or more of their research is bad. In practice, cases where there are consequences are the exception rather than the norm; fraud is usually found by highly patient outsiders, and it almost always takes years of effort by them to get anywhere. Even then the default expected outcome is nothing. Bear in mind that there had been many attempts to flag fraud in MTL's labs before, and he had simply ignored them without consequence.


Alternatively, there is a baseline of fraudulent behavior of 1-5% in any human organization, and since there are tens of thousands of high-profile researchers this sort of thing is inevitable. The question you should be asking is whether the field is able to correct and address its mistakes. Ironically, cases like this one are the success stories: we don't have enough data to know how many cases we're missing.


I don't think the baseline is the same. The more competition, the more temptation to cheat. When the margins to win are small enough, cheaters are disproportionately rewarded.

Think of Tour de France. Famously doping-riddled. There are a lot of clean cyclists, but they are much less likely to be able to compete in the tour.

You can fight cheating with policing: doping controls, etc. But as the competition gets more extreme, the more resources you need to spend on policing. There's a breaking point, where what you need to spend on policing exceeds what you get from competition.

This is why almost no municipalities have a free-for-all policy for taxis. There are too many people technically able to drive people for money. All that competition drives prices lower, sure, but asymptotically: you get smaller and smaller price reductions the more competition you pile on, while the incentives for taxi drivers to cheat (by evading taxes, doing money laundering as a side gig, etc.) keep growing. London did an interesting thing: with their gruelling geography knowledge exam, they tried to use all that competitive energy to buy something other than marginally lower prices. Still an incentive to cheat, of course, but catching cheaters on an exam is probably cheaper and easier than catching cheaters in the economy.

(Municipalities that auction taxi permits get to keep most of the bad incentives, without the advantage of competition on price.)


It's only a story because he's president; if he were only a researcher/professor this would not even be a story. This is NOT a success story. It shows that this fraudulent behavior is endemic and an effective strategy for climbing the academic ladder.

A success story would be this is exposed at large... we work out some kind of effective peer-reproduced tests... and the hundreds/thousands of cheating professors are fired.


Endemic means "regularly occurring". How many examples of this kind of misconduct are you aware of? Ok, now, what's the denominator? How much research is actually conducted? I'm personally familiar with 3 fields (CS, bio, and geology) and what I've learned is that the number of labs --- let alone projects --- is mind-boggling. If your examples constitute 1% of all research conducted --- which would represent a cosmic-scale fuckload of research projects --- how much should I care about it?


BMJ: Time to assume fraud? https://blogs.bmj.com/bmj/2021/07/05/time-to-assume-that-hea...

Study claims 1 in 4 cancer research papers contains faked data https://arstechnica.com/science/2015/06/study-claims-1-in-4-...


So let's talk about misleading headlines and citations in journal articles. I would argue that Ars Technica is one of the better news sources. Despite that, if we go to the article, there is a link supporting the claim that there has been "a real uptick in misconduct". Now if we click through that link, it does claim that there has been an increase in fraud as a lead-in (this time without a link), but the article is about something completely different (i.e. that almost half of retracted papers are retracted due to fraud).

As an aside, the article notes that there have been a total of 2000 retracted papers in the NIH database. Considering that there are 9 million papers in the database overall, that is a tiny percentage (roughly 0.02%).


> ... if we click through ...

So you deflect from the entire content of the article with that distraction? And then an additional misdirection regarding retraction? Why?


> > ... if we click through ...

> So you deflect from the entire content of the article with that distraction? And then an additional misdirection regarding retraction? Why?

What do you mean? I take issue with the headlines and reporting. And I believe if one claims lack of evidence, sloppy evidence or fraudulent evidence, one should be pretty diligent about one's own evidence.

Regarding the claims in the article. If you look at the 1 in 4 article you find that the reality is actually much more nuanced, which is exactly my point. The reporting does not necessarily reflect the reality.

If you call that deflection...


The ArsTechnica article was about a paper by Morten Oksvold that claimed that 25% of cancer biology papers contain duplicated data.

One nuance is that his approach only focused on one easily identifiable form of fraud: Western blot images that can be shown to be fraudulent because they were copies of images used in different papers. Of all the potential opportunities for fraud, this presumably represents just a small portion.

If there are other nuances you care to mention, I'm all ears.

Instead, you refer to an entirely different article, as if the article I cited has no relevant content, which misleads casual readers of this comment stream. To paraphrase your comment in a less misleading way: "Inside this article you can find a link to an entirely different article whose content does not support the headline of the original article."


Well, one thing you might want to do before doubling down on the Oksvold study is work out the percentage of those papers that were likely to have misused western blot images (it's the bulk of the paper, impossible to miss), and then read the part of the Ars article (again: the bulk of the article) that discusses reasons why different experiments might have identical western blot images (one obvious one being that multiple experiments might get run in the same assay).

Instead, you're repeatedly citing this 25% number as if it was something the paper established. Even the author of the paper disagrees with you.


Double down? Repeatedly? I posted a link to an article with its headline, and only later, when rebutting a comment that implied the article was about something "completely different", did I mention that the article is about the Oksvold study and its finding of duplication in 25% of papers. The paper did in fact establish that number (unless you want to quibble about 24% vs. 25%).

Yes, the ArsTechnica headline is poorly written, and not supported by the content of the article, because not all instances of duplication are fraud, but we can clarify that issue by quoting the article itself: "... the fact that it's closer to one in eight should still be troubling."


devil's advocate - '1 in 4 studies are fake, says "study"'


So just because one person is cheating, it means all academics are cheating?

FWIW, most top-ranked CS conferences have an artifact evaluation track, and it doesn't look good if you submit an experimental paper and don't go through the artifact evaluation process. Things are certainly changing in CS, at least on the experimental side.

It's also possible that theorems are incorrect, but subsequent work that figures this out will comment on it and fix it.

The scientific record is self-correcting, and fraud / bullshit does get caught out.


It's not just "one person", there is wide-spread fraud across many disciplines of academia. The situation, of course, is vastly different across subjects/disciplines, e.g. math and CS are not really much affected and I would agree they're self-correcting.

I might agree they're self-correcting in the (very) long-term, but we're seeing fictitious results fund entire careers. We don't know the damage that having 20+ years of incorrect results being built upon will have... And that's not to speak of those who were overlooked, and left academia, because their opportunities were taken by these cheaters (who knows what cost that has for society).


The very fact that the fraud is discovered, that reporters amplify it, and that it can bring down the president of the university, is evidence to me that the system still works.


Maybe? I'd want to see a clear model of flows and selection biases before I concluded that.

Another way to look at it: perhaps Tessier-Lavigne only got this scrutiny because he was president of the university. And the fact that they didn't guarantee anonymity when "not guaranteeing anonymity in an investigation of this importance is an 'extremely unusual move'" might be a sign that the scrutiny was politically diminished.

So it could be that most of the equally dubious researchers don't get caught because not enough attention is paid to patterns like this except when it's somebody especially prominent. Or it could be that this one was not as well covered up, perhaps because of the sheer number of issues. Or that the cross-institution issues made Stanford more willing to note the wrongdoing. Or that Stanford is less likely to sweep things under the rug because of its prominence. Or just that there was some ongoing tension between the trustees and the president and that this was an opportunity to win a political fight.


These are good points and hard to know. But the Retraction Watch is tracking stories of both mistakes and fraud in published research, across universities:

https://retractionwatch.com/


A tenacious undergrad doing journalism as a hobby is not a system.


The fate of the world lies in the hands of the young and inexperienced.

Grad students, Supreme Court clerks, 19-year-old soldiers.


Sure, by that standard any system 'works', whatever its false negative and false positive rates.


No. This level of scrutiny and diligence is rare, and was selectively applied based on the target's profile. The "field" did nothing about this over 20 years. A computer science freshman did this as a hobby, not as a participant in neuroscience.

Perhaps "nothing" is too harsh. Various people in the field raised concerns on several occasions. But the journals did nothing. The "field" still honoured him. And _Stanford_ did nothing (except enable him and pay him well) until public embarrassment revealed the ugliness.


This is the important and troubling point. Everyone trumpets science as a model of a rational, self-correcting social enterprise. But we see time and time again that it takes non-scientists to blow the whistle and call foul and gin up enough outside attention before something gets done to make the correction. That puts the lie to the notion of self-correction.


This is an issue at the department politics level. For the scientific field, once someone starts retracting papers (and arguably, even before this), everybody knows that you should take person X's papers with a huge grain of salt.

E.g., in math / theory, if someone has a history of making big blunders that invalidate their results, you will be very hesitant to accept results from a new paper they put on arXiv until your community has vetted the result.

So yes, I do trumpet science as a model of a rational, self-correcting social enterprise, at least in CS.

Other sciences like biology and psychology have some way to go.


The thing is that replication is inherently easy in CS. Especially now that people are expected to post code online.

Forcing authors to share raw data and code in all papers would already be a start. I don't know why top impact factor papers don't do this already.


I completely agree. It's a pity that this isn't becoming standard in fields affected by the replication crisis. I would be happy to be corrected if someone has heard / experienced otherwise.


> you will be very hesitant to accept results from a new paper they put on arXiv until your community has vetted the result.

Forgive my ignorance but I thought that was SOP for all papers. Is it not?


Well not really, right? Let's suppose some well known, well respected author that has a history of correct results puts up a new paper. I (and I think most people) will assume that the result is correct. We start to apply more doubt once the claimed result is a solution to a longstanding open problem, or importantly, if the researcher has a spotty track record for correctness (in math/TCS) or falsifying results (in experimental fields).

But really we shouldn't be talking about math errors and falsification in the same category.


The problem is that we don't know what the baseline really is. We know that between a third and a half of results from peer reviewed papers in many domains cannot be replicated. Looking closer, we see what look like irregularities in some of them, but it's harder to say which of them are fraud, which are honest mistakes, and which of them just can't be replicated due to some other factors. But because so many of these studies just don't pay off for one reason or another, I would agree that it is getting really hard to rely on a process which is, if nothing else, supposed to result in reliable and trustworthy information.


Where is that number of 1/3-1/2 coming from? And which fields? I find that very hard to believe (at least if we exclude the obvious fraudulent journals, where no actual research gets published)


I think he's referencing the replication crisis that was a big deal a few years ago. Psychology was hit hard (unsurprisingly), but a few other fields in the biology area were also hit.


It's worst in Psychology and the Social Sciences, but it's not limited to them. Per Wikipedia:

> A 2016 survey by Nature on 1,576 researchers who took a brief online questionnaire on reproducibility found that more than 70% of researchers have tried and failed to reproduce another scientist's experiment results (including 87% of chemists, 77% of biologists, 69% of physicists and engineers, 67% of medical researchers, 64% of earth and environmental scientists, and 62% of all others), and more than half have failed to reproduce their own experiments. But fewer than 20% had been contacted by another researcher unable to reproduce their work. The survey found that fewer than 31% of researchers believe that failure to reproduce results means that the original result is probably wrong, although 52% agree that a significant replication crisis exists. Most researchers said they still trust the published literature

Not sure if the results of that online study have (or can) themselves be reproduced, however. It's turtles all the way down.


Skimmed the wiki on the replication crisis, and people have actually tried to systemically replicate popular studies and found similar results. You could say there has been a successful replication of failure to replicate.


If a field takes two decades to "correct" its mistakes, then there are several things wrong with it. And if we have top positions held by unethical people, who have got away with it, and possibly climbed to the top because of it, then I do not know what to feel or say about this.


It's taken String Theory a few decades to correct itself.


Any human organization?

I don't expect 1-5% fraud in airline pilots, bank tellers, grocery store clerks, judges, structural engineers, restaurant chefs, or even cops (they can be assholes but you don't have to bribe them in functional countries).

I think academics can do better than 1-5% fraudulent.


What? In all of the ones you mentioned there is a known significant amount of fraudulent behaviour.

For store clerks, theft is typically about 1-2% of sales. It has been said for years that the majority of that theft is from employees. Airline pilots have been known to drink during their flights (or leave their seat for other reasons that are not in the rules).

Cops, I mean don't get me started, just the protection of a cop who has done something wrong by the other cops would count as fraudulent, but I don't see many cops going after their own black sheep.

As for judges: in Germany, deals (i.e. the accused pleads guilty to lesser charges so the bigger ones get dropped) are only legal under very limited circumstances (almost never, and they need to be properly documented). Nevertheless, in studies more than 80% of lawyers reported that they had encountered these deals.

I think you seriously underestimate the amount of fraudulent behaviour.


Also, coming back to judges: the behaviour by Thomas and Alito regarding gifts etc. would count as serious misconduct in academia. So there's a significant percentage just there already.


I expect far higher levels of fraud in these professions.


I’ve come to believe that science is mostly about popularity and not about truth-finding. As long as peers like what you write, then you will get through the reviews and get cited. Feynman called this Cargo Cult Science. I think much of science is like this, see also "Why Most Published Research Findings Are False". Not much has changed since the publication of that paper. A few Open Science checks are not gonna solve the fundamental misalignment of incentives.


Wholeheartedly agree, really a shame to see what it’s become. Wish I could still see research the way I dreamed it was as a child.


It is impossible for most scientists to understand or critically think about all the research coming out of so many institutions, so most academics mainly focus on research coming from people or institutions they respect. So yes, it is kind of like a popularity contest. But I would argue that most things in life are like this: due to the limited nature of the human brain, we cannot think independently about everything for ourselves and have to rely on external judgements about what is important, true, etc...


It is absolutely a popularity contest. The biggest problem is that many academics are reluctant to deviate too far from current consensus in fear of damaging their reputation.

The result is that research in many fields tends to stagnate and reinforce old ideas, regardless of whether they are right or wrong.


"It is difficult to get a man to understand something when his salary depends on his not understanding it." -- Upton Sinclair


The peer review system is not designed to catch fraud, it's designed to catch scientific or experimental errors.

Giving up on science is such a vast overgeneralization. You could take your statement and replace "manipulated research", "scientific institutions" and "established scientific truth" with just about any negative article in any domain. You could just as easily make this statement about startups (Theranos, Juicero), or government, or religion, or suburbs, or cities...


> The peer review system is not designed to catch fraud, it's designed to catch scientific or experimental errors.

Yes.

> Giving up on science is such a vast overgeneralization. You could take your statement and replace "manipulated research", "scientific institutions" and "established scientific truth" with just about any negative article in any domain. You could just as easily make this statement about startups (Theranos, Juicero), or government, or religion, or suburbs, or cities...

Institutions go through similar cycles of breaking and systemic reform. Not surprised that you can see patterns in other domains.


It often does neither :( The only real protection from fraud, mistakes and poor science is replication. If results can't be replicated by others, it is not science.


If you implement a strategy such as publish-or-perish, exceedingly smart people will game the system to win. Any metric gets gamed.

Look at the papers that have real impact: they get cited. Look at the ones that don’t …


And not just that, but rewarding outsized effect sizes so that you reward folks who create the biggest lies with fraudulent stats.


And you have some of the smartest brains gaming it too... Such a sad use of good neurons :(


I wouldn’t presume that the smartest brains are gaming the system. Most likely, it’s mediocre hucksters who have bullied and networked their way into a position of authority. Being good at social engineering != being the best researcher.


I've seen some situations where smart people did bad research because of deadlines related to work visas. Science doesn't care how smart you are or whether you could end up without a home. An experiment design will take as many iterations as it takes before being fruitful.


I would. Lots of "the smartest brains" are mediocre hucksters who have bullied and networked their way into a position of authority. This doesn't mean they aren't "the smartest brains".

IME some of the more effective engineers I've worked with have gravitated towards politics, not "raw technical skill". It's not because they prefer it. They use their "smartness" to win.

The problem is that being good at social engineering >> anything else. Intelligent people often look at the system and say: "What's the point in naively following this when no one is successful unless they game the system?"

What's the point of putting one of your great, well-considered ideas into the fold? It's far more effective to be a mediocre huckster. You don't have to deal with the uncertainty, giving your idea to someone else, etc. Better to work the social game and phone in the rest.

It works better and you don't have to deal with the crushing disappointment that goes with fighting for an idea in a horrifying bureaucracy.


Being smarter doesn't make you more moral either.


It may enable you to do sufficiently well without resorting to immoral methods. Interesting how these things can go.


It may, but it doesn't provide the motivation to bother, especially if you only ever get caught at the end of your career.


I've seen this more often go in the other direction, but I think it can be either way.



I can't speak for other fields but in Neuro there's plenty of this but often one learns how to catch it before using it in your own research, even if it never becomes a matter of public scrutiny. Unfortunately, I can't reassure you that bad research gets caught all the time. However, there's usually at least a couple of experts in a given sub field of Neuro that quickly call BS before something goes too far.


This is an excellent point. A lot of crappy research goes on, and nobody pays it any attention (except, occasionally, when cranks outside the field want to prove that "peer-reviewed research proves the Earth is flat").

It's frankly not worth the effort to debunk a shitty piece of research in a low-profile journal that's never been cited in a decade.


> in Neuro there's plenty of this but often one learns how to catch it before using it in your own research, even if it never becomes a matter of public scrutiny.

And what happens when it is caught, it is just quietly ignored by the field, right? How often are there retractions?


Depends on the situation. If no one cites it then it drifts into obscurity quickly. If it was actually cited frequently it leads to an investigation of work by all authors on the paper along with a retraction.


A vast amount of "science" is being done at all times. You can likely count the scandals cognitively available to you on one hand; even if it took dozens of hands, you'd still be talking about an infinitesimal sliver of science on the whole. What's actually happening here is an availability bias: you remember scandals, because they're scandalous and thus memorable. You don't know anything about the overwhelming majority of scientific work that is being done, so you have no way of weighting it against the impression those scandals create in your mind.


Via HN yesterday [1]- an editor of _Anaesthesia_ did a meta study of the papers he handled that conducted RCTs. He had data from 150 of them and concluded:

> ...26% of the papers had problems that were so widespread that the trial was impossible to trust, he judged — either because the authors were incompetent, or because they had faked the data.

This is not a one off.

[1] https://www.nature.com/articles/d41586-023-02299-w


I didn't say it was a one-off. But 150 papers is, to a first approximation, a one-off of all the science done in a given year. We produce millions of journal articles every year.


There's something to be said about a defense of this that doesn't account for random sampling.

Assuming that they did a proper sample of said papers, that implies that for whatever domain they sampled, 26% is likely a decent estimate of actual issues. Increasing the scale doesn't make a proportional estimate any better.
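To put numbers on that (a back-of-the-envelope sketch, assuming the ~150 papers from the article above were a simple random sample, and using the usual normal approximation; the interval math is my addition, not something the study reports):

    import math

    p_hat = 0.26   # proportion flagged as untrustworthy in the sample
    n = 150        # number of sampled papers

    # The standard error of a sample proportion depends on n,
    # not on how many papers exist in total.
    se = math.sqrt(p_hat * (1 - p_hat) / n)

    lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
    print(f"26% +/- {1.96 * se:.1%} -> roughly {lo:.0%} to {hi:.0%}")
    # -> 26% +/- 7.0% -> roughly 19% to 33%

So a sample of 150 already pins the rate down to somewhere around one in five to one in three for that domain; sampling more papers narrows the interval, but the total number of papers in existence doesn't enter into it.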


Maybe we shouldn't. What's the point of all of that data if a good portion of it can't be trusted?


Here we're talking about a proportion significantly less than 1%.


No one is shocked by the concept of misconduct occurring; the issue here is that it is no longer surprising when those committing the misconduct end up running the organization. You can pretend that the conversation is about whether scientific misconduct is endemic, but the conversation being had is about the failure of these hierarchies to actually succeed in promoting the best from among their ranks.

Of course misconduct is unavoidable, that doesn't mean you should become president. The politics aren't working.


Are you commenting on the wrong subthread? I do that all the time. This subthread is about whether the foundations of science itself are stable.


You just did it again, trying to steer the conversation to something that's not at the heart of the discussion. This is the parent:

> It's clear by now this isn't the case of a few bad apples - our scientific institutions are systemically broken in ways that promote spreading fraudulent results as established scientific truth

This is a concern about the corrupted institutions, with the downstream concern that science itself may be under threat. The primary concern is the systemically broken institutions that promote the fraudulent to the top of their hierarchies. Not sure why you insist on strawmanning this thing, but clearly you have some personal reason for doing so, and I wish you luck in that endeavor.


We disagree about what the implications of a single university president surrendering their post are to the whole of science. You're asked not to write comments imputing personal motives:

https://news.ycombinator.com/newsguidelines.html

If you want to argue this further, you should probably snip the swipes off the end of your comment.


Is that the correct conclusion to draw? I mean, there are definitely big problems with how we conduct and fund scientific research (which might also contribute to fraud), but the number of research scandals is a tiny fraction of the amount of research being done.

Considering that we get fraud any time humans, prestige and money are involved, I would really like to see some statistics against other human activities. I suspect science still has some of the lowest fraud rates and the strongest mechanisms to detect and deal with it.


The problem is that a tiny percentage gets any attention whatsoever. It's the same with police abuse and doctor abuse. These things are hugely prevalent, with 30-80% of professionals engaging in some form of abuse... same with fraud.

People know about police abuse. We don't talk about doctor abuse. I'm honestly not confident that there's any police/doctors that don't engage in abusive practices (or it's a tiny percentage of the population).


It's the same everywhere not just science. The fake-it-till-you-make-it type-A charismatic bullshitters rise up the ranks in all organizations.


I feel this trend taking root in academia is still a new-ish thing. The boundaries of academia and research, especially for computer science, really started blending 15-20 years ago as Big Tech took over from Oil as the source of the best-paying jobs and grants.

The decay has been super fast though. Maybe some academics will find the courage to do a longitudinal study of this decay. Now that'll be an interesting paper to read.


It’s most certainly not a new trend, but is perhaps a quintessentially American disease. But one need only look at the so-called “luminaries” in many fields during the mid-20th century to see that this is not in any way a novel phenomenon. Once you get slightly afield from the hard sciences, it’s charlatans all the way down, especially in fields like psychology and economics.


Especially given that the folks committing the fraud are rising to high places. It goes to show that we have systemic problems. This isn't a failure of a few individuals but a failure of our institutions. Clearly our incentive structure is messed up if people like this end up in positions like this. Clearly we need to not only address this individual's actions, but also the systemic issues that let him do what he did and still rise to the position he did.


Scientific institutions aren’t perfect. They’re made up of people like anywhere else. And where there are people there will be politics and gamesmanship. That doesn’t mean science isn’t our best shot at figuring out how the world works.

The fact that a Stanford president can be pushed out for bad research conducted before he was even there? It tells me there’s still some integrity left.


The article from Nature yesterday found that 26% of the peer-reviewed published papers they examined (all RCTs) were untrustworthy based on close examination of their data. Without the underlying data, they could only invalidate 2%.

I personally believe this is an underestimate.


The problem is not with science, it is with your science illiteracy. You never accept scientific research as "true" until it has been verified and repeated by independent sources. It has become the culture to glorify unexpected and "interesting" results in the media and society at large. But you should find these "interesting" and not necessarily believe them to be true.

We do get unbelievable findings such as CRISPR, but beware: these will be very few and far between.


>our scientific institutions are systemically broken in ways that promote spreading fraudulent results as established scientific truth

Scientific consensus is still very reliable and if 95% of accredited scientists in a field say something is true it is in society's best interest to consider that to be the truth.


I truly hope they toss every single paper that ever crossed this asshole's desk, along with the citations to them. This misconduct should literally be treated the same as a dirty detective's cases being reviewed and tossed out because they are no longer trustworthy.


I hope you are forced to live in an authoritarian situation: so you may truly learn what it is like to be punished for the mistakes of others.

The point here is to save the good apples - not throw out the whole barrel for zero gain.


Yep. After years of pushing back against claims that researchers skewed scientific results to fit their agenda, this is a huge, demoralizing blow. Even if it isn’t widespread, how can you honestly blame anyone for being skeptical anymore?


Widespread? PIs are required to publish. It is impossible to maintain the quality of papers via peer review at scale, so bad papers usually get through simply because of the volume. Throw in a profit motive and people get creative about hiding it.

See this recently published article https://www.nature.com/articles/d41586-023-02299-w.

One would think that clinical trials would be documented and scrutinized out the yin-yang but they are not.


But it was caught, demonstrating that what we're constantly assured is true is actually true: science may not be perfect, but it catches all of its mistakes, therefore we should trust it above all(!) other disciplines.


How? Peer-review, re-review, journalism, and reproduction of results are the systems the scientific community is built upon. The system does its job of finding the bad apples, as it did here.

Bad things are gonna happen in every single institution ever created. A better measure is how long those things persist.

Science is about getting closer to "the truth". Sometimes science goes further away from the truth, sometimes it gets closer. Sometimes bad actors get us further away from the truth. It gets reconciled eventually.


A combination of "publish or perish" and papers not accepting "negative results" (which results in a ton of repeated research) has led to this.


You mean we can’t just Trust the Science™?!

I completely agree. Seeing the Slack chats and emails regarding Proximal Origin, and how the researchers were disagreeing with it right up until they published a paper that served who knows what purpose, is really disheartening. Instead of guiding future research toward preventing similar outcomes, countless scientists spent untold years of combined effort on a theory the authors didn’t even believe.


It should be noted that the volume of corruption coming out of state-run schools is much smaller than that from private institutions.


I'd wager it is just better covered up


It strengthens mine. Science is self-correcting in a way that religion and politics never can be; they keep making the same faith-based mistakes over and over, while science continues to progress. Evidence of that is everywhere you look, whereas politics and religion have barely made any progress in hundreds of years.


To the contrary, it increases my trust.

Just the fact that Stanford managed to conduct an independent investigation against its OWN PRESIDENT says very positive things about the University.

After this episode, I might trust Stanford research even a bit more than any University that never caught fraudsters.


Agreed. The system is flawed. And as a result, many scientific "findings" simply can't be trusted. And there is no solution in sight.


"A database of retractions shows that only four in every 10,000 papers are retracted."

Every time a plane crashes it's international news. But just because you regularly hear about plane crashes doesn't mean flying is unsafe.


Do me a favor and look up all the papers in Thinking, Fast and Slow that failed to replicate.


One has to ask what there is left to trust at all?


But they got caught, they retracted: the system works. It's not a perfect system; in a perfect system people wouldn't be incentivized to publish, publish, publish or be damned to the backwaters. The institution is broken, but the safety nets work.


Between Marc Tessier-Lavigne, Operation Varsity Blues and SBF's parents, the scandals involving Stanford keep on coming. It's not sending a good signal when it comes to the overall integrity of the institution.


Believe me, this isn't just Stanford. I give credit more to some kind of internal whistleblower investigative subculture there shining a light on things than it being something qualitatively different about Stanford as an institution.

For every one of these stories you hear, I am aware of others that will never ever see the light of day.


> I am aware of others that will never ever see the light of day.

It seems that you are in a position to change that, but choose not to. Why?


Personal and professional reputation


Don't forget Elizabeth Holmes and Do Kwon!


Don't forget the whole Tirien Steinbach mess at Stanford Law.


People forget VERY quickly. Think of all of the Meta scandals. People will still gladly hire anyone who worked there, it's an amazing brand name to have despite whatever damage to society. You'd have to get to an Enron-level fiasco for people to start looking at you suspiciously.


It's not about forgetting, it's about other people not caring about things that aren't relevant to a job they're hiring for. For example, Meta is known for having a relatively high bar for engineering talent, and that is the signal people are looking for.

Even in the Enron case, I know a bunch of people who were snapped up from Enron after they collapsed. The Enron fraud was concentrated among relatively few people at the company, so it's not like their failure tarnished people who weren't in on the fraud.


I think that is reasonable. The alternative is that all 80,000 Meta employees have their professional reputations tarnished by something that happened a decade ago that they probably had nothing to do with


This isn't true - I've been hiring programmers for many years and Meta/FB are definitely on my blacklist. I consider the damage they've already done far worse than Enron and they're not finished yet.


Fraud isn’t even a negative at a certain point. Whoever is at the top of a prestigious fraud ring still deserves a kind of clout. There are skills and talent involved, not to mention that fraud itself is sadly valuable for most organizations.


The entire field of Alzheimer's Disease research has been a mess for years. None of it has led to really effective clinical treatments. I get the sense that many researchers are feeling pressure to show positive results in order to justify continued funding, and at the margins some make unethical choices.


Well, most of the problems probably come from "Well, this experiment was inconclusive, but surely this mechanism is real and is the cause, so I will just be a little selective in my data so it won't get dropped."

And after six iterations of this it just never pans out. Real replication is whether you can reliably build on top of a previous study.


This is why fraud in science especially when funded by taxpayer money in important fields deserves PRISON.

Damn my father has Alzheimer's, so this really hurts deep.


> fraud in science especially when funded by taxpayer money in important fields deserves PRISON

Yes, but it must meet a strict standard of intent. Jailing scientists based on the content of their work is generally risky. If you create specific areas where prosecution is likely, you're more likely to dissuade research than increase quality.

The present problem appears to be the fraud has a low probability of being caught. Improving that might have better pay-offs than deepening consequences for the minority who get found out.


There's options between jail and nothing, like fines on the institution, bans on receiving grants etc.

As for low probability of being caught, nah, academic fraud gets caught all the time. There's so much out there it's like shooting fish in a barrel. Elizabeth Bik primarily goes after biomedical studies that can be spotted via image analysis and has said:

“Science has a huge problem: 100s (1000s?) of science papers with obvious photoshops that have been reported, but that are all swept under the proverbial rug, with no action or only an author-friendly correction … There are dozens of examples where journals rather accept a clean (better photoshopped?) figure redo than asking the authors for a thorough explanation.”
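For what it's worth, the basic screening step alluded to here isn't exotic. A minimal sketch of the kind of duplicate-figure detection involved (this is not Bik's actual workflow; the function names and the `paths_by_paper` input are hypothetical, and it assumes the Pillow imaging library is installed):

    from PIL import Image  # assumes Pillow is installed

    def dhash(path, size=8):
        """Difference hash: a tiny fingerprint that stays stable under
        re-compression and resizing, so near-identical figures hash to
        (nearly) the same value."""
        img = Image.open(path).convert("L").resize((size + 1, size))
        px = list(img.getdata())
        bits = []
        for row in range(size):
            for col in range(size):
                left = px[row * (size + 1) + col]
                right = px[row * (size + 1) + col + 1]
                bits.append(left > right)
        return sum(1 << i for i, b in enumerate(bits) if b)

    def hamming(a, b):
        return bin(a ^ b).count("1")

    def suspicious_pairs(paths_by_paper, max_distance=4):
        """Yield figure pairs from *different* papers that are suspiciously
        similar. paths_by_paper is a hypothetical {paper_id: [image paths]}."""
        hashes = [(pid, p, dhash(p))
                  for pid, paths in paths_by_paper.items() for p in paths]
        for i in range(len(hashes)):
            for j in range(i + 1, len(hashes)):
                if hashes[i][0] != hashes[j][0] and \
                        hamming(hashes[i][2], hashes[j][2]) <= max_distance:
                    yield hashes[i][1], hashes[j][1]

Real manipulations are often cropped, rotated or contrast-adjusted, so a plain hash like this only catches the laziest copies - which, per the quote above, still appears to be plenty.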

University vice presidents are almost always reluctant to get involved, and is that so surprising when fraud is so widespread that the President of Stanford is caught doing it? All you do by exposing fraud is make enemies. Theo Baker is an undergrad studying CS so has many options outside of academia but if he didn't, would he really have shot the king like this?

Enforce sanctions against the people who are so lazy about fraud they get caught by random volunteers on Twitter, then worry about how to find the rest.


> bans on receiving grants etc

Bans on grants, or the government could claw back the money. In either case, the universities would be incentivized to never recognize fraud ... which is maybe no different actually.


I disagree entirely. Scientists are highly trained professionals.


There is no connection between punitive actions and reduction of the causal behaviour.

Fraud in science is a problem in that the very small minority of actors that conduct themselves unethically has had massive reach with groups that seek to discredit science.

20 years ago, being science illiterate was seen as bad for you. Today, 50% of us believe that being science illiterate is a positive trait in people seeking the presidency.

Jail or not, these people are causing significant public issues, so I’ll have to agree. It’s not going to reduce it, but still, fuck them for the damage they’ve wrought.


> This is why fraud in science especially when funded by taxpayer money in important fields deserves PRISON.

Oof, I couldn't disagree more. As a society we should be moving away from punitive measures and toward systemic reform. I don't think the prospect of prison time is going to deter people from playing the game they feel they have to in order to get research funding.


I couldn't agree more with you! The problem is in either the pressure to remain funded or the pressure to not admit defeat. Or a mixture of the two. We need to work on removing those roadblocks. This could be only the tip of the iceberg and harsher punishment is only going to drive people to get better at obfuscation/manipulation.


it's a known issue and science in general is improving on the reproducibility front.

i think that technological complexity kind of snuck up on many fields and it has taken time for the very competitive research culture to adapt.

releasing data was a complicated proposition back in the 2000s, where today more and more are appreciating the need for it.

i think the real solution to all these problems comes from adjusting the funding model. there should be more money available for those who are willing to do the less exciting work of completely reproducing pivotal results from scratch.


I'm sorry for you and your father, but research fraud didn't cause his Alzheimers. It sucks that there is no treatment or cure but that is the case with a lot of diseases.

I think something like fines equivalent to the amounts of the grants for the institution, and professional censure and a ban from working in grant-funded research for the researcher, would be more effective and appropriate than prison.


But research fraud does delay finding treatment. How many hours of wasted time chasing dead ends.


In the last 25 years, tens of billions almost all spent chasing the amyloid beta hypothesis, with other theories getting the short end of the stick.

It didn't start changing until outside researchers wrote a major editorial about how bad it was in Nature.

The biggest alternative is the infection hypothesis - amyloid beta is left behind by our immune reaction to diseases that manage to get into the brain. If that idea had been pursued for 25 years, we might actually know by now what the real connections are with HSV1 (cold sores), gum disease, and so on.


That presumes there is a treatment, and the proposal to research it would have been funded, and it would have been found to be effective. But yes, research dollars are somewhat of a fixed pie, and there are always people who don't get a slice.


That research fraud cost the public billions and set the field back more than a decade. Sad, huh?

I think being flayed alive would be more appropriate than prison.


[flagged]


> 7 orders of magnitude more effective at crossing the blood brain barrier

In rats. https://www.sciencedaily.com/releases/2012/10/121011090653.h...


Human trials are underway. I’ve been taking it for 2 months. It’s very powerful. If it were my relative I would try everything (lion's mane, fasting, dihexa, etc). Neurodegenerative diseases are horrifying. If you’re waiting for human clinical trials of the peptides that are up and coming, it will be too late for people already starting to decline to benefit.



Academia is rotten to the core.

My wife is doing her PhD and the stories she tells me are absolutely scary. It's about how you can easily fake your entire research, manipulate the numbers and write a borderline fictional thesis, and still earn your PhD. And all that with no oversight; and in many cases, what little oversight there is turns a blind eye because the professors have a lot to gain from having their names mentioned in the papers.


> Tessier-Lavigne defended his reputation but acknowledged that issues with his research, first raised in a Daily investigation last autumn, meant that Stanford requires a president “whose leadership is not hampered by such discussions.”

This speaks volumes about his character. Any organization led by someone with questionable ethics poisons the trust and confidence of the entire organization. So good for him!

Those kinds of ethics questions have a real impact on everyone else in the organization trying to do the right thing - as they bear the reputational harm with much less ability to just choose to go elsewhere.

Knowingly, or even just negligently, putting your colleagues and employees in such a situation is a tragedy of leadership.

It’s important to recognize that this would be a different story had he been ousted and protested or attempted a cover up. People mess up, but sometimes that means you don’t get to be a leader anymore. That’s what’s happening here and it seems like the right way we should treat these things.


> This speaks volumes about his character.

I suppose it reveals that he resigned under pressure that forced him to after allegedly writing numerous falsified papers that led to his current credentials.

Not sure that’s an ethics gold star or anything. I guess it’s better than “they’ll take me out of my office in a casket” and digging in, but it still shows massive ethical failures, and since ethics are usually not compartmentalized, it means there’s probably other bad news that’s not revealed yet.

I think the right way would have been not to falsify research in the first place. Or to come clean on your own and resign before it’s a stink.

As it is now, it’s bad for Stanford. And it means the hiring committee didn’t do sufficient due diligence to even ask people in his field if his work was valid.


From the article:

> The report concluded that the fudging of results under Tessier-Lavigne’s purview “spanned labs at three separate institutions.” It identified a culture where Tessier-Lavigne “tended to reward the ‘winners’ (that is, postdocs who could generate favorable results) and marginalize or diminish the ‘losers’ (that is, postdocs who were unable or struggled to generate such data).”

> There was no evidence that Tessier-Lavigne himself manipulated data in the papers reviewed, the report concluded, nor that he knew about manipulation at the time, but he “has not been able to provide an adequate explanation” for why he did not correct the scientific record when presented the opportunity on multiple occasions. In his statement, Tessier-Lavigne wrote that he was “gratified that the Panel concluded I did not engage in any fraud or falsification of scientific data.”

In other words, he created a culture in which fraud could be expected, and he didn't address it properly when it was brought to his attention. I'm not giving him any gold stars for ethics. But as far as I know, he didn't personally falsify research.


How is creating a culture that incentivizes and rewards fraud any better?

I guess you can technically claim plausible deniability but I don’t think he gets a pass here at all.


> How is creating a culture that incentivized and rewards fraud any better?

You're asking why accidentally creating ripe conditions for fraud isn't as bad as willfully committing fraud?


The word "accidentally" is doing a lot of work here.


Precisely, that of the benefit of doubt.


So, the doubt "speaks volumes about his character"? (quoting the original comment)

I don't think it speaks of anything at all, in the best case.


I don't think it is any better. But let's describe what (we know) he did correctly.


It shows ability to scale as a leader and be a force multiplier.


Obviously, we wait for the investigation to complete. But I don’t know of any valid reason someone would “not be able to provide an adequate explanation” for something so basic.

Occam’s razor would make me think that if no one has ever seen the data and it made him famous, it’s probably fraud.


The investigation has already found there was fraud. And from his failure to issue retractions for years after he was told, one can conclude he was at best indifferent to it.

What it hasn't found is evidence he got his own hands dirty or that he explicitly ordered fraud, and I'm not sure it will. raincom's comment [1] was dead on.

I'm unsure if this distinction holds any significance for him personally. One reason I call it out is that it's worth considering how to deter this kind of "leadership" when drafting scientific ethical standards or even laws.

[1] https://news.ycombinator.com/item?id=36792536


That’s the same culture C-level executives inculcate: (a) don’t put anything in writing, (b) don’t explicitly ask for unethical things, (c) use layers of lawyers or of other executives to evade culpability, (d) since voice calls can be recorded, hire yes-men from one’s own network who use code language or para-linguistic cues to execute illegal and/or unethical stuff.


I generally agree, but:

> And means the hiring committee didn’t do sufficient due diligence to even ask people on his field if his work was valid.

Detecting fraudulent results isn't always easy. If there was already a public controversy about his papers and they didn't pick up on that, that was a failing. In the worst case, faked results can only be detected by running a study over again and getting a different result -- and even then, it's hard to rule out some difference that wasn't properly controlled for, an honest mistake somewhere, or just bad luck.

The standard of truth in science isn't peer review, it's having results that are consistently reproducible.


It’s certainly not easy, that’s why I said they failed at due diligence. I assume they tried.

This is the president of a $28B endowment and $720M annual revenue. They have the resources to validate candidates like any other massive organization.

This doesn’t seem like an honest mistake or bad luck.


I think it's more valuable to praise good ethical decisions than it is to deride bad ones, and we have more than enough bad ones bombarding us constantly.

>Or to come clean on your own and resign before it’s a stink.

I feel like this is what is happening, so we must be thinking on different timelines.

> As it is now, it’s bad for Stanford. And it means the hiring committee didn’t do sufficient due diligence to even ask people in his field if his work was valid.

To be clear, there's no net-positive here, it's still net-negative and I agree that everyone is worse overall. The thing I'm pointing to though is that this was the best outcome from an already bad situation and is done in a way that is transparent and is actually addressing harms and preventing further harm by changing the power structures.


> I think it's more valuable to praise good ethical decisions than it is to deride bad ones,

I agree. But I don’t think this is a good ethical decision. It’s not a decision at all. He was fired and Stanford PR made a statement for him. I don’t think that’s praiseworthy.

Or I suppose we could praise him for not murdering people. And lots of other things that are extremely common and I don’t think noteworthy.

I think it’s also worth reflecting on terrible decisions and people who make them and use mistakes of others to learn.


You know what would speak even more highly of his character? Not publishing fraudulent research in the first place!


This exactly. Quitting after you've been caught doesn't require much character.


> Stanford requires a president “whose leadership is not hampered by such discussions.”

You'll notice he doesn't say "I regret my actions and the harm I caused". Just a vague allusion to "such discussions". As if the people discussing him falsifying research are somehow the problem. When someone goes down for a scandal, they rarely express true remorse or take responsibility. It's always "I'm sorry... that I got caught".

His response speaks to his character, but I'm not sure it says what you think it does.


Yes, this speaks volumes about his character:

> Tessier-Lavigne defended his reputation ... "issues with his research" ... "discussions"

Meaning he believes he did nothing wrong and he's the target of "character assassination" by the Daily, but he's being forced out by colleagues who want Stanford to come out of this scandal with half a shred of dignity. This is not him saying "aww, you got me, aight I'm out". This is him continuing to be a narcissistic, shitty human being refusing to admit that he could possibly be in the wrong in any way.


If he knows he is at fault he should admit it as well as resign. Conversely, people should not be ousted based on false allegations.


> Stanford is greater than any one of us. It needs a president whose leadership is not hampered by such discussions. I therefore concluded that I should step down before the start of classes. This decision is rooted in my respect for the University and its community and my unwavering commitment to doing what I believe is in the best interests of Stanford. https://tessier-lavigne-lab.stanford.edu/news/message-stanfo....

I agree, good for him to humbly resign and not drag the reputation of the bigger institution into questionable territory. The news came as a shock this morning, but it's a well-written letter he publicly posted.


What? This is just standard resignation PR statement about not wanting to be a distraction for the rest of the organization while not admitting culpability. It's basically a form-letter.


> It’s important to recognize that this would be a different story had he been ousted and protested or attempted a cover up.

I think it's very early to give a positive judgment for MTL's decision to resign. First of all, the most serious allegation by the Stanford Daily — that MTL knew of fraud at his Genentech lab and made an effort to cover it up — is still denied by MTL, Stanford, and Genentech.

Either MTL & Genentech are lying, or the Daily made a mistake in its reporting. As of now, the Daily still stands by its reporting and Theo Baker published a follow-up about how Stanford's investigation may have been fundamentally flawed [0]

But let's assume that the Daily is wrong, MTL's resignation statement omitted any accountability for the lesser but still damning charge: negligent leadership that fostered a research culture optimized (inadvertently or not) for fraud. As the Daily puts it:

> The report concluded that the fudging of results under Tessier-Lavigne’s purview “spanned labs at three separate institutions.” It identified a culture where Tessier-Lavigne “tended to reward the ‘winners’ (that is, postdocs who could generate favorable results) and marginalize or diminish the ‘losers’

In MTL's letter to Stanford [1], he limits his culpability to not being diligent enough in correcting the errors and alleged fraud, all of which were the fault of his subordinates:

> I agree that in some instances I should have been more diligent when seeking corrections, and I regret that I was not. The Panel’s review also identified instances of manipulation of research data by others in my lab. Although I was unaware of these issues, I want to be clear that I take responsibility for the work of my lab members.

> ...These findings have also caused me to further reassess the processes and controls I have in place. While I continually maintain a critical eye on all the science in my lab, I have also always operated my lab on trust – trust in my students and postdocs, and trust that the data they were presenting to me was real and accurate.

I get that there's a limit to how much self-flagellation he can do right now, but he's basically limited his culpability to: "my biggest weakness is that I respect and trust my colleagues and students too much!" In this era of Stanford-associated stories like Theranos and FTX, the prospect and danger of a leader who fosters fraudulent "winners" is something that should be addressed explicitly.

[0] https://stanforddaily.com/2023/07/19/sources-refused-to-part...

[1] https://tessier-lavigne-lab.stanford.edu/news/message-stanfo...


[flagged]


He's no John L. Hennessy that's for sure.


Given the rampant nepotism in Stanford admissions, the whole school needs to shut down.


When you say nepotism, do you mean legacy admissions, or something else?

I know a prominent alum who was able to get her kids into undergrad at Stanford, but not into their professional schools. Considering where they went after being rejected from Stanford, they were not remotely in the ballpark. It's good to know that there are limits to legacy admissions, even for rich/famous alums.


Sounds like she didn't pay someone enough.


I think he's hinting at the John Vandemoer, Lori Loughlin and Rick Singer debacle. Sad bit of history for the university as well.


I feel like with the skyrocketing costs of education in the US, fixing the problem is probably a better path than "burn it all down"


For every false paper, I wonder how many other researchers waste time pursuing research on false premises. And worse, how much do fake papers influence broader society, through policy and individual decision making.

I wonder if any economists have tried to measure the $ cost of fake research.


Papers don't establish ground truths in science; they start a conversation. When bad papers are published due to misconduct, there's a cost --- a pointless conversation occurs. But it doesn't shake the foundations of science. People that actually do science understand the implications of a published paper.


> Papers don't establish ground truths in science

> it doesn't shake the foundations of science

the comment doesn't say any of that, just that resources are wasted participating to these "conversations"


Papers are often rejected due to lack of novelty compared to previous work, especially if they contradict it. How many papers were similarly rejected because this fraudulent work had already been published?


It's definitely a source of wasted effort (I think almost every postgrad student has a story of trying and failing to get something described in a paper to work), but I would say that fraud is only a small fraction of failure to replicate cases. It's also quite common that there is some other factor making it difficult to get something working in your lab that worked somewhere else (which to be clear is still a problem, but a different one).


The cost of pursuing dead ends in science is NECESSARY and much greater than the cost of pursuing fraudulent results, though the latter is unnecessary. Probably 70%+ of time is wasted on paths that lead to nowhere, and that is part of the process. I would guess less than 1% of time is wasted because of manipulation. The process finds fraudsters (bad) well enough, and the process of being wrong (good) will always be central to science, even though it's so expensive.


Don't forget the people who spend resources just replicating studies.


And doping. Now you think: what? But it is hard to compete with people doing drugs to be able to crunch 16 hours a day, seven days a week, to get a result.


How does academia deal with the fallout 'downstream' from such retractions? Does this automatically invalidate each and every paper that cited this one as a source? If not, why not? Because if that were the consequence, I think a lot of people would be far, far more cautious about what they cite and whether or not it has been reproduced.


Not many things are automatic in research paper publishing… It's all good old PDFs in which even the publication date doesn't always appear. You have to look up the title and find out in which journal / conference it's been published, and then you get the date.

Anyway, it would not be fair to automatically invalidate papers citing retracted ones. Including:

- reproduction attempts

- some minor citation in related works

- it's usually not obvious that a paper is manipulated or even wrong without any bad faith involved

Now, I wish we could update papers with disclaimers and notes, but again, we are dealing with good old PDFs that are never going to be updated…

Good luck even noticing that a paper was retracted.

I wish we had better formats and publication processes.


I'd just like to expand a bit on the minor citation: you often cite the seminal papers in a field simply to give context to the work you're doing and, in some fields, to show competing or past equations as contrast to your method of doing something.

None of those require the paper you're citing to be free from fraud: seminal works with fraudulent or inaccurate results may still have preparation methods that are applicable to novel, non-fraudulent, experimental techniques and equations are rarely at issue with data manipulation or other fraud.

There's still space to be aware of retracted papers in citations, as building directly on those results and managing to show an improvement on doctored data would be suspicious, but I'm pretty sure that is the minority of citations in many fields.


You could deal with that in a reader or in a service where you upload a draft of your paper which then spits out a list of potentially problematic citations, even if those are more than one step removed from your first citation.
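
A minimal sketch of what the one-step version of that service could look like (everything here is hypothetical: it assumes you already have a list of retracted DOIs, e.g. exported from a retraction database, and a plain-text list of the DOIs your draft cites; the file names are made up for illustration):

```
# Hypothetical sketch of the "upload a draft, get problematic citations" idea,
# reduced to the one-step case: flag any cited DOI that appears in a local
# list of retracted DOIs.
import csv

def load_retracted_dois(path):
    """Read a one-column CSV of retracted DOIs into a set (case-insensitive)."""
    with open(path, newline="") as f:
        return {row[0].strip().lower() for row in csv.reader(f) if row}

def flag_citations(cited_dois_path, retracted):
    """Return the subset of the draft's cited DOIs that are retracted."""
    with open(cited_dois_path) as f:
        cited = [line.strip().lower() for line in f if line.strip()]
    return [doi for doi in cited if doi in retracted]

if __name__ == "__main__":
    retracted = load_retracted_dois("retracted_dois.csv")    # hypothetical file
    for doi in flag_citations("draft_citations.txt", retracted):
        print(f"WARNING: cited paper {doi} has been retracted")
```

Catching citations more than one step removed would additionally require a citation graph to walk outward from the retracted papers.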


Sounds like a useful tool!


> and whether or not it has been reproduced.

Nothing would ever get done if this was the bar for citation. Real advancements take place on the back of fraudulent results in widely cited papers. Actually, even within papers that have fraudulent results, advancements happen.

The motivation for such fraud is that work generally won't get published without positive results, even if the work was excellent. The fraud doesn't take the place of hard/great work. I'm not defending the practice.


I've been thinking about this for a while, what if we had a web of fraud explorer where you can follow the citations of fraudulent papers. And this just might align incentives more in a direction of caution.
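
A toy sketch of what such an explorer could do (all data and names here are hypothetical): start from a set of retracted papers and walk the "cited by" edges outward, recording how many citation hops away each downstream paper sits.

```
# Hypothetical "web of fraud" explorer: given a citation graph
# (paper -> papers that cite it) and a set of retracted papers, walk outward
# to find everything built directly or indirectly on them.
from collections import deque

def downstream_of(retracted, cited_by):
    """Breadth-first walk from retracted papers; returns paper -> citation distance."""
    dist = {}
    queue = deque((p, 0) for p in retracted)
    while queue:
        paper, d = queue.popleft()
        for citer in cited_by.get(paper, []):
            if citer not in dist:          # visit each downstream paper once
                dist[citer] = d + 1
                queue.append((citer, d + 1))
    return dist

if __name__ == "__main__":
    # Toy graph: each key is cited by the papers in its list (all identifiers invented).
    cited_by = {
        "cell-1999": ["science-2001a", "nature-2004"],
        "science-2001a": ["followup-2006"],
        "nature-2004": ["followup-2006", "review-2010"],
    }
    for paper, hops in sorted(downstream_of({"cell-1999"}, cited_by).items()):
        print(f"{paper}: {hops} citation hop(s) from a retracted paper")
```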


This is modern science. This is our tax dollars. And this is at the highest levels.


So, what is the solution? Fire all scientists? Close down all university research labs?

And then where will that "saved" money go? A high-speed rail? I don't think so. A tax cut for private jet owners? Probably: https://www.politifact.com/factchecks/2022/feb/18/melanie-da...

With zero spending, there will be zero waste and corruption. I will create zero bugs if I write zero code.

Just like VCs expect a tremendous amount of waste when going for wealth appreciation, so should countries expect waste when investing in science, research, and innovation. There will be waste, there will be fraud, but the options are either play the game or be left in the Middle Ages.

In the same fashion, let's have innocent children go hungry because some adults abuse social programs. Let's stop all military R&D because some contractors are overcharging.


There’s plenty of opportunities to reform the scientific organization and science in general.

1. Get rid of scientific journals and replace them with databases of scientific results and raw data. A paper may explain the result, but looking at the database must be enough. Assign a credibility score based on independent verifications/confirmation of trustworthiness by other leading experts in the field. Negative results and verifications of known facts should have equal significance there. Theories must be peer-reviewed first. In some scientific fields this may significantly change how research is done, probably for the better (it's fun to read papers in certain fields where authors disagree with another scientist because of some gut feeling).

2. Get rid of degrees and titles - they do not age well without continuous learning and participation in research. Bachelor's and doctoral degrees should have an expiration date. A person's credibility must be based on exams, certifications and scientific results accepted by others, and as such always has a certain age. A scientist who became an expert by verifying a lot of others' work may be a more credible expert than someone who made one new discovery. The weight that this scientist puts behind each verified result must boost its credibility significantly, but if the reputation is damaged it must cascade to everything downstream.

3. The management career track must be separated from the professional track: the head of a lab need not be the best expert and should not be the first name on a published result. Leadership skills, an ethical code and the ability to assemble a great team must be more important. The choice of the direction of research must be a team decision.


Reward boring and interesting results equally? Finance more independent attempts to reproduce interesting results?


Stop trying to "business-fy" research. The demand and drive to make research more efficient and business-like hurts the point of academic research. It's similar to the theory in business itself that the singular goal of public companies is to "increase shareholder value". Academic success can't be linked to just the number of papers produced or cited.

Western culture needs to go back to embracing plain old hard work, and to accepting that business, research, etc. all require difficult work and reflection at the top levels to function best.


I'm completely fine with academic success not being linked to any sort of outcome or result or increase in value to society.

I just don't want them use my tax dollars to do it.


You can see it in another way: science is directly improving your life. Your tax money is not a "gift" that you make, or even a "salary" that you pay, it's you buying the right to profit from it.

Why should the result of the scientists' work be given to you for free, when you have contributed nothing as important in exchange? I don't understand why some people think tax money is some kind of favor that they are doing: are they so full of themselves to think they can profit from modern life for free like a parasite?

The choice is there and was always there: you don't like paying taxes, you can always go live on your own somewhere in the wild. But as soon as you profit from the modern life that is 100% built upon the work of the scientists, you have to pay them to live here.


As someone who worked on publicly funded grants, I find this attitude repulsive.

Everyone who receives money from public coffers owes that public a service mindset. The arrogant self-importance in academic science is inexcusable and reason IN ITSELF for de-funding the entire enterprise.

> are they so full of themselves to think they can profit from modern life for free like a parasite?

How ironic.


You are reaching conclusions that are not implied by what I've said.

You are somehow inventing that being grateful to someone means that the other person is not being grateful to you.

Nothing in my text implies anything about how the person who receives the money should act. I do not think that scientists are one bit more important than other members of society, and I think they should act with the utmost care and respect for the trust society has given them. Why would I think differently than that? I'm criticizing people who think everything is owed to them. These people can be tax-payers who don't even think for one second that their taxes are not some kind of generous gift, AND these people can also be scientists who think society owes them money automatically for their work without any conditions.

What a strange view of the world, where everything is a competition in which, if someone is not considering themselves the "owner", it means the other person is. What if no one acts like the other owes them something, and everyone recognizes that we are in a win-win situation?

It is really telling about our society that when someone reads my text, they are so shaped by their competitive "money = power" vision of the world that they are totally oblivious to the fact that A being grateful to B does not imply that B is arrogant and full of self-importance.


> You can see it in another way: science is directly improving your life.

You can argue about spending mentality though. Some institutions consciously ignore important signals from the research community to keep departments going. And in the field of IT especially, there's a trend to push novelty in publications alone, producing results that aren't even interesting outside the research community.

I'm happy paying taxes to fund research, but I want that research to be accessible all the way through - that includes access to collected data. Which isn't the status quo.


That's self defeating though and precisely my point. The mindset that tax dollars should only be used to fund "valuable" research degrades the actual value of the research for society.

Academic research should be funded because it's an important aspect of the human experience. The fact that it also leads to material benefits should be a knock on effect that's encouraged but not the core goal.

It's a perfect example of Goodhart's law: "When a measure becomes a target, it ceases to be a good measure". https://en.wikipedia.org/wiki/Goodhart%27s_law

IMHO, accountability is great. It just requires difficult work at the top leadership to do so well. Just boiling it down to a single number like "economic value of research" doesn't work.


https://news.wisc.edu/decades-on-bacteriums-discovery-feted-...

Taq polymerase is about my favorite thing discovered through basic research. Just some scientists looking at interesting stuff in a Yellowstone geyser, who happened to discover the molecule that enabled DNA replication in a lab and essentially exploded the field.

No corporation these days is going to pay somebody to do that. I think it's worth the risks to get something like that.


Polymerase was already known (isolated, purified, and characterized in '56); the interesting part of Taq was its thermostability, which permits thermal cycling to implement PCR (and other neat techniques).


You can simply assume that your tax dollars do not fund science directly, it’s all other’s money and other people are fine with that. The budget pie is big and your contribution to it is going elsewhere, e.g. funding military or subsidizing some big corporations which in turn fund some private science.


It is because every dollar I spend is inflated by the government debt.

https://www.worldometers.info/us-debt-clock/


I don’t think there exists a conspiracy to dilute your contribution to the US budget with more debt (if it did exist, it would be a very VC-style conspiracy, so what's not to like?).


I don't think you understand how the debt works.


That's a lot of straw men you put up there to knock down.

Maybe have a whistleblower hotline for academic fraud. I'll bet some of the grad students knew what was going on.

That one guy's selfishness tarnished the school's reputation for probably a generation.


Sometimes they are the ones doing it if their visa is on the line.


They don't want the whistle blown.


There is definitely room for reform in the "business" of academia, i.e. in how research is published, checked, verified and funded, and how universities interact with it. There is a clear problem in how things are incentivized, and it is encouraging misconduct.


Sure, but to what point?

"We spent 1 million dollars on oversight, and the 100 thousand dollar project is now completely free of fraud"


Nope. I think this is better: We spent zero citizens' tax dollars on science grants and all scientists had to get venture capitalist funding like the rest of the world.


99% of modern scientists would not make it out of series A funding from VCs yet the government keeps throwing millions at them because they keep publishing papers even if the papers provide no value to society. And the people in charge of giving the funding went to the same elite universities as most of the people that are getting the funding.

There's zero accountability.

The modern science industry is a massive scam. It's literally theft. And it hasn't produced much applicable to the actual world in decades.

The solution is to eliminate white collar welfare and make the scientists get their own funding like an entrepreneur or an artist or literally any other field.


That will not advance science much because of completely wrong incentives. Scientific knowledge does not always have to be monetized and often is impossible to monetize, yet it is extremely valuable. Just a few examples:

1. Verification of prior research that produced negative results (e.g. proved some hypothesis wrong). VCs may want to take the risk and fund the original research, but what’s the risk model in verifying the negative results?

2. Theoretical research that will yield practical results only in 50+ years. No VC would wait that long (what share of fusion research was funded by VCs in the last 50 years?)

3. Research that undermines capitalist model, e.g. by demonstrating the necessity to increase taxes or altering redistribution to reduce inequality. The society will clearly benefit from it, but what could a VC gain from that?


Science is not advancing much now!

What has it produced in the past decade? The past 30 years?

I've been on this earth for many decades and (other than the internet, developed by the military)...the TRILLIONS of tax dollars that have gone into science have yielded little to no application to my life.

I completely understand your idealistic version of blue sky science needing disinterested non-results-based funding.... but that just turns to corruption and using our tax dollars wastefully with no results with the perpetual excuse of: "it's blue sky research I don't have to prove anything to you just give me more money"

and I've worked in labs and universities and I can 100% tell you scientific corruption with tax dollars is more the rule than the exception.


If you missed the progress of the last 30 years, it shows only how uninterested you are in this topic.

A LOT has happened in practically every field. Several major mathematical problems were solved, there was big progress in theoretical and applied physics, astronomy, biology, medicine, etc. All modern electronics, electric cars, and medical treatments are based on recent research. AI, solar energy, green tech… shall I continue, or will you just subscribe to phys.org?

The problems with corruption are a direct consequence of applying a capitalist model with the wrong incentives. It is pretty dumb for modern scientists to value published papers over verified results and to pursue medieval titles.


> modern electronics, electric cars, medical treatments are based on recent research. AI, solar energy, green tech

Every single thing you mentioned is a result of private industry.

I've read phys.org... cold fusion has been righttttt around the corner for a century now according to them. So have all of the promising miraculous cancer cures that never materialize.

Even if some research comes from academia, I bet you that private industry would make the same breakthroughs and for far far far less money.


So you are just demonstrating your ignorance. All of these are based on years of government funded research, private funding really only got involved once things looked promising.


electric cars: invented by private industry, refined by private industry

lithium ion batteries: invented at Exxon and Asahi Kasei Corp

AI: refined at IBM culminating in deep blue, refined by Google with BERT, recently refined by Open AI

solar panel: invented by bell labs private industry

modern electronics, medical technology, and green tech??

all so vague, but probably all invented by private industry.

It's actually the reverse with you showing your ignorance.


Solar panels were invented even before Alexander Bell was born, by Edmond Becquerel. His work was funded by France.

Lithium ion batteries of Whittingham were based on decades of research in academia, including his own work at Stanford and the work of other scientists in many other institutions. See https://en.wikipedia.org/wiki/History_of_the_lithium-ion_bat... for how many research institutions are mentioned in the article.

Profit-oriented research rarely produces interesting fundamental results.


Electricity, radio waves, flight, antibiotics.. it seems to me that private industry and individuals produce the MOST interesting fundamental results.

It might be good to check your confidence in your knowledge.

> His work was funded by France.

Citation needed. Everything I've read says he was a private citizen experimenting in his father's laboratory, not part of any academic institution or receiving any government funding.


Electricity - not a single result, but a big topic, to which contributed Faraday, Galvani, Volta, Thomson among others. You can check where they worked. Ben Franklin funded research himself, but his contribution to modern theory of electricity is not the biggest.

Radio waves - Maxwell, of course. Radio receiver/transmitter invented both by Popov and Marconi independently of each other, so it’s 50/50 academic vs commercial research.

Flight - impossible to attribute to a single person or team of inventors. There are balloons, gliders, airplanes, space flight etc. Wright brothers built a machine, but there was prior research in academia. Theory of flight was developed with heavy influence of military (i.e. state funding) and I can name a number of state-funded institutions in different countries which advanced the science significantly. Commercialization did help, but not as the only driver and probably not the main one.

Antibiotics - what fundamental result do you mean exactly? It’s a broad term with a rich history, a lot of initial research done in non-commercial institutions.

I will stop here. I have an impression that your views have something to do with your personal grievances. If you want to construct some unorthodox theory of how science works, good luck with that.


So much rationalization. Anytime someone has to rationalize to this great of an extent... I know they have an agenda.

It's extremely simple.

The wright brothers discovered/ invented flight.

Ben Franklin discovered/ invented electricity.

Marconi: radio waves

Flemming: antibiotics

This is settled history agreed upon worldwide, this is not controversial or complicated.

why do you have to rationalize so much!?


His father was a professor at the French National Museum of Natural History, that's presumably where the lab was.


What has been produced in the past 30 years? Lots of stuff. For example, the LIGO project produced sufficient evidence to conclude that gravitational waves exist and that we can measure them, as well as identifying sources of such waves. No real application is anticipated - but it's damned nice to know that the prediction was borne out by reality.

CRISPR- an extraordinary tool for genetic manipulation. Could potentially have a huge impact in medical treatments; has already revolutionized experimental research.

AlphaFold demonstrated that protein structure prediction with experimental-level accuracy is possible. This could also have huge implications in medical research and treatments.

Probably, we don't see advancements occurring in real time because we learn about things that happened over decades or centuries, but collapse them to short intervals.


> What is it produced in the past decade?? Past 30 years??

mRNA vaccines?


That's an entirely different discussion as to whether those are legitimate or not.


And now you have completely disqualified yourself.


just to people who thought masks were a good idea, but that when the waiter brings the breadsticks it's somehow okay to remove the mask to eat in a crowded public restaurant for some reason


Aaaand there we have it.


The optimal amount of scientific misconduct is not zero.


so based. But I do have to say that when the president of one of your top institutions resigns over misconduct, the level of misconduct is probably a fair amount over optimal. And speaking with my friends in, e.g., Alzheimer's research, the fraud and the inability to trust the veracity of unreplicated results do really slow down work in the field.


Care to elaborate on why you think it's optimal to have a non-zero amount of fake data supporting scientific claims?


Because the only way to get zero misconduct is to drastically reduce the amount of science that is done, probably by orders of magnitude. This is an old saw when talking about government waste: the optimal amount of waste isn't zero, because there are diminishing returns to pursuing waste, and at some point the losses you avoid by eliminating waste are swamped by the costs of eliminating it.

It doesn't follow that waste and misconduct are good, only that when we talk about policy responses to scandals, we should consider the costs involved in avoiding those scandals, and whether it's rational to pay those costs. Sometimes it is, sometimes it isn't.


I am somewhat mind-blown by that government waste adage. While I suppose that in some technical sense it could be true, I would like to have a blunt conversation with anyone who believes government waste is anywhere near parity with the costs of trying to eliminate government waste


The logic of the statement, which is pretty hard to dispute, doesn't establish that the current amount of waste in any given program is or isn't optimal, only that the optimal level isn't zero.


No one is saying the current level of government waste is optimal and not worth the cost to eliminate. Just that if you have eliminated 99%, the extra 1% may not be worth it.


I'd like to cross that bridge when we get there :)


Completely agreed. The best solution to eliminate government waste is to eliminate the part of the government creating the waste.


The usual framing is that the optimal amount of X is the point where preventing one more unit of X costs more than the X it would prevent.

So, if it costs millions of dollars to pursue fraud, you would still be better off letting thousands of dollars of fraud slide.
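
To make that framing concrete (a toy model of my own, not from the parent comment): let e be spending on fraud prevention and F(e) the fraud losses that remain at that spending level, with diminishing returns (F' < 0 and F' rising toward 0 as e grows). Then the total cost and its optimum are:

```
T(e) = e + F(e), \qquad T'(e^{*}) = 0 \;\Longleftrightarrow\; F'(e^{*}) = -1
```

You keep spending only while a marginal dollar of prevention removes more than a dollar of fraud; since the marginal return eventually falls below that, the cost-minimizing point leaves F(e*) > 0, i.e. some fraud remains at the optimum.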


The optimal amount of my tax dollars going into government funded science is zero.

Scientists need to earn their money like the rest of the world.


I'm sure there's a country you could immigrate to where the political consensus is that no basic research of any sort should happen, but the country we're talking about has the opposite consensus. Put differently: is there an argument you could make here that would be persuasive to someone other than an ancap?


I don't think that America has a consensus on this at all. Numerous people I talk to think science should be privatized.

And if more people were aware of the corruption that goes on in science the pendulum would quickly swing.


What Tobias said about the NeverNudes applies.


Reality has a way of asserting itself.

https://www.usdebtclock.org/


Are you implying that private agencies have less corruption than public ones? It's fascinating how jarring that is with my life experience.

Before you answer, remember, Stanford is a private institution.


Sure. Then please refrain from using anything that was invented/developed from government funding. Including the internet.


That happened more than 30 years ago, like I said in my original statement.

If I refrain from using anything invented and developed from government funding in the last 30 years, what would I have to give up?


Every medication developed in the last 30 years, for starters. I mean, you're basically asking for a list of every invention derived from government-funded basic research (also known as "all basic research"). It might make more sense to ask what you could use.


I find it telling you can't name one.


I find it telling that you think it's challenging to name an invention that depends on any of the last 30 years of basic research, so we're at an impasse.


You're the one you said to stop using technology invented by the government.

Then I said I would be happy to if you could name a single one in the past 30 years.

And you can't.


Off the top of my head...

- Soft-white LEDs

- Data-over-headphone-jack

- ZK-SNARKs

- Tor

- Energy-harvesting power monitors

- TCP congestion control

These all had government funding or were invented in government labs.


What kind of weak cop out is this? It's the president of the school, not some miscreant child. Pathetic.


Scientific misconduct is not a particularly "modern" phenomenon: https://en.wikipedia.org/wiki/Piltdown_Man


This is a private university.


Your statement means nothing. Look up how much money Stanford receives through NSF, NIH, DoD, etc.


The webpages for most of the graduate studies/research departments seem to indicate that some level of public funding is expected/necessary for at least some students and researchers at Stanford. An example[1]:

The department has limited funding available for MS and PhD students, which is awarded at the time of admissions by the program coordinator. Prospective students are encouraged to seek funding from external sources such as the NSF GFRP or AHRQ Dissertation Awards, and/or for Stanford-based funding such as fellowships available through the VPGE Office.

[1] https://med.stanford.edu/epidemiology-dept/education/graduat...


Which likely gets most of its research funds from government grants.

Concretely - over 70% come from the federal government:

https://facts.stanford.edu/research/


Thousands of them at any given time, of which we've had news reporting on single digit numbers from years ago. You should now update your priors.


Not spending any public funds?


Private university doesn't mean that public grant money isn't used in research grants.


And the alleged misconduct happened before he joined the university.


And? It's still an epic failure


Many of the research grants are funded from public tax dollars


> The report, at 95 pages in length, contained a number of unflattering details about Tessier-Lavigne’s lab, including the conclusion that at least four papers with Tessier-Lavigne as principal author contained significant manipulation of research data

I am surprised that Marc didn't retract a fourth paper, based on: https://stanforddaily.com/2023/07/19/sources-refused-to-part...


Still a member of the national academy of sciences.

https://www.nasonline.org/member-directory/members/20010006....


The whole field of psychology shifts uncomfortably in its chair. Replication crisis.

My prediction is a well rehearsed closing of ranks and naked abuse of anyone questioning integrity.

Not all psych research will be unreliable, just as not all professional cyclists take drugs (maybe), given the apt Tour de France/Armstrong analogy made by rossdavidh here.


Breaking the rat race of academic publications is long overdue.

In some domains, the call to break this vicious cycle is already happening. E.g. ACM SIGCOMM is one of the most prestigious and exclusive conferences in Networking and Distributed Systems research. Some of the most prominent researchers in that domain are driving a pledge to fundamentally rethink what the conference accepts and what gets presented: https://sigcomm.quest/proposal.html.


And it's a bad proposal. To wit: "Concretely, after a paper has been thoroughly discussed, any paper that still has at least one advocate for acceptance should normally be accepted."

This is a terrible idea. People will have friends who get their papers in, in return for the favor of the same.

Agree with your general point though. No easy answers though.


>This is a terrible idea. People will have friends who get their papers in, in return for the favor of the same.

But this is already the case. In some fields even double-blind review doesn't work because community members are aware of each other's research. If you're one of two people worldwide whose specific specialization is the compression of semantic knowledge streams, you're probably willing to bet money on your ability to spot your colleague's research.


> It identified a culture where Tessier-Lavigne “tended to reward the ‘winners’ (that is, postdocs who could generate favorable results) and marginalize or diminish the ‘losers’ (that is, postdocs who were unable or struggled to generate such data).”

I am honestly very curious who hasn't observed the same behavior in large corporations too. (And to be clear, I'm not defending this behavior; rather curious if it's not an unfortunate generalized situation that occurs not only in academia).


The quotes from the chair of the board and from the president himself in the second paragraph of the article are very important. Neither concedes that Tessier-Lavigne's conduct disqualifies him from his position as a preeminent researcher. (Indeed, he will retain his tenure as a professor.) It is the impact of the report and the discussions of the allegations which make his continued presidency untenable. Do you note the subtlety?

Secondly, I encourage every interested reader here to pull up the actual report and read it. (The report and other statements from the board can be found on the board's website [0].) In a large laboratory, both honest mistakes and intentional data manipulation will occur. They just will. There is no evidence to suggest Tessier-Lavigne participated knowingly in fraudulent activity. All the evidence points to the fact that he was made aware of the issues at some point or another, proceeded to take some corrective action, but did not ultimately do enough. Pass your judgments on his actions and inactions, by all means, but make sure you're passing judgment on what happened, not on the impression you get of what happened from reading news articles.

[0] https://boardoftrustees.stanford.edu


I personally know two people who quit their PhD programs because of fraud. One was given the option to resign and basically not speak about it or face repercussions.


Yes I've personally heard similar stories. Students who were prevented from graduating until they produced significant results, even though their dissertation studies were approved by committees. So they'd eventually resign because they got fed up with it.

Or students and postdocs threatened over presenting or publishing null findings that ran counter to senior scientists' previously published theories.

Etc etc etc


Doesn't it seem like this kind of fraud should result in prison sentences? I'm so sick of frauds in academia. I really wish the elites in our society still took the idea of Hell seriously.

Also: "will resign effective August 31"??? Shouldn't this guy be locked out of his office, have his laptop confiscated, and be banned from campus immediately?


> Doesn't it seem like this kind of fraud should result in prison sentences?

It depends on the nature of the fraud.

I'm no fan of academia (see my post history), but this has to be close to the bottom of my list of priorities. The last car salesman I interacted with probably deserves more jail time than even the most unscrupulous academic (and the car salesman actually did commit a crime, but prosecution would be highly surprising). Medical and drug insurance is another case where there is systematic, intentional, and legal fraud literally killing people every day. The insurance case in particular is pernicious and full of literally deadly Catch-22 "tricks". See also all the obviously criminal web3 stuff that will definitely hit statute of limitations before any LE/prosecutor finds the time to investigate and prosecute.

Most fraudsters don't see any legal punishment because LE and prosecutor time is so limited relative to the amount of fraud. And in the worst cases because the fraudsters have so much money and power that even obvious bullshit is at least de facto not criminal (see insurance).

So, anyways. Should it be criminal? Yes. Is it criminal? IDK. Probably somehow. Is it where finite resources should be spent? Not usually; IMO there are far worse types of fraud where the people's LE+legal+legislative resources should be spent.

> Also: "will resign effective August 31"??? Shouldn't this guy be locked out of his office, have his laptop confiscated, and be banned from campus immediately?

Conjecture: there is probably a lot of "hand-off" work to be done. Excluding my first two jobs, where I was a junior/mid-level IC, I have always been asked to stay at least a month longer than the typical 2 weeks to handle hand-offs.

I guess the best we can hope for is that the last month of employment is living hell as he has to attend a bunch of hand-off meetings as a totally disgraced academic/leader.


He prob had millions in government grants to write those papers.

There's no accountability.


You know barely any of the money goes to him personally right? Grants aren't lottery tickets.


Ive worked in government labs and at universities.

The grant money literally pays their salary.

That aside... the amount of ways to commit corruption are endless...

The amount of conferences they had in Italy and Malibu and places like that...

The hot research assistant that never showed up to the lab but got paid.

The endless tech project that took a decade and millions of dollars to write a simple LMS because their buddy ran the LMS company.

The showing up at 10 AM and leaving at 3 with a long lunch, and working 20-hour work weeks.

The university creating a team devoted to hacking the grant process.

The elite university people in charge of the funding giving their other elite university alumni preferential treatment.

Zero diversity labs because scientists hire their buddies. You can literally walk through university research buildings and see all Indian Labs, all Chinese labs, etc.

The waste is massive and insane with our tax dollars. It's literally white-collar welfare. And it happens everywhere and there's no accountability.

It's a giant scam wrapped in the virtue signaling of altruistic science.

It needs to end.


> The showing up at 10Am and leaving at 3 with a long lunch and working 20 hour work weeks.

Mostly this.

The conferences honestly aren't that much of a perk, relative to the pay differential, at least in STEM fields.

The "hot research asst" thing was common in the past but died down significantly with #MeToo (still a lot of egotistical creeps ofc).

But the amount of general laziness dressed up as busyness in academia is astounding. Most professors retire in place some time in their early to mid 30s.

The solution is to end higher ed carve-outs in federal grant awards. Let anyone qualified apply for and receive NSF funds. Stop tying tax dollars to university affiliation.


> The solution is to end higher ed carve-outs in federal grant awards. Let anyone qualified apply for and receive NSF funds. Stop tying tax dollars to university affiliation.

I think this is infinitely better than the current system of just giving money to scientists.

And it's a step in the right direction towards eliminating government funding for science altogether.


The prison system wouldn't be able to handle the influx.


>Doesn't it seem like this kind of fraud should result in prison sentences?

It should, but it probably won't.


I remember a student who doctored his whole CV to get into Harvard (and win awards) was given probation: https://www.thecrimson.com/article/2010/12/16/harvard-wheele... Although his case was more egregious.


A transition period for a leadership role tends to be a good plan, especially when the misconduct of the old leader is rather tangential to the role (ie. he wasn't caught buying Ferraris with university funds).


Definitely. And on a broader note, white collar crime effectively having zero repercussions is the main reason so many of our institutions are failing.


And pay back his salary ...


That depends on what percentage of grant money you believe should be spent on extra administration and ass-covering, which is the standard institutional response to liability. Bear in mind: a healthy chunk of grant money is already taken as a rake by the sponsoring university.

Again, the optimal amount of research misconduct isn't zero!


Absolutely, but given the magnitude of the impacts, I'd like to go after white collar crime first.


> Absolutely, but given the magnitude of the impacts, I'd like to go after white collar crime first.

This is literally white collar crime.

Like most white collar crime, it's not reported in "crime statistics", and it may not be prosecuted for any number of reasons, but it's literally the definition of white collar crime.


I wonder if there should be criminal liability for this... literally billions of dollars of misdirected research effort for what may be a fraud.

Are amyloid plaques not an issue at all then, or coincidentally still an issue (but not justified by the research this person did)? Would be funny if this is a real world Gettier case.


A good reminder about Aaron Swartz and his story - https://youtu.be/gpvcc9C8SbM?t=3238


There's a lesson here: if manipulated research is so game-changing that drug companies are trying to reproduce it, and it is used to propel a career to a university presidency, then there will be consequences.

Otherwise the estimates of published research being manipulated range from 20 percent to being a majority and it’s difficult to get corrections or retractions.

https://journals.plos.org/plosone/article?id=10.1371/journal... https://blogs.bmj.com/bmj/2021/07/05/time-to-assume-that-hea... http://science.sciencemag.org/content/349/6251/aac4716 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3655010/ http://www.nature.com/nrd/journal/v10/n9/full/nrd3439-c1.htm... http://www.nature.com/news/1-500-scientists-lift-the-lid-on-...


I don't know what's been going on at Stanford lately but feels like there's been numerous scandals piling up one by one.


"The EMBO Journal wrote in a public post last week that it is reviewing the allegations regarding a 2008 paper about receptors within the brain for which Tessier-Lavigne is listed as the *third author of 11*"

11 authors? I thought the kickbacks of "I'll publish your name on my paper if you publish my name on yours" in Computer Science were bad...


Corporatized academia is far worse than Hollywood and Wall Street when it comes to protecting bad actors. The name of the game is "don't rock the boat and keep the grant money flowing". Nobody wants to bring focus on another's questionable activities lest their own receive focus.


For his work that was funded with federal grants [1], are there criminal charges prosecutors can bring?

[1] https://pubmed.ncbi.nlm.nih.gov/19225519/


I have a question: how do we fix this? Could enhanced peer review provide a solution? The peers are not detectives, though, so I really do not know.

It appears we are caught in an inevitable cycle: success in academia is intrinsically tied to more published papers. That inevitably fosters widespread fraud, as a small proportion of unscrupulous individuals will ultimately corrupt the entire system. As noted in the report, his lab had a trend of favoring the "winners".

Furthermore, we need to consider that there are very powerful external influences, such as politics and corporations, which can sometimes lead to the distortion of data.


I'm confused.

https://www.washingtonpost.com/education/2023/07/19/stanford...

"A panel of experts concluded that Tessier-Lavigne, a neuroscientist who has been president of Stanford for nearly seven years, did not engage in any fraud or falsification of scientific data. It also did not find evidence that he was aware of problems before publication of data."


It helps if you read the following paragraph.

They couldn't prove he personally did the manipulation.

He oversaw work at three different institutions over two decades that resulted in manipulated data by someone, and didn't issue corrections when those manipulations were pointed out. If he didn't do it, he was incompetent or uninterested in fixing the issues.


Yes, I think most people (like me) will get the impression that he was personally involved in fraud but at most he is guilty of what you speak of.


At most, he's guilty of fraud. That's unproven, but not impossible.

At the least, he's guilty of not noticing/caring about stuff he really should have cared about, and as a result wasted a whole bunch of money and human life (not just in his own labs, but those relying on his work) that we can't get back.


It is so frustrating when everyone is working hard to raise money for research to see it pissed away like this. I just saw a commercial for an upcoming cancer charity show put on by celebrities and I got really angry.


Let me clear up the confusion,

> Stanford president resigns over manipulated research, will retract at least three papers


I.e. he's nobly taking the fall for fraud committed by others that he knew nothing about in the lab he was paid a large amount of money to run and who attached his name to said papers he had nothing to do with.

Even the spun version doesn't make him look very good.


So what happens next? Do Stanford and his past employers sue for what he has already been paid? Does he lose his prestigious degrees? Because if this ends with his resignation, the fraud was totally worth it.


Lavigne holds board positions at two biotech companies, Denali and Regeneron. He has sold over $70 million of shares over the past three years and still has Denali shares worth around $63 million [1]…he can live the rest of his life on a luxury beach if he wishes, lol

1 - https://www.secform4.com/insider-trading/1437435.htm



What are the actual papers?


Here's a list if you scroll down and also the altered images with annotations. The photoshopping is pretty brazen.

https://stanforddaily.com/2022/11/29/stanford-presidents-res...


> Today, the Stanford board’s special committee released the law firm’s and scientific panels’ findings, which are based on more than 50,000 documents, interviews with over 50 people, and input from forensic science experts. Its report finds that for seven papers on which Tessier-Lavigne was a middle, or secondary, author, he bears no responsibility for any data manipulation. The primary authors have taken responsibility and in many cases are issuing corrections.

> But the 22-page report (plus appendices) found “serious flaws” in all five papers on which Tessier-Lavigne is corresponding or senior author: the 1999 Cell paper, the two 2001 Science papers, a 2004 Nature paper, and the 2009 Nature paper from Genentech. In four of these studies, the investigation found “apparent manipulation of research data by others.” For example, in one case, a single blot from the 2009 Cell paper was used in three different experiments, and a blot from that paper was reused in one of the 2001 Science papers.

> The 2004 Nature paper also contains manipulated images, the report found. Although the report says the allegations of fraud and a cover-up at Genentech involving the 2009 Nature paper were “mistaken”—people likely conflated the fraudulent paper a year earlier, and Genentech scientists’ problems replicating the work, it suggests—that paper showed “a lack of rigor” that falls below standards.

https://www.science.org/content/article/stanford-president-t...

PubPeer also shows which of the papers he's been involved with have "Errata" or an "Expression of Concern": https://pubpeer.com/search?q=authors%3A+%22tessier-lavigne%2...


Not sure which ones, but here is the whole history of Marc's research publications: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=Marc...


I haven't found a specific list, but did find this article with a bunch more details https://www.statnews.com/2023/07/19/marc-tessier-lavigne-sta...


TL;DR: References [1-3] further below are the direct answer to your question, "What are the actual papers?" (the "3 papers" referenced in the submission title; those that it is said that Tessier-Lavigne will retract). The PubMed links are as follows:

- https://pubmed.ncbi.nlm.nih.gov/10399920/

- https://pubmed.ncbi.nlm.nih.gov/11239160/

- https://pubmed.ncbi.nlm.nih.gov/15510105/

To elaborate:

From [0], "...To date, all three of these papers (Cell ’99, Science ’01 Binding, Science ’01 Silencing) remain published....Dr. Tessier-Lavigne has stated to the Panel that he intends to retract all three papers."

Also from [0], "As to the five reviewed papers where Dr. Tessier-Lavigne was a principal author...Specifically, a group of three papers contain images that are the result of manipulation of research data (Cell ’99, Science ’01 Binding, Science ’01 Silencing)...A fourth primary paper also contains images (which Dr. Tessier-Lavigne did not personally prepare) that indicate manipulation of research data (Nature ’04)....The Panel understands that Dr. Tessier-Lavigne now intends to retract at least three publications on which he is a principal author".

For the four papers named in the quotes above, citations and links appear below [1-4]. I'd suggest reading [0] (as linked from OP's article) for more details. There are other papers discussed in [0], too.

  [0] https://boardoftrustees.stanford.edu/wp-content/uploads/sites/5/2023/07/Scientific-Panel-Final-Report.pdf
  [1] Cell ’99: Hong K, Hinck L, Nishiyama M, Poo MM, Tessier-Lavigne M, Stein E. A ligand-gated association between cytoplasmic domains of UNC5 and DCC family receptors converts netrin-induced growth cone attraction to repulsion. Cell. 1999;97(7):927-941. doi:10.1016/s0092-8674(00)80804-1, https://pubmed.ncbi.nlm.nih.gov/10399920/
  [2] Science ’01 Binding: Stein E, Zou Y, Poo M, Tessier-Lavigne M. Binding of DCC by netrin-1 to mediate axon guidance independent of adenosine A2B receptor activation. Science. 2001;291(5510):1976-1982. doi:10.1126/science.1059391, https://pubmed.ncbi.nlm.nih.gov/11239160/
  [3] Science ’01 Silencing: Stein E, Tessier-Lavigne M. Hierarchical organization of guidance receptors: silencing of netrin attraction by slit through a Robo/DCC receptor complex. Science. 2001;291(5510):1928-1938. doi:10.1126/science.1058445, https://pubmed.ncbi.nlm.nih.gov/11239147/
  [4] Nature ’04: Lu X, Le Noble F, Yuan L, et al. The netrin receptor UNC5B mediates guidance events controlling morphogenesis of the vascular system. Nature. 2004;432(7014):179-186. doi:10.1038/nature03080, https://pubmed.ncbi.nlm.nih.gov/15510105/


I am not familiar with Tessier-Lavigne's work, except that it concerns Alzheimer's disease. Is there a summary of what findings / hypotheses are now in question?


The quote below summarizes the epidemic at many labs, especially in AI. If you want funding, GPUs and head count, then you must prove yourself to be in the top 5. The result is relentless pressure for results that can create headlines.

It identified a culture where Tessier-Lavigne “tended to reward the ‘winners’ (that is, postdocs who could generate favorable results) and marginalize or diminish the ‘losers’ (that is, postdocs who were unable or struggled to generate such data).”



I have a stupid question. What should we think of Stanford's ethics when said individual still retains a board position?

If his excuse is that he did not investigate lab-reported data in enough detail before submitting research reports, then shouldn't that same lack of skill be grounds for his removal from the Stanford board?


All of these comments and articles and no mention of what the studies were about, and why he manipulated them.


Looking through report issued by Stanford's scientific panel, it seems quite clear that (i) Tessier-Lavigne and/or people in his group have been very naughty indeed, and (ii) most of their image manipulations might have escaped notice if they'd only used fuzzy select.


Sometimes I wonder: even after getting caught, do academics have the least to lose? The guy will still be a professor, right?

Can a doctor or a lawyer remain in their profession after such malpractice?

PS: Politicians don't count. Corrupt as they may be, they still have to face the music every 4-5 years.


Research papers seem to be fundamentally flawed in that the person who has the most to gain is essentially the person measuring the results - a classic conflict-of-interest problem. Peer review is clearly ineffective against highly motivated, nefarious individuals.


I would imagine this is more common than we know or expect. The incentives are too great, and university administrators exist in such large numbers that cheating your way into those positions must be a common tactic for 50% of the people in those positions.


So… what’s going to happen now with all the behavioral science people and their fabulous data


Theo Baker was interviewed on the Longform Podcast recently:

https://longform.org/player/polk-award-winners-theo-baker


I know of some similar misconduct around research at another well known university. Hopefully the field can be reformed further to protect against manipulated research because it’s extremely dangerous to the public interest.


Unfortunately scientists are often rewarded/celebrated for research that finds groundbreaking or new results as opposed to results which are actually true but less interesting. This incentive is very difficult to resist.


Have to say. Pretty surprised that on HN there is such a strong emotional desire to tear down 'Academia', or even science really.

For several months I've noticed that any story about Academia that hints at a problem, or a misstatement, or something over-stated, anything dealing with science, there is just a mad rush to grab torches and scythes, pikes, etc... "Crucify", these people are all con-men.

But, if it's a tech company, then all good, just lie all you want, that's just salesmanship, 'selling hype' to promote a product.

I fear it is part of the 'post-truth' America where nothing is trusted.


Aside, what do you even do now if you are Tessier-Lavigne? Where do crooks go after they are caught but not properly punished? Some pharma company?


The problem with these people is that they are allowed to resign and say sorry; they don't get treated as potential murder suspects or criminal assaulters by proxy (if such a law exists), because we don't know if their work is being cited and used for drug treatments. Here in the UK, the part of the NHS that decides what drugs can be prescribed by GPs, called the National Institute of Clinical Excellence (NICE), hides behind secrecy when choosing what drugs GPs should prescribe.

There just isn't enough accountability for the people at the top.


Why is bro not also getting fired from his biology prof job there? They're letting the old boy off way too easy.


Was this fake research funded by taxpayers? Can they sue this scientist to get some money back?


> According to Jerry Yang, chair of the Stanford Board of Trustees...

Also the co-founder of Yahoo!.


And yet people think the sokal hoax contains some revelations about an entire field.


the 2009 paper repeatedly referred to in this article: https://doi.org/10.1038/nature07767


Just another result of the terrible Publish or Perish incentives in Academia.


The rot goes SUPER deep team


And they wonder why we don't all just "trust the science!"


This guy is like an actual billionaire. I have no idea if his fraud extends to the actual drugs he's made money from, but I wish these kinds of people were held to higher standards. Zero chance he's gonna have any legal repercussion.


Unbelievable, no wonder people don't trust the science!


There's a reason for maintaining separation of corporate capitalism and science/academia. People are persuaded by the almighty dollar and the prestige. Before you know it academic institutions and "prestigious" scientists are pushing propaganda, for kickbacks.

While it may be difficult at times to maintain ethics and integrity, it's always worth the commitment. Always.


Thankfully Daddy Business always tells us when he lied to us.

Science is self-correcting. Not always right. This is part of what self-correction looks like. It beats all known alternatives.


Science didn't self-correct here. The checks and balances in the scientific system failed so badly that a student journalist uncovered the fraud. The problem is that Science has become, in your words, "Daddy" Business.


Oh, I see. Was the fraud revealed during discovery? Or was it a deathbed confession? No. But it was revealed, and championed by an undergraduate journalist.

It's important to track the provenance of ideas, and Theo Baker wasn't the first person to identify the falsified data. He has done great work keeping Stanford from burying the story, but he isn't Elizabeth Bik, combing through old Science articles looking for duplication.

And now the papers have been retracted, and the responsible party faces laughably trivial consequences, all things considered. A self-correcting system isn't going to get it right all the time. The papers were under the aegis of a powerful man, so it's not surprising that it took some time for them to be corrected.


Smarts and skills do not rule out malice.


To be blunt, running an institution does not necessarily require the scientific creds in question. But interesting that they cast the first stone


Wild that this wasn't covered up


More evidence that science in its current state is more of a religion than science. People only using it to further their agenda


Which papers are being retracted?


Putting the scam in Scamford


Trust the science, they say.


Trust but verify


Verify then trust


I really think there needs to be some deep reexamination of the current way we quantify the value of research output in the academic world. I don't have the time or energy to develop a full argument in favor of an alternative replacement, but I'll do my best to share a "cut down" version.

Some of the biggest problems with the current system are:

* Peer Review has its share of problems[1][2][3] that create horrible second order effects, especially among those pursuing PhDs[4]

* Elsevier, Nature, Science/AAAS and others perform rent-seeking behavior to the extent that I think it's worth asking whether they hinder the funding and dissemination of good science more than they help. As a personal aside, I always found it very off-putting that DeepMind regularly publishes in Nature and Science, despite the fact that, outside of AlphaFold, their work often has little overlap with the readership that typically frequents these journals.

Personally, I am of the opinion that platforms like Semantic Scholar, arXiv and OpenReview are doing a better job of promoting open and transparent academic research with improved accessibility to both the public and the researchers doing good work.

Given the power of being mentored by great scientists, it makes sense to have filtering processes which concentrate great researchers in a small number of schools. My point is that if there is too little oversight, these institutions become incentivized to all but encourage bad behavior in order to maintain their image. We need systems which encourage MORE transparency into the process of creating science and MORE accessibility, because an important part of scientific research is its uncertainty.

Tools like arXiv, Semantic Scholar, and OpenReview are all steps in the right direction, and it would be good to promote the usage of these tools outside of their current userbase, as I think they provide a system for people to observe science more easily, and for important parts of the research process to be accessed by all.

1: https://academia.stackexchange.com/questions/115231/why-is-p...

2: https://blog.neurips.cc/2021/12/08/the-neurips-2021-consiste...

3: https://jamanetwork.com/journals/jama/article-abstract/19498...

4: https://medium.com/@tnvijayk/potential-organized-fraud-in-ac...


Trust the science (TM).


he should retract himself from his job.


stanford daily absolutely killing it


He probably didn’t even read those papers.


Yet listed as co-author, so good thing we're getting rid of him.


Very glad to see this. The Stanford Daily did a great job reporting this. Reposting a comment that I found instructive from the discussion on this piece [1], about the Genentech report [2], which made MTL look very, very bad.

``` APersonWhoCanRead 3 months ago

It seems to me that the linked report goes as close as possible to accusing MTL of fraud as one could hope given that it's coming from Genentech lawyers that are trying to keep the company out of trouble:

"In order to assess whether the 2009 Nature paper contains duplicate images, the diligence team consulted an independent, outside expert who specializes in detecting image manipulations in scientific publications. This expert concluded that two sets of figures, Figures 1d and 5e and Supplementary Figures 9c and 17c, include duplicate images. The expert also concluded that a Western blot panel for Caspase 6 in Supplementary Figure 6d appears to include a composite of two images. We have not determined how these anomalies occurred."

"Genentech scientists and research associates had difficulty reproducing certain results reported in the 2009 Nature paper, in particular, the binding interaction between DR6 and N-APP (the N-terminal portion of APP). Prior to publication of the paper, employees other than the authors performed binding experiments that showed inconsistent results – sometimes binding between DR6 and N-APP was detected, and other times, it was not. Some of the employees who performed those experiments attributed the inconsistent results to variability in the purity and quality of the reagents used." --> Clearly, some employees attributed the inconsistent results differently - I'm guessing as fraud. --> These determinations were made before the paper, which contained fabricated data (c.f. above), was published. Clearly, the first author would have been told, and most likely also MTL.

"Senior leaders at Genentech including Dr. Tessier-Lavigne knew of the inconsistent binding results, and there was uncertainty and speculation within the Genentech Research organization about why the binding interaction between DR6 and N-APP could not be reliably reproduced or confirmed."

"Also following Dr. Tessier-Lavigne’s departure, one senior leader in gRED urged that the 2009 Nature paper should be retracted or corrected in light of the inconsistent binding results. Other senior leaders recognized at the time that this was an action only Dr. Tessier-Lavigne or another co-author could take with the journal." --> MTL was asked to retract and did not.

TLDR: the report is very damning. Why don't you try to dispute some of the facts reported by the Daily, instead of writing nebulously that their headline is misleading.

```

UPDATE: To clarify, that comment is responding to another comment saying "the report is very positive for MTL"

[1] https://stanforddaily.com/2023/04/06/stanford-president-rese...

[2] https://www.gene.com/download/pdf/Findings-of-2023-Genentech...


I bet that for every case of scientific fraud that is obvious from the published paper (like this one), there are 10 cases of scientific fraud which are never detected.

Think about it - a domain expert will do a far better job of faking data than a random Joe, and will be aware of most statistical tests that could catch them.
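
For a sense of what such a test looks like (purely as an illustration, not any investigator's actual method), here is a minimal Python sketch of a Benford's-law leading-digit check; the function names and the chi-square threshold are assumptions made up for this example:

```python
# Minimal sketch of one common fabrication screen: compare the leading-digit
# distribution of reported values against Benford's law. The names and the
# example threshold are illustrative, not any real tool's API.
import math
from collections import Counter

def leading_digit(x: float) -> int:
    # Scientific notation puts the leading significant digit first, e.g. "3.14e+02".
    return int(f"{abs(x):.10e}"[0])

def benford_chi_square(values) -> float:
    """Chi-square statistic of observed leading digits vs. Benford's expected counts."""
    digits = [leading_digit(v) for v in values if v != 0]
    n = len(digits)
    if n == 0:
        return 0.0
    counts = Counter(digits)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)   # Benford: P(d) = log10(1 + 1/d)
        chi2 += (counts.get(d, 0) - expected) ** 2 / expected
    return chi2

# With 8 degrees of freedom, a statistic far above ~15.5 (the 5% critical value)
# flags leading digits that deviate from Benford's law and merit a closer look.
```

Which is exactly the point: a careful fabricator knows about screens like this and can generate numbers that pass them, so the undetected cases likely outnumber the obvious ones.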


Another win for Berkeley


Trust the science, bro


MSG got a bad rap due to a joke by scientists.

No matter how well educated, they’re still just people and biology is optimized for success.

At this point society needs to have a good long think about enabling the reach of any specific individual.

Our math is not holistic truth, as it's been shown there can never be one true set of axioms. All philosophy is relative to human awareness and agreement. The masses have always agreed rent seekers are leeches who externalize providing for themselves.

We need to stop creating landed gentry; they’re all just one of billions like the rest. The work is important, not their figurative identity; we don’t need them to carry out the work.

3 months labor, 3 months off, 6 months white collar work. A rotation such as that would effectively act as term limits on social influence.


[flagged]


Did they cover her misdeeds before? I'd love to see that reporting. Such a shame that the state of CA adopted the math framework she was pushing, even though so many of her colleagues (STEM professors, not education professors) described the negative impact it will have on learning and college readiness.


This isn't Reddit. If you have something to say, say it, but drive-by content-free comments like this aren't appropriate here.


If you have evidence of scientific misconduct then show what you've got. If not, the culture-war bullshit is off-topic.


She repeatedly misquoted research in her papers. This has been well documented. Here's a comprehensive critique of her, by Stanford math professor Brian Conrad: https://sites.google.com/view/publiccommentsonthecmf/?ref=st...


The investigation into these allegations was terminated after finding no wrongdoing.


There was a previous investigation, years ago, which failed to dismiss her as a tenured professor (which is very difficult to do — as we've seen with MTL). But the document I linked to is from 2023, related specifically to her misrepresentations in the CMF. That has not had a hearing, AFAIK. If you have more information, please share it!

I would be surprised if misstatements in such a document (which is not published research) could lead to a tenured professor being fired. But I would welcome an investigation by the Daily into the issue, which could turn up evidence of other misrepresentations in contexts that are more likely to receive administrative scrutiny. She surely has freedom of speech, as a professor. But purposely and persistently misquoting research is precisely the sort of thing that professors can be punished for.


I tried looking at that document. It's pages upon pages of nitpicky detail causing my eyes to glaze over.

Better would be something which points out the scientific misconduct. Otherwise it comes across like a Gish gallop.

As an example, I picked one of the documents - https://drive.google.com/file/d/17O123ENTxvZOjXTnOMNRDtHQAOj... - and found a comment that intrigued me:

> In some places, the CMF has no research-based evidence, as when it gives the advice “Do not include homework . . . as any part of grading. Homework is one of the most inequitable practices of education.” The research on homework is complex and mixed, and does not support such blanket statements.

I stuck "homework inequitable" in Google Scholar and found https://scholarworks.calstate.edu/downloads/8910k087s saying "Based on the literature, it is apparent that homework is not equitable for students from low socioeconomic backgrounds. It is important to mention that some studies claimed a positive correlation with homework and learning outcomes, but those studies don’t take socioeconomic status into account."

There's a book from 2000 on the topic, "The end of homework : how homework disrupts families, overburdens children, and limits learning", at https://archive.org/details/isbn_9780807042182/page/n9/mode/... and "Rethinking Homework: Best Practices That Support Diverse Needs" with a second edition in 2018.

This all makes me wonder: why doesn't the research-based evidence support this statement?

If it isn't "one of the most", what are the most?

Or is the issue that the author doesn't understand the topic enough, so thinks it's too complex for anyone else to understand?

> the document I linked to is from 2023

That's an unfair characterization. While parts of it are from 2023, https://statmodeling.stat.columbia.edu/2022/05/05/california... shows the document was there in 2022.


> That's an unfair characterization. While parts of it are from 2023, https://statmodeling.stat.columbia.edu/2022/05/05/california... shows the document was there in 2022.

The document was edited over the last year or so, as the CMF was released and edited. But the investigation referenced by another commenter took place way back in 2006, well before the CMF popped up. [1]

1: https://www.insidehighered.com/news/2012/10/15/stanford-prof...


Thank you for agreeing with me about that point.

What about my more substantial one - I don't have the time to dig through what appear to be a lot of personal disagreements about how to interpret research, so would you please highlight the part which you believe best constitutes scientific misconduct?

I mean, sure, repeating a false myth about calculus from the 1800s may be wrong, but if that counts as misconduct then there's a lot of misconduct going on in academia.


I didn't actually agree with you — I maintain that it is not "an unfair characterization", since the document was created a decade and a half after the investigation that another commenter referenced. Nice try on claiming the W though!

> Or is the issue that the author doesn't understand the topic enough, so think it's too complex for anyone else to understand?

I would say the same to you, in your attempt to critique Professor Conrad's piece. As you say, your eyes glazed over so you didn't actually read it. Perhaps you should read it before concluding that it doesn't contain anything of value.


You originally wrote "But the document I linked to is from 2023, related specifically to her misrepresentations in the CMF".

I said it was unfair because it was from 2022, with edits in 2023.

You agreed that it was from 2022, with edits in 2023.

> Perhaps you should read it before concluding that it doesn't contain anything of value.

Again, Gish Gallop. This has the mark of throwing everything on the wall in hopes that something will stick. Making a mountain out of every molehill means it's hard to see if there are any actual mountains.

You say it's evidence that she "repeatedly misquoted research in her papers". My own spot check of literally the first topic I looked into makes me suspect the author's own ability to interpret the research. I listed my analysis - am I wrong?

Otherwise, sure, perhaps that one item I looked at is an exception and the rest of the document contains actual mountains.

Since you've read the report, what misquoted research stands out to you as the most damning?


Seems to me like she is the victim of right-wing conspiracy theorists.


For those who don't know this kerfuffle

Dr. Brian Conrad is a very vocal critic of the California Math Framework. He definitely has an axe to grind.

Dr. Jo Boaler is a British education author and Nomellini-Olivier Professor of Mathematics Education at the Stanford Graduate School of Education. Boaler is involved in promoting reform mathematics and equitable mathematics classrooms. She is the primary author of the California Math Framework.


It's probably worth noting that they are both full professors at Stanford, not just a guy with a doctorate vs a Stanford professor.

I'm unaware of Professor Conrad having an "axe to grind". AFAIU, he simply thinks that what Professor Boaler advocates is incorrect, and he sees fit to describe the myriad mistakes and misrepresentations in her research and advocacy.

He shares this outlook with many people, including those who believe that despite saying that she advocates for "equity", her agenda would actually lead to worse outcomes for many students, including low-income students who lack family resources to procure advanced mathematical education.


That certainly sounds like what someone who agrees with Dr. Conrad, and has the same axe to grind, would say.


When I look up "axe to grind" I see this definition:

have a private reason for doing or being involved in something.

I have never met Professor Boaler, nor do I have any private/hidden agendas. Do you simply mean "someone who disagrees with" when you say "axe to grind"? If so, it's somewhat of a meaningless epithet. If you mean the normal definition, I'd be curious what evidence you have for that (related to Professor Conrad, or me).


Well, let's just agree to disagree, then.


If you won’t say what you mean, there’s no other choice!


Oh yes, there is another choice. Rather than assume that what I mean is the dictionary definition and asserting your opinions, why not ask some good questions?


So much for "this is science, anything saying otherwise is conspiracy".


Trust the science.


Mekhmat is recalling 500,000 of its graduates. Reason for the recall: in volume two of Fichtenholz, on page 187, formula XVIII.56 is missing a normalizing operator.


Remember that these are exactly the "type" of person the media will trot out in front of the public to make wild claims about "the science". Questioning their claims can have consequences.


Anyone else vomit when they try to read academic "papers"? It's like they are all written by some kind of Borg cult


It's a very different kind of "reading": condensing 50 pages into about 7. I can read papers in my field fairly well because I have an understanding of the ideas, but it takes me hours to read a single paper from outside my field.

This, and I can't stress this enough, is a good thing. You've probably complained that people spend way too much time waffling nonsense instead of simply writing the ingredients in cooking recipes. Scientific papers ARE the list of ingredients without anything else. If you feel like they missed three steps, go back and actually read it, bit by bit. Read the prior art if you're really confused. You simply cannot skim-read them; each sentence is important.


Nope. Sorry, that's on you.


Additional info available from https://www.statnews.com/2023/07/19/marc-tessier-lavigne-sta...

I'm glad to see someone take some personal responsibility that isn't forced on them (that we know of). I realize that I don't know enough about the circumstances to know if this is "enough", and I am curious about his staying on as faculty, but purely from appearances this is a pleasant surprise, given that most recent public-figure screw-ups have just posted through it.


I hear the calls for blood here, but will offer a contrarian point of view. The pressures academic researchers face today are unmatched by even 95% of industry jobs: the pressure to publish continuously, pressure to win grants, pressure to be a great teacher, pressure to be a role model for students and younger faculty, and pressure to balance all this with families that really need you too.

So we basically take the brightest minds and have them compete in a gladiatorial rat race. This system is so broken that something fundamental has to change here.


"When you're in a hole, stop digging". Yes, academia is terribly broken at the moment, the incentives are fucked, and fraud is rampant. But the solution there is not to just look the other way at misdeeds, that just makes the incentives even worse. Highly-visible career executions for misconduct aren't the entire solution, but they are part of the solution.


The guy went on to become Stanford president; it's not the system that broke him, it's that he was a cheater all along and flourished in it.

There are a lot of good honest hard working people losing out, but you won't find them at an administrator luncheon because they are spending 70h in the lab every week on a temp contract.


This person is in trouble for intensifying exactly the issues you're describing:

"The report ... identified a culture where Tessier-Lavigne “tended to reward the ‘winners’ (that is, postdocs who could generate favorable results) and marginalize or diminish the ‘losers’ (that is, postdocs who were unable or struggled to generate such data).”"


I think GP's point is that there is little here to deter other academics who behave the same way. From my time in academia, there were plenty of professors I knew who behaved this way (not direct falsification, but rewarding the winners).

This is a particularly egregious case of a high profile person. In most other cases, if misconduct is detected, the buck is passed on to the individual researcher/grad student. I personally know a fellow student who falsified data, published papers, and was caught. Only he, not the coauthors, got in trouble.


I agree. I'm a physicist and if everyone I knew who lied in their research got fired, there would be no one left.


What the actual fuck. There should be none of you left if that’s the case.


Not sure why you're surprised; it's basically required at this point. Part of becoming a physicist is learning which 20% of a given paper is a vast exaggeration of the impact or significance of the results. The exaggeration and deception are required for career progression


Probably because anyone mentioning such a thing is met with outrage at suggestion that “scientists” are anything but humble truth seekers.


Scientists are careerists first. I'm sure you've been told this many times


No, the public discourse around this has been, for years at this point, that this isn't true. Stating anything else gets one labeled as an anti-science conspiracy theorist at best or a fascist at worst. Surely you know this.


I’m from a reasonably poor background so my experience has been almost the opposite, that experts are derided and seen as weirdos who are missing some aspect of reality.

I get that there is a portion of the population for which what you say is true, but I'm sceptical that you aren't aware of what I'm talking about. If you've genuinely managed to avoid it up until now, then you're welcome, I guess


I didn't read your comment as a defense, only as an explanation. And as an explanation, you're right: The pressure is getting to way too many people. But how can you fix it? I'm afraid good answers need very deep change. Society lets a few 'winners' (whether by cheating, effort, good luck, or anything else) reap too much of our collective rewards.


> But how can you fix it? I'm afraid good answers need very deep change

How the hell do you manage to bring deep change to large entrenched bureaucracies like universities, though? Honestly, I'm surprised there wasn't a crackdown or any suppression of the guy who exposed this person.


The guy who exposed this person is the child of two prominent New York Times journalists.


I didn't realize academic research was a mandatory life sentence.


It's interesting to see that software engineering (and IT overall) isn't the only field to suffer a major quality downgrade from VC-like funding and the push for continuous development.


If I understand you correctly, you are saying we shouldn't be punishing individuals for a system problem. Is it time to reform the system then?


It was time to reform the system several decades ago. But Americans are terrible at understanding "not every dollar spent on research will have a big outcome", or even understanding how research works in general.



