How are the feds able to stick a wire fraud charge on almost anything except academic fraud? It's one thing to make a mistake like Reinhart and Rogoff's embarrassing Excel blunder. It's entirely something else to falsify data.
> Under Zlokovic’s leadership, the USC institute has expanded to more than 30 labs and grown its annual funding more than 10-fold, exceeding $39 million in 2022. NIH grants to Zlokovic have totaled about $93 million. A prodigious fundraiser, in the past decade alone he has added at least $28 million from private sources, according to USC.
So the NIH has given the man $93m. Surely that's a place to start?
Wire fraud and prison sentences would be a good start. Besides the absolute dollar amount, the amount of damage done in the form of second-order effects is enormous.
Where do you draw the line? I saw many funding proposals for extremely abstract category theory justified on the basis that it helps number theorists solve problems related to cryptography and protect e-commerce. Between friends, everyone knows this is not true. I'm not against abstract research: given how cheap it is, I don't understand why we can't just allow it to be funded without having to provide justification that makes everyone feel a bit of a criminal while writing the proposal. It should be seen sort of like how the public commissions art.
Another type of mini-fraud: you could say almost 80% of applied machine learning papers contain serious methodological mistakes that are almost equivalent to falsifying data, and what's worse, the authors are well aware of them, and often even the reviewers are. I am talking about the kaggle-type ones where you apply some kind of PCA or neural network to some relatively small dataset.
If we had a social norm against pretending abstract research would "enable the creation of new materials, advance quantum computing, and enhance the security of your bank account" the agencies would learn to fund it anyway, because they would realize you run out of applied directions in the absence of basic research. Right now it's all laundered, so they think they can fund far more applied research than is available.
We used to have a social norm like that. It eroded, because populist politicians started ridiculing research proposals they found frivolous. Now most proposals are riddled with fake justifications, because people wanted more accountability in how tax money is spent.
At the risk of being controversial, the latter half of your statement tracks very well with my experience of organizational DEI initiatives, in particular. We have reached a point where tenure-track faculty are expected to provide justification, in writing, for the value of their research to various marginalized communities. That alone wouldn't be a huge deal if conference talks, research funding and, increasingly, tenure committees weren't being similarly pressed to provide these kinds of justification.
From where I'm standing (I hold a Ph.D in a scientific field, and my spouse is a tenure-track academic), an unsustainable portion of academic work consists in justifying one's research program, on grounds that have little to do with the subject of study.
It should hopefully be obvious that I'm not trying to stir a DEI flame war. Rather, I can't shake the feeling that DEI activism isn't accomplishing much beyond creating bureaucracy, and that this benefits neither research in general, nor marginalized groups trying to establish themselves in academia.
Anyway, this seems like a specific instance of a larger trend you're pointing to. Possibly not even the largest or most absurd one.
It's difficult to tell where the initiatives are coming from because the willingness to write and accept empty claims of relevance appears to exist at all levels and stages in the process. (For both DEI and "questionably applied number theory.") If they are getting that little buy-in from even the administrators I can't imagine how they get there in the first place.
It's like the same people put application and social consciousness first in their RFPs and then turn around to accept invisible fig leaves during evaluation... Who's getting fooled, the government? Politicians? The public?
>It's difficult to tell where the initiatives are coming from because the willingness to write and accept empty claims of relevance appears to exist at all levels and stages in the process.
Is it, though? In the case of DEI, it appears to me (on the surface, anyway) that these initiatives are being implemented at the request of very vocal special-interest groups, and at the orders of the politicians who represent them. It seems like a fairly straightforward case of lobbying by institutions whose existence depends on the existence of a social problem. It's a rough analog to Eisenhower's Military-Industrial Complex idea.
The same is true with what you call "questionably applied number theory". The only difference is that the special-interest group isn't defined along ethno-racial lines, but instead comprises various bean-counters and ad-men who sell various forms of investment.
In my view, there used to be a consensus that such lobbyists were not welcome in universities, or at least not in droves, because of their tendency to subvert intellectual pursuits in favor of short-term profits.
Who were these "populist politicians" exactly? I don't remember any of them ever talking about university research.
It'd actually be great to have some politicians who ridicule frivolous academic research. It's not like there's a shortage of it especially in the humanities and social sciences (where there's no such thing as basic research anyway). Alas, their attentions are all elsewhere.
That does sound close, but Proxmire doesn't seem like a populist politician, the Golden Fleece dates from the mid-1970s and '80s, and it wasn't specific to academia. A lot of the research was funded by other parts of the civil service than the NSF, like:
The Federal Aviation Administration was named for spending $57,800 on a study of the physical measurements of 432 airline stewardesses, including "distance from knee to knee while sitting", and "the politeal [sic] length of the buttocks."[19]
The Office of Education, for spending $219,592 on a "curriculum package" to teach college students how to watch television.
Amusingly it does seem to have hit the mark. A former NSF director even lied about the awards to try and discredit it!
Too bad this is all so long ago. I was asking more about the present day, as it seems like that was the implication of the comment being replied to. Where are all these populists with their modern version of the Golden Fleece awards?
Academia justifies itself by claiming it's necessary to do "basic", "foundational", "fundamental" or "blue sky" research in a non-commercial setting, because supposedly corporations can't afford to do long term research that doesn't have immediate application. So in this model academic scientists develop a foundation of firm and rigorous theory about how nature works, publish a literature on it for the good of all mankind, and then shorter term applied research is done by companies to derive practical insights from that literature.
That model doesn't seem right, because there are lots of corporate research programmes into fundamental theory that were/are lavishly funded despite a decade-plus of no ROI - most obviously, at the moment, AI. But let's put that aside and pretend it's true.
Problem is that humanities and social studies don't produce a body of theory on which anything can be built. Humanities produces only critical theory, which despite the name is more like a belief system than a scientific theory. The social studies produce a lot of that too, but even outside critical theory their output is just a giant pile of random thought bubbles and mini-studies to try and prove them. There's nothing much linking these studies, no frameworks that make reliable new predictions. Instead we get an endless stream of Just So claims that fall apart when people try to replicate them, but even failure to replicate doesn't have any impact because there's nothing underneath them to be invalidated. At most it's shown that this one specific paper isn't reliable.
Contrast to something like physics where the work of academic theorists is all about coming up with unified theories of everything, which could then be used to develop practical applications. Einstein is super famous because he invalidated large swathes of theory and opened up space for new theory to be developed. Social studies don't have any equivalent to Einstein and can't by design.
No, the research was fine. For example, a friend of my family spent a large part of his career writing an extensive bilingual dictionary for an indigenous language in Southern Mexico. He won a "golden fleece" award from asshole Senator Proxmire of Wisconsin, who thought dictionaries for languages with only a few hundred thousand speakers was a waste of government research grant money.
Creating a bilingual dictionary for an indigenous language spoken by a few thousand people who aren't even in the country funding that research is the definition of "a waste of government research grant money."
Is this satire? Leaving aside your sloppy reading, this is one of the most exaggerated examples of a stereotypical "technical person utterly bereft of human perspective" take I have ever seen on this website. Which is really saying something, because there are a lot of blinkered people here.
I feel profound sadness and pity for anyone with such an impoverished worldview. I hope someday you can experience some literature, art, nature, or human connection that can pierce though it and help you reconnect with your innate capacities for empathy and wonder.
Please don't break the site guidelines like this, no matter how wrong another commenter is or you feel they are. It's not what this site is for, and destroys what it is for.
It's kind of sad that you need to jump through a lot of hoops for research funding, but when it comes to things like military funding everyone is ready to deem it more than justified and not ask too many questions.
Because it actually is? I don’t understand how one can compare abstract research into category theory and investment into millions of people not getting killed and conquered and get sad that the latter comes up on top.
There is a lot of defense spending that does nothing to support that mission, or does so in a terribly inefficient way. Given the scale of spending we're talking about, many billions of dollars get wasted every year, but folks who get worked up over a few million wasted on research are often unconcerned with 1000 times more wasteful projects as long as they have the military label slapped on them.
From where I'm standing, American defense spending is inadequately low for the current situation. The US is struggling to increase artillery shell production from 15k to 30k per year, when the Ukraine conflict alone should consume hundreds of thousands, if not millions. Same with many other items: American and NATO stockpiles are low, and production capabilities almost non-existent compared to adversaries.
Saddam Hussein started the Iran-Iraq war, in which Iraq achieved no significant gain, at the cost of between one and two million lives. This war took place in the middle of his 40-year tenure as a president who killed hundreds of thousands of his own people with the same military equipment.
It would almost sound like a noble motivation for all that shock-and-awe spending, except that wasn't the motivation.
Nobody cared when he was actually killing Iranians or domestics. Back then, kids of exiled Shah-regimers were protesting US gov't support for Iraq (widely reported overseas, widely ignored in the US) at colleges in CA, Michigan and Texas and getting beat up for their trouble.
Plus we were rolling in all that $15/bbl oil with the flooded oil market, so it was a domestic win-win. Except for all those oilmen in TX who went bust. Well, they voted for Ron, so I guess they got what they wanted.
Yes and who was supplying the weapons and other support during that time? Arguably defence spending in the US and western Europe directly contributed to the deaths.
> Where do you draw the line? I saw many funding proposals for extremely abstract category theory justified on the basis it helps number theorists solve problems related to cryptography and protect e-commerce.
That's not fraud. Not even the most gullible reviewer believes a word of it.
Leave it to anthropologists of science to describe the ritualistic aspects of writing abstracts.
I love that the grad student who caught it came from a Marxist-oriented economics program. He legit figured it out by trying to replicate the results published by Reinhart and Rogoff.
Not-so-fun fact: Economists don't publish their actual Excel spreadsheets, which would have made their error trivial to prove, but just the results!
Feds don't prosecute the stuff their own tribe is involved in. Once retired, federal bureaucrats become honorary professors, chairs, fellows at 'X school of international studies/government', 'Y school of comparative law', 'Z school of efficient administration'. All such schools are hotbeds for govt spies and other such folks. Where and how do these schools get their money? They use the same techniques that this neuroscientist uses. Once a front is opened by charging academic fraud as wire fraud, etc., and if that succeeds, expect the next round to go after these sinecure schools and their funding when the power structures change.
>Feds don't prosecute the stuff their own tribe is involved in
I get it. The world is complex, and grouping helps make it more manageable. However, in no way are the "Feds" part of the same tribe, especially in 2023.
>All such schools are hotbeds for govt spies, other folks.
Maybe recruitment? I'm not familiar with the curriculum at USC. Seems like you'd get them when they finished their degrees, unless they offer "wetwork" as an elective.
>They use the same techniques that this neuroscientist uses.
Disagree. Whatever tribe or faction these people you describe belong to, they are definitely not publishing papers. They'd be administrators. They are not involved in prosecuting academic malfeasance, and the prosecution is certainly not taking advice from retired government employees. He/she/they have their own careers to consider!
This is a deeply uninformed comment. I don’t even know where to start. Nearly every sentence is wrong.
You can absolutely get hauled in front of a judge for fraud and abuse in the use of NIH grants. You can be required to pay all of the money back, among other punishments. Google will reveal a variety of cases.
It is in fact rare for retired federal bureaucrats to become professors. It is much more common for people at the Secretary level to get teaching positions at schools of public policy. I am not aware of any former CIA directors holding those roles but I invite you to look.
These schools are not "hotbeds of spies" - truly, that is a completely uninformed statement.
By the way, very few law professors work via grants and even fewer via NIH grants.
"I was just telling this guy I was going to take away his life, lock him in a cage for decades, and force him to live with violent criminals in a place notorious for violence and rape as punishment for his victimless 'crime' - and then he killed himself."
I'm not going to respond to you any more as I consider your position too reprehensible to continue engaging with. You are defending the murder of an innocent person and the abuse of government power. It's clearly a bad thing for agents of the government to use threats of extreme punishment to coerce people to accept lesser punishment. Rather than come up with some kind of argument to justify your moral degeneracy you simply keep repeating what happened.
A suicide is not a murder, no matter how little you believe in personal accountability and/or free will.
Crimes should be punished, no matter how little you believe in justice and/or the rule of law.
The life-preserving choice he had every chance to make was simple, no matter how little you believe in the righteousness of admission of clear guilt and/or simply saying what you must in order to move on with life.
These things are all true regardless of your beliefs – or my so-called reprehensibility and/or moral degeneracy.
Before you complain that a comment is a tactless description of history that distorts the truth, you should perhaps try learning some of the facts about said history. Such a habit will serve you well, I promise :)
Even the concept of a plea deal is morally atrocious in this and similar cases. The government threatens you with decades in prison in hopes of bullying you into pleading guilty even when you are not.
6 months in prison is extremely inappropriate for what he was accused of. Threatening far worse to compel someone to accept a sentence they don't deserve is a crime orders of magnitude worse than anything Swartz was accused of.
Now you're being dishonest. The 6-month sentence was a plea deal he declined. It was a 35-50 year sentence if he fought and lost all charges.
It remained to be seen if he were actually guilty.
> Carmen Ortiz, the federal prosecutor who hounded Aaron Swartz in the months before his Friday suicide, has released a statement arguing that "this office’s conduct was appropriate in bringing and handling this case." She says that she recognized that Swartz's crimes were not serious, and as a result she sought "an appropriate sentence that matched the alleged conduct – a sentence that we would recommend to the judge of six months in a low security setting."
> That's funny because the press release her office released in 2011 says that Swartz "faces up to 35 years in prison, to be followed by three years of supervised release, restitution, forfeiture and a fine of up to $1 million." And she apparently didn't think even that was enough, because last year her office piled on even more charges, for a theoretical maximum of more than 50 years in jail.
> Some have blithely said Aaron should just have taken a deal. This is callous. There was great practical risk to Aaron from pleading to any felony. Felons have trouble getting jobs, aren't allowed to vote (though that right may be restored) and cannot own firearms (though Aaron wasn't the type for that, anyway). More particularly, the court is not constrained to sentence as the government suggests. Rather, the probation department drafts an advisory sentencing report recommending a sentence based on the guidelines. The judge tends to rely heavily on that "neutral" report in sentencing. If Aaron pleaded to a misdemeanor, his potential sentence would be capped at one year, regardless of his guidelines calculation. However, if he plead guilty to a felony, he could have been sentenced to as many as 5 years, despite the government's agreement not to argue for more. Each additional conviction would increase the cap by 5 years, though the guidelines calculation would remain the same. No wonder he didn't want to plead to 13 felonies. Also, Aaron would have had to swear under oath that he committed a crime, something he did not actually believe.
The intimidation of the stacked charges led to Swartz's suicide.
The gov. was 100% responsible for his death, which is why I say they killed him: indirectly, but they still caused the anguish that made him feel like suicide was the only way out.
Sadly this strategy is used all too often on citizens: a plea deal is offered so they can say "look, they pleaded guilty, we were right"; otherwise you get the stick, life in jail.
Not if you believe in personal accountability. He made a choice, he faced the consequences; he made a choice again, he faced those consequences. If I kill a man and am given a very long sentence, then kill myself to avoid the wait, it's not the government's fault I died.
Ahh, "personal accountability" for the victims. It's the victim's fault the government threatened them with decades of imprisonment over a minor "crime".
Where is the personal accountability for the prosecutor?
There is a notion of reckless disregard for human life in the law. If you are throwing bricks off an overpass just for fun, you are still guilty of murder if you kill someone - because you have reckless disregard for human life even though you weren't trying to kill anyone.
Likewise, a prosecutor who threatens someone with decades in prison for freely distributing academic material displays a reckless disregard for human life. Even if the prosecutor didn't intend to provoke a suicide the prosecutor was doing something bad with a reckless disregard for human life and it got someone killed. That's murder, in my view.
Perhaps you should ask for accountability from the prosecutor/murderer rather than from her victim.
Yes, she is accountable for her actions of making clear the gravity of the matter and letting him have an easy out of 6 months in low security prison should he make the choice of his own free will to admit his trespassing and copyright infringement were illegal.
And he is accountable for his actions of refusing to do that, and furthermore refusing to participate in the matter whatsoever.
> Now you're being dishonest. The 6-month sentence was a plea deal he declined. It was a 35-50 year sentence if he fought and lost all charges
No, it was not a 35-50 year sentence. That's the theoretical sentence one could get for those same crimes if all the sentence enhancing factors that can apply do apply. Repeat offender, part of organized crime, massive monetary damage, drugs involved, things like that. Swartz didn't have any of those factors.
Here's an article on how DoJ press releases ridiculously exaggerate potential sentences [1].
If the prosecution had been able to prove everything they alleged and the judge had decided to make an example of Swartz, it might have been up to 7 years, but that is unlikely. Swartz's attorney said that if they had gone to trial and lost, he thought it was unlikely that Swartz would get any jail time.
> The 6-month sentence was a plea deal he declined.
Right, so he could have picked the option where he walked free in six months, but instead he picked the option where he's dead.
> Felons have trouble getting jobs
> aren't allowed to vote
> and cannot own firearms
He's not doing any of those now, so I do not see how his would-be restrictions are relevant. The following speculation is just that, and also irrelevant.
> Aaron would have had to swear under oath that he committed a crime, something he did not actually believe.
Ah yes, the "Sure I secretly entered the networking cabinet of an institution I have no formal relationship with, to install my personal equipment into it without their knowledge (much less consent) in order to exploit their access levels to fulfill personal objectives that would otherwise be impossible to me, then hopped around IP's as the institution I was stealing documents from blocked the IP I was stealing until the entire IP range of the institution whose access I was hijacking was blocked due to my hacking and hence all the researchers doing legitimate work at that institution could not access the resources they paid for, but how could I have known that was unlawful?? I fundamentally cannot accept any blame for my actions." defense. I don't buy it.
> Right, so he could have picked the option where he walked free in six months
You missed this part:
> However, if he plead guilty to a felony, he could have been sentenced to as many as 5 years, despite the government's agreement not to argue for more. Each additional conviction would increase the cap by 5 years, though the guidelines calculation would remain the same. No wonder he didn't want to plead to 13 felonies.
There was no guarantee it would only be a six-month sentence.
I'm actually in the "personal responsibility" camp myself, but the amount of overcharging they did to him was obscene. He wasn't facing the consequences of his actions-- he was about to get fucked by the Statutory Ape.
To put it in perspective...Ghislaine Maxwell got 20 years for being an accomplice to child sex trafficking. This is the same sentence given to lesser spies. Swartz committed trespassing, a bunch of copyright infringement, and caused a DoS? It's not cool, but it's not 2.5x worse than pimping children.
Again, that's all speculation. Sentences of this sort are almost always served concurrently, not back to back. Just because a sympathetic narrative on the matter wants to paint the picture of it being a hopeless situation does not make it so.
If he was willing to admit his trespassing, bunch of copyright infringement, and DoS was wrong he'd in all likelihood walk free within a year. But for whatever reason he was not able to do that, and here we are.
Personally, I'm glad we are in the world where trespassing into a property to hijack their network and cause outages of global services is not considered "minor". Obviously the situation as it played out here is an absolute tragedy, but the matter was indeed serious and was handled with an appropriate level of gravity. All the prosecution was looking for was a "my b, that was wrong" to prove their point - and he refused to give it.
As they say: if you can't do the time, don't do the crime.
The fact that he was unwilling to face the consequences of his actions (or even stand trial on the matter) does not make his death the fault of the government, no matter how tragic the situation was.
I would assume that once law enforcement has established facts were materially altered for the purpose of obtaining funding, the jump to wire fraud would be relatively trivial (given today's heavy reliance on email/internet to do just about anything).
Scientific fraud robs the community twice: the first time by wasting research funds on fraud, the second time because experiments with the appearance of success funnel more dollars to them, taking away funding from other areas that might look less sexy but might have yielded a breakthrough if they'd been pursued. Grant issuers should insist on some form of end-to-end third-party data custodianship to prevent tampering with data during analysis. Seems better than the many billions of $ being wasted on fraud and the subsequent missed opportunities.
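For what it's worth, a minimal sketch of what that custodianship could look like, assuming nothing fancier than hashing the raw file at collection time. The filename and the "receipt" are just illustrative; a real scheme would deposit the fingerprint with an independent custodian (funder, journal, escrow) before any analysis begins:

    import hashlib
    import json
    import time
    from pathlib import Path

    def fingerprint_dataset(path):
        # Hash the raw data file at collection time, before any analysis.
        digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        return {"file": path, "sha256": digest,
                "registered_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())}

    def verify_dataset(path, receipt):
        # Re-hash at review time; any edit to the "raw" data shows up here.
        return hashlib.sha256(Path(path).read_bytes()).hexdigest() == receipt["sha256"]

    receipt = fingerprint_dataset("raw_measurements.csv")  # hypothetical file
    # A real scheme would lodge this receipt with a third party, not keep it locally.
    Path("receipt.json").write_text(json.dumps(receipt, indent=2))
    print(verify_dataset("raw_measurements.csv", receipt))

It doesn't stop someone fabricating data at the point of collection, but it does make silent edits during analysis detectable after the fact.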
Not to mention all of the students who end up dropping out or having to take Masters instead of PhDs because their theses built on the fraud fail.
Just like we should place failed results (when due to wrong science, not bad skills) on an equal level with successful results, we should place failed thesis projects on equal level with successful thesis projects as having added to the general knowledge (again, when the failed projects demonstrate a falsified theory, not when they fail due to mistakes or inability on the part of the student).
Great comment. I would encourage all PhD students to bomb-proof their thesis topic and build it on as solid a foundation as possible. I chose perhaps a less flashy, relevant, and lucrative topic for this reason but I would rather finish than be staring down the barrel of a "my entire work has been based off a lie"...
When you have lots of people whose livelihoods depend on the gravy train, who can't be sure that what they are working on is fraudulent because they are so specialised, who would take that risk?
It's all about funding. And in the US basically all funding comes from the same source - government, military or corporations - what I think of as the governance system.
It’s very weird to me that science now equals vaccines and home-mandates, and it also equals advice given with little data. Science is way way bigger, and is an idea not a particular set of people or current beliefs. It also has brought untold value of taking us out of the dark ages but apparently that’s been normalized so much it’s no longer valued.
Dan Ariely and Francesca Gino were two of the most well-known behavioral economists. Hell, Ariely even published a board game about it (as well as a bunch of popular books). They've both been accused of data manipulation this year.
My friends in the field are worried that the whole field will be tainted. It grew out of a controversial idea - that despite what past models demand, people don't consistently behave in a rational way. If the two biggest practitioners of a new discipline are outed as frauds, what does that do to the reputation of the discipline as a whole? Will people be skeptical of any behavioral hypothesis that bucks tradition because Ariely was accused of fraud?
And they are far from the only ones: Diederik Stapel, Brian Wansink, the list goes on and on. (I'm not listing the many names, also at R1 universities, whose verdicts are still "in process.")
What NPR liked to call "replication crisis" was a combination of junk science and blatant fraud.
The whole field (slightly more broadly, of social psychology) is badly damaged for at least a generation.
Pete Judo has some consumer-friendly youtube videos on the topic.
So many problems in science could be alleviated if we changed the culture and processes to require replication. Your paper can only be accepted into a journal if an unrelated group can replicate the results with only the paper and the associated submitted info.
This doesn’t scale. Sometimes only a small number of labs have the equipment needed to run an experiment, let alone the training on the equipment to do what needs to be done. This equipment can cost millions. Even if the results are correct, experimental science is very finicky by its very nature. Getting an experiment to work can take months.
Fine, let's just do it on those that scale. Not all science is done in high-end, expensive labs, and this type of control will help the young researchers who are publishing their first works to gain more credibility and develop healthy habits.
Sadly, as long as the tyranny of publishing exists, the other researchers will always prefer working on their own things than replicating someone else's not yet published experiment.
I'd say if no one can afford to, or knows how to, replicate the thing you did, what point is there in it? And just because such an approach has downsides, it's not clear they outweigh the overall benefit: lessening the replication crisis and forcing scientists to provide enough information that their work can be replicated at all would be a huge win. Based on the experience of people like https://www.youtube.com/@AppliedScience, 9 out of 10 times there is some crucial information missing in the papers they are trying to replicate.
Because sometimes they can afford to build on your prior results, and advance the state of the art. Factoring in all of the malfeasance, what's the trade off between not publishing due to inability/unwillingness to replicate, and publishing bad results? We should know this tradeoff before significantly changing the status quo.
Agreed - having done a PhD in an experimental neuroscience lab, there's a significant number of things that nobody else in the world, even in my own lab, can do. I can train the techniques to others, and do, but this is separate from scientific discovery. There are no incentives for someone else to spend their time replicating my work using unbelievably challenging and expensive methods. It just wouldn't work in practice without a more fundamental restructuring of the whole enterprise (which is also necessary but hard).
Like what? I can think of extremely few domains that any decent research university isn't well equipped to competently replicate. One of the very few benefits of the tuition explosion - even undergrads get relatively casual access to equipment worth millions of dollars.
Of course they can't replicate things like ultra high energy particle research, but these sort of obscure things make up a very negligible chunk of all science produced, even if it's quite an important little chunk.
I guess you're not thinking hard enough. I estimate >90% of research in most technical fields would require 10s of years to replicate unless you are one of the few labs already working on the same topic.
For example, in one of the research areas I'm familiar with (optical communications), there are maybe 10 academic labs in Europe (and even fewer in the US) who have the equipment to reproduce some of our experiments. In our lab there is 1 PhD student who could pull off reproducing the more sophisticated experiments (because he is the one focusing on communications), and it took him 2 years to get to that stage.
This is a relatively easy area, i.e. the equipment is largely off the shelf and it's very applied with lots of industry involvement. There are plenty of published experiments which could only be done in 2 labs (both of them industrial), just due to the cost of the required equipment.
In other areas (e.g. with fabrication in the clean room) reproduction would require even more time investment.
Don't get me wrong, reproducing results is important, but what people don't realise is that it happens all the time when people adopt parts of published results into their own research. Mandatory reproduction of results would just create large overheads which would get us nowhere.
It's entirely possible there are fields with needs I am unaware of or not considering, but your response is not compelling and sounds like hand-waving. Exactly what equipment are you talking about?
I suspect you are likely grossly underestimating the available facilities at many research universities in the US. For instance, things like class 100 clean rooms are basic facilities. Many (and I want to say most) research universities also have partnerships with (if not ownership of) various specialized labs in the surrounding areas for more specific purposes. For instance, the NASA Jet Propulsion Lab is managed by Caltech.
Realtime oscilloscope, at least 4 channels, > 50 GHz bandwidth: $0.5M
(for some research you need >=12 channels, so multiply that number)
Arbitrary waveform generator, 4 channels, > 45 GHz bandwidth: $0.3M
(again, you might need more than four channels)
RF amplifiers, electro-optic components etc. easily cost $2000-$5000 each, and you need several (4-8 at least) of these. The RF cables and connectors/adapters easily cost several thousand dollars each.
Fibre components and subsystem components (e.g. a WSS, of which you will likely need 4 or so, at $50k each).
And I will certainly not let a student without training touch the sensitive high-speed RF equipment.
Regarding your comment on clean rooms: for many fabrication purposes class 100 is not sufficient (and calling it a basic facility is quite rich). The equipment inside is very expensive; LPCVD machines, e-beam and other lithography tools run into the tens of millions of dollars. Most universities I'm aware of charge fees (typically paid from grants) of tens of thousands of dollars per year to use the facilities (and those are the reduced rates for university staff). The training/certification on the equipment typically takes about a year.
Regarding JPL: yes, it's managed by Caltech, but what do you think NASA will say if Caltech professors ask for a student to use the facilities to verify some paper? Sure, let's delay the next Mars mission a year or so, to let some PhD students try stuff in the labs.
I think you seriously underestimate what the cost of using all that equipment is and how much training is involved to be allowed to use it.
You definitely don't want any
When I say a class 100 cleanroom is a 'basic' facility, I mean it's one you'll find at any decent research university in the US, and it is. If there's something that can be reasonably expected to be required for cutting edge research, you'll find it. As for lab access, my experience is in CS. I was granted access to a globally ranked supercomputing system paired with a large-scale audio-visual facility. The only requirements for access were to be whitelisted (involved in research) and registered. After that of course you needed to slot/reserve time, but it was otherwise freely and unconditionally available.
It's difficult to really explain how much money is spent in top US universities. It's as if there's a fear that revenues might manage to exceed costs. But one of the practical benefits of this is that bleeding edge hardware and supplies, at costs far greater than anything you've listed, is widely and readily available.
I think the cost and complexity of reproducing work is somewhat overestimated, as is the specific expertise of individual researchers, though maybe your field is exceptional in this regard.
Primary research, pioneering new techniques and equipment to explore the unknown, is time-consuming and costly and requires a lot of original thought and repeated failure until success is achieved. However, reproducing that work doesn't involve much of this. It's taking the developed methodology and repeating the original work. That may well involve expensive equipment and materials, and developing the technical expertise to use them, but that does not involve doing everything from scratch and should not take anything like as long or cost as much.
I also believe that we far too readily overestimate the specific special skills which PhD students and postdoctoral researchers possess. Their knowledge and skills could likely be transferred to others in fairly short order. This is done in industry routinely. A PhD student is learning to research from scratch; very little of their expertise will actually be unique, and the small bit that is unique is unlikely to be difficult for others to pick up. I know we don't like to think of researchers as replaceable cogs, but for the most part they are.
My background is life sciences, and some papers comprise years of work, particularly those involving clinical studies. However, the vast majority of research techniques are shared between labs, and most analytical equipment is off the shelf from vendors, even the very expensive stuff. Custom fabrication is common--we had our own workshop for custom mechanical and electronic parts--but most of that could have been handled by any contract fabricator given the drawings. And the really expensive equipment is often a shared departmental or institutional resource. Most of the work undertaken by most of the biological and medical research labs worldwide could be easily replicated by one of the others given the resources.
Depending upon the specific field, there are contract research organisations worldwide which could pick up a lot of this type of work. For life sciences, there are hundreds of CROs which could do this.
As one small bit of perspective. In my lab a PhD student worked on a problem (without success) for over a year. We gave it to a CRO and they had it done in a week. For less than £1000. The world is full of specialists who are extremely competent at doing work for other people, and they are often far more technically competent and efficient than academic researchers.
It's broadly true that most research is eventually subject to attempts at replication. However the replication isn't explicitly attempting to reproduce the exact prior research, but to build on it.
If the foundation established by the prior research is flawed, attempts to build on it will usually fail.
Some practical ideas: when allocating money for research, 30% or so could be kept back for replication. Public science is mostly government funded, and the government could allocate X% of the budget to replication work. It sounds like a solvable problem to me.
I agree in principle, but forcing institutions or governments to allocate money for this and enforce it is pretty much an unsolvable problem at this point IMO.
You can make the data say almost anything. IMO, this happens way more than anyone thinks. When you only have 2-5 people that _actually_ read a paper, and they have to slog through it, it's not too shocking that stuff like this could happen.
In my experimental design class in college, I remember the professor talking about the difficulties of dealing with data and what to include and what not to. He pointed to a case where a data point looks like an anomaly and possibly should be removed. He showed the math behind it: including it means the experiment doesn't show a positive result, and excluding it does. So which do you do? If including it means you don't get funding for you and your team, what do you decide? This, of course, led into the ethics portion of the course, and how easy it is to go down a bad path, because you can manipulate data to make it say what you want.
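To make that concrete, here's a toy sketch (all numbers invented for illustration) of how a single anomalous point can flip a result between "significant" and "nothing there":

    from scipy import stats

    # Invented measurements: most points sit a bit above zero,
    # one anomalous point sits well below it.
    measurements = [0.8, 1.1, 0.9, 1.3, 0.7, 1.0, 1.2, -3.5]

    with_outlier = stats.ttest_1samp(measurements, popmean=0.0)
    without_outlier = stats.ttest_1samp(measurements[:-1], popmean=0.0)

    print(f"including the anomaly: p = {with_outlier.pvalue:.3f}")    # well above 0.05
    print(f"excluding the anomaly: p = {without_outlier.pvalue:.6f}")  # far below 0.05

Whether dropping that point is legitimate depends entirely on why it's anomalous, which is exactly the judgment call the professor was getting at.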
You're describing a legitimately hard problem faced by honest researchers. In this case, it seems like we have enough evidence to suggest that we are not dealing with honest researchers, but rather with deliberate fraud.
That's not a hard problem for an honest researcher. Just explain the risk of that data point in the grant application, and if the funders decide not to take the risk, that's their prerogative.
Long while back, I had a big pile of numbers and I knew they could offer some meaning if only I could extract it. There's this whole discipline that advertises techniques for doing that, called "Statistics", so I looked there for lessons.
What I found was "How to throw away data that doesn't support your desired conclusions," for the most part. "Actuarial Science," a different field, had some useful techniques, but not many. They're most interested in ensuring the bad data doesn't get into the tables in the first place; but at least they are doing "data on data" comparisons and not "data to expectations".
We're building "AI" right now but think about the inputs those see: The very first step is to throw away the statistically too common "stop words" ...
> What I found was "How to throw away data that doesn't support your desired conclusions," for the most part.
What exactly are you referring to here? This seems like a wildly misguided characterization of statistics, which I am sure cannot be based in expertise or practical applied experience.
> We're building "AI" right now but think about the inputs those see: The very first step is to throw away the statistically too common "stop words"
This is a fundamental misunderstanding of what a "stopword" is and how it's used.
Words like "the" are hard to utilize within a bag-of-words model specifically. Removing them is not something people do/did because they are clueless monkeys. The goal is to improve the signal-to-noise ratio.
For example, traditionally spam filtering uses a very crude variety of bag-of-words model called "Naive Bayes", in which we assume (wrongly of course) that word choice is completely random, and that the only difference between spam and not spam is that random distribution of words. Are you really going to argue that the word "the" is critical to that process? If you can build a better NB spam filter by including stop words, by all means go ahead and do it. But both linguistics and decades of success in the field are against you.
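If it helps, here's a minimal sketch of that kind of pipeline using scikit-learn (toy corpus invented for illustration); the point is only that dropping stop words shrinks the bag-of-words vocabulary the Naive Bayes model has to estimate from, not that this is anyone's production spam filter:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Tiny invented corpus, just to show the mechanics.
    texts = ["win a free prize now", "free money claim your prize",        # spam
             "meeting notes for the project", "see the attached report"]   # ham
    labels = [1, 1, 0, 0]

    keep_stopwords = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, labels)
    drop_stopwords = make_pipeline(CountVectorizer(stop_words="english"),
                                   MultinomialNB()).fit(texts, labels)

    # Stop-word removal removes "the", "for", "your", etc. from the vocabulary.
    print(sorted(keep_stopwords.named_steps["countvectorizer"].vocabulary_))
    print(sorted(drop_stopwords.named_steps["countvectorizer"].vocabulary_))
    print(keep_stopwords.predict(["claim the free prize"]),
          drop_stopwords.predict(["claim the free prize"]))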
On the other hand, words with grammatical function like "the" are absolutely important and relevant to the overall structure and meaning of a document. Therefore, training pipelines for modern deep-learning-based LLMs like GPT don't remove stop words (as far as I know at least), because the whole idea of a stopword doesn't make sense in a model like that.
I want to be respectful here, but it sounds like you took a cursory look through three vast literatures, without the perspective of having actually used any of this stuff in real life, and drew some invalid conclusions.
You're entitled to your own opinion of course, but your conclusions appear to be based on beginner-level misunderstandings. That doesn't seem like a constructive or productive way to conduct oneself through life.
There are so many complications because of data fraud and the fear of perception of data fraud.
I'm the guy who builds the experiments on a team of user researchers. There are all sorts of things that seem intuitive to an outsider but are poo-pooed by practitioners as unethical. For instance, you might run a study that doesn't have enough participants to have a statistically significant conclusion. An outsider would deploy it to more participants to see if the trend becomes significant with more data. A trained researcher will cringe at that proposal.
So far as I can tell, researchers consider the experiment final as soon as you peek at the data. If you want any changes - more data, different demographics, etc - you have to throw out everything and start over. Even though it's logically interchangeable, the data you've already collected is considered spoiled, because they don't want allegations of tampering/data grooming.
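That cringe has a concrete statistical basis: if you keep peeking and stop as soon as p < 0.05, the false positive rate balloons well past the nominal 5%. A quick toy simulation (invented, pure-null data) shows the effect:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_experiments, max_n, looks = 2000, 200, range(20, 201, 20)

    fixed_hits = peeking_hits = 0
    for _ in range(n_experiments):
        data = rng.normal(0, 1, max_n)  # pure null: there is no real effect at all

        # Fixed design: a single test at the pre-planned sample size.
        fixed_hits += stats.ttest_1samp(data, 0).pvalue < 0.05

        # Peeking: test after every 20 participants, stop at the first p < 0.05.
        peeking_hits += any(stats.ttest_1samp(data[:n], 0).pvalue < 0.05 for n in looks)

    print("false positive rate, fixed n:", fixed_hits / n_experiments)    # close to 0.05
    print("false positive rate, peeking:", peeking_hits / n_experiments)  # several times higher

That's why the trained researcher treats peeked-at data as spoiled unless the interim looks were planned for (sequential designs exist, but they require correcting the thresholds up front).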
>"When you only have 2-5 people that _actually_ read a paper"
Even when a paper goes through a vigilant, rigorous peer review, it relies upon the data that the research team supplied. Over and over again cases like this have encountered manipulated data. Humans are flawed creatures, and if you spent a lot of time and professional credibility on a hypothesis, there is a strong motive to find what you sought. Doubly so if the results are salacious or contrarian in some way and thus get viral attention. Just convince yourself that it's the data that was wrong somehow and you know your assumptions are right so just this One Time you need to do a little manipulation.
A study I have seen cited on here countless times is the "honesty pledge" one by Ariely et al. It was the one that claimed that when a person signs a form at the beginning, they're more honest. It was complete and utter BS, based entirely on fabricated data. It joins an infamous list of studies that have had enormous influence (especially if they have an "aha!" factor -- if it is the sort of thing that Malcolm Gladwell would talk about, consider it suspect) but were the creation of someone making up data in Excel.
If I'm not mistaken, Ariely is asked about this on the "Armchair Expert" podcast. His claim was that he did not manipulate the data personally and it was someone further upstream whom he trusted. His point was that at some point, trust has to enter the process, except in the rare cases where a single person is doing all the research themselves (apologies if I'm misremembering this, but I think the following point still stands.)
IMO the researcher still has some responsibility because ultimately, it's their research. So the questions to me are:
1) How much due diligence is reasonable? Does it change depending on the source? For example, is it more/less reasonable to accept government-provided data at face value vs. data collected by an undergraduate?
2) What processes can be implemented to safeguard against data manipulation? I know there is a movement to provide data with peer-reviewed submittals, but it's still a low probability that a peer reviewer has the time or inclination to really dive into the data to assess the claims.
Ariely tried to frame both his assistant and the insurance company he got the data from. The insurers provided the same data to a journalist who found there had been very substantial alterations made. The assistant showed that the Excel metadata indicated Ariely was the last to edit it.
Yeah, I read the same digging a bit more after I made that original comment. It's much more damning than he made it sound in the interview, although it didn't say how substantial his edits were.
Very substantial, to the extent that the dataset was made up. The original data obtained from the insurer showed no effect.
The more interesting thing, though, is why he chose to investigate this question in the first place and why he chose to commit fraud to make it seem true. The hypothesis is a very weird one and there's no reason to think it would hold. Unless, that is, you think of people as child-like lumps of Playdough, so easily manipulated that trivialities like where exactly something appears on a form can yield huge behavioural differences.
That belief is the only reason you'd ever come up with such a hypothesis, and I think it's not really surprising that someone like that would engage in fraud. After all they have spent months (or years?) on trying to prove that people's levels of honesty are trivially controlled by psychologists like yourself. If you believe that's true then why wouldn't you commit fraud? After all you can easily manipulate people into not noticing it.
The manipulation seems substantial, but the point I was alluding to was that there wasn't a smoking gun (at least by the accounts that I've heard and read) that Ariely made those substantial changes. I'm not sure what was included in the Excel metadata, but it's at least conceivable that the copy/paste and random-change edits were done by someone else prior to Ariely's edits. Where it gets damning is that he was the one the original dataset was sent to and the last to edit it. At the very least, it shows a lack of due diligence in not catching that many more data points had somehow been added.
Just conjecture of course, but this came at a time when governmental "nudges" were very en vogue. I could see where successful research could be thought of as a pathway to influence, prestige, and money through government grants and appointments. And there were some highly regarded behavioral psychologists who were substantiating its effectiveness.
I think if the data goes insurer -> Ariely -> assistant, and both insurer and assistant present evidence that they didn't do it, then there is certainly a suspiciously large amount of smoke for there to be no fire. Short of CCTV footage showing him doing it, it's hard to get stronger evidence.
Yeah, governments love the idea that they can influence the population via simple tricks. That's understandable.
Unfortunately nudges are still very much en vogue. COVID was nothing but endless nudging, maybe more like pushing, with tricks like making everything into a social responsibility towards others being deployed endlessly even when not supported by the underlying facts. It worked extremely well. That said, I'm not sure you need psychologists to tell you that "do it for your grandmother" is a powerful manipulation tactic. A lot of the valid findings in psychology are obvious, and the non-obvious findings are often invalid. So we could just defund that field and not lose much IMHO. I say that as someone who has studied psychology. I have a good friend with a PhD in it who thinks the same.
Oh for sure. It would be very easy to determine if the data is bad (in most cases) if someone tells you where to look. It is almost impossible to expect people to put their mental resources to trying to "crack" every paper that comes across their desk tho... so we have a bit of a problem, to say the least.
Yep. I am actually a co-author of a paper that has been published, and while I was working on it, I realized how easy it would have been to make the data agree or disagree with the hypothesis. Based on the feedback of reviewers, you can see they don't dig into how you got your data; it seems to be accepted on "good faith."
I could have made a mistake, or I could have been malicious. I don't think they would have caught it because it would have involved hours and hours of work on their part.
With current complex data processing pipelines it is almost trivial to add e.g. a wrong sign to some variable to get "results" from data that doesn't contain any. I've had many "results" disappear after I found a bug. I could have probably gotten papers quite easily by ignoring the bug.
I'm quite certain a huge share of "results" are due to bugs. Probably many of my own too even though I stress about this constantly.
An intentional bug would be practically impossible to show to be intentional. With notebook/REPL style analysis there wouldn't necessarily even be any documentation of the bug. I'd wager it actually happens, and even surprisingly often. We only know of fabricators who are bad at fabrication.
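As a toy illustration of the point (invented data and a hypothetical baseline-correction step, not any specific pipeline), a single flipped sign can manufacture a spectacular "result" out of pure noise:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    baseline = 5.0
    # Pure noise: neither group has any real effect beyond the shared baseline.
    control = baseline + rng.normal(0, 1, 50)
    treatment = baseline + rng.normal(0, 1, 50)

    def correct_baseline(x, sign=-1.0):
        # Intended step: subtract the known baseline (sign = -1).
        # One flipped sign (sign = +1) silently adds it instead.
        return x + sign * baseline

    ok = stats.ttest_ind(correct_baseline(control), correct_baseline(treatment))
    buggy = stats.ttest_ind(correct_baseline(control), correct_baseline(treatment, sign=+1.0))

    print(f"correct pipeline: p = {ok.pvalue:.3f}")     # nothing there, as expected
    print(f"buggy pipeline:   p = {buggy.pvalue:.2e}")  # huge spurious "effect"

In a real analysis the buggy step is buried dozens of transformations deep, which is exactly why it's so hard to tell an honest bug from a convenient one.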
Absolutely, but most of this stuff happens outside of what shows up in the paper like coming up with excuses on why data points can be dropped or shopping around for different statistical methods that make things look the best. There can even be legit reasons for doing this kind of thing and that's what makes it hard to detect, it's basically just the honor system on whether you're "p-value hacking" or following common methods.
This is kind of why it annoys me a bit when I hear people harping on about trusting science; most science is not as simple as finding objective truths and just reporting them. That's not to say all science is BS and you're better off consulting a magic 8 ball, just that it should never be discouraged to look at methods and conclusions with a critical attitude. There is room for things to be fudged or pushed, and very strong incentives for people to do it given how much money and prestige are on the line. It doesn't even have to be as big as a drug trial; one high-profile publication can be enough to make a career, so you can see how tempting it can be to just change a couple of pixels in an image to boost a theory you earnestly believe is true.
"read" is a pretty loose term. I think its a house of cards. When you cite something, you do so (usually) because it supports your paper, basically "X did Y and we need Y to be true for the foundations of this paper." When you cite "X", you do so with the assumption that X did their due diligence and peer review would have caught any issues... but its still only an assumption. If you had to re-create every experiment for every paper you cite, I'm not sure if one would ever actually finish their own research.
I have only ever published one paper though, so take what I say with a grain of salt. It's just my experience.
With relatively new, relatively small citation counts, the numbers are probably indicative of the number of actual readers.
But well established, high citation number papers often take on a "shorthand" role. You'll often see them in introductory sections or other supporting text with statements like "previous authors have X", "common approaches such as Y", etc. Here they often have little to do with the core of the paper, they are providing context.
Now really people should have read them, but sometimes Jones et al. 1998 just becomes a collective shorthand for a set of ideas. As such people will quote it just because the papers they did read quoted it, etc.
Often, over time, a single paper becomes the landmark for a set of ideas, and just gets cited to pull those in by reference. In theory this is the paper that "invented" those ideas, but in reality it's more complicated. Overall it's not a terrible practice, as a way to frame things, but can be error prone.
If you read papers in the less reputable parts of science (public health, epidemiology...) you'll quickly notice citations going to papers that contradict the cited claim, or which are about something totally different, or that don't contain any support for the claim within them. It's common. Peer reviewers don't notice this stuff, journal editors don't either. They might scan read the paper but they aren't checking it adversarially.
There have been cases where an early citation had a typo in the title, journal name, date, or page number ... and then hundreds of later papers ditto the citation with the typo, indicating that they did not actually bother to get an actual copy of the paper.
Many destructive behaviours are rewarded in a system that is putting emphasis on the quantity and proposed impact of publications. Many scientific fields suffer from behaviours of individuals who are more interested in social status (or just economic security) than in seeking truth. Just like in overall society the humble truth seekers necessary for slowly chiseling out the firm ground of solid knowledge are in the minority. They are definitely disadvantaged in this system, and more likely to be driven out than those who lie, rush or cut corners.
At the modern complexity and importance of science, we need to put much more emphasis on checking whether it is done right. The original vision of the Royal Society involved rigorous peer review, to the point of demanding that scientists demonstrate their experiments in front of their peers: https://www.sciencemuseum.org.uk/objects-and-stories/17th-ce...
The curator of these literal peer reviews held the first paid position in science.
Journals do have the profits to hire or otherwise pay peer reviewers full time. They would become experts in their field in the process, and thus more suited for the task than reviewers in the current system, where even undergrads are asked to review for high-profile outlets like NeurIPS. Also, full-time rigorous peer reviewing would be an interesting career prospect for many current scientists. And here is your startup idea...
It's way way bigger than that. The whole world changed since those early days of modern science.
During those days, the science you were talking about was one of countless hobby clubs for aristocratic circles, where bored rich folk could dedicate themselves to one upping each other in the discovery game their peers and recent predecessors had invented. There were certainly cheaters then, too, but the whole thing was very insular and the only people who even cared about it were the people who were in on the game themselves. So excelling within the rules and elaborate demonstrations of the game were all part of the club sport.
400 years later, billions of people across five or six generations have been told that this game is the secret to human prosperity on earth and that the more its played, the more aligned we'll be with the truth of the universe, the longer we'll all live, the more leisure and luxury we'll all enjoy, etc
That's a whole different game! The demand for scientific output has gone from an exhibition sport shared among a small and snobby circle to a replacement for an eroded theology. Trillion dollar governments and globe-spanning trillion dollar industries now (ostensibly) make their billion dollar decisions based on each day's summary of the sport, and billions of people dreaming about prosperity or salvation await the next big game's result as though it were a demonstration of divine grace.
There are now so many games being played, by so many people, with so many bets and spectators, that incentives are unfathomable and referees are sparse and cheating is rampant.
Unfortunately, you're not going to clean all that up by telling some journals to reshuffle their revenue allocations.
The rot has set in relatively recently, but is getting exponentially worse. Probably still in the 1990s or even early 2000s (depending on the field), an academic could have had a fine career publishing (a good paper) every few years, or writing a (good) book or two over their whole career.
I did my PhD under a professor who was very respected and influential in his field. He published quite rarely, although he had piles of manuscripts that would have passed review but that he didn't find worthy. And he refused to have his name on his lab's papers if he didn't feel he had contributed enough (nowadays many, if not most, professors demand to be added as authors on papers they won't even read).
In my experience a lot of older professors are like this, likely because it wasn't yet about counting papers for them to get their tenures (other games were involved for sure).
I agree they should have refused. I hope I would in their situation.
> where bored rich folk could dedicate themselves to one upping each other in the discovery game their peers and recent predecessors had invented. There were certainly cheaters then, too, but the whole thing was very insular and the only people who even cared about it were the people who were in on the game themselves. So excelling within the rules and elaborate demonstrations of the game were all part of the club sport.
I see this meme about the early scientists a lot. Unfortunately, these first early scientists are not so easily categorized. I would encourage others to delve into the biographies of these progenitors. It is true, some were very much in the vein of this meme. But many were much more complicated individuals.
For example, Darwin is about as blue-blooded as it comes. Yet Origin of Species has a very long section at the beginning where Darwin painstakingly goes over all the scientists before him who had, in any small way, already discovered evolution.
Another good one is that while digging potatoes with his hands at the family farm in his native New Zealand, Rutherford got the news that he had been awarded a scholarship to study physics at Cambridge under William Thomson (Lord Kelvin).
Many other scientists came from very 'low' births. But science is a 'strong-chain' domain where only the 'correct' ideas survive. Anyone can, and did, contribute despite those obstacles.
Science is seen as the road to human (not just personal) prosperity by people educated in the last 100-ish years, which is mostly everybody now. That puts extreme pressure to perform and produce on what was once a pure little hobby sport whose spectators were almost all invested as players themselves.
Science and the advancement of technology is literally the only thing that has ever led to lasting prosperity for humans. All other periods of prosperity over the last ten thousand years have been temporary and highly localized. You can lose your golden age of prosperity through a single assassination. Only science breaks the wheel.
If you factor in all future humans in your utility calculations, then science is by far the #1 noblest pursuit humans can ever undertake. We're talking about scientific advancements potentially helping trillions of people before all is said and heat-deathed. Great works of art can also be enjoyed by all future humans, but science has a super-linear (if not actually exponential) growth curve where every advancement makes future advancements a little easier.
The question is whether the noble pursuit of "science" and the day-to-day activities of "being a scientist" have diverged or not. There is mounting evidence that the institution of science has been subverted to the extent that many people who are professional scientists are not actually contributing to the pursuit of science. Or, far worse, detracting from it, as we see here. It's one of the great tragedies of our era.
> Science and the advancement of technology is literally the only thing that has ever led to lasting prosperity for humans. All other periods of prosperity over the last ten thousand years have been temporary and highly localized. You can lose your golden age of prosperity through a single assassination. Only science breaks the wheel.
How exactly do you know this civilization will not crash too? (and take with it that highly localized habitat)
That was exactly my question when I read that quote too.
Everything is temporary on a large enough scale, and everyone within that golden era also thought it would last, and that belief is part of why it didn't.
Humans will never build anything that truly lasts for a very simple reason. The lessons learned get forgotten after 2 generations (3 at most). Stop and consider how the US fought for its freedom and how nowadays many people from the US would vote to have more limitations on speech.
> The lessons learned get forgotten after 2 generations (3 at most).
What are you talking about? Did we all forget the Pythagorean Theorem after 3 generations? Did we forget the force-multiplying effect of levers and pulleys a few generations after Archimedes died? How can you sit here with Wikipedia at your fingertips and tell me that the collective sum of humanity remembers nothing from over 100 years ago?
> everyone w/i that golden era also thought it would last and that belief is part of why it didn't
Yet from almost every collapsed golden era, scientific progress from that era made its way back to collective knowledge (sometimes very slowly, admittedly). People in this modern age have this extremely simplified view of what "collapse" actually looked like.
What's the difference between data and information?
data is data
information is data with context
The values and lessons learned in building something that truly lasts get forgotten over time, until the thing that was built degrades into a weaker form as the people involved stop valuing what gave it its strength.
Which civilization? The U.S.? That might slow science down, but it would not significantly reverse progress on it. Of course nations/kingdoms/empires/civilizations wax and wane, but you can't use the word "too" if you're talking about a total worldwide collapse of human civilization as a single event -- such a thing has never happened. If your point is that a Chicxulub or a worst-case Wyoming Supervolcano could outmatch the progress that science has made, then: sure, what's your point?
The vast majority of established science will outlive mere nations and petty politics. There are too many copies of Wikipedia, too many printed textbooks and encyclopedias, to lose a significant portion of established science. The possibility of worldwide humanity-destroying events does not disprove this at all. In fact, if we ever faced a humanity-destroying event, then it would be nothing less than our collective scientific progress that would have any chance of seeing us through it.
> I sincerely doubt that science is exploited to a similar degree as basically any others.
The world being shitty in other ways does not diminish the tragedy that we are only a fraction of how effective we could be at pursuing science.
> Can you back up your “many” with actual numbers as a percentage of investment?
"A 2011 analysis by researchers with pharmaceutical company Bayer found that, at most, a quarter of Bayer's in-house findings replicated the original results."
"In a 2012 paper, C. Glenn Begley, a biotech consultant working at Amgen, and Lee Ellis, a medical researcher at the University of Texas, found that only 11% of 53 pre-clinical cancer studies had replications that could confirm conclusions from the original studies"
"The ... paper examined the reproducibility rates and effect sizes by journal and discipline. Study replication rates were 23% for the Journal of Personality and Social Psychology, 48% for Journal of Experimental Psychology: Learning, Memory, and Cognition, and 38% for Psychological Science"
> I don’t really accept these ridiculous trumpisms. “Many people are saying”.
I didn't say "many people are saying", and comparing me to Trump is a much greater insult than things comments get flagged and removed for. Respond to what I actually said and avoid the (extreme) personal insults.
After seeing that academics still have to take a “two job” approach with serious research and fun research I’m not convinced of this. Not to mention the very protracted timelines to get to that stage of freedom.
That depends on whether you accept the risk of having to find another line of work. Depending on the place, you can do almost anything you want even at PhD level. I've been on a chain of grants, doing quite freely what I want, for over a decade. But I accept it may break at any point and then I'll just go do something else.
After the PhD there will be nobody telling you what to do, and often nobody even caring. But that may mean that you don't get your PhD, or you don't get another grant to live on. If you get tenure, it almost literally means that you can't be fired even if you do nothing at all. What is surprising is that almost everyone with tenure keeps running the rat race even though they don't really get anything material out of it.
(Nitpick: I think serious research is the fun one. The one churned to get another grant is neither serious nor fun.)
I am a postdoc, just have been for quite a while (on four different grants at least). In Finnish academia it's not that uncommon to stay a postdoc even until retirement.
> That’s not true. If you make the hiring cut in you’re in for about 5 years of grunt work as an assistant prof.
For teaching and admin yes. But at least in fields I know, what research you do or whether you do at all is all up to you. Of course the risk is that you'll be unemployed after the assistant prof. term ends. My point is that if you don't care about that, you're quite free to do whatever research-wise.
Maybe, but it's probably a relatively small circle that elevates someone's status by being in academia. (Not to say that isn't the circle whose opinions matter to them, though).
It's not necessarily even status, just feeding themselves: Publish or die, and you aren't publishing if your conclusions aren't positive and strong.
In medicine it's especially tough, as testing is among the most expensive and the number of available data points is quite low, so one ends up fighting over individual data points that make a difference. Did this person not follow the protocol (and I have a paper), or not?
I've seen something similar when working in agriculture: Early tests of new plant strains are low data, because a company will start by testing so many experimental plants that it'd be unaffordable to test them all very thoroughly. This makes people really argue about single data points in those early tests. Was this chunk of the field contaminated? Attacked by a wild animal? But there at least the interest in fraud is small. If a breeder gets a stinker through, it just goes into trials in the other hemisphere, where it's planted in an order of magnitude more fields, and therefore just leads to disappointment 6 months later.
With medicine, the cost of replication is so high, and often takes so many years, that it's not just that an honest mistake is catastrophic, but that the difference in outcomes for the person doing the curation is so high, one doesn't have to be all that dishonest to make biased decisions that will lead to a strong career.
Much of the problem is that wider science has taken too many lessons from economics.
Incentives matter a lot but creating incentives that work is almost impossible.
Science used to run on ethics (like many other professions). But we learned from economists that there is no such thing, only utility (money) maximizers.
Incentives that work are easy. Do research in corporate labs. That's all it takes. Now there is an incentive to do work that replicates (because it is intended to be used to build useful things), there are people responsible for detecting and resolving fraud (managers), they are incentivized to do so by a mix of carrots and sticks some of them legal.
Fact is, corporate R&D doesn't have this relentless problem with reproducibility. It's academic output that does, because academics only care about getting a paper published and don't expect that anyone will use their results. Often they don't even make their code or data available at all because it's not to their advantage for others to be able to replicate their work, as that would yield fewer papers. But this is all wrong. Science exists to be used in technology, not for its own sake.
There are corresponding responsibilities in academia. Perhaps even more so. But like a business culture can, and often does, rot the incentives and responsibilities, so has the academic culture.
I've worked for two corporate labs and have collaborated with several. There was less rigor if anything.
Corporate lab work is quite cozy and stable. In a corporate lab your job doesn't end if you don't (pretend to) get a major new discovery every year. They don't make anything available, often not even internally.
Hmm I've had the opposite experiences. Corp labs release cool and useful stuff all the time. On the front page right now is Seamless, a very useful thing released by a corporate lab. And corps have performance evaluation and management programmes designed to encourage high performance output. And of course the competent ones manage to bring research into production regularly.
Whereas with academics, you get a paper. You might get data and code, or might not, depending on field and temperament of the researchers. If you're really lucky that data/code might actually be correct, match the paper and be usable for something, but it really depends a lot on the field. You almost certainly won't get products.
> Also, full-time rigorous peer reviewing would be an interesting career prospect for many current scientists. And here is your startup idea..
I don’t know about this. (But I don’t have any answers for the question either).
If you’re a full time peer reviewer, first, are you really a peer? But more importantly, your motivations change. No longer are you looking to try to see if the paper is worthy of publishing or if it is sound; instead, your motivation is to push through as many papers as possible. When getting paid depends on approving papers, the quality will drop.
Maybe the problem is where the money exchange occurs. What about if authors paid to have their paper reviewed, instead of published? Currently, journals only get paid when a paper is published. What about if they got paid to review the paper at all? It would limit the paper submissions to Nature/Science/Cell, but you’d be paying for a high quality review (which often makes a paper better). You might even have luck with decoupling reviewers from journals completely… make the journals compete over the best (already) reviewed papers.
>instead, your motivation is to push through as many papers as possible.
I think this is an assumption that doesn't have to hold true in practice. Maybe it's biased by the 'publish or perish' paradigm that's pervaded academia, but there's no reason to replicate the same problem elsewhere.
Rigorous peer review, let alone paid review (which causes its own issues beyond money), is totally infeasible under the current publication pressure. It's hard to find reviewers even with the current lax criteria for both reviewers and the quality of their reviews (the latter is the more problematic).
The publication volume is just way, way too high. But researchers who don't publish multiple articles per year, regardless of whether they have anything of value to publish, perish. If you don't churn out a paper per year in your PhD, you don't get the PhD.
Much of the manuscripts that get submitted to journals are incredibly bad. Most are just bad. But as both editor and reviewer, I usually let them be published out of pity if the stuff isn't blatantly wrong; the poor PhD student's whole career is on the line.
Journals being full of crap is not SO bad within science because everybody knows they are full of crap. But if an "outsider" thinks that being published in a peer-reviewed journal, even a "good" journal, means that the article isn't crap, it can be literally life-or-death (like in this case).
Academia is broken because it's being run as a business whose purpose is to churn out papers. Welcome to neoliberalism.
Although the above comment might sound negative and harsh, it is a perfect distillation of modern research-oriented academic environments. (I was a moderately-successful professor in these environments. I woke up one day and simply couldn't do it anymore.)
Publication rate DURING your PhD is highly variable from field to field... I don't know what field you are in, but generally speaking, across my 3 fields students are required to have at least one publication from the entire PhD to graduate. Some even have none published by the time of graduation.
It is wildly different, true. In some areas, it is not uncommon to see PhD "thesis" that is essentially an intro stapled to 3-4 papers.
It's also true that publication rates post PhD vary wildly as well, e.g. expectations for a tenure packet.
However, it's also fair to say that expected publication rate has significantly grown universally. A generation or two ago, a solid career could be built on a handful of high impact papers. That's hard to imagine now.
That thesis format is the norm in Finland in sciency fields and also in some arts.
You can write a monograph (essentially a book) instead, but that's frowned upon, especially by admin, because the papers bring the university more money than monographs.
Nowadays a pile of inconsequential papers, to many of which you contributed little but your name, is almost a requirement for a solid career. It's horrible.
This does vary across countries, and fields. In UK I was a bit surprised that you're not expected to publish at all before post-doc.
In Finland usually at least 3 peer reviewed (first author) papers are required for a PhD in my field (cognitive science). In some fields (e.g. many engineering fields) even more. And PhD grants are typically for three to four years.
It is ridiculous and the paper quality is what you'd expect.
Feels to me the risk of discovery is deemed so low that it is cultural/ systemic. How many other cases will be uncovered?
But speaking to Science anonymously, four former members of Zlokovic’s lab say the anomalies the whistleblowers found are no accident. They describe a culture of intimidation, in which he regularly pushed them and others in the lab to adjust data. Two of them said he sometimes had people change lab notebooks after experiments were completed to ensure they only contained the desired results. “There were clear examples of him instructing people to manipulate data to fit the hypothesis,” one of the lab members says.
The incentives are clear to me, but punishment is less of a deterrent than good incentives, but which incentives would reduce this allegedly fraudulent behavior?
Peer (or any type of institutional) review needs to be implemented at the national level--same as the funding for the original research. Why would you pay for research and not check that it is correct? Congress needs to fund a new science agency that explicitly does this. I have suggested before that part of graduate training should be replicating select studies that are published (a national review board could select those that seem the most high impact). State-funded schools could take this on, and students would probably learn at least as much doing this as they do in their other studies.
Sure, and that's on the private funders to ensure they are getting what they pay for. Google pays for plenty of research--since they are the ones paying, it's their responsibility to ensure its accuracy to whatever degree satisfies them. Institutions like the FDA are supposed to regulate private research when it comes to market (i.e., pharmaceuticals and the like). Whether or not the FDA and related agencies are effective is a different, but just as important, question. Taxpayers desperately need a formal, funded system to verify the science they are paying for--particularly for biomedical research, where the incentives for fraud are so high.
I think that's reasonable for the "institutional review" aspect of the OP. But regarding the "peer review" aspect, I don't think it works, at least in the bulk of the current framework. Peer-review is typically performed by separate organizations, independent of the funding organization. To expect a national-level organization to essentially take over the duties of peer-review journals is a very big ask (not in small part because the current system benefits from free labor from the reviewers).
> To expect a national-level organization to essentially take over the duties of peer-review journals is a very big ask
Fair, but what is the alternative that would actually work? What is the budget of all of the journals compared to the NSF + NIH? Is medical research that is true, and therefore actionable, worth as much as an F-whatever fighter jet? People will have to decide.
The tradeoff is a different argument and a digression, so while it's likely we agree, I'll side-step it here. I do think it's an uphill battle to pursue a massive reappropriation of funds, though.
There are some alternatives that I'm aware of. Here's a few:
1) One is to allow journals to focus on less-than-great results. Right now the focus is on novelty, so there is an incentive to show that your work has some new, great outcome. But there's also value in showing "Hey, we thought this idea had legs but it turns out it didn't." Publishing that work should be part of science, but right now it's not. (As a side benefit, you could prevent a lot of researchers from wasting effort on the same idea simply because they weren't aware that other people had already tried, and failed.)
2) Journals can put a premium on sharing your data and code during the review process. Right now, it's often just up to the author and there are lots of veils to hide behind that essentially give the impression of sharing data, but not in a very useful way.
3) Give value to replicating work. Maybe not as much prestige as creating new work, but showing that it can be replicated obviously has value to society as a whole. Most of the time this won't get published, except in the cases where it's sensationalized, like fraud. (This effect is related to #1)
4) Journals can do a better job vetting their reviewers. They struggle to get timely reviews and reach to anyone who accepts the duty. Reviewers may agree to review something they have little background in, and as a result, it's easier to skirt bad articles through the system.
I don’t disagree with any of these points; it’s just that I’ve been involved in open science circles, where these things are always mentioned, and I just don’t see any material progress (maybe I’m not looking that closely, though). I think the reason for the lack of progress is mainly funding—so until someone gets serious about funding (billionaires or taxpayers), it just seems like the same merry-go-round. It’s very expensive to replicate biomedical studies—but it’s the only thing that works. Maybe the tide is turning, though, and simply incentivizing/protecting grad students to become whistleblowers will do more good, but I fear this case was more the exception than the rule.
I think the difference is between a ground-up or top-down approach. Maybe both are needed. My current stance is that while a top-down approach would work, there's very little chance of it happening. For one, government research funds have largely flat-lined in the last 20 years, and expecting them to take on more costs for managing peer review would likely exacerbate the problem. I also don't see the govt clamoring for additional administrative burden. I don't think replication has to be the only method (although I think it's probably the best). Opening the data to the public can do a lot to suss out bad practices or outright fraud, as we saw with the Ariely situation. The progress has been slow, for sure, but I think there is some. For example, there are now journals that specialize in publishing "non-surprising" results.
A friend did her PhD in climate science and the research involved weather station data. She said all of the data is messy, full of gaps and outliers, and the way the "science" solves it is to just get rid of reports that don't align with priors and use very basic functions to fill in missing data.
When people say "trust the science" it means making scientists and researcher High Priests of the Religion of Truth. Until every single experiment is pre-registered, all data is public and transparent, and all results are published, the entire experimental scientific establishment should be treated with massive skepticism.
A friend of mine used to be responsible for (among other things) water temperature measurements on naval ships.
Speaking loosely to make the point, they used to measure temperature from the front of the ship, then inexplicably (to him, anyway) changed the procedure to sample from behind the ship (where the water was warmer from engine exhaust/water output), leading to a sudden increase in water temperatures.
It’s absolutely how a dishonest science institution functions. Science itself is nothing more than a method. The disdain is for people who confuse the process with the institution. And at that point it is just blind faith like any old religion. Show me the preregistration, the conflict of interest disclosure and the third party replication and then I’ll “trust the science”.
PEOPLE are abusing bad incentives. I'm quite certain that if this lab was exposed for doing what the poster claims it would be a major controversy just like other cases we've seen. That's the institution correctly reacting to bad actors.
How is this relevant? Do these people define what science or is it an established set of principles and methods? You think scientists are the only ones who can do science and what they do, wrong or not, defines what science is??
How science functions is how it functions. You do not have knowledge of the entirety of scientific endeavors, it only seems like you do, presumably due to being trained on propaganda, like an LLM.
Maybe you are conflating it with how it is supposed to, and is claimed to function by people who share the same form of faith based epistemology as you.
Science is a set of methods. If you abandon those methods and fake data to fit your needs you aren't doing science you are committing various kinds of fraud. Maybe ChatGPT can explain that to you instead of listening to Alex Jones.
> Science is a set of methods. If you abandon those methods and fake data to fit your needs you aren't doing science you are committing various kinds of fraud.
You still have the title "scientist", and still get your paycheque. Like baking, there is the recipe one is supposed to follow, but there is also the how the baking is actually done. If a baker failed to follow the recipe in an instance of baking, would you also believe that they are not a baker, or are not baking?
I think it's interesting how people intuitively frame (construct a virtual model of reality, and perceive/present it as reality itself) the practice of science such that it "is"[1] literally impossible for scientists to do wrong, and with such a simplistic method: if it isn't perfect, it isn't science (which opens up a serious ontological problem: because it cannot be known to what degree each potential scientist executes the method with perfection, it is not possible to know how many scientists exist, or if a given candidate actually is a scientist...an individual could be one for decades, and then one off day and Shazam: you "are" no longer a scientist, despite having the title, the income, and the respect and admiration, despite not actually being the thing itself).
>Maybe ChatGPT can explain that to you instead of listening to Alex Jones.
What's the current scientific consensus on mind reading? Maybe it's not me who has to brush up on my scientific scriptures.
And since we're on the topic of who to take advice from: perhaps you should reevaluate the trustworthiness of that Oracle inside your mind, because it's "fact" here is way off: I do not listen to Alex Jones. Do you now wonder how many other facts your Oracle got wrong? My Oracle suspects not, but cannot be sure.
[1] here I am using the colloquial, normative meaning of the word "is": how humans believe "reality" "is".
You seem to be completely obsessed by titles for no apparent reason. I don't care what your title is, if you fake data to validate false hypotheses you aren't doing science. It's very simple.
>You still have the title "scientist", and still get your paycheque. Like baking, there is the recipe one is supposed to follow, but there is also the how the baking is actually done. If a baker failed to follow the recipe in an instance of baking, would you also believe that they are not a baker, or are not baking?
If you purchase a cake from Walmart and tell people you baked it from scratch you are not a baker. If you 3d print a cake look alike made of plastic and tell people it is a cake you are not a baker.
You seem to be in the midst of a mental break so good luck to you.
> You seem to be completely obsessed by titles for no apparent reason.
You seem to be an overconfident Naive Realist.
> I don't care what your title is, if you fake data to validate false hypotheses you aren't doing science. It's very simple.
I doubt it. You don't take the opinions of scientists more seriously than non-scientists? Shall I go through your comment history to find instances?
And this is the problem: "science" (which is composed at least in part of scientists) CANNOT make an error according to this reasoning.
>>You still have the title "scientist", and still get your paycheque. Like baking, there is the recipe one is supposed to follow, but there is also the how the baking is actually done. If a baker failed to follow the recipe in an instance of baking, would you also believe that they are not a baker, or are not baking?
> If you purchase a cake from Walmart and tell people you baked it from scratch you are not a baker. If you 3d print a cake look alike made of plastic and tell people it is a cake you are not a baker.
As the saying goes: Reality is perception (as demonstrated by your very comment!).
> You seem to be in the midst of a mental break so good luck to you.
Do you have any interest in whether the reality your mind generates and projects into the "you" service's experience (as "reality") is actually correct?
For example, take your prior comment:
>> Maybe ChatGPT can explain that to you instead of listening to Alex Jones.
By what means could you acquire knowledge of my interests? Feel free to peruse my comment history, you'll find no praise or likely even mention of Alex Jones (I think he's a dummy, though I do like him). And if you're going to suggest you have mind reading capabilities, I am happy to have that argument.
Could it be, perhaps, that an idea popped into your mind, and you accidentally forgot to apply any(!) epistemological rigour to it before streaming it out onto the page, like an LLM? I mean, come on man.
It's not called fraud in climatology and yes it is how "science" functions. There are no universal standards in academia. Climatological norms involve heavy processing of thermometer readings to "clean" them in various ways. They all do it and do not see any problems with what they do. Every claim you've ever read about temperatures from climatologists is based on that kind of procedure.
Some of this has good intentions, at least originally. Weather stations aren't normally intended to be used by climatologists. They exist for other reasons. So they get moved around, or not moved even as the environment changes around them, get placed in inconveniently unrepresentative places like airport runways, and more. Climatologists scrape this data from the internet or collect it from logbooks and then try to work out what's happening, but the data is super noisy.
Now the way science works is that you characterize the uncertainty in your data and propagate it through any calculations you do, in order to track your uncertainty intervals. Then you communicate those and take them into account when making predictions.
But in climatology they don't do this. Instead they use lots of algorithms and manual tweaks to try and "fix" the data to bring it into line with what they know it "should" be, and then report the data without CIs, as having 100% confidence. For example if a time series at a weather station is stable for 20 years, then experiences a short break, then it returns but the average is consistently 0.3 degrees different than before, they infer that it must have moved and they then "correct" it back to the previous baseline. If there are gaps in the data then they generate fake readings by interpolating between the nearest alternative weather stations, and so on.
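To make those two adjustments concrete, here is a minimal sketch of the idea as I understand it from the description above. This is my own illustration, not any climatology group's actual pipeline; the function names, the two-neighbour averaging, and the 0.3-degree step are assumptions for illustration only.

```python
# Minimal sketch of baseline "correction" after a suspected station move, and
# gap filling from neighbouring stations. Illustration only, not a real pipeline.
import numpy as np

def correct_breakpoint(series: np.ndarray, break_idx: int) -> np.ndarray:
    """Shift the post-break segment so its mean matches the pre-break mean."""
    offset = series[:break_idx].mean() - series[break_idx:].mean()
    adjusted = series.copy()
    adjusted[break_idx:] += offset
    return adjusted

def fill_gaps(series: np.ndarray, neighbour_a: np.ndarray,
              neighbour_b: np.ndarray) -> np.ndarray:
    """Replace missing readings (NaN) with the mean of two nearby stations."""
    filled = series.copy()
    gaps = np.isnan(filled)
    filled[gaps] = (neighbour_a[gaps] + neighbour_b[gaps]) / 2
    return filled

# A 0.3-degree step after month 240 is removed entirely, whether it came from
# a station move or from a real local change -- the algorithm cannot tell.
rng = np.random.default_rng(0)
temps = 10 + rng.normal(0, 0.5, 480)
temps[240:] += 0.3
adjusted = correct_breakpoint(temps, 240)
print(temps[240:].mean() - temps[:240].mean())        # roughly +0.3
print(adjusted[240:].mean() - adjusted[:240].mean())  # essentially 0
```

The point of the sketch is just that the adjustment erases the step regardless of its cause, and nothing in the output records how confident that decision was.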
Outsiders might expect that they would investigate and try to improve the quality of their source data but they don't. Like, if their algorithms infer a station move, they don't contact the station operator to figure out if that really happened. They just assume their corrections are fine and move on.
Another fun thing they do is alter data that was already published. When they update their algorithms for deciding what data points to include/drop/change, they don't just use it for new data running forwards. They reprocess the entire historical data set. That can yield outcomes that would normally be taken as a clear indicator of scientific fraud, for example where NOAA declared a temperature record, and a few years later declared a new record that was lower than the previous one [1]. Or where scientists invalidated decades of published papers (thousands of them) by deciding that the temperature trend in the first 15 years of the century was totally different to what had previously been reported:
The underlying data on which those papers were built was announced to be all wrong, but nothing was retracted! That's how science functions. And the best part is that they've trained the public so well that anyone who calls any of this fraud, as you just did, is instantly ostracised as a heretical Denier.
"Zlokovic is recognized internationally as a leader in the fields of AD and stroke research. Thomson Reuters and Clarivate Analytics listed Zlokovic as one of “The World’s Most Influential Scientific Minds” for 21 consecutive years (2002-2022) for ranking in 1 % of the most-cited authors in the field of neurosciences and behavioral sciences. He received [lots of awards.]"
I support the (probably unpopular) opinion that scientific fraudsters should face criminal consequences. People die because of this, and the authors of these schemes go and get awards for their "substantial contributions".
Stories like this make me a smidge more sympathetic towards people who were against coerced COVID vaccinations.
I.e., this supports the suspicion that policy makers who claim "the science" supports certain policies are drawing on bad information / compromised experts.
Note: I'm personally very pro-vaccine. I'm suggesting that we treat academic misconduct according to the harm it could cause.
Ditto. I took all the vaccinations, but the reality is that the testing of the vaccines was rushed. It was only in the real world that we discovered that they could cause blood clots and myocarditis.
The vaccinations were still a net benefit for COVID, but the unexpected side effects have seriously damaged people's trust in vaccines in general. We're going to pay the price for that for decades to come.
How sure are you of the rareness? What if you looked into the VAERS data and it told you adverse side effects were several orders of magnitude more common than any other approved vaccine? Might want to look that up.
You cannot see these kinds of very rare side effects in typical phase III trials. Those trials are already pretty large for vaccines, but even then they're simply not powerful enough to detect very rare events. Nothing would have changed here with the normal vaccination approval process, those side effects would not have been detected there either. That's why side effects are monitored after approval of the vaccines as well, because then you have the large numbers you need to detect very rare events.
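For a sense of scale, here's a back-of-envelope sketch. The 20,000-person treatment arm and the 1-in-100,000 event rate are assumed illustration values, not figures from any specific trial.

```python
# Why very rare side effects slip past phase III trials: a rough calculation.
from math import exp

trial_arm = 20_000          # participants receiving the treatment (assumed)
event_rate = 1 / 100_000    # assumed true rate of the rare side effect

# Probability of observing at least one event (Poisson approximation).
p_at_least_one = 1 - exp(-trial_arm * event_rate)
print(f"P(at least one event) ~ {p_at_least_one:.1%}")  # about 18%
# Even a single observed case could not be statistically tied to the
# treatment -- hence the reliance on post-approval monitoring.
```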
I think the anti-science sentiment was already there, and the vaccine work for COVID being rushed was just an excuse for people who had already decided that science was inconvenient to become full on outspoken anti-science people.
There should be stronger consequences for this behavior. It costs hundreds of millions of dollars and several careers to deal with the outcome of data fabrication. Meanwhile, the worst case scenario for the fraudster is losing their job.
Yeah, we have a societal problem in this regard; statistical murder is tough to quantify and thus is largely unpunishable. Coal power, for example, likely kills thousands of people a year in the USA, but none of it directly enough to count as a crime. https://www.theguardian.com/environment/2023/nov/23/coal-pow...
I think negative externalities should be priced in. But it's a very complicated problem.
With the above premise, where do you draw the line? For example, are you complicit if you use that electricity (or buy that cell phone made from unsavory practices)? From a legal standpoint, should you be charged because you materially benefitted from a criminal action?
You could start with claimed increase in life expectancy in the target population times actual deaths during the fraudulent research funding period times the usual value of a life.
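As a rough illustration of that formula, with every number below a hypothetical placeholder rather than an estimate for any real case:

```python
# Back-of-envelope "statistical damages": claimed gain x deaths x value per life-year.
claimed_gain_years = 0.5        # assumed claimed life-expectancy gain per patient
deaths_in_period = 100_000      # assumed deaths in the target population during funding
value_per_life_year = 100_000   # assumed dollar value of one life-year

statistical_damages = claimed_gain_years * deaths_in_period * value_per_life_year
print(f"${statistical_damages:,.0f}")  # $5,000,000,000
```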
But again, this is just a statistical argument, not actual damages. For a tort case, you generally need to show real damages. For a criminal case, I don't think a statistical argument would sway a jury beyond a reasonable doubt that academic fraud caused specific deaths. There's a reason why the burden of proof in our court systems is structured the way it is.
Besides, you don't even know that the unfunded research would have resulted in increased life expectancy because the research was never conducted. You're trying to prove a counter-factual where you don't have data. Additionally, there are probably far too many confounding factors in health science to make a strong statistical claim against a single action as you're insinuating.
In this case perverse incentives. Check out 'How not to study a disease: the story of Alzheimer’s' (by Karl Herrup) for a perspective on why this is not at all surprising.
That being said, I think it would be unfair to tarnish all of science with this brush though. There are many fields that don't suffer these problems nearly as much.
We shouldn't have any one scientist able to manipulate results like this
When it comes to trials where the results matter (i.e. for approving a medicine), the blinding process should blind the scientists running the trial.
There should be a third party who mixes up the placebo and real vials of treatment. The treatments should stay in sealed bags until used, and when used there should be two people in the room recording anything that happens.
For treatments involving "inject this drug", the scientists, investors, and anyone else with a horse in the race shouldn't even be present.
It's a bit sad to see people jump on the witch-hunt bandwagon every time something like this is published. That kind of reaction is an absolute disservice to the public. Even if this guy turns out to be a totally 100% evil fraudster who took millions of NIH funding to buy coke, b&*#!es, and fancy cars while knowing that his drug might kill dozens of people, it just demonstrates that the system for awarding such funds is deeply flawed.

Science is about fact finding; how are we even pretending that that's a remote possibility with a sample size of one? Studies need to be independently verified _multiple times over_ to actually establish them as facts... that's standard procedure in other fields. That we just abandon that scrutiny by blindly accepting anyone's claims because they wrote X number of papers on a topic is not on that guy at all.

And worse, nothing will change by burning him at the stake of public opinion. The next guy comes along and the story repeats, while science.org and the "whistleblowers" get their five minutes of fame (and click $$$). If we want things to change, we need to hold the institutions who award such grants and fast-track permissions accountable, instead of allowing them to get away with a cheap scapegoat.
"Medicine is broken. We like to imagine that it's based on evidence and the results of fair tests. In reality, those tests are often profoundly flawed." - Ben Goldacre
Antibiotics and the childhood vaccines are modern medicine and have had an enormous impact on modern health outcomes. The bigger issue is that we picked a lot of the low hanging fruit in the 20th century so now advances are harder to come by.
I'm astounded that this guy's the world #1 in his subfield by academia's success metric, citation count. (The table about 2/3rd of the way into the article).
It kinda makes sense to me. The people most likely to have a high citation count are those who value making publications. Those who treat publications as the primary goal are probably those most likely to take an "ends justify the means" approach to publishing. (That's not to say all those who value publication count are unethical, just that the unethical group is more likely a subset of those whose primary metric is publication count.)
Goodhart's Law says: "When the measure becomes the target, it ceases to be a good measure." When the target is "publication or citation count" rather than "meaningful research or the pursuit of truth" things can get weird.
There are three reasons this happens, and money is at the root of all of them.
- Publish-or-perish: You want to keep that professorship, you had better publish, frequently. Experiments that don't yield results don't count.
- Grant money: You want to get funding for your lab, your assistants and your students, you had better publish. Experiments that don't yield results don't count.
- Start-up culture: Just another permutation of "get rich quick", but you need a saleable product. Experiments that don't yield results don't count.
So there are massive incentives to fake results. "Fake it until you make it", only with actual lives at risk. Say hello to Elizabeth Holmes.
If it didn't come out of a physics lab or a hard engineering lab, don't trust the results at face value, and even if it did, there's plenty of reason for doubt. See LK-99 or Ranga Dias.
Seems like freshmen need to practice their academic dishonesty more to have these skills ready by career time! Maybe the academic integrity councils need to include professors' work.
I think corruption is a very poorly studied (and solved) topic. I'm not religious, but the Ten Commandments seem to hold more wisdom than their application in the real world would suggest.
I have seen peer review hinder actual research more than actually catch fraudulent papers. There is no incentive to call out bullshit in academia because you might not even pass peer review when you call out bullshit. Might as well work on another paper instead.
Meanwhile people doing actual work get scooped or ignored by people abusing the practice or just doing outright fraud.
This would require (or requires) a total overhaul of how current academia works. E.g., a PhD would be unattainable with those publication criteria. Tenure and funding criteria would have to be totally overhauled.
I'm all for the overhaul, of course. But it's not fixable solely through publication criteria. In fact, publishers benefit from poor quality. The publisher system makes no sense and is actively detrimental to science and to disseminating information.
From the inside, researchers/scientists see this as a non-issue. A few bad apples. From the outside, the public sees this as a systemic problem. Theranos, failed promises of covid vaccine efficacy, to mask or not mask, polyunsaturated fats are bad for you .. no wait, they're good for you.
Science has a PR problem and like it or not, science needs the support of society at large to succeed and move forward.
I think the hard problem is that if science were communicated directly, most of the population are either not intelligent enough to understand it or not intelligent enough to understand why it is the way it is.
Take masking for example, and here is a bomb most people are totally unaware of:
You want the population to wear masks so that the sick people are wearing masks. Masks aren't going to do much to prevent healthy people from getting sick. But they will do a lot to stop sick people from getting others sick.
This truth is almost never communicated though. Why? Because then wearing a mask becomes a "mark of illness" and nobody wants to label themselves as sick or potentially sick. So the solution to this is everyone wears a mask all the time. But you cannot communicate this whole scenario either because most people aren't smart enough to grasp it or not smart enough to understand why they need to wear a mask healthy or sick.
So this is the kind of problem science faces, and frankly I think the best thing to do is exactly what they did: Insinuate that wearing a mask protects you and insist everyone wear a mask.
> This truth is almost never communicated though. Why? Because then wearing a mask becomes a "mark of illness" and nobody wants to label themselves as sick or potentially sick. So the solution to this is everyone wears a mask all the time. But you cannot communicate this whole scenario either because most people aren't smart enough to grasp it or not smart enough to understand why they need to wear a mask healthy or sick.
I can't help but think this attitude is awful. Most people are too dumb to understand that it is the sick people who should wear masks!? Like what in the world!?
No. You shoot straight with the public. Then you plead with them to do what is best for everyone by wearing a mask when they are sick.
The alternative, as you suggested, is extremely damaging to society longterm.
This way of thinking falls apart as soon as the people who are capable of effectively communicating the truth (or at least, gesturing toward evidence that the dominant narrative is being falsified) reach those who are believed to be incapable of comprehending the truth (or at least, the evidence that suggests that the dominant narrative is being falsified).
> From the inside, researchers/scientists see this as a non-issue. A few bad apples.
What "inside" are you in?
I don't see this as a non-issue as a scientist. It's a massive problem. It means that when I read papers, I'm constantly extremely suspicious of the results.
As a scientist this how I always read papers. It's how I was trained. Don't just read the results, read the paper and critique it - was the method appropriate? were control tests performed? is the statistical analysis sound? And even beyond that - if a substantial claim is made, it's assumed to be an anomaly until reproduced by others.
I will say that labs get a reputation. Well known labs with PIs who produce good science are usually treated with a more accepting view (i.e. this is probably true). Unknown labs are treated with a ton of suspicion.
If it is widely perceived as a massive problem, why are there no moves to change the system? The intransigence shows there is no recognition of the problem, or the perverse incentives prevent change, which is the same problem at another level.
Upton Sinclair said, "It is difficult to get a man to understand something when his salary depends upon his not understanding it."
This applies at multiple levels of the publication game. Lab grants would likely dry up if the size of the problem were revealed to be massive. Publications would lose credibility, prestige, and money if the problem proved large. It can be recognized as a problem by those whose salaries don't depend on it, yet still remain unchanged, because those in the best position to change it have salaries that do.
I'm glad you think this is a big issue. I'd be curious to see polling on the level of trust scientists have in the integrity of the system. I doubt it's below 50% approval.
To be fair, 55-65% trust in the integrity of scientific research among scientists is still abysmal and indicates a huge problem.
If we are talking about the number of scientists who see flaws and hold skepticism, that's absolutely fine (I'd argue the number could be much higher and be quite healthy overall). But we are talking about integrity. Ethical research, lack of fraud, lack of bias, lack of perverse incentives, a system that produces high quality and trustworthy output by design, etc.
> From the inside, researchers/scientists see this as a non-issue.
This doesn’t negate your point, but I know someone who’s an "insider" in this particular scandal and sees it as a systemic problem. This researcher/scientist was very disturbed by this scandal after learning about it, and some of their colleagues feel similarly sickened. If you're a good person, it's pretty disturbing to learn that some of your peers would be willing to risk lives by running a trial based on intentionally faked data.
The best "researchers" are poor scientists and vice versa. I thought I had hitched my wagon to a star in grad school when I joined a lab with a high profile and big money-raising PI. After awhile I realized they did no research themselves and focused solely on hype and raising money.
When I found out a software bug had invalidated a year's worth of simulations they pressured me to present the false data as true at a large talk. That's when I realized that being honest and truthful with data was not the way to succeed in academia. I also unknowingly ended my career that day by not going along with my professor's fraud.
That being said, nothing surprises me anymore about academic dishonesty and research grant funding.
These days science is done as a "collaborative" effort, and there are tons of people gaming that: putting their names on papers where they didn't do much, churning out papers that don't really add much to science, etc.
In my collaboration you get to be an author on every paper (around 100 a year) just by doing any work in the experiment. I'm an author on around 1000 papers that I haven't even downloaded, much less read.
It's a thing for sure. It would be great in a post-scarcity society where everyone can be funded regardless of what they do, but people are still competitive. For example:
- There are still internal databases (which funding agencies can access), people fight over who gets on which paper there. There are people trying to game that system as much as they would try to game a public metric.
- There's a whole cottage industry of people in these collaborations who simulate the experiment, show that it could do something, and then publish a paper (with a shorter author list) about it. This is so popular that many of these people never get around to doing the real experiment.
Unfortunately I don't think we've broken free of Goodhart's Law so easily.
Unfortunately the systemic incentives support PIs that are basically hype salespeople. PIs gain prominence on metrics: number of dollars raised, number of PhDs mentored, number of post-docs on staff, number of papers landed, number of awards granted...
They are not supported on number of hours of research done-- they arguably put in that time when they were PhD candidates or post-docs themselves.
When I was a PhD candidate (couldn't handle the papers-or-die academic environment so I left) I really, really loved my PI. He had immense academic and intellectual humility. But even he had 10+ papers in-flight at the same time and rarely had time to do any work himself!
Also, reading between the lines there: why did a software bug exist for a year, and where was the checking? Researchers also do not care about any kind of software best practices, or about the value of software. So it is all quick and dirty hacks.
I've worked with "research" code and the lack of best practices is very true. Few to no tests. Little to no modularization or reuse. The same code copy-and-pasted all over the place. Reinventing the wheel poorly instead of using a 3rd party package/library. Dev environments that are difficult to reproduce since various pieces are either not checked in or in some external drive share. I won't even talk about the actual code itself.
In a related note of worst practices, the main research assistant (who was doing most of the lab's work) died unexpectedly and since no one was using any form of version control we couldn't find his code and data after his death. Which was doubly tragic because we were in the middle of a promising cancer drug trial and it sputtered out after the cell line testing because the original predictions and methodology were gone.
The solution from the PI was to have everyone buddy up and tell your partner where you kept code on your laptop.
Knowingly publishing wrong simulation data is stupid -- (imho, no blame should have gone to you, a student, even if you had gone along with it, but rather to your supervisors in that situation) -- because published simulations would always be reproducible, by definition, if you think about it.
Anyone trying to replicate the results will discover that the previous simulations are wrong, though will probably assume mistakes and not fraud by the original researchers. Of course it could take years for someone else to notice the problem (if ever), depending on how exciting were the claimed results and how easy to run are the simulations.
The incentives are wrong in the first place. People are judged based on number of papers published to prestigious journals, and demoted when they don't have quarterly positive results, and given promotions and financial boosts accordingly. People attempting to reproduce past results for verification get little to no reward. This incentive structure is in direct contradiction to honesty and real science.
I have not a single bad thing to say about Dr. Promislow, I never studied under him in graduate school and only worked loosely with him in undergraduate research. I also played a lot of music with him back in the day. If you are trying to sleuth my CV you are looking at the wrong school entirely.
> Several former lab members provided details of experimental data from Zlokovic’s lab that they say were falsified. These included experiments referenced in the whistleblower dossier. In some cases, they said, data points that would have invalidated the desired results were removed. “It was not real science. He already knew what he wanted to say” before the experiment was completed, one says. “I started hating science. … It made me sick.”
> Two of the insiders also say Zlokovic sometimes had his team improperly alter existing notebooks. Normally these notebooks—in which scientists record details of their work as it proceeds—provide a ground truth for an experiment’s methods and results. As a result, they’re also often central to misconduct investigations.
> But two of the former lab members say that after an experiment was completed and its results published, Zlokovic sometimes admonished his scientists to make sure the notebooks were “clean.” That was understood to mean pasting into them printouts of the published results and methodology or omitting contrary details that challenged the paper’s conclusions. Zlokovic explained that those changes were needed in case of an “audit,” according to the two scientists.
Those allegations are pretty damning if they are true. Falsifying lab notebooks is very obvious scientific misconduct, and there are no benign explanations for that.
Science (with a capital S) is the modern incarnation of the church. They have special knowledge, special privilege, special language that the average person can't understand (like the church had Latin). We are supposed to listen and trust lest we be damned.
This is a terrible hot take. Science is no more immune to human misbehavior than any other institution but it has self-correction as a core principle. This guy is going down due to it, not getting shuffled off to another diocese. Many people would like that process to be faster but it’s doing better than most other areas of our society because there are so many people who do prioritize truth over personal loyalty or institutional reputation.
Thoughtless comments like this are how I know I am right.
This is not a situation where a simple correction needs to be done. This guy caused damage and wasted limited time and resources. Sure, we can throw out his results, but science was still scammed. Losing your career for such egregious behavior is as limited a punishment as priests being shuffled off. This guy should be in jail at the very least.
Universities and funders should start suing these quacks, like how sponsors have sometimes gone after an athlete busted for doping. It's insane how consequence-free scientific fraud is.
Universities are the ones collecting the checks from the fraud, they have zero incentive to uncover anything for the same reason they never looked too hard at their athletes.
Wouldn't companies with frauds for CEOs avoid VCs that are diligent? Wouldn't that mean those bulldog VCs would get more (honest) money? I guess investors want to support the good ol' pump and dump rather than real new value creation.
I doubt this is the case. However they should deal with it before it ruins the reputation of the broader community - it would prevent comments like this one from gathering steam.
They can’t… it’s so endemic they can’t just fire half the labs and give up all that sweet grant money. The reputation hit would be ruinous. Universities would rather look away and pretend there is no problem because it’s too big to deal with.
This was downvoted for some reason but it is very clearly true that universities just can't go on firing sprees if they want to stay solvent.
Universities run a lot of even basic functions on grant money (university takes a cut in money and/or labor) and researchers and labs are the ones that apply for the grants.
Perhaps, but a sizable cohort of people who paid their hard-earned cash for a product and go on to recommend that others do the same is a much more reliable signal than an academic convincing their buddy in some government agency to allocate them more taxpayer dollars.
Number theory and the development of electricity were, and are, very much paid for from public funds, even dollars. For the wheel there obviously was no money at all, let alone dollars.
For most science there is no obvious commercialization at least in any predictable timeframes. When there is, it's typically called R&D.
A distinction should be made between public funds from specific agencies with specific engineering goals in mind (department of energy, department of national security, NIST, etc. etc. etc.), vs "science is good we need science, health is good we need health, tell your local bureaucrat buddy how you will use science to fix health and collect your paycheck" funding (NSF, NIH).
It depends on the field, but medicine has replication rates [1] that are so absurdly low that it's nearly impossible for there not to be shenanigans going on at a significant scale in the research:
---
'A 2012 research paper found that only 11% of 53 pre-clinical cancer studies had replications that could confirm conclusions from the original studies.[79] In late 2021, The Reproducibility Project: Cancer Biology examined 53 top papers about cancer published between 2010 and 2012 and showed that among studies that provided sufficient information to be redone, the effect sizes were 85% smaller on average than the original findings.[80][81] Another report estimated that almost half of randomized controlled trials contained flawed data (based on the analysis of anonymized individual participant data (IPD) from more than 150 trials).[83]'
---
There's also a simple pragmatic issue. When finding something is effective gives you billions of dollars in profits, and finding it ineffective gives you millions in losses, you have motivations beyond just the truth. This is where regulatory agencies are supposed to come into play, yet those agencies tend to be staffed (if not led) by people from the exact same companies they're supposed to be regulating.
Yes it is a systemic issue. A cursory search for academic research fraud will reveal this. Some fraudsters are very high profile. There are all kinds of perverse incentives; prestige, grant money, not wanting to waste time with ‘useless’ negative results.
The unofficial motto in academia: "cheat or perish"
But well... people get what they asked for, so everybody should be happy with the current situation.
If cheaters have the money, it also means that somebody else has zero money. It's a double loss. These people simply run over the other researchers and push them off the road.
If they don't start a company on the side or something, no. They get a relatively nice salary, but probably less than, e.g., many in-demand programmers.
It depends, the compensation is highly skewed. The PI in question is likely compensated over $1M/year, while other investigators involved (assistant and associate professors) have compensation comparable to entry-level programmers. To confirm these things, you can explore this database (only public CA universities though): https://www.sacbee.com/news/databases/state-pay/
$1M/year sounds insane. In Finland at least, top salaries for professors are about a tenth of that. And I don't think any of the professors I know make much more, in Europe or North America.
Admin people like the principal may get obscene salaries (like 4 times the max professor salary) though.
As an academic, I think academics (including me) are paid too much. We need people who are interested in the science, not those who (pretend to) do it for money.
> We need people who are interested in the science, not those who (pretend to) do it for money.
In the US, the problem is mostly the opposite. Lots of people would rather do science, but it's hard to choose using your skills for real science when some company will pay you 2x-10x to instead optimize ad clicks on their website or algorithmic trading or whatever.
In some of the humanities, liberal arts, social sciences etc academia may pay better than other options for those people. But for most STEM folks, it's a tradeoff between "low-paid meaningful work" vs "high-paid meaningless work".
Medicine and AI research may be the only two areas where people can simultaneously do cutting edge research and make high salaries.
Maybe a bit harsh, but I'm not sure science would get more trustworthy if we'd get more people who choose to optimize clicks just because it pays better.
It's really easy to take "shortcuts" in academia that get one a better salary, if that's what they want. As seen in this post.
It's rare for an academic to make over $1M in salary, but mid-high six figures is not uncommon for a highly published, well known name with a track record of winning large grants.
Universities are some of the wealthiest institutions on the planet, and they don't seem to have any issues getting a legal team together when it comes to other matters.
Enough universities, such as USC in the article, definitely have the funding to do this, however. Enough universities doing so could set a precedent and give researchers at smaller unis pause before behaving badly.
How do people who don't want anybody to "trust the experts" plan to live in a world without experts?
To be a bit less snarky: basic education in philosophy and science should make clear what disclaimers are implicitly assumed when trusting the experts.
A serious answer to your question, something I learned from interacting with a couple especially open flat-earthers once: they want everyone to "trust their senses" and "do their own research". In one sense you could say they want to destroy science as an institution, but in another you could say this kind of behavior is closer to the Platonic ideal of science: replicable experimental results that reiterate useful theories can be passed between people rather than just handing down insights from on high.
Their attitude is certainly commendable, and yet one cannot help but wonder how they could reach such spectacularly wrong conclusions when, in 2023, it is not so difficult to verify (in multiple ways) that the Earth is round (thanks to long-distance traveling, instant communication, and some geometry).
The crux is that human knowledge has progressed so much that "trust your senses" is simply not feasible anymore, as the majority of modern scientific discoveries cannot be verified with the resources available to normal people, and one would need years of study to actually "do their own research" in a productive manner (as I was discussing with the sibling comment, who is convinced that a few days of study are enough to judge neurology research).
People who are labeled as experts and who talk the loudest in society aren’t always capable of independent critical thought, so a lot of what they say is useless.
Nothing is stopping you from independently investigating a subject. You don’t need to know about phylogenetic trees and be a bio wizard to understand how neurons work and learning how neurons work would take you a day or two max. Pick a narrow but not too narrow scope, research it and make your own conclusions. I don’t know why anyone would ever blindly follow “experts” other than sheer laziness
It is always good to be skeptical of people who talk the loudest, regardless of their role in society.
> learning how neurons work would take you a day or two max.
I seriously question your definition of expertise. Harvard offers a bachelor's degree focused entirely on neuroscience, with at least two separate exams focused only on neurons. And that is barely enough to understand current research in neuroscience, since it usually builds up on literature that is too recent for a bachelor's.
If you took your high school and college education even somewhat seriously, you would have already learned how an action potential travels through a neuron, and you'd have a good enough grasp of science fundamentals to read any research paper. I learned it in my high school biology class along with a host of other things. Also, who says your so-called "experts" know those details either? I've met a physician (a recent grad) who had no idea what mitosis was, something I've known since I was like 12, and I develop software for a living. We're supposed to trust what these people say? Nah
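For concreteness, the textbook-level picture being invoked here (a neuron integrating its input and firing once it crosses a threshold) does fit in a few lines. This is only a minimal leaky integrate-and-fire sketch with arbitrary illustrative parameters, and it deliberately abstracts away the ionic machinery that actually produces an action potential:

```python
# Minimal leaky integrate-and-fire neuron: textbook-level picture only.
# All parameter values are arbitrary illustrative choices.
import numpy as np

dt = 0.1e-3            # time step (s)
tau_m = 20e-3          # membrane time constant (s)
v_rest = -70e-3        # resting potential (V)
v_thresh = -50e-3      # spike threshold (V)
v_reset = -65e-3       # reset potential after a spike (V)
r_m = 10e6             # membrane resistance (ohm)
i_inj = 2.5e-9         # constant injected current (A)

v = v_rest
spikes = []
for step in range(int(0.5 / dt)):           # simulate 0.5 s
    # Leaky integration of the injected current toward threshold.
    v += (-(v - v_rest) + r_m * i_inj) / tau_m * dt
    if v >= v_thresh:                        # fire and reset
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} spikes in 0.5 s")
```

Whether that level of model counts as knowing "how neurons work" is exactly what the replies below dispute.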
Got it: neuroscientists don't know about action potentials, and high school education is enough to appraise any research paper.
Between one Jira issue and the next, try to read some actual neuroscience preprints and come up with a few reasons why that piece of research is good, and a few why it is bad.
Take this one [1], for example (Cerebrovascular disease drives Alzheimer plasma biomarker concentrations in adults with Down syndrome):
> Main Outcomes and Measures: We examined the bivariate relationships of WMH, Aβ42/Aβ40, p-tau217, and GFAP with age-residualized NfL across AD diagnostic groups.
Are these biomarkers specific enough? Did they miss any? Why did they limit the investigation to bivariate relationships?
> We [...] examined whether 1) GFAP mediates the relationship between WMH volume and p-tau217 concentration, 2) whether p-tau217 concentration mediates the relationship between WMH volume and NfL concentration, and 3) whether p-tau217 concentration mediates the relationship between GFAP and NfL concentration.
Why did they test these three hypotheses? Did they miss anything interesting? Why did they choose that specific method for mediation analysis? What limitations does it have? Are there alternatives?
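To make the mediation question concrete: one common way to frame it is a Baron-Kenny style product-of-coefficients check built from a few ordinary regressions. The sketch below is only an illustration of that idea on simulated data, not the authors' pipeline; the variable names (wmh, gfap, ptau217) and the synthetic relationships are assumptions for the example.

```python
# Toy Baron-Kenny style mediation check on simulated data.
# NOT the paper's method; variable names and effect sizes are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
wmh = rng.normal(size=n)                                 # "exposure"
gfap = 0.5 * wmh + rng.normal(size=n)                    # candidate mediator
ptau217 = 0.4 * gfap + 0.1 * wmh + rng.normal(size=n)    # "outcome"
df = pd.DataFrame({"wmh": wmh, "gfap": gfap, "ptau217": ptau217})

total = smf.ols("ptau217 ~ wmh", data=df).fit()          # total effect
a_path = smf.ols("gfap ~ wmh", data=df).fit()            # exposure -> mediator
b_path = smf.ols("ptau217 ~ wmh + gfap", data=df).fit()  # mediator + direct effect

indirect = a_path.params["wmh"] * b_path.params["gfap"]
print("total effect:   ", total.params["wmh"])
print("direct effect:  ", b_path.params["wmh"])
print("indirect effect:", indirect)
```

Even this simple version makes the reviewer's questions obvious: the indirect effect is just a product of regression coefficients, so it inherits every confounding and model-specification problem of the underlying regressions, which is exactly the kind of limitation one would want the paper to discuss.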
> Two specific percentile thresholds were computed: [...]. These thresholds initialized a Gaussian mixture model (GMM) and expectation-maximization algorithm within the white matter segment of the FLAIR images, using two components to represent hyperintense and non-hyperintense voxels.
What do you think of their method to quantify white matter hyper-intensity from the MRI scans? What percentiles did they use, and how sensitive is the analysis to these choices? Is a gaussian mixture model appropriate?
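For readers unfamiliar with the technique the quote describes: the idea is to fit a two-component Gaussian mixture to white-matter voxel intensities, seed the component means from percentile thresholds, let EM refine them, and call voxels in the brighter component "hyperintense". The sketch below is a toy version on synthetic intensities, not the paper's code; the percentile choices (50th and 99th) and the fake FLAIR values are assumptions for illustration.

```python
# Toy two-component GMM thresholding of synthetic "FLAIR" intensities.
# Percentile choices and the simulated data are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Mostly normal-appearing white matter plus a small bright (hyperintense) tail.
normal = rng.normal(loc=100.0, scale=10.0, size=9_500)
lesion = rng.normal(loc=160.0, scale=15.0, size=500)
intensities = np.concatenate([normal, lesion]).reshape(-1, 1)

# Initialize the component means from two percentile thresholds,
# then let EM refine them (roughly what the quoted passage describes).
init_means = np.percentile(intensities, [50, 99]).reshape(-1, 1)
gmm = GaussianMixture(n_components=2, means_init=init_means, random_state=0)
gmm.fit(intensities)

# Voxels assigned to the brighter component are labeled "hyperintense".
labels = gmm.predict(intensities)
bright = int(np.argmax(gmm.means_.ravel()))
print("estimated hyperintense fraction:", np.mean(labels == bright))
```

Re-running this with different initial percentiles is one cheap way to probe how sensitive the hyperintensity estimate is to those choices, which is the question raised above.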
If you were a peer reviewer, would you think that this paper is ready to be published? What feedback would you give to the authors? What is the significance of these result, and what future research do they support?
Big words don't scare me. If I cared about research related to people with Down syndrome, then I'd read it and understand it thoroughly, but I don't. Critical thinking skills are largely innate; it doesn't matter if you have a PhD or a high school degree. Credentials say nothing about your intellectual curiosity or your motivation to learn new things
On the contrary, critical thinking can be developed, but it will not save you from ignorance. It takes much longer than two days of reading to be able to thoroughly understand research like the one above, I really suggest you give it a try.
Not sure why you started rambling about credentials now, but I broadly agree with your point. However, again, all I have been arguing since the beginning is that it takes a lot of time and effort to reach expertise in a topic. I really don't see what is so controversial about this. Or do you really think that an average person can learn software engineering and criticize your work in only two days?
For the typical reader of HN, phylogenetic trees would be far easier to figure out in a few days than how neurons work, especially as "how neurons work" isn't even established.
Yes, I'm sure you could know exactly how the well-studied synaptophysin, synaptobrevin, neuroligin and PSD95 cooperate to control neurons in a day or two. Particularly in the context of schizophrenia.
Easy as pie.
Edit: Here's a freely available relatively recent article on just one aspect of neurons, the synaptic proteome, which contains about 1,466 proteins [0].
I will check back in a few days once you've figured out how they all work together.
If I was interested in this paper, I would read it and understand it like I do with all the bio papers I read…all research papers do is narrow in on something highly specific, they aren’t hard to understand once you have the vocabulary down. It’s like reading someone else’s code. And I highly doubt your so called trusted “experts” like Fauci understand any of it
Understanding what a paper says is considerably easier than understanding its limitations. See the questions I asked in the sibling thread for a taste of the difference.
Or think in terms of code: it is easy to understand what some code does, but to understand why it is like that you need to know a lot more about the context surrounding the problem.
It’s bad. It’s so bad we’re at the “coin flip is about as reliable” stage of science. In some soft sciences almost 75% of studies cannot be replicated. AKA are false.
We can’t tell if salt raises blood pressure in humans but there are legions of people here that will state with all the conviction of a Jesuit priest that climate models can predict climate 100 years into the future - all without evidence, or replication, or open data, or falsification, and if you don’t agree with the consensus you will never be published. Literally censoring contrary views. That’s some serious “just shut up and believe me!”.
The certainty that I see based on very uncertain experiments is seriously suspect. I can only hope the structure of scientific revolutions puts this down in the next generation, this one seems hopelessly lost.
But yeah, that’s “science”. And if you don’t believe that just wait for the non scientific personal attacks that are replies to this comment.
"Trust the experts" is always "trust my experts". If you ask them to trust other people with the same (or similar) degree, but a different opinion, they don't. It's just an appeal to authority rather than a real philosophy.
Honestly? These kinds of stories should reinforce the general idea. Trust has to be seen as a long-term game, though, and it requires well-run checks to keep things trustworthy.
So if you view trust only as a short-term "the answer is always right" guarantee, then that will fail. But you should not have a sole source here.
And this style of system is no different from authority anywhere else. You should be able to trust your local religious/community leader. But you should also not have one who is an absolute authority on their own authority.
These "few bad apples" don't operate in a vacuum, they are actively destroying the credibility of their fields and science at large. They are wasting millions of taxpayer dollars, as well as private funding. They are diverting the attention of scientists for years or decades, pursuing red herrings.
You don't get to just point your finger at the other side and say "nah uh, they're doing WAY MORE bad things!". There's a serious problem in science right now and it has to be fixed by those "vast majority of experts" remaining in the field. What self-respecting person will want to go into research after seeing the amount of fraud taking place?