I recently had the privilege of trying to do the right thing when I identified fraudulent research carried out by an institution in Austria. The institution's initial response was positive, but when I pressed for further details on how such things could happen, suddenly nobody would take my calls anymore. The research was paid for by a private company to pimp a nonsense product. It was never published in a research journal, but that didn't stop the company from using the university's name and excerpts from the paper in marketing material, alongside gushing claims of "proved by science".
The company threatened to sue me and the university threatened me as well. Neither has followed through on its threats. The company wants to keep selling their rubbish magnetic health ding ding, and I assume the university wants nobody to look into how positive results for the product came out of their institution. All round, an education in how the real world works.
It seems well intentioned, but I'm not surprised at all by the outcome. I don't think you can expect the institution to implicate themselves like that. You have to realize that preventing legal and financial liability is the #1 priority for institutions. Unfortunately this includes when they are wrong. Even if the employees are "good", their lawyers will advise them against providing any details like that, and most are afraid of retaliation. Anyone asking questions like that is the enemy to them. This type of behavior is bad for society but fully expected given the incentives and the consequences of telling the truth.
I think this is one of the hurdles human civilisation has to iterate over en masse and long-term. Game theory is distorted by a mixture of power dynamics and the abstract nature of "society" - i.e. determining risk v. reward (professional, personal, financial, etc.) comes down to the individual person or organisation, whereas "society" as a concept doesn't get a say in each transaction as it occurs, so a lot of discussion gets side-tracked there. Without a gestalt consciousness, such instances of suboptimal outcomes will continue to occur; and since gestalt consciousness is technologically a remote prospect, present-day social hierarchies are at best a gross approximation of the ideal "average path" - "surfing on the bell curve of understanding" - balancing individual freedom and satisfaction against fear of repercussion and ridicule, with accuracy and efficiency (at a social scale).
I do think that modern technology can revamp antiquated systems of peer review and appeals to authority with more robust and deterministic / objective measures of quality - an evolving consensus repository of the sum of human knowledge where each pull request is a debate, each experiment a commit to a branch that may become a new foundation or a footnote. Yet we still see fit to define knowledge as property, ownership as power, power as success. Fear of the impression of weakness, or the threat of starvation and poverty seems to override all other priorities and for the most part, we toe the line.
Eh, we do have a basic "gestalt consciousness" (I prefer "hive mind") in the form of a legislature, which has the power to modify "the rules" so the market and individuals react. Recently, though, it's been the mind of a child who often throws tantrums.
I'm not so sure that a legislature is an appropriate equivalent. There are far too few people making laws who understand the things they make laws about.
So what should be done? Hire your own lawyers to go after them pro bono?
Write an article and send it to the papers?
This kind of thing needs to be shamed and punished. While I agree that institutions can be expected to cover their ass, there must be a course of action.
> While I agree that institutions can be expected to cover their ass
Should we have that expectation? If a person does it we call it concealing evidence, intimidation, and perverting the course of justice. On the other hand, if they confess right away, we treat them leniently. Why should institutions differ?
Because institutions are not individuals. They do not possess a conscience, morality, or any other human emotion that makes leniency possible and produces favourable outcomes.
You don't need to consider conscience and morality to implement harsher punishments for hiding evidence or not making things right after being informed, and more lenient punishments when the individual or organization has proactively attempted to remedy the situation and has taken action to prevent it from happening again. It's all about making sure that the incentives are aligned with good behavior, even after initial bad behavior.
> the whole doesn't necessarily have the properties of its constituent parts.
This is because the "constituent parts" - members of the organization - are often just terrible human beings that hide behind the concept of an institution to shield themselves from the cognitive dissonance caused by their own unethical actions.
If an institution does something unethical it can always be traced to its employees.
As an example, all the researchers involved in promoting this fake health product ought to be fired and academically blackballed. It really is that simple. That's how we can deal with institutional rot.
The difference is in the parameters. If immoral activity suddenly isn't immoral, or rather is "okay", when executed by an "institution", how can a (so-called) "terror group" be judged differently? Why are they responsible for the institutional actions while other institutions (or the people behind them) are not?
Tip off a newspaper. If they don't take it, try another. Generally, the journalists will have training in how to approach it without legal exposure and/or have it vetted by their publication's lawyers.
Money paid for students' education should be conditioned on not being skeevy and unethical. Basically, just arrange the penalties such that outright fraud is effectively the death penalty for the institution and lesser malfeasance is still financially disastrous.
Wait for one really good case and let the papers instead discuss how a century old institution collapsed.
We just need a 'Yelp for research institutions'. Perhaps there already exists a global register tracking reputation like this? If so, it needs to be visible (particularly to the press) and properly vetted to have real consequences.
> I don't think you can expect the institution to implicate themselves like that.
Um, what? It's basic ethical behavior. What has the world come to that someone can defend the unethical, unprincipled behavior of an educational institution because "preventing legal and financial liabilities is the #1 priority for institutions ...", with the implication that therefore unethical behavior should be expected?
> I don't think you can expect the institution to implicate themselves like that.
Admitting your mistakes is a strength, not a weakness. We should not expect others to self-censor their shortcomings, because if we do, we live in a fake world already. I mean, it happens, but whatever happened to honesty, the greater good, long-term goals? Your take favours short-sighted, short-term benefits which will bite them in the ass in the long run (like a can of worms).
That's a good point, unfortunately. Though it feels like something which especially applies in Common Law.
Jumping through hoops to construct some kind of alternative theory and make it sound plausible (which often seems like a bad joke, and insulting to the victim as well) should be honored less than admitting to a crime, showing remorse, and demonstrating you learned from your mistake.
Admitting your deed, showing remorse, and demonstrating you learned from your mistake is important for the victim (and/or their peers), as well as for society in general. We as a society should reward such behavior, but indeed more often than not our legal systems fail here. However, at some point the evidence is so overwhelming that the theater described above is only harmful. How can we lower the incentive for such theater?
> I don't think you can expect the institution to implicate themselves like that.
A) Why not?
B) If not, then you can and should expect the people within the institution to implicate it: Everything is a decision, even the decision not to make a decision. For them not to make the decision to become a whistleblower is for them to make the decision not to become a whistleblower -- i.e., to condone research fraud. If you really, really want to insist that institutions can't be black-balled, then we'll just have to start black-balling the people within them. (If the world starts to do this diligently and systematically, I think you'll rather quickly see a lot of people denouncing this kind of shit at their institutions, probably just as swiftly followed by a developing consensus that maybe you should expect institutions to police themselves.)
It's pretty easy to set things up to overcome this hurdle: say that either the institute handles your enquiry in a scientifically honourable manner, or you will contact one of the high-profile muckrakers.
The most entertainingly written such blog to my taste is Leonid Schneider's:
> I don't think you can expect the institution to implicate themselves like that.
Institutions should have nothing to fear if they've followed due process, and shouldn't be in a position of being implicated in dodgy science to begin with!
I have a personal philosophy on most health-oriented fads: if someone is actively trying to sell something, it's snake oil. So many people are looking for everlasting life that they're okay with being swindled in the pursuit of it, naive enough to believe that people are honest when promoting products they were paid to promote.
You can pair it with: if it works, it's controlled. Meaning you can't get it without prescription and have to go through the tortures of a health "care" system.
I would take a pharmaceutical/drug-related test (administered through the state), and it could be comprehensive, if it let me renew my long-term prescriptions.
The pharmacist wants to see a blood panel before refilling, fine, I should be able to order one.
(Only for long-term medications (blood pressure, diabetes, and most psychiatric drugs, within reason), and never antibiotics.)
While I agree with the sentiment, anyone on long-term medication needs regular monitoring by a doctor anyway. If you are only 25 this won't make sense, but by the time you get to 50 you need regular checkups for lots of things that are best caught early. I've lost enough family to colon cancer (the spouse of a second cousin - can I even claim him as family?), and several others are only alive now because their cancer was caught in time. Then there are heart problems, which again are best treated before the heart attack (if possible). I just named the two most common killers of old people, which you can be sure are in your family and coming for you in the future (I know of exceptions - genetic disorders that will kill at 50 or so), but regular doctor's visits can hold them off.
Right now American law states you need to see a doctor once a year in order to procure medications.
We all know how many times patients are brought in just to get that script. They (MDs) love the six-week interval. It pays for the lifestyle.
I'm lucky if my blood pressure is checked.
And the colon scan is something that might be mentioned while you're walking out the door.
My point is, if you have a conscientious doctor and good insurance, by all means take advantage of it.
Many Americans don't have good insurance, but need medication.
We shouldn't have to come in for pricey office visits (basically a stare-down or, if he's peppy, some dubious advice) just to get a medication we have been on for years.
I think this is a fabulous idea and this kind of sensible approach to regulation could be applied in many different areas, increasing personal freedom without excessively increasing risk and third-party costs. For example, in finance, maybe the default could be that only fixed rate mortgages can be sold, but if you take and pass a rigorous exam and indemnify the government from any claims if you end up losing your house, then you can be sold an adjustable rate or interest-only mortgage. Maybe you could have default speed limits as we do now, but if you take an exam +/- agree to telemetry in your vehicle, you can go faster than the speed limits as long as it’s safe.
Particularly for my low-income patients who end up coming into the ER for a refill (or a complication of having run out of their meds), I’ll often write a 3-month supply and 3 refills, giving them one year of access to their meds without needing a doctor. I still strongly encourage them to get a primary doctor and see that doctor in order to get help with their medical conditions, but the idea that we should effectively coerce patients into seeing a doctor by withholding life-saving medications (“it’s for their own good!!!”) seems grossly unethical to me, and doesn’t take into account the problems poorer people have in getting good affordable healthcare. But I’m pretty sure my position is an outlier among emergency medicine docs.
I think there actually is something like that in finance — I can’t just walk into my bank and do some very risky trading with high leverage without taking some exam.
I’m hazy on the details; I’ve just developed an app for a bank that had to do some validation on clients. But I think it is an EU rule (or perhaps Hungary-only, if anyone happens to know).
The real question from my perspective is why the pharmacist can't be the one making the determination in the vast majority of cases.
> never antibiotics
Despite the common wisdom, the issue there wasn't (mis)use in humans but rather certain agricultural practices, which I understand (in the US at least) have largely been curbed at this point. Antibiotics that were still clinically relevant in humans were being added to animal feed en masse because "reasons" (i.e. cost-cutting of various sorts). Evolutionarily, that went about as well as you'd expect.
That's too harsh, IMO: not everything that works is gated behind a prescription.
What I'd look into instead is how it's legally classified. My heuristic is: if it works in any meaningful way, it's classified as drug or medical equipment. Might be OTC, but it's clear about its status. Stuff that's not classified like that can at best correct some nutritional deficiencies, but typically does nothing at recommended doses (but sometimes can still hurt you if you severely overdose).
There are substances gated behind a prescription that I wish people would have easier access to - but I understand the need for regulatory control. If people selling all these fraudulent cures can dupe so many regular folks, imagine what would be happening if they were allowed to put medically relevant quantities of an active compound in them.
That’s not true in developing countries. Vitamins and “over the counter” (OTC) medicines definitely work - the regulators need to decide the relative risk and benefit of allowing OTC access. As a physician, my personal view is that we should allow more people access to more medicines that are currently prescription-only in the US, but there are always going to be some medicines where the risk of harm is high enough that having a licensed doctor as the gatekeeper is overall beneficial for society. So while I might draw the dividing line in a different place, I think the concept of controlling access to certain medications is sound.
If it works, it has an effect on the body. If it has an effect on the body, you can in nearly every case induce too much of that effect. Also, it probably does not have one single effect, but a cluster of them, some good, some bad.
In summary, if it works, it can also kill you. That is why most medicines that work are also controlled.
I have encountered a similar thing, with a university professor somehow supporting the finding that esoteric stickers have an effect on patients because they block EMF: https://www.frontiersin.org/articles/10.3389/fnins.2018.0019... – this is bullshit (down to the measurement equipment supplied by the German EHS community).
I think the university should at least reprimand these people. How did you get in contact with the people threatening to sue? As I'm currently a grad student (in an unrelated field), I guess I could have some fun pushing this as academic discourse.
Those threatening to sue were the founders of the company Powerinsole. They made contact via their lawyers. From the university I just had an ugly phone call with a senior administrator who suggested that if I ever contacted her again the police would be called.
I contacted and spoke at length with the Austrian Konsumentenschutz (consumer protection agency). https://www.arbeiterkammer.at/beratung/konsumentenschutz/ind... The product is clearly and at minimum a case of false advertising, as there is nothing approaching a computer chip inside. However, though sympathetic, the KS/AK said they couldn't do anything and couldn't refer me to anybody who could. I found this rather surprising.
I don't understand what the problem is: The data in the double-blind study clearly shows no statistically significant effects (just look at N and the standard deviation bars) and the company doesn't claim that there would be such effects.
The company only says there is a "positive trend" which there indeed is.
Doesn't matter if the SD bars make it obvious. No layman can read such a chart. The text 110% makes it sound like the study PROVES without any doubt that the product had an effect.
"the lactate measurement shows a difference"
"The lower lactate value with Powerinsole means longer performance and a later onset of muscle fatigue."
and finally
"Even with the first application of the power insole, a lower skin conductance is evident compared to the placebo. This means that the power insole can help reduce stress levels."
Nowhere are they talking about statistical significance.
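For anyone wanting to sanity-check such claims themselves: if a chart shows the group means, the SD bars, and N, that's enough to run a quick significance test without any raw data. A minimal sketch in Python/SciPy, with made-up numbers for illustration (these are not the study's actual values):

    # Welch's t-test computed from summary statistics alone.
    # All numbers below are hypothetical, NOT from the Powerinsole study.
    from scipy.stats import ttest_ind_from_stats

    n1, mean1, sd1 = 15, 4.2, 1.1  # e.g. placebo group lactate (mmol/L)
    n2, mean2, sd2 = 15, 3.9, 1.2  # e.g. product group lactate (mmol/L)

    t, p = ttest_ind_from_stats(mean1, sd1, n1, mean2, sd2, n2,
                                equal_var=False)
    print(f"t = {t:.2f}, p = {p:.3f}")
    # With small N and overlapping SD bars like this, p lands far above
    # 0.05: a "positive trend" indistinguishable from noise.

If the marketing text quotes a difference but a test like this can't distinguish it from noise, "proved by science" is doing a lot of unearned work.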
Yep, I'd love to do that, but I'm squarely in the health sector, no getting around that sadly. I might end up cutting some of the main features to qualify as a lifestyle app though. Sad but better than not launching I guess.
Not to mention almost all pharmacies selling homeopathic rubbish. I find this particularly irritating given the ridiculously strict regulations and controls around things like buying paracetamol or ibuprofen.
Medical and ethical standards are in complete free fall in European pharmacies. I was told by my vet to rinse my dog’s eyes in sterile isotonic saline solution. When I went to the pharmacy to ask for it they didn’t know what it was. They suggested using chlorhexidine wound disinfectant instead.
My poor dog would have probably been blind or worse if I had taken the advice. But they don’t care, they just want to move products, whatever the cost.
Thanks. I’m sure regular table salt and googling the concentration would have done the trick, but there’s something to be said for those 30 ml plastic vials you break off a ribbon, which you know contain the right stuff, are sterile, and stay that way until you use them. I don’t want to take risks with things where I don’t have enough knowledge to make them calculated risks. But it worries me when some of the people we defer to wouldn’t even pass high school chemistry.
After having checked five pharmacies with nobody having heard of it the vet took pity on me and gave me a handful out of their supply closet.
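For the record, the arithmetic really is high-school level: isotonic ("normal") saline is 0.9% NaCl weight/volume, i.e. 9 g of salt per litre of water. A trivial sketch; as noted above, sterility (which the vials solve) is the actual hard part:

    # Isotonic ("normal") saline: 0.9% w/v NaCl, i.e. 9 g per litre.
    # Arithmetic only -- sterility, not concentration, is the hard part.
    volume_ml = 500
    nacl_g = 0.9 * volume_ml / 100  # 0.9 g per 100 ml
    print(f"{nacl_g:.1f} g NaCl in {volume_ml} ml water")  # -> 4.5 g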
These products, including Powerinsole, are marketed by a local TV station and its program (https://www.puls4.com/2-minuten-2-millionen/staffel-8), which pretends to be a startup investment show but is really just an infomercial for woo products.
What ridiculous regulations are there with regard to ibuprofen and paracetamol? Are you talking about the fact that medicines are not allowed to be sold in supermarkets?
I think it's a good thing, since it's a pretty good guarantee that you actually get what you buy.
You can get ibuprofen and paracetamol at any pharmacy in Austria for a few euros without a prescription. And in every district there's always a pharmacy that is open 24h.
They can and do give proper medical advice on the medical products they are selling, but at the same time they are selling homeopathic products and other magic pills, powders, and devices from the same counter. This seems like a conflict of interest / ethics problem. I am not sure why pharmacies are not regulated on this, but that is the way it is in Austria.
It's unfortunately surprisingly common for doctors and nurses to recommend all kinds of quack treatments in Austria. The pharmacies are not the only profiteers of these scams.
MIT's response (at least if the PI is well-connected): intimidation, NDAs, non-disparagement agreements. Plenty of fraudulent information comes out of MIT.
I'm pretty sure that's common to many of the elites. You don't get to be elite by being honest. Schools one tier down (e.g. large state schools) tend to be a lot better, but they're declining too.
Ouch. OK, I'll start by saying I don't know the full details of the study. But those results and plots scream PR to someone who works with data analysis for a living.
I've recently come across a 'for pay' report from a university as well that had holes in it you could drive a truck through, and which was also used to establish claims of efficacy that were off the scale. This is probably quite common, and I really wonder why universities would lend their name to this kind of practice.
I have an amusing letter from the lawyers working for Powerinsole threatening legal action for my asking questions and demanding that I remove all my comments from social media. It was also demanded that I pay the approx. 350 euros that the letter cost to write. My response was to invest in the cost of the device, tear it down on video, and post the analysis to Facebook. That was more than 6 months ago. I have not heard from the lawyer since. I'll happily go to court in a printed t-shirt with the text "where is the battery?" but I guess it won't come to that.
True, but while I don't know Austria's laws, I know that in any country that isn't fully corrupt (and even in many that are), anyone can take anyone to court for anything. There are different rules around loser-pays (there is no good answer here - only bad compromises) and how fast a judge will dismiss fraudulent claims, but if there is anyone who can't get a day in court over a legitimate issue, the country is corrupt. Austria isn't perfect (no country is), but its international reputation isn't nearly bad enough that I would expect someone to be unable to get you into court for anything they want.
> Whether or not that lawsuit has any merit is irrelevant.
That’s not entirely true. In some jurisdictions persistent baseless legal action, or even empty threats of legal action, can themselves be considered a cause of action.
Also, to sue someone for more than a "small claims" amount you need a lawyer and most lawyers will not bother with cases that they are obviously not going to win, and they are ethically bound to tell you that you have no case.
I don't think that's how things work in most European countries. Yes, there are costs but they will be covered by the loser. No way this person loses against these particular fraudsters
The situation in the EU in general is anything but satisfactory, especially for investigative journalism. There exists a term for it: SLAPP – Strategic Lawsuit Against Public Participation. There is currently an initiative by a group of European MEPs underway for better anti-SLAPP legislation in the EU member states. You can read more about the SLAPP problem in this article, published by the European Centre for Press and Media Freedom: https://www.ecpmf.eu/slapp-the-background-of-strategic-lawsu...
Some countries are also sensitive to abuse of the legal system to this end; e.g. Lenovo had to pay 20,000 EUR in damages after they decided to drag out the case of a consumer who wanted his 42 EUR Windows license refunded: https://fsfe.org/news/2021/news-20210302-01.en.html
You can be sued for anything, or nothing. You might be able to "easily" win the case, which may still cost you resources that you may not want to expend, and you won't be reimbursed for the stress and time used, only the financial expenses (and even that isn't true in all jurisdictions).
That said, if you know the legal situation, demonstrating that you're not afraid of the legal threats is often the way to make them go away.
For security research, if the goal of the legal threat is to prevent you from publishing your results for the first time... publishing the results can significantly reduce the incentive to cause you further trouble because the goal is no longer achievable.
IANAL but my gut tells me no, unless you're in a place like Russia, North Korea, China, most of the Middle East, parts of Africa and Latin America, and some parts of the former Soviet Bloc. In western nations, you might get arrested for asking questions if it upsets the powers that be and they deem you a nuisance.
In the UK you have a chance of ending up with a "hate crime" if you don't follow the official line of (no)thinking (with a low chance of being arrested). Pretty crazy stuff has been going on lately (past few years).
Well, maybe you own a waxing salon that does Brazilian waxes and you're Muslim and don't believe you should see or be in contact with male genitalia besides that of your husband.
> In 2018, Yaniv filed discrimination complaints with the British Columbia Human Rights Tribunal against multiple waxing salons alleging that they refused to provide genital waxing to her because she is transgender.[15][16] Yaniv's case was the first major case of alleged transgender discrimination in retail in Canada.[17] Yaniv was seeking as much as $15,000 in damages from each beautician.[18] In their defence, estheticians said they lacked training on waxing male genitalia and they were not comfortable doing so for personal or religious reasons.[19] They further argued that being transgender was not the issue for them, rather having male genitalia was.[20] Yaniv rejected the claim that special training in waxing male genitalia was necessary,[21] and during the hearings equated the denial of the service to neo-Nazism.[22][16] Respondents were typically working from home, were non-white,[23] and were immigrants[24] who did not speak English. Two of the businesses were forced to shut down due to the complaints.[25]
Before anyone jumps in this thread saying "WOKE CULTURE IS DESTROYING THE WORLD", she also lost all of her lawsuits.
> In October 2019, the Tribunal ruled against Yaniv and ordered her to pay $6,000 in restitution... The ruling was critical of Yaniv... stating that she "targeted small businesses, manufactured the conditions for a human rights complaint, and then leveraged that complaint to pursue a financial settlement from parties who were unsophisticated and unlikely to mount a proper defence."
This is a person who deliberately does all sorts of weird shit for the sole purpose of using her trans identity to sue people when they react negatively to it.
For example, she apparently called her local fire department "dozens of times" for "help getting out of the bath" (when in reality she needed no assistance) and "subjected Fire Department staff to "inappropriate and lewd conduct"".
More like "just the one example." This same case is trotted out every time somebody wants to make this point because it does not actually seem to be a trend.
But besides that, it doesn't answer the question you were responding to. Asking someone's birth sex does not tell you what genitalia they have. It doesn't even really tell you what genitalia they had at birth. Since those salon owners say they specifically objected to the woman's current genitalia rather than the woman's status as transgender, birth sex is the wrong question to ask.
> They further argued that being transgender was not the issue for them, rather having male genitalia was
To be clear, the beauticians explicitly didn't care (per their claims) what the birth sex of the customer was, so I'm not sure how that would be a relevant question here? Per their claims, they'd have had the same issue with a trans-man who'd had bottom surgery.
> Well, maybe you own a waxing salon that does Brazilian waxes and you're Muslim and don't believe you should see or be in contact with male genitalia besides that of your husband.
Then you’d probably want to ask “What kind of genitalia do you currently possess”. Sex assigned at birth is not a reliable indicator of that, for reasons very similar to why current gender identity isn't.
> Sex assigned at birth is not a reliable indicator of that
You might want to rethink that phrasing, because it’s a very reliable indicator. That’s the entire problem. <1% of the population have genitalia that doesn’t match their sex assigned at birth.
Asking someone “what genitalia they currently possess” is more offensive because it sounds like you’re talking about an accessory that people swap out on a whim. “Which will you be carrying with you today? The penis or the vagina?”
> > Sex assigned at birth is not a reliable indicator of that
> You might want to rethink that phrasing, because it’s a very reliable indicator
In the context where current gender identity is an insufficiently reliable indicator, which is the context of the supposed need to ask the question, no, it is not.
If you need to probe beyond current gender identity because your personal sensitivities about genitalia can tolerate no error, then you need also to bypass indirect proxies entirely and ask the question you are actually concerned about regarding genitalia.
Based on the discussions therein I can see where my thinking was not quite on point, or I did not explain myself properly. Yes, a question could be a form of hate crime; I can think of examples where asking someone a question could be interpreted (probably rightfully) as one. I was thinking more of non-hateful questions, such as questioning authority.
From what I perceive one of the problems is funding.
This is not strictly related to what you write and more about research in general, but most researchers seem to avoid submitting negative results. Disproving something can be just as important as proving something, but it is seen as a failure in most cases and can negatively impact your future funding.
This leads some researchers to just hide their "failures" and some go as far as doctoring the results.
This is one of the problems I have with the absolute freak show that climate science/global-warming/saving-the-planet has become. It is, at a minimum, a triangle of bad actors with one corner being politicians --fear-based vote harvesting--, the second being business --jump on the bandwagon and print money... facilitated by politicians who want votes in exchange for fear mongering-- and, finally, religious-based detractors --using the best ignorance can offer in order to advance humanity.
You combine these three factors (and likely a few more) and the entire thing is a rotten, stinking mess that exists in a binary state between religious deniers and religious zealots.
What's a researcher to do? Tell the truth? Ha! Only if you want your career completely destroyed as well as never seeing even a hint of a grant. Going against these forces is a sure path to having more PhD's driving taxis.
I have to say, I have become very cynical about what we call "science" these days. It seems you have to be very guarded about accepting anything you are told, because the forces at play could be beyond anyone's imagination in scale, breadth and reach. The problem is that the general voting public is ill-equipped to take an intellectual stab at what they are being told, which means they are easily duped and herded like cattle in any direction that might be of benefit to the puppet masters in politics.
Not sure I'm following your point. If you weigh the bad actors and financial incentives of climate change proponents against the bad actors and financial incentives on the fossil fuel side, do you think the scale tips in favour of more honesty for the fossil fuel side or climate change side?
There's no doubt groupthink happens in academia on many issues, but the need to displace fossil fuels really is very important. Not just for climate change reasons, but overall human health. For instance, air pollution from fossil fuels kills tens of thousands of people every year.
Too bad everyone has been convinced nuclear is way worse because of a couple of accidents. That was a legitimate alternative that didn't need to wait for the 2010s to become an economically viable 15-20% of energy production.
> If you weigh the bad actors and financial incentives of climate change proponents against the bad actors and financial incentives on the fossil fuel side, do you think the scale tips in favour of more honesty for the fossil fuel side or climate change side?
There's no honesty anywhere - or rather, the honest people get squeezed out of the field. Which is why the issue ends up so polarised.
> There's no doubt groupthink happens in academia on many issues, but the need to displace fossil fuels really is very important. Not just for climate change reasons, but overall human health. For instance, air pollution from fossil fuels kills tens of thousands of people every year.
Changing the subject like this is a huge red flag that you're using motivated reasoning, as you'd probably have noticed yourself if this weren't such a political issue. It's a very small step from "it's important to displace fossil fuels even if not for the reasons I originally said" to "I'll overstate the effects of climate change to ensure that we abandon fossil fuels as quickly as possible to save thousands of lives", and once you do that, all hope of finding the truth is lost.
> Well that's an unsupported conjecture that I see no reason to accept.
I base it on having friends who were in the field; of course that won't be particularly convincing to you. (I'm pretty sure others in these comments had examples of people being pushed out for reaching the "wrong" conclusions, but I have to be honest that I'm trusting the people I know personally rather than anything else).
> Where is the motivated reasoning in acknowledging that climate change is not the only reason to replace fossil fuels?
Suggesting that climate change is likely to be true because you have non-climate-change reasons to want to replace fossil fuels is motivated reasoning. The fact that you brought up non-climate-change problems with fossil fuels in a thread about whether climate change is occurring suggests that you're doing it.
> Suggesting that climate change is likely to be true because you have non-climate-change reasons to want to replace fossil fuels is motivated reasoning
Except I didn't do that.
> The fact that you brought up non-climate-change problems with fossil fuels in a thread about whether climate change is occurring suggests that you're doing it.
The thread isn't strictly about whether climate change is real, it was about climate change alarmism, about whether climate change was reducible to a single variable, and whether we should be motivated by the available evidence to make drastic changes to potentially avoid the predicted outcomes. The additional point I made is perfectly in line with that.
I don't know. I believe the most likely outcome is effects that are serious, but significantly less serious than the mainstream consensus claims. I believe the error bars are much wider than anyone admits. I believe the narrow range of countermeasures that it's politically acceptable to move towards implementing are not particularly reasonable and will not be effective.
Off-the-record conversations with (ex-) climate researchers I know personally. Which I don't expect anyone else to find convincing, but there's enough talk going around of researchers who got the "wrong" results getting pushed out that I found what they said more plausible than trusting the big-name journals etc.
> Not sure I'm following your point. If you weigh the bad actors and financial incentives of climate change proponents against the bad actors and financial incentives on the fossil fuel side, do you think the scale tips in favour of more honesty for the fossil fuel side or climate change side?
No one funds research into problems that aren't that big a deal. There has to be a crisis NOW for climate scientists to get more funding and social status. The news media knows there has to be a crisis for people to tune in; no one would watch programming on a problem that won't occur in their lifetimes. And politicians love to take the side of causes that give them the moral high ground. They also get to be sanctimonious and pretend they are "on the side of science" when they have no more grasp than anyone else.
To wit, if politicians were actually interested in reducing CO2 emissions they could just tax it straight up. They could combine that with lowering taxes on other things so as not to hurt the economy. The market would then implement green energy solutions of all kinds on its own.
But that wouldn't give the politicians any leverage for donations from green tech for future campaigns. So they pick and choose winners and losers based on donations and the revolving door. Fossil fuel isn't that unhappy because they get to set up a Nat Gas plant with new wind and solar installs to fill the gap when it's cloudy and calm.
No. There is no honesty anywhere, that is my point.
> the need to displace fossil fuels really is very important.
Why? I don't necessarily disagree. But reality isn't a problem managed through a single variable. The things you list are not singularly caused by fossil fuels.
In fact, a very solid argument could be put forth about just how much uglier things might be without fossil fuels.
Here's the basic math someone would have to do before making the assertion that the elimination of fossil fuels --as a single causally-connected variable-- would make things better:
The simplest (well, not so simple) calculation is that, while we might eliminate fossil fuels we do not eliminate the need for the energy they provide. In other words, in rough terms, you still have to explain how we would generate, harness, create, transport and distribute a certain amount of energy per unit time (hour, day, week, month, year, whatever).
In fact, I think we can, in historical terms, state that energy requirements increase over time, they do not decrease.
The next element of the story is how we are going to replace the massive number of byproducts of fossil fuels that modern life pretty much depends on. We know that making complex hydrocarbons any other way is in a range between highly inefficient (which would increase the aforementioned energy requirement) and impossible.
My point --in stressing that reality is a rather complex multivariate problem-- is that, while it would be nice to think of a desirable reality without fossil fuels, in the real reality (just go with it) this is much more of an aspirational thing than an attainable objective.
The same is the case with electric vehicles. I have yet to see someone do the math on the total daily energy requirement of the installed fossil-fuel based vehicle fleet and explain how on earth (literally) we are going to generate that much energy without causing even more problems. Our current electrical grid is designed for current energy requirements (and power requirement, which is equally important). The current system, in any country I know of, doesn't magically have an extra 100% in power/energy generation capacity to support every vehicle going electric.
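For concreteness, here is what a back-of-envelope version of that math looks like. Every input below is an assumed round number (roughly US-scale), not data from any grid study, so treat the output as an order-of-magnitude sketch rather than a conclusion:

    # Back-of-envelope fleet-electrification math. All inputs are
    # assumptions chosen as round numbers, not measured data.
    vehicles      = 250e6   # assumed light-duty fleet size
    miles_per_day = 30      # assumed average daily driving per vehicle
    kwh_per_mile  = 0.30    # assumed EV consumption incl. charging losses

    daily_twh = vehicles * miles_per_day * kwh_per_mile / 1e9
    avg_gw    = daily_twh * 1000 / 24   # average continuous power

    print(f"~{daily_twh:.2f} TWh/day, ~{avg_gw:.0f} GW average")
    # ~2.25 TWh/day, ~94 GW average. Peak charging demand, grid timing
    # and transmission limits matter at least as much as the average.

Whether that average is large or small relative to a given grid's capacity, and what peak charging does to it, is exactly the analysis that rarely gets shown.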
Reality: A multivariate problem. You push here and it pulls there. Not so simple.
> For instance, air pollution from fossil fuels kills tens of thousands of people every year.
Fair enough. Containerships, as a simple example, burn bunker fuel, one of the nastiest things you can burn. They are singularly responsible for more pollution along certain vectors than the entirety of the ground transportation industry. And yet we do nothing about it.
Why?
I can only guess. Part of it has got to be a case of "well, what we have works". The other issue --which I think is very real-- is that bunker fuel is, quite literally, the bottom of the barrel. It is what is left after you extract everything else from petroleum.
So, next Monday we stop using bunker fuel everywhere in the world. No problem. Right?
Wrong.
You see, all the other oil byproducts are still needed. Which means that the bottom of the barrel...the bunker fuel...would still be produced in absolutely massive quantities. Except now we are not using it, because we want to clean-up the planet.
Wait a minute. What do we do with it?
Well, we likely have to bury the stuff, dump it somewhere, make huge mountain-sized piles out of it. We would now use massive amounts of fuel (yes, everything is "massive") to run the machines that have to haul and manipulate this stuff. We also have to devote massive (sorry) resources, land and ecosystems to burying what we are not using. Where it goes from there I cannot even guess.
Once again, reality isn't a single variable problem. Bunker fuel == bad? Yes, no, maybe, hard to say. Because the alternative could be worse, far worse.
This is precisely what I don't see treated fairly these days. Imagine a politician taking the time and making the effort to fully analyze and understand the bunker fuel ecosystem and also taking the time to present this analysis to the voting public. Good luck. It is far easier to say "bunker fuel == bad", get votes, stay in office and move on. It's easy to show how horrible the stuff is (and it is!). It is impossible to show how much worse things could be if we don't fully understand what reality looks without it.
I'll overstay my welcome and give another example from real life.
A number of years ago a well-intentioned yet mathematically-challenged "science" teacher at my kid's school showed the kids this gut-wrenching video animation that pretty much says humans are a pile of shit destroying the planet. The thing is as close as you can get to an ignorant, politically-motivated pile of lies.
She was receptive to having a conversation. I asked if we could go through a simple exercise where we would try to understand what our small town would look like if we did not use the products of evil industrialized society. Petroleum is a favorite, of course.
I won't bore you with the details. Before we were done we had destroyed every forest in sight, had piles of human excrement the size of mountains, all possible fields where you could grow something in the region were dead, sources of water were polluted (by human waste and the other byproducts of inefficiently sourcing everything), and more. At the extreme we were using horses to get around, etc. A town of a few tens of thousands of people relying on horses has a serious manure problem. We would burn trees for heat and cooking, etc.
As we extrapolated this from a town of tens of thousands to cities with millions and regions with tens to hundreds of millions of people, it became very obvious that modern life (or more accurately, modern population levels) would quickly become unsustainable if we demanded that humanity abandon how we got here and embrace everything "natural" and "sustainable". She was certainly surprised to understand the scale of the problem.
Once you start thinking at scale --planetary scale-- "natural" and "sustainable" quickly end-up with razed forests, depleted marine life, polluted water sources and a sky blackened with thick pollution.
Not to end on a depressing note. Yes, we are doing better, have been so for decades. We just have to be careful that we don't reduce reality to single variable problems, because that isn't reality, it's a fantasy, and a dangerous one at that.
Climate change is one of those. It is hard to find truth that is being discussed with honesty in the mainstream.
> I think all of your points miss one glaring fact: Quality of life doesn't matter in the face of an existential threat.
Existential threat?
Really?
Where is the proof of that?
Not trying to be obtuse at all. I am also not suggesting that things are changing towards a future with potent climate events. I am not challenging any of that.
You see, the dark future everyone is selling is one where we all die. Everything dies. Mass extinction of all kinds of living things. Another two degrees and we are done for.
Hmmm.
In the face of this we are supposed to have the technological prowess TO ACTUALLY TAKE CONTROL OF PLANETARY-SCALE PROBLEMS and magically bend the curve to where WE want it to be, not where THE PLANET wants it to be. (Upper case for emphasis, not yelling at you.)
Do you have any idea of the scale of this thing? Planetary. Yes. What does it mean?
In other words, we purport to have the power to change the entire ecological balance of the planet (hence "planetary scale"), and, at the same time, we can't deal with the purported effects of global warming?
I used "purported" because, once again, climate change is being treated as a single-variable problem where NOTHING ELSE changes. In other words, "CO2 bad -> CO2 ppm rising -> Existential threat".
What?
The planet has been dealing with this kind of stuff long before humanity was a thing. It adjusts to atmospheric CO2 through weather. Specifically, storm, hurricanes, cyclones, rain, etc. Water. And growing vegetation. Yes, at a planetary scale. We have data on this, reliable data, going back at least 800K years.
Is CO2 bad?
Well, yeah, taken as a single variable, sure. Yet, that isn't the entire story, is it?
Have you heard of indoor farming? This is where food is grown in controlled indoor environments rather than outdoors.
Do you know what they do in indoor farms to promote plant growth?
They inject CO2.
Yup. They actually have CO2 tanks delivered to the farm and CO2 is metered by a computerized system in order to raise the level and promote plant growth as well as other characteristics.
When you start leaving the "CO2 bad -> CO2 ppm rising -> Existential threat" myopic view of the universe being pushed and start to consider that reality is a complex multivariate problem, ideas and the potential actual reality start to surface.
Have you ever walked around your home, family and neighbor's homes and your neighborhood with a CO2 meter?
I have.
Levels in my home and my neighbors are in the range of 500 to 600 ppm. No, we don't live right next to a highway. Outside, about the same range. Some of the office environments I frequent, about the same.
In the car? It can reach 1100 ppm. No, that isn't with me breathing directly into the meter. If the ventilation system is set to forcefully ingest outside air it comes down to about 700 in neighborhood streets and spikes back up to 800 to 1000 on the highway (which makes sense).
My point is that this "CO2 bad -> CO2 ppm rising -> Existential threat" scenario is one that, very likely, billions of people have been living in for decades, maybe more. Care to guess what indoor environments looked like 100 to 200 years ago? I have no clue, but I cannot imagine them being better than what we have today.
And cars? How much time do billions of people spend in their cars at 700 to 1100 ppm CO2 every day? Hours.
Has the sky fallen?
No.
WHY ARE WE NOT QUESTIONING WHAT WE ARE BEING TOLD THEN?
C'mon folks. This isn't about denying our influence in increasing atmospheric CO2. However, this is, very much so, about gaining a sense of proportion and putting what we are being told to scrutiny.
Yes, we absolutely managed to increase atmospheric CO2 through the burning of highly dense hydrocarbon fuels. No question about that. Is the inescapable conclusion "CO2 bad -> CO2 ppm rising -> Existential threat"? I don't know. Somehow I don't think so.
For example, increased levels of atmospheric CO2 might promote more efficient growing of food in indoor farms. Controlled-environment farming is more efficient than outdoor farming, uses less water, delivers higher quality food and reduces damage to the land. More importantly, controlled-environment farming can bring food production to places that could not consider it before, like the desert.
How about all the storms, rain, etc. that are part of the planet reacting to CO2 levels? Well, this will, among other things, promote vegetation growth everywhere.
So is, "CO2 bad -> CO2 ppm rising -> Existential threat" real? I, for one, after devoting a non trivial amount of time to truly looking at this as a complex multivariate problem rather than that silly statement being pushed around, do not believe so. I think this is a silly and damaging reduction to an absurd conclusion.
Will we have to adapt to potential changes? This is likely. However, we are already living in a 700 to 1100 ppm environment (homes, offices, inside cars) and we haven't all turned into a pile of goo on the ground.
Somehow I don't think the threat is existential as much as it is evolutionary. What I mean by this is that we likely have to evolve how we live, where we live, how we grow food and, yes, of course, how clean we are about our affairs. I am all for reducing CO2 emissions and being clean, just not because of a potentially flawed conclusion but rather due to the fact that, yes, humanity should pollute as little as humanly possible. This is a good goal. Yet we should not be hysterical about it. The sky isn't falling.
> In fact, I think we can, in historical terms, state that energy requirements increase over time, they do not decrease.
They do, and all energy needs can be met with solar, wind and grid energy storage. Or nuclear if you don't want to invest in energy storage for whatever reason.
> The next element of the story is how we are going to replace the massive number of byproducts of fossil fuels that modern life pretty much depends on. We know that making complex hydrocarbons any other way is in a range between highly inefficient (which would increase the aforementioned energy requirement) and impossible.
Burning fossil fuels is the biggest immediate problem. Other fossil fuel products may or may not be a problem. But you don't ignore the heart attack because you just noticed a rash that may be flesh-eating bacteria. Triage is key.
> You see, all the other oil byproducts are still needed.
"All" is overselling. Some are arguably useful, but for example, most product packaging is likely superfluous and a product of our current economic incentives. For instance, why do we have disposable containers for each unit of cleaning product we buy rather than reusing containers that you get refilled at the store? These choices are driven by market incentives that prioritize convenience over sustainability.
Some products may never get rid of their plastic packaging, perhaps something like sterilized vacuum packed needles that hospitals use. Those would be the exceptions but not the rule.
> We just have to be careful that we don't reduce reality to single variable problems, because that isn't reality, it's a fantasy, and a dangerous one at that. Climate change is one of those.
Climate change isn't a single variable problem, and I don't think anyone serious is pushing it as such. If you look into the IPCC report on climate change, you'll see all sorts of factors being accounted for including cloud cover, contrails, methane, water vapour, CO2 and more.
We only have so much influence over some of these factors, but the biggest and most obvious factor for which we have alternatives, is CO2 emissions. Do you deny that?
> Once you start thinking at scale --planetary scale-- "natural" and "sustainable" quickly end-up with razed forests, depleted marine life, polluted water sources and a sky blackened with thick pollution.
You and I clearly have different understanding of what "sustainable" means.
> They do, and all energy needs can be met with solar, wind and grid energy storage.
Not true. Not even close. Particularly if you move away from optimal solar regions.
In addition to that, manufacturing the grid energy storage capacity required to service planetary scale requirements will result in unspeakable consumption of natural resources (mining), pollution and environmental challenges. Not to speak of the amount of energy required to produce, ship and install such storage systems.
> Or nuclear
Nuclear is the ONLY viable solution. It makes excellent use of the existing infrastructure and we pretty much know how to do it right.
> if you don't want to invest in energy storage for whatever reason.
You are confusing things here. It isn't about me, or someone like me, not wanting to "invest in energy storage for whatever reason". Such loaded words, too: "not wanting to", implying negative intent, and "invest", implying a benefit that clearly might not be there. In other words, fabricating a conclusion while, at the same time, attempting to diminish the other person's standing.
I must repeat myself here. Too much of what we discuss these days boils down to the magical waving of a single variable that will solve all of our problems (or cause all the harm). "Energy storage", this case.
Well, this single variable solution to all of our problems isn't, in fact, a solution to all of our problems. At all. Start digging into what "investing" in this stuff actually means and you might come out of it thinking that coal and natural gas look pretty good in the comparison.
If we are going to invest in anything it should be nuclear plants. Manufacturing batteries with ten to twenty year useful lifespan at a global scale is bound to cause more harm than good. Of course, some prefer to look the other way and think it is "green" because it seems clean at the application level.
Here's an example of how not "green" these things can be. Do you know what you'd have to do to grid scale battery-based storage in the winter in Nebraska or Alaska in order not to lose capacity like crazy (or, even worse, have the plant shut down)? Heat up the batteries and keep them warm. If you think this is going to happen with solar energy...in the dead of winter...with snow and blizzards. Well.
> Burning fossil fuels are the biggest immediate problem. Other fossil fuel products may or may not be a problem. But you don't ignore the heart attack because you just noticed a rash that may be flesh eating bacteria. Triage is key.
Well, the analogy is nonsensical to begin with. Setting that aside, you don't get the petroleum byproducts without making fuel. This is a highly optimized production system. Every layer of it, almost literally, extracts a different useful substance. So, you can't say let's stop producing diesel and gasoline and keep producing the myriad industrial and commercial products that share that manufacturing pipeline.
I mean, if you want plastic tubing, syringes, equipment and almost everything you find in a modern hospital, you have to start with oil and derive everything else. A modern hospital would be reduced to medieval times without the outputs of this process. Manufacturing a modern home, car, food, clothing would also be impossible.
It is critical to understand this fact before continuing to push this fantastical idea that we can magically stop using petroleum. We cannot. It isn't that simple.
Here's a review of how fuels and lubricants are made:
If you don't understand this, you are missing a key element of the perspective you have to have in order to have this discussion. Facts do matter.
From the article:
"By pulling the condensing liquid from the column at different heights, you can essentially separate the crude oil based on molecular size. The smallest of the hydrocarbons (5 to 10 carbon atoms) will rise to the very top of the column. They will be processed into products like gasoline.
Condensing just before reaching the top, the compounds containing 11 to 13 carbon atoms will be processed into kerosene and jet fuel. Larger still at 14 to 25 carbon atoms in the molecular chain, diesel and gas oils are pulled out.
Those compounds with 26 to 40 carbon atoms are a tribologist’s main concern. This is the material used for the creation of lubricating oil. At the bottom of the column, the heaviest and largest of the hydrocarbons (40-plus carbon atoms) are taken and used in asphaltic-based products."
Translation: You don't make lubricants and a bunch of other critically-needed oil byproducts without making gasoline and the lighter hydrocarbons first. Or, put a different way: You can stop using the lighter hydrocarbons but you still need the other stuff, which means that you are going to fill entire lakes with gasoline-type hydrocarbons that you are going to choose not to use, which makes no sense at all.
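Condensed, the fractions from the quoted passage look like this (carbon-chain length to typical product, exactly as the article states them):

    # Distillation fractions as listed in the quoted article.
    crude_fractions = {
        "C5-C10":  "gasoline",
        "C11-C13": "kerosene / jet fuel",
        "C14-C25": "diesel and gas oils",
        "C26-C40": "lubricating oil",
        "C40+":    "asphaltic-based products",
    }
    for carbons, product in crude_fractions.items():
        print(f"{carbons:>7}: {product}")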
> "All" is overselling. Some are arguably useful, but for example, most product packaging is likely superfluous and a product of our current economic incentives.
I am sorry, but it is clear you don't know much about the industrial production of, well, anything. Plastic bags are the last thing anyone with this knowledge would even remotely mention. Factories would come to a grinding halt (in some cases literally) without oil byproducts. To continue with my medical example, none of the equipment, drugs and supplies a hospital needs to save lives can be made without myriad oil byproducts.
> Climate change isn't a single variable problem, and I don't think anyone serious is pushing it as such. If you look into the IPCC report on climate change, you'll see all sorts of factors being accounted for including cloud cover, contrails, methane, water vapour, CO2 and more.
That is not what I am talking about. I am talking about things like us "saving the planet" by reducing CO2 emissions. Single variable. Complete bullshit.
And when I say that, it isn't an opinion. This is supported by 800K years of atmospheric data. If you have an honest interest in truly understanding what's going on I'll point you in the direction of the data. This assumes you have a modicum of scientific training; high school science and math are enough. No need for a PhD at all. This is actually very simple and easy to understand, but you have to be willing to leave the bullshit you've been told behind and honestly consider the data and nothing else. When you do that it is very difficult to find support for what we are being told.
This isn't about what I say. This is about what the data says.
I know of only one paper --a reputable paper, by a major organization-- that finally admitted the unavoidable conclusion: that, in effect, we are being sold a pile of bullshit. Again, if you are honestly interested in digging deeper and are able to leave preconceived notions behind, I'd be happy to provide you with a link to this paper.
> You and I clearly have different understanding of what "sustainable" means.
Of course: you are thinking "sustainable" at the utopian local level. I have taken the time to look at these problems at a planetary scale. You can make "sustainable" work at a local level in selected areas with very careful design and management. It almost never pays off, but we still do it, and the reasons are sometimes good; we should clean up our act, no question about that.
Once you look at this as a "solution" that must address the needs of over seven billion people at a planetary scale, it is hard to impossible for "sustainable" to actually be sustainable. In a lot of cases you create more damage than benefit in order to create the illusion of being "green". One example of this is that case of not using fuels, such as gasoline, while still having a need for the massive volume of heavier hydrocarbons that come out of the distillation process. Great, no gasoline. Now, where do you put that shit while you only, literally, use the bottom of the barrel? Ah, you destroy the planet. Got it.
BTW, I don't have the answers, other than to warn that we really need to stop reducing reality to single variables for each problem. That is simply not the way we are going to solve anything at all.
A lot to parse here, so I just feel like picking a couple of points.
There exist lubricants, such as PAGs, for which non-fossil production routes are possible. Likewise, there exist many plastics and synthetic rubbers that are not petro-based; ethylene, a petrochemical byproduct, is still required for a lot of plastic uses. However, this doesn't matter much: it is entirely possible to produce these things without vast GHG emissions.
Lubricants that are not burned do not contribute much to greenhouse gas emissions. It is fully possible to produce industrial petrochemicals such as plastics and lubricants without large GHG emissions from burning. Most lubricants are recycled, though a lot of that recycling is currently done by burning; given some research, that stock could instead become industrial feedstock for new lubricants and plastics.
Diesel and other fuels can also be produced from non-petro sources; this has been done in people's garages. Diesel engines can burn corn, peanut, canola and waste vegetable oils; some can even run on Dexron III (a petroleum product) or on diesel created in a garage reactor from plant sources. Biofuels are not fossil fuels. Electrification of vehicles removes vast amounts of GHG from vehicle emissions, and the production source can be any non-fossil source.
The thesis of your comment, I gather, is essentially:
>that case of not using fuels, such as gasoline, while still having a need for the massive volume of heavier hydrocarbons that come out of the distillation process. Great, no gasoline. Now, where do you put that shit while you only, literally, use the bottom of the barrel? Ah, you destroy the planet. Got it.
The economics of gasoline and diesel as byproducts of petrochemical production make a rather favorable economic case for fossil fuel use, but it does not follow that the byproduct must therefore be burned or drunk. CO2 reduction will be required to prevent greenhouse heating and climate destabilization. Once energy production has transitioned away from fossil sources, smaller amounts can be used without the full-scale GHG emissions we are currently creating.
Not using gasoline does not mean "you destroy the planet"; you could, for example, return it to the fossil fuel reservoir from which it came. Seems pretty stable.
> There exist lubricants, such as PAGs, for which non-fossil production routes are possible. Likewise, there exist many plastics and synthetic rubbers that are not petro-based; ethylene, a petrochemical byproduct, is still required for a lot of plastic uses. However, this doesn't matter much: it is entirely possible to produce these things without vast GHG emissions.
Sure. However, the problem with this idea is that it is easy to pick a few things here and there that can be done without petroleum. Once you take in the entirety of industry, the range and scale of products we derive from petroleum, or that directly depend on petroleum and its derivatives, is truly massive. The world would come to a grinding halt if we stopped using petroleum.
This stands to reason. In an effort to extract more and more value out of the stuff we have become very creative and efficient at using as much of that black goo as possible for all kinds of things. We are talking about hundreds of years of research and technological evolution. It is only reasonable that, given that, humanity is as dependent as can be on the oil ecosystem.
This is what I talk about when I say that we need to stop this business of reducing reality to single variables. The consequences of reacting to a single variable while ignoring the dependency tree could result in far worse outcomes than anyone could possibly imagine.
What you mention here is certainly true, but the pertinent data mainly revolves around burning the stuff. The petrochemicals themselves aren't the problem; it's the GHG emissions, mostly from combustion.
Sorry to quote in this way; it isn't meant to editorialize, only to organize my response:
> [...]the problem with this idea is that it is easy to pick a few things here and there that can be done without petroleum. [...] The world would come to a grinding halt if we stopped using petroleum.
It's super useful and there's no reason to quit using it entirely. We don't even have to; there are many other means of doing what we need to do without releasing ancient carbon stores into the atmosphere.
Everything bound up in the petro supply chain is clearly beyond the scope of my comment (and of general education), but the only way to approach reasoning about this issue is with single examples. It's too easy for somebody to gish gallop about the huge scale of industry involved and quickly lose sight of the fact that CO2 has to be the target. NMOG, methane and nitrogen oxides as well, but mainly CO2.
Nobody will clear the table with a single "we can just X" or "we can't because we'll lose X". It's definitely much more involved, just as you say. One thing I strongly agree with you about is that nuclear energy is likely a critical help to this endeavor.
The largest global sources are, unsurprisingly, electricity/energy, transportation and manufacturing, but also consider the global hegemony that, for the most part, has turned on controlling and using this strategic resource ever since WWI. That, in large part, is all about burning it.
Maybe we need a "Bretton Woods" for carbon, as triggering and possibly unpopular as that may be. I'm certainly not betting on that turn of events, but we may just end up resorting to a different type of nuclear power if we keep the blinders on...
> It's definitely much more involved, just as you say.
That's all I am trying to convey. I am mostly sick and tired of the reduction of reality to a single variable or culprit, and then the pounding on it ad nauseam as if that is actually how we are going to solve anything. This is how one gets to stupid ideas like seeding the ocean with chemicals to promote CO2 capture. What? I don't care what anyone says; that's far more likely to kill all life on earth than save it.
Our history is full of unintended consequences of sure "solutions", like that island in Australia where they introduced one species to get rid of another. The end result is that they swapped one plague for another that might actually be worse. We can't even do something like that successfully --because reality isn't a single variable problem-- and we actually dare to suggest we can modify climate at a planetary scale? The hubris in this kind of thinking is thick and dangerous.
The single variable reduction is how we get idiot politicians like AOC pushing absolute nonsense day after day. Because it is simple, they succeed at linking <variable> to "bad" and then position their harebrained idea as the savior. This kind of thing should inspire projectile vomiting, not a following.
> One thing I strongly agree with you about is that nuclear energy is likely a critical help to this endeavor.
Nuclear has been the elephant in the room for decades. OK, I get it: plants built in the 1960s might not have been optimal. We can say that about cars, planes and even ballpoint pens; that's the history of humanity. Well, I think we can build them to be very safe these days. Don't build them where a tsunami can hit them, etc. To paraphrase, there's a list for that (or there should be).
What's interesting about nuclear is that you can simply (OK, not so simply) connect a plant to the grid and your energy and power delivery capacity instantly increases, 24/7/365. Build a 1 GW class plant and you have 1 GW, rain or shine.
Nuclear, from my perspective, is the ONLY way we can support the conversion of the entire ground transportation fleet to electric power.
Here are the results of a simple model I threw together trying to answer a simple question:
How much power do we need to support the entire US fleet of cars going electric?
The simplest assumption is one where 100% of the fleet uses 8-hour charge cycles:

Inputs:
  daily charge energy (per car): 50,000 Wh (50 kWh)
  cars:                          300,000,000
  long charge:                   8 hours
  fast charge:                   0.5 hours
  portion charging long:         100%
  portion charging fast:         0%
  % of long-chargers charging simultaneously:  100%
  % of short-chargers charging simultaneously: 0%

Outputs:
  Total daily energy requirement:        15,000 GWh
  Cars long-charging simultaneously:     300,000,000
  Cars short-charging simultaneously:    0
  Power for simultaneous long charging:  1,875 GW
  Power for simultaneous short charging: 0 GW
  Total power requirement:               1,875 GW
This isn't realistic; you are not going to have 300 million cars charging simultaneously during the same eight hours. Or are we?
If every hour we have, say, 1/8 of the entire fleet plug in for an eight-hour charge, what's the maximum number of vehicles charging simultaneously at any point in the day? The assumption is that each car charges for eight hours and is off charge for 16.
Well, eight hours into the day we will, in fact, have 300 million cars charging simultaneously. After a full 24 hours from the start of this approach, the minimum number of cars charging simultaneously will be 187.5 million and the maximum 300 million.
So, yes, at peak utilization we will have 300 million cars charging, each requiring that we deliver 50 kWh in 8 hours, which means a peak requirement of 1,875 GW.
This means we need nearly two thousand gigawatt-class nuclear power plants to support a fleet where 100% of the vehicles slow-charge.
What happens when some percentage of the fleet needs to fast charge? I am defining fast charging as delivering 50 kWh in 30 minutes:
Inputs:
  daily charge energy (per car): 50,000 Wh (50 kWh)
  cars:                          300,000,000
  long charge:                   8 hours
  fast charge:                   0.5 hours
  portion charging long:         80%
  portion charging fast:         20%
  % of long-chargers charging simultaneously:  100%
  % of short-chargers charging simultaneously: 20%

Outputs:
  Total daily energy requirement:        15,000 GWh
  Cars long-charging simultaneously:     240,000,000
  Cars short-charging simultaneously:    12,000,000
  Power for simultaneous long charging:  1,500 GW
  Power for simultaneous short charging: 1,200 GW
  Total power requirement:               2,700 GW
Now we need 2,700 gigawatt-class nuclear power plants in order to be able to deliver the power needed to support the bulk of the fleet slow-charging and the remainder fast-charging spread across the day.
TWO THOUSAND SEVEN HUNDRED nuclear power plants.
Even if I am off by a factor of ten (I threw this together and it is very simplistic), that still means nearly 300 nuclear power plants built in, say, 30 years. We would have to build ten per year, and we needed to get started yesterday.
This is the kind of thing I look at when I talk about not reducing reality to single variables. The amount of energy we deliver by using petroleum is of a scale that is hard to imagine. To go electric we have to find alternative means to deliver some percentage of that energy (a smaller percentage, because electric cars are more energy-efficient than IC vehicles) to every car on the road every day. This task is far from simple. Beyond that, the unmitigated mess that US politics has become over the last few decades virtually guarantees we cannot build a single nuclear power plant, much less ten, fifty or a hundred.
Frankly, I have no clue how this could even be possible. I think we are going to have some number of people driving electrics and, in the hubris of it all, we are going to ignore the fact that we will have to burn two or three times more coal to charge those cars every day. It has all the potential to be a larger mess than what we currently have.
I would love for someone to take the time to develop and publish a better model than my mindlessly-simple quick calculation. I know a lot of subtlety could be introduced. That said, I somehow don't think we can escape physics.
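In case anyone wants to poke at it, here is the model as a minimal Python sketch. Every input is one of the assumptions listed above, not a measurement:

    # Minimal sketch of the back-of-envelope EV charging model above.
    def fleet_power_gw(cars, kwh_per_car, long_h, fast_h,
                       frac_long, frac_fast, simul_long, simul_fast):
        """Peak grid power (GW) when the given fractions of long- and
        fast-charging cars draw power at the same time."""
        long_kw = kwh_per_car / long_h  # per-car draw while slow-charging
        fast_kw = kwh_per_car / fast_h  # per-car draw while fast-charging
        p_long = cars * frac_long * simul_long * long_kw
        p_fast = cars * frac_fast * simul_fast * fast_kw
        return (p_long + p_fast) / 1e6  # kW -> GW

    CARS, KWH = 300e6, 50.0

    # Scenario 1: 100% slow-charging, all simultaneously.
    print(fleet_power_gw(CARS, KWH, 8, 0.5, 1.0, 0.0, 1.0, 0.0))  # ~1875 GW

    # Scenario 2: 80% slow (all at once), 20% fast (20% at once).
    print(fleet_power_gw(CARS, KWH, 8, 0.5, 0.8, 0.2, 1.0, 0.2))  # ~2700 GW

    # Daily energy either way: 300e6 cars * 50 kWh = 15,000 GWh.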
You may very well be correct, we haven't moved nearly at all on this problem in 50 years, and we are really far behind.
On the plus side, moving combustion from vehicles to centralized sources does a lot for efficiency. There is a lot of variation in the efficiency of individual vehicles that is nearly impossible to control for; centralized sources can be managed much more easily than aging ICEs scattered all over the place.
That definitely doesn't address your main point that we are unprepared to convert.
Thanks for the work you've done preparing your analysis; I've read it entirely. I can sense your frustration with the general ignorance of more or less everybody wrt what we actually need to do. It's something I also feel. When examining the problem, the immediate conclusion I reach is that the entire industrialized world, and probably the rest of the world --basically all of human society-- is deeply tangled in the business of burning petroleum.
You've mentioned a lot of things that ring very true to me, such as the scale of the problem, the political boondoggle (I don't know a better word than clusterfuck, though even that doesn't really capture it) and the nearly complete lack of functional solutions, as well as a tendency for incumbent forces to prevent implementing what solutions we do have available.
There are also a lot of vested interests who frankly make a lot of money doing what we do now. The US has also benefited greatly from the fossil fuel status quo in a geopolitical sense; it's pretty much been the center of global foreign policy since WWI. "Developing" nations such as China (they are definitely developED) are using it too; they're on the same page of the usage story, and I don't see anything different there.
The only thing that is really gonna do the heavy lifting is economic need, because as much as I hate to admit it, it's the only language that is useful or understandable at all anymore.
Our conversation is really about nuclear power it seems. I recognize that, but I don't have any solutions to that impasse that we are experiencing. The one thing that I see helping that cause is, weirdly enough, alternative energy like solar, geothermal and hydro electric. Don't lose it on me yet please, I'm not trying to change the subject.
If we do end up taking seriously the prospect of implementing as much non-nuclear, non-fossil electrical generation as we can, it will have a positive economic effect on the usage of electrical power vs fossil power. This doesn't melt the enormous capacity iceberg that you have very well pointed out, but providing additional economies of scale for this kind of electric power will allow economic forces to begin to favor it vs fossil fuel.
If electrical storage becomes more necessary, we might be in a position to create a demand for additional electric resources including nuclear power. Additional development of alternative sources will drive more innovation, dollars, research and political interest into the usage and creation of this kind of energy.
When everybody wants to have solar panels which are looking more and more economically desirable, they may also invest in storage technologies that allow them to use it more effectively. This kind of thing can augment baseline electrical demand in a variety of dimensions: Politically, it will be much more desirable to create electrical sources, economically it will be easier to achieve because of greater scaling, and the technology will improve as investment increases with the demand. I suspect nuclear energy will be a better sell in a world where there is more understanding of electrical needs.
I don't know how to give nuclear energy a better PR campaign... people just don't understand why it's desirable, but it's easy to imagine how it could be undesirable. By the same token, people just live their lives with whatever is there, and that's gasoline and whatever makes electricity for them now, or whatever they feel culturally comfortable with. There is also the clear fact that oil-producing corporations have a lot of power, politically and economically, with which to do their own PR, while nuclear energy does not have giant multinationals pushing for its development and use.
It doesn't look good, that's for sure.
I appreciate the effort you have expended making your point, it has benefited my thought process.
This is a very difficult problem to tackle, this idea of a transition to a cleaner and more sustainable way for humanity to live. Like it or not, we have had on the order of a century or more of optimizing the use of oil to either directly provide or support just about everything we do and need. It is going to be very difficult to unplug from that.
What we need more than anything else are honest conversations about all of this. Sadly, the mixing of political forces (which exist only for the benefit of the political class) and industrial/business/financial forces (which, of course, exist in support of their own goals) makes this nearly impossible to address, at least on the time scale of one or two human generations. I think this is a multi-generation problem, meaning somewhere in the one-to-two-century range.
BTW, I designed and built a 13 kW ground-mounted solar array three years ago. By this I mean, I purchased all the components and physically built the structure and wired it all. I have about three years of minute-by-minute data on solar production. No batteries yet, they just don't make sense in terms of ROI, at all. Eventually, maybe.
I'll just say the solar experience has been "interesting". Homes around mine don't have nearly this size of system, and their owners likely spent two to three times the money to install theirs. I have spoken to a few neighbors who are actually sorry they put money into solar, because the size of their systems was calculated based on rates at the time. As rates have gone up, they find themselves paying to lease their solar system as well as paying a bundle for electricity.
Going back to honesty in discussing some of the issues of our time: climate change and the issues regarding atmospheric CO2 concentration often lead to the idea that we have to act immediately to "save the planet" or we are all going to die in twenty years (or whatever nonsense politicians are pushing). This is objectively false, and it is amazing to me that the scientific community does not riot against such dishonesty.
Furthermore, the idea that we just can't do anything about atmospheric CO2 accumulation can be verified armed with nothing more than very basic high school math and critical thinking.
The first thing you do is look at the graphs we have from reliable and accurate atmospheric CO2 concentration data from the past 800,000 years. Here's that graph:
You then fit straight lines to the graph in order to determine the rate of change of both atmospheric CO2 accumulation and decline. Here are my lines for the decline portion of the data:
Looking at it in rough strokes, it looks like it took, on average, somewhere around 25,000 years for a 100 ppm increase and, say, 50,000 years for a corresponding 100 ppm decrease. In some cases it took twice that long; I am just trying to generalize.
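For concreteness, here are those eyeballed slopes in code. These are my own rough straight-line fits, not published figures:

    # Rough rates eyeballed from the 800K-year CO2 record.
    # These slopes are my own straight-line fits, not published figures.
    natural_rise = 100 / 25_000  # ~0.004 ppm/year natural increase
    natural_fall = 100 / 50_000  # ~0.002 ppm/year natural decline

    # Time for the planet, left entirely alone, to shed 100 ppm:
    years_for_100ppm_drop = 100 / natural_fall  # 50,000 years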
The planet did this entirely on its own...because we were not around or we were insignificant during this time period.
This is extremely valuable data and an equally valuable conclusion because it establishes an important baseline:
If humanity LEFT THE PLANET tomorrow, it would take about 50,000 years for a reduction of about 100 ppm in atmospheric CO2.
I'll repeat that: If we left the planet and all of our technology was shut down, you are looking at a minimum of 50,000 years for a meaningful "save the planet" change in CO2 concentration.
At this point the question becomes glaringly obvious:
How does anything LESS than leaving the planet even make a dent on CO2 at a human time scale?
This is important. 50K years for 100 ppm is not a human time scale. We could very well be extinct by then due to a virus or collective stupidity. I am going to define "human time scale" to mean a century or less; in other words, something we can wrap our brains around. That also means making plans and taking action today for something that will not deliver results for, say, 50 to 100 years. Imagine the world making decisions in the 1920s for us to benefit from today. That's pretty much ridiculous on the face of it.
And yet, that isn't the problem, is it?
Because of the baseline revealed by this data we know, without any doubt, that anything less than leaving the planet cannot possibly deliver a faster rate of change, a faster decline, than 100 ppm in 50,000 years.
Solar panels all over the planet? How is that MORE than leaving the planet?
A billion electric vehicles? Same question.
No more fossil fuels? Nope.
In fact, Google Research boldly set out to show the world that a full migration to renewable energy sources could address the issue. To their credit, when they discovered just how wrong they were, they published the data. In this charged environment these researchers deserve a ton of respect. They went in --and say so themselves-- believing that renewables could save the planet. What they discovered instead was precisely what I understood through the simple exercise on this graph: that this is an impossibility. Their methodology was different from mine; the result was the same.
"we had shared the attitude of many stalwart environmentalists: We felt that with steady improvements to today’s renewable energy technologies, our society could stave off catastrophic climate change. We now know that to be a false hope"
"Trying to combat climate change exclusively with today’s renewable energy technologies simply won’t work"
"if all power plants and industrial facilities switch over to zero-carbon energy sources right now, we’ll still be left with a ruinous amount of CO2 in the atmosphere. It would take centuries for atmospheric levels to return to normal"
"<snip> to see whether a 55 percent emission cut by 2050 would bring the world back below that 350-ppm threshold. Our calculations revealed otherwise. Even if every renewable energy technology advanced as quickly as imagined and they were all applied globally, atmospheric CO2 levels wouldn’t just remain above 350 ppm; they would continue to rise exponentially due to continued fossil fuel use."
"Suppose for a moment that <snip> we had found cheap renewable energy technologies that
could gradually replace all the world’s coal plants <snip> Even if that dream had come to pass, it still wouldn’t have solved climate change. This realization was frankly shocking"
Well worth reading. Like I said, these guys deserve a ton of respect for effectively saying "we were wrong, and here's why".
Why aren't we talking about this AT ALL? This is reality, not what we are being told by politicians and zealots. Climate change has become a religion or a cult, and science has been left far behind. Here are two ways to come to the same general conclusion: one uses a super-simple look at 800,000 years of atmospheric CO2 data; the other took a detailed look at mathematical climate and other models. The conclusion was the same: we can cover the planet with renewable energy sources and do NOTHING to atmospheric CO2, or worse.
I've been trying to elevate this to some level of consciousness here on HN any time the topic comes up. It is often met with a pile of downvotes and attacks. Because, of course, they "know", even though none of the detractors bothered to devote even 1% of the time I have to understanding actual reality in a sea of nonsense.
Frankly, I am not sure what else to do. In this charged political climate it is actually dangerous to stick your neck out too far. I think you understand now that this is not --I am not-- denying climate change, I am simply saying "the emperor has no clothes" to all the nonsense we seem to be told to focus on.
I think we need to learn to live with whatever is coming. We can't do a thing about it. New industries will sprout to help us manage it. The planet will deal (and is dealing) with CO2 as it has for millions of years.
And that's the other set of questions that the graphs and some research can answer:
How did CO2 increase when humanity was not around to muck it up?
Continental-scale forest fires burning for 25,000 years, as well as other sources of CO2.
How did the planet bring it down?
Rain, storms, cyclones, hurricanes, and the regrowth of vegetation over 50,000+ years.
So we have to learn to deal with changing weather patterns, and perhaps start helping the planet a tiny bit by planting trees. Judiciously, though, because more trees could also mean more fuel to burn. In other words, if not careful, we could actually increase CO2 by planting a billion trees and creating the conditions for the mother of all forest fires.
Like I keep saying, not a single variable problem. Is it?
> Not true. Not even close. Particularly if you move away from optimal solar regions.
Convenient that you just skipped over one of the options that doesn't require "optimal" solar regions.
> In addition to that, manufacturing the grid energy storage capacity required to service planetary scale requirements will result in unspeakable consumption of natural resources (mining), pollution and environmental challenges. Not to speak of the amount of energy required to produce, ship and install such storage systems.
You seem to be making a lot of assumptions that grid storage would be some kind of electrochemical battery. That's unwarranted. There are in fact many storage options available.
> You are confusing things here. It isn't about me, or someone like me, not wanting to "invest in energy storage for whatever reason". Such loaded words too, "not wanting to", implying negative intent and "invest", implying a benefit that clearly might not be there.
No, what I'm doing is telling you that there is a benefit there. Our current grid design is frankly terrible, speaking as an electrical engineer. Grid storage would solve so many problems, make the grid far more robust and simplify the overall design, even if we still burned fossil fuels. You have no idea how much cost, overhead and difficulties there are in actively managing the grid that would just disappear with grid storage. Australia benefitted immensely from the Tesla storage, for example.
> Well, this single-variable solution to all of our problems isn't, in fact, a solution to all of our problems. At all. Start digging into what "investing" in this stuff actually means and you might come out of it thinking that coal and natural gas look pretty good by comparison.
I disagree 100%. I'm aware of the costs and benefits here.
> Setting that aside, you don't get the petroleum byproducts without making fuel.
Even if that were the case, making fuel does not entail burning fuel.
> That is not what I am talking about. I am talking about things like us "saving the planet" by reducing CO2 emissions. Single variable. Complete bullshit.
Not complete, but I agree it's not the full story of the problems we have. The error bars on what problems climate change will cause are wide, but the worst case is truly horrible. All indications are that we're doing worse than we should be, so it's not looking promising.
> Here's an example of how not "green" these things can be. Do you know what you'd have to do to grid scale battery-based storage in the winter in Nebraska or Alaska in order not to lose capacity like crazy (or, even worse, have the plant shut down)? Heat up the batteries and keep them warm. If you think this is going to happen with solar energy...in the dead of winter...with snow and blizzards. Well.
There are plenty of options for using batteries in cold climates that avoid most of the problems you describe, but I'll steelman your position and suppose that any kind of grid storage is completely unusable in Alaska and that it requires fossil fuels.
Are you really saying that because Alaska requires fossil fuels, therefore we should continue to use fossil fuels everywhere else? Because I've already acknowledged that displacing them 100% is probably not an option, but we almost certainly could get to >95%.
> No, what I'm doing is telling you that there is a benefit there. Our current grid design is frankly terrible, speaking as an electrical engineer.
Then please, pretty please, with sugar on top: You have the scientific and mathematical training to actually understand that what you are being told is complete bullshit. But you have to be willing to apply some of that critical thinking, engage in some simple research, do a little math and try to understand. And you have to be willing to leave this single-variable view of the world behind during the process.
Grid storage = good?
Current grid design = terrible?
Well, grid storage today means batteries. It does not mean Star Trek dilithium crystals, hydroelectric plants, compressed underground air, spinning masses or the myriad interesting-but-unrealizable ideas people have floated. In the US, in particular, it would take a hundred years to build just one new hydroelectric plant, much less a couple hundred of them.
Everything sounds great until you start considering scale. A town of 30K people, in an optimal solar region, full solar with batteries at every home? Sure.
I have a 13 kW system at my home that I built myself. I might add batteries at some point in the future, when and if it makes sense. Would I recommend everyone in my neighborhood do the same? No. It's crazy. This was likely the single worst investment I have ever made.
Scale IS a problem. Most definitely.
Scale means that for everyone to have what I have (or will have, if I ever install batteries) you have to be willing to do things like destroy thousands of miles of marine ecosystem to mine the very materials we need to make something like this happen at scale.
Scale is the problem. You have the background to be able to explore and understand this.
Electric vehicles. Calculate the energy we deliver into gas tanks every day through gasoline. Now calculate what that means in terms of total energy delivery EVERY DAY across the US. Make an accounting of our current energy generation capacity. Which, BTW, does not happen to have been built to support a 100% or 200% demand increase. Now translate the total energy delta demanded by a fleet of, say, 300 million electric vehicles, into, say, 1 GW nuclear power plants. The conclusion, if I remember correctly, is that we need somewhere in the order of 100 new 1 GW class nuclear power plants spread across the nation in order to support a full shift into electric vehicles.
The other problem is POWER rather than energy. In other words, you have to be able to deliver a certain amount of energy per unit time simultaneously across geographic areas. If you have 100K or a million cars charging simultaneously you are going to have to build additional POWER delivery capacity in order to service that demand, something that today (in terms of energy) we deliver using a liquid fuel.
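Here is, roughly, how that exercise goes as a Python sketch. Every number is an assumption (the gasoline figure is approximate US consumption; the rest are round numbers), so treat it as a sketch, not a result:

    # Sketch of the energy-and-power exercise described above.
    # All inputs are rough assumptions, not measurements.
    GALLONS_PER_DAY = 370e6  # approx. US gasoline consumption per day
    KWH_PER_GALLON = 33.7    # energy content of a gallon of gasoline
    EV_FACTOR = 0.25         # assume EVs need ~1/4 the energy per mile

    chemical_gwh = GALLONS_PER_DAY * KWH_PER_GALLON / 1e6  # ~12,500 GWh/day
    electric_gwh = chemical_gwh * EV_FACTOR                # ~3,100 GWh/day

    # Energy side: average generation if charging were spread over 24 h.
    avg_gw = electric_gwh / 24  # ~130 GW, i.e. ~130 one-GW plants, 24/7

    # Power side: simultaneity dominates. The 12 million simultaneous
    # fast-chargers from my earlier scenario, at 100 kW each, alone need:
    fast_gw = 12e6 * 100 / 1e6  # 1,200 GW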
When you start considering scale and get beyond single variable reduction of reality, things look very different. Coincidentally, the Los Angeles Times published an article about the mess we are walking into due to our drive to move into electric cars:
"A mining permit pushed through in the last week of the Trump administration allows the Canadian company Lithium Americas Corp. to produce enough lithium carbonate annually to supply nearly a million electric car batteries. The mine pit alone would disrupt more than 1,100 acres, and the whole operation — on land leased from the federal government — would cover roughly six times that. Up to 5,800 tons of sulfuric acid would be used daily to leach lithium from the earth dug out of a 300-foot deep mine pit."
Seriously?
And this is for JUST a million batteries per year. I assume they mean cells rather than full car packs. Whatever the case may be, at scale this is horrific. Yes, I said "at scale", because a million per year is not scale. We need many times that, thousands of times that amount, if we are going to electrify the global transportation fleet. If we also want to use the same materials for grid energy storage, the problem, again at scale, quickly reaches apocalyptic proportions.
All I am asking you to do is to invest the time to really get into the details of the issue and use the math and science you understand to develop a true sense of proportion.
My guess is that the current path to electric energy storage is, at scale, a seriously flawed idea. I am not a chemist, so I can't propose an alternative that would be more benign at scale other than to say two things:
First, we need to be very careful and not allow politicians and various interests to lead us by the nose into causing a global disaster of unimaginable proportions.
Second, I have a sense --and the hope-- that a bright young scientist might just discover a path to store and deliver energy in liquid form in a way that will not have us resort to such things as strip mining the oceans and dumping millions of tons of sulfuric acid into mines to leach lithium out of them.
I don't have the answers. I just know we are probably being led down a path that could be far uglier than pumping petroleum out of the ground. We are making dumb decisions, like cancelling the XL pipeline...which means oil will have to be trucked...which will burn millions of gallons of refined diesel fuel to move petroleum at a horrific loss in efficiency. In the meantime, we have no problem dumping thousands of tons of sulfuric acid into mines to get lithium for "clean" electric cars and storage.
I agree scale is the big obstacle, but I think you're kind of doing the same thing you're accusing others of doing: reducing solutions to single variables, like thinking lithium batteries manufactured using dirty methods are the only or primary grid storage solution.
A mixed energy economy with a variety of energy storage solutions can address the various needs, and you must also take into account the evolution of technologies and economic incentives.
For instance, the energy delivered each day to cars by chemical means is indeed huge, so you naturally conclude that we can't possibly deliver that much energy using known batteries and renewable technologies. Let's suppose that's true; there are still some factors that will affect how this actually plays out:
a) ICE vehicles are very inefficient while electric vehicles are much more efficient, so the total energy to deliver is significantly lower than what fossil fuels deliver now,
b) vehicle utilization is also inefficient, in the sense that there's no real need for everyone to have their own car and drive point to point; improved public transit is one solution, but so is something like Uber Pool, improving the utilization of the delivered energy and thus reducing demand further,
c) as for storage alternatives, solid-state batteries are very close, and numerous other storage options already exist and are deployed, with improved variants being tested around the world; I think the Wikipedia page on this is decent, in particular if you pay attention to the tech that's already deployed and running [1,2].
I agree that this is a huge problem because fossil fuels are so heavily embedded in our economy in so many ways, but I don't think it's insurmountable. If enough of us can get on the same page that some outcome needs to happen, we can get pretty far pretty fast. We need a Manhattan project level commitment to this, and the country that does this well will have a huge first mover advantage to sell this to other countries. Vested fossil fuel interests are preventing this though.
10 years ago I would have been 100% with you that nuclear power was the option we should be investing in, but given the precipitous fall in the cost of renewables, I don't think nuclear is a slam dunk anymore. Nuclear still has its place in the move away from fossil fuels, and the renewed interest and new reactor designs seem promising.
Have you considered the flipside, which has much more evidence for it, that denialism is being promoted by institutions and speakers who are by proxy funded by the fossil fuel industry, which is one of the largest industries on earth?
I really recommend, if you want a level-headed, conservative, independent look at the issue, that you check out some of these videos by Potholer54, a retired geologist and science reporter.
You know, I've seen an increasing trend towards mediocrity and outright fraud across multiple public and private institutions that reward individuals based on some power law.
If the author of the most cited paper in a field is going to get all the grants, and a standard "useful" paper is going to get no continued funding, then researchers will push to make their work sensational. Eventually the professors and everyone left in the field are fighting sensationalism with either outright fraud or alternate funding sources.
Same goes for VC funding of startups, employees at companies, and government programs. The baseline "useful" work is rotting away in favor of aiming to be the top ~5-10%.
In other words, if your goal is to craft a paper that draws attention, that's harder if you're being meticulous about limiting yourself to saying what the evidence shows is clearly true.
Unfortunately this will be a baseline that keeps moving. Scientists compete on publications, so they will make ever more sensationalist claims; there will then be more sensationalist headlines to compete with, forcing fraud or irrational exuberance.
You suggest how scientists can act, their motivations, their freedom to disagree with the status quo, or those higher up the ladder in their field etc. You suggest how scientists will lose grants if they produce inconvenient science, etc etc. So I ask again. Are you a scientist? Have you written a grant application? Have you been punished? I AM a scientist, and I have never seen this hypothetical world.
From what I saw while pursuing a PhD, what he says is true for some fields (I was in a hard science).
Disagreeing with the status quo is given lip service, but in my particular subdiscipline I saw trendy theories come and go. It certainly was more work to get published if your paper suggested the trend was not true (more likely to argue with referees, etc.). This is common when experimental data is not available.
> You suggest how scientists will lose grants if they produce inconvenient science
Again, it was not a direct connection in my discipline, but trendy work is easier to get funded, and if you counter the trend you'll have trouble getting papers published in good journals, which affects your funding.
In my time there, the common advice to new faculty members was "Do fun/risky research only after you get tenure."
It's a matter of your field of study. Climate change, in particular, is a minefield.
I am not saying what I said in a vacuum; this came out of conversations with actual climate researchers and scientists I had the opportunity to meet during the course of my work in aerospace. The context was a project where we were developing various systems and payloads for both the International Space Station and, eventually, the Artemis mission to the moon.
The message was quite clear: In a politically charged environment you have to be very careful not to ruffle feathers. Anything from your career to your funding could be at stake.
No, you are not going to find google-searchable interviews with such folks, again, that would be professional suicide.
You can find information that helps one understand some of the forces at play. While not a full picture, it's enough to support the plausibility of my claim. You don't have to believe me, of course. It's your prerogative.
Moreover, there is really no such single thing as "climate science"; there is atmospheric physics, chemistry, meteorology, physical oceanography, marine science, biology, paleobiology; the list goes on.
Use Google Ngrams to look for "climate science". It basically doesn't occur before around 1982.
This actually happens far too often, with enough universities to make me uncomfortable. A lot of universities lack funding, even after government aid, which by the way influences what universities teach to a degree as well. I would love to see more transparency regarding the funding of scientific studies.
I guess if you don't even feel comfortable sharing the identity of any of the parties, we really have no chance to identify bad actors from good as a community.
"Naming and shaming" comes with risk - probably a lot more risk than upside. The risk is that the parties get something actionable, kind of like "probable cause" in criminal cases. The upside is maybe a small effect on a few HN minds that might remember this when considering this Uni's reputation.
The other risk is that it's an act that can easily be abused. It is very easy to level charges against someone without proof; somehow we tend to believe the first salvo (myself included). In this case it sounds relatively straight-forward, but it really is irresponsible to take a stranger's word for it.
So, if you really care, you might reach out to the OP and get the details. That eliminates the downside risk to the OP and acts as a shibboleth that ensures only people who actually care enough to look into it know the details.
I actually bought one of the devices and took it apart on video. There is nothing inside except 4 magnets and a plastic printed card. No active circuits and no power source and no components such as resistors or capacitors and certainly there is no microchip inside.
edit: below are links to pictures of the device and the accompanying text as originally posted on Facebook
=======================================
Hi Powerinsole, I ordered one of your power chip devices and took a detailed look at it. My analysis is as follows.
1. There is no battery and no system for harvesting energy. All electronic circuits require an energy source and a lack of an obvious system for powering the device is a problem.
2. There are no components such as integrated circuits, transistors, capacitors, diodes or inductive devices that would be required to create a "circuit" or "chip". A "chip" is not just a random configuration of tracks. The tracks are there to transfer electricity between components that shape and switch the electric current according to purpose, but given that there are no components, what are the tracks for?
3. There are 4 magnets. Probably neodymium. They produce a constant magnetic field. They do not generate "frequencies". The device sticks to a metal wall like a fridge magnet and doesn't vibrate.
4. The tracks are configured in such a way that even if components were attached at the "solder points", nothing would happen, because the tracks are all shorted together. Electricity always takes the easiest path. If all the tracks are shorted then the components will receive no energy input.
5. After testing with a multimeter I found that the tracks on the "circuit board" do not conduct electricity. If the tracks do not conduct electricity there is no possibility of transferring energy to components (and there are no components).
6. The magnets are isolated from the "tracks" and from each other by a plastic layer and glue. It is not clear what relationship the position of the magnets might have to the tracks.
7. There is no NVRAM, magnetic storage, optical storage, ROM or other known system for storing information, so claims from PowerInsole that they load information onto the device are difficult to comprehend.
8. There is no crystal or LC circuit to drive an oscillator. Even if there were, there is no battery to drive it, the tracks are all shorted, and the tracks do not conduct electricity.
Given the above observations I find it difficult to believe that the device can function as advertised. What you essentially have is 4 small magnets on a printed card in a gel cushion for 69 euros.
If I am wrong about any of the above I would be happy to have a respectful and open discussion about your technology.
Somehow I'm not surprised the University of Salzburg is involved. I remember a "less than stellar" experience with them that I was tangentially involved with.
It involved a research project that they stopped funding, but didn't want to let go. The only researcher who continued to work on the project wanted to take the project to a different institution where the project could get funding, but Uni Salzburg refused and said it's their project. They would rather have the project be abandoned than let it thrive somewhere else. If their name wasn't on the project anymore, they would rather have it die.
And not to forget, Uni Salzburg was also home to our most famous case of scientific misconduct where Robert Schwarzenbacher fabricated measurements using simulation software. The handling of that case was also interesting (they terminated his employment after verifying the accusations of fraud, but one guy from the union tried to convince someone from HR to delay some paperwork so they could later challenge the termination... crazy stuff)
Thank you. What a load of infomercial buzzword nonsense. The last part about some “further research and experience” showing even better results if the product is worn all the time is especially galling.
But one should look into Dartsch Scientific if you want to get deeper into how this is all organised: https://www.dartsch-scientific.com/en/ They have produced papers for Powerinsole and are used by other companies in the magnetic magic industry to provide a veneer of scientific credibility. For example https://waveguard.com/en/studies/
They had me at the words "power insole". Though I guess not in a good way. I expect a fairly meticulous definition of terms, and if a piece of research sets off my marketing-BS sensors this early, it's usually all downhill from there.
When you want to do proper work, your grants and papers get rejected because they are not innovative enough or don't go far enough. So it is not a surprise that people who lied in their applications about what they can realistically do also lie when it comes to reporting results. Unfortunately there is no way out. I stopped counting how many reviewers of my grants disagreed on what was proposed, one saying that it was not innovative, the other saying that it was too risky to use this approach. We have a big problem in science: peer review is broken and everything relies on it. And many reviewers are way out of touch with what happens in their field; I see reviews that clearly show the reviewer was asleep for the last 10 years.
Furthermore, universities tend to require tons of publications to promote you. Things are spinning out of control. I know a few EU countries where the written norm is to need > 100 publications to qualify for a full professorship, with equally ridiculous requirements for associate and assistant positions.
Obviously, this encourages and rewards completely broken practices. Many associate and full professors in my area only care about stamping their names into as many journal articles as humanly possible. Some of them are already beyond 500, with many of these in top tier journals (Nature, Science, Cell, NEJM). Obviously, they hardly ever contribute anything. Their serfs do all the work. Their job is basically to plot in order to stay on top of their neofeudal shire.
In addition to this, funding bodies do nothing after fraud has been proven. ERC only terminates grants on rare occasions. https://forbetterscience.com/ discusses many cases of serial fraudsters who keep getting funded despite having retracted 10 or 15 articles in major journals.
One of the clearest examples of the publishing problem, to me, was the shift in meaning of last authorship over the course of my career. When I first started, last author meant the person who had contributed the least to the paper (in cases where ordering of effort can actually be determined; often there are genuinely equal contributions). Often this was the senior faculty member, as they did little but read over the paper or perhaps supervise someone functioning independently.
Over time, though, last author came to mean "the most senior person" and then "the person whose idea it really is". So being last went from a position nobody wanted to one people would argue over, and in the more manipulative cases, people would casually say "oh, I can be last author", realizing the gains from that position.
It seems when a more junior person is doing all the work and is first author, an unscrupulous senior researcher will claim that "it's the idea that counts"; when that senior researcher is first author, it's "ideas are a dime a dozen, it's getting it done that matters."
> Many associate and full professors in my area only care about stamping their names into as many journal articles as humanly possible. [...] Obviously, they hardly ever contribute anything. Their serfs do all the work.
This describes my lab's head perfectly. At first I found it strange that he was so angry about a side-project paper I wrote alone, quickly, in my free time, and asked to publish at a conference. Then I understood why: in his view, every minute I spend on my projects is one I don't spend on his projects. The same guy approved my first journal paper submission, which had his name on it, without even reading it. That was obvious from the lack of comments, and from the fact that a few days later, during a lab meeting, he asked to change half the content of the paper...
I'm not against putting the names of people who contributed to the research, even slightly and informally, but at this point it is pure leeching and exploitation. Then he wonders why my thesis isn't progressing (hint: when I chose nothing about the topic, method or experimental setting, I'm not really motivated to work on it).
Yep. The incentives in science are all wrong. To maximize your chances of publication (i.e. keeping your job), you have to make the most outlandish claims you can possibly maybe defend. Additionally, the complexity of data/analysis is increasing every day while also the esoteric domain knowledge required to make any progress is deeper and more specialized.
Not enough people realize that science and academia are just as prone to organizational politics and corruption as everything else. Peer-reviewed studies are great, but just because something was published doesn't mean it represents "The Truth". And sadly, being skeptical of studies makes you appear less credible in arguments.
When I was an economics RA, literally half of the econ professors didn't work Fridays and barely worked summers. It was incredible: you get paid $150k with that kind of schedule.
Interesting. In my experience professors might not be on campus teaching one or two days a week or in summer, but that's only because they are working their ass off from home, writing grant proposals, reviewing papers, doing basically anything to get funding, and trying to find time to manage their own research.
I used to think it was a great gig too, since most professors had one or more small businesses on the side. Then I realized they have those businesses and consulting companies because that means they can also apply for small business grants (which they use to subcontract the research out to the university) in addition to the normal academic research grants. If you also count teaching, then that means those professors are working three jobs for one salary.
I made the decision that I'd rather make 50% more working in industry doing easier (if boring) work.
I'd guess that's pretty field dependent. What you're saying matches my experience with biology profs - technically once they get tenure they could chill out, but then they wouldn't have any funding for research anymore, so they wouldn't be able to do much of anything in their field.
In CS I saw more of a mix though. It's feasible to fund a small research group without busting your ass, and it also seemed to me that putting time into coursework, writing books, etc. was culturally a more acceptable use of time in that department than it was in biology.
I knew a few CS PIs that actually purposely scaled back their research once they got tenure because they were more excited about teaching and some of the educational initiatives the school was working on. That's not the norm of course, but I literally can't imagine that ever happening in a bio department lol.
Agreed, you can definitely work from home, but you're being quite charitable -- "let's assume they're behind closed doors with no accountability and doing everything we expect".
This is basically how we ended up in the replication crisis to begin with. There's little to no accountability and people assume they're working hard and being honest.
5% of all degree-granting universities are R1 or R2 research universities, e.g. Harvard, Stanford, etc. The vast majority of professors aren't obligated to conduct research or apply for grants.
I agree there needs to be more accountability, but IME some of the worst offenders with bad studies were the people putting the most hours in the lab. So I wouldn't conflate hard work and honesty here - most of the dishonesty isn't about avoiding work, it's about misrepresenting negative results, which everyone gets. The people most obsessed with their academic status are the ones to be wary of.
That’s surprising to hear. All the professors I’ve known (a bunch, mostly in math and engineering), while they all have some problem at some level, are all very hard working. They definitely work every work day, and often much more.
Worse yet, it compounds. The people approving grants, seeing all these amazing results promised, will then raise the bar for what kind of results you're promising. Which means the next batch of promises will need to be that much more extreme to get approved. It's a race to the top... or the bottom, depending on your point of view.
I'm sadly amused by all this. The complaint I hear about privately done research is it's all tainted by the profit motive, and so research should be funded by the government, as then it'll be pure and untainted by selfish motives.
Of course, government-funded research is just as tainted by selfish motives, if not more. Even worse, the people who make the funding decisions aren't spending their own money, so they have little reason to care.
At least with privately funded research, the people providing the money aren't going to fund bullshit fake research. This is why market systems work better than government systems.
I think there are two separate versions of "private research" that people below are responding to. In one, a company has a problem and they pay researchers to work on it. The key metric is solving the problem, or making progress on it, depending on the time scale - good orgs invest across different scales (usually from 3 months to 5 years at most). In this case there is little room for fraud or deception, though it goes up with time scale because of how you frame early results. (I work doing applied research for companies, and they want and will only pay for something they can use to improve their business. Actually, a lot of my time is spent helping make a clear connection between how research findings will move the needle on business objectives.) I think it is this kind of research you, the parent, are referring to.
There is also "sponsored" research as others have pointed out, that is more of a bought study that a business hopes they can use for marketing. These have a big conflict.
I agree that government is probably the worst system in most cases. It's the same kind of "picking winners" that doesn't work in corporate funding. I'm from Canada, where our tech industry basically runs on subsidies, and very little escapes the bubble of trying to get more government funding and actually becomes self-sustaining.
Personally, I have seen there is a legit appetite for corporate-funded research that advances the company's goals. As an academic, I would rather seek out companies for funding, knowing that I'm working on something that someone wants, and not trying to optimize for government priorities. I'm coming at this from a hard science perspective. I imagine the dynamics are very different for drug trials or other efficacy-type studies, which are maybe more relevant to this discussion.
Good points, but there's another wrinkle. If a company pays a research institution to do a fraudulent study, the research institution risks losing their status as a reputable research outfit, and thereby loses a multiple of that as other companies avoid funding them.
A prestigious reputation is like glass - easy to break, very hard to put back together.
You'd think this would work with government funding, too. But it appears it does not. It could be because one's "reputation" is based on how many papers are published and how many cites. This is like rating a programmer on how many lines of code written.
Cites are supposed to be more analogous to how many times a programmer's library has been used.
Unless gamed, it _is_ a useful measure.
But if your programming ability were measured by how much your libraries are used (e.g. in hiring, determining salary, seniority...), there would be every incentive to aim for your library to be used as much as possible, even when it is superfluous.
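To make the gaming concrete: the usual aggregate citation metric is the h-index (it comes up later in the thread), the largest h such that h of your papers have at least h citations each. A minimal sketch in Python - purely illustrative, the function name is mine:

    def h_index(citations):
        # Largest h such that h papers have at least h citations each.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank  # this paper still clears the bar
            else:
                break
        return h

    # Two very different careers can score identically:
    print(h_index([100, 90, 5, 4, 3]))  # 4 - a few heavily cited papers
    print(h_index([5, 5, 5, 4, 4]))     # 4 - many moderately cited papers

And like library-usage counts, it's gameable: self-citation rings and salami-sliced papers inflate it the same way superfluous dependencies inflate download numbers.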
I think the timescale point is an important one. For long timescales (and how long counts as "long" has changed over the years), government might be more likely to invest. Imagine the project of sending people to the Moon: at the time, no private investor in the world could rival what the Soviet Union or USA could dedicate to those projects. The price has gone down in a sense (or we've got richer individuals/companies), so you do see private investors in the field today, but there are always projects of similar scale that might never happen if business returns are too far off.
That's the same problem in disguise. The reason they don't do the "our product is great" research themselves is because if they did people would switch their brains on and properly evaluate it. They pay universities (i.e. government funded organizations) because of the false belief in our society that government funding means universities are neutral, trustworthy, competent research institutions, when in fact they are really quite corrupt and filled with easily bribed researchers who will publish basically anything if it means they get another paper or grant out of it.
If/when the perception of government funded researchers finally aligns with the reality, businesses would stop doing that because there'd be no reputational misalignment to exploit.
I'm talking about incentives here, and people do things almost entirely on selfish impulses. Money is a powerful motivator, and people are strongly motivated to not spend their own money on bullshit. That motivation is absent when government funds things - but other motivations remain.
Pulitzer Prizes have been awarded for work later shown to be complete frauds. Those severely damaged the value of getting one. I know I don't attach any respect for Pulitzer Prizes.
> At least with privately funded research, the people providing the money aren't going to fund bullshit fake research
They absolutely are if it helps them promote something. Cigarettes and asbestos industries helped produce plenty of fake safety studies.
The problem is that research has been marketized; you have to "sell" your proposal to get funding, so naturally you big it up as much as possible. And thus the incentive to fake results.
If you are personally funding Professor X to do some research say, on making a better LCD display, and Professor X comes up with nothing but personal aggrandizement passed off as "research", are you personally likely to fund him some more?
I seriously doubt it. Any more than you'd continue taking your car to an auto shop that took your money but didn't fix it.
It doesn't work like that. You send in a proposal. A group of reviewers (usually never the same people) evaluate your work, often with half of them having never heard of you. They look at your story and your metrics (papers, patents...) and assign a bunch of scores (societal interest, innovation, approach, personnel...), then the jury convenes, discusses a bit, and reviewers change their scores if they want. The head of the jury then takes all of that, combines it, and writes a report. In practice I've seen big abuses at every stage: friends who help other friends get grants; enemies who torpedo their enemies' grants; thieves who kill a grant and then develop the work themselves afterwards (that one we reported twice, for two separate events, to the funding agency, which just removed that person from jury panels, nothing else). In the end, no one cares if you failed to achieve what you promised, because nobody sees the grant (it's not public unless you submit a FOIA request) or remembers it five years later.
> governments are at minimum accountable to voters
Voting on how government spends money is in no conceivable way like you deciding how to spend your money.
> Private money is in no way accountable
It's accountable to the people who are providing the funds out of their own pockets. People do not like wasting their own money.
I bet you look at your own budget. You have to, otherwise you'll be in jail for bouncing checks and tax evasion. I also bet you've never looked at your city, county, state or federal budget. It's other people's money, so who cares!
> I also bet you've never looked at your city, county, state or federal budget.
I have. Not in great detail though. The problem is I can't really do anything about it. Even if I find something bad and by lucky chance get people to care (there are plenty of slow news nights) - there are far more bad things in the budget than I can expose before people get tired of the corruption and give up listening. I try to elect politicians who will do something about it - with low success: people who benefit from any specific spending are more powerful than people who are just against waste in general. That is assuming I can get my person on the ballot in the first place (low odds), and that they don't realize once elected that reelection (read: power) comes from handing out pork to those who want some specific waste. There are more things that make it hard - I've just scratched the surface.
Pork is hard to figure out. Is spending money on something that isn't broken yet good money or bad? I've seen perfectly good buildings get needless remodels, and I've seen perfectly good buildings suffer because they were never maintained. I've seen towns put in sewer systems they don't need, and other towns fail to put in a sewer system until it was an expensive emergency. Flint had 40 years to replace the lead pipes in their water system - or they could have invested in water-treatment chemicals that keep lead from leaching out of the pipes, for much less money even over 40 years (you can pick anything from 60 to 30 years ago as the date when "lead is bad" became known - 40 was my somewhat arbitrary pick).
> but governments are at minimum accountable to voters.
Governments are at a minimum accountable to the people willing to use force against the government if they are sufficiently displeased. They may also be accountable to voters qua voters, depending on whether they have voting at all and, if they do, what options are presented to voters and how fairly votes are counted - all axes on which governments vary considerably, with many falling into ranges resulting in little or no accountability to voters.
The problem is not that people lie on their applications but that these people are now being judged by people who lied on their applications some time back. The lying has been institutionalized, and it leaves few resources for small but meaningful progress.
The "way out" is to have severe, lasting professional repercussions for those creating these fraudulent studies. If the most egregious offenders found it hard to get any job at any institution, and those with state licensure oversight (I'm thinking primarily of physicians) lost their license to practice, you would see instances of this dry up almost overnight.
> When you want to do proper work, your grants and papers get rejected because they are not innovative enough or don't go far enough.
Not being innovative enough isn't the root cause though. The real issue is there isn't enough funding to go around, and so the bar is higher than it needs to be. Available research funding in the US is a paltry sum considering the aggregate ROI of discoveries and technologies that originate in universities. Funding rates can be as low as 10-20%, with thousands of researchers competing for the same grants. They need to all paint a tortured story of how their idea will be the next big invention.
The problem with our system is that we put public money into research, which is then commercialized by corporations and sold to consumers, and corporations/universities end up capturing the profits. Those profits are then invested in ways that yield short-term returns instead of being reinvested in research.
Some of those profits are supposed to come back to the government and reinvested in research, but more and more corporations (and I consider universities to be a kind of corporation with the way they act like hedge funds that do education as a side hustle) are figuring out how to keep as much of those profits as possible, despite those profits only being made possible in the first place due to publicly funded research.
What if we increase funding into research? VCs are willing to pour millions into ridiculous or tenuous ideas because they know a single success will more than make up for the duds. Lower the stakes, make funding more available to researchers, and then maybe we won't need to squeeze every bit of "innovation" out of every research dollar. Make room for research that fails or yields a negative result. This is important work that is valuable and needs to be done, but there's no funding for it. We could double the amount of funding for e.g. the NSF and it would still be a drop in the federal government's proverbial bucket.
I get the sense from colleagues and from visiting different universities that this varies across the US, Canada, the UK, and the EU, but grants are now the bread and butter at most US universities. It's not really enough to publish hundreds of articles or have a high h-index; what counts is bringing in money, even if it's not strictly necessary for your research.
Part of the reason we have the problem you're mentioning is not that there isn't enough money to go around, it's that universities (at least in the US) now depend on inflated costs to function. The costs of research are kicked down the road to the federal government, and the research itself is seen in terms of profits rather than discovery. So if you have all these universities essentially telling researchers their jobs are on the line if they don't bring in profits, you're going to have everyone scrambling to bring in as much money as they can. It's not just postdocs or untenured research professor lines, it's tenured professors as well, whose income can be brought down below some necessary standard of living, or who can have salaries frozen or resources cut.
I was thinking about this the other morning. I had a grant proposal that the program officer was really excited about. This program of research could probably be conducted for almost nothing, because it involved archival data analysis. If you put a dollar amount on the time, it might realistically cost around $250k USD, maybe $500k max, pretty generous in terms of staff effort. However, the university managed to inflate the budget ask to around $2 million, for the sole purpose of indirect funds.
When you have that kind of monetary incentive (carrot or stick), of course you're going to have thousands of people applying for each opportunity. It's what led to the graduate student ponzi scheme, inflated numbers of surplus graduates, and so forth.
It all trickles down too, in terms of research claims, p-hacking, and so forth.
There's a place for profit, but there's also some realms where it does nothing but corrupt.
The problem here is not profit but the reverse, the corruption comes from the absence of profit.
Universities and grants are this firehose of tax money being sprayed everywhere without even the slightest bit of accountability in how it's used. The government effectively "loses" all of it in accounting terms, but because it's tax it doesn't matter. The buyer is blind and doesn't even bother looking at the papers they've paid for, let alone caring about the quality.
Now go look at the results coming out of corporate labs when the corporates actually want to use the tech. You get amazing labs that are consistently re-defining the state of the art: Bell Labs, DeepMind, Google Research, FAIR, Intel, ARM, TSMC etc. The first thing that happens when the corporate labs get interested in an area is that universities are immediately emptied out because they refuse to pay competitive wages - partly because being non-profit driven entities they have no way to judge what any piece of research is actually worth.
> Universities and grants are this firehose of tax money being sprayed everywhere without even the slightest bit of accountability in how it's used.
This is definitely not true, recipients of grants are heavily restricted on what kind of things they can spend that money on. I can't even fly a non-domestic carrier using grant money without proving no other alternatives exist.
Do research projects sometimes fail to deliver? Yeah. But that's just the reality of doing research. The problem I see is people expect research to be closer to development, with specific ROIs and known deliverables years ahead of time. Sometimes in the course of research you realize what you said you were going to do is impossible, and that's a good result we need to embrace, instead of attaching an expected profit to everything.
> Bell Labs, DeepMind, Google Research
I don't know so much about all the labs you listed, but just taking these three, they certainly don't have a good feeling for what their research is worth either. Do you think Bell Labs fully comprehended the worth of the transistor? For all the research Google does, ad money still accounts for 80% of their revenue. DeepMind is a pretty ironic choice, because Google has dropped billions into them and it's still not clear where the profit is going to come from. So it's not clear that anyone, even those with a profit motive, has any way to judge what any piece of research is actually worth.
But that's not to say there's anything wrong with that... that's just how research works. You don't know how things are going to turn out, and sometimes it takes a very long time to figure that out. This is why massive corporations like AT&T, Intel, Google, Xerox, MS etc. are able to run such labs.
> The first thing that happens when the corporate labs get interested in an area is that universities are immediately emptied out because they refuse to pay competitive wages
I've seen this happen first hand. In my experience these researchers usually go on to spend their time figuring out how to get us to click on more ads or to engage with a platform more. In one instance, I remember one of my lab mates being hired out of his PhD to use his research to figure out which relative ordering and styling of ads on a front page optimized ad revenue for Google. They paid him quite a lot of money to do that, and I guess it made Google some profit. But is the world better off?
They are restricted in trivial ways that are easy for a bureaucracy to mechanically enforce, as is true of employees at every institution.
What I meant by accountability is deeper: people are not accountable for the quality or utility of their work, hence the endless tidal wave of corrupt and deceptive research that pours out of government funded 'science' every day. These researchers probably filled out their expenses paperwork correctly but the final resulting paper was an exercise in circular reasoning, or the data tables were made up, or it was P-hacked or whatever. And nobody in government cares or even notices, because nobody is held accountable for the quality of the outputs.
Whilst it's true that DeepMind is not especially interested in profit and is just doing basic research, Google itself is an excellent example of how to seamlessly integrate fundamental research with actual application of that research. That's what profit-motivated research looks like: just this endless stream of innovative tech being deployed into real products that are used by lots of people, without much drama.
We have come to take this feat so much for granted that you're actually asking if someone working on ads is leaving the world better off. Yes, it is. Google ads are clicked on all the time because they are useful to people who are in the market to buy something. Those ads are at the center of an enormous and very high-tech economic engine that powers staggering levels of wealth creation. If I understand correctly, a lot of academic papers are never cited by anyone - a researcher who optimises search ads by just 1% will have a positive impact on the world orders of magnitude greater than that.
> What I meant by accountability is deeper: people are not accountable for the quality or utility of their work, hence the endless tidal wave of corrupt and deceptive research that pours out of government funded 'science' every day. These researchers probably filled out their expenses paperwork correctly but the final resulting paper was an exercise in circular reasoning, or the data tables were made up, or it was P-hacked or whatever. And nobody in government cares or even notices, because nobody is held accountable for the quality of the outputs.
Have you ever received and administered a grant? I have to ask. You seem pretty certain about how it works, but it just doesn't match with my experience.
You say that there's no accountability in results and this leads to people committing fraud. In my experience, fraud happens when there is too high of an expectation that researchers can't meet. Let's say you get a $5 million grant to do X, and in the course of doing X you find out it's not possible. You have a negative result. First of all, good luck publishing a negative result. Without that publication, good luck getting the next grant.
High expectations for results incentivize fraud. There should be room for researchers to come up short with their research and still be able to progress in their careers. But when grants dry up, the publications dry up and then your career is derailed by failing to get tenure.
The fact is that not everyone can be researching world-changing technologies. That's just not a realistic expectation. Even Google can't do that, as much as you laud a profit motive (how many Google projects are in the trash right now?). But that's what we expect of everyone who gets grant money, precisely for the reason that people have an expectation that an immediate and tangible ROI must be demonstrated.
I don't know if you consider people at funding agencies like the NSF as part of the government, but they do notice when projects fall short, and they do deny 80% of grant applications (I would consider that accountability).
I haven't, but I'm unclear why it's relevant given that you don't seem to really be disagreeing!
The NSF denies 80% of grant applications because there is a radical oversupply of people who want to be scientists and the NSF has a finite budget. That by itself doesn't create accountability any more than the fact that lots of people want to be movie stars creates accountability for actors. That's not how accountability works.
Accountability means people being held to account for illegitimate acts. If it existed it would look like this: we (the government) gave you money to deliver some genuine research, yet you delivered a paper that simply modelled your own beliefs, cited a retracted paper and another paper that actually disagrees with the claimed statement, used an input dataset too tiny to achieve statistical significance, looks suspiciously p-hacked; you then misrepresented your own findings to the press, and by extension the government; and to top it all off, it doesn't replicate. We will therefore prosecute you for research fraud and failure to meet the terms of your contract.
What actually happens is this: nothing. Journals will happily publish papers with the problems I just listed, universities praise them, the 'scientists' who do this stuff proceed to get lots of citations and the government proceeds to award even more grant money because who are they to argue with citations.
As you admit, fraud is everywhere in science, supposedly due to "high expectations for results". But so what? Lots of people, non scientists included, have high expectations for results placed upon them. The right mechanisms and incentives to stop fraud are not simply having low expectations of scientists, that's absurd and wouldn't be seriously proposed as a solution in any other area of society. It'd be like saying the answer to fraudulent CEOs fiddling the books is to simply stop expecting them to turn a profit, or like the solution to shoplifting is to just stop expecting shoplifters to pay for things.
Expectations on scientists are already rock bottom: large chunks of the research literature doesn't even seem to replicate, other large chunks are not even replicable by design, and nobody seems to care. You can't get much lower expectations than "We don't even care if your claims replicate" and yet this ridiculous suggestion that the solution to fraud is to give fraudsters even more money keeps cropping up on HN and elsewhere. The solution to fraud is tighter contracts to ensure the rules are clear, and systematic prosecutions of people who break them.
> I haven't, but I'm unclear why it's relevant given that you don't seem to really be disagreeing!
It's relevant because you are criticizing the process but you don't seem to understand how it actually works. Your original characterization was that grant money is "this firehose of tax money being sprayed everywhere without even the slightest bit of accountability in how it's used." The reality is, when I get grant money I need to account for how every dollar is spent, and if there are ever any questions about spending, I better have the receipts to back it up. The other reality is, I only get to spend a small fraction of a grant award, as the University takes most of it off the top, and my students take almost all the rest in the form of a tuition remission and a stipend, leaving whatever is left over for equipment and conference costs. Then there are strict conflict of interest regulations which come with their own reporting requirements. I don't even get all of the money at once; I'll get some up front and then I have to show significant midterm progress in order to get more. There's accountability at multiple layers by multiple organizations.
> The NSF denies 80% of grant applications because there is a radical oversupply of people who want to be scientists and the NSF has a finite budget.
It's accountability in the form of: if you didn't do what you promised you'd do, then you don't get any more money and your career is derailed. Isn't that what you want? Anyway, do we have an oversupply of scientists relative to the amount of science that needs to be done? I don't think so. The NSF budget is finite, but it's also embarrassingly minuscule given the upside of research that has come out of NSF-funded projects.
> If it existed it would look like this ... We will therefore prosecute you for research fraud and failure to meet the terms of your contract.
Fraud is one thing. I'm not going to say we shouldn't prosecute fraud. But treating a grant proposal like a contract with positive deliverables (no negative results allowed) is the exact problem. Research is not development. Research implies failure. You can't do one without the other. Failure to meet stated objectives shouldn't be met with prosecution. That just further incentivizes fraud.
If there's a replicability crisis, it just means we need to spend money on replication research. Researchers know no one is going to bother replicating their study, because there is no grant money available for redoing someone else's research. Grant agencies don't pay for that kind of thing, and you can't make a career doing that kind of research. No one gets tenure this way. If we want studies to be replicated, we need to allocate money to replicate them, and we need to incentivize people to do so by making it a viable career for a Ph.D.
> Lots of people, non scientists included, have high expectations for results placed upon them. The right mechanisms and incentives to stop fraud are not simply having low expectations of scientists, that's absurd and wouldn't be seriously proposed as a solution in any other area of society.
I didn't say we should have low expectations, I said we should have realistic expectations, and yes, that does involve lowering expectations from where they are right now. Because the current expectation is this: you have to write a proposal that has a <20% chance of getting funding. If you can't get that funding your career is basically over, so you better promise the world, because everyone else is. In this proposal you need to lay out a research plan for the next 3-5 years and convince the funding agency that your research is going to change the world as we know it. If within that time you fail to meet your stated objectives, you will find funding hard to come by, and your tenure will be threatened, meaning you will probably lose your job and have to move your family. On top of that you want to add potential federal prosecution to the stakes, thinking that will make things better.
> Expectations on scientists are already rock bottom .. The solution to fraud is tighter contracts to ensure the rules are clear, and systematic prosecutions of people who break them.
Okay, run with this idea: exactly what rules need to be clearer and exactly how do the contracts need to be tightened? Because there are already clear rules and tight contracts, yet the problem persists. Will clearer rules and tighter contracts fix it? How?
I'll tell you what I think will happen with this system: you'll chase out all of the public scientists because the stakes are too high. Already the pay is too good on the corporate side, and now you add potential federal prosecution to the list if I don't meet positive deliverables? No thanks. I'll go work for Microsoft where my research will be privatized. You might be okay with this as you pointed out you believe a profit motive is good for research, but you know who wouldn't be good with this? Microsoft. And Google. And all the other tech companies who were (or will be) built on top of technologies that started as government funded research. All this does is make Microsoft stronger. Is that what we want? What about the next Microsoft or Google? Where will they come from?
I'll give you a concrete example of where your idea fails: the 2004 DARPA Grand Challenge. Millions were spent trying to bootstrap autonomous cars, and what was the result? They all crashed, no one completed the race. What should the response have been, to prosecute everyone involved? No, they tried again and gave everyone more money. Next time around in 2005 more succeeded (mostly because they relaxed the expectations).
Then in 2007 we saw the first real demonstration of autonomous cars in the DARPA Urban Challenge. Today, everything Tesla, Google, GM, Ford, et al. are doing with driverless cars is based on the research that happened in 2004-2007. Without government funded autonomous car research, there would be no Tesla or Waymo today. That's how research works, you try, you fail, you try again, and you have no idea how far your impact will be, and really no one does. If we try to control this process toward producing only successes with contracts and positive deliverables, like it's an engineering project (with prosecution of failure and all), it just means we're going to lose dynamics like the Grand Challenges, and the broader economy will suffer for it.
Take all that money you want to invest in prosecutors, courts, lawyers, and prisons, and invest that in a system where replication studies are well funded and a viable career path for scientists. Increase funding into the NSF and other grant funding agencies to hire more people to consider grants, and increase grant throughput. I guarantee you you'll fix a lot of the problems you're identifying.
I think we are 80% in agreement but still using words differently.
> when I get grant money I need to account for how every dollar is spent
Yes I know, but that's not what I mean by accountability. Again: nobody is upset with academics because of expenses scandals or taking too many expensive flights. Well, except maybe for climatologists who supposedly take more flights than the average academic, but that's due to the perception of hypocrisy rather than concern over cost.
People are getting upset because when they download and read papers, the papers turn out to be bad and there are no visible consequences for that. Even just getting a clearly fraudulent paper retracted is reported to be a nightmare, according to people who search for scientific fraud as a hobby like Elizabeth Bik. And I've read endless reams of terrible papers that were useless or outright deceptive, I tried reporting a few and nobody ever cared.
Now, you're arguing that there is accountability of the following form:
> It's accountability in the form of: if you didn't do what you promised you'd do, then you don't get any more money
This is true given that scientists are promising the NSF to publish papers, not strictly speaking to do research, and therefore by implication promising to come up with interesting claims, not necessarily true claims. But that's not what we want.
This is an inevitable problem with government funding of research. The buyer, the government, cannot really check if the claims they're buying from scientists are true, so they need proxies like did it get published, did it get cited, etc. But those aren't the same things. Corporate research doesn't have this problem because the corporate will try to apply the research at some point and if it was fraudulent they will discover it at that point, and of course they're strongly incentivized to ensure it never gets to that point in the first place.
In theory the government could write grants in such a way that money is awarded independent of what claims end up being made, instead awarding money for the quality of work done. That's what you're arguing for here. And indeed corporate labs write contracts in this exact way. Scientists get a salary in a corporate lab, they don't have to write grants. They do have to convince their management chain that the research is worth funding, but there are many different ways to do that which don't involve continually publishing astonishing claims in scientific journals.
You're asking me to propose how science should work instead but, indeed, you already know my answer: eliminate the NSF completely, and stop subsidizing student loans. All science should be funded by companies. They have already solved the problems you're treating as novel / intractable above. Scientists are awarded salaries and promotions by firms on a more flexible basis than the NSF. Importantly, they are rewarded for doing research not producing claims. Companies can do this because they have management structures sufficiently well staffed to closely monitor what scientists are doing. That means if a firm is truly committed to research then the scientists will get paid even if their programme has some dry years. Plus there's a huge body of law handling fraud and corruption in the workplace.
At the same time, firms are incentivized to eliminate the research that is probably always going to be nearly useless. Outside of firms selling books or self help courses I doubt many would subsidize sociology or gender studies for example, and it's also unclear that would be a loss.
Your argument about who it would or wouldn't be good for seems a bit contradictory and I struggled to follow it. You're arguing it would be both bad for Google and Microsoft yet also make them stronger. I disagree with both possibilities: I think they would hardly notice the difference and it wouldn't affect how powerful they are. Having worked for one of those companies and also worked at a startup where we often read research papers in a certain subfield of CS with views to maybe applying them, my view is that even in the relatively good field of computer science, most academic output is useless and has no impact. These firms do not rely heavily on government funded research:
- The web was very briefly funded for a couple of years as a side project of CERN, but then R&D was taken over by the private sector where it remained ever since. Page & Brin never even finished their PhD before moving their research into the private sector! It's hardly a mystery where the next Google will come from - probably the same place the previous one did, a garage in Silicon Valley.
- What government funded tech was Microsoft built on? The internet? Microsoft is still with us in spite of the internet, not because of it! Or are you going back to military computers in World War 2? Military R&D is different, governments can fund that semi-effectively because they actually use the outputs.
- Neural networks were a backwater until Jeff Dean resurrected the field using the resources of the private sector, academia has been chasing to catch up ever since.
There are a lot of other examples. The DARPA Grand Challenge is not an example of what I'm talking about because:
1. DARPA is military research and therefore structured differently to how the NSF does things. The very structure of it as a Grand Challenge is a clue here: the output of the programme was cars (not) going round a track, not papers and citations.
2. I'm not arguing for prosecution of researchers who end up with null results!
I'll try not to do another wall of text since we're mostly in agreement, but I will make a couple final comments:
> Your argument about who it would or wouldn't be good for seems a bit contradictory and I struggled to follow it. You're arguing it would be both bad for Google and Microsoft yet also make them stronger.
What I meant is, if e.g. Page and Brin in 1998 had no access to government funding and research because it was privatized by e.g. AOL, there wouldn't be a Google today. But if we were to privatize all research, the Google of today would certainly like that insofar as it strengthens their market position (just like the AOL of 1998 would have liked the situation), but it also means they have to start funding more research because now they can't get any from the public.
> - The web was very briefly funded - What government funded tech was Microsoft built on? - Neural networks were a backwater
But the point is that it all started with government funding, so we need to be very careful about the consequences of privatizing it all. Today, ideas start out funded by the government, they gain legs in academia, move out into corporations, and are productized and disseminated to the public in the form of consumer goods. This is the progress pipeline, and it's proven extremely effective and enduring at driving innovation.
You want to cut out the beginning of the process because you think corporations can handle that part, but I don't think you've really demonstrated that. Can you point to any tech product out there that is exclusively built on in-house, private research? I certainly can't think of one.
For example, you bring up the origin of Page & Brin. Yes, they never finished their Ph.D., but the fact is they did meet in grad school while they were doing NSF funded work. Brin was at Stanford on an NSF fellowship. They built the first prototype of Google on an NSF grant. They were mentored by academics who also were funded by the NSF as professors and graduate students themselves. You take that funding away, and maybe these two people never meet, maybe they never learn what they need to get that spark of insight. So I agree with you that the next Google will come from the same place the previous one did - a government-funded research lab in Silicon Valley. The garage is where they moved their operation only after they had already used a lot of NSF money to get their start.
> 1. DARPA is military research and therefore structured differently to how the NSF does things. The very structure of it as a Grand Challenge is a clue here: the output of the programme was cars (not) going round a track, not papers and citations.
The processes of getting grants from NSF and DARPA are very similar, and in most cases the deliverable is a paper. The Grand Challenges are the exception of DARPA funding, not the rule.
> Military R&D is different, governments can fund that semi-effectively because they actually use the outputs.
Yes and no. DARPA would like to use the fruits of its funded research, but it funds projects on a very long timescale, so what it funds may or may not be used in the long term. Sometimes the research is not to strengthen the military per se, but to strengthen American interests through creating domestic tech sectors. E.g. I'm sure the military would like to use autonomous vehicles, but what's even better is for America to have its own domestic autonomous car sector that can produce those vehicles.
> most academic output is useless and has no impact.
You've tried to make the case that we should optimize toward useful research, and companies are better at identifying useful research because they have a profit motive, but I still think it's difficult to say today what research will be important 30-40 years down the line. DARPA recognizes that it's very hard to tell how useful research will be ahead of time, and that corporations don't like to engage in foundational research when there is no obvious short-term path to profit. This was the entire point of the Grand Challenge series, and it worked out well -- they wanted to bootstrap the autonomous car industry, so they paid researchers to get them rolling and now look where we are. If the government hadn't gotten involved, there probably wouldn't be an autonomous car sector in the US today.
There are plenty of cases in our history where some technology that seemed useless initially turned out to be bigger than anyone could have imagined. We need to be careful not to squelch those ideas too quickly because they don't return an immediate profit. Things like the Internet and neural networks come to mind. A lot of people, particularly large corporations, thought the Internet was a toy when it was first introduced. Neural networks seemed like a dead end and then found new life. But the fact is they started in academia. The DeepMind arcade paper, and essentially the entire deep reinforcement learning field today, is based on decades-old research funded by the UK government. What if that research was locked away in a UK corporation? Would DeepMind even exist? That research was a toy for 30 years, until it wasn't.
The whole point of DARPA and other government funding agencies is that they don't know what the winners are ahead of time, and I don't think corporations can know this either. (if they could, why didn't they do more to fund RL research 30 years ago?). Therefore we shouldn't try to optimize for obvious winners because we'll miss out on non-obvious winners, which bring the biggest upsides. This means we have to fund losers and research that ends up not being useful, and we should be okay with that, because things have turned out pretty well over all.
> 2. I'm not arguing for prosecution of researchers who end up with null results!
Sorry I thought you were with this:
> We will therefore prosecute you for research fraud and failure to meet the terms of your contract.
I guess you mean failing to meet the terms of your contract and fraudulently representing that. But it still doesn't address the incentive to commit fraud because if you fail to meet your objectives, you're still not going to get published and therefore won't get the next grant, so your career is still derailed. It just means people will try to hide the fraud better.
After I typed all this I realized I failed at my pledge to not give you a wall of text. Oops!
What I mean by prosecution is that if a research body signs a contract with a scientist to do research, then those contracts would need to specify what research actually is, and that is the first step towards penalizing people who aren't really doing it. Indeed, the process of flushing more research into the private sector would automatically eliminate a lot of the grey-area fraud that is so prevalent, because it would force a lot more people to write down what precisely they mean by "doing research", as well as continually evaluate that definition via normal management techniques. For example, is a simple modelling exercise "research"? It's often treated as such by e.g. banks, but the big tech labs we're talking about don't engage in much of that - unless you count AI, but I think that's sufficiently beyond the sort of modelling you find in most science that it's best to treat it separately.
At the moment governments fund science but have no working definition of what science is, which breeds a lot of cynicism of the type I display above w.r.t. sociology. Is gender studies "science"? Most people would say no, but the government says yes. A more subtle example is epidemiology. A close examination of their papers will reveal that it's just plugging public CSV files into a bunch of very over-simplified simulations, and publishing the outputs. Is that science? If it is, can I get paid to play Cities: Skylines all day as long as I write a paper at the end? It sounds like a stupid suggestion but actually yes I can:
In my view this type of thing is not science, but my guess is that at this point the science-y-ness of epidemiology or urban planning would split 50/50, or most people would just go with the government's definition of "they receive grants and call themselves scientists, therefore they're scientists".
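For what it's worth, the kind of over-simplified simulation being described is often just a few lines of code. A deliberately minimal SIR-style sketch, assuming made-up parameters (nothing here comes from any actual paper):

    def sir(s, i, r, beta=0.3, gamma=0.1, days=160):
        # Euler-stepped SIR model: about as simple as epidemic models get.
        n = s + i + r
        infected_curve = []
        for _ in range(days):
            new_infections = beta * s * i / n  # susceptibles becoming infectious
            new_recoveries = gamma * i         # infectious people recovering
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
            infected_curve.append(i)
        return infected_curve

    curve = sir(s=999_000, i=1_000, r=0)
    print(f"peak concurrent infections: {max(curve):,.0f}")

Swap in a public CSV of case counts to fit beta and gamma, and you have the skeleton of a publishable modelling paper - which is exactly the point being made above.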
Would Google exist without the NSF? The specific company maybe not, but there were plenty of search engines around before Google, and Page in particular was already keen on creating a tech company when he was very young so would likely have ended up a startup founder sooner or later. An example competitor was Inktomi, which had already started doing pay-per-click ads. It's all forgotten now but Google nearly didn't survive its early years because they got sued over 'stealing' the PPC ad concept. They were able to argue that their own elaborations on the idea were sufficiently different that it wasn't infringement. It's very plausible that one of these other firms would have struck upon the idea of PageRank; they were certainly incentivized to do so especially once Inktomi had realised that PPC ads were a way to monetize search engines.
"The Deepmind arcade paper and essentially the entire deep reinforcement learning field today is based on decades-old research funded by the UK government. What if that research was locked away in a UK corporation? Would Deepmind even exist?"
Well, DeepMind is a difficult example to debate here for both of us, because of course DeepMind is (or was) a UK corporation, and they do the exact opposite of locking up their research; if anything, they're famously publicity- and paper-hungry. Google/DeepMind are actually a strong counterpoint to the idea that we need academia for long-range research: DeepMind is nothing but long-range research (of unclear utility!), and of course self-driving cars have been driven by Google for the last decade, pun totally intended.
If I were arguing in your shoes I'd be trying to argue Google is the exception that proves the rule and/or trying to distract attention from it, because it shows that companies can and will do long range research. Microsoft Research is another example, although it's less "pure" because it's more or less a little recreation of academia inside of Microsoft. I prefer the Google approach where science and technology are fully integrated.
Now the wider issue of governments needing to fund long range research is one I used to fully agree with. It sounds right and it's easy to find examples where you can sort of link them to government funded research. But as you can see, I changed my mind over time and no longer find myself in that camp, because:
1. Government funded basic research isn't free. We have to weigh up costs and benefits. How much of a contribution does government grant money make to the technological successes we take for granted today? For examples like PageRank, self-driving or DeepMind the initial contribution was quite small and mostly in the form of logistics (grand challenges) or theory work (which is cheap). And how much of a cost does it impose?
2. The costs are not just financial. I guess this is what mostly changed my mind. I concluded a big part of the "cost" of government-funded research is actually intellectual pollution of the literature. If you have to wade through 50 useless, deceptive or outright fraudulent papers to find 1 good one, because governments aren't paying attention to what they fund, then that imposes an externalized cost on everyone who wants to benefit from research. Moreover, this work has to be endlessly duplicated, because journals are loath to retract anything, so everyone who wants to push technology forward in a certain area has to do this filtering within their own small group, because there's no coordination mechanism... or just give up and ignore the literature entirely (this is what eventually happened to me).
I think a stronger argument for government funded research than the "it would never have happened" approach is that government funded science is usually un-patented and freely accessible. But even this argument is kind of weak because universities do patent the results of tax funded science, maybe not in computer science but it happens a lot in other fields, and also because the results of the research are often behind paywalls too! Although that's been getting better with time and is usually not a problem in CS (which IMHO is definitely one of the better fields).
But overall, to me it's just not clear that the benefits of buying papers en masse outweigh the costs, in both dollar and time terms, and of course the inevitable costs when people put bogus research into production and things go wrong.
> This is definitely not true, recipients of grants are heavily restricted on what kind of things they can spend that money on. I can't even fly a non-domestic carrier using grant money without proving no other alternatives exist.
That is pure corruption: the grant is funneling money from you to a domestic airline. If it were about accountability, you would have to prove the flight was really needed in the first place, and then that you found the best price. (Though the grant should allow you to ignore the "skip maintenance and pilot training to give you a lower price" airline; if the best legitimate price happens to be on a foreign carrier, it shouldn't matter to the grant unless there is corruption involved.)
> If it was about accountability you would have to prove the flight was really needed in the first place,
Friend, at a certain point the overhead to administrate these kinds of checks is more costly than just letting people buy tickets to go to conferences. And at this point it isn't corruption in the university, it's in the form of handouts to large corporations.
It is pretty common for universities to impose a 50-59.9% indirect cost rate (which they take in addition to the funding requested by the researchers). I've had grants where the university refused to offer a few thousand dollars of free equipment use as support (which is something required for funding) on a multi-million-dollar grant. That's because universities are badly managed: they bleed money on tuition to compete with other universities, pay indecent salaries to administrators (and sometimes researchers), and many pay millions to their basketball coach. And then you have to pay for all those renovations required to get a good ranking in the annual US News or whatever other performance metric. And you are right about the grad student ponzi scheme - I saw that firsthand.
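To make the indirect-cost arithmetic concrete, here's a sketch using an assumed 55% rate; all figures are illustrative, not from any particular grant:

    direct_costs = 1_290_000  # salaries, stipends, tuition remission, equipment
    indirect_rate = 0.55      # assumed mid-range of the 50-59.9% rates above
    indirect = direct_costs * indirect_rate
    total_ask = direct_costs + indirect
    print(f"indirect: ${indirect:,.0f}")    # indirect: $709,500
    print(f"total ask: ${total_ask:,.0f}")  # total ask: $1,999,500

So roughly $1.3M of actual research becomes a ~$2M ask before a single direct cost has been padded, which is about the inflation described a few comments up.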
You can't; if you do that, the ones with power will immediately seize even more and remove the last barriers. It's a bit like why you can't reboot a country: the people who abuse the system are the ones who have the money, the properties, etc. Communists tried to break some of that, but almost all, if not all, of their attempts failed because of power abuse.
Many see peer review as an integral part of the scientific method, but it’s actually a quite recent custom (ca 1970 IIRC). And I agree it’s not without problems. Whenever you give a collective power over the individual it creates room for politics.
academics are largely the only people who will be able to understand the work, but sure.
The fact that it's not out in the open is somewhat complicated. You're perhaps right that it would lead to better outcomes, but it's also important that researchers feel free to speak openly.
I understand the concern about being free to speak candidly, but I think it's trumped by the need for transparency to ensure that if improper gatekeeping or other unethical behavior is happening, their reputation is also on the line. Basically if you can't say it to your peers in public, don't say it at all.
This also fixes the problem of incompetent peer review, because it will be called out as such and the reviewer's reputation will suffer.
Opening peer review to public scrutiny will not make the process any less political--quite the contrary. There must be some way to combat the unethical behavior that does exist in academia, but that isn't it.
Please don't post opinions without supporting evidence and then ask for supporting evidence when someone disagrees with you. This just shows that you're applying skepticism selectively.
I don't have supporting evidence, and I'm not about to look it up right now. I think you're in the same boat or you wouldn't have replied like that.
I don't think it's controversial though, isn't it commonly believed that increased transparency means less corruption? It might not be true, but if it's the prevailing belief then the burden of proof is in fact on you.
What I want to know is how this issue impacts the notion we all seem to buy into: that we should "follow the science".
Scientists themselves have a hard time "following the science". Add to it the observation that when an issue is getting lots of attention outside of academia, then there are usually some really strong incentives (profit, prestige) associated with doing the science and applying it (e.g., epidemiological science during a global pandemic).
The question seems not to be about how can normal people "follow the science" but rather, why should normal people trust at all that any touted science is anything more than bullshit spouted by highly-motivated sophists?
> why should normal people trust at all that any touted science is anything more than bullshit spouted by highly-motivated sophists?
In the current climate, frankly I think it's absurd that we're putting so much trust in science, or rather what it has become.
The fundamental problem is that science as in the method is absolutely worth putting your trust in, but a lot of what's sold as Science^TM has diverged from it far enough to be worthless. However, it still bears the same name and borrows its credibility. There are countless examples even from the places one would think to be the most trustworthy.
What science as in the method hinges on as opposed to Science^TM is verifiability. Disciplines that aren't easily verified suffer from the replication crisis to the point where it's basically synonymous. I would go as far as arguing that unless something has been verified several times it should be nothing more than a hypothesis. Note how popular science media are basically living off doing the opposite (though I don't think much better can be expected from the media honestly.)
Math and the social sciences form the two ends of the verifiability (and reproducibility) scale. CS is close enough to math that it's not a dumpster fire like psychology, but I would say we're still suffering from a lot of BS research. To fix this we need actual rigor, more openness about methods, and frankly, motivation to reproduce results.
I would just add that science and the scientific method are designed to be used in good faith. Science doesn't really withstand political manipulation. If you're a researcher interested in learning more about the universe, science provides a framework for questioning and testing ideas, and for using established ideas as a jumping off point for further advances. As soon as there are other motivations than learning, the answers that "science" provides basically become unknowable because the whole process, from what to study to how to interpret and report findings, becomes corrupted.
We need good politicians to negotiate a consensus on how we move forward in light of human desires and modern thinking about cause and effect. Pretending that "science" provides us with a way forward is abusing science for something it is not designed to do nor capable of doing.
I think this is a very important question. This is something that I struggle with.
I have read a lot of papers. I generally think science can be a force for good. I understand analytic methods developed by or used in papers from my field of interest. I generally believe that those methods are capable of answering important and interesting questions.
In my view, the problem is that you can't know if an article is good or bullshit until you sit with it for, say, at least 2 or 3 hours (some papers even more). And that is for someone with my background. I tried to do this same thing when I had an undergraduate level of education and it (a) took me a lot longer (at least 10x), and (b) I missed a lot of the mistakes/scams/lies that I would not miss now. (I'm sure I am not able to detect some bullshit even still.)
We should follow the good science. We should not follow the bullshit science. This sounds hard because science, being more technical, is harder to vet. But upon further reflection, it seems that society hasn't figured out how to deal with much simpler lies, either.
It's also the norm in some fields to provide only high level info in the methods section, often without supplemental method details accompanying. This makes it even harder to tell if they did the work correctly, because usually half the methods are intermediate steps which don't have any results directly reported in the paper. In a perfect world those methods would be uniform across labs, but in practice they definitely are not, and it makes tracing down the source of honest replication differences a nightmare.
There's also no way to really know if the researcher entirely left out 10 other tests they tried that failed. Sometimes you can guess that it's a stretch because of the stupid categories they use (I'm reminded of those ESPN graphics that say things like "most home runs on a rainy Tuesday in June"). But it's harder to detect if someone straight up removes data points, repeats tests and reports only the nicest, etc.
So at some point you basically need to be an insider in the field so you hear the gossip about what doesn't replicate. Or if you have access to thousands of dollars to blow you could try a dozen different variations to try getting it to replicate yourself.
I think for something like COVID that is actively affecting many people, there should be funding explicitly for replicating studies, and some slots reserved in a prestigious journal for the findings of the replications. I get that it is not feasible to be replicating everything in science, but I don't see why we can't have ~one lab per relevant university department that specializes in replicating important studies. If you make that a path towards becoming a tenured prof I think that could change the culture surrounding replication studies in general.
> But upon further reflection, it seems that society hasn't figured out how to deal with much simpler lies, either.
Outside of your field, how many of the BS papers can you catch? I know enough about computers that I could probably figure out at least some of it in that field (after spending 10x longer than someone who actually reads papers regularly), but give me a paper in something else and I'm not so sure.
I would assume the people saying "follow the science" generally don't mean "believe recent research publications".
I still occasionally see things like "hanging a potato on your wall will cure your child's flu" being debated by friends of friends on Facebook. You'd need to take a time machine several hundred years back for it to be within the realm of genuine scientific debate.
You seem to be indicating the preferred time window for which research to trust. Not too new, not too old. Not the worst algorithm you could choose, and I agree. This is why I stay away from drugs, procedures, and any kind of guidance from the medical profession that is less than 20 years old.
This is exactly how I feel after being prescribed a drug as an adolescent that in fact made my condition worse. It was a very commonly prescribed anti-depressant that was eventually found to increase the risk of suicide in those under 18 years old. I was on the drug during the first decade after its release. It is no longer prescribed to minors.
Well, there is an easy way to see why basically every anti-depressant will somewhat increase the risk of suicide: many people suffering from depression are already thinking of suicide but have no motivation even for that. Anti-depressants try to alleviate the latter so that the person can actually live their life, but unfortunately that occasionally also turns suicidal thoughts into attempts.
That’s why therapy is a must - the “buttons” drugs can press are simply not fine-grained enough in themselves to manage depression.
Oof, don't get me started on drugs. Fluoroquinolone antibiotics have been causing disabling reactions since the 80s now. I suffered some severe side effects also and met hundreds of people in the same boat. I even met people who suffered from CFS, fibro, or tendonitis, didn't know an antibiotic could result in such conditions, and had taken one shortly prior to the onset of their illness (that doesn't mean it was the drug, but frankly it should be looked at).
I have seen physicians at top hospitals. Nobody cares about the reaction to the drug, only about treating the current symptoms. I have heard maybe it was the drug, maybe it wasn't. My reaction was instant, during a hospital visit. It makes me feel terribly uneasy; getting the word out there doesn't help me in any way, it helps to protect other people, which frankly the government should be doing.
I honestly don't know how nobody thought we should look at the data we have: look at the medications patients took and then at whether they suffered from an illness within a certain period of time.
There is a group who met with senators on fluoroquinolone antibiotics.
The FDA has updated the black box label on these drugs multiple times: first with tendonitis issues, then depression/anxiety, now with possible permanent nervous system damage. Yet doctors remain completely uninformed and the drug is given out for 'suspected' UTIs. The EMA in Europe now recommends these drugs be used only in life-threatening infections.
Recently, a physician submitted a request to the FDA to require written patient consent to take the medication, due to possible side effects. FDA said due to covid they are unable to review it at this time.
I would suggest adding a feature to your algorithm: before taking a drug, also look for groups of sufferers. If there are many groups... you might want to take something older and safer.
We're indoctrinated from Kindergarten to trust the folks wearing the white lab coats. This is why the young push the "follow the science" stuff and the older generations are much more skeptical. The older people have been through several cycles of bullshit.
The first "big lie" I experienced is the food pyramid. This was a big government push in the schools that told us all to eat carbs like crazy. Turns out it was just pure corruption, paid for by the grains industry. They have killed literally millions of us with this lie alone. And there were no consequences for this. No one went to prison. At some point you have to ask yourself: "How many millions of people does the government/industry have to kill before we stop believing them?" For me, it was the first million who died of diabetes and other obesity related diseases.
This is a clear misrepresentation of what science is and is not. Specifically, science is not normative.
Science will teach you how to build the bomb. It will never be able to tell you whether you should detonate it.
When people disagree about whether to detonate the bomb or not, one side may act as if the predictable consequences automatically determine the ethical norms that should guide the decision, while ignoring the implications of the unpredictable consequences. This side may attack the other side as "ignoring the science" in order to dodge the more difficult normative debate. A counter-reaction to the initial unfair sleight of hand is sometimes to act as if one should completely ignore the scientists; then maybe the accusation is more justified, but still ultimately meaningless: norms are simply not grounded in facts. People also aren't necessarily consistent with their own norms. They might agree on some premises and then, when presented with conclusions that follow, find the result so repugnant that they search for a way out. That's when we search for chinks in the science, because admitting to our own natural moral hypocrisy is just too painful. But we can always just change our basic norms and reach a different decision, and we know this in our gut, so searching for facts to support a predetermined conclusion doesn't seem so different.
What is the alternative to following the science? Following people who are not scientists and who are explicitly making things up? This sounds a lot like "most plane crashes are due to pilot error, so maybe we should give non-pilots control of the planes."
The alternative that I choose is to have a fucking identity and constantly build up strength along every axis possible, yes even physical.
Be a rock. This doesn't mean being hard-headed and unpersuadable, but it does mean not being led around by statistics and citations and "studies". Know who you are, trust your instincts, and do what is right for you.
The fact is, unless you're actually the scientist doing the science, almost all the papers and publishing and Science The Meme! that is constantly and endlessly touted has zero bearing on your own personal life. Ignore it all.
In particular, when it comes to health science, you can literally ignore every little bit of it--none of that garbage gets in front of a wide audience unless it's got some profit for some concern at the other end. Know how to feed yourself and stay fit and healthy. Sure, you could use the NYTimes to determine whether a pescetarian diet is superior to a carnivore diet, or whether you should be eating highly-processed factory-produced fake meat and in what quantities, or you could just use your own common sense and your own body to do your own individual science--try carnivorous eating for a month, then vegan, see how you feel.
Be confident and ever-increasingly capable. Become more dangerous every day. Have an anchor that ties your core values to who you are, however you define that, and let the rest of the masses get socially engineered into believing whatever the hell they want.
> Know how to feed yourself and stay fit and healthy.
Well, okay. I was kinda with you when you were saying you can ignore most health science headlines, but then you seem to change your mind and hint that there must be some way of gaining knowledge about one’s health without science. So how then? Common sense and personal experimentation are only going to get you so far.
>What is the alternative to following the science?
Actually, it is not science that is causing this issue, but research in its various forms.
Trusting science is fundamentally different from trusting "scientists".
Trusting science is essentially trusting nature to behave in a regular fashion. But trusting research, done and proclaimed by a bunch of humans, is, well, trusting human beings. Nothing could be more fundamentally different.
I think adding a "repeatability factor" (RF) to research could help. If anyone on the planet can replicate a research method, it should have a repeatability factor of 1. If only a single entity can replicate it, it should have a value of 0. If some entities can validate it, it should have a value somewhere in between.
This does not mean that research with a low RF cannot be applied widely. It's just that it will have to do additional work to gain people's trust.
It only follows common sense that trust in something cannot be mandated. So measures that are based on low-RF research should NEVER be mandated, no matter the cost.
And thus I think this could curb the corruption and manipulation done in the name of science.
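To make the idea concrete, here is a minimal sketch (my own illustration; the function name and data shape are hypothetical, not an existing system) of how an RF could be computed from replication attempts recorded by independent groups:

    # Hypothetical sketch of the RF idea: the share of independent groups
    # whose replication attempt succeeded. 1.0 = everyone who tried could
    # replicate it; 0.0 = nobody outside the original lab could.
    def repeatability_factor(attempts):
        # attempts: list of (group, succeeded) pairs, excluding the original lab
        if not attempts:
            return 0.0
        return sum(1 for _, ok in attempts if ok) / len(attempts)

    print(repeatability_factor([("lab_a", True), ("lab_b", True), ("lab_c", False)]))
    # -> 0.666..., i.e. partially replicated, between the two extremes above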
I think it depends on the risk/reward of not following the science - as an example, there was debate about whether ibuprofen worsened the effects of COVID early in the pandemic. MGH basically came out and said that while the research was leaning towards ibuprofen being okay (and the WHO at that point had announced it was), they had plenty of acetaminophen available and no reason to doubt its safety for COVID patients. So they were going to stick with acetaminophen only for most patients.
Another COVID-related example is masks. At one point early in the pandemic there was messaging that masks were not useful in preventing the spread of COVID in the general public. There were supply chain reasons that probably motivated that message, but I also know virology PhD students that were insistent it was the truth, with papers in hand. At the time there were also papers to suggest the opposite, so one could have a healthy debate on the subject - but at the end of the day I gave my grandparents a small box of masks, because wearing a mask is so low effort I don't see why you wouldn't unless you are truly extremely certain that it's useless.
Bad take. People are rightly wary of public health when public health openly discredits itself--you'd have to be an absolute imbecile to believe, e.g., covid was a huge risk during anti-lockdown protests, but was not at all a concern during BLM protests.
Just one of MANY stories about why BLM protests are so much less risky than, e.g., Trump rallies.
When it was widely rebuked as hypocritical bullshit, they even came up with a crazy way of explaining it away: Now racism was a "public health crisis", all of a sudden. See the google trends: https://trends.google.com/trends/explore?date=all&geo=US&q=r...
Just like how January 6 was an insurrection worthy of months of security state theatrics and proclamations that "white supremacy" is the biggest threat to the USA domestically, whereas an entire summer of burning buildings and riots was "mostly peaceful protests".
This is why nobody trusts authority and the media anymore--they have given up on even a pretense of seeming trustworthy.
Those are factual claims about the protests being less risky than the rallies. They’re either true or false. There’s nothing hypocritical here where anyone is saying it’s okay to attend one event and not the other because of the purpose of the event. It’s literally claiming that one event is outdoors and attendees tend to follow health guidelines, while the other event is indoors and attendees tend to not follow health guidelines. Is the claim false? Perhaps, and anyone is free to argue that. But this is nothing close to an example of the situation previously described.
> It’s literally claiming that one event is outdoors and attendees tend to follow health guidelines
Yes - and it’s not true - there is plenty of footage of protesters not following health guidelines, and indeed plenty of discussion of how the protests may have precipitated a covid spike.
The fact that other similar events were not permitted proves the hypocrisy in the public health guidance regardless of the outcome.
If you don’t think there is any political bias in public health policy, that’s fine, but it seems like we are in disagreement about that.
> If you don’t think there is any political bias in public health policy, that’s fine, but it seems like we are in disagreement about that.
This is a significant altering of the goal posts. But anyway, you seem convinced it's all part of some huge partisan battle where you're sure that your side is the good side and everything is stacked against your side. I'm not part of this battle.
> This is a significant altering of the goal posts.
Not really - it’s a matter of degree. When there is too much bias displayed, it stops being public health and is discredited.
> you seem convinced it's all part of some huge partisan battle where you're sure that your side is the good side and everything is stacked against your side.
This seems like pure imagination on your part. I suggest you reread the thread. You’ll see no evidence of anything partisan from me.
I simply think that public health officials have undermined trust by politicizing the issues or otherwise distorting their message. I.e., they have ‘discredited’ the field, as the other poster said.
> I'm not part of this battle.
Are you sure? You are the only one reading this conversation and seeing a ‘battle’.
Have a higher evidentiary threshold when it comes to results that contradict common sense, or that suggest changing your current behaviour? Of course sometimes counterintuitive results turn out to be real, but most of the time the "follow the science" people are getting prematurely excited.
Having lived a few years and witnessed "the science" change a number of times, I'd say that in my social circles (which are by no means representative of everyone) disproportionate trust in supposed scientific conclusions (particularly those that go against common sense) is a bigger problem than the reverse. E.g. official diet advice over the last few decades, or relative damage done by diesel vs petrol cars.
Follow the science is only used as a rhetorical device outside of science to try and convince people of something political. You would never hear an actual researcher say that.
There is a realistic, weaker statement about the best available information we have, that a specialist could use to explain to a non specialist why they are making a recommendation about something emerging or theoretical. But what we are hearing with "follow the science" really means follow the carefully crafted political message that politicians with scientific credentials have put out.
It's easy to see a distinction. Nobody needs to be told to follow the science on antibiotics or birth control or something. I think the blatant anti-intellectualism in the follow-the-science type statements is why we have, e.g., so much worry about vaccines. People aren't stupid and they can tell the difference between being manipulated and being presented with something objective. Even if you're right, it's a bad strategy to try and trick people or use religion to get your point across. See "the science is settled". Nothing makes people stop listening faster.
Edit: and ironically, people call those who don't "follow the science" anti-intellectuals, as if intellectuals take things on blind faith. Every time I hear mention of anti-intellectualism, I have to remember that people are referring to those that question official doctrine, as opposed to those who have framed religion as science to try and short circuit debate.
Totally agree. I do think most times I hear people saying "follow the science" they are saying it to someone that is being anti-intellectual. But at the same time the "follow the science" people usually give horrible counterarguments that contain straight up inaccuracies too. I don't know what annoys me more.
A fun one that circulated for a while when the vaccines were first launching was "all vaccines are the same". They are decidedly not, and I don't think people are as stupid as that kind of messaging implies. It was weirdly taboo to say that you'd prefer a particular vaccine, even once we got to the point that there was enough stock to choose. It's true the clinical trial efficacy numbers shouldn't be literally compared to one another, but that doesn't mean we should pretend the vaccines are actually the same either. Somebody deciding to drive an extra hour so they can get an mRNA vaccine is not anti-science lmao.
Maybe it's a matter of perspective here. Any (tested) vaccine is better than no vaccine. So if you're just looking to be in a better state than no vaccine, then they are all equally effective at that. I have nothing to back up this theory other than knowing that it's common for nuance to get lost as things get repeated over and over, being distilled down to phrases that are easy to parrot and being twisted as details are misunderstood like a game of telephone.
I take issue with the entire concept that "normal" people can "follow the science".
Most scientific fields I know I can't follow because I don't have enough background. I love reading papers in the fields I have a base understanding to be able to get something out of. The idea the average person can follow all scientific fields with no background just doesn't make sense.
When I read language that says "follow the science" or "based on science" it is almost always using science as a rhetorical device and should not be trusted, period.
This is actually closer to medieval magic than science. The incantation "based on science" makes every piece bullshit "true".
Evidently, there is a serious problem with scientific literature and publications and how they are incentivized. On the other hand, we have scientific method, which is a well-defined and understood technique. The publications don't always follow it. They might be fraudulent or mistaken with good intention. Not everything labeled as science is science. I believe "following the science" is important when it's following scientific method, passed peer reviews, can be reproduced independently, etc. The rest is noise, and the problem is that it's difficult for many to differentiate.
I mostly see "follow the science" in regard to well-established and scientifically validated theories and practices, like the germ theory of medicine and its implications for hygiene, or the theory of immunology and vaccination.
I had the "privilege" of reviewing a certain piece of software written to model a certain pandemic outbreak...
Numerical modelling in biology/virology feels rife with serious problems that are simply sidestepped because of ego and status. I raised alarms and, as merely a senior expert in distributed computation and numerical modelling, I was shut down quite strongly. After the group in question was forced to accept our published work refuting their results as FUBAR due to severe numerical instabilities (which hadn't even been checked for), we were shut out in favour of a group who patted them on the back for being "oh so clever"...
(Our published work numerically demonstrated that the results they were presenting as fact were compatible with statistical noise, and that it is known they didn't have the required extra 100x compute or 100x time to produce results as precise as what they were after.)
Had a similar experience a few years ago with a geological modelling group and a group hoping to do biochemistry on a huge cluster of 20k CPU cores.
For all of the achievements of the "numerical sciences", only physics and chemistry (and maths) really seem to have championed the ideas of reproducibility and falsifiability in numerical analysis and simulation.
I doubt most people understand the intensity of the incentive to mess with the data. In college I was a lab assistant for a professor who taught courses on research integrity and how to evaluate the quality of scientific papers. After thousands of hours of work on a study with routine 20 hour days collecting data, he wasn't getting what he needed to publish. At the tail end of one of those days I caught him with the equivalent of his thumb on the scale. He gave an excuse that he would have failed as an answer on one of his own tests. I argued a bit but then shut up. I kept shut up while that data was not excluded from the analysis that was eventually published. It wasn't enough to change the result, but still bothered me.
So yeah, trust maybe but verify definitely. The rewards for faking it are just too great for an honor system to be reliable.
I think it's more that the punishment for not faking it is too great. We need to be okay with following the scientific method and rewarding folks regardless of the outcome. Otherwise we're bound to see everything "succeed".
"The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness."
He talks about the case against science as well as bad studies like they are the same thing. Science is simply a method that can be done well or badly. The bad studies aren't fundamental to science itself.
> Science is simply a method that can be done well or badly
Science as a word doesn’t typically denote “the scientific method”. It usually means the body of knowledge and the collective of institutions that claim to practice the method.
The bad studies may not be fundamental to some abstract ideal of what science could be, but they seem pretty fundamental to humans doing science in the real world.
I completely agree that the term science has many meanings which include the overall body of knowledge which includes bad studies.
However I can't understand why bad studies would be fundamental to "humans doing science"?
You don't need to take other research results as correct to do your own, so why does bad science need to be part of the mix? Can't we choose to ignore them or treat them with skepticism?
Isn’t that the whole point? It does in fact seem that scientists in general are not good at telling good studies from bad. Why would this be surprising?
If scientists can't tell good studies from bad and yet use the results of these studies, then in my mind those scientists are simply doing bad science and are adding to the problem, since their studies will also be bad.
The first example given in the article is of a researcher who published faked results. Other researchers shouldn't be basing their research on these faked results. They can wait till the results are independently replicated, or replicate them themselves.
The main reason for using the scientific method is to eliminate bad theories, and part of this is determining the truthfulness of other scientists' claims.
In summary, using bad studies isn't fundamental unless you're fundamentally doing bad science that can't discern good evidence from bad. The "surprise" you are attributing to my comments is essentially amazement at the idea that people trained to seek knowledge would be so careless as to take the results of studies as a base truth with which to continue their work.
> If scientists can't tell good studies from bad and yet use the results of these studies, then in my mind those scientists are simply doing bad science and are adding to the problem, since their studies will also be bad.
Yes, I completely agree with this.
> They can wait till the results are independently replicated,
Yes, this would be ideal. A mechanism to trace replication history would help. Most studies though never get replicated, and don’t replicate.
> or replicate them themselves.
In very rare cases, perhaps, but in general this would be impractical.
> In summary, using bad studies isn't fundamental
How can you be so sure? The reason science is done this way is due to human behavior and incentive systems which have never so far escaped this problem. You are comparing against an ideal which has never existed in reality.
> unless you're fundamentally doing bad science
This is how science is done. It is certainly fundamental to the current practice of science.
> that can't discern good evidence from bad.
This is a false dichotomy. Evidence is not binary. Science is much less able to distinguish strong evidence from weak evidence than in your imagined ideal, but then again, your imagined ideal has never existed.
> The "surprise" you are attributing to my comments is essentially amazement at the idea that people trained to seek knowledge would be so careless that they take the results of studies as a base truth with which to continue their work.
Are you feigning the surprise as a rhetorical device or do you really not know how science works?
This is also especially ironic considering The Lancet published a totally fabricated study that supposedly demonstrated how dangerous a Trump-touted Covid treatment was.
Some highlights to show how health research is published:
> Mol, like Roberts, has conducted systematic reviews only to realise that most of the trials included either were zombie trials that were fatally flawed or were untrustworthy.
> But the anaesthetist John Carlisle analysed 526 trials submitted to Anaesthesia and found that 73 (14%) had false data, and 43 (8%) he categorised as zombie. When he was able to examine individual patient data in 153 studies, 67 (44%) had untrustworthy data and 40 (26%) were zombie trials.
> Others have found similar results, and Mol’s best guess is that about 20% of trials are false. Very few of these papers are retracted.
"Many of the trials came from the same countries (Egypt, China, India, Iran, Japan, South Korea, and Turkey), and when John Ioannidis, a professor at Stanford University, examined individual patient data from trials submitted from those countries to Anaesthesia during a year he found that many were false: 100% (7/7) in Egypt; 75% (3/ 4) in Iran; 54% (7/13) in India; 46% (22/48) in China; 40% (2/5) in Turkey; 25% (5/20) in South Korea; and 18% (2/11) in Japan."
I find it particularly sad, since actively promoting academic integrity would do more for those countries than anything else, bang-for-your-buck-wise. Instead, many seem to be seeking the appearance of academic success.
(OTOH, I suppose Japan and South Korea may be on that list due to some kind of intense pressure to succeed.)
I knew there were issues with various kinds of research. Things like p-hacking, "touching up" data, and so on. But the lead example is pretty wild:
> As he described in a webinar last week, Ian Roberts, professor of epidemiology at the London School of Hygiene & Tropical Medicine, began to have doubts about the honest reporting of trials after a colleague asked if he knew that his systematic review showing that mannitol halved death from head injury was based on trials that had never happened. He didn’t, but he set about investigating the trials and confirmed that they hadn’t ever happened. They all had a lead author who purported to come from an institution that didn’t exist and who killed himself a few years later. The trials were all published in prestigious neurosurgery journals and had multiple co-authors. None of the co-authors had contributed patients to the trials, and some didn’t know that they were co-authors until after the trials were published.
It's one example, chosen and presented by someone with something to prove, and which fails to provide any evidence (such as the names of the studies or lead author).
Just as important, I think, is reining in the misapplication of studies. Too often I see some news outlet/blog/politician/other say some policy or fact is proven by a study, only to find that the study doesn't mean what they are claiming.
This can be stuff like using animal studies not supported by further human studies and claiming the effect is true for humans. Or confusing correlation for causation. Or viewing the speculated application of the study found in the conclusion to be absolute truth when many times the authors themselves claim additional studies would be needed to evaluate other aspects or confirm their findings.
A classic example was the gender wage gap misrepresentation about 6-8 years ago. Many news groups and even the president were misinterpreting (and spreading misinformation about) the BLS study to mean that for a man and a woman in the same job with all else equal, the woman would only make $.80 on the dollar, when in fact it is an aggregate-level issue mostly due to structural factors (and requires different remedies than proposed). At least it seems many places have since realized their mistake, yet the misinformation persists in the general public.
People cite paper titles like they are facts. Nobody even knows whether or not it was an epidemiology study or an interventional study, they just say “they did a study showing that underwater basket weaving lowers your risk for colon cancer!” Nobody actually reads the studies.
During a debate or conversation, many people cite studies in support of their point of view. This is a problem because now the other person is swamped with a dozen studies to analyze and debunk before proving he’s right. And academia is producing huge volumes of these bullshit studies so no matter how you slice it, a huge unnecessary burden has been created of digging through all of them and circling the flaws.
Thankfully, there is an emerging cultural mechanism to deal with this in the growing “epidemiology is bullshit” sentiment. This is good because it reduces the bulk of bullshit that will ultimately need to be processed and debunked. If the study is epidemiology just cross it out by default. Those studies need to burn in hell. Shine the light of day on them and brandish the holy crucifix whenever you see one.
The only thing worse than a science denier is a person who blindly parrots study titles without ever reading the body of the paper let alone understand it. People complain endlessly about armchair scientists who are spreading misinformation based on their uneducated assessment of scientific data. And the people who complain about this are always the same people who cite studies that they don’t understand like complete idiots, spreading misinformation just as widely.
> there is an emerging cultural mechanism to deal with this in the growing “epidemiology is bullshit” sentiment
There is an emerging cultural mechanism - more a rampaging mob - that says 'X is bullshit' as a simple way of denying facts or issues that are inconvenient or difficult. It's used for the news media, academia, non-partisan government agencies (e.g., the CDC), etc., etc. and for everyone who disagrees.
I say this social mechanism is bullshit - the sources they disregard come with plenty of evidence, saying they are bullshit comes with none - it's just easy to say.
It's also very destructive. Where do we get our epidemiology or news or whatever else if anyone can claim anything is bullshit at any time, halting everything until they are proven wrong? It's up to them to prove their claim right; we can't all halt and freeze in place every time someone makes the minimal effort to vocalize, 'X is bullshit'. Without evidence, their claim is meaningless and should be ignored.
Epidemiology, the imperfect human institution it is, provides many successes.
If it isn’t a randomized interventional study - and apparently now even those are subject to fraud - it doesn’t prove anything. Such studies shouldn’t be discussed except in cases where a real study is impossible (global warming) or when trying to reason about which hypothesis should be tested next. It’s definitely a vein of ripe shit in our society, the misuse of epidemiology.
Epidemiology is precisely one of those cases where very often a randomized controlled trial isn't feasible (or even ethical to begin with). Your argument is self-defeating.
That's why countless hours have been spent refining the techniques and methods used to make and understand those studies, and why the vast majority of experts balance different kinds of studies while making decisions, not just RCTs.
Wouldn't this be the default scientific position? Or to take some of the hyperbole out of the statement, rephrase as: "is it time to assume that health research hypotheses are incorrect until a preponderance of data and reproducible studies prove otherwise?"
"preponderance" is doing a lot of work in that statement. I would argue that this is mostly about reevaluating our standards for preponderance.
And it's really hard to know if a study is reproducible. We could assume everything is incorrect until it has already been reproduced by an unrelated party, and I think that would be a major change in thought.
Scientific.. yes. But the studies are intended and/or used for a specific business purpose. The moment you recognize this simple reality, it becomes extremely difficult to take anything at face value. My wife is on the other side of the spectrum. She explicitly believes that companies/researchers/people generally want to do the right thing. It is infuriating, because my personal approach is my approach to games: 'shit, until proven otherwise'.
Exactly. It's one thing to, say, do a study that arrives at wrong conclusions because of insufficient controls or subtle mistakes in statistics. Quite another to simply invent patients or make up numbers.
Exactly, that's the thing about science: nobody should believe a result until every attempt to show it is wrong has failed. I wish this were taught more in schools.
Yeah but you also can't go test 100 previous studies that the work you want to do is based on before you can even start yours. That's extremely inefficient, wasteful, and will tremendously slow progress.
> That's extremely inefficient, wasteful, and will tremendously slow progress.
Democracy is also “extremely inefficient, wasteful, and slow to progress”. A process with those deficiencies can still be the best available approach (although your example is perhaps too far in that direction).
I'm actually kind of flabbergasted that people - no matter who they are - are automatically given the benefit of the doubt, without question.
I'll bet that a lot of folks just assume that anything they do will be taken at face value, without question or inspection. I also suspect that many "bought and paid for" studies are done this way.
For my own work, I generally assume that most of these studies are pretty much worthless, and tend to do some of my own homework before accepting them. Since most don't concern me at all, it's not a big deal.
Health is just one place this kind of thing happens. Software Development is absolutely rife with bad implementations. I am not in AI, but I have heard from a number of people that AI has a big problem with irreproducible results.
I work in ML research and I used to do experimental physics. I'd agree that specific results in papers can be hard or impossible to reproduce, but that never really bothers me because, at least in my work, the specific experimental result is rarely material to why I'm interested in the paper. It's more of a demo, and like a demo, you know it's orchestrated to look good. What I'm interested in is the mechanism behind the advance and whether I think it's applicable or relevant to what I'm doing. If the paper is really just a random observation of something that worked better, without a causal explanation, it's not very interesting, but I don't see those often.
Maybe health research is very different, and people are latching on to surprising results they find in papers, but I doubt it's a big problem in academia; it's much more likely in the media. If I were a doctor and saw an out-of-the-blue study claiming a surprising result, I'd discount it accordingly. If I saw a causal explanation with evidence, I'd give it closer scrutiny and follow up if it seemed relevant to me. That is how research works in my experience.
>I have chosen the word ‘zombie’ to indicate trials where false data were sufficient that I think the trial would have been retracted had the flaws been discovered after publication. The varied reasons for declaring data as false precluded a single threshold for declaring the falsification sufficient to deserve the name ‘zombie’.
1. Carlisle JB. False individual patient data and zombie randomised controlled trials submitted to Anaesthesia. Anaesthesia 2020; https://doi.org/10.1111/anae.15263.
This should be the default scientific position. However, because people (including scientists) care greatly about their health and the health of their loved ones, they are very likely to latch on to things that they would like to be true.
Science is almost never practiced in its ideal form; maybe it's time to assume that results from our scientific institutions are flawed, which is my assumption.
If the editor didn't trust the data, why did they publish?
The people who keep informed on their field (aka "in the know") would then be tainted because they would likely believe a journal would vet the data, process, and researchers before publication.
Unless you mean "in the know" in that they know the entire publishing system is a scam... and the ripples from that are huge.
"If the editor didn't trust the data, why did they publish?"
They're in on it. Without material the journals have nothing to publish. So their inclination is to accept and publish with as little friction as possible.
Publish or Die. Remember? That applies to the whole supply chain, not just poor, put-upon individual researchers.
This is similar to the psychology replication studies: when they asked professors to predict which studies would and wouldn't replicate, I think they were ~75% correct.
Replication was ~50%.
They have an idea of what is bullshit, but there's a very strong culture of 'don't call others out on bad research'
"...Ian Roberts, professor of epidemiology at the London School of Hygiene & Tropical Medicine, began to have doubts about the honest reporting of trials after a colleague asked if he knew that his systematic review showing the mannitol halved death from head injury was based on trials that had never happened. He didn’t, but he set about investigating the trials and confirmed that they hadn’t ever happened. They all had a lead author who purported to come from an institution that didn’t exist and who killed himself a few years later. The trials were all published in prestigious neurosurgery journals and had multiple co-authors. None of the co-authors had contributed patients to the trials, and some didn’t know that they were co-authors until after the trials were published. When Roberts contacted one of the journals the editor responded that “I wouldn’t trust the data.” Why, Roberts wondered, did he publish the trial? None of the trials have been retracted."
I realize that meta-analysis is regarded as a valid research method, if not one of the best, but honestly, I don't know why. If the original studies are garbage, no amount of statistical manipulation is going to make them not-garbage.
Meta-analysis normally tries to exclude "low-quality" studies but if the standard of honesty in a field or sub-field is truly abysmal, I guess it's GIGO.
Is that really true from a statistical standpoint? It seems to me that running ten 20-person studies is different from, and less valuable than, running a single 200-person study, because each of the 20-person studies has a much larger error range that you have to account for. But I'm also not good with stats.
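For what it's worth, a rough simulation sketch of that intuition (my own illustration, not from any study; numpy assumed): with no between-study differences, averaging ten 20-person studies matches one 200-person study, but once each small study carries its own small bias the pooled estimate gets noisier:

    # Compare the spread of (a) one 200-person study's mean with (b) the
    # average of ten 20-person studies, each with its own small bias 'tau'.
    import numpy as np

    rng = np.random.default_rng(0)
    sigma, tau, trials = 1.0, 0.1, 2000   # tau = assumed between-study spread

    big = [rng.normal(0, sigma, 200).mean() for _ in range(trials)]
    pooled = [np.mean([rng.normal(rng.normal(0, tau), sigma, 20).mean()
                       for _ in range(10)]) for _ in range(trials)]

    # With tau = 0 both print ~0.071 (= sigma / sqrt(200)); with tau = 0.1
    # the pooled spread grows to ~sqrt(sigma^2/200 + tau^2/10) ~= 0.077.
    print(np.std(big), np.std(pooled))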
A major investor & leader in healthcare that shall remain unnamed had a book "How to Lie with Statistics" in a public reading list. It's a good book & a quick read. Highly recommended.
It's interesting that it takes an editorial to make people suspicious of statistics, of how statistics can be abused, & of the conflicts of interest that many people who utilize statistics have. Sample bias needs to be treated as deliberate dishonesty rather than a simple mistake. The people who make these mistakes are professionals and should know better. Their code of conduct should penalize them harshly for making these sorts of mistakes.
A strict code of conduct with harsh professional penalties is necessary to remove bad actors who hide behind subtle lies that have a major impact on public policy & public opinion. A slap on the wrist means it's always worthwhile to lie with statistics. Removal of license & banishment from the profession on the first or second offense would quickly remove the bad actors. This code of conduct should also extend to the peer review process: if the peers pass bad statistics, the peers need to be held accountable as well.
I had already reached that conclusion when I saw a genetic researcher presenting his p-value < 10^-40 as better than < 10^-10. I kept my mouth shut because I didn't want to ruin the poor guy's moment in the sun, but I knew it was time to get out.
My naive understanding is that "smaller p-value" == "more likely result is true".
I know there's always more nuance in statistical reasoning, but the first number is vastly smaller than the second one, right? Is it just that both are hilariously tiny and not credible? Or is there no additional value after you get into the one-in-billions territory?
It would be, but such an imbalance of p-values is unrealistic. 10^-10 probability? If your probabilistic model includes even a one in a billion chance of messing up (10^-9), a p-value of 10^-10 is already too small. That’s before you look at 10^-40... so they are probably both wrong.
A nice demo of this effect is DNA matching in criminology. Although DNA matching of suspects to DNA samples can be insanely accurate, in practice it is limited by the incidence of monozygotic (identical) twins, which is about 3 in 1,000. You cannot be more certain than this that you got a match, essentially.
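Back-of-the-envelope, reusing the numbers above (illustrative only):

    # However tiny the nominal random-match probability, the chance the true
    # source is an identical twin puts a floor on how certain you can be.
    nominal_match_error = 1e-10     # claimed random-match probability
    p_identical_twin = 3 / 1000     # monozygotic-twin incidence quoted above
    print(max(nominal_match_error, p_identical_twin))   # ~0.003 dominates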
Exactly, numerical errors could easily have accounted for the difference between already tiny p-values. The point isn't that the smaller p-value isn't better than the bigger one, it is, but that small significance should have been attached to the difference.
This example is a genome-wide genetic association study. Every genetic variant is tested, so at least 500K or more linear regressions were performed. This many statistical tests could lead to many false positives just by chance, so one must do multiple-testing corrections. The end result of multiple-testing correction is much bigger and therefore worse p-values. Hence the drive toward ridiculously tiny p-values.
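As a rough illustration (a Bonferroni-style correction is one common choice; actual GWAS practice uses a similar fixed cutoff):

    # Keeping a 5% family-wise error rate across ~500K tests means each
    # individual test must clear a far stricter per-test threshold.
    n_tests, alpha = 500_000, 0.05
    print(alpha / n_tests)   # 1e-07, close to the conventional 5e-8 GWAS cutoff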
Yeah, I'm also mystified by that comment. You are correct that smaller p is better. Those near-physics-level p-values are not totally unheard of for genetics either, because they have very large databanks with hundreds of thousands of data points in them and the ability to do large analyses over them, so they can obtain a lot of statistical power.
Precision in p-values that small is more or less meaningless in almost all cases, because any violation of model assumptions will result in p-value imprecision far greater than 10^-10. p-values are (almost always) approximations based on an approximate model, and the variation between the model and reality is probably more than 10^-10.
Some tiny aspect of the real process that your model fails to capture might mean that that 10^-10 is actually 0.001, and 10^-40 is also 0.001. In complex biological fields it's fair to assume that there are always such tiny aspects.
You're right. The numbers are too small to be plausible. I read on Scott Alexander's blog about 5-HTTLPR that in genetics they can get very low P values relative to most life sciences, but 10^-40 indeed seems far too low for any plausible experiment. I guess even in particle physics they don't go that low.
> My naive understanding is that "smaller p-value" == "more likely result is true".
I think you're making the classic Prosecutor's fallacy: https://en.wikipedia.org/wiki/Prosecutor%27s_fallacy. In my experience, smaller p-value tends to be more of a measure of sample size than anything else, or an overly restrictive null distribution that is almost certain to be rejected.
I estimate the risk of human error (choosing the wrong modeling assumptions, a bug in the data-processing code, etc.) to be at least ~1%, so there really isn't any point in claiming any statistic smaller than that.
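In sketch form (illustrative only, using the ~1% estimate above):

    # If the analysis pipeline itself is wrong ~1% of the time, no reported
    # p-value can make the overall claim more credible than that floor.
    p_reported = 1e-40
    p_pipeline_error = 0.01   # the ~1% human-error estimate above
    print(max(p_reported, p_pipeline_error))   # 0.01, whatever p_reported says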
I'm going to go against the general consensus that seems to be forming here.
In my experience, at any scale, you cannot resolve trust issues by reducing trust. When you reduce trust, people producing the science expect to be scrutinized more and decide they can afford to be less critical of their own work.
The first thing I saw in academia is that scientists are almost never dishonest. Science is a complicated psychological process: you need hope to guide your research and you need rigor to test your hypotheses. When you let hope take too big a spot in your mind, you tend to publish your partially tested hypotheses as results; when you lack it, you tend to get paralyzed or to work only on predictable things.
And I saw this process multiple times. But I extremely rarely saw actual malice. Even when there was an incentive to produce bad science, you could see that the authors had genuinely convinced themselves that what they produced was rigorous and correct.
So, as in the political arena, reducing trust is a vicious circle; increasing it, on the contrary, is a virtuous one. When most of the science is considered genuine, it becomes a much harder hit for any scientist to have their work shown to be incorrect. That in turn promotes more self-criticism, which is the cheapest and most effective kind of review.
> The first thing I saw in academia is that scientists are almost never dishonest.
Was that health research, or some other field? It's plausible that health research is more corrupt than, say, theoretical physics, because of the immense amounts of money that can be gained from particular outcomes.
I work on large multi-center clinical trials as a machine learning engineer. One of my projects involves the semi-automation of the detection of fraudulent data.
There's one link in the chain here missing that some people here seem to be ignoring. The authors of this post (while entirely correct) draw no link between "bad data" (which is doubtlessly responsible for a large number of "bad papers"/"bad trials") and "bad clinical practice."
I don't know a single clinician who would base their care on the findings of a single-center RCT of the kind described in this article. Or the findings of a meta-analysis of single-center RCTs, for that matter.
Bad data happens in multi-center RCTs too, and in fact that's what I'm focused on, but a lot of work already (and therefore $, for the cynical) goes into the validation of data (see [1] for a brief description). Phase III clinical trials in the west practically require a robust multi-center RCT, where systemic fraud is very difficult to perform (but not impossible [2]). By the time a Phase III trial is conducted, the efficacy of the drug can already be estimated, and the focus of the drug company (who yes, often fund these trials) is to conduct a trial which is unimpeachable in the face of a regulatory board (who are generally good at their jobs, although the revolving-door tends to reduce public trust and should be legislated away).
In short, I support most of the proposed changes to incentives around publish-or-perish. I reject the notion that these incentives are (currently) significant drivers of decreased quality of standard of care in the West. I think global governance structures, as suggested in this article, could improve understanding among both clinicians who are not necessarily scientists and the general public about just how validated a given standard of care is.
tl;dr Most good evidence-based practitioners already think this way -- not because they inherently believe fraud is rampant, necessarily, but because evidence says the kinds of studies where fraud is most prevalent are untrustworthy for other reasons.
Some segments of HN have strong anti-scientist sentiments (even while they proclaim to be pro-science), assuming we are all crooked, stupid, or both; hence your insightful and reasonable comment being downvoted.
Is there fraud? Sure. Is there a lot of fraud happening in American science? I don't think so. To quote the article:
"Many of the trials came from the same countries (Egypt, China, India, Iran, Japan, South Korea, and Turkey), and when John Ioannidis, a professor at Stanford University, examined individual patient data from trials submitted from those countries to Anaesthesia during a year he found that many were false: 100% (7/7) in Egypt; 75% (3/ 4) in Iran; 54% (7/13) in India; 46% (22/48) in China; 40% (2/5) in Turkey; 25% (5/20) in South Korea; and 18% (2/11) in Japan. Most of the trials were zombies. Ioannidis concluded that there are hundreds of thousands of zombie trials published from those countries alone. "
I think institutional incentives matter a lot, as does the reasonably lucrative prospect of a career outside of academia if things don't work out. That is perhaps why we see such stark regional differences.
No one I know has committed fraud in their research. I've seen mistakes in their code however, but that is another story.
Another problem with studies is that negative results are rarely published unless it's something really, really "interesting".
"we tried treating X with Y, and it didn't help (even though in theory it should have some effect)" is harder to get published than "we treated X with Z in vitro and it killed all the cancer cells (and noncancer ones too, whoops)".
I thought the article was going to be about research that could not be reproduced, not outright fraud! We're in the sleeptech space, so now I understand why there is so much snake oil that has university names attached to the "research". I thought it was just the company that was selling the product that was a fraud, not the research institutes themselves!
The headline is more than a bit sensationalist. I never know what to make of BMJ, which sometimes seems sensationalist: Can anyone in the industry or profession characterize who they are, what they do, and what their reputation is?
No, just a statistical reality of multiple hypothesis testing.
Just as you wait a few blocks for a confirmation on a blockchain, you gain more and more confidence as health research is confirmed by independent papers.
From my experience, most of it is. I just left a high paying position working in the healthcare space as a data scientist, because it became clear this was known and there was no intention to improve the situation. Instead, the focus was on selling and making a quick exit.
> Research authorities insisted that fraud was rare, didn’t matter because science was self-correcting, and that no patients had suffered because of scientific fraud.
Thus aiding and abetting murderers like Paolo Macchiarini.
> All those reasons for not taking research fraud seriously have proved to be false, and, 40 years on from Lock’s concerns, we are realising that the problem is huge, the system encourages fraud, and we have no adequate way to respond.
As his patients could attest... If they weren't dead.
I presently work at Defense Innovation Unit where pretty much all we do is test and evaluate prototypes on behalf of the services and agencies. We spend hundreds of thousands to millions testing each thing. Because hundreds of millions of dollars may be spent based on what we find. We probably get 80 pitch decks for every area of interest we announce. That usually gets winnowed down to 5-10 that actually have technical merit and are responsive. We get a few shysters with every batch, but the vast majority are simply too optimistic about what they've done or what they can accomplish.
I believe we are getting to the point where we need to create a broad category of crimes for misleading the public: from lying as a trusted public figure (elected politician, government employee) all the way to publishing or submitting fraudulent research for publication. Countless lives were lost in the past year because people believed fraudulent studies about COVID treatments. Worse than that, government officials publicly supported those false reports and increased the damage they caused, for political or even financial motivations.
The search for objective truth is a societal defence mechanism. Assaulting it is an assault on society itself.
Agreed. Instead of looking for institutional solutions, we can demonstrate the need for critical thinking and skepticism as individuals. This is a can-do solution we can start on now. The above poster's plea of "there ought to be a law" removes the agency of the individual.
There are also serious problems with appointing fact checkers as impartial arbiters of objective truths. It is an untenable scheme. A naked appeal to authoritarianism.
> we can demonstrate the need for critical thinking
Good luck with that. Humans are not always rational and don't always act in their best interests, much less in the whole species' best interests.
If we leave saving our species to individuals, I wish the cockroaches better luck. I, personally, would bet on ants and bees, as they seem to be much better organized than us.
My alarm bells would go off if someone claimed to know what is best for the whole species.
If individuals are not rational as you say, then how would politicians, technocrats or other central planners be rational?
If an individual doesn't have the right to coerce you, how does a collective of individuals have the right to coerce you?
Individualism is the decentralization of information and decision making. It has a different failure mode. If we accept that men are fallible, then individualism allows for a competition of solutions and ideas. Humans will never be perfect. Decentralization allows us to progress and iterate faster than central planning, which has all of the same problems with what you call "irrationality".
> A politician? A private or public institution complicit in this behaviour?
Elected politicians are representatives of their people. If people are voting on politicians complicit with institutions, private or public, engaged in this behavior, then it's an example of this issue.
Trump has shown us how frighteningly little of American democracy is kept together by anything more than decorum. If politicians remain popular while being shameless, there isn't much your democracy can do to protect itself.
I can say that the same (rapid institutional degradation) is happening in Brazil, the country I grew up in. In 2016 an elected president was removed from power based on a campaign of disinformation that started before the election (but failed to prevent her re-election) and ended in accusations of unlawful accounting maneuvers that, in the end (well after the impeachment), were ruled legal by a court. The disinformation campaign continues and was responsible for the election of a cartoon fascist who is still supported by a third of the population. Newspapers report 3 to 5 lies every day. The latest is recommending a chemical castration drug as a COVID treatment.
We need institutions that more robustly defend ourselves from disinformation campaigns.
The article is about intentional error; however, I have found this to be a very approachable and entertaining primer on (probably) unintentional error in research:
Research that has been further filtered onto your favorite infotainment coffee-table-science column further suffers from the 'eigenvalue' problem. I think assuming intentional fraud is excessive as a default position, but complete trust of any document formatted with LaTeX because "it's science" is probably worse.
It is about time to focus on the right problem: management standards that cause this crap to be pushed, and the effective immunity from consequences companies have when they lie.
Why blame scientists when power is actually with management?
And why let management and investors get away with this? It is about time "limited" liability came with pass-through liability for this type of thing: if you lie, knowingly or not, there are consequences, and the consequences bypass limited liability. I bet that if that happened, this type of crap would immediately cease!
Unfortunately, you yourself are assuming that the actors in charge of holding people accountable are themselves trustworthy.
I feel we are approaching a singularity of low trust between people. It's only getting worse. You can't trust the watchers (journals), and you can't trust the watchers' watchers (governments and the law). You can only trust yourself at the end of the day.
Is it naive to suggest that we try something similar to 3rd party security audits? If a journal or institution submits itself to multiple 3rd party audits, which are allowed to test a variety of nasty tactics to get bunk science approved or published by them, it could potentially be a good way to keep them honest. Unless, of course, they collude and become rubber stamps (like certain food labels in the US).
> Researchers progress by publishing research, and because the publication system is built on trust and peer review is not designed to detect fraud it is easy to publish fraudulent research. The business model of journals and publishers depends on publishing, preferably lots of studies as cheaply as possible. They have little incentive to check for fraud and a positive disincentive to experience reputational damage—and possibly legal risk—from retracting studies. Funders, universities, and other research institutions similarly have incentives to fund and publish studies and disincentives to make a fuss about fraudulent research they may have funded or had undertaken in their institution—perhaps by one of their star researchers. Regulators often lack the legal standing and the resources to respond to what is clearly extensive fraud, recognising that proving a study to be fraudulent (as opposed to suspecting it of being fraudulent) is a skilled, complex, and time consuming process. Another problem is that research is increasingly international with participants from many institutions in many countries: who then takes on the unenviable task of investigating fraud? Science really needs global governance.
Long excerpt, but the best arguments defy paraphrasing. Arguably, science would benefit more from decentralization than from global governance, because more science could get done instead of time being spent on the zero-sum politicking of working the governance systems. When you train people to, in effect, litigate their research as a case for the approval of committees, the falsification of everything is an unavoidable outcome. It becomes like a legal dispute, where there is no truth, just the prosecution of a case to make anything stick they think they can.
It's not just health research, it's a couple generations of graduates who were given a simple blunt instrument in a system incentivised to popularize the use of that dull tool. The tool reduces to, "there is no truth, only power, words are just tools to struggle for it, responsibility is what you have when you don't have power, and if you take care of this system, it will take care of you, but if you call it out it will come for you first." That's the One Big Thing the hedgehog knows. If you practice it, you can rise to the top of almost anything without knowing much about it. The way to beat it is to set a bar of competence and concreteness.
Scrutiny of experimental data goes a long way to setting that bar.
As shown both in the article and in the comments, the scientific establishment seems content to tolerate fraud. But when research goes against big-money interests, suddenly standards become very strict. See how Andrew Wakefield was treated. https://www.bmj.com/content/342/bmj.c7452
There's the famous story of the doctor who figured out, in the 1980s, how to cure stomach ulcers.[1] Stomach ulcers are usually a bacterial disease, and antibiotics work.
> The microbiologists in Brussels loved it, and by March of 1983 I was incredibly confident. During that year Robin and I wrote the full paper. But everything was rejected. Whenever we presented our stuff to gastroenterologists, we got the same campaign of negativism. I had this discovery that could undermine a $3 billion industry, not just the drugs but the entire field of endoscopy. Every gastroenterologist was doing 20 or 30 patients a week who might have ulcers, and 25 percent of them would. Because it was a recurring disease that you could never cure, the patients kept coming back. And here I was handing it on a platter to the infectious-disease guys.
Sounds like ivermectin is going through something similar. This off-patent, cheap, and safe drug is able to significantly reduce COVID-19 symptoms [0], and is extremely likely to be a potent prophylactic and treatment for COVID-19 [1], to the extent that the third wave in North America wouldn't have happened if it had been used. The discoverer, Satoshi Omura, won a Nobel Prize for the drug in 2015, and tried for many months to convince Merck (the original manufacturer) to conduct an ivermectin trial for COVID-19, to no avail. On July 1st, he finally found a Japanese company called Kowa to charitably conduct a clinical trial for ivermectin, without Merck's help. Amazing, right? But what response does Omura get from Western media? His announcement video was quickly deleted from YouTube [2]. You cannot find any English news about the Kowa trial being conducted. A few days ago, Omura was interviewed about ivermectin for the first time, on Yahoo Japan News [3]. Quoting him (using non-ideal Google Translate):
> "My impression of WHO is that I feel sorry for being caught in a dilemma. Until now, I have only seen bright light in my life as a researcher. But this time, I learned for the first time after reading this article that shadows also exist in the world... Ivermectin is no longer a scientific issue, but a political issue." --Satoshi Omura
It seems like big tech's misinformation crusade is biting us and science in the ass.
Here's a list of ivermectin trials on COVID-19.[1] It does seem to have some effect, cutting recovery time in mild cases by 20% or so. But that's not anything close to "would eliminate the third wave".
This study [2] from the early days of the epidemic indicated that the patients receiving ivermectin needed invasive ventilation much earlier. Which is not a good result.
Honestly, I was surprised not to see any mention of IVM in the original post. Many of the points in the original article apply to what's going on with systematic reviews of IVM; see, for example, The Guardian's allegations of misconduct/fraud in Elgazzar's big IVM study: https://www.theguardian.com/science/2021/jul/16/huge-study-s...
It does also bring into question the validity of Tess Lawrie's systematic review of ivermectin efficacy.
Also, your systematic review [1] includes the Elgazzar study as "low risk" of bias... when in fact Elgazzar had GLARING errors.
It also makes me question the competence of everyone involved in this systematic review, given that they can't find these glaring errors but some random med student can.
See also Tess Lawrie's response to the Guardian article, that they rated it "unclear" in bias versus low and high, and how the meta analysis is affected if Elgazzar is removed (12 min excerpt):
I agree, some of the trials have issues. But if you would like to cherry pick one trial and use that as evidence to the contrary, I will direct the reader to a firehose of ivermectin studies, which the reader can evaluate on their own:
You really cannot trust the raw risk estimates that they give on this site. But it's the most comprehensive list of trials for ivermectin there is, and a place for you to form your own opinion. Elgazzar has already been removed as a data point.
> I agree, some of the trials have issues. But if you would like to cherry pick one trial and use that as evidence to the contrary, I will direct the reader to a firehose of ivermectin studies, which the reader can evaluate on their own:
Surely the point of the OP's post is that the reader can't evaluate these studies on their own. At least not without undertaking the kind of review and background research that is not reasonable for even the expert reader.
Not to mention the irony of accusing the GP of cherry-picking when the site you linked is a cherry-picked list of trials curated by anonymous alleged HCWs.
First, don't underestimate people's ability to read some papers on Hacker News. It's not that hard. And second, what's wrong with anonymous alleged HCWs? Seems like the intersection of politics and science has gotten out of hand if you ask me. Also, if you could please find me a well-laid out superset of the articles in ivmmeta.com, I will gladly replace the link.
> First, don't underestimate people's ability to read some papers on Hacker News.
The point of the parent article is that it's impossible to assess a paper by reading it. There's no way to, for example, determine that studies on which a paper relies were fictional without further research. Ability doesn't matter - there isn't sufficient information in the paper alone to make an assessment. That is the crux of the crisis.
> And second, what's wrong with anonymous alleged HCWs? Seems like the intersection of politics and science has gotten out of hand if you ask me. Also, if you could please find me a well-laid out superset of the articles in ivmmeta.com, I will gladly replace the link.
It's hard to believe this comment is made in good faith. You yourself insinuated that another poster was cherry-picking, which is exactly what you did in your response. I can only assume you are trolling or otherwise commenting in bad faith?
It took 12 years for the paper to be retracted, so I wouldn't necessarily say "suddenly".
Also, I think fraud is easier to miss (which may look like it being tolerated) if it's something people expect, or if it's not such a big change from the norm. For example, I don't remember the specifics, but there's a story of some constant that was estimated a long time ago, and as people tried to measure it more accurately, measurements that produced a value too different from the previous estimate were rejected.
I bring that up because in this case, Wakefield's claim may have been so outlandish as to provoke intense scrutiny, which, as the paper you linked to says, led to findings of fraud.
Are you implying that he was let off easy? Should have gone to jail, IMO: "The panel found he had subjected 11 children to invasive tests such as lumbar punctures and colonoscopies that they did not need, without ethical approval."
Performing unfounded experiments on children while committing fraud should be treated as criminal, not simply reputation-destroying, IMO.
(I understand we need to protect researchers from some liabilities in the ethical pursuit of progress, but he was so far over the line that the "slippery slope" argument is kind of silly. Should medical researchers be immune to all prosecution no matter what they do or what lies they tell?)
Wakefield got off easy. This video does a good (and humorous) job of reviewing everything that was wrong with him and his study and it's a dense... 1h40m https://www.youtube.com/watch?v=8BIcAZxFfrc
Yes, though it's not much of a problem for those unwilling to take the leap based on only one or two emergent studies, who might prefer relying on decades of safety data.
Oh, but then that entirely sensible and understandable action might get one censored, deleted, fired, fined and/or discriminated against as a third class citizen. Cui bono?
Exactly. I've argued on social media with "trust the science" people: none of them were scientists, while I am one. People in general have no idea how the sausage is made and how broken the system is. A year or more ago I was joking with a coworker, "I hope medical studies are done more seriously than what we do." Today I wouldn't make the same joke, given how the situation spun out of control, with states enforcing authoritarian policies based on a poor understanding of how science works.
All research grants should take 40% of funds and give them to a completely separate team to do "QA" research. This would give up-and-coming researchers the cred to get primary funding after a few good QA projects. If research cannot be reproduced by an independent QA team, it does not make sense to fund it or trust it.
I think the first important question to ask is: is the research I am looking at directly applicable and relevant to other people?
If stuff is highly applicable and relevant, then the chance of getting away with made-up rubbish drops quite a bit, because people will want to try it for themselves, and will fail. It gets more problematic when the application takes a long time to show effects, i.e. long observation periods; there, more can be faked.
Most published research is largely inconsequential and interesting to only a tiny few, so "fraud" can easily hide in there. Yes, it increases the sum total of knowledge, but that might be about it. I am not saying most research isn't done well, just that there isn't a way to assess correctness at scale, and only high-profile results might get a fast check (and even there...)
Is the incentive for publishing fraudulent research primarily money, or prestige, or are there just that many professors and graduate students required to publish something?
> he set about investigating the trials and confirmed that they hadn’t ever happened. They all had a lead author who purported to come from an institution that didn’t exist
To me, this doesn't mean that simple distrust is the answer. These are basic issues that should be revealed with even minimal due diligence during the editorial & peer review process.
Peer reviewers and journal editors should bring a skeptical mindset to article submissions from the outset, before they're ever accepted for publication.
After that? Well, whenever research is on an emerging topic there is a certain amount of scientific skepticism you should use. Same if results go against an established consensus on a topic. However this is where the "replication problem" enters the picture because replicating research has a lower status.
When it comes to media reporting, things get even more complicated. New science is messy. You only have to look at COVID research for the past 1.5 years: when it's an issue of such public urgency, EVERY development hits the public eye, pulling back the curtain on the sausage factory. New science is rarely "Hey, look what I discovered!" followed by "Yay, we all agree!" It's more of a conversation or dialectic, with ever more research revealing the picture a bit more until there's enough to be confident in a given interpretation. And even then, work proceeds on alternatives.
The above is very much NOT how science is taught to the public in schools. You learn "Darwin Discovered Evolution!", not the significant years-long process of researchers arguing it out, sometimes even with heated vitriol. You learn "Newton Discovered Gravity!", not all of the complexities and disagreements that continue even to today.
Our education systems have failed society when it comes to truly understanding the scientific process. This is why distrust of science has increased: in past decades, awareness of scientific advances often reached the public only after at least part of the sausage was made, so now it looks as if science has descended into total disarray.
I've had a thought on how to solve this issue: basically, a research-paper futures market. You could implement this with Ethereum smart contracts. You have a market around the validity of research papers. You would need some authority to act as the oracle of a paper's validity. If the paper is found invalid before some time period elapses, the people who bet on the paper's truth lose their money to the people skeptical of the paper. This would also act as a mechanism to signal which papers people don't trust, by the amount of money bet against a paper's validity.
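To make the mechanics concrete, here is a minimal Python sketch of the escrow, signal, and payout logic such a market would need. On-chain this would live in a smart contract; everything here (the PaperValidityMarket name, the oracle interface, the example stakes) is a hypothetical illustration, not a real protocol:

    # Minimal sketch of the proposed paper-validity market (all names hypothetical).
    class PaperValidityMarket:
        def __init__(self, paper_id, resolution_deadline):
            self.paper_id = paper_id
            self.resolution_deadline = resolution_deadline  # e.g. a block height or timestamp
            self.stakes = {"valid": {}, "invalid": {}}      # side -> {bettor: amount staked}

        def bet(self, bettor, side, amount):
            assert side in self.stakes and amount > 0
            self.stakes[side][bettor] = self.stakes[side].get(bettor, 0) + amount

        def implied_distrust(self):
            """Share of total stake betting the paper is invalid -- the trust signal."""
            valid = sum(self.stakes["valid"].values())
            invalid = sum(self.stakes["invalid"].values())
            total = valid + invalid
            return invalid / total if total else 0.0

        def resolve(self, oracle_says_invalid):
            """Called once by the trusted oracle; losers' pool is split pro rata among winners."""
            winners = self.stakes["invalid" if oracle_says_invalid else "valid"]
            losers = self.stakes["valid" if oracle_says_invalid else "invalid"]
            pool = sum(losers.values())
            winning_stake = sum(winners.values()) or 1  # avoid dividing by zero if one side is empty
            return {who: stake + pool * stake / winning_stake
                    for who, stake in winners.items()}

    market = PaperValidityMarket("doi:10.1234/example", resolution_deadline=2_000_000)
    market.bet("alice", "valid", 100)
    market.bet("bob", "invalid", 50)
    print(market.implied_distrust())                 # ~0.33: a third of the stake is skeptical
    print(market.resolve(oracle_says_invalid=True))  # {'bob': 150.0}: bob keeps his 50 and wins alice's 100

The hard part, as noted above, is the oracle: whoever rules on validity inherits all the trust problems the market was meant to route around.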
That could still be a good system, like pay-for-play on the radio. While you could pump complete crap, it's more profitable to pump songs that people will actually like, and companies with better songs end up with more money to pump stuff.
Shouldn't all research be considered _tenuous_ until it is corroborated by a third-party without financial or social connections to the original researchers?
Not even that is enough, because you might end up with the file drawer problem. For example, you might end up with 50 people trying to replicate it, 49 failing and 1 "succeeding" by chance. The 49 won't publish, because they assume they got the "wrong" answer by chance and they don't want to publish negative results, so they will leave it in the file drawer.
The one that did "replicate" it will publish, and then in round 2 there is even more pressure not to publish negative results, because "hey, this was replicated before already".
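A quick simulation shows how little it takes. This assumes a drug with zero true effect, 50 independent replication attempts, and the usual p < 0.05 publication filter (the sample size and threshold are illustrative assumptions):

    # File drawer effect: a null effect, 50 replication attempts, and only
    # "significant" results getting published.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    published = []
    for attempt in range(50):
        treatment = rng.normal(0.0, 1.0, size=30)   # true effect is zero
        control = rng.normal(0.0, 1.0, size=30)
        _, p = stats.ttest_ind(treatment, control)
        if p < 0.05:                                # "positive" result gets written up
            published.append(p)
        # negative results go in the file drawer and are never seen

    print(f"{len(published)} of 50 attempts were 'significant' by pure chance")
    # With alpha = 0.05 you expect roughly 2-3 false positives out of 50;
    # the literature then shows only those, which reads as a successful replication.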
Why only health research? The prudent and sensible person never puts more trust than circumstances require in anything they haven't independently verified.
Why just health research? All science is already based on that very principle. In fact, a stronger one: you cannot ever prove that some research is "true", only that no inconsistency has yet been observed.
Replicating the results would be evidence that the original research was "true."
Unfortunately nobody wants to fund research that only attempts to replicate an already published result, when they could spend the same money on novel research.
In many European jurisdictions, including Austria, the losing party pays all costs of the lawsuit, including lawyers’ fees. This simple rule takes care of most frivolous suits. So the answer is: Go sue me.
All this sounds rather unsettling. I am left with many questions. I'll reduce them to two:
1. Are there people here (especially in the medical field) who feel the current system of research does its job (on average)?
2. Applied to the very hot topic of vaccine safety assessment for the new COVID vaccines: how sure can we be that there is/will be enough room to critically assess this? (Sorry, I had to ask, but I just read https://www.wired.com/story/the-cdc-owes-parents-better-mess... so yeah...)
I wonder if this will apply to research published by pharmaceutical companies with a proven history of illegal activity in support of the products they release.
I was terrified approximately 14 months ago when the big pharma companies started to get billions to develop the vaccine. My fear was that their research was fraudulent and they would be exposed and discredit science as a whole. I am very happy that this did not happen.
I guess we are not in a total catastrophe situation but there is significant rot.
Fraud vs incompetence is an important dimension here. I think a lot of the time people are just incompetent. In fact I assume that > 80% of scientific papers in fields with high levels of environmental noise (social science, environment science, medical, etc) are bad science.
So TL;DR I just assume incompetence instead of fraud.
As a layperson who feels affected by this issue (as I'm Autistic), I feel powerless, both to really understand this in depth and to do anything to fix it. Advocacy is moving things forward, but I don't know if that has managed to reach researchers or affect the kind and quality of research being done. Luckily some of the more overtly harmful trash 'research' that underpins a lot of the medical model and (lack of) understanding about Autism is being dismissed/is now somewhat regarded as contentious (Baron-Cohen's awful 'Theory of Mind'), but the harm is done, and things are very slow to change. The dialogue and research around queer issues went through a lot of changes, like homosexuality eventually being removed from the DSM, and the trans-related stuff has been getting better. How do we make progress like that for Autism? More Autistic researchers researching Autism? Actually involving Autistic people is a must.
My comment is a little messy, and I don’t really know what I’m talking about, I’m just wanting to voice a concern from a different perspective.
I will address the elephant in the room on this one.
168 comments so far; I did a search for the word "vaccine" on the page, not a single result.
So I assume this has not been discussed, despite the situation we are living through right now.
This is a frightening read, as we people in France are being literally forced by our government to be vaccinated (Moderna, Pfizer) or be socially terminated.
I did not know, and I'm pretty sure 99.99% of people also don't, that Moderna did not release a single product before 2020 and went "all in" on the COVID vaccine with up to $1 billion in funding from Operation Warp Speed. Moderna was sometimes referred to as the next Theranos. Its leader fits clearly into the sociopath territory of Silicon Valley transhumanist billionaires. "Risk very big, win very big" is his mantra; this man wants to vaccinate billions of people annually (read the article, please).
Moreover, if you read those two articles, you could replace "vaccine" with any software product and you would have a typical business article about a Silicon Valley startup. It is frightening to death to think that this "software vaccine" technology is to be used on the whole population with only a few months of study and, worse, with "forced consent" of populations.
Guess what happens when money conflicts with health.
No results, because vaccines are not approved based on publications and peer review of scientific papers.
They go through a completely different and extremely rigorous testing process where everything is documented carefully, and documents are examined and double-checked.
It's great to see 168 comments before first anti-vaxxer comment.
It's not great to go about name-calling like a schoolchild if someone brings up misgivings about a topic you seem to have decided has no room for discussion.
> They go through a completely different and extremely rigorous testing process where everything is documented carefully, and documents are examined and double-checked.
Rigorous testing is all well and good, but we do know how many errors crop up in scientific papers, right? A lot.
The gold standard of testing is double-blind, randomized, placebo-controlled studies, and this is only sometimes done for vaccines, as far as I'm aware.
I believe there are certain circumstances, such as when no safe and effective vaccine already exists, or when there's a certain kind of benefit to the injected population, where vaccines generally do go through double-blind, randomized, placebo-controlled studies (the gold standard of testing), and certain circumstances where they do not go through such rigorous testing.
I believe this ethic stems from Jonas Salk's decision during the development of the polio vaccine, where the ethical call was made not to do double-blind, placebo-controlled testing, out of a desire not to withhold a potentially life-saving vaccine from a placebo group.
COVID vaccines did go through double-blind, randomized, placebo-controlled studies with population sizes that normal scientific studies can only dream of.
As I said, vaccine testing is extremely rigorous.
Moreover the mRNA vaccines arrived at their statistical targets far ahead of schedule and when more data came in, the hypothesis was only strengthened. The documents are all public btw, on Moderna and Pfizer's websites.
Yes, sure, you can test all you want, even on billions of people, and write thousands of studies. That's what they are doing right now. But you can't buy time for your studies, do you understand that?
If the average period for studying the long-term effects of a vaccine is 5 to 10 years, how do you do that with a 6-month-old vaccine?
People are losing their minds; even rational, educated people are throwing away any critical thinking to embrace the rainbow rhetoric. This is a collective hysteria crisis.
I was speaking of the HN page discussion, but it seems my search in Firefox is completely broken, so I withdraw my claim that there was no vaccine reference, but I stand by my comments on the subject.
I don't know. The amount of people that take one poorly measured data point and use that as a north star seems more troubling to me - than a society with low trust.
I'm not convinced a low-trust society is inherently bad. I am convinced that a society that has high trust in garbage data is bad.
> I'm not convinced a low-trust society is inherently bad. I am convinced that a society that has high trust in garbage data is bad.
It feels like we currently have both at the same time. There is "low trust" in any data that doesn't confirm an existing bias, but "high trust" in the data we want to be true.
I see it all the time in HN comments. Any study that contradicts the consensus is met with comments about "correlation =/= causation", while anything that supports the consensus is embraced and correlation/causation isn't mentioned.
I think a better term for that is "brain damage". So what if all our brains are built damaged? Saying people are not reasoning machines but social-maneuvering machines doesn't change the fact that they are kind of broken...
The thing is, I think people can be both reasoning machines and social-maneuvering machines, depending on the incentive structure. Unfortunately we’ve created a society that greatly incentivizes social maneuvering.
I meant more that society's attitude toward standards of any kind--honesty, proper behavior, civility--has become increasingly suspicious. These standards are, some argue, simply power structures to promulgate the various -isms that plague society.
If you've got scientists that believe that the standard of honesty is an artificially constructed power structure then obviously you can't trust them.
I know this is a forum for nerds, but I struggle to understand how anyone could seriously believe that being surrounded by people who are either psychopathic or paranoid is probably A-OK, as long as they are all competent statisticians.
So you're a psychopath or paranoid if you refuse to blindly trust data without first putting some effort into seeing what the quality of the data is and how much evidence supports it???
> I'm not convinced a low-trust society is inherently bad.
The term "low-trust society" is generally not used to distinguish societies where people put enough emphasis into seeing what the quality of the data is and how much evidence supports it. Maybe because no one ever heard of such a thing. I do share your lack of conviction that such a society would inherently be a bad thing.
Maybe we in fact agree that a low trust society in the conventional sense -- one characterized by low interpersonal trust -- is less appealing?
Is this some vaccine hesitancy in disguise from BMJ?
> Stephen Lock, my predecessor as editor of The BMJ, became worried about research fraud in the 1980s, but people thought his concerns eccentric. Research authorities insisted that fraud was rare, didn’t matter because science was self-correcting, and that no patients had suffered because of scientific fraud. All those reasons for not taking research fraud seriously have proved to be false, and, 40 years on from Lock’s concerns, we are realising that the problem is huge, the system encourages fraud, and we have no adequate way to respond.
Yes, all approved medicines are the result of medical research. But approving medicines isn't the entirety of medical research.
I'm sure you've heard of the adage: egos are large when the stakes are low. This is related to that. The push to "get published" is so great that if you know that your research has no practical effect or that it won't affect anyone, you can kind of make up whatever results you want. As long as it publishes. The goal isn't to find the truth or to answer a question, the goal is to get a byline in a paper. And you get more and better bylines by discovering something radical and novel rather than by saying "Nope, doesn't work, just like expected".
On the flip side, when the results really matter, you'll find people do proper due diligence. Especially when your results will be essentially confirmed practically by billions of people on the planet. When the stakes are high, we wind up being way more cautious.
Of course, I hear your meta-concern. Because, yes, people will use this paper to pull the "Science is a lying bitch"* card. But it is also an issue that must be dealt with, or at the very least acknowledged. As the article itself notes, someone noticed it in the 80s, but due to that very concern about casting doubt on medical research, they kind of just hoped it wouldn't become an issue. And now the issue is too great to deal with simply.
In the end, medical, and really all scientific, research cannot be "hit driven". Failure must be an option. And if this month's issue of The BMJ is a bit "boring" or "thin", then so be it. The focus must be more on finding the correct answer rather than the flashier answer. Even when the stakes are small.
*Science is a lying bitch: from the It's Always Sunny in Philadelphia episode "Reynolds vs. Reynolds: The Cereal Defense". One of the characters uses the fact that certain noted scientists had incomplete ideas about certain scientific phenomena, or weren't completely right on every subject, as proof that science itself was flawed and couldn't be trusted on the subject of evolution, and that therefore one should believe the biblical account of creation, because the bible hasn't been changed since it was written.
What? How do you figure that? Some person points out actual instances of poor quality research and you imply he is anti-vaccine? I can find no record of this person discouraging vaccination.
Ok, maybe it's just a coincidence that it was published right when the current dangers of misinformation are so high and so many are failing to believe science.
The solution to “people aren't believing academia” is to make academia more trustworthy (probably by filtering for trustworthiness rather than making individuals more trustworthy), not to encourage scientism. Seeking the truth is still important, after all.
Believing in science means believing in falsification and scepticism. This was published during the ongoing replication crisis in medicine, where we're finding that more than half of cancer studies don't replicate [1]. The replication crisis hasn't been put on hold, with all science suddenly deemed irrefutable and any scepticism branding you VACCINE HESITANT and DANGEROUS.
I'm mostly confident in the COVID vaccines because we're at billions of doses and there is so much uncorrelated data about the vaccines, and they're so controversial and their safety/efficacy is being looked at by so many people that one can be relatively confident about their efficacy and safety (so long as you look at uncensored information to avoid systemic bias). We don't need to be science denialists and start denouncing scepticism on safety grounds, we can point out there are special reasons to be confident in the vaccines.
I've got both vaccines especially early for my age group (clinical vulnerability) and I've been lobbying my vaccinated peers to at least get one dose, so apparently not.
I say "mostly confident" because we have literally zero data of effects after 3 years and can only make inferences, and it's foolish to feign knowledge you do not have to avoid being called "hesitant", but my lack of confidence does not make me hesitate to recommend the vaccine. I also lack knowledge of long term effects of COVID itself which may in fact be worse than the long term effects of any vaccine given COVID does most of what the vaccines do plus extra nasty stuff.
Zero data of the effects after three years. Plenty of data about the effects after one. It's not “reasoning without data”; it's reasoning without as much data as one would have liked.
The commenter did not say the results of one year were under consideration. They described a state of recommendation without information. Besides, March to July isn’t even half a year for phase 3 trials.
You seem to be concerned that I don't maintain a state of complete agnosticism in any matter which I lack data for, but it's really hard to do so in practice in a world of distinct choices. Most everybody will either get vaccinated or not get vaccinated and at least have made an implicit prediction about the future. Since I was pressured into making that choice it seems hardly surprising that I'm willing to opine on this issue where I lack data.
>"Is this "mostly" qualifier not the dangerous front end of misinformation and vaccine hesitancy? "
I do not understand. HN forums are places where "question everything" should be the norm, especially about science, and even more especially about health, when so much money is at stake.
It seems we are on a religious quest, with virtue-signaling signs everywhere, to laud the COVID-19 vaccine miracle. Everything is "magic". One could draw a parallel with the state of the economy and the stock market: we are printing our way to prosperity as we save the world with the Coué method and, of course, brutal coercion (mandatory vaccination at the state level).
What part of "you cannot buy time" don't you understand?
You may throw billions, no, trillions, quadrillions, bazillions at all the Silicon Valley startups and brains to develop a vaccine in a matter of months; but you cannot buy a 3-, 5-, or 10-year study with this money. You can't even buy one month or one second. You just have to wait; this is life, and bio-trans-human thinking still can't, and never will, overcome nature, space, and time.
So you might be "mostly" confident about the vaccine; I would say that is the absolute maximum, best-case-scenario adjective you can use. Moderna and Pfizer sure are confident: "risk big, win big".
But we'll have to wait a few more years to fully assert something like "safe and effective" for the COVID vaccines; that is not a fact anybody can deny.
Well of course I'm not going to believe blatantly fraudulent "science". That has nothing to do whether or not I would take a COVID vaccine (I have). Trying to suppress the truth that there is shoddy or false research to keep up appearances is ridiculous. There's something between ignoring scientific evidence you dislike and credulously taking everything a scientist says as fact.
And the "dangers of misinformation" have been high for a long time now (perhaps since antiquity). Modern climate change denial has its roots in the 90s, and modern vaccine skepticism is similarly old.
Affirming the consequent, sometimes called converse error, fallacy of the converse, or confusion of necessity and sufficiency, is a formal fallacy of taking a true conditional statement (e.g., "If the lamp were broken, then the room would be dark,") and invalidly inferring its converse ("The room is dark, so the lamp is broken,") even though the converse may not be true.
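In propositional notation the contrast with the valid rule is easy to see; here is a minimal restatement of the definition above, using the same lamp example:

    % valid: modus ponens
    \[ \frac{P \to Q \qquad P}{Q} \]
    % invalid: affirming the consequent -- Q may have causes other than P
    \[ \frac{P \to Q \qquad Q}{P} \]

With P = "the lamp is broken" and Q = "the room is dark", the second form fails because the room can be dark for many reasons besides a broken lamp.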
You are trying to take a very general statement and make it a statement only about one very specific subset. That does not mean that someone pointing out your logical flaws is falling into the converse fallacy.
It's like someone writes an article about drought in the west, and you're saying that they must really be talking about El Centro, California. Yes, it probably includes El Centro, but it's also talking about Phoenix, and LA, and Salt Lake City, because it's really talking about the west as a whole. Trying to make it be "about" El Centro is missing the point.
The article is about all of medical research. Vaccine development is part of medical research and is therefore included. I did not attempt to "make it a statement about one very specific subset." I pointed out that the article is similar to the kind of article seized upon by people who have vaccine hesitancy ideations.
So, on an article that you admit is about all medical research, you asked:
> Is this some vaccine hesitancy in disguise from BMJ?
Why, in an article that's about all of medicine, do you ask about BMJ's stance on vaccines? Why not ask about their stance on chemotherapy? Why single out vaccines? And, in the current climate, are you talking about all vaccines, or are you really talking about vaccines for one particular disease?
That question you asked is your innuendo, not my strawman. Maybe you should take that somewhere else.
The timing and motivation of publishing something is a valid line of inquiry and not innuendo. Vaccine hesitancy is a salient topic of the day. A world leader accused a major social network of killing people by distributing misinformation [0]. Why would a publication choose this moment to editorialize about fraud in medical research? This wasn't a long running experiment from which new information was developed. Why publish now?
Maybe it has nothing to do with vaccines but rather maintaining the BMJ brand ahead of some new fraud they know will come to light, or a warning to researchers who might publish elsewhere. Why publish and why publish now?
That was my first thought as well. Casting doubt, and then in a few days we'll see this opinion piece referenced by some right-wing politician who is vaccinated but "can understand their constituents' hesitancy to trust the medical establishment".
Also, I haven't seen anyone in this thread bring up the perverse incentives of capitalism. Like... let's just pretend this is a moral failing rather than the literal race to the bottom that capitalism produces.
Perverse incentives exist in many socio-political systems, capitalism isn't unique in that regard. For example, the Soviet whaling industry [0] sounds almost like the paperclip maximization AI [1].
“He is now sceptical about all systematic reviews, particularly those that are mostly reviews of multiple small trials.”
This describes basically all "meta-analyses" of COVID claims, especially on the effectiveness of masking and lockdowns. This is at the heart of "epidemiology", which is, as far as I can tell, the science of organizing data to meet predetermined political demands.