
As I’ve transitioned to more exploratory and researchy roles in my career, I have started to understand the science fraudsters like Jan Hendrik Schön.

When you've spent an entire week working on a test or experiment that you know should work, at least if you give it enough time, but it isn't working for whatever reason, it can be extremely tempting to invent the numbers that you think it should produce, especially if your employer is pressuring you for a result. Now, obviously, the reason we run these tests is precisely because we don't actually know what the results will be, but that's sometimes more obvious in hindsight.

Obviously it’s wrong, and I haven’t done it, but I would be lying if I said that the thought hadn’t crossed my mind.




> When you've spent an entire week working on a test or experiment that you know should work

I thought the whole point of doing experiments was to challenge what we "know" so we can refine our understanding?


Sure, in la-la-land where science isn't conducted by humans.

In reality, scientists are highly motivated (i.e. biased) individuals like anyone else. Therefore science cannot be done effectively by individuals.

The system that derives truth from experiments - the actual scientific system - is the competitive dynamic between scientists who are trying to tarnish each others' legacies and bolster their own. The scientific method etc. primarily makes scientific claims scrutinizable in detail, but without that scrutiny it is still highly liable to produce false information.


A bit of a nitpick, but...

> The system that derives truth from experiments - the actual scientific system...

Yes!

> ... is the competitive dynamic between scientists who are trying to tarnish each others' legacies and bolster their own.

Hm. To some degree, sure, that is one dynamic, but (a) this leads to/presupposes a truckload of perverse incentives, and (b) this is not inherent in the system if we rearrange the incentives.


Do you have an idea for a better one? It is pretty darn close to natural selection, which, while ugly, does produce surprisingly good results in many domains.

Of course the implementation is far from perfect. For example, the interaction between impact factor and grant funding produces pressure toward ideological conformity and excessive analytical “creativity”. But the underlying principle of competitive scrutiny is probably a desirable one.


> Do you have an idea for a better one? It is pretty darn close to natural selection, which, while ugly, does produce surprisingly good results in many domains.

Cooperation is also an extremely fit behavior in natural selection.


Not by itself it’s not. The “selection” part of natural selection is inherently competitive, even if some things cooperate as a competitive strategy. Obviously scientists can and do cooperate within the broader framework of competition.


How do you eliminate the personal incentive to have found a meaningful result? I don’t think that can be changed without redesigning the human psyche.


I think the desire to do something meaningful can easily exist outside of a "competitive dynamic", which was the thing that felt off for me.


> Sure, in la-la-land where science isn't conducted by humans.

If someone has a large bag of money lying around, the plan is this:

There are lots of companies that will run material A through machine B for you. There are a lot of science machines. The idea is to put a lot of them into a large building and make a web page where one can order the processing of substances, in a kind of design-your-own Rube Goldberg machine.

It can start with all purchasable liquids and gases; mixing, drying, heating, freezing, distilling, etc.; and measuring color, weight, volume, viscosity, nuclear resonance, microscope video, and so on. Have as much automation as possible, collect all the machines. A robot cocktail bar, basically.

Work your way up to assembling special contraptions, all ordered through the GUI.

Jim can have x samples of his special cement mixture mixed and strength tested. Jack can have his cold fusion cells assembled. Stanley can have his water-powered combustion engine. Howard can have his motor powered by magnets. Veljko can have his gravity-powered engine. Thomas can have his electrogravitics. Wilhelm can have his orgone energy.

or not... hah....

If any people are involved, they should not know what they are working on.

It won't be cheap, but then you get a URL with your nice little test report, and opinions be damned.


I think that might end up with "Oops! All Smallpox."


I'll be the last one to say the idea doesn't come with some serious challenges. Someone some day will think it funny to try to blow up the place.

But if you want to do without human error/bias, there is nothing that comes close to removing all the humans.

Things that are controversial, unbelievable, or unlikely may have big implications, and risking your career on them is usually not a good idea - for you.

Through automation one might drive the prices down enough to make the brute-force approach viable, but with somewhat intelligent machines one could also make educated guesses in volume.

You could auto-suggest similar experiments while the researcher types their queries, complete with prices.

The original question was: how can we do more research without increasing the number of scientists?


And yet, it is still the best we've got for also producing highly reliable and correct information.

Personally, I think the “highly” in your statement is quite exaggerated. Humans can be convinced to produce bad science, for sure, and there are even journals set up by religious orgs that specifically exist to do just that.

But at the same time, science landed humans on the moon.


> But at the same time, science landed humans on the moon.

That was engineering. Closely linked to science, but not the same process of inquiry.


Engineering did not discover the Keplerian or Newtonian laws of motion.


> Personally, I think the “highly” in your statement is quite exaggerated.

Except that the entire point of the article here is that it's not exaggerated.

> But at the same time, science landed humans on the moon.

Cherry-picking a highly successful, well-known example doesn't prove a point.


> Cherry-picking a highly successful, well-known example doesn't prove a point.

There must be hundreds, if not thousands, of successful scientific discoveries that went into something as complicated as the moon landing, and if you still don't think that's convincing, just look at the world around you - it looks radically different from the world of, say, a couple hundred years ago.


> Cherry-picking

As if our lifespans and quality of life haven't been drastically improved by modern medicine.

I mean, we can cut people open and replace entire parts of them and they're fine. They don't even get sick anymore - thanks germ theory and aseptic technique! Do you not understand how much of a marvel that is?

Before that, people used to get cuts and scratches and just... die. We can now fully rummage inside an arbitrary person's internal organs.

And don't even get me started on long-term illnesses. High blood pressure and cholesterol have been killing humans since forever, and we have medicine that just fixes that. And now we're getting medicine to rewire our brains to prevent addiction in the first place (semaglutide).


This is what troubles me about medical science. I've heard tons of things about fraud and unreproducible results, but new wonder drugs (that actually work!) are deployed every year.


Clinical trials in general are extremely, extremely above board. The level of scrutiny is extreme, and the stakes are unbelievably high for pharma companies and the individuals involved. There are better ways for an unscrupulous pharma co to gain an edge.

That said, wonder drugs are few and far between. The GLPs are at least a once-in-a-decade breakthrough, so that’s probably most of the noise you’re hearing (there are a lot of brand names already).


What about Vioxx?


Vioxx was an unknown-unknown problem.

Cardiovascular safety was tested in the original trial. It passed. Nothing in the data during development suggested it was an issue. But trials can’t detect everything.

It wasn't until it got to market that a safety signal popped up. Then retrospective analyses of large data sets proved it.


> in general

No one is under the illusion it’s perfect or ungameable. A drug slipping by every few years is bad and often tragic, but IMO nowhere close to indicative of a systematic problem. It is a system that is worthy of a high degree of trust.


I'm unfamiliar with Vioxx and whether its approval really was a result of mistakes.

Shouldn't we expect some small percentage of failures in these processes given that they are driven by statistics and confidence intervals? Is that even a failure of the process, or is it a known limitation given how many resources and how much time we are willing to allocate to the discovery process?


Yes, we should expect some small number of failures, and so far I agree: I don't see evidence of a problem that needs fixing.

As patio11 says, the correct amount of fraud in a financial system is not zero, and the correct amount of false positives in drug approvals is not zero.


> Shouldn't we expect some small percentage of failures

Yes, and this is really solved at a more local level. Doctors aren't prescribing new drugs like candy. They, too, are skeptical of their success and will reserve those prescriptions for the most desperate cases. Over years, we (and the doctors) learn how effective these drugs are and what potential side effects they have.


You’re hearing loads about fraud because the anti-intellectual bots are here to make sure you hear about them all the time.

Republicans and Russian bots WANT you to hate science and academia and they have frequent pushes across social media platforms to make sure you do.


> And yet, it is still the best we've got for also producing highly reliable and correct information.

It's not. Markets are good at that. They're actually competitive. Academia is good at producing enormous volumes of documents that claim to be information, and may or may not be if you test them. It's "competitive" in a weird way where people don't compete over what's actually true but over who can convince central planning committees to give them money, which is very different.


Why is 'competition' your marker for the ability to 'obtain reliable and correct information'? Wouldn't cooperation be necessary along with competition? Entities that only compete with each other produce no standards and share no knowledge. The market (with limitations for externalities and to prevent fraud) is good at distributing limited resources in the most efficient way possible. Why is it being shoehorned into being the solution to the production of reliable and correct information?


They're both necessary, you're right. A more precise statement would be something like "competition between groups of cooperators".

You can't separate the production of correct information from the production of goods and services. They're inherently intertwined; the attempt to separate them is how we ended up with such a polluted scientific literature. Having formed a hypothesis as to what is true, you have to test that out in a robust way where you can't easily cheat and you can't easily cheat yourself, even when "yourself" refers to the vast institutional structures that employ you. In other words you need a system that stops you cheating even if that's what would please your boss, your vice chancellor and ultimately the President.

We only seem to have one such system and that's markets. If you cheat then the product or service you provide will be based on beliefs that are false and - eventually - customers will abandon you because the thing you're selling doesn't solve their problem. This is effectively a referendum of the customers. There is no such feedback loop in academia. There are attempts to approximate or emulate it with things like peer review, but they're all shadows of the real thing.

Meanwhile you can encourage cooperation even between competing entities in lots of ways, and it often emerges naturally even in the absence of any specific social policy. Open source collaboration is one obvious example in the tech sector, patents are a more formalized system.


No, we have better systems now


Yeah. Like “just do your own research” man.

Tell me of a better method to get to the truth. Go on.


I refuse!


In theory, but it is extremely easy to get into the mindset that your hypothesis is absolutely true, and as such your goal is to prove that hypothesis.

I’ve never fabricated numbers for anything I’ve done, but there certainly have been times where I thought about it, usually after the fourth or fifth broken multi-hour test, especially if the test breakage doesn’t directly contradict the hypothesis.


Maybe it's different in other fields, but from my background in physics it seems like if your hypothesis is wrong that is usually way more interesting than it being right. As long as it isn't just because of some contamination in the data.

Although, contrary to what I was taught in elementary school, most of the experiments in the physics department of my university didn't even really have a hypothesis. They were usually either of the form "we are going to do this thing, and see what happens", or "we're going to measure this thing more accurately than anyone before".


So I'm not really a "scientist"; I'm just a bit more of a theory-focused software engineer.

An example: there have been times where I really wanted to use a certain concurrency style, convinced that it should be faster than the way we were doing things before, so I wrote a few non-trivial tests to make sure that was right, and got inconclusive numbers.
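To make that concrete, here is a minimal sketch (purely illustrative, not my actual code; the sleep-based task and the thread-pool-vs-sequential comparison are just assumptions) of the kind of benchmark I mean, in Python. The point is the repeated runs: when the difference between the means is smaller than the run-to-run spread, the numbers are inconclusive.

    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    def unit_of_work(_):
        time.sleep(0.01)  # stand-in for one I/O-bound unit of work

    def run_sequential(n=200):
        for i in range(n):
            unit_of_work(i)

    def run_threaded(n=200, workers=8):
        # the hypothetical "new" concurrency style under test
        with ThreadPoolExecutor(max_workers=workers) as pool:
            list(pool.map(unit_of_work, range(n)))

    def benchmark(fn, repeats=5):
        # time several runs so the spread, not just the mean, is visible
        samples = []
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            samples.append(time.perf_counter() - start)
        return statistics.mean(samples), statistics.stdev(samples)

    if __name__ == "__main__":
        for name, fn in (("sequential", run_sequential), ("threaded", run_threaded)):
            mean, sd = benchmark(fn)
            print(f"{name:10s} mean={mean:.3f}s sd={sd:.3f}s")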


That's still an interesting result. If you thought it would have a significant impact, and it doesn't, that means something is wrong with your mental model that led you to think it would be faster. And if you can figure out why, that can help you find a solution that is faster.

Of course, that is still very frustrating if you are on a tight deadline and all you have is one thing you know doesn't work.


No doubt that it's interesting, but there are deadlines where I need to hit "throughput level X", and it's also infuriating when the theory isn't lining up with reality.


Not just the mindset. Our social setting can deem that some hypothesis must be true and that any disagreement is blasphemy of the highest order. The 'softer' a science, the more beliefs like this exist. Sometimes you can even see scientists deeply studying something adjacent to one of these beliefs start to question it, and watch how delicately they have to dance around the issue until enough other scientists also question it that they have the safety in numbers to begin questioning the belief directly. A recurring example of this is the research around the labeling of certain behaviors as abnormal psychology, which eventually led to an update in the DSM.


Thanks for stating your point so clearly. I'm a bystander to this discussion, but agree with you about the reality of this.


That's a valid way to look at it, but Fisher (who all but invented hypothesis testing) took a different perspective. To him, most things we know, we know from informal experience. Only when trying to find small effects, or when we have insufficient experience, do we conduct experiments, which are effectively experience meticulously planned in advance.

A significant result in an experiment, according to Fisher, is just an experience to add to the mental pros-and-cons list. It is not definitive proof of anything.


There are externally motivated scientists who are in it for the prestige or awards. Some fields are more like this than others, but they show up in all fields.

Plus these days there's a lot of pressure to run universities more like businesses. To eat, academics have to hit certain numbers, so you see behaviors common in business like faking the KPIs.


Setting up real experiments in a lab is super hard: is all equipment properly calibrated, is the way I am measuring actually right, are my reference measurements correct, are my samples "clean"? So many things can go wrong that it is sometimes challenging even to replicate experiments that are 100% known to work. So it takes some discipline not to cheat in the sense of, e.g., cleaning up the data a bit too much.


Because of that, backing up a claim with research adds weight to the claim.

If the claim is false, though, you can still sometimes get research to support it. If you or the researcher stands to profit from the false claim, then there is a conflict of interest.


I think that's what the parent is acknowledging at the end of the second paragraph.


Well, that depends. What are you paying the guy to do?


I've been in a meeting with government research officials where a director of the primary global institution in that field described how, when she does research and writes papers, she first draws the graph she needs to support her research or the point she is trying to make, and then goes looking for data to create that graph.

Maybe I'm missing something, but I do not believe that is the way it is supposed to go. Btw, she has a PhD and has failed upward to a global scale.

I’ve been meaning to find out if there are any open tools to evaluate someone’s dissertation.

It was equal parts stunning and, it seems, a bit traumatizing to me, considering I still remember it as if it had happened earlier today. I think what surprised me too was her open admission of it, even with external parties present.


So she establishes a hypothesis (draws a graph or picks a point to make) and then tests it through experimentation (looks for data to support the hypothesis)? Isn't that just the scientific method worded another way?


Wait until the GP learns about how scientists generate Monte-Carlo (MC) simulation data to see what a positive result looks like and then do meta-analysis on both the real data and the MC.
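For anyone unfamiliar with that practice, here is a rough sketch (my own illustration, not anything from the thread; the sample size and observed value are made up) of the idea: simulate many experiments under the null hypothesis to see what a "positive" result looks like purely by chance, then compare the statistic from the real data against that MC distribution.

    import random
    import statistics

    def simulate_null_experiment(n=30):
        # one simulated experiment in which the true effect is zero
        sample = [random.gauss(0.0, 1.0) for _ in range(n)]
        return statistics.mean(sample)

    def monte_carlo_null(num_sims=10000, n=30):
        # distribution of the test statistic when there is nothing to find
        return [simulate_null_experiment(n) for _ in range(num_sims)]

    if __name__ == "__main__":
        random.seed(0)
        null_means = monte_carlo_null()
        observed = 0.45  # made-up stand-in for the statistic from the real data
        p = sum(m >= observed for m in null_means) / len(null_means)
        print(f"fraction of null simulations at least as extreme: {p:.4f}")

Running the same comparison across many studies at once is the meta-analysis step mentioned above.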


Only a week? The stakes are higher, my friend. It's usually months at a minimum.


Heh, totally fair, I'm not a scientist, I'm just a more research-oriented software engineer, and generally I have to keep my tests smaller in scope.

No doubt that in the case of physics and chemistry and the like, testing can be a lot longer.


Rapid outcomes should not be a priority


You should also understand that there are external forces here, like state sponsorships that monetarily reward scientists simply for filing enough research findings.

The startling rise in the publication of sham science papers has its roots in China, where young doctors and scientists seeking promotion were required to have published scientific papers. Shadow organisations – known as “paper mills” – began to supply fabricated work for publication in journals there. https://www.theguardian.com/science/2024/feb/03/the-situatio...

The number of retractions issued for research articles in 2023 has passed 10,000 — smashing annual records — as publishers struggle to clean up a slew of sham papers and peer-review fraud. Among large research-producing nations, Saudi Arabia, Pakistan, Russia and China have the highest retraction rates over the past two decades, a Nature analysis has found. https://www.nature.com/articles/d41586-023-03974-8

That's why a recent article (https://news.ycombinator.com/item?id=41607430), in which the finding that China leads the world in 57 of 64 critical technologies was based on the number of journal citations, was laughable.


Talking with some Chinese colleagues in the past, I heard them describe having a 'base' salary which was not enough to have a family on. For every published paper they'd get a one-time payment. So you'd have to get a bunch of papers out every year just to survive; no wonder people start to invent papers.

Of course the same thing is happening in the 'Western' world too, with a publication ratchet going on. New hire has 50 papers out? OK! The next pool of potential hires has 50, 55, 52 papers out, so obviously you take the 55-paper person. You want outstanding people! Then the next hire needs 60 papers. And so on.


...an effect known as "wonkflation".


I think there are maybe two separate issues here.

Paper mills are bad but mostly from the perspective of academic institutions trying to verify people's credentials/resumes. Paper mills aren't really that much of a concern in the sense of published research results being false in the way the article is talking about because people aren't really reading the papers they publish. In that sense it doesn't really matter if there are places where non-scientists need to get one paper published to check some box to get a promotion, because nobody is really considering those papers part of established scientific knowledge.

On the other hand, scientists intentionally (by actually falsifying data) or unintentionally (as a result of statistical effects of what is researched and what is published) publishing bogus results in journals that are considered legitimate which aren't paper mills actually causes real harm as a result of people believing the bogus results, and unfortunately the pressures that cause that (publishing papers quickly, getting publishable results, etc.) exist everywhere, and definitely not just in China, nor did they originate in China.


I think you're making the wrong distinction here, it's not about whether the result came from a known or unknown paper mill in that country. It's about whether there is a culture of fraud and fakeness that permeates that country and that scientific community. And there is certainly a culture of fraud and fakeness in China, from tofu dreg buildings, to fake food and gutter oil, to drugged olympic athletes, to fudged economic numbers.

Let me give just one example of how deeply the culture of fakeness has pervaded China. Nowadays, because of the economic decline, people are eating out less, and restaurants are getting less and less traffic. Therefore, they needed to cut costs. So some restaurants started using pre-packaged food, just heating it up in the microwave and serving it as cooked dishes. Because other restaurants couldn't survive without the same cost-cutting behavior, they've all started doing the same thing. Thus, most restaurants in China are now serving pre-packaged food. And there's a backlash from consumers, so now even fewer people eat out. And then restaurants started using expired pre-packaged food. Oh, and because expired pre-packaged food has a tendency to cause diarrhea, some restaurants in China have started adding loperamide to the dishes to prevent diarrhea.

Fake it until you make it out of China mentality.


Since when is this kind of blatant racism acceptable on this site? “Gutter oil”? Wtf is wrong with you?


Still happening in China in 2024:

Foreigner caught a Chinese couple scooping up gutter oil https://www.reddit.com/r/interestingasfuck/comments/1eo2wmy/...


There are 1.4 billion people in China. You’re showing me a couple of people doing who knows what in a clip of unknowable provenance. This is not the hill to die on, my man


Obviously there are way more occurrences than this one video. Also, the lady in the video acted like nothing was wrong and showed no shame, which means there is a culture/common practice of using gutter oil.


Wikipedia says that today this carries the penalty of decades in prison and a suspended death sentence. I very much doubt it’s as prevalent a practice as you suggest. To suggest that this crime is a “normal part” of Chinese culture is simply wrong.


There was no penalty or death sentence in the recent public incident of the oil tanker truck that was found transporting both toxic industrial oil and cooking oil, without cleaning in between. Which apparently was a widespread practice, as confirmed by netizens. Instead the officials just hand-waved and said it's an isolated incident and they're looking into it. And no news of it since.


Restaurants in China are legally required to use oil traps like that, and the oil must be removed. It is usually reprocessed for industrial purposes. The fact that those people were possibly illegally collecting it to sell to a company that reprocesses it does not at all mean that it's going to be used as "gutter oil" in restaurants, any more than someone collecting empty cans from a trashcan means they're going to reuse those cans in a restaurant.

Gutter oil used to be a major issue in China but the Chinese government cracked down on it a lot a few years ago.

I recommend watching this video about it: https://www.youtube.com/watch?v=G43wJ7YyWzM


They said "cultural"; you decided to insert "race", presumably to stoke more outrage.


This is what happens when Silicon Valley execs, trying to make their employees more replaceable, call for more STEM education; suddenly, tons of funding and institutional resources go into STEM research with no real reason or motivation or material for this research. It's like a gerbil wheel: once you get on the ride, once you get tricked into becoming a "scientist" just because a few billionaires wanted slightly thicker margins, there's no getting off. Bullshit your way through undergraduate education, bullshit your way through a PhD; finally, if you're good enough at making up statistics, you get a job training a whole host of other bullshitters to ride the gravy train.


> tons of funding and institutional resources go into STEM research with no real reason or motivation or material for this research.

I do believe that there exist an insane number of (STEM) questions that there are very good reasons to do research on - much, much more than is currently done.

---

And by the way:

> This is what happens when Silicon Valley execs, trying to make their employees more replaceable, call for more STEM education

More STEM education does not make the employees more replaceable. The reason why the Silicon Valley execs call for more STEM education is rather that

- they want to save money training the employees,

- they want to save money doing research (let the taxpayer pay for the research instead).


repeating what user u/randomdata said already,

> - they want to save money training the employees,

> - they want to save money doing research (let the taxpayer pay for the research instead).

means they want to offload costs to the public in order to increase profits, which is what I said above.


Offloading costs is a different thing than making employees more replaceable.


Employees are more expensive because they are less replaceable. A company must invest a certain amount of money into labor to make a profit; however, if that company learns it can invest less money into endeavours to make the same profit, then it can decrease the amount invested into labor. The only way to do so is to create some sort of technology, or social relation, that makes the price of individual workers cheaper. Thus, any reduction of cost of labor that increases profit is something that makes employees more replaceable.


> - they want to save money training the employees,

So what you're saying is that they push for STEM education to make their employees more replaceable...?


> So what you're saying is that they push for STEM education to make their employees more replaceable...?

A general rule of thumb is rather that better education and/or specialized knowledge makes employees more productive, but also less replaceable.


>less replaceable

Only when they are the only ones that have that knowledge, not when teaching it becomes rote.


A decent rule if considered in a vacuum, but perhaps you missed some necessary context related to this particular discussion?

> - they want to save money training the employees,



