The fight against fake-paper factories that churn out sham science (nature.com)
244 points by pseudolus on March 23, 2021 | 142 comments



> Physicians in China are a particular target market because they typically need to publish research articles to gain promotions, but are so busy at hospitals that they might not have time to do the science, says Chen. Last August, the Beijing municipal health authority published a policy stipulating that an attending physician wanting to be promoted to deputy chief physician must have at least two first-author papers published in professional journals; three first-author papers are required to become a chief physician. These titles affect a physician’s salary and authority, as well as the surgeries they are allowed to perform, says Changqing Li, a former senior physician and gastroenterology researcher at a Chinese hospital who now lives in the United States.

"When a measure becomes a target, it ceases to be a good measure." - https://en.wikipedia.org/wiki/Goodhart%27s_law


It's not just China. Enormous effort is put into achieving high h-index scores everywhere.

I'd love to see a review of all researchers with high H-indexes. I bet you would see a disproportionately high incidence of self citing, citing rings, journal bias, outright corruption and much more.

But the bigger problem is that many of these people are highly intelligent and capable. When they are told that their careers depend on a gameable metric, they can figure out clever ways to game it.

Mountains of paper don't improve the human condition, we need better success metrics.


If you work on a bigger LHC experiment (ATLAS or CMS) you publish about 100 papers a year. I'm on the "author list" for many hundreds of papers that I contributed nothing to and never read. It's actually very difficult to get yourself taken off the author list.

I'm sure my H-index is great, but it's completely bogus. Some organizations have stopped counting papers with more than some number of authors (e.g. 1000), which is progress, but by that metric I'm the author of zero papers a year.


> I'm on the "author list" for many hundreds of papers that I contributed nothing to and never read.

Are you asked for permission to be added as an author?

What if the research was 'bogus', or at least parts of it were incompetently or poorly done, and the paper is discredited?


We earn the right to be on every paper after working on the experiment for around a year. After that it's automatic: there's theoretically a way to remove yourself from specific papers but no one ever does. It's a convoluted process that has to be approved by the highest ranking person in the collaboration.

The question about bogus science is an interesting one: in theory, by putting 3000 authors on every paper the collaboration is ensuring more scrutiny for every result. And indeed, our internal review is far more rigorous than the peer review that we get from the journal. As far as I know, no journal has ever rejected a paper from ATLAS or CMS, which is a pretty good track record for O(thousands) of papers.

There is a flip-side, of course: this system also hinders innovation. When 3000 people are "authors" on your result, any one of them can hold it back from publication. We tend to choose more conservative techniques in the interest of getting anything at all past internal review.

Personally, I don't think aiming for a 100% success rate in publication is a healthy way to do fundamental research. I'd rather see some slightly questionable papers submitted to journals now and then, since lowering the bar to get to that stage would mean making more interesting ideas public.


I understand that advances at the LHC rely on a huge number of people, and it would be cumbersome if everyone were fighting to get on papers rather than contributing technically. But once you get above a few hundred authors, outside the team that might understand and care what the paper is about, I'm not sure I'd value everyone's (or anyone's) contribution if I were hiring that person into a new research position.

Perhaps it's lucky I don't work in physics funding or recruitment.


We have an internal database that keeps track of who contributed where. So in practice when someone is making a hiring decision, they find someone who works on our experiment, and that person asks around or accesses the internal database to see if the candidate really did everything they claimed.


> Some organizations have stopped counting papers with over some number of authors (i.e. 1000) which is progress

You reminded me of this classic paper: https://improbable.com/airchives/classical/articles/peanut_b...


Gordon? You're late.


Been playing Black Mesa recently, what an awesome remake!


Yeah it is stunning, isn't it.


Wait, so all the papers you were part of had more than 1000 authors?


Almost every paper I've been a part of. Some of us will do a few independently, but if we want to use LHC data it has to include everyone on the collaboration on the author list.


Did a quick search and found the LHC papers, all the ones I looked at have around 1K authors:

https://lpcc.web.cern.ch/lhc-data-publications


ALICE and LHCb are around 1k. ATLAS and CMS are closer to 3k. There are some very cool experiments at CERN that have fewer than 100 members, which typically take advantage of either existing LHC interaction points[1] or the accelerator chain that feeds the LHC[2].

[1]: https://faser.web.cern.ch/the-collaboration

[2]: https://base.web.cern.ch/content/people


There are professors at top institutions who publish more than a paper per week. Obviously they are not contributing much more than stamping their name on as last author. Needless to say, this is completely ridiculous.


Having a citation's impact on one's h-index decay with author position would cause an interesting stir in the "academic game". I wonder if we would start seeing supervisors bully for first authorship.


As the authors are typically listed alphabetically, that would unwittingly have an inhumane result.

(My name moved from the end of the alphabet to the middle when I married. I was amused that it actually makes a difference).


Authors aren’t usually listed alphabetically in academic papers though, typically it is relative contribution (not saying alphabetical never happens though).


It differs by research area. For example, the mathematics authorship convention is alphabetical. Computer science is by contribution.


Computer science theory papers are alphabetical as well.


In large collaborations, this is common. See e.g. https://inspirehep.net/authors/1222902?ui-citation-summary=t...


Also depends on the journal; some have the most impactful author stated last.


H-index should go away, it cannot be fixed. Traditionally, academics have been encouraged to publish in prestigious venues. Metrics like H-index do not take this into account.


Coming full circle, much? The h-index was touted as a better metric than judging someone's paper by the merit of the journal, since high-profile researchers would get their papers into NSC (Nature, Science, Cell) easily even if they never served a purpose. The h-index is definitely less gameable than you think: the only effective way to game the h-index is to become completely illegitimate in your publishing, using farms like the above. That would be easily identified if anyone even vaguely related to the field tried to read the titles and journal names.


The evaluation of scientists and academic staff needs to be done by independent evaluation panels based on a (subjective) assessment of the scientific merits of their publications. In every reasonable place it's also done that way. In addition to this, funding for time-limited contracts has to be treated similar to investment funding, i.e., evaluate very carefully in the beginning and do an internal evaluation of the performance afterwards (mostly to evaluate the initial evaluation) but avoid continuous evaluation and mostly only advise/help during the contract period.

The worst thing to have is indicator counting of any kind. The system will be swamped with mediocre, sometimes almost fraudulent scientists who game it. (It's just too easy to game the system: Just find a bunch of friends who put their names on your papers, and you do the same with their papers, and you've multiplied your "results".)

H-Index is also flawed. In my area in the humanities papers and books are often quoted everywhere because they are so bad. I know scholars who have made a career by publishing outrageous and needlessly polemic books and articles. Everybody will jump on the low-hanging fruit, rightly criticize this work, the original authors get plenty of opportunities to publish defences, and then they get their tenure. Publishers like Oxford UP know what sells and are actively looking for crap like that.


There are moderate tools like multi-round double blind reviews to resolve such issues.

A tool which was used to resolve issues among chemists and biologists is now running rampant over fields which do not have a high volume of citations. Mathematics is suffering, for example.

An Annals of Math paper might have fewer citations than a paper in Journal of Applied Statistics. But the prestige is incomparable.


So what's the problem there? The person going for a job in a maths department with an Annals of Maths paper isn't going to be in competition with someone with a big h-index from applied stats papers; the committee won't look twice at the statistician! On the other hand, if the Annals of Maths person wants a job in stats, then presumably they will also have stats papers (and the stats people will be keen to know: "what about your amazing career in pure math?!").


This would be the inevitable outcome. Currently the first author did the work, and the last author supported in some way (such as by supervising).

This is purely by convention.


Different fields have different conventions regarding what a given authorship position implies in terms of work contributed to the paper. Some place very high weight on the last named author, others on the first named, among many other permutations and subtleties. There's no single rubric for deriving relative effort from author name position.

Part of this is contingent upon the citation formats used in different kinds of publications (and thus different fields), where long lists of authors are condensed to one, two, or three names at most.

This is not even getting into more locally scoped, second-order inputs, such as any given department's traditional handling of advisor vs. grad student power dynamics.


Look at Didier Raoult in France. He is a perfect example.



Those remind me of the entrepreneurs who founded "dozens of companies."

Yeah... maybe if you were on a meth IV drip and lived for 200 years...


Coming up with a good problem to work on is the most difficult part in science.


Quite the contrary, at least in computer science. There are tons of good problems; actually developing a solution (and the theory and experiments to back it) is usually orders of magnitude more difficult.

Another point: to get to this volume, the "researcher" probably has many papers that he or she had no part in, not even formulating the problem. In the case of physicians, I have encountered senior doctors who simply conditioned the use of the department's medical statistics on being added as an author. That is, every paper that uses this specific medical data adds this author regardless of their contribution. (To be clear, they don't necessarily have anything to do with the data collection beyond what they are already obligated to do at the hospital; they just happen to be responsible for managing access to it.)


A good problem also means one that fits the reachable skill level of the person doing the work. Do we agree that something like P=NP, for example, is not a good problem?


This varies a lot according to the area.

Quite common in Biology and Chemistry.

Less common in theoretical physics.


Also luck (says someone with a pathetic H index!) But - I don't care about it, so I can make the observation that of my peers in my 20s the ones that went on to get a massive H-index lucked out with an early paper that became widely cited for a random reason. They introduced a definition, repurposed something that they had done into a fashionable domain, or "got adopted" into a paper by a famous professor and got lots of profile for it. Once they got profile subsequent papers attracted many citations and a virtuous cycle kicked in.

Of course all these people are mega smart and do fab research - but they got lucky, and there were lots of others who were as smart and got pushed out.


> Mountains of paper don't improve the human condition, we need better success metrics.

I'd argue the problem is the concept of "metric" in itself. You can't measure complex human endeavors with a simple objective number. Whenever the concept pops up (lines of code, anyone?), it's always a disaster.


One of these guys is (well, probably) Didier Raoult, the person who started the chloroquine craze for the treatment of COVID. He has an h-index of 148 and over 2,300 papers.


What does 148 over 2,300 papers mean exactly?


148 is the h-index, which is defined as the maximum value of h such that the author has published h papers that have each been cited at least h times. (from Wikipedia).

My h-index is 13.


Is 148 high? Low? What does it intrinsically measure (besides being a competition of who can piss further)?


Yes, it is high. It means you published at least 148 papers, each of them cited at least 148 times. It is supposed to measure how much interesting/useful work you do.


So if you publish a single paper that only gets cited a few times, it'd completely drag you down. That doesn't seem so great honestly.


No, that wouldn't affect your h-index at all.

If you had sixty papers that have each been cited more than sixty times, you'll still have those sixty papers even if you publish a new paper that gets cited only once.
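The mechanics are easy to check in code. A minimal sketch of the h-index computation (assuming the citation counts are already in hand) confirms that a weakly cited new paper can never lower the index:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

sixty_papers = [60] * 60            # sixty papers, each cited sixty times
print(h_index(sixty_papers))        # 60
print(h_index(sixty_papers + [1]))  # still 60: the new paper sorts to the
                                    # bottom and never enters the count
```

In other words, the index is monotone: new papers and new citations can only raise it or leave it unchanged.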


It's very high compared to computer science, where 50+ is amazing.


It's quite high. It's hard to compare different fields, since in a more active field papers will get cited more often. George Whitesides, one of the most cited chemists, is at almost 270.


Why do we need metrics at all?

If I’m sat in Einstein’s office doing a performance review with him, I’d probably say “Yes, Alf, you’ve done pretty well these last few years. Keep it up.” Don’t think I’d need a metric. And if he came to me for funding for a postdoc I’d say - yes, sure.

I don’t know when we decided that _management_ had to be replaced with _metrics_ but it doesn’t seem like a good idea.

Management involves doing things that don’t scale, that’s why we have lots of managers.


This leads to another problem that we (the science community) are already facing: how should we allocate funds between older and more established researchers vs new researchers? Or, who will be the next Einsteins?

If you only fund the more established researchers, then new researchers are starved out and more likely to leave the field. However, my belief is that new researchers are more likely to develop big new innovations than (say) the really old professors that are well past their prime.

You also have another related problem: given a large pool of new researchers, who are the ones that will be really good? Plus, there are other possible goals, like spreading the money around to broaden the base of researchers.


> how should we allocate funds between older and more established researchers vs new researchers?

Reasonable question with no precise answer, but I imagine a manager would seek a balance between the two as with any company or team. Some big hitters but you've got to see where your next Einsteins are coming from.

There's nothing about this that is solved by metrics. Metrics just help you make shallow decisions quickly, and provide ways for academics to game the system by manipulating those metrics.


Einstein solved this problem by doing his best work before joining academia, and winning lifetime funding as a reward afterward.


Einstein was not peer reviewed either. Peer review became standard only later.

That is to say, whatever worked for him or for that era won't work for a contemporary person today.


Probably because it's much easier judging whether to fund Einstein than whether to fund someone much further down the chain.


I don't see how that is relevant, but let's consider someone further down the chain.

"Ah yes, Jimmy Postdoc, I see you have published 3 papers with an average impact factor of 3.1. Have a promotion."

vs

"Ah yes, Jimmy Postdoc, I see that you're making progress in improving quantum error correction, as evidenced by the fact that we can now use 80% of the previously required qubits to complete Shor's factoring assuming a surface code - great work, you should get that paper out at some point but keep focused on the work for now.

"I'm also really pleased with your contribution to the academic community in the dept, particularly helping out Polly PhD student with her SAT formulation of decoding. The constructive questions in her talk really opened up a new line of enquiry. Great job.

"Given all the above, have a promotion."

Metrics are stupid, people are smart, stop using stupid.


Subjective valuation leads to quite a lot of nepotism and unfairness. There is techno-babble you can use to pick pretty much anyone if you are good enough with words.


I agree that's a problem of traditional management. The way to counter it is good management methods, in particular ensuring that reviews and promotions are cross-validated using other personnel, both horizontally and vertically. I'm sure you're right though, some bias will slip through.

The question you have to ask yourself is whether you are prepared to tolerate occasional suboptimal decisions for a metrics based evaluation that _corrupts the entire system_.


> I’d probably say “Yes, Alf, you’ve done pretty well these last few years. Keep it up.”

That might work in a company but not in the academic world. Many countries limit the amount of time a PhD student or Postdoc researcher can stay at a university. After that time the person has to find a permanent contract (professor) if they want to stay. Because there are many many more candidates than available positions, the hiring committees try to justify their decisions by objective (haha) criteria.


I am working on a website that will do precisely this! I want to create factors that will show citation rings and self-citation ratios! Trying to figure out what to call the metric; my current favourite is "CJ factor"!
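As an illustration of the kind of factor such a site might compute (the function and data below are hypothetical; a real implementation would need field normalization and much more), a naive self-citation ratio over a small citation graph could be sketched as:

```python
# Naive self-citation ratio: the fraction of citations to an author's papers
# that come from papers the same author co-wrote. Hypothetical sketch; it
# ignores field norms, co-author rings, and venue effects.

def self_citation_ratio(author, papers):
    """papers: dict of paper_id -> {'authors': set, 'cites': set of paper_ids}."""
    own = {pid for pid, p in papers.items() if author in p['authors']}
    incoming = self_cites = 0
    for p in papers.values():
        hits = len(p['cites'] & own)  # citations this paper makes to the author
        incoming += hits
        if author in p['authors']:
            self_cites += hits
    return self_cites / incoming if incoming else 0.0

papers = {
    'p1': {'authors': {'alice'}, 'cites': set()},
    'p2': {'authors': {'alice'}, 'cites': {'p1'}},        # alice citing herself
    'p3': {'authors': {'bob'},   'cites': {'p1', 'p2'}},  # independent citations
}
print(self_citation_ratio('alice', papers))  # 1 of 3 incoming citations is a self-cite
```

Detecting citation rings would be a step beyond this: rather than a single author's ratio, you would look for small groups of authors whose citations flow disproportionately among themselves.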


Cool of you to do this, though I wonder if this will be more of a measure of community size. Smaller niche topic communities probably behave more like citation rings (you'd cite everyone else in the community frequently). It also seems like it'd penalize people working in a new area (like if you're the first person to use a method, you'd probably cite yourself a bunch).


I will try my best to control for field-specific practices, for sure; the RCR metric published by the NIH is already a good starting point in this regard. Valid point about not penalising people in new fields, will keep that in mind!


To some extent this already exists

http://www.vanityindex.com/


While I agree that we need better metrics (and, more generally, a better academic system), I think that publishing fake research (committing fraud) is still far more severe than h-index hacking. The latter mostly translates into publishing lots of papers that present small advancements, and it encourages scientists to work on topics where there is very little risk of failing (rather than chase challenging topics). These have long-term negative implications for the system and for our society as a whole, but, at the end of the day, it's still real research.


Actually, the problem is that funding agencies and universities started to implement success metrics. You are trying to measure something that is inherently difficult, if not impossible, to measure (what is good science?).

Moreover, you essentially make the careers of some very intelligent people depend on this arbitrary metric you have created. What do you expect to happen? Obviously they will work to that metric, and then people complain that they are gaming the system. No, they are simply working toward the metric that you (not you personally, obviously) have created.


Perhaps we should make their actual science the metric...


Requiring first-author papers for physicians is not unusual. In the UK, surgeons are required to have published several as a requirement to finish training [0]. Often in the UK this makes a PhD or other higher degree necessary to complete training.

[0]https://publishing.rcseng.ac.uk/doi/pdfplus/10.1308/rcsbull....


Incentives are broken just about anywhere in the world that you look. You’d think that of all fields, the scientific ones would understand the power of incentives and solve for that. But... I guess the incentives aren’t set up for it!


Yes, and we can generalize this statement to physicians worldwide. This is absolutely not a China-specific problem.


Is there a good analysis somewhere of pragmatism in Chinese culture?

It seems that there is a particular brand of ultra-pragmatism and rule-gaming that is overrepresented in China.

Researchers are judged on published papers? Publish papers that appear just good enough to be published, ignore everything else. Products are bought based on looks and price? Make a cheap product that appears just good enough to be sold, ignore everything else.

I don't mean to stereotype all of China negatively. I've worked with enough amazing Chinese colleagues to understand this is not all of China. I'm just curious if this pattern has some truth to it or if it's just a random racist stereotype, and how it can be explained.


There are a lot of people in China and everyone's place in society was recently (relatively) reset. Rapidly developing and everyone scrambling to get on top of one another. There's bound to be rough edges in the developing of a society basically from scratch. While China is an ancient country, the rules, regulations, institutions, people in power, basically all reset last century.

Also what you described are characteristics of any poorer country. They'll try to get some money from wealthier places in the easiest way possible (Indian IT scams, Nigerian prince scams, etc). For most of the last century and even until today China is still a lot poorer than the US. It's also a massive country. It's easy to discount the 1.4 billion Chinese population because a lot of them are dirt poor still.


I don't think this is something that's over-represented in China at all.

For a start, many of the Chinese factories making low-quality, cheap goods are instructed by Western IP holders to do so. You see the same thing in many industries no matter where they're manufactured - textiles is the biggest example that springs to mind, but fast food would be another. Hell, I reduce quality to minimise costs in my business (second-hand games; we'll bundle third-party controllers with the console because the consumer doesn't care, and sell the genuine item separately at a premium).

At the status-grabbing level, look at the number of Indian students graduating with CompSci/IT degrees that can't even write FizzBuzz. Look at all of the upper-middle-class white people in the West who get their below-average intelligence children into bullshit middle management positions just so they can boast about their child being high-status at dinner parties. Look at all of the "entrepreneurs" out there who take outrageous risks with their parents' money and are actively encouraged to do so "because at least they're doing something with their life". Look at your local members of parliament. Look at Facebook and Google. Look at your friendly neighbourhood OnlyFans thot.

Everyone is gaming the system, not just the Chinese.


I've experienced the opposite. All the Western brands made in China are reasonably good and sometimes excellent. Meanwhile, any Chinese brand is awful.


Indian education is based on rote memorization. That is why they think they can memorize their way to a degree in CS, and end up graduating while still not being able to program. So yes that one is another systemic problem.

I live in the West and not sure I've ever even been to a bona fide dinner party. How does one get a grown child into a middle management position? You mean nepotism?

Over-represented means more than elsewhere. Not something you can disprove with some examples.

I have also worked with the Chinese, and yes in my experience they are much more likely to try and be "clever" by putting something over on you than others. Japanese are very different people to deal with. Part of it surely is that China is developing and the lines are still blurry in acceptable business behavior. But I think their history (of eliminating capitalism then having to rebuild a culture of it from scratch) plays a role too.


What you're seeing is that entrepreneurs are scrappy and sketchy, and the US has many fewer small entrepreneurs than in the past. We do our rip-off scams at scale in big corporations, with lots of "innocent" employees getting paid.


> For a start, many of the Chinese factories making low-quality, cheap goods are instructed by Western IP holders to do so. You see the same thing in many industries no matter where they're manufactured - textiles is the biggest example that springs to mind, but fast food would be another.

Exactly this. It's not the Chinese factories that choose the quality level of the product - it's the companies that design and order them. Yes, the factories can and will cheat if they think they can get away with it, but roughly - they'll produce things at the quality level they're ordered to and paid to produce. The deluge of crap on the West is and always has been the fault of the Western companies that commission their production.

(And as a somewhat obvious proof of that point: all the high-quality stuff we buy and use? Almost all of it is made in China too.)


"risks with their parents' money and are actively encouraged to do so "because at least they're doing something with their life"

Well it does beat spending that money on drugs and hookers


They aren't mutually exclusive


Wait, how does streaming video content to consenting, paying customers bear any resemblance to the cheating and scamming you listed?


> Researchers are judged on published papers? Publish papers that appear just good enough to be published, ignore everything else. Products are bought based on looks on price? Make a cheap product that appears just good enough to be sold, ignore everything else.

This is just the result of very strong competition.

The only reason things aren't crap in other places is because things are less competitive (due to lower stakes) and people have a desire to make good things (particularly if they have strong cultural values relating to this) and do good work which can rise to the top of the priority list if competition isn't too fierce.

The stakes in China are high and were even higher in the past. Values of quality were sacrificed because of how fierce competition was back then, and those who sacrificed quality are now successful, leading others to imitate them. Quality is likely to improve in the future once people become more secure and the people who were most ruthless in the most competitive times start dying off.


I lived in China for 6 months. This fits my experience, and is remarkably similar to a story I was told by a native Chinese:

"We were taught in school about Mao, who rose to power on idealism. At first he showed us many great things that we could do, but eventually idealism led us to the great famine, and many people starved. Since then, Mao lost some power, and now in school they teach us that Mao was 2/3 right, but 1/3 wrong. And that 1/3 that was wrong was idealism. So now we practice pragmatism."


People have to fight for survival. University administrators who have not done any serious research in their lives impose unrealistic goals on academics, without realizing that academics is also a game of influence and networking. Sometimes rewards come slowly, over decades. It is no wonder that people will jostle for power and influence if the administrators insist that their very salary depends on it. (I'm not Chinese, this is a global phenomenon and all academics are subjected to this nonsense.)

If anything, traditional Chinese culture has elements which are heavily anti-pragmatic. Tao-te-ching Verse 3 begins with not praising meritorious people, to prevent envy/fraud (depending on the translation).

[3] https://www.taoistic.com/taoteching-laotzu/taoteching-03.htm


University administrators who make the rules are almost entirely published academics.


Quantity has a quality all its own. IMO China can spam it until she makes it.

The central government wants to incentivize innovation and research, so the huge Chinese (and homogenized) population starts spamming patents and research papers. The system generates expertise for 1 good patent or paper out of 10. But volume is so high that 1/10 is enough to be globally competitive. Refine quality to 2/10 to reach parity. Refine to 3/10 and China becomes the industrial leader. It's not hard to go from 1/10 to 3/10 in a short span of time when you have a massive labour force spamming its way into maturity. This is how China went from having poor academic research capacity to leading research in many fields relatively fast, if one looks past the "culture" / incidental inefficiencies and evaluates based on metrics controlled for quality. Supercharge this with industrial policy and state coordination to target strategic sectors.

It's the optimal development strategy that plays to PRC strengths - lots of people, somewhat competent ability to coordinate. In terms of fast, cheap, good, developing countries don't get to be good anyway; frequently poor governance = can't even focus on fast or cheap, let alone both. Coordinating both = Deng model: poor China develops cheap and fast, aka chabuduo. So Chinese population + spamming chabuduo + coordination = enough occasional, actually good outcomes to reach global competitiveness in ways that matter, fast. Though eventually they need to rein in cheating and corruption via parallel improvements in coordination and culture to move onto the next tier. Whatever you think of Xi, this is what he's been focusing on. Building premier institutions is also just hard and takes time to cultivate, especially if intellectuals were previously purged, so you need to spam out all the necessary components first anyway. Can't exactly have leading institutions without a pool of experts first. The only place to cut corners when building institutions is to generate excess talent/components and assemble accordingly. Excess talent is also necessary to mitigate brain drain (your amazing colleagues).

Elsewhere, chabuduo manufacturing with China's population means plastic hangers to iPhones in 10 years. Chabuduo construction means enviable transportation systems, tier-1 cities from fishing villages, and ghost cities that eventually get filled by the hundreds of millions still waiting to be urbanized. Perfect/good is the enemy of fast development.


Sounds very much like a certain anecdote about a pottery class where one half was graded on a single final project and the other half was graded on the quantity of pots they made. At the end of the class, the half tasked with making lots of pots also ended up making better ones because they were constantly making and practicing.


[flagged]


You've broken the site guidelines egregiously with this comment. We ban accounts that do that. Please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here from now on.

Also, while I have you: when accounts use HN primarily for political, ideological, or nationalistic battle, we ban them. That's a standard moderation test on HN, because such accounts are clearly no longer in using HN as intended, i.e. for curious conversation on intellectually interesting topics. It looks like your account is hovering on the edge of that. I don't want to ban you, so please go the other way.


OK you're right, I get it. I will amend my ways.


Appreciated!


> talking points spammed

My talking points are mostly from subject matter experts with good track record. Yours is generic youtube wisdom from ADVChina and laoway86.

> makes no sense.

Tell that to the various international global rankings in innovation / research indexes that control for quality and impact factors. China is starting to lead in many strategic high-tech fields. High-impact research doesn't seem to have a problem citing Chinese papers. Espionage built the base and fills in gaps. Combine with domestic research = rapidly propelling Chinese R&D competitiveness. Whine all you like, the strategy is working and reflected in reality. Not to mention industrial policy is modelled after the west; China likes to copy what works, after all, it just learned to play the same game well.

> China's growth was helped mainly

Yet it grew to the 2nd largest economy, 1st by PPP measures. The model worked in China but not in India, which had better starting conditions. China had exceptionally onerous WTO accession protocols, sanctions, and other geopolitical hindrances. Ultimately it's about playing the cards that are dealt, which were much less charitable than your fantasies. The CCP just managed those cards competently and continues to, seeing how FDI is up, a minimal number of foreign companies are moving out despite the cold war, Chinese companies are expanding abroad, trade deals are being signed, further entangling more economies in Chinese supply chains. The demographic crisis is overblown; China's a medium-income country with plenty of room to grow, enough cheap surplus labour to maintain infrastructure and negotiate the demographic transition. Meanwhile western infrastructure is actually rotting, living standards are declining. See which countries are actually losing shit over economic anxiety, fracturing over immigration to maintain unsustainable safety nets, and coping with rants like yours. China isn't a developed country; it's not going to deal with developing-country problems based on poor projection. Poor Chinese are going to die poor, but still richer than they were. That's arguably an easier crisis to deal with than a pervasive decline in QOL in the west.

> outsourced that with industrial espionage

Again refer to steady upward trend of Chinese academic institutions. China only middle-income country to break top rank. Mid-ranking universities have converged and started to outperform mid-ranking western institutions. It's just getting started.

> what's the fucking benefit

Just a 40x increase in the economy since the 90s and massive aggregate QoL improvements, I guess. Maybe also triggering folks like you. Hundreds of millions = urbanization rate from 60% -> 70% by 2025... then onto 80%. That's 300M people... Remind me how many surplus units there are? Obviously some waste, but broadly the oversupply was planned as part of a general urbanization strategy trying to get rid of subsistence farmers kept around for stability maintenance.

> always end up costing more

Said every China collapse proponent every year for the past 30 years, yet somehow China is now a viable strategic competitor to the US.

> Chabuduo constructions are crumbling

Actually worked in the construction field in China and the West. Residential developments in North America are failing as much as the ones I've worked on in China. Except China builds them cheaper and faster. BTW most of South Korea has shitty construction quality due to rapid development. It's been decades; unsurprisingly, it's not actually an existential problem. Not to mention the number of seemingly unworkable starchitect designs in tier-1/2 cities that are doing fine. Hint: construction isn't rocket science. Some accidents, as a result of building more than the rest of the world combined (or the US in the last 100 years), are negligible. I'll take that over NIMBYism and stagnant infrastructure/development any day.


I've had to warn you before about using HN primarily for nationalistic battle and flamewar. I understand that it's a different situation when you're representing a minority viewpoint in defense (I'm imagining myself into your position here) of a disparaged group that you either belong to or know a lot about. If that's the case, fair enough—but you still have to follow the rules, and by focusing overwhelmingly on one flamewar topic, you're not doing that.

Moreover, you're not just defending, you're playing the same flamewar game that the others are, with tedious "no you" disparagement and name-calling about the West which mirror the things you're objecting to. That's not cool.

Worse yet, you're crossing into personal attack. I know how difficult it is to resist these temptations, but everyone who posts here has to follow the rules. You're breaking them, and we need you to fix that. Whether you're breaking them as badly as the next person is irrelevant.

I've put a ton of time and energy into the difficult task of convincing HN commenters to be level-headed and decent when addressing China/West geopolitics during this time of heightened propaganda and nascent enmity: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que.... I've stood up repeatedly for HN's Chinese users, and/or users of Chinese descent, and/or users who have a family or business background in China, for their right to share what they know on the site and not be attacked or hounded off. What you're doing here, unfortunately, undermines that.

If you want to post in the spirit of sharing knowledge and helping to overcome ignorance and prejudice, you're welcome to do that. If you're just fighting a nationalistic battle from the minority side, that's as bad as the comments you're objecting to, and you're damaging this site. That's not ok.

I realize this asks more of you in terms of dignity and self-control than it asks of the users who are in the majority position and therefore under less pressure. That's not fair, but it's how group dynamics work, it's the same all over the world, and it isn't going to change—we have to play the cards we're dealt. If you can find it in yourself to post in a spirit of respect and patience, no matter how ignorant others are or you feel they are, that would be lovely. But if all you're after is "triggering folks", we're going to end up banning you. I don't want to ban you, so would be grateful if you'd please fix this. Currently, you're giving ammunition to the lame and foolish accusations that we're trying hard to combat here.

If you do want to help overcome ignorance and increase understanding, here are two things you can specifically do that would help a lot: (1) find things to agree with in the comment you're replying to—this is nearly always possible (if not, you probably shouldn't be replying), and establishes respect and good faith. (2) Scrupulously eliminate every trace of pejorative language from your own posts, regardless of how wrong the other comment is or you feel it is.


In China, the concept of shame [in contrast to guilt] is widely accepted due to Confucian teachings

https://en.wikipedia.org/wiki/Guilt%E2%80%93shame%E2%80%93fe...

Translation: It only really matters if you're caught doing something wrong by your peers or society at large. There are recent, but also ancient, origins which define this culture. The west (western and especially northern Europe, etc.) is broadly a cultural outlier anthropologically; these are guilt societies.


That sounds rather Orientalist, and at least partially based on a mistranslation of what "shame" means in Chinese. Mencius said about what one should desire:

> 仰不愧於天, 俯不怍於人 - Looking up, no shame toward the heaven; looking down, no shame facing others.

That is, Confucian "shame" is not something you feel when you are found out. It's what you feel when you look inward at yourself, in front of the heaven. It's closer to what Europeans would call "sin".


This is a guess but I am not sure if "Confucian" teachings is the right way to figure out modern Chinese society. Plenty of Western writers point to it to understand China but I can't help but feel like that's like pointing to George Washington to understand modern American society. Yes, history plays a role in modern society but it's not the entire picture.


It’s more like reading about the Protestant reformation to understand the US - not directly relevant but very visible in the culture everywhere you look.


The more time I live outside my homeland, the more I agree with the idea of pointing to George Washington to understand modern American society. I don't mean America in particular, but if you want to know why different cultures think differently, the only thing you have is their intellectual history, which changes (I think) surprisingly slowly.


> Translation: It only really matters if you're caught doing something wrong by your peers or society at large.

Let's not single out China though. I've read pretty much the same happens in Japan. It might be that the source of this ends up being Confucianism and historical influences of China, but it extends to other countries outside it. The whole "saving face" and outward appearances, etc.


Where in the world doesn't this happen?


I can't agree with this.

Everyone I've ever seen doing something wrong in the west only showed "guilt" once caught.


How would you know?

Do you not feel guilt yourself?


Actually my experience is that while those are many, they are usually considered sociopaths or narcissists.


Is sociopathy and narcissism the norm in shame societies but not in guilt societies?


> Researchers are judged on published papers? Publish papers that appear just good enough to be published, ignore everything else.

If there’s a cultural issue here then it’s a management/leadership one.

I don’t think the average westerner would be any different if their future employment required that they have something published.

And, I know it would never happen, but if FAANG companies were to all decide that the number of published papers would be key in hiring decisions then this place would be full of people looking for the quickest/easiest way to get the number up.


ADV China talk about this a lot. China is basically a low-trust society where amoral opportunism is not only tolerated but also expected, as long as it's 'chabuduo' (just good enough).

Putting an entire country through famines and encouraging paranoid political denunciations will squeeze out any concern to ethics and altruism out of any society.

In their eyes, when someone gets scammed, the scammer is smart and the victim dumb and deserves what happened to him. I also wouldn't be surprised if it was facilitated by the PLA as a form of Unrestricted Warfare.


It’s possible that such a phenomenon (the existence of paper mills) could be caused by systemic flaws in policy making (in the health care system, in this case). As mentioned in the article, such strategies are observed to be adopted by some in Iran and Russia. IMO attributing it to the “culture” of the people in those countries based on the facts presented in the article is far-fetched.


> Researchers are judged on published papers? Publish papers that appear just good enough to be published, ignore everything else.

This isn't particularly Chinese, though. Let me be clear: by far, most researchers who "make it" in academia are very capable and very highly deserving of their careers. However, it is quite obvious that you can also advance your career by cynically focusing your research in exactly the way you say.


>Researchers are judged on published papers? Publish papers that appear just good enough to be published, ignore everything else. Products are bought based on looks on price? Make a cheap product that appears just good enough to be sold, ignore everything else.

This is not often blamed on Leninism, which to me is weird, because of the frequency with which Leninist polities show cults of personality, multi-decade tenures of heads of state, micromanagement of media and public communications, and use of public shaming as a tool to control mid-ranking mopes. And of course, I don't need to explain what a struggle session is.

If the governance and judicial practice consistently obeys the maxim that popularity = good and shame = bad, it doesn't seem surprising that that value system would reappear in the proletariat. Proponents of "radically democratic" systems would be wise to consider ways to mitigate this phenomenon.


I've seen some good summaries in the past, but unfortunately can't find them at the moment. Some things that play into it are the damage from the Cultural Revolution/Great Leap(s) Forward, government corruption with many rules designed to move money up to party leaders, extreme hardship and poverty, as well as recent affluence.

As with anything, it's complicated, and I don't understand things well enough to really synthesize it any better.


I think the issue is less ultra-pragmatism and more a desire to (quantitively) measure everything to the nth degree. It starts with five year plans at the top and filters down to almost every level of society. Once everything is understood as a list of items with numbers attached, certain behaviours become incentivised. Society optimises for the map not the territory.


"Poorly made in China" is a really interesting book.


Love how Dr. Bik exposed @IHU_Marseille and Dr. Raoult (aka Dr. Photoshop) especially. Those doctored images in the Dr. Raoult papers are comically bad.

https://twitter.com/MicrobiomDigest/status/13740469245510328...

We hear about some Chinese examples, but this happens also in Europe.


Dr. Bik is tireless, and her knack for spotting duplicated/reused images and image segments is incredible -- I often have to stare even at her marked/highlighted images to see it, but it's indeed there.


> We hear about some Chinese examples, but this happens also in Europe.

Indeed, fake paper factories are a thing in Western democracies as well. In Latin America, for example.


The current academic publishing system is so broken that I can’t see it surviving for much longer. In addition to outright fraud, journals are chock full of low quality papers and papers that cannot be reproduced. Citations are a useless metric because of how circular the citation networks are.

The best way forward is probably some metric of reproducibility. Can your paper/experiment be reproduced? Has anyone done so? Did they succeed or fail? Did they publish their results? That would quickly separate the wheat from the chaff.
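As a toy sketch of what such a metric might look like (the `Paper` and `ReplicationAttempt` names and the scoring rule here are invented purely for illustration, not taken from any real registry):

```python
from dataclasses import dataclass, field


@dataclass
class ReplicationAttempt:
    """One independent attempt to reproduce a paper's results."""
    team: str
    succeeded: bool
    published: bool  # only published attempts count toward the score


@dataclass
class Paper:
    title: str
    attempts: list = field(default_factory=list)

    def reproducibility_score(self) -> float:
        """Fraction of published replication attempts that succeeded.
        Returns 0.0 when no published attempt exists yet, so an
        unreplicated paper carries no weight."""
        published = [a for a in self.attempts if a.published]
        if not published:
            return 0.0
        return sum(a.succeeded for a in published) / len(published)


paper = Paper("Herbal extract X inhibits pathway Y")
paper.attempts.append(ReplicationAttempt("Lab A", succeeded=True, published=True))
paper.attempts.append(ReplicationAttempt("Lab B", succeeded=False, published=True))
print(paper.reproducibility_score())  # 0.5
```

Any real version of this would also have to weight who performed the replication and how independently, or the score gets gamed just like citation counts.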


This would indeed be great, but the impact on one's career of reproducing another's work would have to be equivalent to that of the original publication on the original author's career. Otherwise, nobody will want to spend the time certifying another's work.

This could be a multi-stage process where all involved parties become authors on the paper, but it would need a substantial shift in what academia is today.


That’s exactly what it should be. After author there should be multiple lines for individuals/groups that reproduced that research. Prizes should be divvied amongst original researchers and the first N groups to reproduce it.

Also, I’d love to see the Masters degree become focused on reproducing others work for two years. You do original research in a PhD only.


Or...a scientist's reputation can also be bolstered by how many papers they are able to reproduce, or debunk.

Debunking science also brings value to science. The incentives of the system need to change.

As an analogy, performing code review brings value to software development.


People have been saying this for decades. Both the idea that the academic publishing model is about to die (due to too many papers, low quality, broken peer review, etc.), and that we need to focus on reproducibility.

These ideas themselves are valid but not nuanced. I could also argue that academic publishing is at its most successful ever, with science advancing at an incredible pace even as complexity is so high, and I could argue that everyone is now thinking about reproducibility and it's baked into the academic process these days (for decades, having another lab reproduce your paper has been critical to it being accepted by the community even after being accepted for publication).

Maybe an analogy would be if I were to say "software is so broken, the quality is getting worse over time, new programmers can't fizzbuzz, security bugs everywhere, no one can even reliably make a healthcare website or computer game on time." Same argument that you're making.


> The best way forward is probably some metric of reproducibility. Can your paper/experiment be reproduced? Has anyone done so? Did they succeed or fail? Did they publish their results? That would quickly separate the wheat from the chaff.

I wish it was more common to cite a paper and the first few attempts at reproducing it side by side.

It would reward the reproducibility study with a lot of citations and reinforce the credibility of the cited original paper, because the reader now knows that the results quoted have been certified elsewhere.

Sure it can be gamed, but if you see $shady_institution and the reproducibility study was made by $other_shady_institution you can still draw your own conclusions...


Many of these fake papers are a major boon to the supplement industry.

Obscure supplements or herbal medicines are a common target for these papers. The more obscure the topic, the less likely the authors are to encounter conflicting results. So you end up with streams of papers showing that various herbal remedies or supplements show efficacy against COVID or cancer or other hot topic issues.

Supplement producers love this because it creates demand for their products. In some cases, I've been shocked to discover that the same researchers who published these papers are now marketing their own proprietary blend of those supplements. They create both the supply and the demand.


> In some cases, I've been shocked to discover that the same researchers who published these papers are now marketing their own proprietary blend of those supplements. They create both the supply and the demand.

Well, fakes aside, legitimate pharmaceutical researchers should also be expected to do this, right?


Not really, no. Once you have a compound that looks promising you then need to spend huge sums of your own money running a battery of human trials compelling enough to convince local regulators that your compound actually works and isn't incidentally harmful. Doing shoddy initial studies is just shooting yourself in the foot. Supplements are their own special area because they generally aren't regulated as pharmaceuticals.


There are companies in Europe that produce suspicious-looking papers backing products in the alternative medicine industry. "Dartsch Scientific" comes to mind. They have done papers supporting Memon and Powerinsole. The published papers include graphs and pictures of cell cultures and conclude amazing results, but the methodology described in the papers seems less than robust for products claiming miraculous properties.

https://www.dartsch-scientific.com/ https://www.powerinsole.com/science-en/?lang=en https://www.memon.eu/blog/schutz-hochfrequente-strahlung-ele...


The author emphasizes China, but the massive incentive to publish papers based on volume, with little/no consideration for the reputation of the journal in question, is prevalent in a number of different national systems.

China just has a disproportionate impact because of the size. It's not unique, and the comments talking about this being some aspect of China or the character of the Chinese are, I think, missing the mark. This is just humans responding to clear incentives.


> China just has a disproportionate impact because of the size.

Huh, no, fake papers and a total disregard for ethics in general have always been a huge problem in China. Remember, China is still poor, contrary to what they claim, and its academic population is therefore small in comparison to the rest of the developed world.


I'm not necessarily talking about developed world systems. I encounter these same incentives when working with researchers in Africa and other parts of Asia. But many commenters are making this inherently Chinese:

"Is there a good analysis somewhere of pragmatism in Chinese culture?"

"I lived in China for 6 months. This fits my experience, and is remarkably similar to a story I was told by a native Chinese: 'We were taught in school about Mao, who rose to power on idealism. At first he showed us many great things that we could do, but eventually idealism led us to the great famine, and many people starved. Since then, Mao lost some power, and now in school they teach us that Mao was 2/3 right, but 1/3 wrong. And that 1/3 that was wrong was idealism. So now we practice pragmatism.'"

"In China, the concept of shame [in contrast to guilt] is widely accepted due to Confucian teachings"

"ADV China talk about this a lot. China is basically a low trust society where amoral opportunism is not only tolerated but also expected, as long its 'chabuduo' (just good enough)."

etc. etc.

It's just as silly to talk about that as it would be to talk about the inherent pragmatism of the Tanzanians.

It is, again, just people responding to very clear incentives, and the market rising to meet them.


To some extent, if, as a journal, you have to start policing submissions in the ways described, you have already lost, in that so much of the journal industry is not about sharing knowledge but about generating a metric (publication count) for profit. Rather than create more hoops that are only going to push good science away to other venues, they should reconsider what they are trying to accomplish overall and, where appropriate, go out of business.

I feel like this is less a problem of serious science (where it's generally known which authors and journals are reliable, and where people using the results of others know enough to subject them to scrutiny). It is a problem of the publishing industry and of evaluating academics for tenure and promotion.


I think even serious science can be problematic, at least to the extent that "generally known" means wildly different things across different subfields. During graduate rotations I encountered a few instances where something was known to be very hard to replicate, but it was basically impossible to find that information publicly. So if you didn't happen to "network" with labs that were aware of this (which could occur for a variety of reasons besides just being early career), you could end up wasting a lot of time trying to build on work that is at best very finicky, if not outright wrong.

Trusting only the work of specific authors may not be a bad strategy, albeit quite conservative. But trusting work because of the journal or the university it came from is way more likely to yield misses IMO. Not saying there isn't a correlation, just that there's still plenty of bad science going on at supposedly top institutions. And I've lost count of how many times a supposedly big result published in Nature or Science just quietly drops off the face of the earth. No rebuttal or retraction, but 10 years go by and nobody in another lab does any real followup on a seemingly cool result? I find that quite sketchy.

Biology papers are especially bad at not releasing raw data or detailed methodology, so in some cases it's not possible for even a very educated person to evaluate the quality of the work from reading the paper. There's definitely some trust that goes into it, probably too much given how oversubscribed/hypercompetitive that field is.


This is an article focused on Chinese paper mills, but let’s not forget bad research happens in America too and doesn’t require paper mills. That was revealed in the grievance studies scandal: https://areomagazine.com/2018/10/02/academic-grievance-studi...

Joe Rogan interviewed two of the people involved in this project: https://youtu.be/OlqU_JMTzd4


Bit different for medical studies, though, isn't it?


Nature's own publication: Scientific Reports is also suspect. https://www.nature.com/srep/

More about Scientific Reports controversies: https://en.wikipedia.org/wiki/Scientific_Reports


Interesting that the journal claims to judge only on scientific process and not impact, which sounds like a good idea for publishing negative results and replications and such, but then they wind up publishing some really outrageous claims like a paper supporting homeopathy. Seems like the opposite of the stated goal, but I guess they can make more money by publishing straight up bad science instead of staying true to their mission statement.

That controversies page was surprisingly entertaining overall though, I have to say. Clearly the authors don't even take the journal seriously: "The face of Donald Trump was hidden in an image of baboon feces in a paper published in 2018. The journal later removed the image."


Scientific Reports is a very common source of HN posts.

People see only the domain name nature.com and think it's a quality source.


The current publication system is very odd. Mostly government money is used to fund research; then professors (who may just hand it off to a grad student or postdoc) are asked to review manuscripts without any compensation for their time; and the final result is then locked behind the paywall of some private journal.

It seems that the incentive of a journal is to maintain an appearance of legitimacy, rather than actually enforcing it. This is why, as the article mentions, journals tend to be fairly quiet about retractions and issues of misconduct.

There is a better incentive for those funding the research (the taxpayers/government) to ensure that their investments result in legitimate works. This also goes in line with the idea that these final manuscripts should be freely available to the public. Now with that said, I also acknowledge that the idea of having the NIH, NSF, etc. operate the editorial and review process would be nightmarish.


I think you are on point. I also wonder what the system would look like if you suddenly took out govt money out of the university system.

I suspect there would be a lot less research, but perhaps whatever papers made it to the finish line would be much more meaningful?


I think you would just get a lot less research overall. It might lead to a better fraction of research being of higher quality, but in absolute terms, it would be a drastic cut.

R&D is always going to be a sunk cost, especially when exploring "new frontiers". Funding people to "waste their time" going down different uncharted paths - with most of them leading to dead-ends - is still the only way to have a few return with new insight from those paths that led to new and wondrous areas.

You can already see this at the scale of "what" government chooses to fund. For years they have funded cancer research which has made many advances, but they did not fund "ageing" research anywhere close to as much. Recent advances in the latter have shown this to be a promising field, but imagine the advances we could have had had this been funded to the same extent that cancer research has!


Whenever something like this comes up, I think that a closely-related topic is replication of scientific studies. One of the best articles I've read on this was by Stuart Buck here [1], which I think offers a great starting point.

I also think it's really relevant to this particular discussion; the added emphasis on attempted replication probably wouldn't prevent paper mills from working, period, but it could drastically increase the cost of good forgeries and create an environment of accountability that could potentially make their use by researchers much more risky, which might reduce their use even further. Of course, publishers could help as well; maybe only the most innovative papers that have been replicated by unaffiliated third parties (provided with the pre-print or low-impact initial publication) get re-published in the higher-impact journals, and those that can't get watermarked with "unable to be reproduced x of y times."

[1] https://worksinprogress.co/issue/escaping-sciences-paradox/


I think that part of the reason is China's hyper-competitiveness, because the country simply has too many people. But then I'd guess that more or less the same thing would happen in India too? That country also has too many people, and from what I've seen their job market is pretty competitive. I'd be curious to see if these two countries appear similarly in the academic landscape.


There's a lot of fakery in the world. Perhaps it's because we want to pretend we are better than we actually are.

You can get fake teeth, fake hair, fake boobs, fake tan, fake muscles, support fake charities that take 98% for themselves. Read fake news and vote for fake politicians that will tell you sweet lies to make your fake self feel better. Fakery is only growing more popular.


Every student knows that some papers are a scam. But life is about getting to a position of certain security.


I wonder if the different issues we'd face would be less obfuscating, if science was authored anonymously.

In principle, why should we care who makes a particular claim? The claim should be decided on its own merits.


Papers aren't published under real names for the sake of the papers or to increase the acceptance of the claims in them. They're published under real names for the sake of the authors writing them. "See, I did that!"

It would be hard to evaluate a researcher's abilities if one couldn't read her papers.


This is exactly my point. When assessing research, the evaluation of a researcher should always be far, far secondary to the evaluation of the research itself.

In the contrary case, which is a separate issue - that of hiring a scientist, in every other arena, employees are hired based on interviews and testing, with some consideration given to past experience - but this is more and more, especially on the cutting edge, the least pertinent concern.

In science however, employees are hired sometimes entirely based upon their "experience", with little to no consideration given to their presently-assessable ability.

If I were to hire a research scientist, I would far prefer to test candidates by having them walk through the conduct of research in front of me. Many, however, would balk, indignant, and stand proud on their legacy of papers instead.

This gives me no idea of their actual ability, and taken on face-value - ignoring tradition and dogma - is actually a red-flag.


and they didn't even get started on social sciences..



