Human psychology and behavioral studies overlook 85 percent of people (sapiens.org)
245 points by headalgorithm on Feb 7, 2019 | 111 comments



I read the article. It barely scratches the surface of the issue. As a psychology student I was mad that this was the state of our knowledge when I read the underlying paper in 2013, before I knew about the replication crisis. The actual academic publication is really interesting, and I'd recommend it as a read on whatever commute you're on.

Here is the link:

https://www.ssoar.info/ssoar/bitstream/handle/document/42104...

Years after pondering this article, I felt I had figured it out: psychology is secretly the study of a particular subset of 18-to-21-year-old American women. The ones who study psychology because it's a female-dominated field, and all those psych students need their credit, and 'participating' in research is part of it. Most of them are American because psychology is a bigger thing in the US than in Europe, or at least it seemed to be regarding well-known theories, so I presume most research happens there.

There is another big group. A lot of dead mice (neuroscience).


Do you have any studies tracking the sex of study participants in psychology? I know that in clinical trials, historically the majority of participants are male:

"After the tragedies caused by the use of thalidomide in pregnant women, the FDA issued “General Considerations for the Clinical Evaluation of Drugs” in 1977. This guidance document stated that women of child-bearing potential should be excluded from Phase 1 and early Phase 2 research, except if these studies were being conducted to test a drug for a life-threatening illness. If a drug appeared to have a favorable risk-benefit assessment, women could then be included in later Phase 2 and Phase 3 trials if animal teratogenicity and fertility studies were finished...In 1993, FDA reversed the 1977 guidance with another guidance document entitled Guidelines for the Study and Evaluation of Gender Differences in the Clinical Evaluation of Drugs."

Also:

"In a study that evaluated the inclusion and analysis of sex in the results of federally-funded randomized clinical trials in nine major medical journals in 2009, researchers found most studies that were not sex-specific had an average enrollment of 37% women."

From "Women’s involvement in clinical trials: historical perspective and future implications" - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4800017/


Don't clinical trials often pay?

They could be attractive to desperately impoverished single adults, who are mostly men.


Source? Everything I've read says that women are more likely to live in poverty.

Stats on poverty by sex and age in the United States: https://www.statista.com/statistics/233154/us-poverty-rate-b...

Worldwide, the gap is even wider: https://www.washingtonpost.com/news/worldviews/wp/2018/02/14...


There's a weird disconnect between poverty in terms of income and poverty in terms of access to basic needs. For instance, American men are much more likely to be homeless than American women[0]. There's all sorts of interesting gender/race/age disparities related to homelessness[1]. The effect of income appears to be outweighed by the effects of gender, race, and age.

I should have been clearer than simply stating _desperately_ poor; I intended to convey that they are in a state of poverty where there are few other options available.

0: https://en.wikipedia.org/wiki/Homelessness_in_the_United_Sta...

1: https://www.forbes.com/sites/eriksherman/2018/12/29/homeless...


I'm surprised by that considering men are more likely to be homeless.

I wonder why that is; I would've thought poverty and homelessness go hand in hand.

Maybe it's more to do with mental health? Of all the homeless people I knew, a sizeable portion suffered from serious mental illnesses.

I don't know, I just found it interesting.


Someone once argued to me that homeless women are much more likely to get assistance, in the form of organizations or from begging, than homeless men. Never verified but seems intuitively true


Having been homeless and been around homeless groups before: being homeless is very unsafe (deadly) for women, so many will pick the path of slavery instead (or abusive partners, whatever; it's not really like there's a difference). After all, both involve being used as an object and thrown away, but with the latter one's likely to at least get food first, and probably drugs to help survive.

It's a really horrible place to be.


It's to do with the fact that society objectifies men as objects of success while women are objectified as sexual objects for having a vagina and womb; hence a woman cannot go any lower in value than her womb and vagina, while an unsuccessful man can fall into a bottomless pit as far as society is concerned.

Men are the disposable sex.


Considering the homeless population is like 80-90% men, I think it's fair to say poverty is not the whole story. Men have some expenses women don't, and women can receive far more social benefits than men (some welfare programs are exclusive to women).

And the world stats aren't relevant as we're talking about psych studies that mostly take place in the US.


Does that poverty rate take into account that a lot of females are given money and financial support by a partner?


Clinical trials are for medical experiments.

Social psychology experiments use whoever they can get, often for class credit only.


> Years after pondering this article, I felt I had figured it out: psychology is secretly the study of a particular subset of 18-to-21-year-old American women.

There are many analogs of that in computer science too. For example, in computer vision, CIFAR-10 is a de facto standard for measuring performance: a set of 60k 32x32 images. Good results on that data set don't necessarily translate into real-world performance. But what can you do? Gathering huge data sets and having humans annotate them is incredibly expensive.
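For a sense of scale, here's a minimal Python sketch (assuming torchvision is installed) that loads CIFAR-10 and shows just how tiny those images are:

    import torchvision

    # Downloads the 50k-image training split (the 60k total includes a
    # separate 10k test split).
    train = torchvision.datasets.CIFAR10(root="./data", train=True, download=True)
    img, label = train[0]
    print(len(train), img.size, train.classes[label])  # 50000 (32, 32) and a class name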

Another ML example is music recognition. There are several state-of-the-art methods for detecting notes in music, so one Chinese researcher tried to apply them to Jingju music: https://www.youtube.com/watch?v=NJsTl342RhI. The results were... less than stellar.


> psychology is secretly the study of a particular subset of 18-to-21-year-old American women.

This is perfect!

I had a similar epiphany about linguistics, much of which appears to be the study of example sentences that linguists come up with.

Since you could invariably tell when something was an example sentence, this was obviously a different language from what people actually used. (Of course there are linguists who go out and study real language use, but some actively dismiss this as mostly irrelevant "performance".)


Those example sentences linguists come up with are actually very reliable. Here's Sprouse & Almeida (2012), "Assessing the reliability of textbook data in syntax: Adger's Core Syntax": https://doi.org/10.1017/S0022226712000011

" [...] This suggests that even under the (likely unwarranted) assumption that the discrepant results are all false positives that have found their way into the syntactic literature due to the shortcomings of traditional methods, the minimum replication rate of these 469 data points is 98%."

98% replicability!! Compare that with psychology.


Interesting result, but seems to answer a different question: do naive native speakers find those examples acceptable?

The first problem is that asking people how they speak or what is acceptable does not accurately represent how they actually speak when not observed/asked:

https://en.wikipedia.org/wiki/Observer%27s_paradox

The second problem is that, even if the first isn't an issue, the most this can demonstrate is that linguistic example sentences are a subset of actual language.


That's a fair point, but then the question is: what should the question be? There are many subfields and disciplines of linguistics, all of which are IMO fascinating. In theoretical/generative linguistics the distinction between acceptability and performance is relevant, and acceptability seems to be a very robust measure.


> Most of them are American because psychology is a bigger thing in the US than in Europe, or at least it seemed to be regarding well-known theories, so I presume most research happens there.

I'm not sure about the amount of research, but the number of psychology students here in the Netherlands is huge. And it seems to function in much the same way: often first-years get some form of credit for mandatory participation in experiments run by higher-year students and sometimes regular faculty members. I'm also aware of some researchers going out of their way to get a different sample from the population, but in the end it's a lot easier to get participants when you make them.


The thing with the reproducibility crisis is that a lot of psychology theories have the "looking under the lamp post" effect (a drunk loses his glasses in a river, and a police officer finds him searching under a lamp post "because at least I can see here"). It's been well known for centuries that human motivations are complex, socially layered, and contextual. But most psychology theories wind up fairly simplistic and not framed by multiple contexts because... these can be easily tested (except apparently they can't be easily reproduced, uh..).

I don't know any easy way out here. Just because "folk psychology" can say formal psychology is too simple doesn't mean folk psychology and anecdote are more useful.


> The ones who study psychology because it's a female-dominated field, and all those psych students need their credit, and 'participating' in research is part of it.

Wait, does participating in a study as a subject count for credit, or does working on a study count?

Aren't studies explicit about the demographics of their participants? And don't studies that make generalized claims usually control for things like gender and age?


It's common practice in psychology degrees to have some of your ungraded but required credit basically be "participate in the studies of others". I get why they do it this way (otherwise there wouldn't be enough people willing to participate in undergrad psychology experiments). Personally I find this practice very unethical.


While true, that's misleading, as this is generally for class assignments, not published research.

Some overlap does exist, but it’s significantly less than suggested by the requirement.


What do you mean, class assignments? It may vary by university, but at my school the studies we participated in were indeed being published. This also was not only for psychology majors but anybody who took 'psych 101' -- participation in at least 3 studies was required to receive credit for the class.

It was fun, but it was seriously disillusioning to see how absurd most psychology studies are. Two researchers talk to me for 5 minutes about women performing better than men on certain forms of math tests. They then sit me in a room alone for about 10 minutes, presumably to try to make me nervous. Finally they give me a test, overtly handing it to me on a pink sheet of paper instead of a white one. Then after the test they continually tried to get me to agree that I felt like I probably didn't do so well on the test. I refused, since I was certain I nailed it (it was a cookie-cutter pre-SAT math type test). They told me I'd receive my results by mail within 2 months. I never did. E-mails asking for them were ignored. The research got published and was, of course, some trite 'gender bias explains all differences in math' paper.

At the time I just found the whole thing quite absurd. In hindsight, I think it was even worse: that study was prepped and ready for p-hacking. They were measuring, and possibly varying, a large number of variables: the paper color, the isolation, the 'chat(s)', different sections of the test, whether men/women were over/underestimating their performance, etc. And they were actively trying to push people into agreeing with what they wanted to find. You're going to be able to find some sort of statistically significant correlation in there, even if none actually exists.
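The arithmetic behind that is easy to check. Here's a minimal simulation (assumed numbers: 20 independent comparisons with no real effect, tested at alpha = 0.05):

    import random

    # Under the null hypothesis p-values are uniform on [0, 1], so with 20
    # independent comparisons the chance of at least one p < 0.05 is
    # 1 - 0.95**20, about 0.64.
    random.seed(0)
    trials = 100_000
    hits = sum(min(random.random() for _ in range(20)) < 0.05 for _ in range(trials))
    print(hits / trials)  # ~0.64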


I'm not sure whether this is a rant, or informing or both. Let me know if it's a bit too much on the ranting side, I'll edit it.

> Wait, does participating in a study as a subject count for credit, or does working on a study count?

There was this course called "measurements and diagnostics", and to pass it you had to complete the theory/tutorials of the course and also 10 hours of being a participant in research. If you hadn't done the 10 hours of participation yet, you could do them at any moment you wanted, and once you completed them, you immediately passed the course.

My university has about 400 psychology students, so that is 4000 hours' worth of people participating per year. And I'm pretty sure that more universities are doing this. In the Netherlands alone I estimate there are about 40000 psychology students per year, with probably about 5 hours of compulsory 'participation' in experimental studies each. So a tiny country like the Netherlands alone skews this particular bias by about 200000 participant hours per year.

Participating in experimental studies is quite fun (pro tip: go for the fMRI and EEG studies ASAP! ;-) ), but I didn't appreciate how I was forced to do it. I didn't learn too much from those experiments and I felt used. It furthermore is a way to be lazy regarding participant recruitment. I'm pretty sure that some startup could disrupt this particular problem. If you have ideas about it, email me, I'm willing to brainstorm and help you to look for ways to solve this problem.

> Aren't studies explicit about the demographics of their participants? And don't studies that make generalized claims usually control for things like gender and age?

Studies are explicit about the demographics, which is why the WEIRD article could be written. And yes, they do control for things like gender and age, and in general they are quite careful making those claims. The issue, though, is that as a research field psychology isn't the study of the human mind and human behavior, but of WEIRD people's minds and WEIRD people's behavior.

Like a lot of societal issues that are in popular media today, this is a systemic issue. Though I could imagine psychology researchers being like: "Hmm, I could get a representative group or I could just get some psychology students and churn my next paper out twice as fast!"


Would it be fair to say that by being forced to participate in a study you aren’t really giving your consent? Even though I presume you get to find/pick the studies you participate in.


It's not just fair, I'd say it's accurate.

Here's another interesting phenomenon: a certain personality type (among which I may or may not count myself) will deliberately supply false data that attempts to invalidate what we perceive as your study goal when forced to participate against our wills in this way. It's a small rebellion against an academic system that doesn't respect us or intellectual integrity, but is focused on pumping out shoddy research on unwilling participants.

Turn academics into a game and that's how it'll be treated.


I remember this practice of almost-compelled research participation even in the introductory undergraduate Psychology 101 course, and in that class (which included high school students and tons of non-majors) there was a lot of the attitude you describe.


Generally, there are "alternative" assignments that you can do instead, but they're usually somewhat more annoying.


In my case, we had no alternative. If you chose to study psychology at my uni, you had to do this.


Was there some choice of what studies? So that if you object to a particular one you could decline... if you object to all of them then I guess you should study something else.


I can only speak for the American university I work for, but here, every Informed Consent form explicitly states that participants can withdraw at any time for any reason and still get credit.


That seems unfair and wrong. I'm forced to have a job to pay my bills and I get to pick my job but that doesn't exactly mean it's without my consent.

I also have the choice of not working or working some other job.

I would say it's a responsibility, not something forced on people against their will. Two very different things.


I'm talking about the concept of "voluntary informed consent" in the context of the ethics of research studies. It's a far more specific thing than 'oh our capitalist system requires our labour or we'll freeze to death'.

Voluntary informed consent generally requires that participation is completely voluntary, that any rewards for participating don't cause undue influence on the participant's decision to participate and that there will be no consequences or loss of benefits for not participating.

I would argue that to reach this standard the requirement to participate in studies to complete your degree would need to be known before enrollment (likely the case) and the actual study you'll be participating in needs to be identified (almost definitely not the case). Anything less and students will find themselves financially and personally committed to a course of education without truly knowing what they'll be required to do to pass.

If you're interested in the topic, take a look at this neat little booklet: https://oprs.usc.edu/files/2017/04/Informed-Consent-Booklet-...


> Participating in experimental studies is quite fun (pro tip: go for the fMRI and EEG studies ASAP! ;-) ), but I didn't appreciate how I was forced to do it. I didn't learn too much from those experiments and I felt used. It furthermore is a way to be lazy regarding participant recruitment. I'm pretty sure that some startup could disrupt this particular problem.

I don't see much ability to innovate here other than Mechanical Turking out the work to people you pay to participate, but if you come up with something interesting I'd love to hear it…


> other than Mechanical Turking out the work to people you pay to participate

You'd be surprised how common that is.


It's actually a decent source of money as a student. Go lie in the fMRI for an hour or two (I just sleep if there's nothing to do) and they will wire you ~10€/hour or more.


Exact same thing going on in the courses over here in Aus.


Thanks for taking the time to answer my questions as well as you did.


I'm taking an intro neuroscience course this term. We can participate in department studies for extra credit, of up to a 3% bump for the entire course.


Getting research subjects is hard. For any significant sort of test beyond a simple survey, it is really, really hard. So colleges force their undergrads to be participants so that the graduate students have enough fodder to write research papers from. Of course, the IRB views this as unethical, so all the undergraduates are given an alternative to being a participant in research, at which point it becomes a question of crafting an alternative that is painful enough that all the undergrads choose being a research subject, but not so painful that the IRB considers it unethical.

I'm used to the alternative being a research paper that can easily take 20 times longer to complete than being a test subject.


> There is another big group. A lot of dead mice (neuroscience).

I'd add in Beagles, Capuchins, Zebrafish, and Mongolian Gerbils there too.


I believe a significant chunk of genetics is similarly limited, to Icelanders, for example.


I actually don't see this as too problematic.

The entire field is murky because it's very hard to scientifically measure human psychology and behavior, and the tools of the trade seem almost laughably simplistic (like the aforementioned 5-point rating questionnaires). So our body of knowledge doesn't even reliably describe WEIRD people. Rather, it's a crude proxy that just might contain some elements of truth warranting further investigation.

But better crude tools than no tools, and better locally available subjects than no subjects (because most studies don't have the budget to go to Zambia). Similarly, in other fields research is done on rats, pigs, and monkeys, with conclusions extrapolated to humans. Obviously not perfect, but again, it's at best a starting point for later studies.

I think the real problem is the over-zealous interpretation of study results as "truth".


I disagree.

If you're trying to measure a target 1 cm across from 1 mile away with a ruler and a squint, anything you say is not only likely wrong but woefully deceptive.

My view is that this shouldn't even be attempted, because it just generates superstitious theories. Psychology is the Skinner-box pigeon.

There's a fallacy here that's hard to pin down clearly, but roughly: to measure inaccurately isn't to measure approximately. It's to measure totally in error.

The errors in social psychology are not just "second decimal place", they're angels pushing stars.


Most people I know believe and expect that some form of talk therapy is an important part of a mental health safety net. Now, how should we train these therapists? That's the utilitarian reason why we need something like a study of psychology.

Unfortunately, much like with more physically oriented doctors, opinions and practices will vary widely (including going against established research for whatever reason)-- and those seem to be particularly diverse in this field. The replication crisis clearly shows a strong need for reform at the research level as well.

In my opinion, the way we approach these issues should be 99% focused on what we know facilitates better outcomes (short and long term) for actual patients in a cost effective way. Theorizing about fun new methods is exciting, but what we need most these days is more disciplined and well trained pragmatists in the field, who can put their world views aside for a while to provide basic care for the (emotional equivalent of) bleeding gunshot wounds in front of them. How we get there is a big question, but one part of it is always going to be recruiting and properly equipping a certain percentage of incoming students to do the work.


Talk therapy is a different case than research social psychology.

I take talk therapy to be a kind of practical skill in regulating the emotions of another human being through narrative & interpersonal contact. That requires a lot of practical training and is, in fact, mostly practical training + high empathy.

Research social psychology does not inform this at all, being a totally different area.

"Psychoanalytical research" is even more BS than social psychology, but that's rather pre-advertised. And somewhat defensible as providing a "training ground" for learning the practice of talk-therapy rather than any kind of genuine explanatory framework.


But under this view, how is talk therapy not more bullshit than research, or than more evidence-based psychiatry?


Riding a bike isn't a theory of classical mechanics. Talking to another person isn't a theory of human psychology.

Both require something to be true, but have no content on what that is.


I concur. Psychology isn't hard science by any means, and extrapolating from your subjects' samples to people across the entire world, in other cultures, is obviously ridiculous. But we can still use tools and find patterns to help people.

Surely the solution isn't to throw up our hands in the West and be like "we can't help the mentally ill because our discoveries from around here might not apply to some subjects!" -- but of course, where studies are extrapolating those results to other cultures, that's worrisome and inaccurate.

Human behavior has some universals, but a lot of what we figure is universal isn't.


I think the 'better crude tools than no tools' claim is false. The history of the 20th century is littered with an embarrassing number of stupendous tragedies that can all be laid at the feet of a lack of scientific rigor. Eugenics, leaded gasoline, thalidomide, DDT, lobotomies, and many, many others all stemmed from people who felt that it was expedient to draw conclusions based upon either insufficient data or reasoning they knew was not iron-clad.

There will always be a conflict between what we can know with confidence and the decisions that we need to make to handle situations that arise. But it has been shown that having someone in a position of authority as a 'scientist' giving their approval to something based upon insufficient evidence or reason often leads to the most severe and large-scale suffering. So I'd have to recommend against giving any credence to any crude tool.


Especially considering the inherent conflicts of interest:

"Of the authors who selected and defined the DSM-IV psychiatric disorders, roughly half have had financial relationships with the pharmaceutical industry at one time"

https://en.m.wikipedia.org/wiki/Diagnostic_and_Statistical_M...


Would you rather know nothing about something, or know something about it that is probably wrong? This should be a rhetorical question, as the answer is self-evident. The psychology replication crisis started off when it was illustrated that only 36% of studies published in highly regarded psychological journals could be successfully replicated. And, unlike psychology studies, that discovery can be and has been replicated. One can only imagine the rate in less well regarded journals.

The point of this is that the average study in psychology is much more likely to be wrong than right. And so by indulging psychology you are not giving yourself a crude tool for understanding, you are actively misinforming yourself! Imagine I wrote a newspaper where 64% of the articles were fake or misleading. If you'd like to be as well informed as possible, you'd be better off never reading that paper, even if there are some true things in it.

Science is not a 0 or positive game. Bad science can and does send societies and progress backwards.


That's not the only issue.

https://en.wikipedia.org/wiki/Replication_crisis#Psychology_...

> A report by the Open Science Collaboration in August 2015 that was coordinated by Brian Nosek estimated the reproducibility of 100 studies in psychological science from three high-ranking psychology journals.[38] Overall, 36% of the replications yielded significant findings (p value below 0.05) compared to 97% of the original studies that had significant effects. The mean effect size in the replications was approximately half the magnitude of the effects reported in the original studies.


That makes perfect sense. The published studies are a sample of 'study-space' that has outlier significance. Replicate published studies and their significance likely returns to the norm.

Journals are filters to cherry-pick 'study space'. By the way they're constituted, they publish new studies that have overstated significance.
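That filtering effect is easy to reproduce in a toy simulation. Here's a sketch with assumed numbers (true effect d = 0.3, 30 subjects per group, "publish" only when p < 0.05), using numpy and scipy:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_d, n, studies = 0.3, 30, 10_000

    def run_wave():
        # Each row is one study: treatment vs. control, unit variance.
        a = rng.normal(true_d, 1, (studies, n))
        b = rng.normal(0.0, 1, (studies, n))
        _, p = stats.ttest_ind(a, b, axis=1)
        return p < 0.05, a.mean(axis=1) - b.mean(axis=1)

    published, d_orig = run_wave()   # journals keep only the significant studies
    replicated, d_rep = run_wave()   # independent replications of the same designs
    print(d_orig[published].mean())      # inflated, well above the true 0.3
    print(d_rep[published].mean())       # regresses back toward 0.3
    print(replicated[published].mean())  # replication "success" rate ~ power, not 97%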


Veritasium has a good overview of the replication issues in much of modern science: https://www.youtube.com/watch?v=42QuXLucH3Q


The listed causes of the replication crisis put zero blame on academia. This is an academia-caused crisis.


> [...] something like “I generally trust people.” Then participants are asked to choose one point along a five- or seven-point line ranging from strongly agree to strongly disagree. This numbered line is named a “Likert item” [...]

Oh god, having filled out a bunch of these for diagnosis and such, I hate these with a passion. I always wondered how well they actually work.

I've seen grammatical nonsense like, "Do you often do X? -- always, often, sometimes, rarely, never". What, I often rarely do X? And what does often mean, anyway? Like once a week? Every day?

Then there are the abstract or vague questions, where you have to work out what concrete situation they could apply to. It's hard to think up an example off the top of my head, but how people reply to these surely depends on what exactly they think the question might mean.

Then you start losing patience after about 3 minutes of this shit, not to mention 15 or 30 minutes, and just go through them barely reading the questions, even though for the first couple of questions you pondered for ages whether you "agree" or "somewhat agree".


I didn't quite get why the article described these surveys as problematic. I skimmed the linked article in that section but this didn't provide any answers either. Assuming the participants fully understand the question, and the survey is designed well, what cultural aspect is stopping people from answering?

I grant that with a question like "Do you often do X?", examples are necessary to specify what "often" means.

From the article:

> Some people may refuse to answer. Others prefer to answer simply yes or no. Sometimes they respond with no difficulty.

That just sounds like some people boycott the Likert questions, but we don't know why.


I don't know what reasons other people have for refusing to answer. As a bit of a pedant, I just wanted to explain why I personally don't like them; it has a lot to do with them being vague, abstract, unclear. Confusing, really. You sit there and are not sure you can even answer the question honestly.

In a normal situation I would probably refuse and ask them to clarify, or challenge their assumptions or something. In this situation I just try to get over my aversion and answer, interpreting the question as best I can. Sometimes I have no clue what it could mean and choose neutral or whatever. It's mentally exhausting.

This doesn't invalidate the methodology. Just saying I don't like it. If people refuse to answer or answer in a way that doesn't fit the schema and they throw it out, that's obviously a problem though.

Edit: To maybe answer your question, I skimmed through the actual article, and they mention things like old people not culturally accepting a young person administering questions, and children thinking they should not speak in the presence of elders. But also other weird things, as mentioned, that seem to me to stem from confusion. I mean, you get better at answering with practice, so the whole concept might be totally baffling to some people who haven't gone through a Western school, where you're also quite often expected to answer unclear and confusing questions on tests.

More edit: The PhD students and professors designing these questions and the college students used as test subjects are basically the most overschooled people on the planet. They are the ones that, in school, excelled at answering abstract, underspecified questions with multiple-choice answers. Many have seen these before. They might even know how they are made and how the results get processed. Of course they're going to have less trouble answering.


Agreed. They are like 'which color represents your opinion on this complex subject?' I don't know what to say - I agree with some of it, disagree with another part, and have no opinion about how it applies to others anyway.

Then there are the 'Have you quit beating your wife?' questions, posed to make it a dichotomy but I'm somewhere else entirely.


Surveyors and surveyees often being drawn from the same pool of people is probably bad, I agree.

My conclusion is that the usability and accessibility of empiric testing could use some work. Even in regards to WEIRD people :D


The problem with these Likert items is that they are often an uncalibrated scale, so the subject is interpreting the question as well as the possible responses. They are also used in medical studies of pain, for example. You've seen them: the little faces in the doctor's office indicating differing amounts of discomfort, and you pick the one that describes you. My buddy and I had a good laugh thinking up how to calibrate it :)

I'm not even sure how you test something like that for reliability.
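For what it's worth, the textbook answer is internal-consistency and test-retest statistics. A minimal sketch with made-up data (the generating model here is an assumption, not a claim about any real scale), computing Cronbach's alpha:

    import numpy as np

    def cronbach_alpha(items):
        # items: respondents x items matrix of Likert responses
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    # Hypothetical data: 200 respondents, 5 items all tapping one latent attitude.
    rng = np.random.default_rng(0)
    trait = rng.normal(size=(200, 1))
    items = np.clip(np.round(3 + trait + rng.normal(0, 1, (200, 5))), 1, 5)
    print(cronbach_alpha(items))  # ~0.8 under these assumptions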


These tests have other questions that intersect and determine how much you actually mean it, by approaching it from different angles. The MMPI-2, for example, may touch on the same subject from various degrees and angles to get the overall picture.

Not saying it's right, but it's not a singular question ("do you believe X? agree? slightly agree? etc."); it's a bit deeper and more nuanced than that. And the statistics tend to back it up.


You are right, they do this, especially in the more tedious longer ones.

I think improving these things is difficult, because if you change the questions, you also lose all the accumulated data, and maybe now you have a better questionnaire, but you wouldn't know it, and you wouldn't know what the results mean -- at least not until you administer it to just as many people as the old one. It's a lot of work.


That's why starred-review sites have mostly reverted to thumbs-up and thumbs-down. The biggest trial of Likert items in history has spoken.


... and people have started complaining about those reversions to thumbs up or down (https://www.polygon.com/2017/4/7/15212718/netflix-rating-sys...). And in any event, why would you use the same thing for everything?

The amount of armchair criticism of this sort of thing always surprises me.

There's literature on the number of response options, and it basically says:

People respond quicker with binary options.

You lose information though. People get pissed there's no nuance available (no "maybe" for example). They refuse to answer because they can't say anything other than thumbs up or thumbs down.

As you increase the number of options, your ability to predict increases. Predict the same thing later, predict other things, etc. It stabilizes though.

It does take longer for people to respond, but then they complain about what "slightly agree" means.

Regardless, though, you can ask people to do it, and if they just do it, you can predict from it.

Even when you let people omit responses and refuse to respond, you can model that refusal to see what it predicts and then use their refusal anyway.

You can model how people use the response options in different ways, and in general it doesn't matter too much.

You can even eliminate rating scales altogether and use entirely different systems (forced-choice between different statements), but those don't actually work much better either.

Yes, you can always respond in a cheeky, subversive, or manipulative way, but then that's an entirely different issue altogether. You can always do that, and all you've shown is that you're smarter than a rating scale.

As for the original article... these criticisms have been made for decades. Decades. These are my general impressions:

1. The problem of western focus in behavioral sciences is a problem of western focus in the sciences period. Many behavioral scientists would love to do research across multiple sites but cannot afford to do so. And this doesn't stop all sorts of other biomedical research from being done on very narrowly defined groups of people (or animals).

2. Effects observed in undergrads generalize a lot more than the author is letting on, and effects observed in western populations generalize even more. I'm not saying it's not important to study things cross-culturally, only that the idea that people are fundamentally different in different settings is itself flawed. Many things have been examined across different cultural settings, and the differences are not all that dramatically different. In fact, in one recent replicability study, the effect of culture/sociogeographic population was one thing that didn't seem to matter that much. Some studies replicated and others didn't, but the sociogeographic setting didn't seem to matter very much.

I agree that being more sensitive to human variation is critical, but like a lot of things with behavior, there's a lot of grey areas, which people don't like to hear.

There are whole fields within psychology and the behavioral sciences devoted to these issues. People have put a ton of effort into studying them, considering all the issues being raised here as well as many others too numerous to count, and it's like all that work gets brushed aside like snow in the wind because of random blog posts and anecdotal experience.


What people complain about is maybe not the important thing. It's getting actionable results.

And what about the 'Have you quit beating your wife' part? Where I don't lie on any part of the spectrum they've drawn. OK, that's the same with thumbs-up-or-down, but with the spectrum it's in your face that you have no answer that's meaningful. I refuse to answer those kinds far more often than thumbs-up.

Finally, mega-corporations wanting actionable results are not 'brushing aside with anecdotal experience'. That's the point I made. They're spending billions and want results, and more often use thumbs-up-or-down. That trial has five orders of magnitude more data than all the graduate students in history added together.


Or what about people actively lying?


I guess it would be a minority because telling the truth is easier and faster under most circumstances.

But yes, all these studies where the authors authoritatively make claims that are based on people answering questions about themselves (how often do you cheat on your partner?) -- I never believe those numbers.


Deception vs. honesty on certain questions in itself is a huge revealing factor in a person's personality.


Not really - these Likert question studies are so much easier to click through rapidly. You can double or triple your hourly rate depending on how bold you are. Would a college student cheat their psychology department out of 10 dollars? Maybe some.

I wonder how much shrinkage (wasted questionnaires) they have to account for? Pun semi-intended.


It's accounted for in modern psychological tests.

Edit: to clarify, there are many other, seemingly somewhat related or unrelated questions that are used later in the test to infer whether you were lying. People who aren't familiar with the tests won't notice it during an honest run through the test.


The big problem with this is that the more diversity you have in your study participant set (gender, age, skin color, handedness, etc.), the bigger your study has to be to correct for potential biases in results caused by differences in the participants unrelated to what you're testing. And the bigger the study, the more it'll cost, so there will be fewer studies overall.


It's not so much a problem as simply the cost of doing proper science, and just because you run a small study doesn't mean you have to run it cheaply (though you probably are if you're an academic).


I guess the dilemma is whether you want to do good science or just get your papers published for a low price.


I am glad that there are some big studies out there though; big in terms of participants, or big in terms of long-term research - about subjects like nutrition, cancer and heart problem risks, fertility, economic mobility, etc.


Even among 18-21-year-old American college women there are ridiculous numbers of variables that you can't control for. Give up the idea that you can control for biases in psychology. Do big mixed tests; then, if there are positive outcomes, see if they're correlated with any of those groups. Then you can do more tests on the groups that didn't respond.


Sorry, that's a sunk cost fallacy.


There should be an incentive for laboratories to produce bigger and better studies rather than a bunch of small ones.

Especially since a study with 300 participants is worth immensely more than 10 studies with 30 participants.
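A quick power calculation backs this up (a sketch with an assumed effect size, d = 0.3, and per-group counts, using statsmodels):

    from statsmodels.stats.power import TTestIndPower

    power = TTestIndPower()
    # One study with 300 per group detects d = 0.3 almost every time...
    print(power.power(effect_size=0.3, nobs1=300, alpha=0.05))  # ~0.96
    # ...while each 30-per-group study catches it only about 1 time in 5.
    print(power.power(effect_size=0.3, nobs1=30, alpha=0.05))   # ~0.20

And the small studies that do reach significance will tend to report inflated effects, for the publication-filter reasons discussed elsewhere in the thread.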


This is a known problem. Nearly all experimentation-based psychology research was in past decades conducted within the US and is heavily biased to US culture. Cultural differences aren't limited to how people interact with each other, but also include primitive qualities like how people perceive space and use tools.


This isn’t even the worst problem with psychology.

The worst is its contextual loyalties against critiquing the relationships people have with one another. Power struggles, in particular.


Can you elaborate on this?


By predetermining that diagnosis will be of the individual, society is assumed to be a constant. In science, we call that fraud.

Effectively, clinical psychology serves apology for societal and economic ills by blaming the effects on the individuals themselves.

The process is strikingly transparent from an objective standpoint, but ironically it’s precisely an objective standpoint that is undermined by the institution. These days, you can visit one and tell them just about anything, and they will diagnose your disposition as illness.

I don’t blame psychologists themselves, and who is to say they don’t help people? Many surely do, but the institution is an absolute drag on our economy and society; something like a collective excuse for avoiding self-critique.

Sociology theoretically steps in here, but it is merely a study, and some branches of it would result in a totalitarian nightmare.

What's needed is humanist sociology and humanist psychology. As we learned in kindergarten class, there is nothing wrong with us. Each of us is unique and beautiful in our own way. The questions are about how to live with one another.

Psychology as a study is another thing, but as a medical malpractice it has proven antithetical to systems design from the start.


If you have scientific studies to support your objective viewpoint that mental illness is merely a product of societal or economic ills, with definitions of 'societal ill' and 'economic ill', I would be more than interested in seeing the data you derived these conclusions from.

In the meantime, I would think the field of clinical psychology is a safer bet.


>If you have scientific studies to support your objective viewpoint that mental illness is merely a product of societal or economic ills

It's common sense, and there is no objective proof to the contrary.

"Psychiatric diagnosis still relies exclusively on fallible subjective judgments rather than objective biological tests"

- Allen Frances

https://en.m.wikipedia.org/wiki/Allen_Frances

Psychiatry/psychology also admits that social environment has substantial effects on mental illness.

But asking for a "scientific" study of that is silly, because it is impossible to have an independent variable or control group -- because it is impossible to separate an individual from societal influences.


You are making the claim of an objective evaluation. It is your responsibility to provide proof of its objectivity. I am otherwise more willing to take something that at least has supporting evidence, dubious as it may be, over claims that have no evidence at all beyond 'common sense'. I think that's a pretty common-sense position, but it's clear we don't agree.


There is no objective proof that mental illness even exists, so the burden of proof is on those who claim it's just like real illnesses (psychiatrists and psychologists). They're the ones claiming to be scientific.

I'm merely claiming they're not being scientific.

There is plenty of evidence that dysfunctional social environments often lead to dysfunctional individuals.


One pretty decent correlation is that between poverty and mental illness. People who are regularly abused by others also end up mentally ill. To me it's pretty evident that mental illness is caused in large part by your environment. Remove somebody from a certain environment and they will no longer be mentally ill. Keep them in that environment and feed them pills and give them talk therapy and they'll remain ill.

It's not like this is some groundbreaking, controversial revelation that flips the bird to science. It's a fact that isn't incorporated into a model of mental illness treatment that pretends "mental illness" is primarily the result of a disordered, badly behaving brain. This model is used because it's easier to change a person than the society around them. It's easier to empirically measure change when you're measuring the effects of something on an individual rather than on the entire society.

I would attribute the decline in mental health metrics in the West (more self-reported mental illness, more suicide, more social withdrawal) largely to the failure to address big-picture concerns. Clinical psychology isn't useless for solving the mental health problems of society; it's just inadequate. It's like expecting tier 1 support to solve a problem that's rooted in an entire network being fucked up. What I see is a society with problems that are correlated with poor mental health which aren't getting addressed, while people act as if the solution is leaning more on talk therapy and pharmaceuticals, which are failing to address a society-wide decline in mental health.


"Let's not disturb the status quo and what's mainstream with our results, else we'll stop getting grants"


Isn't this applicable to most of academia and science and not just psychology?


Physics and engineering don't have many findings that will annoy the status quo.

Speaking against, e.g., mainstream cultural tenets, or how we run society, or how we parent, can lose you grants quickly, and those are all things psychology and sociology can get at.


Reading the intro I had a strange feeling about this text.

I don't have any issue with the general message. This surely is a legitimate problem. But the intro reads a bit like "psychology is such a great endeavour, if there wasn't this little issue."

Everyone following science news should be aware by now that psychology suffers from a whole range of systemic methodological problems, notably publication bias, widespread p-hacking and failed replications.


I remember our psychology 101 professor offering .1 GPA “extra credit” for every psychology study you went to. There was no way to get a 4.0 in that class unless you attended every class, answered every exam question correctly, every quiz question, etc.


Human behavioral and cultural adaptability to the environment is really impressive, and it makes many psychological findings local. With technological and cultural changes, some results may apply to only a few generations before they go away.

Think of children growing up in a war zone or a poor, violent environment. As adults they are often more sexually promiscuous and aggressive, and their impulse control seems to be below 'the baseline'. They show trust issues.

How much of that is just damage and disorder, as psychology seems to assume, and how much is adaptation to survive and procreate in an environment where lifespans are short and life is uncertain? Maybe childhood stress and stress hormones trigger survival strategies that work well in a hard environment. They are maladaptive only in the culture and safety of the developed world.


Locus of control is another way of understanding how humans are not necessarily universal to one another in how events are perceived. The missing-shape example might just be that the children never encounter squares and triangles in daily life and are thrown off by them. I don't really consider that a great example of showing how behaviour differs.


The very fact that familiarity with a shape can influence performance on pattern recognition tasks so dramatically is a significant finding on its own.

It would mean we are effectively blind(er) to unfamiliar shapes, even extremely simple ones like a triangle or a square.


But that is just conjecture. No one knows why they failed the test more often, only that they did.

It could have something to do with their written language, for example, and shapes similar to triangles and squares in it appearing in a certain pattern that isn't the basic repeating XXOXXOXXO...

Or it could be music and rhythm is a big part of their culture, and XXOXX then follows with XXXO in a large number of their songs.


I was just thinking. If they gave me this test back in the '90s, but replaced it with a Pokeball and the face of a character from the Pokemon show, I'd probably have done better too.

Those would have been things recognizable to me. Things I liked a lot.

Also, they would have put my mind somewhat at ease, I think.

I wonder how much of the inability to answer some questions is actually down to feeling slightly intimidated by the onslaught of questions, their sterile aura, not knowing the person presenting them, etc...

How much does the subconscious feeling of intimidation influence the ability to think?


I'm surprised that this is a recent discovery. I would think that if someone asked a group of kids why someone would get the problem wrong, they would be like, "well, first time seeing shapes."


Well, some certainly do, but as always, it is a game of sampling, so individual errors are not as important as proportions. If we take the article at face value, proportions differ drastically between very different cultures so there must be something in the culture that biases the proportions towards one outcome or the other systematically.

Of course, it's always possible any individual study contained unknown and unaccounted-for biases. Funding in science is still relatively small, the number of people participating is few, and science itself is hard.


Much of experimental design in psychology is an effort to attenuate the effect of individual differences so that you can find the common (and probably core) cognitive processes. Sometimes the risk to external validity is worth it in order to reduce the complexity of data and models from confounding variables. So we work with a system where most science is conducted with a relatively homogeneous sample of college undergraduates who are convenient to obtain, while well replicated or foundational findings are later tested across cultures to determine degree of generalization. It seems an ok compromise to me.


A simple arithmetic computation shows that Europe has 750 million people and the world has 8 billion people. That's about 1/10 of the population. We neglect about 90% of people in... anything. South America has 500 million people, so they should get a comparable share, and Africa has 1.2 billion.

So just by simple fractions we know something is off. A more careful study by subject and region could be required.


Psychiatry is in the same boat. Anyone with multiple ongoing issues (depression + anxiety, for instance) is disqualified from studies. Guess what: those are the people that repeatedly have the worst outcomes or don't respond to treatment. We don't even know the real effect of smoking in people with ADHD, because if they have anxiety they won't be included in the study.


There could have been any number of blue triangles before that orange square: we see two, but what about the rest, out of frame?


[flagged]


This is spam.


75% of statistics are made up


Psychology, social studies, anthropology (and economics, and what's called climate science) really shouldn't be called sciences. It's one of the rare cases, IMO, where the word really obstructs the core meaning of the subject matter.

They should be considered temporary interpretations of statistical data, or in short meta-statistics, because that's really what they are.

Much damage is being done by treating these fields as science, and the article only mentions a few of those problems.


> They should be considered temporary interpretations of statistical data, or in short meta-statistics, because that's really what they are.

This is what a lot of experimental sciences are, even physical ones, when the systems being studied are complex.

There are very few areas of scientific study anymore that offer convenient, deterministic results. That fruit was picked a long time ago.

Even at the cutting end of physics, researchers have to infer from statistical results.

The difference is only that some of these fields have more reproducible results than others, often because they are studying less complex phenomena, whose causal factors and mechanisms can be more directly observed.

Psychology is at one end of that spectrum, because it is studying the output of the mind, a biological information system whose mechanisms are among the most complex and obscure that people have ever studied.


I understand that, and I agree with how you think about it, but the problem is that it creates a very unhealthy public debate, because one end of the spectrum gets confused with the other.


I think of it on a per-hypothesis or per-theory basis: If a hypothesis or theory is falsifiable, then it is scientific. If not, then it's not.


Why was this flagged?



