Social Psychology Is A Flamethrower (slatestarcodex.com)
147 points by gwern on June 22, 2014 | 54 comments



So a friend of mine just shared a survey on Facebook from a social sciences student. The survey concerns stress levels in kids between the ages of 13 and 16, and is targeted at them. The data is to be used as the basis for a bachelor's thesis.

The first thing I learned is that my friend actually took part in the survey by "putting herself in the mindframe of a 13-year-old and filling it in as the kid would". After a short discussion about how doing so turns the whole thesis into a pile of nonsense, she (herself a graduate of a liberal arts college) told me that the survey author was actually one of the better students, in that she actually ran a survey - most people fill in their surveys themselves or just make the data up.

So things like these give me trust issues with all the "soft sciences". I consider most of sociology nonsense, because I seriously doubt that people who make up data for their thesis will suddenly turn into honest researchers after graduation. Yes, the "best research in social psychology" might be "as well-supported as anything in physics or biology", but I doubt that's more than a small part of the research done, and it's mostly research from first principles (where there isn't much chance that the research you're basing your paper on is crap).


Answering a survey by pretending to be someone else is clearly unacceptable, and fraud on the student's part if they encouraged others to do it. Where I study, I'm pretty sure that would lead to the student failing the class, or worse, if it were discovered. The same is (even more) true for faking data; that's a disgusting violation of all scientific standards. For any somewhat larger project that might yield actually useful data, our teachers are usually involved closely enough that they would know if something like that were going on on a massive scale.

It is also standard practice to disguise these kinds of filter questions in surveys and to leave possible respondents in the dark about the ultimate purpose of the survey (insofar as that is possible and it isn't ethically necessary to be transparent about certain aspects). That way, respondents outside the filter criteria hopefully couldn't participate even if they wanted to fake their way in. The way I do it, no one besides my advisor and a limited pool of pre-testers (whom I make sure to tell not to participate in the main survey - and whom I also try to keep away from the survey link) knows what the survey is about or what the filter criteria are.

That’s students. Most of them won’t go on to do any actual worthwhile research that will ever be cited by anyone. I would hope that the standards are even higher for research that’s published in journals.

Social research is tough even if everyone behaves ethically. No doubt about that. But I have never encountered any culture of fraudulent behaviour outside of lazy students who will never publish anything anyone cares for.


I tend to agree that fabricating data is not so much of an issue; after all, filling in a survey does not require much effort or equipment. Cherry-picking the data and bias are the main issues with some studies.


Undergraduate students doing class assignments (or even graduate students or notionally professional researchers doing published or corporate research) using data invented to suit the desired result, rather than actually following their documented methodology, is by no means limited to the social sciences, so I'm not sure how that is a source of particular distrust in those sciences.


It's not limited to the social sciences, and I experienced this first-hand with STEM students. But the much higher scientific rigor, the easier verification of results, the greater separation from politics (you can't easily back your pro/anti-something beliefs with particle accelerator experiments), and the inability to just make things up as you go make my distrust of the hard fields much, much smaller than my distrust of the social sciences.


While this may be largely true at the level of individual experimental results, STEM is plagued by equally significant issues around funding, i.e., the selection of which experiments get performed. Consider the politics required to fund the LHC and the necessity of there being a "God Particle" to discover. Or the reasoning applied to research in astrophysics, where at the end of a chain of tortuous logic there always has to be something like "this could help us understand whether we are alone in the Universe". Or the quota of experiments needed to justify the budget for the Space Shuttle. I am not objecting to this research per se, rather highlighting that in STEM research funding, at least, the Emperor has no clothes. As you say, experiments are largely sound because they are not easily faked, yet the distribution of research actually performed is driven by political expediency more than it should be. Good researchers are those who understand and work within the distortions inherent to the funding landscape; a form of self-censorship.


> Or the reasoning applied to research in astrophysics, where at the end of a chain of tortuous logic there always should be something like "this could help us understand whether we are alone in the Universe"

I went to the NSF grant database and searched for "astrophysics":

http://www.nsf.gov/awardsearch/simpleSearchResult?queryText=...

> Result 1. Deliverable: a terabyte-scale real-time data exchange and correlation platform. Ultimate purpose: expedite the collection and analysis of data sourced from multiple observatories.

> Result 2. Deliverable: "the coordinated observation of nearby supernovae with optical and near-infrared spectrographs on 4m and 8m class telescope..." Ultimate Purpose: The study of thermonuclear supernovae ... [which] have played a central role in the discovery of the acceleration of the expansion of the universe and have a key role in attempts to constrain the nature of dark energy.

> Result 3. Deliverable: "an underground accelerator laboratory" Ultimate Purpose: " address three long-standing fundamental problems in nuclear astrophysics: solar neutrino sources and the core metallicity of the sun; carbon-based nucleosynthesis; and neutron sources for the production of trans-iron elements in stars."

> ... that's all I care to summarize

Where is the tortuous logic and bullshit justification you keep going on about? It looks to me like the grants are going towards funding legitimate scientific inquiries into legitimate scientific questions.


I feel this article is relevant here: http://en.wikipedia.org/wiki/Haldane_principle


There's no lack of ability to make it up as you go in the hard sciences, and there's no lack of motivation either; even when it's not political, it's often financial.


An undergraduate's non-publishable homework assignment doesn't undermine a field's credibility. Plenty of physics students, myself included, bluffed lab data when we fried our experimental apparatus.


Actually, this also undermines the credibility of the institution that you attended. You can say "that's a small sample size," but it's indicative of larger problems at the institution that the honor code was not so internalized that bluffing the data would result in social ostracism.

The 'hard' sciences in the US are developing as much of a credibility problem as its 'soft' sciences. Institutional science may not be working out for us as well as has been hoped. It's a recent establishment and may not survive without a reversal in the decay of standards.


> Actually, this also undermines the credibility of the institution that you attended. You can say "that's a small sample size," but it's indicative of larger problems at the institution that the honor code was not so internalized that bluffing the data would result in social ostracism.

You have fallen victim to the Typical Mind fallacy. Most undergraduate students are there for the piece of paper that will make it easier to get a good job. Most people are not like you.

More than half of the graduate business students surveyed recently admitted to cheating at least once during the last academic year, according to a report released on Monday. The report, "Academic Dishonesty in Graduate Business Programs: Prevalence, Causes, and Proposed Action," is based on survey responses from 5,331 students at 32 graduate schools in the United States and Canada, and is scheduled for publication this month in Academy of Management Learning & Education. The survey found that 56 percent of graduate business students -- most of whom are pursuing M.B.A.'s -- had cheated, compared with 47 percent of graduate students in nonbusiness programs.

Please note that almost half of the students in nonbusiness grad programs cheated. It is going to be way, way higher in undergrad.


>The 'hard' sciences in the US are developing as much of a credibility problem as its 'soft' sciences. Institutional science may not be working out for us as well as has been hoped. It's a recent establishment and may not survive without a reversal in the decay of standards.

Key words: in the US. You can't generalize from one institution in one country to all institutions on the planet.


I have two comments with respect to what you wrote:

FWIW, I'm an undergrad currently writing his thesis (empirical economics). I use the German Socio-Economic Panel data set, which is a representative survey of ~11,000 German households (totaling about 79,000 individuals over 28 years). The reason I picked an existing data set instead of collecting my own data is precisely to avoid the situation of the social sciences student you describe. As an undergrad, getting data is extremely difficult. You usually get about 6 weeks to write your thesis and you have absolutely no training in data collection and/or survey design. Getting data from students is your best bet, but nobody wants to answer your questionnaire because…hold your breath…everyone is sending out questionnaires. I get spammed with these things. Luckily, nobody has asked me to pretend that I am a 13 year old. So quite frankly, most students panic after about 2-3 weeks and will try anything to get out of the situation. The alternative is not graduating and paying tuition for another year.

Secondly: an undergrad thesis ≠ a scientific paper. The fraction of undergrads who end up being social scientists is…tiny. Out of the 300-400 econ students who graduate at my uni every year, I think 20 or fewer will go on to grad school (I'm proxying the number by thinking about how many went to do an MPhil). Now, econ grad school is a different beast. Essentially you either end up doing experimental research or you work with observational data (assuming you are not a theoretician; those exist too, and economics is pretty much applied mathematics at that level). Experiments can be checked, and working with observational data requires a trove of econometrics. Trying to fool a panel of econometricians isn't easy, especially when these days you have to send your data in to the journal (or so my supervisor says; he has a paper forthcoming in the Quarterly Journal of Economics).

Social science in general is becoming more and more computational and statistically rigorous. Gone are the days when an "expert" would find a correlation between two variables and then speculate which way the causality flows…if any.


So you're concluding, on a sample size of one or two undergrads, that a whole scientific field is "nonsense"?


No, they are trusting the statements of another graduate student who had a much larger sample size.


Also, by "things like these" I implicitly meant that I've heard many stories like that, including long discussions here on HN about invalid uses of statistical methods. I just wanted to provide a single example situation that coincidentally happened just a few minutes before I found this submission.


That is pretty shoddy, but bear in mind this is an undergrad thesis, and so (apart from teaching very bad ethics to future researchers) this has no bearing on research that will ultimately be published.

As an aside, I first misread your post and thought you were describing the process of taking your own survey while putting yourself in the mindframe of the person filling it out, in order to imagine what goes through someone's head when they fill it in. That seems like an extremely good idea.


While that is of course fraudulent and would have meant failing the thesis where I studied (education, the softest of soft sciences ;)), a bachelor's thesis has absolutely nothing to do with the scientific field - a bachelor's degree isn't even enough to get a job in the real world, after all.

Science starts at dissertations.


That's one of the most stupid comments I've seen you make, in what, three years on this site.

Thankfully, I don't judge you entirely on it ;)


Thanks for your honest feedback :).

Could you please elaborate a little bit on what you consider stupid in it? I'm always eager to learn :).


I think you're speaking from a position of ignorance, and discounting an entire field based on your biases, with confirmation in the form of an anecdote about a grad student (or undergrad? - it's not clear).

Of course my comment is partly tongue in cheek, given that its form necessarily shadows your own statement for my desired level of irony.

My own biases come in part from my GF. She's an economist - economics being a social science - and has opened my eyes in many respects. Most of all, it's extremely hard to get good data in the social sciences. She works in healthcare policy evaluation, but the people collecting the data are not trained in it - they're filling out forms, themselves often designed by people not trained in it either. And of course she runs up against the usual "you have to do it for the children" people, who dismiss any kind of rational analysis of the costs and benefits of different healthcare interventions.

Anyway, I'm a bit sensitive about this, because I see a lot of arrogant ignorance, especially in IT, with guys (usually guys in our field) dismissing whole fields that very clever men and women have dedicated entire lives to, without bothering to learn anything about what they're dismissing.


I saw this analogy in some book: imagine a shoemaker who has no access to good materials or good tools. The result, of course, is that his shoes are terrible. And while you can understand all of the reasons why the shoes are terrible, they are still terrible. Good data is hard to get. Yup. Or do you think the data on the Higgs boson was easy to get?


That's a different kind of hard to get.

Bad data is easy to get.


Not OP, but your story isn't even really anecdotal; it verges on a friend-of-a-friend story.

Plus, rather than saying "undergraduate thesis", say "assignment", because that's what it really is - not at all related to peer-reviewed papers or a doctorate.


> Most people are not consequentialists, but most people feel implicitly uncomfortable making moral arguments on non-consequentialist grounds. “Stop what you’re doing, it disgusts and offends me” is less noble than “stop what you’re doing, it will hurt people who can’t stand up for themselves”.

I hear "arguments" like this all the time against data-mining public information, crowdsourcing it, and making it open on our site (and through an API) via our feedback, and they make me chuckle, seeing how the arguers have to engage and co-opt a multitude of psychological/behavioral theories not drafted by themselves to say such things. Even funnier is when the thing they are protesting has been done in "private" for a very long time. It reminds me of how the whole Snowden thing plays out on HN and in the general culture at large, despite the mumblings of such since the very beginning… quite amusing, to say the least…


> quite amusing to say the least…

Ah, yes, quite droll that people who don't make money from the thing I make money from have different moral opinions on it. Very amusing indeed. Because, of course, they're obviously wrong.

And they use other people's words when arguing their point? I'd point out that this is just how ideas pass through society, but I'll settle for laughing at the fact that you seem unaware you're doing exactly that in your post.


If only we could all conflate stating observed interactions with arguing a point, like you just did in your post, we could all laugh at ourselves more. :D

I guess those who think they are morally right can just give themselves a big round of applause while they continue to engage in behaviors that run against those moral beliefs, whether they are willing to admit it to themselves or not, and I'll just keep exploring the infinite. :D


| The best research in social psychology is as well-supported as anything in physics or biology

Nope.


I am not sure why you are being downvoted, as the author himself first says:

The best research in social psychology is as well-supported as anything in physics or biology, and much more intuitively comprehensible.

And then he follows it up with:

Social psychology experiments in the laboratory tend to throw up spectacular mind-boggling effects. Many of these fail to replicate and are later discredited. The ones that do replicate are not always generalizable – sometimes an even slightly different situation will remove the effect or create exactly the opposite effect. The effects that remain robust in the laboratory may be too short-lasting or too specific to have any importance in real life. And the ones that do matter in real life may respond unpredictably or even paradoxically to attempts to control them.


He was downvoted due to lack of evidence and his casual dismissal of a point in the article. This isn't reddit; as easy as it is to see an unsubstantiated point in the OP and reply with an equally baseless retort, it doesn't help any of us learn more. Your comment, on the other hand, does point out apparent contradictions in the OP, and is thus useful.


Former social scientist here. Can confirm.


On HN yes, but if this were reddit you might negate.


The issue with psychology is that double blind experiments are prone to observer effect.

The behaviour often has nothing to do with reality - when measured, everything tends towards the ideal.

And experiments can often produce the results you want if the double blinds are removed.

I always fall back on the survey sketch from "Yes Minister" to explain how that works:

https://www.youtube.com/watch?v=G0ZZJXw4MTA#t=30s


> The issue with psychology is that double blind experiments are prone to observer effect.

You mean, 'non double blind' experiments?


>Social psychology experiments in the laboratory tend to throw up spectacular mind-boggling effects. Many of these fail to replicate and are later discredited. The ones that do replicate are not always generalizable – sometimes an even slightly different situation will remove the effect or create exactly the opposite effect.

Generator-based tests (as in Haskell's QuickCheck) could help here. Standard unit tests run assertions with hardcoded input values such as [0, 1, -1]. Generator-based tests instead try to show that properties hold over randomly generated inputs, by replacing 1 with a generator of positive integers, -1 with a generator of negative integers, etc.

Imagine if social psychologists ran generator-based studies as a hedge against unknown unknowns. The biggest challenge would be generating and executing valid permutations of the original experiment. Perhaps ad-network A/B testing could be repurposed to run experiments as ads with randomly generated values.
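
For the unfamiliar, here is a minimal sketch of the style using Python's hypothesis library (the property and names are just illustrative):

    from hypothesis import given, strategies as st

    # Hardcoded unit test: a single input, chosen by hand.
    def test_abs_hardcoded():
        assert abs(-1) == 1

    # Generator-based test: the framework draws many random negative
    # integers and checks that the property holds for all of them.
    @given(st.integers(max_value=-1))
    def test_abs_negates_negatives(x):
        assert abs(x) == -x

The analogue for a study would be sampling over the experimental design itself - stimuli, wording, context - rather than fixing those by hand.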


The biggest challenge would be to get funding to run many slight variations of the same experiment without clear indications that this will produce many publishable results. Getting data from humans costs many orders of magnitude more than getting data from your code.


I went to a guest lecture by Michael Macy on network autocorrelation. It's not fair for me to put words in his mouth (so look him up if you want), but I will summarize.

There is a lot of research done with surveys, and most of these surveys have an inherent assumption: that each person lives on an island and the responses are not autocorrelated.

So he created a synthetic network of people with random, fake political views, etc., and added network autocorrelation. The random views clustered with demographics and came out as highly statistically significant - but the views were random.
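
A toy version of that demonstration is easy to write down. This is my own sketch of the idea in Python, not Macy's actual model, and all the numbers are made up:

    import random
    from statistics import mean

    random.seed(0)
    N = 400

    # Each person has a fixed demographic bit; views start out random.
    demo = [random.randint(0, 1) for _ in range(N)]
    view = [random.randint(0, 1) for _ in range(N)]

    # Homophilous network: most ties stay within the same demographic.
    def pick_neighbours(i, k=8, p_same=0.9):
        same = [j for j in range(N) if j != i and demo[j] == demo[i]]
        diff = [j for j in range(N) if j != i and demo[j] != demo[i]]
        return [random.choice(same if random.random() < p_same else diff)
                for _ in range(k)]

    nbrs = [pick_neighbours(i) for i in range(N)]

    # Network autocorrelation: each round, everyone copies the view of a
    # random neighbour (synchronous update).
    for _ in range(20):
        view = [view[random.choice(nbrs[i])] for i in range(N)]

    # With independent respondents, both rates would sit near 0.5. The
    # diffusion typically pulls them apart, so the (originally random)
    # views now "cluster with demographics" and would look significant
    # to a naive test that assumes independence.
    for d in (0, 1):
        rate = mean(view[i] for i in range(N) if demo[i] == d)
        print(f"P(view=1 | demo={d}) = {rate:.2f}")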

In a similar experiment, a music-sharing website built to collect data on its users separated them into "worlds". Users could see which songs were downloaded by other users within their world, and each world developed its own popular songs despite being made up of random people.

The problem with networks (of people) is that even a few missing nodes can make analyzing the network almost useless.


>If I were a demon from Hell, charged by my infernal masters with increasing rape as much as possible, I literally could not think of a better strategy than talking about rape culture all the time.

I don't understand the proposed alternative. There is a widespread disconnect between the ethical view of consent ("enthusiastic yes") and the equally common disregard of consent ("well, she only said no the first time but then got real quiet...seemed OK"). Families and friend circles often fail to support rape victims or confront rapists. We're facing a structural cultural problem, and the solution is to...not talk about it?


One alternative (which appears to work with problem drinking) is to tell people what other people like them are actually doing. All the newspapers and TV programmes talking about the perils of binge drinking, how everyone is drinking too much, and how it's harmful to them and the nation just make people think that excessive drinking is what people do. But if you tell university students that people their age tend to drink about 8 units a week (or whatever the actual real average is), they realise that they drink a lot more than that and start to cut down.

Telling men that "most men think drunk consent is not consent" would be in line with this, and is, I think, what the article is talking about.


I remember those binge drinking proclamations in college and reading a few years later that they basically didn't work at all. People construct their social norms based on the behavior of those they observe around them, and particularly the people they care most about: their friends and their friends' friends, not some vague model citizen they were told about in some quasi-propagandist fashion. And if I recall the article correctly, the average they quoted wasn't even real, so in effect it was propaganda.


First, the article doesn't propose a solution. It only suggests that one approach is flawed.

Second, you can explain proper consent without lecturing on issues like lax enforcement and prevalence of rape.


Are you seriously suggesting that we should continue to talk about it even if we know talking about it increases the number of women raped? To make you feel better?

I've never understood this attitude:

"What you are doing is costly and has been shown to be useless/counterproductive."

"But we have to do SOMETHING."

Besides, the author was not saying we shouldn't talk about the problem, just that this one particular way of talking about it worsens the problem.


As someone who has written his thesis in social psychology, I have to say that social psychology is fundamentally broken, kaput. If a worker fixed the pipes in your apartment the same way, your toilet and kitchen would explode. But because we call social psychology a science, we somehow accept it and hope for progress.

Issues I have:

1.) What would the practical consequences be if social psychology and its accumulated knowledge were erased? Not many that I can think of. Considering that the field has immense potential applications, this is pretty telling.

2.) What have been the most important research breakthroughs of the last 20-30 years? There are new fads, for sure (behavioral economics, neuroeconomics, and evolutionary psych, I'm looking at you), but they all coexist peacefully. No falsification is happening; new profs just need new theories for publication.

3.) There is NO theoretical rigor. With this many theories, you would expect a lot of theoretical work clarifying and contrasting them to enable empirical tests of their validity. Not here. The more theories the merrier. Why? Every researcher needs his own theory, or an uncrowded field, so he doesn't rock the boat. That has led to a situation where one experiment is run by two grad students, and a significant result in one direction is interpreted by one theory and in the other direction by another. Basically, you give a sample of students a questionnaire; if it goes one way, it proves that XY gave an evolutionary advantage to prehistoric people, and if it goes the other way, it proves that the utility curves crinkle on the right side. Either way, the researcher has something to publish. I do think the two-theories-per-experiment limitation is rather arbitrary and inefficient, though. There is definite room for improvement here.

4.) Much recent work tries to address this by going theory-free. All these different "effects" are the result. This is also practical, because there is always room for more effects to research, and if you happen to need one to explain a particular fact, there are always a ton to choose from.

5.) Cargo-cult use of statistics. You could argue that most social science researchers are not able to understand the whys and hows of statistics and decision theory, so giving them recipes to follow is better than the alternative. Might be. It just does not work. It really does not. Also, the researcher's imperative is not to generate knowledge but to produce publishable significant results (see the sketch below).

Basically, I do think we have to start over and scrap the work done so far. And no, I do not have a better alternative at hand; one isn't necessary to see that the current process is broken, produces nothing useful, and ties up intellectual potential, though.
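
On point 5, the recipe problem is easy to demonstrate. Here is a small sketch of my own (with made-up numbers, in Python) showing how null experiments plus a p < 0.05 recipe produce "publishable" findings:

    import random
    from scipy.stats import ttest_ind

    random.seed(1)

    # Twenty "studies", each comparing two groups drawn from the SAME
    # distribution, so every true effect is exactly zero.
    hits = 0
    for study in range(20):
        control = [random.gauss(0, 1) for _ in range(30)]
        treated = [random.gauss(0, 1) for _ in range(30)]
        _, p = ttest_ind(control, treated)
        if p < 0.05:  # the recipe's significance threshold
            hits += 1
            print(f"study {study}: p = {p:.3f}  <- 'publishable'")
    print(f"{hits} spurious positives out of 20 null experiments")

Roughly one in twenty null comparisons comes up "significant" by chance alone, and a field that publishes only significant results keeps every one of them.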


> No falsification is happening, just new profs need new theories for publication.

Much as in macroeconomics, the problem with falsifying our hypotheses in this field is that you can't just build a culture in the laboratory. Even worse, there's no such thing as a control group: everyone is observing everyone else and trying to self-modify to copy the good ideas of others, all the time.


> and studies on child porn show pedophilia is less common where it’s more accessible.

There's no link to the research.

Does anyone have any links? I'd be interested to know if this is just more under-reporting of crime in countries that don't do much about that crime.


I don't know all the studies Yvain might have in mind, but at least one of them is "Pornography and sex crimes in the Czech Republic" http://www.researchgate.net/publication/49644341_Pornography... - one of a few papers exploiting various lapses in nations' child porn laws and observing not an increase in child sex crimes but rather decreases. You can probably find more studies by looking at related papers (http://scholar.google.com/scholar?cites=14325274059754433423...) and reverse citations (http://scholar.google.com/scholar?q=related:j-vH2duYzcYJ:sch...) of that paper.

(This is broadly similar to other correlations you may've heard about with more regular porn: porn seems to substitute for sex crimes, and not increase crimes like rape.)


> They found violent movies decreased crime 5% or more on their opening weekends, and that each violent movie that comes out probably prevents about 1000 assaults. Further, there’s no displacement effect – the missing crimes don’t pop back the following week, they simply never occur.

This is a very naive take on the idea that violent movies increase violence in the population - the author acts a little surprised that the results show crime going down slightly on opening weekend and staying at normal levels in the weeks before and after. It makes no long-term analysis of the baseline: whether a culture in which violent movies are so prominent affects the level of violence in the population. It just shows that violent people like to go to violent movies, not that baseline violence is increased or decreased by those movies.

Reading it over again, the author is just guilty of bad journalism. The study he's quoting for this section is "Does opening weekend of violent films have an effect on violence rates?", which is a very specific question, but it's painted by the author as the incredibly broad "Violence In The Media Prevents Violent Crime". The irony is that it's the very first example given after a screed about how sloppy the field is.


That's the whole point of the article. He's showing that research can easily be twisted to support one conclusion or the other, and that the standards for what conclusions are acceptable to draw from social sciences research are far too low. It is not that the stuff listed there is in any way the "absolute truth".


Except that he does imply in the conclusion of the article that they are reasonable conclusions to draw. It's not just "twist the research to meet a predetermined conclusion"; the author is actually saying that these are reasonable and plausible arguments. I'm saying that the first example isn't plausible, because the quoted study is not representative of the argument being made. It's more a comment on spin-doctoring than on the quality of social psych research.


The author prefaces the six arguments by saying that "I think some of the arguments below will be completely correct, others correct only in certain senses and situations, and still others intriguing but wrong."


Yep, and in the conclusion the author is stating that because he's found a single study that can be heavily misrepresented to make his point, the entire position of 'the other side' is therefore of equally poor quality. It's both bad science and bad journalism. It's like climate change deniers pointing to the 3% of naysaying scientists and therefore claiming parity in the debate.

Look, I'm not saying that social psych research couldn't do with improvement in quality, but that someone who's pointing fingers should make sure their own stuff is tight.


Do you think the evidence in favor of violent movies causing violence is significantly stronger? Comparing to the climate science debate implies that you think there is a huge disparity in evidence. If so, it should be easy to present some.

Personally, I'd think it would be hard to directly test the proposition "the existence of violent movies at all increases the baseline level of violence in society". You can't run a randomized controlled trial, and there are no good natural experiments. Trying to measure both variables across a range of communities would be hopelessly polluted by confounders. Therefore, I'd expect neither side to have a very strong case, which is exactly the point of the article.


> Comparing to the climate science debate implies that you think there is a huge disparity in evidence.

No, my comparison to the climate science debate was more about taking a selected study and claiming it has parity with the collection of whatever evidence is on 'the other side'. This is what the author is doing in the conclusion: suggesting that because he's found one or two studies that go against the grain, the side those studies oppose is equally poorly researched.

Yes, it is difficult to measure; I absolutely agree. But the author could still have made his point without resorting to spin-doctoring. He is suggesting that the line of reasoning is valid and plausible - and it's not, because he's heavily misrepresenting the data in that first case. If that was his point - that some studies heavily misrepresent the data like that - then he should have exposed such studies instead of repeating the process himself as a counterpoint.

In any case, this is a line from his conclusion: "the laboratory experiments [showing] that experimental exposure to violence causes people to play contrived games in a more aggressive manner couldn't catch that in the real world, violent movies decrease crime". He's taken this blip from blockbuster opening weekends as gospel in his rationale: "because violent people like to go see a movie with everyone else, and they don't do more crimes later on to 'catch up', violent movies do not increase violent behaviour". It's a really naive view of the argument around violent movies and violence in the public - that it only has short-term effects, measured in weeks - yet he's using it in his conclusion (not just in his 'case studies').

By "couldn’t catch that in the real world, violent movies decrease crime", the author has clearly taken that faulty interpretation of the study as an overarching truth - even after saying that both sides of the story are 'plausible and intuitive', he's still declared that 'what happens in the real world' is based on the misrepresentation of his study. I don't think the spindoctor-esque misrepresentation was a conscious decision on the author's part, else it would have been highlighted more clearly. I think it was just a mistake.



