Honestly, I'm a bit disappointed by the level of discourse this post has generated. I'm also an outspoken skeptic of findings in the field of psychology, but the article is quite well formed and well argued. If anything, accepting the nitpicking from a lot of these comments would result in psychology being _less_ rigorous and precise. A lot of psychology's problems come from how readily accessible it seems to be to everyday experience. We're only 100 years from Lewis Terman and H. H. Goddard thinking they could measure intelligence by having people circle a face judged to be "more attractive"; it's a young science, and improving the rigor of how things are expressed is essential to advancement. A lot of these terms are not only being used incorrectly, which is why this paper was even published, but also carry colloquial and historic baggage that doesn't reflect academic understandings, or even philosophical understandings of the epistemological concept of science. Words have definitions, and if two people don't share the same definition, then communication breaks down, research is misunderstood, and the field is worse off. This is for people operating within the field to consolidate knowledge; I don't know why people insist that lay understandings of language and a field of study need to be reflected in its terms of art.
To put it in terms that engineers are more likely to appreciate, a lot of these terms are like "man-hours". Man-hours is obviously a useless term because it 1) implies that increasing workers scales production linearly, 2) implies each individual produces at the same rate, 3) inherits from a factory mode of production that engineers typically don't believe fits their situation, and 4) generally results in poor estimates of cost and delivery times. Obviously, if you're trying to be productive as an engineer, having managers who think only in man-hours invites ambiguity and worse working conditions. The same goes for things like the lay understanding of attention vs. its specific meaning as a mechanism in deep learning, or AI more broadly vs. specific neural network techniques.
As a psychiatrist, I'm just going to dip my toe into this conversation:
I have not read the entire list. I read a good chunk of it. Almost everything on it is -not- stuff I see among my colleagues, and appears to be how psych stuff is presented /in popular media/ rather than in the profession. Which is fine if one targets this at the science media/science communication types, but it seems very odd to apparently have a target audience of psych professionals. So of course there's a list of 'inaccurate and misleading' terms - you can't simplify something without losing /something/ of its complexity, but it still needs to be communicated.
The other thing to point out is that some of their criticisms are just ... well, outsiders commenting on things they don't know about. It's a team of academic psychologists. Psych PhDs generally don't do frontline stuff - they do targeted 1-on-1 therapy, or group therapy, but they tend to parachute in and out; they're not doing the days-long care-and-feeding of these patients, which is where some of the terminology they're criticizing comes from. For instance, they take issue with the use of splitting, but ... clearly don't work on the unit. Splitting isn't just about wanting to avoid thinking of your loved ones as flawed beings; it's about the emotional hyper-sensitivity of a BPD patient resulting in them condemning as awful anyone who is less than entirely positively oriented towards them, because they can't take any negative interaction (see paragraph 1 about "I'm oversimplifying shit to communicate to people outside my profession", so don't come at me to pick nits. I know it's simplified-into-wrongness.) The result is that when a unit staffer enforces boundaries, they're instantly terrible; as opposed to that "other" staffer who didn't have to enforce boundaries with them, who is wonderful, can I talk to them now please instead of you, you awful monster? And so the psychodynamic 'splitting' leads very quickly into 'pitting staff against each other'. It's not a misuse of the term, it's a specific manifestation of the term that you deal with on the unit. Which the authors would know, if they were frontline workers rather than consulting therapists.
And the section on pleonasms is just nitpicking for the sake of nitpicking.
This article doesn't do anything in the way of making psychology or psychiatry more precise. This article is a lot of nitpicking, in ways that look valid externally but would have anyone familiar with these things wondering why this was worth writing.
Ok, to be fair, I did roll my eyes at a few of their terms. However, you are mostly arguing out of scope/training.
Psychiatrists are not researchers, they are MDs.
They are generally pretty smart people, and I admire their work.
But MDs have next to zero formal training in research, methodology, or statistics, and even fewer have taken courses in the theory or philosophy of science or knowledge.
That's why MD/PhD programs and MPH degrees exist, so MDs can learn how to do research properly.
Yeah, I don't disagree that the article has a nitpicky streak. However, since it's in the service of a higher standard in publishing, I don't think it's useless, except in the sense that it dilutes the more useful recommendations.
Speaking from my own experience, a lot of the statistical and clichéd methodological phrases also appear in some lower-quality biology papers I've seen, and these are mistakes I see grad students, and even submitters, make. The fact that the authors can point to thousands of Google Scholar hits shows that the paper is addressing a real issue and was a worthwhile endeavor. If anything, this is what I feel more commenters should have criticized: not that it appears in the paper, but that the state of the field is such that familiarity with statistics (and, I'd argue, math, history, philosophy, etc.) is so poor that such inaccurate language is so frequently deployed.
I can't speak to how this relates to practice, especially psychiatric practice. Quite frankly, I was less concerned with that than with the obsession over the sign/symptom distinction (which is both commonplace and useful) and the weird attempts to preserve phrases like "steep learning curve", when what's being criticized is actually the lack of specificity the phrase encodes.
> This article doesn't do anything in the way of making psychology or psychiatry more precise. This article is a lot of nitpicking, in ways that look valid externally but would have anyone familiar with these things wondering why this was worth writing.
Great summary. I think the authors' real audience is themselves and their real purpose is self-affirmation. The article became popular on HN because we (myself included) love to believe we know more than the common folk. A list of 50+ terms that purports to correct others is too tempting not to agree with, particularly when most of us don't have the expertise to know that redefining 50+ words is not actually useful for its claimed audience.
I was curious about this one and wondered if you could, based on your expertise, shed some light on that section and whether it is also off.
The human body is obviously a rather complex piece of machinery, but a chemical imbalance can put things in a bad spot (let's say too much vitamin D). I get that 'chemical imbalance' is a very simplistic model, but I always thought it describes some of our bodies' interactions with the world rather well (even if they are still not understood).
Would you be willing to elaborate on whether (or maybe how) chemical imbalance is used/described in psychiatry in your experience?
Yes, we can go deep into the weeds on how best to estimate a task... or we can ask "How many man-hours is this going to take, roughly?" and you give a number and then WE DO THE THING.
The argument about "man-hours" is bikeshedding.
Language is to be heard and understood. It carries both its technical meaning and its subjective meaning. If you're arguing, as the article argues, that words that are subjective have to be removed, then you are left without any language at all.
If one reads the article, one finds that the absolute majority of the arguments are plain empty claims with little supporting evidence. Citing a single paper, usually over a decade old, simply proves the opposite; what they're saying is "out of the thousands of studies done on X (especially in medicine), we only found one paper supporting our modern and liberal claim".
And your example of man-hours is utterly detached from reality. Only daft personnel think your points are axioms. Of course man-hours are a rough estimate! Of course 9 women don't make a baby in a month. Unless you became an engineer last night, I can only assume you are a liberal warrior regurgitating favorable pieces.
It feels like you didn't read or understand the article.
Technical fields (other than yours) also have rules for how to use technical terms.
Arguing for clarity is not a "liberal" point of view (it's actually rather prescriptive, which is associated with more "conservative" viewpoints, but whatever).
It is good to see awareness being raised of accidental philosophical positions, sometimes unwittingly assumed through word choice.
For example, I was starting to doubt whether anyone realizes that they make a leap whenever they imply physiology/biology is the cause of what happens in our consciousness (or indeed causes consciousness to happen), but entry 29 reassured me not all hope is lost (emphasis mine):
> Nevertheless, conceptualizing biological functioning as inherently more “fundamental” than (that is, causally prior to) psychological functioning, such as cognitive and emotional functioning, is misleading (Miller, 1996). The relation between biological variables and other variables is virtually always bidirectional. For example, although the magnitude of the P300 event-related potential tends to be diminished among individuals with antisocial personality disorder (ASPD) compared with other individuals (Costa et al., 2000), this finding does not necessarily mean that the P300 deficit precedes, let alone plays a causal role in, ASPD. It is at least equally plausible that the personality dispositions associated with ASPD, such as inattention, low motivation, and poor impulse control, contribute to smaller P300 magnitudes (Lilienfeld, 2014).
I believe the equal likelihood of such reverse causality and its implications are severely underexplored in modern medicine.
Similarly, I appreciated the warning against accidentally assuming mind-body dualism (entry 40) and a fundamental point about the natural sciences: that there is never definitive proof, only models limited to various degrees (entry 45).
This uncertainty stems from the lack of rigorous proof. Medicine these days is all about statistics and correlation. It's "we gave drug A to people with B and we observed effects C occurring D% of the time". In way too many cases there is no exact model or understanding of how things actually work, just inferences from observed effects.
That might still be preferable to coming up with theories explaining mechanisms we don't fully understand and then looking for data which might prove them. That becomes especially problematic when scientists stake their entire careers on it. We might end up with situations similar to the amyloid hypothesis, which likely turned out to be a dead end and a huge waste of time and resources.
This is a massive problem right across biology. When we talk about heritability, people assume we mean causal genetics for some trait. What we actually mean is all the genetic variation that is causal plus all the genetic variation that is correlated (by chance) with the trait. The latter are effectively instrumental variables that are predictive but not causal.
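A minimal sketch of that distinction (a hypothetical toy simulation, not from the comment above): a variant with no effect of its own, merely correlated with a causal variant, still predicts the trait.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Variant that actually affects the trait.
causal = rng.binomial(2, 0.3, n)
# Variant correlated with the causal one (copies it 80% of the time) but with no effect of its own.
linked = np.where(rng.random(n) < 0.8, causal, rng.binomial(2, 0.3, n))
# Only `causal` enters the trait.
trait = 0.5 * causal + rng.normal(0, 1, n)

print(np.corrcoef(causal, trait)[0, 1])  # ~0.31: genuinely causal
print(np.corrcoef(linked, trait)[0, 1])  # ~0.25: predictive, but not causal
```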
I think the article is pretty good (I have a strong disagreement with one of the 50 terms, but otherwise I think they did a good job).
I'm relatively new here at HN, but as a psychologist (who also dabbles in hacking), some of the reactions do, sadly, support the stereotype that computer/engineer types suffer from anosognosia (the lack of knowledge of what you do not know).
Some engineers do not understand that the world of human behavior is far more complex than the world of atoms, molecules, components, chips, software, etc.
Behavioral and social scientists are, you may be surprised to learn, aware of this fact, and actually lead in the scientific investigation of the difficulty, specifically: dealing with constructs, and figuring out how to define and measure them.
"Soft sciences" actually are much harder than "hard sciences" in many ways.
Some are working to make soft science more of an actual testable, isolatable, repeatable, predictable science.
To be in psychology requires you to accept the ideas and culture of the "soft sciences", and the system of universities and higher education, which largely aligns with political and financial interests. One cannot be against it and be a part of it. That is the grand filter that removes dissent.
These days, all a diagnosis requires is a few 30-minute meetings and the right words in accordance with the DSM-5. The patient will get a diagnosis that's wrong and be on their way with medications not proven effective. Ask me how I know. That's not science, it's a paycheck.
With no definitive right answer but still a clear outcome from choices made, yeah that's a moving target compared to the solid rock of most math and CS work.
Glad to see "chemical imbalance" make the list. It is very common to see people use terms like "dopamine hit", "endorphin rush", "low serotonin", etc. in ways that don't make scientific sense.
I assume people do it to sound knowledgeable or to make it sound like their ideas are backed by science, but neurotransmitters are vastly more complicated and subtle in their effects than is implied by these kinds of usages, and emotions and behaviors are tremendously more complex than the "my neurotransmitters made me do it/feel it" narrative would suggest.
Using inappropriately technical jargon (especially “dopamine”) is one of my pet peeves (which I just get more of the older I get).
Why, say, describe fond feelings or affection as “oxytocin”? Do you know anything about this neurotransmitter? Or are you just using it as a fancy synonym for love and affection? Is the vernacular English language—but I guess pop-neurotransmitters are vernacular now—too impoverished to talk about love and affection?
Same thing goes for “dopamine”. Did reading a factually wrong comment make you angry, or did you get a “dopamine hit” that motivated you to reply to it? Were you validated by the upvotes, or did you get a “dopamine hit”? Why describe pleasure, anger, comfort, etc. etc. as just “dopamine”?
It seems to me that fetishizing things by pinning pleasure on something ostensibly tangible like dopamine is just too irresistible. Now we can pretend that we actually know ourselves in the ancient Greek self by talking about our indulgent and sinful behavior as just “dopamine”.
For similar reasons I don’t like when people in meditation circles obsess over “the ego”. Every little mind-wandering and slip-up becomes the fault of the fetish that is “the ego”.
Edit: Of course this is how laypeople relate to neurotransmitters while the submission is about psychologists. So this is a side-topic.
> Every little mind-wandering and slip-up becomes the fault of the fetish that is “the ego”.
Ironically, you’ve here used “fetish” in a way that the article explicitly admonishes, opting instead for the vernacular definition. And I think that’s okay! Language is about communication, if when you say fetish I know what you mean, and when other people say dopamine I know what they mean, who cares what the diction police say?
Not really. First of all, see my edit, which was there many hours before your reply: This is a side-topic about laypeople misappropriating technical terms, while the submission is about professional (social) psychologists. I am not a psychologist, did not attempt to discuss psychology, hence this attempted gotcha is invalid.
Second of all, my attempt (butchered or not) at using this word has no relation to the field of psychology at all since it is inspired by Marx’s use, who probably got it from Hegel (predates professional psychology, does it not?). The vernacular meaning (apparently, according to MW) means to be very fond of something. But what I mean in this context has got nothing to do with being in love with some inanimate object (in a non-sexual way); it means to use some object (although in this case an abstract one) as something to fixate on in lieu of appreciating and understanding wholly non-tangible things that may only be fully understood as relations. In this case, instead of trying to comprehend the mind as more of a web of associations, of cause and effect, you/they invent the fetish of the very animated “ego” which is trying to sabotage your every effort to achieve mindfulness, perform better in your corporate job, or what have you.
While I agree that the term should be on the list (as it is a catch-all term unlikely to describe any given phenomenon), that doesn't mean "chemical imbalance" does not exist.
For example, dopamine-related drugs relatively consistently lower ADHD symptoms.
The given example, that serotonin-increasing drugs can alleviate depression, hardly makes the point.
One common (and false) belief is that "depression is caused by low serotonin". It is certainly way more complicated than that; otherwise, SSRIs would be as effective for depression as dopamine-increasing drugs are for ADHD.
When I teach abnormal psychology, I like to ask my students: "If Prozac is an SSRI, it increases serotonin receptor activity. However, LSD is a serotonin agonist, so it also increases serotonin receptor activity. Why, therefore, do Prozac and LSD not give the same experience?"
At the same time, we have a need to make sense of things, and having a rough, relatable framework can be part of the therapy. I can also see why it might be dangerous (e.g. "dopamine hit good, what foods/drugs will give me that?").
What becomes clearer by labeling things with neurotransmitters? Did you just not know your own habitual motivations? In that case some technical jargon won’t make you wiser or smarter.
For example, most of the value of a todo list is in the writing down and ticking off, and tiny todos (fill water bottle) give the same kick as big ones. Once you understand that, it motivates you to have todo lists, use them and craft them well, and build a chain of successes.
Having a mental model of why something works for me at least is a big motivator. The laws of thermodynamics don’t help you lose weight but they sure make it clear it is possible (or if your body absolutely cannot burn fat, you die).
I don't see anything wrong with the term "dopamine hit". It's the main neurotransmitter involved in the brain's reward system. Rewarding stimulus is exactly what is meant by popular use of the term.
I really don't understand attempts to problematize the use of this term. Sure, there are other neurotransmitters involved and dopamine is also present in other systems and even in the rest of the body. Nobody denies that.
Despite what we have been taught—and what is still commonly stated—dopamine is what motivates behavior (motivational salience), not what makes us feel good after the behavior. Plenty of citations on Wikipedia if you want to dig.
The problem is the dumbing down of complex mechanisms into sound bites. Both dopamine and rewards systems are a lot more complicated than they're presented and not very well understood. But you can buy self-help/popsci books that give you very incorrect simplistic pictures of how these things work.
Maybe the term "dopamine hit" is fine when it's just that, but people base their whole understanding of their brain on "dopamine does this, serotonin does that", etc.
Yes, I agree that it's dumbed down but not everything has to be a scientifically accurate discussion. I think it's a useful term to easily communicate complex concepts. Dopamine hit for any pleasurable addicting stimulus. Dopamine dripfeed for social media's endless stream of tailored content. The word dopamine is in there to associate the idea with addictive drugs and the way they take over the brain's reward center.
> The goal of this article is to promote clear thinking and clear writing among students and teachers of psychological science by curbing terminological misinformation and confusion.
> We also do not address problematic terms that are restricted primarily to popular (“pop”) psychology, such as “codependency,” “dysfunctional,” “toxic,” “inner child,” and “boundaries,” as our principal focus is on questionable terminology in the academic literature. Nevertheless, we touch on a handful of pop psychology terms (e.g., closure, splitting) that have migrated into at least some academic domains.
I’m not aware that “dopamine hit” has migrated into some academic domains as it is kind of a slang among laypeople with the meaning of indulging in an activity they like. So I think “dopamine hit” specifically is out of scope for this article. “Chemical imbalance” on the other hand is a problematic term that has been used historically to promote inaccurate—and thoroughly disproved—models of mental illness. I guess the term is—somewhat worryingly—still being used among academics in the literature, and that’s why it was granted a place on this list.
Another term to avoid IMHO: "jingle jangle fallacy". It's catchy, but both the word "jingle" and the word "jangle" have established meanings in English, neither of which has anything to do with what is being referred to here. To say nothing of the fact that the "jingle jangle fallacy" is not a fallacy; it's just a bad choice of terminology.
Much better words than "jingle jangle fallacy" are "ambiguous" (for one word that has multiple meanings) and "redundant" (for multiple words that have the same meaning) terminology.
(I find it supremely ironic that this needs to be pointed out in an article whose central thesis is that wise choice of terminology is important.)
This is the first time I've heard of the word "jangle" existing independently of the onomatopoeic phrase "jingle jangle"! That said, I definitely agree that this is an ironically poor choice of name due to the lack of relation between the name and the referent.
The authors seem to use the term themselves in this very article; they apparently don't see it as a problem.
> Psychology has long struggled with problems of terminology (Stanovich, 2012). For example, numerous scholars have warned of the jingle and jangle fallacies, the former being the error of referring to different constructs by the same name and the latter the error of referring to the same construct by different names (Kelley, 1927; Block, 1995; Markon, 2009).
Looks like psychologists came up with something as complicated as the human mind. I found this very strange:
Symptom: (under Oxymorons)
(41) Observable symptom. This term, which appears in nearly 700 manuscripts according to Google Scholar, conflates signs with symptoms. Signs are observable features of a disorder; symptoms are unobservable features of a disorder that can only be reported by patients (Lilienfeld et al., 2013; Kraft and Keeley, 2015). Symptoms are by definition unobservable.
I am surprised that English-language medicine seems to differentiate observability here, since I have never heard it expressed in that way. It seems to make sense to differentiate, though; psychologists probably know best why this data may need a different evaluation.
It is standard terminology, but it is recent and exists only in English.
It has nothing to do with the original meanings of the words "symptom" and "sign" in English and other languages (where "symptom" is any sign of a disease, while "sign" is any sign of anything).
The standard medical terminology in other languages is "subjective symptoms" and "objective symptoms".
In my opinion, whoever has chosen these artificial and arbitrary meanings for the words "symptom" and "sign", both words being widespread in many European languages, where they are used with their old meanings, has made a very bad choice. Someone who wants new words should invent them, not change the traditional meanings of other words.
I wonder if this "sign" and "symptom" terminology is common to both British English and American English, or whether it is used only in American English.
At least one NHS content editor uses "symptom" to mean objective symptoms [1]:
> Symptoms of coronavirus (COVID-19) in adults can include:
> - a high temperature or shivering (chills) – a high temperature means you feel hot to touch on your chest or back (you do not need to measure your temperature)
As do, slightly implicitly, all four of the UK chief medical officers [2]:
> The individual’s households should also self-isolate for 14 days as per the current guidelines and the individual should stay at home for 7 days, or longer if they still have symptoms other than cough or loss of sense of smell.
True, but both of those examples were written for the general public, where the authors might intentionally be using “symptom” in its colloquial sense to be better understood. So it’s hard to draw conclusions about how the same authors would use “symptom” when writing for a medical audience.
> The standard medical terminology in other languages is "subjective symptoms" and "objective symptoms".
Brazilian here. We use the term sinais e sintomas, meaning signs and symptoms. Doctors learn these signs in medical semiology classes and through practice in hospitals.
I believe that these special meanings for "signs" and "symptoms", which deviate from their prior meanings in the English language, must have been used for the first time in some influential medical handbook from some important US Medical School, like the Harvard Medical School.
Then they must have spread in the USA, and then to some other countries, like Brazil and Norway, which have been mentioned here.
At least France and Germany seem to be among the countries that have kept the traditional terms "objective symptoms" and "subjective symptoms".
Another poster said that "language changes" and this must be accepted. While there are many people who believe this, I do not agree with it. When language changes in order to speak about new things, that is obviously fine. When language changes because someone thinks that the traditional words are not cool any more and they make arbitrary changes in the meanings of the words, that, in my opinion, is lazy and stupid. Every such change in the meaning of the words makes more difficult both the communication with people from distant places, where the meaning changes have not arrived yet, and the understanding of the older writings accumulated during the last centuries, many of which are much more valuable than it is expected by those who read only what was published last year, thinking that the older publications are obsolete. In fact too many recent research papers reinvent things that are decades old, because their authors have not read the old literature.
This change in terminology has been completely random, one could have equally well chosen to use "signs" instead of "subjective symptoms" and "symptoms" instead of "objective symptoms".
So there is no way for someone who has not studied the recent medical terminology to guess what medical doctors mean when they speak about "signs" and "symptoms".
If the author of this terminology had invented some new words, e.g. "obsyms" and "subsyms", then at least their use would not have caused confusion. Any listeners would have recognized them as unknown words and would have asked for their meaning.
In truth, "language change" (which is in fact language corruption) is often a necessary evil when a mass of "lazy and stupid" people desperately tries to learn something. The net outcome of all that is good, or has been so far.
But it's a different pair of shoes when supposed experts decide to change dictionary definitions on the fly, and in such a confusing manner. They should have known better.
I don't think many ordinary British English speakers draw the distinction. And yah, I guess they'd take "sign" to mean any sign of anything. So in vernacular talk, all medical signs AND symptoms get smooshed into "symptoms".
So that's fine. Medical experts can use the technical meanings in expert discourse, and the vernacular meanings in vernacular discourse.
In Norway, doctors will distinguish between (subjective) "symptomer" and (more objective) "tegn" (and "tegn" can be directly translated to "sign") whereas laymen will use "symptomer" for everything
That definition doesn't comport with my experience of British-English usage in the UK.
The definition you link is interesting, I'll quote it:
>symptom
>(SIMP-tum)
>A physical or mental problem that a person experiences that may indicate a disease or condition. Symptoms cannot be seen and do not show up on medical tests. Some examples of symptoms are headache, fatigue, nausea, and pain.
There seems to be an internal inconsistency in defining a symptom as physical if it's not objectively discernible; in what way is it physical? If you know it's a physical aspect of a patient, then isn't it objectively observable, and so not a 'symptom' under this narrow definition?
We're going to tie ourselves in knots here for sure.
I have always understood symptoms to be the patient's reported experience; and signs to be observable manifestations of a condition. So yes, a symptom is by definition not observable, and therefore "oxymoron" is correct.
The author is right from a medical perspective. There is a technical difference between observable signs and reported symptoms. Most people conflate both terms and there's nothing wrong with that but people in the field should understand the difference.
I'm confused about (47). Isn't "empirical data" based on observation or experiment? Is this a typo? And is non-empirical data defined as an observation that one cannot formally measure, e.g. "I love/hate it"?
(47) Empirical data. “Empirical” means based on observation or experience. As a consequence, with the possible exception of information derived from archival sources, all psychological data are empirical (what would “non-empirical” psychological data look like?). Some of the confusion probably stems from the erroneous equation of “empirical” with “experimental” or “quantitative.” Data derived from informal observations, such as non-quantified impressions collected during a psychotherapy session, are also empirical. If writers wish to distinguish numerical data from other sources of data, they should simply call them “quantified data.”
It's erroneous to equate "empirical" with "experimental" or "quantitative". Those terms don't have a relationship like 4 = 4.0. Your example in your second sentence is an example of equating empirical with quantitative.
Empirical makes more sense when you contrast empiricism with rationalism. It's a split between things you observe and things you think. If you're doing chemistry calculations, that's not empirical but it is quantitative. If you're pouring stuff into beakers and writing down what you see, it is empirical and quantitative.
Theory driven: "I think anxiety and depression are fundamentally different, because of the writings of Freud, Foo, and Bar."
Empirical: "Factor analysis of test data suggests anxiety and depression may actually be a single factor; perhaps there is a single biological mechanism underlying both?"
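To make the contrast concrete, here's a minimal sketch of the empirical route (the toy simulated data and item names are my own assumptions, not from the comment): if all the anxiety and depression items load on one common factor, the data themselves, rather than any theory, suggest a single underlying dimension.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 500

# One shared latent factor drives both sets of simulated questionnaire items.
g = rng.normal(size=n)
anxiety_items = np.column_stack([0.7 * g + rng.normal(0, 0.5, n) for _ in range(3)])
depression_items = np.column_stack([0.7 * g + rng.normal(0, 0.5, n) for _ in range(3)])
items = np.hstack([anxiety_items, depression_items])

fa = FactorAnalysis(n_components=2).fit(items)
# Nearly all loading sits on the first factor; the second is close to noise.
print(np.round(fa.components_, 2))
```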
The "steep learning curve" entry is bizarre. Is it so difficult to envision that it's a straightforward analogue to real life? Climbing a steep mountain (that is, a steep slope or curve), if you manage it (since a difficult traverse is going to turn away a lot of people, just like a steep learning curve), leaves you with a good view (or understanding of the field). It was never about an X=time, Y=distance mathematical curve.
Your disagreement with their assessment is exactly why they say it should not be used. There are two equally plausible interpretations, and as a technical term, theirs is correct; see https://en.wikipedia.org/wiki/Learning_curve
IMO, since they're not suggesting anything better they are not helping. It's like telling people "just don't do that if it's causing problems". Well, what's the alternative? We still have this concept we need to communicate effectively, of something that's hard but possible to learn if you spend a lot of energy on it, analogous to how you spend a lot of energy climbing a steep hill.
Also, have you ever met anyone who thought "steep learning curve" was ambiguous? This seems like a controversy invented by some really literal-minded people who are not representative of the general population.
> have you ever met anyone who thought "steep learning curve" was ambiguous?
Yep, that’s me. When I first encountered this expression it seemed very strange to me (not exactly ambiguous since I guessed the meaning). I envisioned a plot, with a steep curve: x is time, but what’s on y? You see, there are no mountains where I’ve grown up…
Eventually I came to think of it this way: x is the pleasure that you will be able to extract from the game, y is the amount you have to learn to extract that pleasure.
But now I do not think much when I hear the expression. “Steep learning curve?” - ok, I know what you mean. Changing the meaning to its opposite would cause me a great confusion, bigger than the first one.
Yeah, right now signal/noise for this statement is huge, because IMO at most 50% of people (likely much less, only the kind of people who are more used to graphs than mountains) who have a good grasp of English but have never heard this expression might get the wrong idea when they hear it for the first time. If we try to invert the meaning it will reduce signal/noise, because the vast majority of people who know the original meaning will now have less information about what you actually mean. We basically have two sensible choices: leave things as is or invent a new phrase which captures the idea even better than the current one.
> I envisioned a plot, with a steep curve: x is time, but what’s on y?
This question really got me thinking, because I've always envisioned a plot as well, but the phrase always made sense to me.
I realized that I think of x being "mastery" and y being "knowledge required". On such a plot, you'd see a steep curve if one needs a relatively large amount of knowledge to progress to the next level of mastery.
I've never thought of it differently, and so never realized that the phrase is ambiguous until reading the article. Upon reflection, it makes much more sense plotting time on x and mastery on y when talking about learning.
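For what it's worth, a minimal sketch of the two pictures people seem to carry around (hypothetical curves; matplotlib assumed): the technical plot, where a steep curve means rapid learning, versus the colloquial picture, where steepness means hard going.

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.01, 10, 200)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# Technical reading: proficiency vs. practice; a steep early curve = fast learning.
ax1.plot(t, 1 - np.exp(-t))
ax1.set(xlabel="practice (time)", ylabel="proficiency", title="steep = learns fast")

# Colloquial reading: effort vs. proficiency; a steep curve = hard to learn.
ax2.plot(t, np.exp(0.4 * t))
ax2.set(xlabel="proficiency", ylabel="effort required", title="steep = hard to learn")

plt.tight_layout()
plt.show()
```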
The fine article begins with this rather humble aim:
> The goal of this article is to promote clear thinking and clear writing among students and teachers of psychological science
How many published papers, or even undergraduate essays, contain such simplifications and misunderstandings?
I rather think the piece is aimed elsewhere: at the press, science journalists, politicians, mid-ranking deciders, mass media, and pundits whose language is awash with this stuff.
Some of these terms are indeed likely aimed at researchers, though I agree that journalists, politicians, and charlatans seem to love pulling from shoddy psychology and making it even worse with their poor understanding of the field. For example, the authors point to a number of papers that cite a p=0.0000 figure, which is clearly absurd; p-hacking and dubious misuse of statistical testing aside, it shows a clear lack of understanding of what the p-value even is and what it tells you, and this is obviously more relevant to researchers than reporters. I can also speak from personal experience regarding "comorbidity": for a while I knew a grad student who accidentally attached a completely new, incorrect definition to the term by assuming it meant "risk factor". It's certainly a blend of really subtle, interesting mistakes that I can easily see a researcher unfamiliar with the history and philosophy of science making, and some pretty common tropes that are probably more often just stereotypes from the "science" section of popular magazines.
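As an aside, a minimal sketch of how a "p = 0.0000" gets into print (toy numbers; scipy assumed): the p-value is tiny but never exactly zero, and it only looks like zero after rounding, which is why the usual convention is to report p < .001.

```python
from scipy import stats

# Two clearly separated toy samples, so the p-value is astronomically small.
group_a = [5.1, 5.3, 5.0, 5.2] * 10
group_b = [6.1, 6.2, 6.0, 6.3] * 10

t, p = stats.ttest_ind(group_a, group_b)
print(f"p = {p:.4f}")   # prints "p = 0.0000" -- a formatting artifact, not a real zero
print(f"p = {p:.2e}")   # the actual, nonzero value in scientific notation
print("report as: p < .001")
```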
It would probably be difficult to publish in a scientific journal if it was not aimed at other academics - and if it was not published in a scientific journal, the authors would not get citations.
It's strange that they don't offer "recommendations for preferable terms" for every term. Or, at least example sentences with the terms to avoid removed and replaced with more appropriate language. Clearly these terms are being used because authors find them useful. Without guidance on how to replace them, authors will probably keep using them.
An urban legend from physics is that Murray Gell-Mann deliberately chose whimsical terms for his ideas, such as "quark" and "up, down, charm, strange, truth, and beauty." The rationale was to prevent any possible application of a familiar definition or social implication to the terms, such as happened with "relativity."
A lot of the terms on this list describe things that simply don’t exist, so the implicit suggestion is to not make shit up. To, y’know, research what you’re writing about rather than just repeat mistruths.
MKUltra would be an example in support of the claim that brainwashing is not real.
No one is disputing that subjecting someone to drugs will permanently alter their beliefs; anyone who has come in contact with an alcoholic or a heavy drug addict knows this, but that's not what brainwashing is. Brainwashing is the ability to control someone's thoughts, often with the goal of inducing specific beliefs and ideas. While there is plenty of evidence that MKUltra damaged people psychologically and forever changed their personality, there is absolutely no evidence whatsoever that it managed to change their personality in a controlled manner.
How can you say "there is no evidence"? Even using your own definition, I've met people throughout my life who meet that criterion: people in the military, religions of all stripes, CMU students...
It also doesn't quite make sense: even if these techniques aren't any more serious than other kinds of indoctrination, we have abundant examples of indoctrination permanently altering beliefs - religious conversions, ideological movements, etc.
Indoctrination is not the same as brainwashing though. If what you mean is indoctrination then use that term. Brainwashing is the idea that it's possible for a specific agent to forcefully gain significant control over someone's mind.
Indoctrination is not forced on a person by a specific agent but rather is the result of passive exposure over long periods of time. As a social species we are all indoctrinated to some degree by our culture, our parents, friends, profession etc... but that is not the same as a specific person having significant control over our beliefs or actions.
> Indoctrination is not forced on a person by a specific agent but rather is the result of passive exposure over long periods of time.
I do not think that indoctrination is done by passive exposure only. This 1970 book: https://www.amazon.com/Canvassing-Peace-manual-volunteers-pa... (and yes, this is the Zimbardo of the Stanford experiment) describes the tactic that was successful in starting the peace movement. The first step Zimbardo recommends when going from home to home is to ask people to sign some document that almost nobody will refuse to sign. Not "Stop the war now", but more like "Investigate reports of war crimes". Who will refuse to sign under such a noble request? What? Do you support war crimes?? A lot of signatures were collected.
But there was no need to send these collected signatures anywhere. Zimbardo states (as a scientific fact, lol) that after a person has signed something in support of some position (anti-war in this case), the person's perception changes. He becomes a little bit more receptive to anti-war arguments than to pro-war ones. (The other purpose of this first canvassing was to find people who were already passionate about your cause and sign them on as volunteers.)
As far as I can see, this tactic - not passive indoctrination, but active involvement via small first steps - is used rather widely on our planet these days. The most cynical variant is "give some little money to our noble cause!" - those being indoctrinated are financing the indoctrination campaign themselves. That is, if Zimbardo is right.
> (27) The scientific method. Many science textbooks, including those in psychology, present science as a monolithic “method.” [...] Contrary to what most scientists themselves appear to believe, science is not a method; it is an approach to knowledge (Stanovich, 2012). Specifically, it is an approach that strives to better approximate the state of nature by reducing errors in inferences.
Mmmh, is this psychology opting out of the scientific method?
I love that they make an assertion that “Science is not a method, it is an approach to knowledge” and cite something from 2012, as if science didn’t exist before that. And the fact that they conflate “science” and “scientific method” is quite hilarious as well.
I see absolutely nothing wrong with this quote. The "scientific method" is almost a meaningless term because it means different things to different branches of study - discouraging its use in favor of more accurate terminology makes perfect sense.
there are three pillars of communication in action at once here, it seems.. one is the accepted technical language of the trained, credentialed specialist; second is the common language used daily to navigate our personal lives; third might be the language used in public discourse, in the media, and in a classroom to non-specialists..
The high-effort piece of writing adds citation-based examples of semi-specialist wording.. like someone that is a credentialed school counselor, but is not in the health professions per se. Well guess what, you now have religious schools and also splinter educational environments to deal with as your audience.. good luck with that. Science is not going to settle cultural commitments in all cases, nor should it, I will argue.
Psychology has always been seen as a pseudo-science in some corners, unlike hard sciences backed by math. This well-intentioned and somewhat urgent writing tries to corral the "three pillars of communication" listed above, and as usual, will only get so far IMHO ..
I wouldn't call psychology a 'pseudo-science', more that it's at a level of development comparable to 19th century medicine, when the concept of infectious disease being due to transmittable microbes or viruses was not understood.
Most 'psychological disorders' today are not diagnosed based on specific physical-chemical tests, i.e. one can say with high certainty that a patient has drug-resistant tuberculosis using a PCR test. In contrast, there are no definitive tests for everything from ADHD to schizophrenia to bipolar disorder to depression, although more diagnoses lead to more pharma drug sales...
Perhaps the pseudoscience label is used too sparingly. When people treat something that is not science as science, that is pseudoscience. Modern society is awash with it, as was 19th century medicine.
I'm not sure how much real science they have in psychology in particular. I'm sure there's some, like I've heard they can use fMRI to analyze the language region of the brain to identify what kind of speech disorder you have. But certainly a lot of publications cross the line.
We can still be fair and say they do help a lot of people with therapy, even if they don't really understand why. Historically craftsmen were ahead of scientists in making working technologies. They weren't necessarily peddling in scams and nonsense, they were just not backed by real science.
For the sake of the Hacker News audience I wish "introvert" was its own category and not under "Personality Type". It is baseless pseudo-science but comes up in tech-related circles all too often.
As (43) states, introversion is on a scale (a matter of degree) rather than a type (i.e. not like char vs. float). That it is persistent in a probabilistic sense is well established.
'Histrionic' originates in the Latin 'histrionica,' which refers to actors and acting, supposedly because actors first came from the Greek colony of Histria on the Black Sea. I'm not sure why any of that is insulting to women.
He probably means hysterical, that which is plagued by hysteria, excessive emotion. The word originates from hystera, Greek for uterus. It's no longer in clinical use but certainly is in popular usage, despite its unfortunate history.
If two words aren't related it can still be a problem if people think they are, and language evolution will start treating them as if they are (backformation).
That’s true, but few people know the origins of hysterical, let alone histrionic. You could just as well argue that the average speaker doesn’t know that hysterical was once a gendered, misogynist term and that therefore it’s no longer problematic.
This article was written in 2015, not 1960; that's why.
Academic psychological research has been making great attempts to remove biased / inaccurate terms from the lexicon, and I encourage you to find a cite from the last 10 years that uses this term, other than in a historical sense. (Not saying it's impossible, just unlikely)
I disagree with the "steep learning curve" point. The X axis is acquired knowledge, and the Y axis is the effort required. I don't know why people assume the X axis is time. Not all graphs are temporal.
It's standard practice to plot independent variables on the x-axis and dependent on the y. To my knowledge, this is how learning curves are typically plotted as well, and the use of "steep" is usually a misnomer.
Is there a reason you believe people have this unintuitive plot in mind, rather than simply conflating with the difficulty of physically scaling a steep slope?
I disagreed at first, but then realized what they were trying to convey. Specifically, "learning curve" is a defined term in learning theory that represents the accumulation of knowledge. Like the "forgetting curve" [1], the "curve" represents what happens at each learning (or forgetting) opportunity. A "learning curve" between intervals would be more akin to "learning gains".
So a "learning curve" that is "steep" implies that learning occurs rapidly; a "forgetting curve" that is "steep" would be "in one ear and out the other".
Super pedantic and I don't plan to change my use of steep learning curve, but I get where the intention is coming from.
Does the steepness of the curve happen earlier or later in the timeline?
I imagine that one could spend a long time learning the fundamentals before getting to a stage where the proficiency escalates quickly i.e. where the steepness occurs. But the long time getting there is what the term "steep learning curve" is about. Like the opposite of taking a long run at jumping off a cliff. I'm sure that this is easier to show with a graph.
"Moreover, some authors argue that these medications are considerably less efficacious than commonly claimed, and are beneficial for only severe, but not mild or moderate, depression, rendering the label of “antidepressant” potentially misleading"
Love it!
(7) Chemical imbalance.
Hate it!
Of course there is such a thing as a chemical imbalance. I would say serotonin syndrome is a good example. That is way too much serotonin. And when I am manic, I am sure I have a chemical imbalance (glutamate).
Can you elaborate? I always assumed subconscious to mean something like blindsight: blind individuals who have functioning eyes (i.e. their blindness is caused by brain damage in the visual-processing regions) are able to evade an object thrown at them despite never consciously seeing it. That is, the stimulus of an approaching object is indeed processed subconsciously.
However maybe that is an outdated term, and a better one exists. It has been a minute since I read up on the literature.
There is a more or less fine line between unconscious and subconscious (its slightly hysterical cousin). The term might be somewhat outdated, but it's still in use, and cognitive scientists make similar assertions, even if they might call it something different when talking about human consciousness.
Not on the list: "Oh, it's just my OCD" -- Either you have been diagnosed (and consequently suffer from OCD) or you have an obsessive personality (sometimes a quality).
But OCD is a diagnosis; abusing it to describe a personality trait doesn't serve the many, many people impacted by the disorder.
There’s a great episode of the Australian TV series You Can’t Ask That which interviews a group of OCD sufferers about their lived experience. It really opened my eyes to just how appallingly cavalier it is to refer to one’s minor neuroses as “OCD”. An excerpt from the show: https://youtu.be/tkrFgKW5LvY (Australians can stream it on iView)
I see how it can be perceived this way, but I think it's because of the novelty of the condition (so people aren't used to it being casually used as much).
For more established conditions, we don't really feel that way, e.g. like, how:
"Turn the flashlight off, I'm going blind!"
"This jump scare gave me a heart attack!"
"Monday's make me depressed"
Are not really considered an insult to the blind, people with heart conditions, and the clinically depressed...
Being blinded is something many people temporarily experience without being “visually impaired.” Feeling depressed is something that everyone experiences, while they may not be clinically depressed. Heart attacks are similarly common; they are hardly a misunderstood phenomenon.
These things are not the same as misunderstood conditions suffered by a small minority.
There's also self-diagnosis though. Which in some cases, and for some types of conditions, can be as or more accurate than a professional one (where the professional might only see a facade)...
If anything, I find this to be a, perhaps unintentional, damning indictment of psychology and psychiatry in general.
If your discipline cannot clearly define things in a reasonably concrete and provable way, such that it is readily apparent to the patients and the public at large, and also such that the language effectively clarifies itself out of necessity, then much of what you do needs to be strongly questioned -- and often not taken too seriously.
I'm reminded of e.g. the term "neurodivergent." It's a good thing to look at, but how do you falsify it? Who can stand up and say "I'm definitely not neurodivergent?" If you can't do that, the term is not very helpful.
Frankly, if your standard for a good definition is that it is readily apparent to the public at large, then pretty much every academic discipline will fail your rigour test. Even (or especially) mathematics. Which should make it clear that this is an unfair and useless standard to hold psychology up to.
Fair, "at large" is a bit extreme. This is a spectrum. But psychology appears to be far worse than the others, even reading from this paper. I'm a lawyer, and even law isn't this bad. "Symptoms are unobservable?"
Actually, symptoms being unobservable is not only not unique to psychology (the distinction is well known in medicine), it's a core challenge within the practice of medicine and one of its historic tensions. Yeah, a patient might be complaining of chest pain, but the reason for the complaint is going to be unknown to you until you start the process of diagnosis. Chest pain is a symptom of many conditions, and you can't even trust that there's an observable cause; you can only believe what the patient tells you. This was obviously even worse before modern developments like MRIs and X-rays.
Compare that to signs, where you can see that a patient has say, a lesion or is clearly coughing.
Law, really? Aren't there a lot of legal terms (like 'ownership', 'possession' 'contract',...) that have precise legal definitions that may differ in crucial aspects from their colloquial usage?
I think you're confusing a definition being easy to understand with it being well-defined. Mathematical definitions are often hard to grasp exactly because they are so well-defined. If you want something to be easily understandable by a lot of people (especially outside the discipline), you'll usually have to sacrifice rigor.
In my unfortunately extensive experience with legal matters as a non-lawyer, there really aren't many legal terms whose meaning is materially narrowed when used legally. For the most part, as in medicine, a less commonly used word is used instead (often from another less commonly used language, such as Latin).
For example, most people would consider a contract to be something written down that says ‘contract’. A few know it can be verbal too.
When really, it could be verbal, video, scrawled in blood on the side of a fence, or any number of other forms, and really covers any agreement that meets certain criteria (generally that there is some form of payment or consideration, an offer, and an acceptance - think ‘quid pro quo’ or something for something).
99% of the time, the public is right. The other times, something went really wrong somewhere and someone did something pretty weird and dumb for it to matter.
In engineering, physics, math, it’s not uncommon to need SOME unique identifier for the thing, and there are very specific technical needs for it to even have a conversation on the topic - and often we’ve run out of pronounceable or recognizable symbols, alternative alphabets, etc.
Ha, contract is a really bad example here, as proven by the very very wrong term "Smart Contract."
The thing that is called a "Smart contract" has literally no overlap with a legal contract, because a legal contract is the separate human statement of agreement that is associated with the action/performance, and is usually only needed when the performance goes wrong.
Pain is not observable. It's reported. Loss of vision or hearing isn't observable, it's reported.
You can run tests to get an idea of whether any of these are happening. But you are not observing any of these symptoms; you're observing behavior and inferring what kinds of symptoms the person is experiencing and to what degree.
But is a runny nose (observable) not a symptom of a cold? Is fever not a symptom of Covid?
If that's how medicine actually uses the term, okay, sometimes terms of art clash with the plain meaning, but damn that is a pretty weird distinction, as a layman.
In the US those would be "signs", not symptoms. Symptoms are things like pain, fatigue, etc. Signs are observable. I think other countries/languages often use "subjective" symptoms and "objective" symptoms.
It's a very useful distinction between things that only the patient can tell you and things that can be observed and tested for. How do you run a test to determine whether a patient is "really" experiencing chest pain? You cannot. You can run tests to try to determine an obvious cause for that symptom, but there is no test to tell you whether, say, the patient is lying about having chest pain.
The data is pretty clear that psychology and psychiatry suck at doing their job. But it's a hell of a tough job, and using more meaningful words probably doesn't hurt.
Precisely, it feels like there's a lot of "activity" and "named ideas" chasing things that may not meaningfully exist? This is of course symptomatic of "publish or perish" et al, but I suppose one of the difficulties here is that the paper is like "here are some bad ideas and trends (fair) so here are some more to counteract that."
It really just gives the impression of "wow, these people are wasting time if they can't even decide on what relatively accessible words even mean?"
This is not what the article concludes at all. Every item in the list is met with alternatives or justifications for careful consideration and nuance. If anything, it celebrates psychology as a rich field while issuing a warning about specific problematic terms in use. This is no different from most other fields. Cosmology also has problematic terms such as "Grand Unified Theory", and even the word "planet" has issues within the field of astronomy. This is not a damning indictment of the respective fields, but a sign of maturity, growth, and healthy debate within the field.
Your example of "neurodivergent" is a really bad example for your case. This term is a lay term that captures the notion that there is variety in the way people think and behave, and that as a society we tend to accommodate only a subset of this variety, leaving the rest (the neurodivergent individuals) in harder-than-necessary situations. You don't need to falsify it because it is not a scientific term. But if you wanted to disprove it, you would need to show that cognitive behavior doesn't vary nearly as much as observable behavior; the very fact that autism exists should be evidence enough to justify the term.
I think you might be misunderstanding what falsifiability means for psychology or other social sciences (or any science that uses probability or population statistics in its theories, for that matter). When you have a term that distinguishes a subgroup from a population, you don't try to disprove it by looking at an individual; you look at the variance and compare it with the greater population. If there is no difference in the variance, then most likely you are using an unhelpful term.
So, for all the many things in the world that cannot be defined in "a reasonably concrete and provable way" or where a statement cannot be "falsified", no study is worthwhile.
More or less this position: nothing worthwhile from Wittgenstein beyond the Tractatus and specifically, forget about the Philosophical Investigations.
It's not an indictment of a field that the subject matter is not susceptible to description in terms that are definitely falsifiable. Of course, definite terminology can be very helpful where it does apply, but it can also be used to oversimplify complex questions in a way that obfuscates them. But there's no law that nature must always be subject to description in a "concrete and provable way", especially not by human languages. By not taking seriously anything but disciplines that can be boiled down to true-false propositions, we'd miss huge amounts of knowledge that are useful and helpful.
Especially fields like psychology where so many important observations simply cannot be broken down into concrete statements or provable propositions in the way you seem to mean those words.
On the other hand, it's also very important to be careful of misuse of terms such as "true" and "false", for example, they can have very different meanings when we're talking about logic or observation, answers to examination questions, romance or religion. In this particular exchange, by true do we mean scientifically true or logically true?
Or indeed, how would anyone stand up and say whether something is "definitely not damning" or "definitely not unintentional"?
There's a reason that traditionally in America psychology is taught in a research-oriented way. This has the upside of, at least ideally, causing psychologists and psychiatrists to have a curious attitude toward their patients. In a field that is dealing with something as complicated as people's brains and their social interactions and their self-perceptions and their bodily health, it's pretty much a necessary condition to begin from a standpoint of assuming that you don't understand everything. I think it also has the unfortunate downside of producing a lot of questionable research, though.
The problem is that in a research paper you do have to have operational definitions, p-values, etc. It's not that these things are bad, but they are not particularly well suited to a problem as ambiguous as attempting to explore the human condition.
To bring the point back to Wittgenstein, we are forcing students to talk about “the unspeakable” in scientific terminology. Bringing in his viewpoint from the investigations, it feels to me like in the mental health field we need to be playing a different game than the scientific research game. Professionals on-the-ground are doing this, but how to bring that back into the academic sphere, I don’t know what the best solution is.
> I'm reminded of e.g. the term "neurodivergent." It's a good thing to look at, but how do you falsify it? Who can stand up and say "I'm definitely not neurodivergent?" If you can't do that, the term is not very helpful.
I mean, that's not a diagnosis. If you mean just generally, and not psychology-specific, then you are going to have to throw out most English adjectives. You can't falsify "I'm hungry", "I'm smart", "I'm dumb".
> I can't easily identify whether or not I'm neurodivergent, because I'm not sure what it means or what the opposite would look like. Does anybody know?
The people who are claiming to be neurodivergent would say it's quite obvious to themselves (FWIW, the general definition is roughly ADHD or autism, both of which have more concrete definitions). I was assuming the statement was about other people verifying the veracity of the statement, not yourself.
Of course, I don't know how you can easily identify whether you are actually hungry, when you can't even tell if you actually have a stomach or are just a brain in a vat/plugged into the Matrix.
Even if I'm a brain in a vat, I'm a brain in a vat that thinks it's hungry.
But I'm not a brain in a vat that thinks it's neurodivergent or not neurodivergent because it doesn't know what that would mean in the first place, and it's not sure anybody else (assuming other brains in other vats exist) does either.
Exactly. I'm thinking of other diseases and other conditions. Like "has covid" or "very likely does not have covid" are verifiable or falsifiable enough for the situation, even if you occasionally get it wrong. Neurodivergent? Probably easy to find clear yeses, sure. But I'm guessing all of your clear yeses are cases that have other, clearer names.
I think of neurodivergence as having a cognitive style different from the commonly accepted cognitive style (the typical way of understanding things). It is not a diagnosis, but a way of framing differences in cognitive style and helping work out accommodations that might be necessary.
This means that not only can "your neurodivergent status" change (say, a non-autistic person in a group of autistic people would be neurodivergent), but it is a spectrum. You might be divergent from the typical in certain domains (say, you are very sensitive to certain sounds), while being typical in others (say, your way of using body language matches the way most people use it around you).
Bringing up neurodivergence only makes sense when you notice that there is something different about how you perceive the world or act, vs what everybody else seems to be doing.
Specific conditions can lead to neurodivergence - say, ADHD, neurological disorders, or mood disorders - while other differences might be attributed to upbringing or nationality.
It's more common to hear these terms of misunderstanding in mainstream medicine. Psychologists know there isn't a "chemical imbalance" which "causes depression." Overworked GPs and others without better training are the ideal audience for this piece.
I think all disciplines could have a damning list like this. For example, in software development, we could say "pull request" is to be avoided in favor of "merge request" (some progress has definitely been made toward this state, but there is more to do).
>Nevertheless, the attitude-change techniques used by so-called “brainwashers” are no different than standard persuasive methods identified by social psychologists, such as encouraging commitment to goals, manufacturing source credibility, forging an illusion of group consensus, and vivid testimonials
Ok, let me get this straight. You openly admit that brainwashing techniques are now routinely and knowingly used in "casual" settings, but you want me to stop using the term because it's so routine and because it was never long-term effective. Faulty reasoning at best, manipulative bullshit at worst.
In my experience if someone uses the term “brainwashing” they are not trying to have a real conversation, they are trying to avoid any nuanced conversation actually.
I think in most cases the argument for precision is warranted, but this might indeed be a bit much. Especially since psychology and clinical psychology need accountability, and the authors used the weakest meaning of "brainwashing," a term that is quite precise for the worst deeds of the profession.
Is it fair to think that "burned out" isn't always coming from an employer-employee power imbalance? Startup types or small business owners burn out. Benign employers can have employees who burn themselves out. As a freelancer, I'm quite responsible for my work habits and workload; I sometimes burn out, but I'd never think of it as having been exploited.
No that's true, burnout has many causes, but I suppose that I'm thinking about it at a systems/societal level.
The current era is much more difficult to survive in than when I was growing up in the 80s and 90s. Because wealth inequality (high housing/healthcare/education costs, low wages) has come to dominate our lives. We're working twice as many hours for half as much buying power. We may not see it as much with our 6 figure incomes, but at least half the US is basically living hand to mouth in dead end jobs. No help is coming from the top, so until people wake up and realize that the power is in their labor and organizing power, nothing will change. That's why I think that framing and terminology are so important for progress.
Also I define exploitation as receiving less than the lion's share of any income generated by a worker. So if a business bills someone out at $50/hr and pays them $15/hr, that's exploitive. Very few businesses bill less than that, but a huge number pay less than that. So exploitation has become the way that business is done in the US and any countries we exploit for our manufacturing needs. Selling shoes for $100 made by workers who are paid $1/hr, etc.
I've travelled a lot in the US and many parts feel like a developing country. The tent cities and begging and vehicle-dwelling and so on appear worse each time I go over. But people either don't vote or vote against their own interests. Maybe one risk of framing things as a societal issue is that people are less likely to take individual action in voting or changing jobs (if possible)? Voter apathy in the US is quite remarkable, given the constant media furore.
I'm in small business so I'm offering no comment on larger companies, but here, I think an employee paid $x is often expected to produce $3x as an old rule of thumb, to cover overheads, taxes, etc. And I have sympathy for the brutal world of small business ownership: if an employee detests that $15/hr, they can go freelance at $40/hr and wear the downsides that come with it. Almost every business owner has taken a risk at some point to work unpaid or underpaid while getting off the ground, has covered wages during slow periods, etc. It's not always glamorous. An idea I find intriguing is government or enterprise providing small businesses with as much boilerplate/scaffolding as possible to minimise the impact of wage theft or rostering struggles or anything else in that painful interface between employer and employee. Without it, an employee is at the mercy of the foibles of the smalltime employer.
> (19) No difference between groups. ... Authors are instead advised to write “no significant difference between groups” or “no significant correlation between variables.”
This is terrible advice. To the public, and often even to experts, "significant" doesn't mean "statistically significant"; it means "big". We need to abolish this use of "significant", not promote it. Way too many papers show "significant" (statistically significant) results that are not significant (so minor as to be irrelevant). This is the #1 source of misleading headlines.
"Statistically measurable" or "statistically determinable" come to mind though I can't find a cite.
The originally intended ... significance ... of the term was that a difference was capable of being shown using statistical methods. Not that its size or context was itself significant in some semantic, practical, or other sense.
The main issue is that the word "significance" is overloaded with two meanings in research and stats (a quick sketch of the contrast follows below):
1) "Large and important" (e.g. "clinical significance", often given as an effect size, or some benefit/cost tradeoff).
2) "P-value below alpha" (e.g. "statistical significance", where the p-value is roughly "if the true effect size were zero, what's the chance I'd see an effect this large in my data given random fluctuations [and a bunch of other assumptions]").
It's misused as a descriptor for just plain aggressive behavior. I was telling someone my roommate was outside yelling at our neighbor about a car crash they caused a few days ago. He replied with "That's passive-aggressive."
I said "No it's not! Which part of what he's doing is passive?"
Comes across as unnecessarily judgmental. Opportunity here: figure out how to be helpful rather than sanctimonious. Talk about what to use instead, rather than just about what to avoid.
> (47) Empirical data. “Empirical” means based on observation or experience. As a consequence, with the possible exception of information derived from archival sources, all psychological data are empirical (what would “non-empirical” psychological data look like?).
Nonsense. There are plenty of types of data that are not empirical. For example, data from simulations is not empirical data.
Isn't fiction an overall non-empirical simulation? Science fiction books are not strictly based on historical events or even facts. Yet we still gain valuable psychological knowledge from them.
Can’t believe they didn’t include “intimate partner violence”. OK, I can believe it but it’s the dumbest, least understood term I’ve ever come across.
For instance, IPV usually doesn't involve "violence," or at least what normal people consider violence (i.e. punching, kicking, slapping, etc.). IPV is more often financial or mental abuse.
Many (most?) people understand "learning curve" as a hiking analogy. A steep learning curve would mean a difficult climb. This is understandable as most people aren't data literate, and "steep" generally has negative connotations.
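For what it's worth, in the data sense a "steep" learning curve is actually the desirable one: proficiency climbs quickly per unit of practice. A toy sketch with invented numbers (plotting skill against practice time would show the same shape):

    # Toy illustration: on a learning curve the x-axis is practice/time and the
    # y-axis is proficiency, so a STEEP curve means fast learning, not a hard climb.
    # All numbers are invented for illustration only.
    practice_hours = [1, 2, 3, 4, 5]
    steep_curve    = [40, 70, 85, 92, 95]   # skill gained quickly ("easy" tool)
    shallow_curve  = [5, 10, 15, 20, 25]    # skill gained slowly ("hard" tool)

    for h, fast, slow in zip(practice_hours, steep_curve, shallow_curve):
        print(f"hour {h}: steep={fast:>3} shallow={slow:>3}")

The hiking reading gets this backwards, which is presumably why the phrase made the list.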
I think there's a difference between policing speech and clarifying it. Language is hard, misunderstandings are surprisingly common, and many of the world's evils come from pure misunderstanding.
Most misunderstandings arise from people having different understandings of the same words - so approaching agreement on the meaning of words is important to reduce misunderstandings. The problem is that language is self-defining: a word gains its meaning from the way people use it, even when that meaning differs from the original (perhaps academic or technical) one.
So one side has to stand back - either academia (in this case, psychiatry) or the general population. Since academia largely relies on solid, defined meanings of words, while the general population relies on "how other people are currently using it", it's supposedly easier to change the usage of the general population - although it's pretty hard in either direction.
Otherwise, we end up with words having two distinct meanings, one professional and the other lay.
I strongly disagree. Leaders who don't lead by example aren't real leaders, but demagogues and deceivers. To become a general, you must first be a soldier.
That's not how generals work, they go to officer school, don't they?
Actually I don't think "virtue signalling" implies "doesn't do it themselves". Vegans who go around telling everyone they're vegan generally don't secretly eat meat.
However, this is a list for students, teachers, and researchers, and within that context, precision is important. This group of terms in the article is not about policing - something like "crazy" or "schizophrenic". It's about technical terminology within that particular context.
It also was a good article about the problems behind the mental model.
Did you read the article? Even a cursory scroll would show that it's clearly not about policing speech. For example:
> (22) p = 0.000. Even though this statistical expression, used in over 97,000 manuscripts according to Google Scholar, makes regular cameo appearances in our computer printouts, we should assiduously avoid inserting it in our Results sections. This expression implies erroneously that there is a zero probability that the investigators have committed a Type I error
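To spell out the quoted point: a p-value can never literally be zero, so "p = 0.000" is just a rounding artifact of printing to three decimals, and the conventional fix is to report "p < .001" instead. A minimal sketch (the helper function is a made-up illustration, not from any particular stats package):

    # "p = 0.000" is a display artifact of rounding; report "p < .001" instead.
    # format_p is a hypothetical helper, shown only for illustration.
    def format_p(p, decimals=3):
        threshold = 10 ** -decimals
        if p < threshold:
            return f"p < {threshold:.{decimals}f}".replace("0.", ".")
        return f"p = {p:.{decimals}f}".replace("0.", ".")

    print(format_p(0.0000021))  # -> "p < .001", not "p = .000"
    print(format_p(0.0372))     # -> "p = .037"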
Oh no. Big bad PC culture is stopping us from stating fallacious statistical conclusions.
Please don't dilute the quality of discussions on this site by posting reactionary nonsense before even reading something to which you're responding.
Your comment diluted the "discourse" more than the "reactionary nonsense", given that the first thing you do is use snark to signal that you are in the "cool tribe".
While indeed HSP is not a psychological term, it makes sense for self-description.
Unless you are the kind of person who cringes at people saying they are "coffee lovers" or "morning persons" with "that is not a proper psychological term!". :)
I think it's crazy that we spend time thinking about minor bs like this instead of spending that effort trying to solve actual problems. This type of stuff is great for people who want to feel like they have made a positive change without actually having accomplished anything of value.
I think it’s worse than that. It makes people have to police their own thought about what they’re going to say, to avoid stumbling over one of the linguistic trip wires. I think that’s antithetical to creative thought.
Counterpoint - mindfully reviewing and examining your thoughts and speech is an effective way to improve clarity and to identify unexamined (and perhaps unfounded) assumptions.
Cool, as long as it's your thoughts and not mine. I have no problem with other people policing their own speech and thoughts. My problem starts when they start to enforce what they found on other people.
Do you recognize that there is a distinction between enforcement, and being told "using this particular form of speech is harmful to certain people - if you continue choosing to use it, you are choosing to present yourself as someone who values your own speech over someone else's comfort, and will be judged accordingly"? If so, then we're in broad agreement (though we may disagree about where the line between social judgement and enforcement lies)
All the "speech is harmful" is exactly the thought policing we are talking about: when you hear something that you don't like you candecide to ignore it instead of trying to make everyone in the world stop thinking "bad thoughts". It's your job having the emotional maturity to not screech at any percieved misgiving not mine to not think or say things that may or may not be hurtful.
To restate my point - I'm not advocating for banning speech. I'm _certainly_ not advocating for banning the discussion of sensitive topics. I'm asking people to recognize that, by using certain forms of speech _that they have been asked not to use_, they are presenting themselves as someone to whom speaking in that particular way is more important than the feelings of the people who they're speaking to or about.
You, clearly, are signalling yourself as one of those people. And that's your right! That was your choice to make.
But maybe stop and think about why you think it's so important to you to tell people to shut up about something that bothers them, and if you want to be seen as someone who's deliberately and wilfully causing harm when there exist other ways to phrase the same concept.
I was with you earlier in the thread when you wrote "effective way to improve clarity and to identify unexamined (and perhaps unfounded) assumptions". With you on that because that is ultimately a selfish motivation - one wants one's ideas to be understood, one wants to influence, one wants to be correct ("check assumptions").
But this:
> using this particular form of speech is harmful to certain people
... are a few bridges too far in my mind. Who are these certain people? What do they think, not you? For instance, I'm gay, and homophobic slurs happen all the time in my life, whether coded or overt. But they don't harm me in any reasonable sense of the word.
I mean, should we not distinguish between physical harm, mental abuse, and discomfort with words?
To put a fine point on it, equating "discomfort" and "harm" is flawed.
> To put a fine point on it, equating "discomfort" and "harm" is flawed.
That's fair! Negative impacts certainly exist on a continuum! I felt like "harm" was a reasonable synonym for "negatively impact", but you're right that it carries a stronger connotation than "mildly inconvenience"; and it also has a more direct connotation than "perpetuate negative stereotypes in listeners" (using homophobic slurs has negative impact _even if no homosexual people are around to hear them_, because it emboldens homophobes and normalizes homophobia)
So let me loosen my original statement to "using this particular form of speech will negatively impact (whether directly or indirectly) certain people, ...etc.". And please do notice that I didn't describe the outcome as "then you are a bad person"! It's reasonable and justifiable to pick a point on that spectrum and say "you know what - negative impacts that are weaker than this just really don't matter to me. There's a certain level of tip-toeing around that just isn't worth it for the miniscule risk". Hell, I do it too. There's a tiny chance that my use of "hell" as a casual interjection might offend a religious reader - but in my determination, that offence is less important to me than that particular form of speech. There are probably a ton of other words or phrases I use for which I've made a similar (conscious or unconscious) determination.
My point is - speech has an impact. Indeed, if it didn't, it would be pointless. The point of speech is to communicate. Communication distributes ideas, concepts, thoughts from one brain to others. Those ideas will not just be the ones that are directly communicated in the speech, but also "meta-ideas" communicated by the form of speech. Meta-communication, more often than not, is about the speaker rather than just the content. If I hear someone teaching a complicated topic, I will assume (rightly or wrongly!) something about their level of intelligence and knowledge. If I hear someone speaking clearly and convincingly, I will assume that they've taken the time to prepare and research - and vice versa. If I hear someone using an obscure word, I'll assume they have an interest in vocabulary (and/or are involved in the field in which that term is common). If I hear someone using a well-known slur, I'll assume that they have made the deliberate choice to do so in the knowledge that it _is_ a slur. It's a form of social signalling.
So, y'know - if you believe that forms of speech that do cause "discomfort" are acceptable, and ones that cause "harm" are not (based on your own determination of the likelihood of those impacts, hopefully informed by the people who are actually impacted), _that's justifiable_. I'm not saying that words should be banned. I'm asking for a recognition that the use of a word - _any_ word - is a signal; that some signals are more distinct than others; and that that signal will be interpreted by listeners as reflecting something about the speaker's value system. That's not policing - it's a recognition of personal choice.
What you just wrote caused me a lot of discomfort and yet you chose to write it. Because in order to think and discuss we need to risk causing discomfort to other people. I don't know what to tell you if that's not obvious to you.
It's entirely obvious to me! As I wrote at greater length here[0] - I am not advocating for banning words! I am asking for a recognition that certain forms of speech have certain impacts on the listener, and that by using them you are signalling yourself as "someone who has made the choice that that form of speech is more important than the possible impact". I'm not saying "never take that risk", I'm saying "recognize that it exists, and factor it into your choice of speech". If using a particular slur is super-important to you, go ahead and use it! But don't pretend that people are unreasonable for then thinking of you as "someone who is comfortable using slurs" - because, by definition, that is what you have shown yourself to be. And those judgements lie on a continuum - and each person will be comfortable with existing at different points on that continuum. _This is not policing or enforcement_ - it's just how social signalling works. You choose how you want to be perceived.
> I think it's crazy that we spend time thinking about minor bs like this instead of spending that effort trying to solve actual problems.
False dichotomy. We all (you included) already spend plenty of time thinking about "minor bs". A discussion about language will probably use up less time in your week than you spent deciding what to get for lunch.
Personally I don't have any problem with the word "crazy"... but I don't mind thinking about it for a minute.
Being precise in your language is of great value, both when it comes to thinking and communicating. Per Wittgenstein, "the limits of my language mean the limits of my world". If you're a software developer you should be familiar with how important syntactic precision is.
Someone who runs around the office and complains about everything being 'so crazy' is genuinely not articulating anything meaningful but just ranting. And it's quite amazing how many people can't tell the difference between a rant and a productive argument. Perceiving this as 'policing' is just an excuse to not putting an effort into what you think and say.
Aside from the communicative value, there's a socio-political angle to it as well. Speaking in nebulous terms like calling things 'crazy' without sufficient elaboration is a great way to alienate people as a new-to-the-org resource or as an embedded outsider (e.g. consultant).
I have no issue with it. My only caveat is I would refrain from using it around someone who is "unstable".
I don't look forward to prescriptive definitions, grammars and usage. I will use those for technical writing and use "style guides" but not for casual communication and conversation.
Regardless of "wokism," it might stop meaning from being transmitted to those who do find the term problematic.
Absurd or ridiculous both capture the same meaning without losing precision. And at this point, we can look at it more as a function of deprecated terminology - people don't listen to you and think you're a bad person if you use said terminology, they just think you're old.
However, to be clear, this is off-topic for the article at hand.
Worrying about something as mild as "crazy" is really, really niche. Maybe that'll change, but for 99% or more of the not-terminally-online crowd, nobody thinks a thing of it today. Even more so outside the US.
I, for one, am overall skeptical that being very concerned about people using any term relating to a human malady or misfortune figuratively is a trend that will continue to flourish even to the extent that it has so far.
I don't have any problem with the word per se (it doesn't evoke any connection with mental health, if that's where you're going with it).
But... it's a dumb and unnecessary word, in the contexts you mention. When I'm at a meeting and someone says "I'm going to propose a crazy idea" I always think "Just get on with it, man, no need to tamp down expectations."
I can certainly see why someone with a history of being called crazy for whatever reason might be hurt when someone at work casually calls them "crazy" for something, even if they understand it was meant differently in that context.
I think once you know something might accidentally upset someone it's up to you to decide whether you want to take that chance. I try to avoid taking that chance where possible, I really don't like the idea of accidentally hurting someone.
I can also see why people are upset about their language being policed. I imagine every generation feels this way as society changes around them. I'm sure there are lots of examples of probably silly things we've policed ourselves out of saying that probably didn't amount to much, but I'm also happy we mostly don't use the pervasive casual racism of the 50's, or the pervasive casual homophobia of when I was kid.
I think I'm tired of constantly worrying if what I say naturally is becoming yet another "thing I can't say among my betters without them looking down on me". Is there a list I can subscribe to?
It's the same as with other words, including some of the "x-words" (the n-word, c-word, f-word, p-word, and whatever else Americans came up with). They should all be possible to write and utter as long as they are not written and uttered as slurs.
Saying that "something is crazy" doesn't really victimize anybody. Intention should matter much more than it currently does. I really hope we are at the point where the pendulum swings back to the place where people understand that again.
I think it's not great, especially because psychosis, bipolar, personality disorders, etc. collectively make up enough of the population that you're almost certainly speaking in the company of people who would be called "crazy". But it's also in way more ubiquitous use than "retarded", so it's a far bigger hill to climb and probably not worth the teeth-pulling until we can at least stop with the OCD, ADHD-squirrel, and other jokes first.
If that is a challenge for you, you might be directly affected? Although English isn't my native tongue and I learned it on the internet. I might be resilient...
A few of these seem like they are somewhere between harmless and inevitable... what's supposed to replace "the scientific method" or "steep learning curve"?
This is a hilariously passive-aggressive attempt at gatekeeping. Nearly every term on this list has a range of uses: from the very precise, with an attending list of up-to-date peer-reviewed exceptions and footnotes, to the obviously false, manipulative, and reckless. In an attempt to reduce incidents of the latter, they are asking that everyone move to the former: "list of 50 commonly used terms in psychology, psychiatry, and allied fields that should be avoided, or at most used sparingly and with explicit caveats."
Perhaps if this field of study did not suffer from a replication crisis, the language in use might have more meaning.
The DSM-5 provides antisocial personality disorder (ASPD, not to be confused with ASD, which is autism). It's more commonly referred to as sociopathy. Psychopathy is not an 'official' condition.
This isn’t used in medical circles anymore but I’m shocked that it’s still acceptable to label something or someone as “hysterical”, which I believe implies erratic behavior that ties back to having a uterus.
To be fair, complaining about that makes about as much sense as complaining about the racist implications of the term "good Samaritan", although I agree that, in retrospect, it's pleasantly surprising that social justice warriors haven't tried to ram through a de facto ban anyway.
If I understand correctly, Samaritans have the same ethnic roots as Jews. The differences were more religious and ideological than racial. So with that in mind, saying the term "good Samaritan" is specifically racist doesn't really hold water. Xenophobic, maybe. The lines kind of blur when you're talking about groups that are divided by some odd combination of ethnicity, religion, and nationality.
Regardless, if you look at the actual context, the Parable of the Good Samaritan is arguably meant as a message against racism and preconceived notions about people from other ethnic groups or countries.
> If I understand correctly, Samaritans have the same ethnic roots as Jews. The differences were more religious and ideological than racial. So with that in mind, saying the term "good Samaritan" is specifically racist doesn't really hold water. Xenophobic, maybe.
According to the Australian Human Rights Commission (an independent government agency) [0]:
> Racism takes many forms and can happen in many places. It includes prejudice, discrimination or hatred directed at someone because of their colour, ethnicity or national origin
So, even if a difference is "ethnic" rather than "racial", it can still be "racism" (at least by that definition of "racism"), since "racism" can be based on "ethnicity" or "national origin" rather than "race".
Contemporary science does not support any distinction between "race" and "ethnicity": the distinction is just a cultural construct specific to certain societies. So culturally-specific, in fact, that it even differs between different countries in the English-speaking world: US government publications officially acknowledge a distinction between "race" (e.g. African-American) and "ethnicity" (e.g. Hispanic) – even while admitting the distinction is grounded in the particularities of US culture and history rather than anything scientific or universal. By contrast, Australian government publications studiously avoid making any official distinction between them, and in practice treat the two concepts as interchangeable.
Why wouldn't it be? Clearly people don't mean anything sexist by it, given that most don't know about it, and nobody as far as I know is offended. Who do you want to ban the word for?
To be clear I think intent matters more than anything else, and I’m not an advocate for “banning” terms or shunning people for unknowingly using a term like this - but the history and what it implies is questionable to the point where it’s probably best not to use it personally
Not exactly related, a friend recently told me that calling someone a "spaz" who acts absurdly or with an excess of enthusiasm, is no longer (/ was never ?) appropriate. My boss and other boomer (ok technically gen-xer) coworkers use this term liberally, as does my boomer mother and myself and others of my cohort.
I do not want to dwell on correctness of this term or of prescribing / proscribing language, only to share that this is the sort of the thing I had thought the article would be about.
Looking at the headline, one might think this is another tedious guide to what (arguably) constitutes politically correct language in modern society (case example: "use 'unhoused' in favor of 'homeless'"), but it's actually a collection of well-researched and documented examples of misuse of technical terms.
For example:
> "Chemical imbalance. Thanks in part to the success of direct-to-consumer marketing campaigns by drug companies, the notion that major depression and allied disorders are caused by a 'chemical imbalance' of neurotransmitters, such as serotonin and norepinephrine, has become a virtual truism in the eyes of the public... There is no known 'optimal' level of neurotransmitters in the brain, so it is unclear what would constitute an 'imbalance.' Nor is there evidence for an optimal ratio among different neurotransmitter levels."
They also discourage the use of the term 'brainwashing' (introduced in the 1950s during the Korean War by the US government), although I'd argue that 'operant conditioning' is an acceptable and well-researched concept, particularly when it is applied steadily from a young age through to adulthood:
The phrase "chemical imbalance" can't die soon enough. I've met some very smart people who were misled into thinking there's much more scientific evidence than exists that there's some innate brain chemistry that's linked to mental illness. In general, people overestimate how much of psychiatry is first-principles instead of black box statistical conjecture. The first huge hint that there's no simplistic chemical imbalance explanation is they don't measure any brain chemicals when they do psychiatric diagnosis.
Furthermore, when people discuss "chemical imbalance" they're almost always talking about something innate and unchanging e.g. I'm depressed because of a chemical imbalance, not my lifestyle. When there's actually a ton of evidence that to the extent there's chemistry in your brain, your lifestyle and things like your food diet and information diet play a massive role.
I will recommend this guy in way too many comments here but if you haven't heard of Andrew Huberman, please listen to his podcast, it's been life-changing. He gives tons of lifestyle tips and backs up all his arguments with his decades of experience in neuroscience.
> they don't measure any brain chemicals when they do psychiatric diagnosis
They do not measure anything in my experience.
They just take an educated guess at what will fix your issues based on heuristics, and then wait to see if you love/hate whatever they gave you. If it didn't work, it's on to the next option until something works or all options are exhausted (at which point they usually just send you off to a different professional).
As I grow older, I am more convinced that psychiatry is more of an art than a science.
Tangential: I used to listen to Huberman, but over time I started to dislike his podcast more and more. I think he is a phenomenal researcher, and I think his intentions with his podcast are good, but it seems like he just reads off various cherry-picked studies without actually linking them (that I could find) and without any mention of important details in the studies, e.g. sample sizes, methodology flaws, implicit/explicit biases, etc.
He claimed 14% of pregnant mothers use cannabis in the US, but from what I have read of others' ventures into this claim, no one can find any information to support this claim.
I also remember his episode on ADHD, and how effective fish oil can be for treating some of the symptoms of ADHD. However, in my own reading of various research articles, I have found that there is quite a lot of conflicting information pertaining to the purported benefits of fish oil (in regards to ADHD). But of course, he did not mention anything about the research that did not fit the supplement narrative... Don't forget to use the code "HubermanLabs" for a discount on your purchase of Athletic Greens though.
> As I grow older, I am more convinced that psychiatry is more of an art than a science.
There's a bit of an educated guessing component to it, but mental health professionals are starting to use genetic testing to take some of the guess work out.
Psychiatrists go through the same four years of medical school that other doctors do. This gives them a broad enough background to help diagnose other diseases that have psychological symptoms.
Some of these are practically tautological in their reasoning for why the terms are "misused", though.
For example, they criticize calling drugs "antidepressants" solely because it's not their "primary effect". But the term is used to describe drugs which do treat "mood disorders" (depression), and do have effects in many cases. I don't see how that's any different from how the term "anxiolytic" is used, even though that's one of the examples they contrasted it with.
Likewise, the section on "hypnotic trance" talks about how hypnosis doesn't induce a sufficiently different brain state to warrant the term, but the term was coined to describe the state the hypnosis induces, which is subjectively distinct from normal waking consciousness. And from what I've heard from those who've experienced it, it's characterized by a reduced awareness of the self, which is the defining feature of a trance, if the term can even be said to have any definition outside its uses to describe hypnosis.
Agreed. I despise political correctness yet appreciate this list, using precise language is always a good idea. I clicked in ready to see "shell shock" being renamed for the third time in less than a century, but that type of shenanigan is not what this article is about.
> I clicked in ready to see "shell shock" being renamed for the third time in less than a century
It’s interesting you’d say that because I always understood the term Shell Shock to have been abandoned in favor of a more accurate and descriptive term. So I think if it was still in use by psychologists today (for some reason), I would have expected to see that on this list, and see the authors argue for the far more accurate term Post Traumatic Stress Disorder.
As a case in point, this list does include Multiple personality disorder and argues we should instead use the more accurate term dissociative identity disorder.
"Shell Shock" of course being the feeling when you discover that you are not using bash 4+
=== End joke section ===
Shell shock has a genuinely complex and interesting history. It has described different conditions, had different connotations and value judgements, and had different supposed causes over time.
As I understand it, shell shock has at least two very real components: Traumatic brain injury and Post traumatic stress disorder - they are often found together, and they act together to make things much worse than either by themselves. Mainstream understanding of the harm of repeated mild traumatic brain injury is very recent, relatively speaking.
I don't see the difference. Both "Politically Correct" and "Psychiatrically Correct" are addressing the same concern of misconceptions due to inaccurate or misleading language.
You can be houseless but not homeless, and still need help with your housing.
Unhoused vs homeless looks like a good example of a euphemism treadmill (in progress). The person's state is not altered by the "finer" wording; instead it just sounds pretentious and condescending.
There are a lot of these politically correct terms that are being used more cynically by now, but I'd prefer not to open that can of worms.
That’s just a straightforward description of changes in the DSM. “Alcoholism” was split into “alcohol abuse” and “alcohol dependence” back in the 80’s, but in the intervening years the consensus view settled on treating these as separate symptoms of a single disorder. This is reflected in the DSM-5, which uses the term “alcohol use disorder”. The purpose of the linked article is to ensure practitioners are using consistent terms when communicating in a professional context.
Pedantically, I find "alcohol use disorder" more accurate to my internal definitions of condition and disease.
What we call things matters: even slight changes in wording affect how people perceive things - so great care should be used when deciding what vernacular we adopt generally.
In my mind, an alcoholic is someone who has a dependency on alcohol, but not all people with AUD have a dependency on alcohol. For example, I think someone who binge-drinks consistently (college students on weekends, for example) can be considered to have a mild AUD, but that person is not necessarily an "alcoholic" - at least not what the average person considers to be an alcoholic.
It makes sense to have a distinction between the word disease and condition and a disorder.
A disease, in particular, comes from a foreign agent: bacterial/fungal/viral/prionic/parasitic.
A condition or disorder is an integral structure gone awry, or rewritten (by chemicals or by incoming information) so that it goes awry: the key distinction from disease being that there is no other 'lifeform' involved (yes - include viruses for simplicity's sake).
So "alcohol use disorder" vs "alcoholism" reflects a more accurate state of reality (there is no bacterium/fungus/prion/parasite/virus that is the CAUSE of alcoholism - a chemical rewrote your brain, which has been damaged to the point of potentially requiring alcohol to survive).
The pushback I've encountered had nothing to do with being 'correct'/'sensible' and everything to do with protecting the feelings of people suffering from AUD - and coddling people with fluffy medical terms in the aim of removing responsibility from oneself is not a worthwhile goal...
This is not the case. Often the "politically correct" version is not actually more correct, but is replacing some common phrase that is declared obnoxious by some very small unrepresentative minority of people who coincidentally make their living as activists.
Yes, you can be houseless but not homeless, but you can also be homeless and not houseless. It's an utterly stupid distinction predicated on an intentionally incorrect, close, and literal-minded reading of the term, done for political reasons.
One iconic precursor to PC speech was the redefinition of role titles. "Janitor," which everyone was familiar with, would be replaced with "sanitation engineer" or something similar which conveyed less about what the person actually did.
I think it's worse than that because it actually devalues the role of engineer, and the role of janitor by presenting it as something it's not.
A sanitation engineer sounds like someone that knows how to safely build sewers or run a sewage processing plant. i.e. something you'd have a formal education and be accredited for.
With respect for honest hardworking janitors everywhere, that's not the same thing as minor plumbing or ensuring that places are properly cleaned. Neither task is engineering.
-
“I understand that you’re a neurosurgeon.”
(Bert grins.) “…No; I’m a barber. But a lot of people make that mistake.”
It kinda sounds like classism or something wage-related; "janitor" or "cleaner" sounds common, while a "sanitation engineer" or "chief housekeeping manager" sounds like they would earn more.
Would I prefer to be a janitor earning $20/hr, or a sanitation engineer earning $15/hr? Sounds like I'm trading title for less money. There is probably some relationship with the now-passé "rock star" and "ninja".
It's fluff (trivia) but it's real. Those titles exist. If Buzzfeed has a listicle of the most populous cities, does that mean those cities are not real?
I think I agree with the sentiment, but being overweight is a characteristic of someone’s body. A fat arse is a part of a body. You can be overweight without a fat arse and vice versa, those are not identical.
I would naïvely think that “fat arse” is vulgar or insulting enough (or at the very least way too informal) not to use it with someone you’re not familiar with. This has nothing to do with political correctness.
You can ruin some people’s day by calling them ginger. I would expect a psychologist to be sensitive to this sort of things (do no harm, indeed), but that’s not a good reason to make everyone stop using that word.
Psychological safety is utterly subjective. There cannot be a general ruleset. Although of course most doctors will not call you a fat ass. Maybe if he knows you more closely...
It doesn't. Doctors and psychs feel entitled to treat their patients in any abusive and harmful manner. It won't matter, because nobody would believe them anyway.
A huge percentage of doctors are abusive scumbags. Everything you hear about medical ethics, they do the opposite.
They are bigoted, emotionally abusive, gaslighting monsters who stubbornly refuse to self-reflect.
Enforcing others to change their language is mostly about power. Any feel good explanations of why it needs to change is justification for executing that power.
It could also refer to the difference between living in someone's spare room, couch surfing, or living in a camper van vs having a home (rented or bought).
Children living with their parents don't own houses, but they aren't homeless either. Or adults for that matter. For a few years in college I lived in a spare room in my grandparent's house. I had no house of my own, but I certainly felt welcome and wasn't homeless by any means.
> Nevertheless, the attitude-change techniques used by so-called “brainwashers” are no different than standard persuasive methods identified by social psychologists, such as encouraging commitment to goals, manufacturing source credibility, forging an illusion of group consensus, and vivid testimonials
The brainwashing techniques used by various cults, criminal gangs and regimes go far beyond these gentle methods. Particularly, I sure hope social psychologists don't use torture. (Social isolation, food deprivation, and much worse.)
> I sure hope social psychologists don't use torture.
The phrasing was very specific: " standard persuasive methods identified by social psychologists". You don't need to use a method just because you've identified it.
> I sure hope social psychologists don't use torture.
Read up on psychologists' participation in the Guantanamo prison camp torture program. Indeed many psychologists did use torture, and it is something of a really dark spot in the history of the American Psychological Association.
The broader history of almost any profession/industry is similarly checkered. Engineering, farming, medicine, charity/aid… humans will do bad things whatever field they’re in.
I canceled my APA membership over this. People asked APA to condemn it and they did not. Weak sauce on my part (to just cancel) but yeah, that was not cool.
This scandal generated a whole host of reports; it collapsed the APA leadership. Books have been written about it. I am not a journalist, nor a historian, and I don't have enough knowledge about it to answer any detailed questions, other than the fact that it happened. I'm sure you can find information by googling e.g. "APA torture" or "James Mitchell" and "Bruce Jessen", the two most prominent psychologists guilty of torturing people.
Edit period is over, so I'm responding to my own post. My post actually inspired me to do some googling, and here is an excellent interview from 2009 that goes into the personal story of these two torturers; hopefully it answers some of your questions:
Do note though that this is before the scandal had reached the APA leadership (which happened in 2013 if I remember correctly) and before any legal actions were taken against them. So there is definitely more to this story that has emerged since 2009.
Thanks, super interesting. In addition to the lack of ethics, it sounds like there was no proof that the methods would even work, and it was pseudoscience.
Uh, I'm having more and more trouble believing this is just an increase in diagnosis rates. That's what we said 13 years ago when I was graduating college, and yet the rates have actually increased very dramatically even since 2010 (from 1/68 to 1/44) a 54% increase.
I appreciate that it's useful to consider alternative explanations of data, but presuming an alternative explanation is valid for over 20 years without hard data? Really?
Increased understanding of autism on a societal level. 20 years ago autism was just the 'weird savant kid who screams, and everybody can't understand why they can't <XYZ>' disease. 10 years ago, we saw depictions of autism that were less stereotypical in mainstream entertainment, which prompted some to see themselves reflected in these characters and to self-reflect on whether they're autistic too.
In the years since, autism communities on the internet and the psychological community have come together to help folks realize that there is a wider spectrum of how autism manifests. And the increased visibility of autism, and increased societal understanding of the nuances involved with autism, have led to folks who previously thought of themselves as neurotypical as realizing they are autistic too. Who then inform their family and friends, who may come to realize that they (or their friends, coworkers, or children) too might be autistic as well.
It's not an epidemic, it's merely language and labels being more accurately used within society.
> After accounting for methodological variations, there was no clear evidence of a change in prevalence for autistic disorder or other ASDs between 1990 and 2010. Worldwide, there was little regional variation in the prevalence of ASDs.
1) That's a "meta-study," which is usually a good thing, meaning it summarizes results from numerous other studies.
2) Just because one meta-study says something doesn't mean the topic is de facto decided and over. Many controversial topics have many meta-studies.
3) Notice I said since I graduated college in 2010 autism incidence went up 50%. The study you referenced ENDS in 2010. Have the diagnostic criteria changed AGAIN for 13 more years?
4) This article didn't say "We disproved there was an increase in prevalence of autism." They said they couldn't find evidence of that. Absence of evidence isn't evidence of absence.
5) We already know for a fact that various stressors, including pesticides in utero (or pollutants such as PFOAs, which are now in everybody's blood in utero), can contribute to autism. Any research that finds no increase/change in absolute rates of autism at all is highly suspect. Also, this article seemed to suggest that all countries had the same prevalence of autism, which would be at odds with the very different pollution profiles of different countries.
Maybe in 10 years the rate of autism will still be 1/44, but if it's 1/29 I'm gonna be even sicker of this "diagnostic criteria" stuff. It doesn't seem that hard to get around diagnostic criteria if we really want to -- just make a standardized test and give it to a large sample of children of age 10 (and never change the battery) for 5 consecutive years. The test could be facial expression recognition, theory-of-mind questions, etc.
There have been significant changes in diagnosis of autism in (overlapping categories): adults, women, people of color where the signs of autism were usually assigned to another diagnosis (anxiety, OCD, PTSD, bipolar, etc...).
I'm not a researcher so I can't really comment on the rest, but speaking for all my friends (me included) who finally realized we were autistic in our 40s and 50s, it's at least plausible that what looks like an epidemic is just people having a better understanding of the many forms in which autism manifests, beyond the non-speaking white boy who likes trains.
I spent all that time thinking I couldn't be autistic because look I have friends and a job and I can sometimes do small talk.
How breathtakingly insulting to the 100,000s of parents and caregivers of autistic children over the past 30 years, who have sacrificed 1,000s of hours, $MM of lost revenue, their own health, their own goals, and even attention that could have been paid to their other children and their community, exerting heroic efforts and sparing no expense to reach into a single child's mind to teach him or her basic skills and share some kind of human connection, starting from zero with no answers, no training, and sometimes not even family or a support network.
Unlike these scientists, most of these parents and caregivers will not be thanked or applauded for their work by society, but they only did it because it happened to be their child.
Let's not acknowledge that their efforts correspond to a real event - instead, let's dismiss all of the potential links uncovered and directions for future research, and wave it all away, feeling self-assured that we are being skeptical and rigorous while eating up taxpayer money from these same parents who still have no answers.
I have autism and only discovered it two years ago at age 33. I have been intensely researching it.
The book Neurotribes presents a very in-depth picture of the history of autism and the research and theories for the past century.
This is not saying that autism is not real, that it is not a thing to understand, or something that should not be researched. It is only saying that the idea that autism is On The Rise in a terrifying way, that it is something to be feared, is misguided.
As this article says, and as the book Neurotribes explores in depth, autism used to have much stricter criteria for diagnosis. Kanner, the leading researcher of autism for much of the 20th century, was convinced autism should only cover the most severe cases, and he did not like the spectrum idea. As time went on, the realization that many more people may have some aspect of autism, even if it isn't extremely severe, led the DSM to make the criteria much looser. The authors of the DSM particularly noted that this might make it seem like the prevalence was increasing when in fact it was simply more widely diagnosed.
The reason for caution of using the term "autism epidemic" is that it spooked many parents into thinking there was a horrible plague afoot, and that it needed to be cured.
My current understanding of it, in myself and in wider society, is that it has always been around, that those on the spectrum hold an important place in society, and that understanding autism and advocating for services to help those who have autistic children is more important than finding a cure (as one would for an epidemic).
This absolutely does not seek to discredit the indeed heroic efforts many parents have gone to to support their children, in fact by being more precise about what is happening the hope is that autistics like myself can have even better outcomes.
I didn't want to engage someone apparently so angry in that direction, but since you took the time to respond kindly and share your experience, I will add my comment too.
The centering of autism as being something happening to "the parents" and framing autistic children (or adults, for that matter) as being unable to form human connections or acquiring basic skills is something a large part of autistic people are pushing against.
The #actuallyautistic tag on social media is a good way to find autistic people's experiences regarding that matter.
Something I wish was more widely known is what is being studied as the "double empathy problem", which frames communication issues between autistic and non-autistic people as a two-way misunderstanding, rather than autistic people being handicapped or incapable of social communication.
I am skeptical that all of those people on Twitter are actually autistic and not just self-diagnosed.
Despite having a clinical definition and a defined diagnostic process, autism has become sort of an identity that people celebrate and use in a casual, colloquial way.
People have told me that I could have been diagnosed as autistic because of how I act as an adult, but I didn't have developmental delays. I had a pretty normal early childhood. So I don't identify as that.
So perhaps it's better for parents to leave the term "autism" for people who wish to self-identify that way, and to use a different term to describe the thing that happens to their children. Too often they end up clashing over this word, and I don't think it's healthy or constructive for either side.
So maybe the authors of the paper were right, but not in the way they wanted to be.
As a self-diagnosed autistic who came to that conclusion at age 40 (I dodged a diagnosis as a kid, which in hindsight was quite good, seeing how France in the '80s had some pretty rough approaches to it), and who believed for most of my adulthood that I wasn't autistic because "look how many friends I have", the reason people seem to "celebrate" it is that it finally gives context to a lifelong suffering and feeling of alienation.
It's not like it will make your life any easier: accommodations are hard to come by, an official diagnosis can come with significant discrimination, and if you made it this far, it doesn't feel like a great way to spend your resources. It does, however, allow me not only to realize that my experiences are shared, but to finally be able to learn strategies and find therapists that are helpful, versus ones that actually aggravate the situation (say, by framing you as argumentative and arrogant or schizoid or whatever, when what is going on is mostly a communication mismatch).
However, my nephew got diagnosed, and it allowed him to find a school that accommodates his cognitive style. He's gone from puking every day and turning into a shell of himself to being back to the bright and happy kid that he is. While in some ways I'm sad I didn't have that opportunity, it makes me so happy to see that the field has made progress exactly by widening its diagnostic net.
> It's not like it will make your life any easier: accommodations are hard to come by, an official diagnosis can come with significant discrimination, and if you made it this far, it doesn't feel like a great way to spend your resources.
These reasons and others are why I do not believe the rise in autism is due to a change in diagnosis. There are very few incentives to getting a diagnosis and so many obstacles. It also doesn't take into account that many autistic children have their diagnosis dropped as adults.
Diagnostic criteria have definitely gotten broader, and plenty of adults go through with it (I probably will) just to not feel self-conscious about the "self-diagnosis" thing. There are a lot of upsides to framing yourself as autistic, as it makes navigating the world so much easier. These broader diagnostic standards, along with better accommodations for autistic people (versus just brutally forcing them to conform), are what made life so much better for my nephew, for example.
Thank you for sharing your thoughts despite me clearly having my feathers ruffled a bit.
Do you mind sharing whether you were formally diagnosed with autism? And did you have any developmental delays as a child? If so, how did your parents deal with those?
Based on what I've seen from friends and family with autistic children, and how their lives and family dynamics diverge so drastically from the established, well-trod tracks of parenthood traveled for centuries, looking incomprehensible to outsiders, I don't believe it's possible for the current rates of autism to have been constant throughout history. If they had been, some phenomenon similar to autism parenting would have been known to us (though not necessarily by that name).
The paper suggests that what looks like an epidemic is due to better diagnosis, so that people previously not diagnosed now are. It doesn't discredit autism.
My wife is a pediatric occupational therapist and I'm a physician in government. I'm still not sure which direction your comment is pointing: is it insulting that the term "autism epidemic" exists, or that this paper suggests it should be avoided?
Then you went on a very angry and incoherent rant for nothing. You seem to be angry because you don't understand what the word epidemic means.
In the spirit of this article, epidemic means something specific: the rapid spread of a disease over a short period of time in a specific region. The article is not saying that autism isn't real or that people aren't suffering from it; it is saying that it's not an epidemic. It is saying there has been no rapid increase in autism over a specific region, but rather a change in how it's diagnosed and in people's awareness of the condition, and that if those factors had been held constant, autism rates would also have appeared constant.
If it was for nothing, then it was only because I chose the wrong audience. Nothing about my comment indicates I misunderstood the intent of the authors.
You're misinterpreting the purpose of the article. It's not saying that the rise in diagnoses wasn't real; it's arguing that calling it an "epidemic" implies that there is more autism than there was previously, while the authors posit that the increase is simply due to better diagnosis.
There are not more people with autism. There are more people that have been diagnosed. Those are different things and calling it an "epidemic" implies that it's the former.