The story reminds me of the HR practice of 'measuring' personality with questionnaires to judge people's competence for certain positions and determine their career path inside the organization (e.g. at Trimble). Imagine you are a software engineer applying for an engineering role and they ask you a hundred or more questions comparing the importance of aspects such as punctuality, trust, patience, lawfulness, honesty and several more, three at a time, to be put in a strict order of importance. Without any specifics on the work context and situation. Then they build a report with dozens of components on how good you are at this and that on a 1-to-5 scale. The way you work. Based on answers to strange questions, not actual work.
When it is impossible to answer accurately (because it depends, or because there is no ordering, as between playing the piano and being tall), the results will be inaccurate, yet they use them to classify the workforce like steel by its properties, determining people's fates. Assessing work performance before any work is done. And they take it dead seriously, like a gauge on a pressure tank.
It is just so... well, dumb, forcing an oversimplified measurement onto complex and fluid things. It sounds like measuring friendliness by the meter. People change, people adapt, people behave differently in different circumstances, very differently, and definitely not how they admit to it. No one can count how many influential factors there are; these robotic, unified approaches just distort, for the purpose of appearing fair.
It is far from fair, it is just robotic, which is ironic from a department called human resources; it feels like robots work there, not humans. Measuring a fish by how high it can jump before allowing it to swim.
I thought for about 6 months about whether I should call companies out when I'm really unhappy with their recruiting process. I decided the potential increase in transparency is worth the risk.
I once applied to ING (the biggest Dutch national bank). They gave me a test like this and rejected me because I'm not responsible enough.
The irony is that I was complimented on my responsibility when I was an instructor at a coding school. This was especially notable in light of all the other teachers resigning because they couldn't cope with their classes.
I told ING this. I told them I'd work for free for 3 months so they could try me out. Nope, I'm too irresponsible, never mind that I completed my 4 academic study programs on time with high marks and did extracurricular stuff.
It is bullshit indeed. IMO, the real answer is: Mettamage seems responsible based on his past accomplishments. Based on the questionnaire, he doesn't rank his own responsibility a lot lower than the average person does. Time will tell whether he is responsible.
If you ever need to take such a test, get yourself in the mindset of "I am a loyal, hard working person", and good results will flow from that one. Just think what kind of employee a company wants to have.
It's easier to fill it out on a test than actually be hard working for a couple of years, so the test is the easy part ;).
My point is more that it is best not to get involved with organizations that employ such evaluation practices on their people.
Even though the technical interviews went very well, I quit the recruitment process when faced with this mandatory step. I chose not to take those tests. Previous experience shows that places with such a mechanistic approach are not worth working for. I tried to talk my way around this kind of test, asking for the reasons and proposing more position-relevant substitutes, but no: rigid refusal in polite wrapping. They actually required two kinds of tests, one on personality and one on generic ability, a third of which (ca. 6 questions) was calculating percentages and sums quickly in a financial context. Basically adding and multiplying numbers very fast. Irrelevant, but the results are taken seriously.
Yes, great point. A space engineer can feel super inadequate when it comes to systems thinking (since she/he is surrounded by geniuses) but a smart barista can feel like a 5 on the 1 to 5 scale. (Of course the barista might be better than the space engineer, but you get my point).
I was once called in to an extra meeting with my future employers because of my answers on one of those. I mean, I'm pretty analytical (I'm doing my PhD in engineering this fall), yet I didn't give myself the top score on that (and a lot of other metrics), as I know people who are a lot more analytical than me. After working at that company for a few years (and quitting), I realized I should have maxed out many of those metrics. All in all, there's so much bias involved in those questionnaires, and I don't think the people working with them realize that. At least most of them don't.
And that is without going into how skewed these questions are. Yes, I can be motivated to do a good job without answering as if this job were the most important task in my life, behind breathing and ahead of eating.
I don't know about this Trimble outfit, but many of my friends and relationship partners have found value in similar tests like enneagrams. In my experience, they do help distill relationship dynamics into a smaller set of more understandable patterns. No, it's not rigorous hard science, but neither is most of what we might speak about with a therapist. They can be great frameworks to build on, if you trust the people you're building with.
Then again, given the power dynamics when a company tries to work with you on these, I can easily imagine it feeling threatening. After all, my friends care about both my personal growth and our interpersonal growth -- a company mainly cares about how I can most benefit them.
Anyhow, I suppose I'm defending the class of methods (as I've experienced them), outside the context :)
There is a big difference between using the Enneagram to get to understand yourself and others better and using it to judge people. No profile is 100% correct, but some companies act like they always are.
It's similar to unit test code coverage: a great tool to point to areas that might use some attention, but a bad tool to determine whether a commit should be rejected.
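To make that analogy concrete, here is a toy sketch (all numbers invented, not from any real project) of coverage as a gate versus coverage as a pointer:

    # Toy numbers, purely illustrative.
    files = {
        "parser.py":  {"coverage": 0.95, "recent_bugs": 4},
        "utils.py":   {"coverage": 0.40, "recent_bugs": 0},
        "billing.py": {"coverage": 0.55, "recent_bugs": 7},
    }

    # Coverage as a gate: reject anything under an arbitrary 80% bar.
    # This blocks harmless utils.py changes and waves parser.py through.
    gate_rejects = sorted(f for f, d in files.items() if d["coverage"] < 0.80)

    # Coverage as a pointer: weight untested code by how buggy it has been,
    # and spend review attention there instead of rejecting commits.
    risk = {f: d["recent_bugs"] * (1 - d["coverage"]) for f, d in files.items()}

    print("gate rejects:", gate_rejects)                # ['billing.py', 'utils.py']
    print("attention first:", max(risk, key=risk.get))  # 'billing.py'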
This is such a horrible article. Many have already mentioned the conflation of humanities and social sciences. Then, by using the example of books (because most would agree books would lose their essence if we just described them by numbers), it constructs a straw man that somehow we lose something (the beauty?) by using quantitative measures.
Sure, many studies have flaws (not just in the social sciences), but what is the alternative: that we simply use theories because of their "beauty" (whatever that means)? Shall we start psychological therapies just because someone thought it sounded good, instead of measuring if it works?
Given the title I expected to find an interesting article on what can go wrong if we overly rely on quantitative methods in the humanities. But this article doesn't distinguish between badly-applied quantitative methods and where the limits of those methods are even if they are executed well.
For example:
> We look at instances where the effect exists and posit a cause—and forget all the times the exact same cause led to no visible effect, or to an effect that was altogether different
This just sounds to me like bad quantitative modelling.
There is a huge argument to be made for qualitative research, and there is much-needed criticism of the idea that "hard" methods are more valuable than "soft" methods. I think this article manages neither.
When the measurement is flawed you will not know if the therapy works. Sorry.
And the measurement is flawed! Actually, there is a crisis of reliability concerning modern research papers.
(Probably this is the source: many are trying to build a castle out of dough?...)
Flawed is not completely useless. Even a flawed, p-hacked measurement can efficiently distinguish big effects (see the toy simulation below).
Plus even if all the previous measurements were totally useless, that doesn't mean we should just give up and stop trying to measure soft stuff.
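A toy simulation of the big-effect point (all numbers invented): a biased, noisy instrument still separates a large effect from zero, because the bias cancels in the comparison, while a small effect stays lost in the noise.

    import random, statistics

    random.seed(0)

    def measure(true_value, n=50):
        # flawed instrument: constant bias of +2 plus heavy noise
        return [true_value + 2.0 + random.gauss(0, 5) for _ in range(n)]

    control  = measure(0.0)
    big_fx   = measure(10.0)   # big true effect
    small_fx = measure(0.5)    # small true effect

    def estimated_effect(baseline, treated):
        return statistics.mean(treated) - statistics.mean(baseline)

    print("big effect estimate:   %.1f" % estimated_effect(control, big_fx))   # ~10, clearly visible
    print("small effect estimate: %.1f" % estimated_effect(control, small_fx)) # lost in the noise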
No, it's quite the opposite. Instead of many small underpowered experiments and studies we should be spending on fewer but well designed and run larger ones. (Even if that's naturally harder.)
So are you arguing that useful measurements are impossible for specific topics or are you just stating that flawed research is less useful than it could be (which it obviously is)?
I can recommend having a look at the book "How to Measure Anything" to get a sense of how to apply measurement techniques in "soft" contexts.
> Shall we start psychological therapies just because someone thought it sounded good, instead of measuring if it works?
Yup. All psychotherapies were started just because someone thought they sounded good; and are still being practiced because they sound good. There are hundreds of different schools of psychotherapy today. There is a movement toward testing the effectiveness of therapies, but they just show whichever therapy the researcher fancies is the best, and metastudies reveal about equal effectiveness of all the major therapies.
The point is, I think, that quantitative measures are not lifting the scientific status of a field automatically; and can mask the lack of sound foundation.
That's incorrect. Meta-analyses show all widely accepted therapies are effective precisely because public health authorities only fund and support scientific therapies. There are plenty of great-sounding therapies which are not effective and thus not widely used.
You've missed the entire point. A sort of conflation is the problem - not by the author, but by people in and around soft sciences and humanities throwing a bit of statistical jazz into their papers and then drawing ostensibly rigorous conclusions which influence social policy.
The reality is that by their very nature, both the soft sciences and the humanities (there is a lot of overlap) cannot be held to the same rigor as, say, mathematics, physics, or chemistry. These fields are pure theory (like gender studies), non-experimental (like psychology), and fundamentally unfalsifiable in the majority of cases... but laymen, and apparently government officials, either don't understand this or pretend they don't. Either way, shitty policy and legislation get passed, and innocent people (society) are frequently worse off.
This is 2020, and while in the past attitude X or behaviour Y might have been acceptable, we have now come to understand (through social science studies, or the NY Times bestseller that a particular academic has written) that both are wrong. Toxic, in fact.
Please address the issues directly and name your sources rather than just giving a general progressive word salad. I'm ready to support you but you don't convert people by brushing them off as out of date.
That reminds me of a quote from Kurt Tucholsky: “Sociology was invented so people could write without experience”. A crisp way to summarize your point. Although I don't think that sociology can't be better than that.
I've never seen a half-decent study in social sciences that draws rigorous conclusions: what I have seen is such studies noticing a correlation and then the mainstream media picking it up as a de facto conclusion. To take the simplest example we've all seen: "people who have more sex are happier", directly implying that having sex leads to happiness, whereas the studies only noticed a correlation between people who claim to be happy and sex frequency.
But you are right about what happens next: you top it off with academically uneducated (or simply unaware of scientific rigour) politicians like Trump (he's just an obvious example, far from being the only one) making calls on different social topics.
> I've never seen a half-decent study in social sciences that draws rigorous conclusions.
Perhaps you should look harder. The dominant approach in economics for 20 years has been to reject correlational studies and try very hard to get at causality, by:
* Running randomised controlled trials, often at scale (see eg Esther Duflo);
* Laboratory experiments, which have provided a body of robust paradigms and results;
* Seeking natural experiments;
* Statistical techniques like regression discontinuity and instrumental variables (a toy sketch below).
There's plenty of bad work in the social sciences. There is elsewhere too, in the natural sciences (cough, Lancet). And there's plenty of good work as well.
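To give a flavor of the instrumental-variables item above, here is a minimal sketch on simulated data -- every number invented, nothing from a real study:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    u = rng.normal(size=n)   # unobserved confounder
    z = rng.normal(size=n)   # instrument: shifts x but has no direct effect on y
    x = z + u + rng.normal(size=n)
    y = 2.0 * x + 3.0 * u + rng.normal(size=n)   # true causal effect of x on y is 2.0

    ols = np.cov(x, y)[0, 1] / np.var(x)           # biased upward by the confounder (~3.0)
    iv  = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]  # simple Wald/IV estimate (~2.0)

    print("OLS: %.2f, IV: %.2f, truth: 2.00" % (ols, iv))

The naive regression blames x for variation actually caused by u; the instrument isolates only the variation in x that is unrelated to u.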
FWIW, I realise I didn't phrase it correctly. What I wanted to say was that any non-terrible ("half decent") study does not attempt to draw a final, black or white (I wrongly used "rigorous") conclusion, but that mainstream media will do that instead by choosing a particular interpretation of the study results.
I did not want to imply that social science studies are non-rigorous, I was actually trying to defend their scientific nature, but with an incorrect phrasing.
Economics is not really something people think of when they talk about humanities or social sciences.
Indeed, in Econ grad school I learned a lot about control theory, statistics, dynamic programming, etc. But I was never told to read Foucault, Levi-Strauss or even Marx - something that sociologists and other people in humanities usually have at least a basic understanding of.
If we judge what is science by the level of quantitative rigour, then economics is the only social science.
There are definitely areas of overlap with social sciences--especially these days. Behavioral economics (for which Richard Thaler won a Nobel Prize a couple years back) grew directly out of behavioral psychology for example.
Articles, and ideas like this in general, always seem too dismissive. CPUs are nothing like brains because: intricacies. Algorithms can't be applied to literature because: intricacies. It is almost an appeal to emotion: rather than saying "OK, we tried to apply these electronic constructs and algorithms in order to understand ideas and literature better, we got nowhere, and here's exactly why", they say "it can't be applied, oh no it can't, our humanity is so distinct and precious, there are some lines not to be crossed, and here's a million contrived examples as to why".
And to this approach I say, ignorant and cowardly.
I'd hesitate to even call them intricacies. Billions of years of evolution in biochemistry and multicellular organisms, half a billion years in nervous systems, and over 200 million years in mammalian brains have culminated with about 50 million years of evolution of the primate brain. For every human being that has ever lived, nature has run 10^X continuous brute force "simulations" to arrive at our present civilization, where X is a ridiculously high number that's impossible to even estimate.
Sure, our intelligence allows us to skip a lot of those processes just like it allowed us to escape our gravity well and explore our solar system, but the jump from CPU to brain is like the jump from moon landing to intergalactic travel. The discrete nature of digital electronics alone prevents them from matching neurons because of sampling, let alone their lack of architectural (i.e. neural) plasticity. It's like trying to weld with a q-tip.
Nature isn't limited to intentional design though. I'm not sure what advantage we have when a few thousand people try to progress technology over their lifetimes versus about 7 billion iterations governed by natural selection, and that's if you only account for a single species.
Also don’t forget that a single life can also produce multiple iterations of itself over its life span.
Designs are limited by what we understand and can imagine as well as other constraints, nature not so much.
Can't upvote enough. It's easy to dismiss something you don't understand, and reckless to assume you do. It's attitudes like this article's that led Planck to say, "Science progresses one funeral at a time".
If learning, math or otherwise, doesn't destroy your ego, you can do better. There is no shame in vocalizing uncertainty or a feeling of not understanding. Look at Stephen Hawking: he had the courage to admit he was wrong and was one of his own best critics.
Ego has no place in progress - scientific or otherwise.
Right, and if Bernoulli had thought "omg, birds evolved for a billion years, we can't simply model their wings to lift a plane, look at those intricate feathers!", or the guy who first suggested CNNs had thought "the human brain! A hundred million years of evolution from early primates! We can't simply use the neuron structure to create anything better, it's all over, oh no!", we simply wouldn't be where we are standing today: on the shoulders of pretty smart people and their brave ideas.
This is a repetitive concept in modern society, just look at astronomy 100 years ago. People want their science to be more science than it is. If you build your models on faulty foundations, they don't mean anything. Many psychology studies use American college students, as if you can build a model of society in an international context with such a specific subset of people. This is valid criticism.
This conflates social science with the humanities. I agree with respect to the humanities. Less so with respect to social science. People in large groups do in general follow certain trends and we do have decent ways of establishing causation. Now there is a lot of stuff that people do that we can't analyze quantitatively due to lack of data or other blockers. For these qualitative methods will have to do. Fine, but then we should accept that the conclusions from qualitative analysis are less firm and judge them accordingly.
We can't predict how one person will react to a situation. But sometimes we can estimate, on average, how a large group of people will react.
Even a lot of engineering deals with similar types of uncertainty. We build huge structures out of metal, concrete, and wood on top of soils and other geological materials, often with the assumption that they are all homogeneous with constant material properties throughout the structure (they aren't). However, we can do this because the variations in properties tend to average out as the material sample gets larger. If we try to make the same assumption at very small scales, we find that there is a lot more uncertainty and our predictions aren't going to be as accurate. We see this in physics and many other scientific fields as well.
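A toy simulation of that averaging argument (purely illustrative numbers): the mean of a large sample scatters far less than the mean of a small one, roughly like 1/sqrt(n).

    import random, statistics

    random.seed(1)

    def mean_strength(n_grains):
        # each "grain" has a noisy strength around a nominal value of 100
        return statistics.mean(random.gauss(100, 20) for _ in range(n_grains))

    for n in (3, 30, 3000):
        trials = [mean_strength(n) for _ in range(1000)]
        print("n=%4d  spread of sample mean: %.2f" % (n, statistics.stdev(trials)))
    # spreads come out around 11.5, 3.7, and 0.37 -- shrinking like 1/sqrt(n)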
I think the real problem is that the humanities -- the social sciences in particular -- have proven themselves to consistently build bad models with unproven assumptions. These assumptions have been leveraged to directly influence policy (instead of, say, trying a lot of different things and seeing what works).
Here's a concrete example of a paper [1] that just came out, attempting to impact COVID19 policy, from a very "respectable" set of academics at Yale -- that is based on a flat out fabricated economic model:
> We focus primarily on the moderate scenario. That is, our baseline assumption is that diminishing returns play a larger role than accelerating returns (so that α ≤ 1) but not so large that they lead to α < 0. We stress that U depends both on the variation in economic value attached to different activities and on the model governing the disease transmission
Translation: we made some equations that makes the BAD thing BAD and the GOOD thing GOOD.
> I think the real problem is that the humanities -- the social sciences in particular
This construction makes it seem like you view social sciences as a subset of humanities. I was taught that neither is a subset of the other.
Where did you get your impression from? I assume it’s not the university in your username, since I’m certain they don’t share this view. (Source: I’m married to a humanities professor at that university. Also: https://exploredegrees.stanford.edu/schoolofhumanitiesandsci...)
They make assumptions and then foreground them so the reader can understand them. What's wrong with that? Have you discovered a way of drawing conclusions without assumptions? Or do you think you can "just observe" without being guided by theory? What would a non-"fabricated" model look like?
The problem is that the assumption is not rooted in any empirical phenomenon. They have neither historical data to fit (which would itself be tenuous) nor any sort of first-order justification for it.
It's like doing a car crash safety simulation and stating that the airbag provides protective force "linearly proportional to the force of impact, with alpha > 1" ... without ever measuring it or doing any calculations on why this would be the case. Would you drive in a car that was tested this way?
I'm too damn lazy to read the paper, so you may be right. But normally, we practice a division of labour. Theoreticians build models in the expectation that empiricists will (a) test them and (b) estimate key parameters. If these guys have said "we have the answer, this is what you should do" then maybe you are right.
I'm not saying either that this phenomenon never happens. I just think it's rather rare, especially among sensible and skilled economists. See also the great Bob Sugden's paper "Credible Worlds", which is available here at the moment.
It's a paper on epidemiology, that utilizes econometric thinking. I understand the theoretician / empiricist divide very well. But I chose this paper to illustrate because it is very clearly aiming to impact immediate policy (it states this in the abstract). Furthermore it is not trying to create fundamental new theoretical models... but rather provide a justification for why lockdown is economically justified.
I hate to be the one to say this -- but most of modern economics is crap, given how reflexive it is. People know how the fed acts -- they know how policy makers act. The models don't take into account this reflexivity.
Someone like George Soros is a thousand times more savvy on how the economy works than post-docs publishing econometrics papers for $40k a year.
Which fields are you referring to? I think of psychology as perhaps more rigorous, but none of the others (sociology, anthropology, political science).
Sociologists often complain that economists don't bother reading established peer-reviewed sociology and opt for their own guesswork based on their own intuitive observations whenever they attempt to incorporate sociology-lite into their writings.
This is a problem I find tiring about most studies, even in the sciences (excluding most physics and all mathematics). It seems as if most researchers are looking for a novel or interesting correlation, or to apply an interesting model to a new domain. Generally, I've found this puts insufficient effort into any kind of disconfirmation.
It makes sense, though, given the incentives most academic journals create for study authors. Nothing's sexy about disproving some random model fit.
Replication and publishing negative results do not get the respect they deserve. They should be given equal attention and funding as novel results. That would fix the replication crisis.
Are they not published in the major journals, or are they not published at all? The latter would seem insane. "We've tried this, it didn't work, but nobody will know and somebody will try it again next week to find out that it doesn't work."
They get published in lower impact journals because it's not as sexy. They get less funding for similar reasons.
I think a change of bureaucratic structure might be needed, like funding for every study should include funding for at least one independent replication.
Not only would the replication itself cull some of the false results, but knowing a replication was coming might make researchers more open and honest, ie. less p hacking, document more of their methods and in greater detail, etc.
>We can't predict how one person will react to a situation. But sometimes we can estimate, on average, how a large group of people will react.
I think the issue here is very similar to the one in predicting the weather. The uncertainty of our models increases so quickly that we can only predict a relatively short amount of time ahead (in the case of weather it's about 1 week [0]). We can see patterns in human behavior, but the uncertainty just grows too quickly for long-term predictions. Small things that are hard to account for can make a big difference in where even large groups of people end up at.
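A toy illustration of that kind of uncertainty growth (a logistic map rather than a real weather model, purely for flavor): two trajectories starting a billionth apart become effectively unrelated within a few dozen steps.

    # Two runs of the chaotic logistic map x -> 3.9 * x * (1 - x),
    # starting 1e-9 apart. The gap grows roughly exponentially.
    x, y = 0.400000000, 0.400000001
    for step in range(1, 51):
        x = 3.9 * x * (1 - x)
        y = 3.9 * y * (1 - y)
        if step % 10 == 0:
            print("step %2d  gap: %.2e" % (step, abs(x - y)))
    # after ~40-50 steps the two trajectories are essentially unrelated,
    # which is the same reason weather forecasts go blind past about a week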
When you describe continental drift, it doesn’t make the plates go faster. When you do economics, it changes the economy. Social sciences have a worse reflexivity problem than the arts, because with art, nothing turns on whether it’s right or wrong. With social sciences, if your hypothesis is wrong but widely believed, you can still kick off seventy years of communism.
Yes and no. There are large areas of economics where describing what is going on doesn't change the outcome.
E.g., a free market puts tremendous downward pressure on profit margins. Identifying that means the powers that be fight against free markets by hook and by crook -- but it doesn't change the fact that if there is a free market, it will find an equilibrium where people are indifferent to starting a new business.
The last forty years has been shaped by neoliberalism, which is exactly based on the idea of free markets as self-correcting. You’re so blinded by it that you’re just assuming that the theory is correct, so therefore the theory can’t have an effect!
A) empirically, that was wrong and disastrously so (not as bad as communism but very bad) B) assuming reflexivity can’t exist is silly even within the neoliberal framework. It’s like assuming that the dollar bill you see on the sidewalk must be fake because it’s there.
If you want a citation: that little theory is vaguely from Karl Marx, Chapter 13 of Capital, Volume III, according to Wikipedia [0]. Pretty good theory, and somewhat older than 40 years.
> The last forty years has been shaped by neoliberalism, which is exactly based on the idea of free markets as self-correcting.
This is not only wrong, it's laughable. Governmental interference in the economy has grown over the last forty years, not shrunk. And not with good outcomes, either: the crash of 2008-2009 was due to too much government meddling over a period of decades finally catching up with everyone.
Irresponsible speculation in dubious securities by investment banks who knew the government would bail them out (or bail out the insurance companies underwriting their risk, which comes to the same thing) because they were "too big to fail".
And the dubious securities themselves were derivatives based on the housing market, which was in a huge bubble caused by government policies that basically forced banks to lend to people who couldn't afford the loans in order to encourage home ownership, as antepodius pointed out.
I agree with the core premise (scholarships of society are more reflexive), but this nonetheless made me wonder about the relative rates of incarceration and suspicious death among practitioners of the various arts and social sciences.
The problem with eugenics isn't the idea of selective breeding, but the conflation of mostly irrelevant or uninheritable traits with important and inheritable ones, leading to a loss of genetic diversity and a concentration of genetic diseases.
“If done right”, but I doubt it can ever _be_ done right...
Huh? You can cherry pick an attempt to do something in any discipline and then go on to doubt if it can ever be done right.
What you said is not even an argument - it is just an expression of personal preference towards specific actions.
I like apples, you like bananas, where do we go from here?
I heard somewhere that there are three types of information: facts, fiction and fiction masquerading as facts.
It seems to me that you're engaging in the third type, as are many others who hear an opinion that seems to make sense and aligns with their preconceived preferences, which they then go on to repeat with an incredible level of certainty. I'd argue 99% of what people say is of this variety, which is a problem because it is so pervasive, that hardly anyone seems to notice. It's like we're in the matrix, maaan.
> What you said is not even an argument - it is just an expression of personal preference towards specific actions.
Which is the whole point. Eugenics is “fiction masquerading as fact”, as you put it. The core hypothesis (selective breeding works) is sound, but the traits that historical Eugenicists were trying to breed into the populace were all subjective, fictional correlates of what they thought were desirable. It’s not about MY preconceived preferences, but about what preconceived preferences the person running the eugenics experiment has.
When I say it can’t be done “right”, what I mean is that what constitutes the “eu-“ part of eugenics, meaning “good” or “best” is inherently subjective, and the choices of a particular experiment may end up being counterproductive to the goal of creating the “best” humans.
This 'everything is subjective' nonsense is fiction masquerading as fact of the highest calibre.
People liking water and not liking dying of thirst is not subjective.
You're using 'everything is subjective' to defend 'I don't have the faith in humanity not to fuck it up', and you cannot do that, because 'everything is subjective' is a nonsensical claim. What you're really expressing is deep pessimism, which I happen to share :)
What I think we should say is, eugenics is an excellent idea, but the current ruling elite is too infantile to implement it in a way that wouldn't lead to a dystopia of one type or another.
---
Another interesting thought is to contemplate that we already enable some people to breed and raise children in far better circumstances than others, by design. We're doing selective breeding and providing selective advantages, and we always have. Is that not eugenics?
If you ask me, I'd rather we go extinct because of a scientific experiment of immense beauty, than the usual petty fight for limited resources where the rich sit back and watch, while the innocent slaughter each other by the millions.
Eugenics was an idea pondered by the ancient Greeks. I’m not defending it, but it’s not the brainchild of a certain genocidal political party that many people seem to think it is.
No - it's pseudo moral. It's reviled because proponents are reliably racist idiots who don't even understand the limits of the science - or why breeding humans like farm animals really isn't such a great idea.
The GP didn't seem to claim it was moral. Making nerve gas isn't moral either, but that has nothing to do with whether making nerve gas is pseudochemistry.
> It's reviled because proponents are reliably racist idiots
It's reviled because it's inherently immoral. Its proponents are indeed usually racist idiots, but even if they had the best of intentions, it would still be immoral.
Sure. But updating the equations doesn't change the outcome of the Michelson-Morley experiment or make a ball fall up instead of down -- it simply converges on a more precise outcome.
Social science is interpretation of data and much closer to history than to science. The data will often change over time, and thus so will the interpretation. It's very problematic that we treat it as science, because it can be politicized. To the extent it is science, it's very, very imprecise; it sits on the other end of the spectrum from something like physics, which draws very, very careful conclusions about very specific things, and even those are to a certain extent just interpretations. It should be called an interpretational discipline, not a science.
I’m not even sure the paper he starts off criticizing is that problematic - it sounds like it’s basically a quantitative study of various tropes in fiction and comparing it to real data.
Given that the study goes back as far as the creation of the Iliad, I have serious trouble believing that we have high-fidelity data about the social graphs of individuals at that point in history.
Pretty much all we have is second-hand accounts to begin with which may themselves be as unreliable as fiction. So on that particular case, I fully agree with the author. That's not actually scientific.
Oxford economist Kate Raworth has made exactly the same argument about her own discipline and the allure of the 'hardness' of maths and physics: the way early 20th-century economics increasingly turned to Newtonian-like mechanistic descriptions of economic processes, reductionist and absurd ideas about 'human nature', and the extraction of universal laws from historical and accidental correlations.
I do recommend reading the first half of her Doughnut Economics where she makes this case at length, from someone inside the discipline.
I admit I haven't read or even heard of Kate Raworth, but I am an academic economist. Most modern economic research is empirical: a paper poses a policy-relevant empirical question ('did policy X reduce unemployment?') and then empirical evidence is presented (perhaps using data from a randomized controlled trial or from quasi-experimental variation). The statistics are calculated, and the assumptions required for the validity of the statistical analysis are discussed critically and at great length. Undergraduate econ classes unfortunately leave students with the impression that academic econ is mostly unrealistic theoretical models of behavior, but those models are just handy tools for hypothesis formation, and it is the empirical testing of hypotheses that makes a discipline a science, not the manner in which those hypotheses were formed.
Hypothesis-driven research that disregards the rigor of hypothesis formation is most often flawed, as the hypotheses will be filled with suspicious or flawed concepts that cannot serve as reliable and meaningful abstractions of reality.
Consider the measures prefixed with 'real', for example 'real wage'. The concept of 'real wage' isn't meaningless, since it captures the relation between wage and purchasing power when inflation is involved. But what about cases where inflation is not involved, or where we need to consider interactions between other factors and inflation? In those cases, the concept of 'real wage' is often an impediment and misleading; a ratio indicating purchasing power would surely be a better choice.
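For concreteness, the standard adjustment I mean, with invented numbers:

    # 'Real wage' = nominal wage deflated by a price index (hypothetical data).
    nominal_wage = {2019: 50_000, 2024: 60_000}
    cpi          = {2019: 100.0,  2024: 125.0}

    real_wage = {yr: nominal_wage[yr] / (cpi[yr] / 100.0) for yr in nominal_wage}
    print(real_wage)  # {2019: 50000.0, 2024: 48000.0}
    # a 20% nominal raise is a 4% loss of purchasing power here,
    # which is the relation the 'real' prefix is meant to capture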
Consider the concept of 'equilibrium'. I can hardly see any empirical foundation for 'equilibrium' when it's invoked by empirical economics research. There are stationary periods of prices. But it's a different story to interpret such stationarity as a state of equilibrium with a mysterious process forcing prices to always gravitate toward that state. This interpretation is without empirical foundation, and yet its reliability is often assumed a priori in the research.
If you are not persuaded, it's okay. Regardless, I don't believe hypothesis formation is irrelevant.
EDIT: On second thought, my case against 'real wage' above was missing the key. The key is that purchasing power is what matters, and the purchasing power (most of the time local) of stock variables (e.g. savings) is what matters. To adjust flow variables by inflation is often misleading.
Well, at the very least, the argument over hypothesis formation is not specific to economics. The philosopher of science Karl Popper argued that science should aim to empirically falsify hypotheses, and that the way scientists develop those hypotheses, while an interesting psychological question, is irrelevant for the scientific method. He argued that hypothesis formation always contains an element of irrationality and instinct and he quotes Einstein who expressed views to that effect.
To be fair, there is good economics out there - not as solid as the sciences of course, because the subject matter is more complex, but it does exist.
The main problem with the role of economics in society is that economics, and often pseudo-economics, are used to influence politics in a biased way.
Economists need to start strict self-policing and disavowing all the dishonest actors in political think tanks. It's difficult to do in practice because some of the dishonest actors get a lot of funding from political interests, but if economists want their discipline to earn respect as a science, they do need to clean up their act in this regard.
Look at the latest edition of QJE. You will see mostly studies addressing a particular policy question, e.g. 'what was the effect of this policy change on unemployment', which they answer using randomized controlled trials or quasi-experimental methods. Which aspect of this falls apart when you prod it?
Of course it is. Economics used to be called political economy for a reason. The idea that economics can be considered separately from politics is laughable.
Quasi-experimental variation refers to situations in which assignment of treatments is 'as good as random' but treatments weren't assigned by the researchers in an RCT. A classic example is Angrist and Lavy https://www.google.com/url?sa=t&source=web&rct=j&url=https:/...
Do you think there is a factor of unknowns in both economics and the humanities (disparate as they are) that challenges scientific study? I.e., that the humanities, and by extension art, are currently immeasurable? If this sounds like a loaded question, I don't know how to phrase it better.
My father (a physicist) used to quote Rutherford to me: "all science is either physics or stamp collecting". An extreme position, perhaps. I now work in a college of humanities and have frequently collaborated with engineers. I have seen firsthand how poorly scientific thinking has come to be applied within the humanities. But I cannot blame them. They are only giving the uni what has been asked of them.
The untold reason why the humanities now self-describe as 'social science' goes back to the Thatcher years in the UK. She was the first to link university funding (and tenure) to research output. Research output was in turn defined by ranked publications and patents. This worked OK-ish in the sciences, but not so well in the arts. The humanities were obliged to ape the sciences in the way they spoke, the way they defined their outcomes, and the functions they served. The scientific method is simply not a good fit for the arts.
In the UK Thatcher might have been a driving force.
In Western thinking (entering a rabbit hole) conceiving humanities as sciences is a recurrent theme (say, in the Scepticism of Hume) that Auguste Comte in the 19th century turned into the doctrine of positivism:
- The scientific method (and precision of mathematics) applies to all sciences, social and natural.
- Knowledge can be proved only by observation through empirical means and deductive reasoning.
First of all, the introduction bashes the paper applying social network techniques to fiction. If the author had bothered to look it up, they would have realized the authors are an applied mathematician and a theoretical physicist. Not humanities people.
Then they go on to criticize political science and psychology as their poster children for the humanities, except these are social sciences, and only "humanities" under a very broad umbrella term. "Humanities" is more often used to refer to disciplines such as history, art, and literature. So a complete mix-up of fields.
Third, the assumption that the social sciences have a "reliance, insistence, even, on increasingly fancy statistics and data sets to prove any given point" is simply flat-out wrong. For example, of course political science relies on "big-N" studies which try to find or refute correlations between democracy and various other country indicators. But political science also relies heavily on "comparative politics", which is much closer to literature or history in its classic "compare and contrast" treatment of two countries. Similarly, psychology takes many different approaches in published papers and books, some quantitative and others more qualitative.
I could go on and on. But this article is completely ridiculous, arguing against a straw man that simply doesn't exist. It's like the author isn't even familiar with academia. Bizarre.
She did not confuse humanities and social sciences. She started off focusing on humanities then segued into a bigger complaint about humanities and social sciences together sharing the same problem. The author has a doctorate from Columbia in psychology. So there's probably a lot more thought behind her frustrations.
As for the home departments of the authors, that doesn't change the fact that their work was humanities research. Unless you're saying humanities departments shouldn't be blamed for arguably bad humanities research produced by "outsiders" and published in physics journals. It doesn't appear to me that the publication contained any new applied math or physics.
Correlations are not science, by the way; no matter how repeatable they are, they're just statistics. You need to demonstrate causation to be scientific. Big analyses of countries generally can't do controlled experiments, so their findings always depend on what they chose to include (and not include) in their models as the supposed causal mechanism.
It may sound silly, but the article seems like a caricature of Big Bang Theory Sheldon's disdain for the social sciences. All the ingredients are there, ready for easy consumption.
In all seriousness, there are some valid criticisms of the social sciences, but the article reads like a pop version of them.
I call these hard science vs soft science. Please do not get me wrong, I shuffle between math and art quite a bit these days and get to look at various approaches in both areas. There is a trend of applying the hard-science approach to soft science. This is a good thing, until it results in research that is quite wishy-washy. When you see a "study" whose experiment is mainly surveying people, it's soft science. Unfortunately, such soft science gets a lot of press. When you see articles like "Coffee found to reduce cancer by researchers in a study", you are reading soft science. A huge problem is that people doing hard science and people doing soft science are both referred to as "researchers" or "scientists", and both of their work is referred to as "science". But it quite simply isn't the same. Science demands not only evidence but understanding. If coffee was found to reduce cancer, can you elaborate the exact mechanism? If you can't, then science requires you not to make such claims. Doing a survey of graduate students is not sufficient. That's the nature of hard science, which soft science (including economics) violates too often. The media needs to be more aware, otherwise the trust in science that has been built over centuries will no longer be there.
You had me for a second with the survey comment, but we don't know the mechanism for a lot of biology and that doesn't mean it's not a science. In fact, we didn't understand most of the things we take for granted today but that doesn't mean there were no scientists until Newton came along (or any other arbitrary point of "understanding").
I think the minimum bar for something to be science is that it has to have a rigor to it which can be used for removing doubt:
- hypothesis
- control group
- well chosen or random samples from representative population
- statistical significance
- a way to separate correlation from causation
- enumeration of confounding factors and potential flaws in the chosen methodology
- list of prior studies or research
- peer review
- reproducibility, possibly using alternative methods
Until all of this is done, a survey (or any study) is not science. Convincing the layman is insufficient; you have to convince other experts in the field.
In a way, science is a process of creating a model. To create a model, you first need evidence (i.e. data), and then you form a hypothesis, which is your proposed model. Then you make predictions using your model that weren't known before. If the predictions continue to hold over time, you gain higher confidence in your model. However, a true scientist would never set his/her confidence in any model to 1.0, because all models are eventually wrong and need to be improved further. So science is the process of continuously gathering evidence, improving the model, and remaining skeptical that you might be wrong. It is very much like training a machine-learned model on training data. Most soft sciences do the first two steps and bypass everything else. It's like you created an ML model, got good results on the training data, but never tested your model on a holdout set, assumed your model was good enough, and just moved on to make a press release.
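The analogy in code, with synthetic data and a deliberately silly "model" that just memorizes its training set:

    import random

    random.seed(2)
    data = [(x, x + random.gauss(0, 1)) for x in range(40)]  # y is roughly x plus noise
    train, holdout = data[::2], data[1::2]

    memory = dict(train)           # "model": memorize the training pairs outright
    def predict(x):
        return memory.get(x, 0.0)  # clueless on anything it hasn't seen

    def mse(pairs):
        return sum((predict(x) - y) ** 2 for x, y in pairs) / len(pairs)

    print("train MSE:   %.2f" % mse(train))    # 0.00 -- looks brilliant
    print("holdout MSE: %.2f" % mse(holdout))  # enormous -- it learned nothing

Skipping the holdout line is exactly the press-release move: reporting the first number and never computing the second.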
Biology has always been a red herring here. Lots of biologists are actually careful about making big claims until they understand the mechanisms. There is evidence-based clinical science, but that's been made watertight through structured trials that must comply with well-defined standards, unlike "coffee was found to reduce cancer" studies, which are often half-assed surveys with too many statistical biases.
We don't know the mechanisms of plenty of things that we accept as proven with scientific principles, like most of pharmacy.
And what about physics before Newton? Or before Einstein? In fact, we still don't know exactly how gravity works. Or magnetism. At what point are you drawing the line of being able to "elaborate the exact mechanism"?
You never elaborate the exact mechanism. But if you can consistently make accurate predictions, you're onto something.
The real argument is about the validity of the predictions. Core science - basically undergrad - is very good at making predictions in its domains of interest. Outside of that everything gets more speculative.
The real problem with soft science research is that it cargo cults data -> statistics into data -> weak correlations -> "truth." And that's not how good science works - because there's just a statement based on correlations that may be accidental, and there's no attempt to make a model at all.
The hard-versus-soft science distinction is an old one, cutting off around biology. Basically, physics is the "true" science, and the harder it gets to relate phenomena back to the underlying physics involved, the softer the study is seen to be.
Hi, there. I've studied Political Science, Film "Science" and IT. And Pedagogy. Why do I say Film "Science"? Because I had to use refuted "science" to analyze films, among other things (Freudian analysis, which was of course groundbreaking at the time, but doesn't hold up to modern scientific standards anymore). The professor said to me: "Yes, I understand you when you say that this isn't strictly 'science,' and that it has indeed been refuted. But you see, everyone in this field does these things, and so you should too, at least for the experience, and to communicate with these people." Since he wasn't trying to force me to view it as science, I accepted it on those grounds, and I passed. Despite this one incident, Film "Science" was my favourite topic at college, not least because it teaches you genre and narrative techniques. But yeah, some of it was definitely "out there," and I wouldn't classify it as science in any meaningful way, though it is a valid system for communicating ideas. Hell, even postmodernism is, which I sadly know too much about now. Too much about now.
Ironically this thread - and the OP - is a superb example of why everyone should study narrative techniques so they can understand the kinds of categories that common arguments and positions fall into.
You can learn a huge amount from ad hoc observational models of behaviour that aren't based on equations or statistics. You can even use them to make accurate predictions.
I used to know a manager who had an outstanding intuitive understanding of organisational and personal psychology. He probably couldn't have formalised his knowledge, but he had a real talent for getting shit done with individuals and groups, and for knowing exactly the right moment to apply leverage in a negotiation - all without bullying, shouting, or underhanded manipulation.
He simply knew exactly what people would do in one set of circumstances, and how to change their preferences by presenting them with alternative circumstances.
This isn't "science" in a formal sense, but it's certainly a very real form of knowledge. It seems to me STEM types tend not to understand how valuable and effective it can be, and how important it is to have some of this skill if you want to change what people do.
There's definitely value in knowing about culture, and in being able to categorize it and discuss it on many levels. If you simply want to get through to more people, the most direct approach I can think of is to study things like rhetoric or take communication classes. There's a wealth of knowledge there on how to improve the way your communication impacts and includes other people. And while I'm at it, let's not forget the cross-section between marketing and psychology, for all the boiler-room types among you. :D
Totally agree. And then it even brings out the good old "taxpayer money" line and asks if it is useful. As if we should only pursue that which seems instantly useful.
More frustrating is that there is some valid criticism of "digital humanities" as the cool new discipline that, while capable of some great stuff, is all too often guilty of neglecting the "humanities" part of the term in favor of just throwing up some graphs.
> First of all, the introduction bashes the paper applying social network techniques to fiction. If the author had bothered to look it up, they would have realized the authors are an applied mathematician and a theoretical physicist. Not humanities people.
Nonsense. Whether a piece of research belongs to the humanities doesn't have to do with the credentials of the researchers but with the subject matter. If we genuinely want to say that scientific and mathematical methods are fruitful to explore questions in the humanities (contra the main claim of the article), we have to at least allow this. Or would you say that the paper on the network structures in fictions is a piece of theoretical physics?
A lot of people have been and continue to make this criticism that’s in the article, across a variety of the softer disciplines. (It’s like you’re not even familiar with this line of critique.)
I've studied political science a great deal, and there are absolutely nuanced critiques you can have here.
I personally, for example, find that political science went overboard in rational choice theory (borrowed from economics) over the past several decades, which hasn't turned out to be particularly fruitful. (And there are 20+ subfields of political science as well, rational choice being just one.)
But that's simply arguing over the relative usefulness of specific methods, like to what degree TDD should be used in software, or have we gone overboard with microservices.
The original article's absurdly broad critique of quantitative methods somehow taking over generally remains bizarre and completely uninformed.
> Then they go on to criticize political science and psychology as their poster children for the humanities, except these are social sciences
Social sciences are humanities. The term "social science" was invented fairly recently by sneaky academics to leech off the credibility of actual science.
> It's like the author isn't even familiar with academia.
Are you? This issue with "social science" has been going on for a few decades now.
Richard Feynman called social science a pseudoscience a few decades ago.
The name "social science" was invented for the same reason "creation science" was: real science had such a good reputation, and they didn't, so they decided to manufacture some credibility by attaching "science" to their fields.
> Social sciences are humanities. The term "social science" was invented fairly recently by sneaky academics to leech off the credibility of actual science.
> The name social science was invented by hacks just like creation science was invented by hacks because real science ( biology, physics, chemistry, etc ) has such a good reputation and they didn't so they decided to manufacture some credibility by attaching "science" to their fields.
Do you have anything to back this up? According to Wikipedia's rather extensive article on the history of the social sciences, the term first appeared in 1824, and the discipline was pretty well established by the turn of the 20th century.
"The term "social science" first appeared in the 1824 book An Inquiry into the Principles of the Distribution of Wealth Most Conducive to Human Happiness; applied to the Newly Proposed System of Voluntary Equality of Wealth by William Thompson (1775–1833). Auguste Comte (1797–1857) argued that ideas pass through three rising stages, theological, philosophical and scientific."
What do you think rising through theological, philosophical and scientific implies? The lowest being theological, the highest being scientific?
1824 was around the time of the scientific revolution and enlightenment. Everyone wanted to latch onto the good name of science.
"Karl Marx was one of the first writers to claim that his methods of research represented a scientific view of history in this model."
"One of the most persuasive advocates for the view of scientific treatment of philosophy would be John Dewey (1859–1952)."
From history to philosophy to politics, everyone wanted to associate itself with "science" because of the credibility it brought.
The article on Wikipedia doesn't support your assertion that the scholars who coined the term "social science" did so "fairly recently," and that they were "sneaky academics" and "hacks."
Well, unless 1824 is "fairly recently" (to be generous, let's say a century ago, when things really got going), and Comte, Durkheim, Weber, and yes, even Marx are "hacks" (they may be other things, but hacks?).
> The article on Wikipedia doesn't support your assertion that the scholars who coined the term "social science" did so "fairly recently,"
Fairly recently is subjective. But the article showed that they coined it because of the cachet attached to science at that time.
You can downvote and find things to nitpick, but ultimately I'm right. Just like creation "science". Political "science" is as much a science as creation "science". That isn't to say political "science" is nonsense like creation "science". It's an academic field that belongs in the "arts and humanities" category.
As someone who did my graduate work in humanities and not social science, I think there is a world of difference between social science and humanities. There might be some valid criticism of the term "social science" and the field but lumping it in with humanities isn't accurate. While there are tons of people who use mixed methods, quantitative and qualitative work tend to be very different in focus and even how they are written.
> As someone who did my graduate work in humanities and not social science, I think there is a world of difference between social science and humanities.
Don't just say there is a difference; name the differences. I too have a degree in the humanities. That by itself isn't an argument.
> While there are tons of people who use mixed methods, quantitative and qualitative work tend to be very different in focus and even how they are written.
What's your point? Quantitative and qualitative work occur in both science and the humanities; neither is exclusive to one or the other. The difference between the two is that one deals with natural laws and the empirical testing of those laws, while the humanities do not. They mostly deal with the philosophical and the human condition. Morality or the best form of government aren't scientific concerns, because they aren't about the natural world and its laws; they're not empirically testable, unlike, say, the speed of light.
Political science is not a science because it doesn't deal with nature and empirical experiments. Machiavelli was not a scientist because political science isn't a science.
True, but that's more of a current snapshot than a categorical determination. It's more like the social sciences started breaking away from the humanities many decades ago, and have been increasingly pulled in the general direction of hard science ever since. Though necessarily relying much more on modeling and statistical approaches than on formal analysis.
Due to the inherent impossibility of repeating experimental conditions exactly (or in some fields, at all), the social sciences will never join the hard sciences, but instead occupy terrain adjacent to both humanities (where their data is sourced from), and hard sciences (where techniques are sourced from, increasingly for the "computational"- prefixed subfields).
One of the most valuable courses I took as an undergrad was an anthropology course. In it, we studied the life of J. Robert Oppenheimer and his relationship with the government patrons of his science.
The sort of scoundrels naturally attracted to power will always find cynical use for talent of the kind possessed by Oppenheimer. However, every aspect of such a relationship will be thoroughly cynical.
If you are a future STEM person, understanding this fact will save you a lot of grief. Learn it early.
You aren't wrong, but in the case of Oppenheimer: if we don't build this bomb, what happens if the Germans build one, and when the Soviets inevitably get their hands on one?
Was it beyond Oppenheimer's intellect and imagination to comprehend what he was doing? Sort of like Nazi Germany having no idea what was happening to the Jews?
Perhaps the currency of the social sciences should be anecdotes with full context (of which you can obviously get only a limited number unless you have an unlimited budget) rather than extensive data points on a limited number of variables. Physical systems can be approximated (or some independent variables dropped) without affecting the aspect being studied (think perfectly spherical objects), but when it comes to humans there are no independent variables, and approximations or simplified models are much harder.
I think looking for statistical patterns (e.g. in literature) is perfectly good science as long as you are cognizant that patterns merely invite more study and should not be used to reach conclusions on their own, and that patterns might disappear when you expand your data set.
Finally, as someone trained in the physical sciences, I used to look down on social scientists. I no longer do. At least they're brave enough to tackle a complex monster with the limited tools at their disposal, stumbling and even enduring ridicule from the hard sciences. We ignore the human mind, and collections thereof, because it's too complex, and we prefer the relative comfort of simple, predictable systems. I don't believe that's good.
As someone in the behavioral/population sciences, I think there's an underlying, interesting question of "when, if ever, should quantitative methods not be applied to an area of empirical inquiry?" The article doesn't seem to address this though.
As for the social and behavioral sciences, another way of approaching it is: if you have a phenomenon, is it better to try to be scientific in explaining it or not? If not, you cede that realm to the nonscientific, with all that implies. If you do approach it scientifically, how do you do that? If your explanation or theory involves some quantity of some sort, shouldn't you then attempt to specify a model of it, and test it against observations?
Most science seeks broad principles, but I kind of like the OP's suggestion that a depth-focused approach of detailed anecdotes, possibly from multiple different points of view, could provide an interesting form of alternate data.
Some people submitted nonsense papers (for example: rape culture at dog parks, and an excerpt from Mein Kampf), and they were accepted for publication.
They have a book coming out soon, "Cynical Theories", a critique of 40 years of academic thought that has gone from fringe to mainstream (and is applied like the Inquisition mashed up with Maoist struggle sessions), and that we are seeing played out unwittingly by the media, tech companies, and the tokenizing social-media chattering class.
Perhaps. The authors, however, have seemed respectable in the time I've been reading their blogs, tweets, etc. They are guided by liberal principles and have expressed a generally good understanding of philosophy going back to the Greeks.
The first paper she studies sounds interesting. Regression analysis of vocabulary has been used to identify the authors of works whose authorship had been lost, or to infer where and in what circumstances certain authors grew up.
Mathematical analysis of linguistics has pointed to patterns that were later confirmed by genetic analysis.
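For the curious, here is a toy version of the vocabulary-profiling idea in Python (my own sketch under simplified assumptions, not the method used in the paper; the author names and texts are invented placeholders):

    # Toy stylometry: authors differ measurably in how often they use
    # common "function words", so frequency profiles of those words can
    # point to the likely author of a disputed text.
    from collections import Counter
    import math

    FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "it", "was"]

    def profile(text):
        # Relative frequency of each function word in the text.
        words = text.lower().split()
        total = max(len(words), 1)
        counts = Counter(words)
        return [counts[w] / total for w in FUNCTION_WORDS]

    def cosine(a, b):
        # Cosine similarity between two frequency profiles.
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    known = {
        "Author A": "it was the way of the house that it was quiet",
        "Author B": "to go and to see and to do that and that again",
    }
    disputed = "it was in the house and it was his way"

    scores = {a: cosine(profile(t), profile(disputed)) for a, t in known.items()}
    print(max(scores, key=scores.get), scores)

Serious attribution work (e.g. Burrows' Delta) z-scores hundreds of function-word frequencies against a large reference corpus, but the regression-on-vocabulary idea is the same in spirit.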
I don’t mind pointing out the vacuity of what often passes for scholarship. But she didn’t start with a good example.
I think one of the points the article tries to make is that proving something in a soft science using models often rests on many assumptions, implicit or explicit, and that this is problematic. To that I agree, to some degree.
In the hard sciences, all inputs to a proof are either theorems known to be true, conjectures/hypotheses (in which case the proof itself becomes conjectural), or, more rarely, axioms. In the soft sciences, on the other hand, it is common to construct models quite arbitrarily in order to try to match empirical results. If we would like these models to carry any indication of "absolute" truth similar to the hard sciences, we currently can't, or don't.
To achieve this, I believe we could do an input analysis of ALL assumptions and try to quantify the aggregated certainty of the model's correctness, even before matching it against empirical data. In this way we could say, for example: we have used a model with a predicted input accuracy of 0.82 that matches our empirical results at 0.97, p < 0.05. This would further strengthen and quantify the "standing on the shoulders of giants" principle.
Of course this is easier said than done, and I know it is a bit naive. No techniques currently exist to do this as far as I know. There is also a discussion to be had about how to interpret model outputs (we now have three variables; how do we relate them? how do we calculate this model's output accuracy?) and how to calculate a subsequent model's accuracy from different input accuracies and their interrelations. To be useful, this would also require rebuilding the soft sciences from the bottom up (starting from the most easily verifiable facts), plus a new science of hypothesized model-accuracy calculation.
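A deliberately naive sketch of that aggregation step (my own toy Python, with invented assumption names and probabilities; independence between assumptions is itself a big assumption):

    # Naive aggregation: give each modeling assumption an estimated
    # probability of being sound, and take the product as the model's
    # prior "input accuracy". Assumes (questionably) that assumptions
    # fail independently. All numbers are hypothetical.
    from math import prod  # Python 3.8+

    assumptions = {
        "agents act on complete information": 0.90,
        "responses are reported honestly": 0.95,
        "sample is representative": 0.96,
    }

    input_accuracy = prod(assumptions.values())
    print(f"predicted input accuracy: {input_accuracy:.2f}")  # ~0.82

Even the toy version shows where the hard part lives: the per-assumption probabilities have to come from somewhere, and correlated assumptions break the simple product.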
Anyway, enough hypothesizing thought experiments for the day. Any thoughts?
This is one of those instances where I might care about the case Konnikova was making if she bothered using any quantitative methods to convince me the humanities were awash in quantitative study while qualitative analysis clearly went the way of the dodo. Or that literature programs were churning out students who think network analysis is the best way to understand a text.
In the purely-qualitative realm, it just comes off as pearl-clutching over something I don't think anyone actually believes?
I fully agree with the premise that the humanities aren't a science. In fact, I am fairly certain that the current paradigm of science cannot work for history at least, and I strongly suspect the same holds for most of the humanities.
One of the problems is computability: if I try to build statistics on a space of human intentions, I strongly suspect the task is at least as complicated as trying to build a measurable space atop the set of all Turing machines, where I immediately run into computability issues (for example, calculating the average run time of halting Turing machines). So merely assuming that such a statistic can be meaningfully built (just the claim that this is possible) will doom any overly formal reasoning, by the principle of explosion.
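To make the Turing-machine example concrete, here is one standard way the uncomputability falls out (my reconstruction from textbook computability facts, not part of the original comment):

    Let $H_n$ be the set of $n$-state Turing machines that halt on empty
    input, $t(M)$ the step count of a halting machine $M$, and $T(n)$ the
    (computable) total number of $n$-state machines. If the average run time
    \[
      A(n) = \frac{1}{|H_n|} \sum_{M \in H_n} t(M)
    \]
    were computable, then, since $|H_n| \le T(n)$,
    \[
      \max_{M \in H_n} t(M) \;\le\; \sum_{M \in H_n} t(M) \;=\; A(n)\,|H_n| \;\le\; A(n)\,T(n),
    \]
    so simulating any $n$-state machine for $A(n)\,T(n)$ steps would decide
    whether it halts, contradicting the undecidability of the halting problem.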
I disagree entirely, especially about history. You form hypotheses on the basis of evidence (literary, archaeological, documentary, etc), make predictions about the kinds of effects you’d expect to find in the historical record, and then modify those hypotheses based on what you find later.
No, predictions about what other undiscovered evidence you’d expect to find about the past. In a way this is like astrophysics in that you can’t conduct experiments, but you can predict that X is correlated with Y. Come to think of it, paleontology...
For example, take the Shakespeare authorship controversy; you could create a hypothesis that says "Shakespeare was indeed the author of Hamlet". A prediction from this might be: "if a manuscript of Hamlet were ever found, it would be in Shakespeare's handwriting". Not a really good example, to be sure, but just off the top of my head...
The article is talking, essentially, to the humanities. But the point applies to those of us on the outside as well. We as well need to stop expecting the humanities to be science.
Provocative title. The author argues - more or less - that quantitative and mathematical approaches do not lend themselves to questions of the humanities or social sciences.
As an example they take a network analysis that was done on the social relations of characters in fictional works. While the author finds this use dubious, I think the contrary: even if the researchers didn't fully understand the methods themselves, they could very well have had a mathematician on hand. What do we do in math if not model the real problems of real people?
It might be nice for some researchers never to learn an application of their work, but for the humanities to find novel ways to use mathematical tools is great and should be encouraged. Of course they will miss, but they will also hit. What we need is peer review in which those methods are understood within the humanities and social sciences, so that false conclusions aren't drawn.
Of course, qualitative analysis isn't going the way of the dodo and the author agrees on that.
I just think the occasional misuse of mathematical models in humanities research is well worth the possible gain. Those problems should follow some rules that a mathematical model can capture, right? Let's help those researchers instead of banishing them to qualitative methods.
I don't think the author is arguing for banishing certain researchers "to qualitative methods." The problem described in the article is one I frequently see in Silicon Valley: a couple of engineers build a thing. Note how they didn't start by asking "what is a real problem?" No, they built some tool/app. Now they spend several investment cycles trying to find "product-market fit" by hunting for some place in the market where that thing solves a real problem. This very rarely works as a business strategy; you first need to find a real problem and then build a tool that solves it.
Problem -> so what? (we build a solution) -> real business.
Now, replace "app" with "mathematical modeling," and you'll start to feel the author's gripe.
I do think the author is right to ask - what is the point? So what? What are you trying to do with those mathematical models? What problem are you solving? For instance, we have the hypothesis that the researchers of the British paper posited:
> the relative likelihood that certain stories are originally based in real-world events
Based on:
> looking at the (very complicated) mathematics of social networks
So, we have a tool - that tool is looking at the mathematics of social networks. Does high fidelity between models of social networks predict "realness?" Does a certain model of a social network described in the relationships of protagonists in a book suggest that book's events are accurate historical ones?
No, right? Then why is that step glossed over when the researchers go ahead and start modeling anyway?
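For concreteness, this is the generic shape of such an analysis in Python (my own sketch, not the paper's actual code; the character network is invented and networkx is assumed to be available):

    # Build a character co-occurrence network and compute the summary
    # statistics that digital-humanities papers compare against real
    # social networks.
    import networkx as nx

    # An edge means two characters appear in a scene together.
    edges = [
        ("Arthur", "Lancelot"), ("Arthur", "Guinevere"),
        ("Lancelot", "Guinevere"), ("Arthur", "Merlin"),
        ("Merlin", "Morgana"), ("Morgana", "Arthur"),
        ("Lancelot", "Gawain"), ("Gawain", "Arthur"),
    ]
    G = nx.Graph(edges)

    # Real social networks tend to be highly clustered and
    # degree-assortative; the claim under discussion is that
    # invented casts often are not.
    print("mean clustering:      ", nx.average_clustering(G))
    print("degree assortativity: ", nx.degree_assortativity_coefficient(G))

Whether matching those statistics licenses any inference about historicity is exactly the gap being pointed at above.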
The laser was famously called "a solution looking for a problem." Many times in science we build theories/experiments/devices without solving any immediate problem, and only in hindsight find many problems where they apply (and many others where they don't).
Many tropes in fiction can be boiled down to weird/unnatural social networks. I’m not convinced it’s a bad paper, in fact I’m worried the author of this post is being unfair to what seems like a pretty harmless/fun digital humanities paper.
If you reduce the humanities to what is quantifiable, you kill them. (You also kill humanity.)
[Edit: After re-reading, I'm not sure the parent thinks that numbers should take over the humanities, so my comment may be misdirected. I'm leaving it anyway, because I think the point is valid, even if it doesn't address the parent's point.]
Elegant conclusions can totally be arrived at in the humanities, it's just that statistical methods often aren't the best way to go about doing so. The traditional method of logical proof, which is valued just as much as data analysis in science, used to be the standard in the humanities. The human mind would compute the statistics more or less subconsciously, but would then use those empirical results to say something valuable through a process of logical induction. I think that's the key point that this article misses—it isn't that we don't want humanities to provide rational insight, it's that we want humanities to reduce its reliance on statistics and refocus on what it's historically been great at.
There's something important to be said here about the duality between logic and math, algebra and statistics, classical AI and modern DL, philosophy and science, rationalism and empiricism.
I would encourage those interested in the intersection of STEM and the humanities to check out the Digital Humanities minor/major being offered at a growing number of universities. I earned the minor at UCLA and found the application of digital tools to historical and modern humanities-focused questions incredibly relevant.
Examples include: 3D modeling of the historic broadway district in Los Angeles, Natural Language Processing of ancient Roman texts, virtual reality's impact on human cognition, etc.
Say I agree with the sentiment expressed here; what does the curriculum of a neo-classical humanities program look like? Let's presume you'd want to keep the good stuff: things like Arrow's impossibility theorem, Bayesian statistics, network theory, language pragmatics, etc. You know, the sort of stuff that might aid the organisation of complex systems without repeating the mistakes of the past.
Maybe it is unfair to judge the whole field on one silly paper; in all fairness, physicists write about non-existent geometries too.
On the one hand, I completely agree that a lot of scientists across disciplines have too much confidence in the predictions of their models of complex systems. For example, climate models predict not only how much global temperature will rise but also changes at the local level, which seems far harder to predict; and plenty of studies do not survive a replication attempt. But I don't believe we can justify wasting tax money on people who don't even try to back up their claims and instead just make up hypothesis after hypothesis, perhaps based on a patient they once knew, debating and sipping wine with their fellow intellectuals, prescribing morality under the guise of science and laughing at us plebs. If you want to do that, become a writer or a YouTuber, not a scientist of any kind. If you're going to be a scientist, at the very least try to back everything up with statistics. I know it's work, it's not fun, and it might not mean very much, but you owe it to society to dig in and crunch those numbers.
I know it's just an example in the article, but the study about real-life likeness of social networks in fiction literature seems to miss a point. Wouldn't a good writer in many cases abstract away some of the complexity of real-life social networks?
You can analyze a book by saying it has certain number of pages, each one with certain amount of paper and ink, and you can use that to determine the chemical composition of the paper and the ink.
That will not tell you what the book is about, though.
It's probably fine for there to be a study of history that is not a science, but there should also be a study of history that does apply scientific methods.
Good idea, but you can't throw out the quacks, because they're chairing the departments. You won't make much progress until you start throwing out the schools.
Well, math is not a science either. I acknowledge there is debate about whether the formal sciences (logic, math, theoretical CS) are sciences, but I'm with Popper in saying that the core of a science is falsifiability, which does not apply to math.
"Psychology is not a natural science." I used to think this and to a certain extent I still do but the fact is that the feild has changed and become much more reproducible as time passes and it matures.
More importantly, this article conflates the humanities with the social sciences pretty badly. That is quite insulting to sociology, and even more so to economics and anthropology. Physical anthropology is pretty serious science.
There are limitations to social sciences but those are not the same limitations of literary criticism.
The reproducibility crisis extends far beyond psychology. Psychologists are the messengers, just like they were with meta-analysis in the 60s and 70s.
Similar problems have been demonstrated in a host of fields, mostly the biomedical sciences. To take one prominent example, HN has been plastered with articles about COVID studies of dubious quality.
Has it become more reproducible? From what I've read, we've only now come to terms with the fact that so much of the research is not reproducible and has largely been debunked (even landmark studies like the Milgram experiments have been called into question).
I've noticed this too! I've seen schools describing their curriculum as STEAM, and all I could think was "Isn't that just normal school, but without history?"
I would prefer a university structure where B.S. + M.S. is earned in four years by focusing on core major requirements to then have humanities courses offered at no cost via the alumni association as part of continuing education throughout one's twenties.
That's funny because I actually think that the opposite would be nice. You never know what technical knowledge you'll end up needing, so it would be nice to have technical education integrated with work throughout the first several years of working. But everyone can benefit from having a broad education in liberal arts and sciences.
I agree. Let me reproduce the last paragraph of the top level comment I submitted above:
Finally, as someone trained in the physical sciences, I used to look down on social scientists. I no longer do. At least they're brave enough to tackle a complex monster with the limited tools at their disposal, stumbling and even enduring ridicule from the hard sciences. We ignore the human mind, and collections thereof, because it's too complex, and we prefer the relative comfort of simple, predictable systems. I don't believe that's good.
It would be nice to have as an option for those who do have an interest. I suspect that most would have an interest in college level liberal arts and sciences if it didn't conflict with getting a job.
Because not everybody wants to be in STEM. And because even those who are in STEM need to know that humanity does not live by STEM alone. (We'd like our STEM to treat people like humans, rather than like machines.)
A "liberal education" originally meant that those who pursued it were free. They weren't pursuing an education of mere techniques, which was for slaves. Even today, there is a place for learning things that don't have a direct economic impact, as part of becoming an educated person.
That's from the more idealistic side of me. Now here comes the cynicism. Why? Because people still want to major in them, so that they can say they have a college degree without having to major in something rigorous. And those people pay tuition. And the colleges like getting paid.
Plenty of non-STEM majors are very rigorous. Philosophy is particularly so. My undergraduate roommate wrote a 50-page paper that had to contain original ideas and was critiqued by professors who are quite adept at spotting fallacies and picking apart arguments.
> Even today, there is a place for learning things that don't have a direct economic impact, as part of becoming an educated person.
Sure, but we shouldn't assume that college is the appropriate place to do so or that the way colleges teach the humanities is effective. If your supermarket forced you to do aerobic exercises before entering, I'm not sure a good justification for it would be "well, aerobic exercise is good for you and not everything is about buying food."
I guess it really depends on what you think the purpose of college is. To me, that's exactly what college and high school should be for -- learning things that don't have a direct economic impact but that make you a more educated/well-rounded person. They shouldn't just be about job training, which should be focused in technical schools, and it actually saddens me that so many think the purpose of university/high school should be job training (really, that should be the companies themselves, but of course they don't want to pay the money to invest in their hires).
We get a bit sidetracked by terminology here: if we suddenly called a college a technical school, would it suddenly be fine to remove the humanities from the curriculum? One of the big problems when we discuss these things is that there's so much inertia stemming from our preconceived notions of what a college is. It stops us from examining what our actual goals are, and whether what we're doing is effective in achieving them.
I don't think the early comment that humanities is approached better as a hobby is necessarily wrong. It's quite possible that other approaches, like the Chautauqua movement, would be much more effective (my personal experience suggests it would).
Because music, history, and literature are just as important to human society as math, computer science, and physics. This unthinking disrespect for the humanities is embarrassing.
So, the only thing in life you enjoy is math? Because if you enjoy music or tv or novels or dance or sport or anything else, you enjoy something that doesn't deserve your respect. And why would you enjoy something like that?
> Why are they even taught in colleges? They're more of a hobby thing really (and more enjoyable that way imo)
Because that's why colleges exist...? Historically universities were not the workforce mills they are today. You did not go to a university to help you find work. Stuff like business/management, engineering, medicine, etc really shouldn't be part of universities.
It's interesting how you go in two sentences from returning to goals of historical universities to suggesting that medicine shouldn't be part of it.
A classic full university was supposed to cover the four historically major fields of study: theology, medicine, law, and philosophy (which includes all the modern subtypes of PhDs, e.g. physics, math, biology, etc.). Three of these fields were pretty much designed to prepare students for the specific needs of knowledge-intensive work (clerics, doctors, and lawyers); only the philosophy studies were less practical.
Why is science even taught in college? All you need is a math book and YouTube to have people show you how to do the mechanical steps needed to solve problems.
(I'm sarcastic of course. Sciences and humanities are both cool and good and contribute to a better understanding of the world around us, and one wouldn't be much without the other.)
Because the operation (not just the physical structure) of the human brain (a complex neural network) and networks of brains (society) are just as important, if not more important subjects of study than physical systems and computers. Or are you suggesting that we completely abandon the understanding of human behavior at scale?
Is there any evidence of this? To me it seems like the humanities make people have less empathy, since instead of feeling with other people, they analyze them.
For example, it seems to me that American politicians have far less empathy than European politicians, even though European politicians have studied far less of the humanities than American ones. So my belief is that studying the humanities helps you deconstruct the human experience and see us as robots, hurting your empathy.
I do however believe that studying humanities makes it easier to answer what the tester wants you to answer on empathy tests, since you now understand those tests better.
I don't have scientific evidence. But my wife is a historian, and the process of doing history is fundamentally the process of erasing your own context and placing yourself fully in somebody else's context. That is fundamentally the practice of empathy.
It is just pretentious, not scientific.