(Because it's such a great story I still remember the talk a decade later.)
"""
So if I'm thinking about this talk, I'm wondering, of course, what is it you take away from this talk? What story do you take away from Tyler Cowen?
One story you might be like the story of the quest. "Tyler was a man on a quest. Tyler came here, and he told us not to think so much in terms of stories." That would be a story you could tell about this talk. It would fit a pretty well-known pattern. You might remember it. You could tell it to other people. "This weird guy came, and he said, 'Don't think in terms of stories. Let me tell you what happened today!'" And you tell your story.
Another possibility is you might tell a story of rebirth. You might say, "I used to think too much in terms of stories but then I heard Tyler Cowen and now I think less in terms of stories!" That too is a narrative you will remember, you can tell to other people, and again, it may stick.
You also could tell a story of deep tragedy. "This guy Tyler Cowen came and he told us not to think in terms of stories, but all he could do was tell us stories about how other people think too much in terms of stories.
People love paradoxes, so the paradox inherent in Tyler's story is a really great way for him to anchor his message in your memory. That, I think, is why he spends so many words on it.
1780 was way late for claiming the invention of a slitting mill. The first slitting mill in the Americas was built around 1640. It still exists, as a historical site and museum.[1] Some of the original slitting rollers for making bar stock are on display.[2] That wasn't the first slitting mill; there were earlier ones in Britain.
[1] https://www.nps.gov/sair/index.htm
I would find this piece more convincing if it had sampled stories from different sources in service of its thesis rather than focusing on a single work the author finds particularly galling, as there is no shortage of fictional narratives anywhere — look no further than US state-approved high-school history textbooks.
Given the narrowness of the material cited, this piece seems to me like unequal application of skepticism and status-quo gatekeeping.
There is, sadly, a vast difference in accuracy between a "state-approved high school textbook" and a "peer-reviewed scholarly publication".
A state-approved high-school textbook is subject to myriad business and political agendas that practically, by definition, compromise its truth and accuracy.
A peer-reviewed scholarly publication is supposed to have a chain back to actual sources and facts that anchors it in spite of any agenda or narrative it may propound.
The issue here is that Bulstrode presented a bunch of claims with no evidence, or even with contradicting evidence, in the actual sources she cites. There are also some engineering issues: sugar mill rollers are oriented differently from steel mill rollers because they serve very different purposes.
> Historians, myself included, often make leaps of intuition from limited evidence. But speculation ought to be explicitly signalled as such, rather than presented as certainty. Especially when that certainty creates a narrative so compelling that it makes major newspaper headlines. Bulstrode’s narrative requires multiple smoking guns to work, none of which are in the evidence she presents.
Anytime school textbooks are mentioned, I feel the urge to refer anyone who'll listen to the relevant chapter of "Surely You're Joking, Mr Feynman". Feynman is popular around here, and rightly so.
Blatant corruption from top to bottom is the norm, it would seem.
Feynman also loathed psychology and gave it as an example of "Cargo-cult science." Last I heard, every single first-year psych textbook contained the Stanford prison experiment as an example, without mentioning how it was total rubbish. Is that /still/ true in 2023? Textbooks are weird.
Ironically "Surely You're Joking, Mr Feynman" is a great example of what the article talks about: compelling and repeatable stories whose basis in fact is often extremely dubious.
So you don't believe it's a fabrication either, and we can agree it doesn't make a strong case that reform is unnecessary or that school textbooks are incapable of being improved.
Of course the best evidence either way on the latter is the contents of textbooks that are actually being used. How did they get that way? Heard anything that might give us ideas on where to look a bit more closely?
Fights over textbook content are ancient and written about constantly. Since these battles are heavily scrutinized, outright falsehoods won't be found easily; rather, meta-narratives are achieved by omission or emphasis.
I would not argue that these meta-narratives are "ahistorical", though — rather, I would say that they reflect the interests of certain parties. Perspective is inseparable from "history".
As I read this thread in the early morning before the family rouses, my tired eyes keep misreading “ahistorical” as “shitstorical”, which, amusingly enough, isn’t far off the intended meaning.
> look no further than US state-approved high-school history textbooks.
One could make just as good a case about online users posting disparaging and speculative comments without presenting a shred of evidence, looking no farther than HN comments. Where does this kind of reasoning take us...?
I cited textbooks because I figured their issues were well known, because numerous examples can be found across many political and social axes and thus we might avoid tumbling into a partisan hellhole, and because I knew I could back it up if I had to, as I already have in a sibling comment.
Historians are overwhelmingly on the left, so it’s not surprising most examples come from that side. But if you wanted right-wing dramatic narratives, you could look at Niall Ferguson, maybe.
"Historians have an increasingly strong incentive to tell dramatic stories which gain attention and make ‘impact’"
Asserted without evidence in an opinion piece arguing for the use of evidence. Not super compelling. Historians have had incentives for dramatic narratives since the age of Homer.
The piece reports on a sensationalist article, praised by peers and celebrated by the media, that was then perhaps completely debunked by another article; the likely motive is not far from the sensationalism we all experience in the world around us, now sweeping into peer-reviewed articles too. What sort of evidentiary proof do you need to measure the increasing (please note the meaning of this word when referring to Homer) incentives for sensationalism, exactly? Is there some objective measurement of dramatic storytelling that you miss? Isn't it enough to consider this piece for what it is: educated speculation about what has happened here? Must one never come up with a theory before having all the empirical evidence forming an ironclad case around it?
I wouldn't know how you would be able to present evidence of this, but the economic realities of the liberal arts and niche fields of research are a pretty well-known circumstance.
Funding arctic expeditions and research tends to be easier if you find some man-eating killer penguins that were frozen in the ice and now resurface due to climate change. Getting funding is politics and politics needs to make headlines.
Since headlines do get more sensationalist due to the attention economy, it could be concluded that scientific claims have to follow the trend.
The example given about the iron inventions was likewise asserted without evidence and purely from conjecture (at least, that is how it is described). It is no empirical proof of a systemic effect, but it is nevertheless interesting.
One thing this article doesn't address is the pressure that many academics are under in the process of their research, and the 'meta-narrative' of their careers writ large.
When you are competing for eyeballs, attention, grants, and tenure – when the community you work in is so small and so tightly networked – there can be a ton of pressure to make leaps of logic or narrative in order to paint a picture that resonates within that community.
I'm not excusing this behavior, but I understand it. It's the same reason UX designers build dark patterns they know will degrade the experience, it's the same reason we vote along party lines, it's the same reason trends are adopted and movies become mega-blockbusters.
In most cases – even in academia – the stakes are low, and so it doesn't really matter. But the stakes are much greater when that academic research influences policy – especially in the medical and political fields. Peer review and professional codes of conduct are supposed to solve for much of this – but, to me anyway, they don't seem sufficient these days for those higher-stakes fields of study...
I wrote a longer post with similar themes. I fully agree. The system encourages fast work and true believers. I think a big part of it is that we try to run academia like a business. But it's very different, and the goals should never be about direct profits. We often ask whether our metrics make sense, but we fail to ask whether there are easy ways to hack those same metrics.
I strongly considered pursuing a PhD in history a couple of decades ago. I figured there was so much disprovable BS published following dominant narratives that it would be easy to make a mark as a debunker.
Then I realized that if the problem was so widespread, I'd be up against the whole educational ecosystem.
Guess what happens when you have to publish novel work frequently. About subjects that people have been writing about for decades or centuries. Not just that, but you don't get access to certain documents and archives unless you follow certain narratives of certain countries and regimes.
What you are describing is basically just what all historians do. All historians are "debunkers": to be doing novel research, they have to be disputing prior claims about their historical field of interest. E.g., Bulstrode is here "debunking" (incorrectly, it seems) the prevailing story about Cort's ironmaking process.
I guess it's impressively bad that a blatant conspiracy theory made it past peer review and into Wikipedia, but this article is a summary of a review of a rebuttal to a bad paper.
I don’t think that’s an especially difficult thing to do. It seems any paper dealing with the social sciences or history that fits the current narrative is lauded and “reviewed” favorably. I think the mechanism which facilitates this is relatively simple and similar to how USSR officials would rather bring good news than bad facts.
I provide language revision for academics who are non-native speakers of English, so I read a hell of a lot of papers, theses, and grant applications, while also continuing to publish and peer-review in my own field. My impression is that peer review in the social sciences and history isn’t necessarily a major force for conformity to contemporary social narratives. Papers often deal with minutiae, and author and peer reviewers alike are nerds who like delving into those minutiae. They don’t necessarily want to be drawn into any wider topic, as in the case of the paper discussed in this article.
Rather, the force for ideological conformity may instead be funding bodies. I have seen so many grant applications where the author(s) clearly want to explore minutiae, but are forced to appeal to some grand social-justice cause in order to secure funding. I’ve participated in a series of revisions of grant applications where 2–3 paragraphs claiming the research would benefit some minority or another are inserted at a late stage after someone mentions that the application would stand no chance without them.
> claiming the research would benefit some minority or another are inserted at a late stage after someone mentions that the application would stand no chance without them.
Funny, it's similar to the "forced" mentions of Marxism inserted seemingly at will into social science books/papers written in Romanian back when the Wall was still up. Towards the later years of the communist government, those Marx insertions had been replaced by quotes of what Ceausescu had said.
I guess the OP is referring to how communist countries are/were big on personality cults and general brainwashing: Marx and his ideology when they were happy with the Soviets, Ceausescu later when the political winds changed.
Generally though, people have a tendency to seek flags to rally behind, be it personalities, ideologies or religions. There is nothing surprising about that, given that the success of our species relies on coordinating large numbers of people. The problem is that, sometimes, it is very difficult or impossible for the individual to choose the "right" flag, whatever the definition of right may be. In communist and totalitarian countries, the problem is solved for the individual by allowing only one flag. In countries with good democracy, e.g. the USA[^1], it sometimes happens that the population divides evenly between the flags available to them (i.e. Republicans and Democrats).
[^1]: The US' democracy has many faults, but it's a shining thing compared with a lot of countries in the world.
> Marx and his ideology when they were happy with the Soviets, Ceausescu later when the political winds changed.
Exactly that.
If it matters, and while we're talking about the history of ideology: towards the latter years of the Ceausescu government, real and genuine "local" Marxists had been sidelined almost completely, while there was a push for inter-war nationalism veering into what today would be called the far right.
The 6th volume of the "Military history of the Romanian people", published in 1989 under the editorship of one of Ceausescu's brothers, had details of the Romanian Army's involvement on the Eastern Front during WW2, that is, while we were on the Germans' side, fighting against the Soviets on Soviet land. That had been considered more than taboo until then.
But how could the reviewers have found the errors in this article?
The article can say "Cort was informed about the new techniques by a family member back from Jamaica". How do you verify that? The only way is to redo all the work from scratch, re-find all the historical references, and cross-validate each of them. It's a huge amount of work.
This article was later debunked by experts who took the time and effort to do that. But peer review does not allow reviewers to pause their own work to do such extensive checks.
More realistically, the article passes the peer-reviewing process because the peer-reviewing process does what it is supposed to do: the article is "believable"; it does not contain things that look like they did not come from the scientific process. The reviewers don't have time, and it is in fact not even their job to redo the study. All they do is check whether the guidelines seem to be respected. If you want more than that, you want a "replication study", which is the next step in the scientific process of building trust in a study (and which has its own difficulties).
And, sure, they may have flagged that some conclusions needed a more convincing demonstration, but it is a Gaussian curve: sometimes they do, sometimes they don't, and some things pass through the gaps.
I think there is a problem with laypeople who don't understand that the peer review process is not a magical tool that removes all incorrect studies (on top of that, due to statistical fluctuation, some studies are false even when the authors have done everything 100% perfectly).
I'm not saying it's the case here, but this is one explanation that should not be ignored.
Ironically, it is a bit funny that for your conclusion you immediately jumped to the story you wish were true: social sciences or history are ideologically biased. This is a valid hypothesis, but not the only one explaining what we observe.
> This article was later debunked by experts who took the time and effort to do that
It was debunked by a single non-academic with no publishing track record, who is currently completing a master's thesis. In other words it could have been debunked by anyone.
> the article passes the peer-reviewing process because the peer-reviewing process does what it is supposed to do
Indeed, it passed the process because the process is designed to create the illusion of consensus and credibility without actually blocking false claims. It worked wonderfully here.
> I think there is a problem with laypeople
Yes that's right, those unwashed masses who expect scholars to not make shit up, who are these scoundrels? Do they not understand how the great processes of discovery work?
> social sciences or history are ideologically biased. This is a valid hypothesis, but not the only one explaining what we observe.
Yes, it is the only one explaining what we observe. The claims were made up, complete with fake references; there is no scope for statistical p-hacking, "oh, bad luck, old boy" explanations here (not that they're acceptable anyway).
It's not just scholars, the whole left is like this. Black Woman With Magical Powers is by now a Netflix trope. You can guess the plot of a lot of post-2015 Hollywood output just by looking at the race of the characters. This sort of scam is exactly what we'd expect from an academic culture in a state of advanced decay.
I was with you on most of your points until you extrapolated this to "the left" and rolled on into Netflix shows as evidence of the degradation of academia due to politics you don't agree with.
>How do you verify that? The only way is to redo all the work from scratch, re-find all the historical references, and cross-validate each of them. It's a huge amount of work.
Sorry, but if the paper doesn't make its case, it isn't an academic work. If it needs to be 1,000 papers in order to explain why its assertions are reliable, great, that's 1,000 good papers to write. :)
...by actually reviewing the citations and noticing they do not, at all, support the claims she made.
> ...such extensive checks...
The checks didn't need to be extensive, they needed to be cursory. The problem is they were absent because the reviewers enjoyed the same political bent as the author of the paper.
>The article can say "Cort was informed about the new techniques by a family member back from Jamaica". How do you verify that? The only way is to redo all the work from scratch, re-find all the historical references, and cross-validate each of them. It's a huge amount of work.
You start by going through the citations the author has listed and seeing if those support their reasoning. Sounds like that's what the non-expert in the linked article did. It turned out that some of what she claimed was not supported by or was in direct opposition to the sources in her citations. That's what citations are for. They're listed at the end of the paper in a standardized way so they can be easily accessed by anyone who wants to know where an author got a piece of information from.
If this wasn't possible, there would be no point to peer review. I can understand an article getting through where the author falsified experimental data. That's hard to refute unless you can find evidence of the false data or you run the experiment again yourself. A history article, though? If you're not double-checking dubious claims by seeing what sources the author is using to support them, you may as well not bother reviewing.
I agree with checking the references, but again, my point is that as a general rule it's really not that easy. You may think it's easy because the work has been done for you for this particular article, and because it is obvious in this particular case, but in general it is way more complicated.
Properly checking that a citation really says what it is claimed to say often requires studying the cited paper in detail. Again, it may not have been the case here, but it is an amount of work that a reviewer should plan for in advance.
But more importantly, and it makes your point moot: if the article is misrepresenting the reference, then it will be the reference's author, rather than the peer-review process, who notices it.
I think that is my point: people here see peer review and think it is the ultimate method for deciding what is good or bad (or what is "consensus"). That is ridiculous. A peer-reviewed article has NEVER been considered "obviously correct and beyond criticism". Peer review is just one step in a long process. If the references really do not say what the article claims they say, that will end up being discovered.
Peer review is the first step before submitting the article to the whole community, and it is then that the majority of the discussion happens.
> If you're not double-checking dubious claims
While I agree with the previous things, this bit makes me pause: careful with that. Historically, the majority of correct claims sounded dubious at first, and the majority of incorrect claims did not sound dubious at all.
>I agree with checking the references, but again, my point is that as a general rule it's really not that easy. You may think it's easy because the work has been done for you for this particular article, and because it is obvious in this particular case, but in general it is way more complicated.
It is that easy to check a reference. The only real obstacle is whether you have access to the article cited, which any reviewer will have through their university or organization. Find the article cited by the paper under review using the title, author, and year published. Read the article. Done it a million times myself. Books are obviously going to take longer to double-check, and very old articles might be hard to access. Studying the cited paper (although it often won't be a direct quote; that would be even easier to check) is an important part of reviewing. This isn't supposed to be a 5-minute read-through to be given a thumbs-up if it seems alright. It should take time to ensure that the paper makes a valid point using verifiable facts.
>But more importantly, and it makes your point moot: if the article is misrepresenting the reference, then it will be the reference's author, rather than the peer-review process, who notices it.
The reference author isn't going through every work that cites theirs. It could be that they notice, but it's more likely they never see the work. If the paper author is misrepresenting the reference, it's the peer reviewer's job to catch it.
>Peer review is the first step before submitting the article to the whole community, and it is then that the majority of the discussion happens.
Peer review is the last step before submitting the article to the whole community. What do you think the point of peer review is? It's not a rubber stamp. It's supposed to be done by other researchers who are capable of spotting work that isn't supported by the facts and references cited.
>While I agree with the previous things, this bit makes me pause: careful with that. Historically, the majority of correct claims sounded dubious at first, and the majority of incorrect claims did not sound dubious at all.
So you shouldn't look into questionable claims because sometimes false claims sound perfectly believable?
It is easy to check one reference. It is not easy to check all the references, and read the whole article, and check each paragraph, and check that there is no flaw in the logic, and check that no hypothesis was missed, and check that ...
My point is that when you peer-review, you usually choose your priorities (unless you don't have much other work to do). Sure, it's not good that the review of the references was not done perfectly. But it's just unrealistic to think that references are always reviewed perfectly, and that if you spot one article with a badly reviewed reference, it's proof that the reviewers were part of an ideological conspiracy. It's a survivorship fallacy: you see here a bad article where the reference is wrong, but you don't see all of the work of checking everything else. If you were presented with another bad article where the most obvious flaw was something else, you would have said "the reviewers are so bad, checking this particular element would have taken me 5 minutes", and you would not even have considered checking the references yourself for this particular example.
Honestly, checking the content of the references is a low-gain job relative to the effort. Because citing a reference incorrectly is very risky (there is a strong probability the cited authors will notice, and it looks very, very bad when they point it out), it is pretty rare that checking a reference really leads to a correction, and it is not unusual for reviewers to assume that the authors are acting in good faith and that they themselves missed something more subtle, rather than that the authors got it so wrong. That does not mean it does not happen, and again, ideally, the references should be checked. But there are plenty of things that can be missed during a review, and checking everything is a real effort that is usually not worth it.
> The reference author isn't going through every work that cites theirs.
That seems stupid: they are building their work around a specific subject. If they are going to read one article, it will be an article on a related subject. And what better relation than one that led to a citation?
In other words: what are these authors doing if they are not reading the articles that address related subjects? Can you even call those authors experts? They have read a few articles, written one article, and since then totally abandoned their research and ignored all the new elements that shed a more precise and advanced light on the subject.
It's also a strange argument: on the one hand, you say it's easy to read the article that needs to be reviewed plus all the articles in its references. On the other hand, you are explaining that it's too much work for an author to read one article.
And it's not only about reading: 90% of research is about debating between researchers. That's the whole point of conferences and seminars. Either the reviewed article is totally ignored, in which case who cares if the references are wrong, or it is not ignored, and the less it is ignored, the more it is exposed to people for whom an incorrect reference will be obvious.
> Peer review is the last step before submitting the article to the whole community.
Exactly. And what is the point of submitting the article to the whole community? To create debate and discussion, bringing new arguments that are then disputed, refuted, ... until we reach a new position that we trust.
The point of peer review is to be a basic first filter. The goal of this filter is to exclude articles that are, with 100% certainty, not worth the time to discuss (or not yet worth the time). But this filter NEEDS to let some bad articles pass: it should reject an article only when it is 100% sure the article is not worth it. And as long as the rate of articles is manageable, there is no reason to make this filter too stringent; that would be dangerous. Better to have 100 bad articles pass peer review than 1 good article fail it.
> So you shouldn't look into questionable claims because sometimes false claims sound perfectly believable?
That is not what I'm saying. What I'm saying is that you should not bias yourself with your preconceptions: be critical of EVERYTHING; it does not matter whether it sounds correct or incorrect. Sounding correct or incorrect is a really, really bad way of knowing whether something is correct or incorrect. It even creates unconscious biases where we end up thinking something false should be treated as if it were proven.
> The article can say "Cort was informed about the new techniques by a family member back from Jamaica". How do you verify that? The only way is to redo all the work from scratch, re-find all the historical references, and cross-validate each of them. It's a huge amount of work.
Since this is about checking textual references, as opposed to laboratory work, would it be possible for an LLM to do that? Seems like the hardest thing would be for the A.I. to log into the requisite gateways for the databases hosting the papers.
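For what it's worth, here's a minimal sketch of the checking step, assuming the OpenAI Python client and assuming you've already fetched the full text of each cited source (the gateway/login problem is exactly the part this skips). All names here are hypothetical, not anyone's actual pipeline:

    # Hypothetical sketch: ask a model whether a cited source supports a claim.
    # Assumes the openai package and OPENAI_API_KEY in the environment; fetching
    # source_text past paywalled gateways is left unsolved, as noted above.
    from openai import OpenAI

    client = OpenAI()

    def check_citation(claim: str, source_text: str) -> str:
        """Return the model's verdict on whether source_text supports claim."""
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "You verify citations. Reply SUPPORTED, UNSUPPORTED, "
                            "or CONTRADICTED, then explain briefly."},
                {"role": "user",
                 "content": f"Claim: {claim}\n\nCited source:\n{source_text}"},
            ],
        )
        return response.choices[0].message.content

    # Usage: loop over (claim, cited source) pairs extracted from the paper and
    # flag anything UNSUPPORTED or CONTRADICTED for a human reviewer.

At best this flags mismatches for a human to look at; it wouldn't settle anything on its own.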
I doubt it is as naively simple as "querying a database". For example, before the publication of this article, the majority of the "databases" simply said "the inventor is Cort", which is one example of something the A.I. would trivially get "wrong" when reviewing the paper.
And if it were possible, then this A.I. review method could also be applied to mathematics, theoretical physics, computer science, ...
And even if it is doable, you will still have things that pass the review process when ideally they should not. It is just a hard limitation, the same as in "justice" (it is impossible never to judge a guilty person "innocent" or an innocent person "guilty"; anyone who thinks otherwise just doesn't understand how complicated it is).
It probably can be done, but does the academic establishment really want word to get out that their great research is being auto-reviewed because the "peers" were so bad at their job that an AI can do better? They've spent decades pushing the idea that peer review is a gold standard on the world, it would be quite a climbdown to admit ChatGPT can do it better.
> This article was later debunked by experts who took the time and effort to do that. But peer review does not allow reviewers to pause their own work to do such extensive checks.
Except you made that accusation up. In all likelihood, you do not know how censorship in the USSR functioned (because the hints you provide suggest so). And in all likelihood, you know zero about what is in historical articles or how peer review functions.
Wikipedia has an inherently broken system. It does not use primary sources. As such, if you get enough media publications to agree on something, you can push that narrative there even if it is an absolute lie.
> And so, Herodotus – the myth-maker – talks up the Spartiates at Thermopylae (you know, the brave 300) and quietly leaves out the other Laconians (who, if you scrutinize his numbers, he knows must be there, to the tune of c. 900 men), downplaying the other Greeks. Spartan leadership is lionized, even when it makes stupid mistakes. Thermopylae, to be clear, was a military disaster and Spartan intransigence nearly loses the battle of Plataea, but Herodotus represents this as boldness in the face of the enemy
That's why la longue durée, put forward by the Annales School and by Braudel, is the way to go: for all its minuses, it focuses not on individuals or specific events but on the great "waves" of history. So, and back to the article, in that respect, for the Annales School it doesn't really matter whether it was Cort or a Jamaican man who "invented" the "Cort process".
I don't think "Stories are bad for your intelligence" is the right conclusion here, more like "ideologically motivated thinking is bad for your intelligence". As another piece of evidence for this I can recall from Superforecasters that the ones who did the worst at forecasting were ideologically motivated people and people with "one big idea".
It's interesting to try and define what good and bad forecasting is. When you consider the outsized calamity of black swans, it is practical to have some diversity of prediction, not unanimity around a single prediction.
You don't want to over-value predicting non-extreme events; you need that Israeli 10th man to persuade others of the improbable to ensure your plan is resilient. The insightful ideologue will be right about the intentions of a Hitler... but they also predict a few too many Hitlers.
Sure, but ideologically motivated reasoning will only provide "broken clock" predictions. What good forecasters would do with hard-to-predict events like the pandemic, the next recession or political revolution is to admit ignorance and revert to the base rate (e.g. once a century on average). Truly black swan events are things we don't even know we should try to predict.
An example of ideologically motivated reasoning would be "The US is about to collapse because Biden/Trump is president". Unfortunately many if not most people think like this; that's one reason why so few are good at forecasting, according to the book.
I think it's a bit more than a random prediction; it's more like a heuristic.
Imagine the tree of all possibilities and all subsequent possibilities below those, etc. It's just too vast to explore. The ideologue is not a random path through the tree but one biased in a very human way, with pessimistic assumptions about human motivations and behaviour that are sometimes true. A heuristic can be valuable without achieving more truth than fiction, just by being better than random.
> ideologically motivated reasoning would be "The US is about to collapse because Biden/Trump is president"
What I have in mind is that the ideological forecaster would start with a set of assumptions and try to justify them against the odds. E.g., they assume "Biden/Trump is a monster", they adopt the specific mindset of this monster and interpret known events, which seem entirely innocuous on the surface, as contributing to a larger nefarious strategy, and they work through a chain of consequences to arrive at a compelling theory that hangs together. It's like exploring many moves ahead in chess without an evidence-based goal function justifying continuing down a specific path.
A reasonable individual might not presume others are monsters or pursue a line of reasoning so unjustified for so long. The heuristic works because, sadly, "X is a monster" and "innocuous actions are actually part of a nefarious plan" are sometimes true.
I'm essentially defending conspiratorial thinking as a good thing in an adversarial situation so long as it's integrated sanely and not allowed to rule the day alone. If you don't have ideologues of different flavours in your intelligence unit, you might need to create them.
I think what you're describing is a sort of ensemble. I agree sometimes "conspiratorial" thinking is right, as in the invasion of Ukraine, or Hitler. Then the difficult part becomes how to weight the different world views at this particular time in history. But I agree that at the very least you need to be able to simulate the thinking of ideologues in order to understand their thinking and actions. The invasion of Ukraine is a very interesting example where people underestimated how nefarious Putin really was, but if you read his writings on Ukraine and do a Bayesian update after Georgia and Crimea/Donbas, the invasion shouldn't have been all that surprising. At least we should have assigned something like a 10-15% chance.
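To make the arithmetic concrete, here's a toy version of that kind of update; the base rate and likelihood ratios are invented for illustration, not taken from Tetlock:

    # Toy Bayesian update in odds form; all numbers are made up for illustration.
    def update(p: float, likelihood_ratio: float) -> float:
        """Multiply the odds implied by probability p by likelihood_ratio."""
        odds = p / (1 - p)
        odds *= likelihood_ratio
        return odds / (1 + odds)

    p = 0.02  # assumed base rate for a major invasion in a given year
    for evidence, lr in [("Putin's writings on Ukraine", 2.0),
                         ("Georgia 2008", 2.0),
                         ("Crimea/Donbas 2014", 2.0)]:
        p = update(p, lr)
        print(f"after {evidence}: {p:.0%}")  # 4%, 8%, then about 14%

Three modest doublings of the odds move a 2% base rate into the 10-15% range, which is the point: you don't need certainty about intentions, just a willingness to update on each piece of evidence.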
History is written by the victors, and then endlessly rewritten after that. At present, historians are very busy writing the history of subaltern voices - ie introducing stories from marginalised perspectives into the historical record. At the other end of this sausage factory are the consumers who are fed this information which is presented as fact - hardly ever are we presented with the raw evidence that these stories are based on.
I'm very glad that this angle on history is receiving greater attention; it is absolutely equivalent to the replication crisis in science. It seems that all academic endeavours are politicised and bent towards some ends that are predetermined - rather than a natural unfolding of ever increasing understanding.
> well, in case of the Vikings it was mostly the other way round. The monks who got plundered were the "literate class" of their time, hence in this case it was written by the "losers".
But do you realise that the research you present here is not actually evidence? It is a link to a historian's opinion. It would be like reading a quote from the Guardian about how Bulstrode is presenting remarkable research.
Meta analysis such as this, based on hearsay rather than personal verification and assessment of the actual evidence, is assuredly not the way to get to the truth of the matter.
The idea of a flat earth is a fairly intuitive claim too, as none of us experience any of the sphere-earth attributes - we don't see a curve, experience the spin, etc. Intuition without evidence is really just a story.
Can’t say I agree that the earth being flat is nearly as intuitive as the claim that monks, being the only literate faction of their time and place, were the only ones to record its written history.
I'd say the monks, and the society they represented, were the winners in the middle to long term; after all, the Vikings who had gotten to the Loire and those whereabouts ended up speaking French and "becoming" French themselves, and not the other way round.
A notable and unfortunate exception. To give you an idea of its impact:
> First enunciated in 1866, it has continued to influence racism, gender roles, and religious attitudes in the Southern United States to the present day.
> In that regard, white supremacy is a central feature of the Lost Cause narrative.
It is. But academic fields (perhaps all fields across society) are all bent towards a progressive liberal outlook. This outlook is also highly intolerant of divergent views - if they disagree with your voice, they will not fight for your right to speak anyway - you will be deplatformed. That intolerance is the most pernicious element to me.
This piece does the opposite of acknowledging that history is written by the victors. It is an attack on the credibility of a particular perspective, striving to paint the author's opponents as fictional storytellers while the author is objective and rational.
History ought to be an evidence-driven practice - the evidence (primary, secondary, tertiary) should drive the theory. It ought to be the scientific method applied to historical evidence.
As I read this post, the author is saying that Bulstrode had very little or no evidence for the story that was then widely circulated in many media outlets. The author is using that one example to illustrate a wider principle in play - that many historical stories are not based on sound reasoning. This is my assessment too.
If there is little or no evidence, where is the greater bias? Is it in the person that conjures up the story (Bulstrode) or in the author who is saying the story is not well supported?
"to warn against 'story bias'. Cowen argues that any time someone believes in a story they are effectively subtracting ten points from their IQ (by which he means, broadly speaking, their analytical intelligence)"
I have started to hate narratives. The narratives authors construct reflect their biases. If a bias is weird enough, it gets popular (because it's not boring). It's most attractive to people who are already (lightly) biased in the same direction, and it makes them more biased. All it does is draw people away from a correct understanding of objective reality. And yet narratives are praised as a way to understand reality better.
So if you want to indulge in the fantasies of twisted minds about how the world works, read literature. If you want to improve your model of reality, read a coursebook or a manual.
I get quite angry when people get reductionist about history, usually because of story or narrative (pre-templated story patterns re-applied over and over again).
The beauty of history is that the same event can have multiple, conflicting, powerful narratives associated with it. Looking at these narratives challenges our understanding of our own humanity. If we're smart, we end up realizing as Solzhenitsyn did, that "...If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being. And who is willing to destroy a piece of his own heart?..."
And then we _really_ begin our study of history, humanity, wisdom, and the rest of it.
It's far too powerful to buy into a story and get blinders on. I think it's probably a great thing when beginning your journey, but you should never stay there. And if you're a historian stuck in a version of a story I'd argue you don't know your own profession.
There's nothing wrong with stories, either in history or elsewhere. But good vs evil stories are bad history and almost always bad writing too. Reality is almost always either different shades of gray or blue and orange morality.
If you can't get beyond your own personal view of morality to be capable of understanding a different perspective, you're probably incapable of doing good history or creating good fiction, because you're inevitably going to end up writing unbelievable, comic-book-evil characters for your villains.
As I learn more, I realize how much of what we teach kids younger in their lives is oversimplified, bowdlerized BS, no matter the topic.
I think that's unavoidable. You have to learn some comic-book version of, say, physics before you're ready to talk about Kuhn and paradigm changes and so forth. But there's also a real danger here: whatever your dumbed-down version of a topic is, it had better be a positive one, one that engages and challenges students to learn more.
Negative stories and narratives have the self-destruction of passionate learning built into them. It's poor-quality pedagogy. If I teach a class of fourth graders that clowns are evil, have always been evil, circuses are the work of the devil, and so forth? I can guarantee you that nobody in that classroom will ever become an expert on the fascinating history of circuses, animal shows, clowns, and so forth. It's extremely tough to get students wound up enough to spend a huge amount of time diving in somewhere, but it might take ten seconds to turn them off to an entire area of future study.
And frankly, those simplistic good-vs-evil stories are not only bad history, they're boring. They make consumers dumb. They make for idiotic public discussion. (Like I said, I tend to rant. I feel that we are cheating an entire generation out of the deep and beautiful vista of the humanities in our endless search to sell stuff to one another)
It doesn’t have to be oversimplified balderdash. As the whole “ELI5” thing has shown, skillful reduction can preserve the truth and spirit. It’s just not easy.
(It’s actually a lot of fun as a parent trying to do a good job of this)
I disagree. When there's pressure to piece together a story, it can easily become a search for facts to fit a narrative, motivated reasoning. Data that doesn't fit that narrative must just be outliers. New details not substantiated by facts enter the picture because they improve the story. Stories tend to lose some truth. That's dangerous.
It’s a tool of abstraction, I think? If you study almost any major historical event in detail, I think you inevitably discover deep layers of nuance and complexity. But we can’t really manage such depth and breadth at the same time as hoping to eke out meaningful lessons to be propagated into the future. So a moral is distilled and committed to story and song.
Since the Epic of Gilgamesh, the Bible, Homer, etc., historians have been dramatic.
There is a reason why history is called 'his story'. The best and most dramatic story, usually the one favored by the elites, gets told. More often than not, historians are commissioned by the elites to write stories. Even today.
It feels like almost every history-related research news item I've come across on HN lately on closer inspection appears to involve lots of guessing and wishful thinking, often in 3+ convoluted steps.
I suppose only the most sensational things make it here and they probably tend to be more bogus than the non-sensational findings.
While the author is of course correct in his assessment, I do wonder if he is doing the same thing. Certainly his point could be made better, or at least more simply, without such a "dramatic" attack on storytelling. Stories make us less intelligent?!! Easy does it, lad!
Ironically this article is in itself a compelling narrative.
You might even call it a dramatic story! :-)
Joking aside: I found this very interesting, probably because it's a topic I have literally zero exposure to normally. Now I'd like to hear the other side.
By education I am an industrial engineer. I am also a hobby historian, meaning I like reading about history. As a result, the industrial revolution and the historical impact of manufacturing technology are right down my alley. I have to say, I love it when historians, and those come with all kinds of secondary expertise, try talking about post-industrial technology without a solid knowledge basis (and sometimes earlier technology too; generally that is easier to understand on a superficial level, but then some things, e.g. Roman engineering feats, are hard to get without a proper engineering or crafts background). Results range from hilarious to ignorant to outright embarrassing. Not sure what category the article, and the paper, fall in. I'll settle for hilarious.
Let's think about academia for a second and the metrics we use. You must publish often and your work must be "novel." The more novel the work, the more prestige; the more frequently you publish, the faster you rise.
This is a system that's problematic for science, but crazy in the humanities. One reason it's problematic in science is that it discourages replication; in something like history, the parallel is reinvestigation. Something like the Roman Empire is well studied, and there's less novelty to be gained. But do we want to discourage such research? Discourage modern people from maintaining such records?
Under the current system, we can encourage these people to continue studying popular areas, but do we encourage maintaining historical truth and integrity? I conjecture that the answer is no. It encourages making things up, especially to fit a modern narrative. It's a system that makes people not know they're lying, because they are true believers and no credible experts push back.
This isn't a problem unique to the study of history; you can find issues like this in all research in academia. It is very hard to measure the quality of research. Citations, h-index, and so on do not mean a lot. We've seen in the past few months very popular researchers found to be frauds, holding high positions at prestigious institutions like Stanford. Even in practical areas that should be tractable: psychology, medicine, machine learning, and more. We want to believe, but do we have strong evidence and strong understanding? We make easy metrics at first, because they are "good enough" or because we lack better ones. But we often forget the limitations of those metrics as time goes by, because we get caught up in the rat race. You can't just ask yourself whether the metric makes sense; you also need to ask whether there's an easier way to score on the metric that isn't the intended object of measurement. We can see here that citations can be hacked easily by fun stories and surprising results, and we'll never uncover them, because the only people who attempt to replicate are those in junior positions, like undergrads and grad students, who are more likely to question their own results than the original work. Those failures never get written about.
Science is really about progressing knowledge. Publications are really about communication. Both these are very vague and can be accomplished in many different ways.
I say that instead of making narrow, easy-to-define metrics that will never be good enough, we embrace the chaos and noise. Let the researchers be free and let ̶G̶o̶d̶ time sort it out. We don't know what is good research until 5-20 years down the line. We don't know who's a good teacher until the same. By removing the metrics you force nuance not to be ignored. Most of the time it's a "know it when you see it" definition, and if that's true we have to be careful about levels of confidence. The thing that makes us human is the ability to handle nuance. It is what differentiates us from animals and LLMs. If the downside is that people in a field that requires them to think hard and carefully have to... think hard and carefully, then that's okay. ML is very successful with the arxiv model because, despite much hype, noise, and even many bad actors, we researchers can sniff it out. We have to do it anyway, as that's our job.
I fear that if we don't fix things, we will continue to degrade our foundation. We've seen this experiment fail over the last century. It's time to just get rid of the bureaucracy. Let academics be academics. Micromanagement isn't helping. Research can take decades, but people concerned with the next quarter will never allow that. Academia is about risks, challenging the status quo, and nuance. It is not to be run like a business. I am happy for this sector not to be profitable, because I know the profits are indirect and benefit everyone. But returns on investment may take centuries, and that's okay.
I often think the same; in fact, on my todo list is (in the background) to think of more ways to reduce the bureaucracy I face on a daily basis, which takes many forms - from arguing with reviewer #2 over whether or not my finding is notable, to trying to convince a panel that my idea is worth funding.
From a system-design perspective, alas, resources are limited and have to be allocated somehow, and in the absence of any metrics at all, as you suggest, it would ultimately come back to who can tell the better story about their work. This is still at best only weakly correlated with the actual quality of the work.
Exactly. But I think if we at least recognized that metrics are guides rather than targets, we could get reviewer 2 to stop wasting everyone's time, or get an AC to tell them to shut up. It wastes a lot of taxpayers' money.
Similarly, I think we need to recognize academic funding as a form of long-term investment. It can be highly risky, and should be. In reality, our total investment into this sector is very low; we aren't even investing a penny from our dollar.
And I could go on about the scam of "peer review", which just seems like a way to extract government money using another player's workforce while providing at best a trivial amount of utility.
This is such an academic take! There's tons of fraud, bias and false claims in research, therefore the fix is to get rid of whatever trivial checks and balances we already have and let researchers do or say whatever they want.
The "let the researchers be free" is the system we already have and it's not working. Researchers need much less freedom than they have today. They need to be micromanaged far more aggressively, by people who aren't impressed by them. Society needs to treat them as potential adversaries, regulate the hell out of them and toss them in prison when they step out of line, no different to company CEOs or corporate scientists (see Theranos). Also reducing their number by 20x or so would be a good start.
Whatever happens, they definitely need to lose the freedom to publish false claims. But universities are run by people who will never do this. Academic fraud happens all the time and there's never any follow-up. Therefore universities need to lose their freedom too.
There are really only two solutions to these sorts of frauds:
1. Defund it all, let the private sector seek the right level of management
2. Try to fix it with more micromanagement and regulation in the public sector.
"Let academics be academics" is exactly how large sections of the population have ended up holding academia in outright contempt.
> There's tons of fraud, bias and false claims in research, therefore the fix is to get rid of whatever trivial checks and balances we already have and let researchers do or say whatever they want.
You've completely mischaracterized the argument. The problem is that the metrics don't work, and that the metrics themselves incentivize fraud more than they incentivize quality research. Call it the cobra effect or Goodhart's law, but it's the same thing.
The reason I present this solution is that I'm saying we should get rid of the carrot that leads us towards fraud. You may also be able to infer that I'm suggesting we take the money out of academia too, which is a different carrot pointed in the same direction. The point is to make the system enticing only to those who want to research and learn, not to those who want to get rich.
Unfortunately, they'll always be able to publish false claims. That's because claims are almost never immediately verifiable; it can often take decades. We also do not want to make people fear being wrong so much that they never attempt any meaningful progress. The process of research is being wrong over and over until you find something that doesn't look wrong. Then you communicate it to others. And like I said, one thing that is discouraged by the current system, and that I want to encourage, is replication. That is the metric you seek. It still doesn't completely verify results, but it does build high confidence. Replication allows us to make mistakes and learn from them. It teases out the charlatans, as their work will always fail to be reproduced. Just because you do not think there are guides in my system does not mean there aren't.
> They need to be micromanaged far more aggressively, by people who aren't impressed by them.
This is such a business-person take! To overly simplify metrics, to believe everything can be codified by numbers, and that any complexity is invisible to the system. I say this as a mathematician. Metrics are guides, and always limited. You assume naively that everyone is chasing money when most academics aren't. Pursuing my PhD was pretty much turning down hundreds of thousands of dollars; profit maximization would have meant stopping at the master's level.
The problem you have is that you're looking at the goals wrong. You're making this system profit-oriented, but the goal instead is to educate and to expand human knowledge. You may think that these are aligned; I'm sure you've created a good explanation for why they are. But the common mistake far too many make is failing to challenge their metrics and ask how else they can be satisfied besides the intended route. You must constantly ask yourself this and try to find easier paths than the intended one, especially as time goes on. But you're just one person, and there's a reason all code eventually gets hacked. There's a certain irony that on a forum called Hacker News we pretend that mathematics/metrics can't be hacked but all code can, despite these being just different representations.
All metrics can be hacked, because they aren't perfect rulers with all the nuances built in. It is why all rules are meant to be broken: the environment changes, and a simple set of words does not completely codify the intent and spirit behind those words. You are always required to be human, not an automaton, because the incompleteness is fundamental to the system, to the world.
> You've completely mischaracterized the argument ... The reason I present this solution
You haven't presented a solution. What you're making here, though I think you don't recognize it, is a classical Marxist argument. It goes, people are only bad because of the system of incentives within which they work. If we abolish that system then everyone will become good.
If you want to have full-time professionals doing research, then there must be a definition of what "professional" means, and that must be measured and enforced by outsiders. That means management, and will often mean micro-management, because, as we've seen, researchers will otherwise drift towards a local minimum of making false claims as a group without any way to self-correct.
Now, if you want to argue for the mass defunding of academia then please do so! But that isn't what you argued for so far. And it wouldn't directly address the false claims problem except by reducing the overall volume of claims.
Replication isn't the right place to start with reform. The system is so broken that it routinely lets through bad claims that don't require some fancy replication effort to detect; you just need someone with some time to read them and do basic sanity checks. Just paying professional peer reviewers would be a good start (require paper submissions to fund this process even if they get rejected, to reduce journal conflicts of interest). This type of trivial reform could be done tomorrow, but nobody seems to be doing it. And even if they tried, it would be immediately subverted by academic institutions. The only reforms to science that stick are ones imposed by outsiders, who are the ones who want results.
> To oversimplify with metrics, to believe everything can be codified by numbers and that any complexity the numbers miss doesn't exist in the system.
Nobody in business believes that. Well run businesses don't overly rely on metrics to define productivity and frequently rely on intuitive things like the judgement of managers, which is in turn judged by the quality of their results by customers and so on. Part of what makes capitalist competition work well is that it ensures a constant search for the right balance between metric and management defined career success.
> You naively assume that everyone is chasing money when most academics aren't
They are all chasing money; that's why they demand a salary for their work and fill out the grant forms instead of being hobbyists in a shed. Academic denial of this is how we end up with repeated cases of blatant fraud that can be detected by anyone who simply reads the paper and its references, with no reform or even an admission that the system is flawed.
Arguments like the one you've presented here are honestly the best sort of evidence for the abolition of academia. These are institutions overrun by people with an understanding of human nature so warped they learned nothing from 20th century history and probably never will. Self reform seems impossible with such a culture. We need politicians who will grab the bull by its horns and sweep it aside.
This is absolutely hilarious considering why communist countries fell: planned economies are too inflexible to adapt to rapidly changing environments. Communists love metrics. They love bureaucracy. They love management. They love simplicity, boiling things down into bite-size components that are easily digestible.
You have things exactly backwards. I am preaching to embrace chaos, not order. The claim is that not only does Big Brother not know best, but that no brother can. Truth has a lower bound on its complexity, but deceit has the capacity to be infinitely simple. That is why so many can fool us: we abhor complexity. Even the most complex conspiracy theories boil down to nothing distinguishable from "wizards did it." They replace all-knowing gods with all-knowing men, because in the end evil rule is preferable to chaos. At least under evil rule there is order; there is someone in charge. My claim is that no one is in charge and everyone is just as lost as you, pretending to know their way.
If you read Marx and Engels you will find that they absolutely love meritocracy, something I'm literally calling a pipe dream and unobtainable, something I'm claiming that pursuing leads you in the opposite direction of the intent. You requested micromanagement, and to the extreme. I cannot see how you disagree with those whose most fundamental belief is "from each according to his ability": be all that you can be, or be a failure to the state. These are the people who want to so intimately know your life. Communist Russia, China, and North Korea are not known for their freedom and chaos. They are known for heavy-handed authoritarians who want to micromanage, to subjugate, because they know best.
Talk about the pot calling the kettle black. You impose a position on academics because it is one you want to believe, not one you've cared to validate; you'd see the stark divide between those in the humanities and those in STEM if you looked. I have actually read the "Communist Manifesto" and other works like Stalin's "Dialectical and Historical Materialism"[0]. You know why I don't believe them and do not preach their words? Because unlike many of those who believe, as well as those who hate, I actually read them before I made up my mind. I know their claims not because others have told me what they say but because I heard it from the horse's mouth. Maybe you should too.
And before you talk about Smith's Invisible Hand, maybe you should read "The Theory of Moral Sentiments" and "The Wealth of Nations." Maybe you should also read the work of his best friend.
You are so quick to call out others for being "sheeple" yet you demonstrate that you have not read the source materials. Are you afraid? If they convert you, then clearly your beliefs were not strong. But if they are wrong, then you can actually counter the arguments being made rather than strawmen which may not even exist. I'll tell you from personal experience, nothing has turned me off communism more than the Communist Manifesto itself. You speak of merit, but how can you claim it if you do not read the source material, if you do not give it a true fair shot, a good-faith read? There are no magic spells; reading shit will not cause you to believe shit.
You've shown all throughout this that you have an abundance of confidence but a dearth of expertise. You have had ample opportunities to explain your arguments, to act in good faith, but at every turn you say no more than "trust me." Even the Russians followed that with "but verify" (which Reagan popularized in the West). I'm not willing to trust those who do not provide the means to verify, nor those who do not care to respond to the arguments being made. You have a lot more in common with Stalin than you may like to believe, for you too -- like in [0] -- happily create not just strawmen, but conjure arguments never made so that you can easily topple them.
You preach answers while I preach uncertainty. How dare you compare me to men who seek to be gods.
I've read the Communist Manifesto too. It is long on condemnation of the existing system and short on how their proposed alternative would work (because there wasn't one).
So what exactly is your proposed plan because it doesn't seem to be spelled out explicitly? You want to defund all of academia and let the market drive research efforts? I'm all for that. But the net effect would be that researchers have managers who hold them to account for their effort, same as any employees.
What I'm criticizing here is the frequent academic response that all the problems will be solved if the (very small) amount of supervision and bureaucracy they're put under is swept away. This critique is much like that Manifesto. It doesn't explain how anything will be fixed by embracing the chaos, as you put it. It just sort of asserts that it will.
You libertarians are no different from the communists you so protest. One worships the invisible hand of the free market -- something Smith would be devastated to see -- while the other worships the omniscient hand of Big Brother. Both appeal to a higher power that knows all. You worship a market, and your claim is that one size fits all, for you yourself said that all strive for riches and wealth. So there is nothing I can say, because you won't believe me that I do not seek riches or wealth but just wish to live my days reading and doing math. Your model is too strict to believe I exist, or my partner, or my best friend. I know many who seek wealth, especially in my field of ML, but I promise you that there are many like me who just want to learn.
I have said my piece, but my piece is not simple. I cannot spoon-feed you something which I claim cannot be spoon-fed. I'll admit my ideas are likely a pipe dream, because they require one to think long and hard, to constantly re-evaluate in a changing ecosystem, to iterate and improve. For there is a lower bound to the simplicity of reality, and complexity goes against our deepest desires. There is no spoon-feeding; you must read my words carefully, for your priors are getting in the way. Stop being a true believer and seek to understand what people are trying to say, not what you know they mean. The latter is impossible, for nothing you can say will convince me that you're an omniscient god.
I don't think there's much to respond to here, but for some reason both your posts had been flag-killed. I don't think this argument is much good, and frankly your posts are a little offensive, but that shouldn't result in being cancelled, so I vouched for them.
I think the author confuses stories and /grand stories/ (sometimes called "meta-narratives").
I don't see how it is possible to do history without stories. That literally is what history is, telling stories about the past. Yes, the stories should have a factual basis, and some don't, but that's not a problem with stories as such.
The problem lies with the desire for all stories to fit a /grand story/. Such a grand story is comforting, perhaps even pleasurable, when it gives a feeling of omniscience, of everything falling in place. But it is seldom helpful as a tool for prediction and problem-solving.
But what is helpful is small stories. The world is messy and contradictory, everywhere and all the time. Without simplified stories we can't make proper sense of it (it is far too complex for individual humans to 'understand').
I have mixed feelings on this. While trivially it's true that we need to compress the world with stories, I do think there is a problem with people constructing grand narratives. We should be moving away from "story" and towards "hypothesis" so that we don't commit too strongly to a story that isn't supported by evidence.
IMO academic Historians love when their hypotheses are challenged, it shows a path for further understanding history, and writing an article with a richer narrative.
That's what I saw during my History degree, from my professors and the books I read.
"connected account or narration, oral or written," c. 1200, originally "narrative of important events or celebrated persons of the past, true or presumed to be; history," from Anglo-French storie, estorie, Old French estoire "story, chronicle, history," and directly from Late Latin storia, shortened from Latin historia "history, account, tale, story.
The non-historical sense of "account of some happening or events alleged to have happened" is by late 14c., but the word was not differentiated from history until 1500s and was used at first also in most of the senses of history. In Middle English a storier was a historian (early 14c. as a surname), storial (adj.) was "historically true, dealing with history," and a book of story was a history book."
IMO, the title is bad. The complaint in the article isn’t about story-telling, isn’t even about making them (overly) dramatic, isn’t even on not mentioning facts (nobody disputes there were slaves working in those ironworks, or that a man named Cort was nearby at some time), but on adding ‘facts’ and building a story that doesn’t follow from the facts at all.
Can you please elaborate?
What grand stories (or meta-narratives) historically shown to be good tools for prediction and problem-solving do you have in mind?
The grand story of "stages of historical development of the means of production" in Marxism for example. Or the grand story of the rise and decline of civilizations in Spengler.
Intriguing to consider how many personal/oral accounts of historical events might be exaggerated simply due to the witness's propensity to signal an "I was there" type of notion to others.
A recent paper [1] purporting to reveal "the evils of imperial Britain, stealing from black slaves" is shown to be utter nonsense [2], and the article charitably blames "story" for the failure of academia to declaim the paper.
> Bulstrode hasn’t yet responded to Jelf or Howes in public, but it’s hard to see how her paper can be creditably defended.
> I doubt that Bulstrode set out to deceive.
No, I think she did. If her intentions were good, she would address the concerns and correct the record. Those around her have a _duty_ to strongly encourage her to do so; otherwise they are complicit.
> It’s one thing for a young and passionate academic to make mistakes; it’s quite another for a series of experienced academics to let her make them.
The reason these other experienced academics are onboard is because it fits a narrative that has involved rewriting history in front of our very eyes. You just need to take a quick look at the language used, it has swallowed academia.
To get grants in academia now (particularly in some spaces), you have to justify how it will have some form of social justice impact. The work itself could be completely abstract, but now the results are tainted by the lens you were forced to put on the work in order to receive funding. I am not surprised for example that a University funded a history paper that showed Jamaican metallurgists used ancient black magic, but it was stolen by the evil white man.
There are some pockets of academics holding out, but they evaporate quickly. They are leaving academia and receiving a minimum of double salary in the private sector. Expect to see more and more work like this come out of academia, as they replace these people with those who echo the proper narrative to get the grant funding.
> To get grants in academia now (particularly in some spaces), you have to justify how it will have some form of social justice impact.
Correct. You need to show that your grant can “change the world” which is just another way of saying “create power”. And for whom? Who does this “research” or “studies” or “the science” (which isn’t actually science) give a claim to knowledge and thus power?
Follow the money as the journalists say.
So whatever you want to study has to generate power for someone. If your findings don't show evidence of something that does, then you'll just stop receiving money and the ability to work. So of course academics have responded this way, because that's the incentive. It's why everything is a "crisis", etc.
I don't think she set out to deceive, and your stated reasons for thinking so don't pan out. I believe the author of the article is right: historians have to be conscious of their narrative or the narrative will write the history. That's not being deceptive in the normal sense of the word, it's catering to a perceived true narrative, and unconsciously suppressing anything that might undermine it.
Politicians do it all the time, but historians in particular should not do it. They still do: in this case painting a skewed picture of the ingenious slave, in other cases they might paint a picture of the glorious supremacy of the western civilization.
Not getting caught in such narratives is ... difficult.
> I don't think she set out to deceive, and your stated reasons for thinking so don't pan out.
If you don't think that, then you should read her Twitter, at which is pinned to the top her paper with this introductory text: "This paper details how an innovation some historians have called 1 of the top 10 of the industrial revolution was first developed in the late 18C by 76 Black Jamaicans in a foundry just west of Morant Bay, but the central concern of the paper is much more important.. 1/4"
I read her x-twitter, and it doesn't support the notion that she set out to deceive - at all. Instead it reads as someone who is convinced that slaves and peoples of the colonies have got a bad rap (that's true btw) and that her research rectifies that injustice to some extent.
If you read her detractors you will find that absolutely no one accuses her of "setting out to deceive"; what they are saying is that her conclusions (while they could be true) are not sufficiently backed up by hard facts, and that she has got some of her facts wrong.
"Setting out to deceive" is a fairly serious accusation to level at a historian, claims that need more proof than pointing to a pinned tweet saying "she still sticks to her story" even though someone on the internet (credibly) found her conclusions to be tenuous.
> Yehuda Bauer, an Israeli Holocaust scholar who chairs the International Holocaust Remembrance Alliance, said he warned his friend Wiesenthal, who died in 2005, about spreading the false notion that the Holocaust claimed 11 million victims — 6 million Jews and 5 million non-Jews.
> “I said to him, ‘Simon, you are telling a lie,’” Bauer recalled in a Jan. 31 interview. “He said, ‘Sometimes you need to do that to get the results for things you think are essential.’”
That seems more like nitpicking over definitions than anything relevant to this matter.
Your quote and several other parts of the story focus on the number being "made up" and yet the same person quoted by you also says:
> The problem, according to Bauer, who has debunked the number repeatedly in his writings over the decades, is not that non-Jews were not victims; they were. It is that Wiesenthal’s arbitrarily chosen tally of non-Jewish victims diminishes the centrality to the Nazi ideology of systematically wiping any trace of the Jewish people from the planet.
> In fact, he said, the term “genocide” could accurately be applied to the 2 million to 3 million Poles murdered and millions more enslaved by the Nazis. But the mass murder of the Poles, Roma and others should not come under the rubric “Holocaust,”
So that's up to 3 million Poles, up to half a million Roma, which only requires "others" to make up 1.5 million for the "made up" number to be exactly correct. A quick Google suggests about a quarter of a million disabled people, 20 thousand gay men. So in reality this seems a surprisingly accurate "lie".
What he appears to be mostly concerned about is that the term "Holocaust" be a specific term for the Nazi genocide of Jews, and other terms be used for their genocide of other groups.
If anything, his dramatic quote about telling Simon Wiesenthal he was spreading a lie is a better example of someone using drama to advance their opinion
Yeah, which is a quote relayed by a guy who disagreed with what that person was doing and who seems to have a real bee in his bonnet about the topic under discussion, calling it a "lie" when I can't see any dispassionate observer agreeing with that characterisation.
Did the person allegedly quoted actually think it was a lie? Seems amazingly lucky that he got so close to the real figure if he did. It feels like someone saying "sometimes you've got to break the law to get things done" because they jaywalked in the middle of the night while out buying some milk. The drama of the comment just doesn't seem plausible at first glance.
Your count here relies heavily on the 2-3 million Poles, but to the best of my understanding, those were killed in war, either during actual battle or during the Wehrmacht's pillage and destruction. I say that because, according to the article, only ("only" ... yeah) 0.5 million non-Jews died in concentration camps:
> While as many as 35 million people were killed overall because of Nazi aggression, the number of non-Jews who died in the concentration camps is no more than half a million, Bauer said.
... so I can't see a way to reconcile that with the figure for Poles except by considering them overwhelmingly casualties of war, not a policy of extermination.
And so, as far as I can see, the original article is broadly right and your comment is inaccurate.
6 million murdered Jews is about right and well proven by facts. Then we have another 3 million Soviet POWs, half a million Sinti and Roma and euthanasia victims, plus some 3 million other civilian victims of the Nazis. Depending on your exact definition of the Holocaust, 11 million victims is pretty accurate.
Edit: General question, what is it with new accounts created, seemingly, only for comments like yours? I don't see anything controversial in it that might justify a throwaway... Unless of course you are new to HN, in which case, welcome!
Edit 2: It seems I was slow checking links... well, your article quotes Bauer:
>>
“All Jews of the world had to be annihilated,” Bauer said. “That was the intent. There was never an idea in Nazi minds to murder all the Russians.”
With all due respect to Bauer, the Nazis deliberately killed Soviet POWs, and they considered them sub-human of sorts. Also true, the Nazis basically started WW2 to murder all the Jews. As I said, it depends on where you draw the line of the Holocaust, and whether or not you want to count non-Jewish victims caught in the same industrial genocide complex separately or not. It seems the question is not the number of victims, but rather under which column they are counted.
>>
> Also true, the Nazis basically started WW2 to murder all the Jews.
Hitler was driven by his desire to reunify the German peoples and his wanting to make Germans economically self-sufficient and militarily secure. Within Germany, these goals were supported because the Treaty of Versailles, which had ended WWI, imposed terms on Germany such as paying financial reparations, disarming, giving up territory, and giving up all its overseas colonies.
After annexing Austria and Czechoslovakia, Germany invaded Poland in 1939. The French and British had guaranteed support should this happen, so they declared war on Germany, which started WWII.
And the motivation was, in the Führer's own words no less (!), Lebensraum in the east (and enslaving the Slavs there while doing it) and getting rid of the made-up Jewish-Bolshevik conspiracy by killing both groups. Hard to accept, I know, but nonetheless true.
Actually the only mistake the Allies might have made, might have because it is hindsight, is not invading Germany in the West while the Wehrmacht was busy in Poland. History is history, though. Seeing the causes of WW2, and the motivation of the Nazis, as anything other than what they are is something I will leave unaddressed.
We actually can't. Russia and in particular Stalin was also there. Russia absorbed more Polish acreage than the Germans. I'm happy to agree with the phrase, "Germany and Russia, acting together, started WWII."
The Russian war with Finland didn't escalate into a world war, though. That was the Germans and the Japanese. Stalin, for once, had actually nothing to do with that particular shitty thing.
> the Nazis basically started WW2 to murder all the Jews
Tim Snyder's explanation in Bloodlands is at odds with that story. There were very few Jews in Germany at the start of WWII. Initially, the goal was to deport them - somewhere (Madagascar!). The systematic killing of Jews came later, after Germany had invaded Poland (which contained a lot of Jews). The irony is that the Jewish population of Germany increased during the initial part of the war, with Polish Jews being interned in Germany.
Well, the holocaust of bullets started very early. In a sense, the Final Solution was driven, as cynical as it is, by the logistical challenges of killing all the Jews with bullets (while a shooting war was going on at the same time) and by the practical, and ideological, impossibility of deporting them. Hence mass extermination camps as the only logical solution. Cynical bastards, all of them.
To get some better feeling, and insight, I can only recommend the original German film about the Wannsee Conference, based on the actual meeting minutes.
> Also true, the Nazis basically started WW2 to murder all the Jews.
I don't think that's true. It's a stupid reason to start a war. Where's the profit for Germany in that?
Meanwhile, it's also true that Nazi Germany explicitly set out to murder all Russians. There are numerous official Nazi documents confirming that. Most of the population was slated to be killed directly or through starvation (by capturing the only lands suitable for agriculture, in the south-west of the USSR).
But since the West in general always considered and continues to consider Russians as subhumans, there's no mention of this fact in western books, movies and articles.
The Nazis' goal in the war was to eradicate all the Jews. It had been since, at least, Hitler wrote Mein Kampf. It shows in their actions, in how they organized, in who got which resources. Their goal was murdering all Jews, and they went about it with murderous and frightening efficiency. They only paused outright murder in the gas chambers to enslave a portion of the people ending up in concentration camps, the vast majority being Jews, when Nazi Germany ran into workforce issues. It was Speer who convinced Himmler that this was necessary, so the Nazis settled for murder through work.
No idea what you mean that the Nazis' behaviour regarding Slavs, not just Russians, isn't talked about in the West. I am in the West, and I definitely didn't learn about it from Russian books. Also, claiming the West sees Russians as sub-human is a strong claim, to put it mildly...
There's a massive scholarly debate about when exactly the Final Solution took shape, with one school of thought being that it wasn't until after the invasion of the USSR that extermination became the goal. Christopher Browning's _The Origins of the Final Solution_ [0] provides a good overview of this position.
The Wannsee Conference would be my choice for the date the Final Solution (gas chambers and extermination camps and such; the holocaust of bullets started right away) became official policy. Obviously all the back-room decisions had already been made by then, the Wannsee Conference being more of a working-level implementation meeting.
One issue: Wannsee takes place in 1942, in the middle of the war. Until then, most camps are "just" concentration (death by labour) camps, and Jews are mostly in the ghettos. Only after Wannsee do you have the start of mass transports to death camps - most people who died in Auschwitz were never inmates there; they were brought from other countries, sorted on the ramp and sent to death. Not to mention, at that moment the camps were already full of people (Auschwitz was started for Poles and Soviet POWs). So no, they didn't "stop" with gas chambers - they had inmates in camps and Jews in ghettos (all having to work in one way or another), and the idea of the Final Solution wasn't complete until 1942 (and it took place until 1944).
And the holocaust by bullets predates Operation Barbarossa. It turned out that murdering millions of people requires more sophisticated logistics than your average, run-of-the-mill genocide like all those done before. Hence Wannsee, the Final Solution, the extermination camps (of which Auschwitz wasn't one initially). The goal was clear from before the war started; the means were somewhat less well defined.
Edit: As a supply chain and logistics guy myself, the logistics of the Holocaust seem to be the only historical occurrence where the Germans got wartime logistics right. Eichmann was a really good supply chain guy, running a tremendously efficient operation from, and beware the following will be incredibly cynical now, sourcing (getting Jews concentrated in ghettos) with close collaboration of local suppliers (the SS, Gestapo and local authorities ranging from French police to the various collaborators in the east), over transportation (including Eichmann himself working on details like damaged passenger rail cars to keep his prime carrier, the Reichsbahn, happy), to the classification of people arriving in incredibly well-run extermination factories in the occupied East. It is that efficiency, applied to murdering innocent people for a depraved ideology, that makes the Holocaust stand out from all other genocides in history. And that was the main, if not the only, goal of the Nazis behind WW2.
> No idea what you mean that the Nazis' behaviour regarding Slavs, not just Russians, isn't talked about in the West. I am in the West, and I definitely didn't learn about it from Russian books. Also, claiming the West sees Russians as sub-human is a strong claim, to put it mildly...
The comment you replied to is somehow assuming that all Slavs are or were Russians, and that the Germans didn't also murder a ton of Ukrainians and Poles, among others.
> I don't think she set out to deceive, and your stated reasons for thinking so don't pan out.
I believe at this moment in time she is well aware of the claims against her paper. She has not released a rebuttal to the claims, likely because she doesn't have one (otherwise she would have used stronger evidence in her paper). Every day that goes by, she leaves the paper un-adapted and un-retracted, meaning that she knowingly propagates this false narrative. It is likely she has zero intention of retracting her paper unless the journal forces her.
> Politicians do it all the time, but historians in particular should not do it.
Neither should do it, but politicians are somewhat shorter lived.
> Not getting caught in such narratives is ... difficult.
Posing a narrative isn't necessarily negative, as long as it is plausible and supported by evidence. For example, you could read a very good argument for Europe being wrong to go to war with Nazi Germany and have very good arguments being made based on proper evidence. You don't have to agree with it, just respect a difference of opinion.
The paper the article addresses does not fall into this category.
I have a friend who was a historian and now works as a data analyst for a SEO agency and makes more than double their previous salary. They had no particular expertise in SEO or data beyond being able to use Excel and being able to think and write clearly.
> Historians? If so, can you name a few example? I find this hard to believe.
I was speaking more generally about academics who are under pressure from the current political bias. As other comments have mentioned though, the critical thinking required for good historian work is also quite broadly applicable elsewhere.
In America I can believe it. The US government doesn't fund cultural institutions very much. It's all left to private institutions (who frequently have political and ideological agendas).
Nah, both public and private universities in the US are pushing this political and ideological agenda. I'm not aware of any systematic pay difference for academics in public vs private universities - for example, Stanford (private) pays similarly to Berkeley (public), and they have similar politics. Both pay much less than private sector for anyone who can get an academic job at one of those institutions.
Your comment is a little more applicable to museums and the arts, but the US government funds all universities, both public and private, pretty lavishly. Unfortunately the university administrators have a habit of accruing much of that funding to themselves and leaving the academics relatively little.
There are other examples of this genre of history. Here is a debunking of the claim that Vikings were Muslim because something resembling the word "Allah" in Kufic script was found on a Viking textile. https://www.independent.co.uk/news/world/europe/allah-viking...
There are more claims in this genre if you look for them: Hidden Figures, the White House built by slaves, the 1619 Project. There are probably more, and there should be a compendium of false history. The common theme is that Europeans are evil and lack ability, and their relative success is explained by theft from and oppression of non-white people.
Media or research? These two are very different. Media may sell you canvas bags and paper straws, making you feel good with stories pushed by big oil about your personal responsibility. Climate science, on the other hand, is well studied and well replicated, with lots of validation. But that doesn't mean media won't distort what a work actually says. These things are very different, but many confuse the two. Just because you were told a false story doesn't mean the scientists told that false story. Never make this mistake.
It's well studied, but not well replicated. Climatology is essentially a branch of history and suffers the same issues: low-quality sources that can't be upgraded via experiment because they're recordings of past events, ideological bias amongst practitioners, and other issues.
Well, you have the opportunity to make an argument. But I've personally worked with the data, so you're going to need more than a "trust me bro" to convince me or many others.
See my comments on PAGES2K on another thread, reachable from my profile link. But as you've worked with the data presumably nothing in those posts is news to you, and you're well aware of the sorry state of paleoclimatology.
Yeah, but we aren't supposed to say that. We are supposed to agree that it's the end of the world and run around screaming and flailing our arms like Kermit the frog.
The difference there is you can just look at the numbers and see that climate change is a slow-motion catastrophe. No amount of spin or storytelling can change the facts that temps are rising, icecaps are melting, Canadian forests are incinerating at record levels, etc. etc.
The Kuwaiti oil fires were caused by the Iraqi military setting fire to a reported 605 to 732 oil wells along with an unspecified number of oil filled low-lying areas...The fires were started in January and February 1991, and the first oil well fires were extinguished in early April 1991, with the last well capped on November 6, 1991.
It was predicted by experts that the fires would burn for between two and five years before losing pressure and going out on their own.
Although scenarios that predicted long-lasting environmental impacts on a global atmospheric level due to the burning oil sources did not transpire, long-lasting ground level oil spill impacts were detrimental to the environment regionally.
Over the years, I have read story after story after story where the initial predictions were extremely dire and everyone was extremely upset and concerned and when things got cleared up in relatively short order, there was no celebration on par with the amount of bellyaching that occurred. But if you try to say anything like that, you get dismissed as a clueless nutter who just does not understand how bad things are.
So regardless of how bad it really is, we still are emphasizing things in a way that is inaccurate and excessively negative.
Ever heard the expression: only the paranoid survive? Maybe it's a reasonable strategy to cry wolf 100 times, out of which 99 times no wolf materializes.
Look at the recent catastrophes, from 2008 financial crisis, to COVID, to the currently raging war - who would have guessed? So we might be emphasizing things in a way that is inaccurate and excessively positive.
I have an incurable genetic disorder. I take extreme measures daily to address it. I have no problem with taking appropriate precautions in the face of issues.
But I see no reason to wallow in the negativity in this way. It does not appear to be curtailing anyone's carbon footprint.
All it appears to do is prevent us from having meaningful, constructive conversations because any attempt to talk about solutions gets shot down as "not enough" and "won't make any real difference" and "we are all doomed anyway, so why bother?"
Sadly quite a lot of people predicted the broad strokes of the 2008 financial crisis, COVID and Ukraine war, although you can always argue that they should have predicted things more precisely.
- The 2008 crisis. It was just yet another round of banks making bad loans then getting bailed out by the government, leading to a recession as credit was withdrawn. This has happened many times before, and that it would happen again was obvious (and it will keep happening).
- "Scientists create virus that escapes and causes pandemic" was not only predicted but was such an obvious possibility that it was a sci-fi staple. Even if for some reason you still deny the lab origin, Gates/WHO and friends had spent the previous decade predicting a mega-pandemic of respiratory virus, albeit usually of flu rather than a coronavirus.
- Ukraine war, well ... there was not a shortage of words written about Russia over the past decade. And western intelligence did predict it I think, albeit only a few days in advance.
Crying wolf definitely has downsides. It's a children's parable for a reason.
I think that's what's called rewriting the history books. The narrative du jour causes people to fit everything in a neat little box. Clicks, views -> land better jobs.
> The discipline, or a sub-set of it, has become helplessly in thrall
> to one of the archetypal narrative forms: Good vs Evil. Naturally,
> the academics are on the side of the Good.
I find that this sums up more academic works than I feel comfortable admitting. And not just in the arts, but also in some of the less rigorous sciences, such as history and psychology.
That excerpt you quoted reminds me very much of Herbert Butterfield's The Whig Interpretation of History.
The introduction starts by directly talking about that topic, on page 3 of the following PDF if anyone cares to read a little.
He starts talking about the perception of the historian as an avenger, beating the high and haughty while raising the downtrodden, and that it would be better for the historian to be a reconciler instead rather than one who divides large swathes of humanity in good and evil, black and white. He prefers a historian who finds common unity between two different sides and pities these two sides who perhaps had no pity for the other.
His comments in the intro also touch on a phrase I personally dislike, "the right side of history".
I think that "helplessly in thrall to one of the archetypal narrative forms: Good vs Evil" sums up everyone commenting on this page who claims that their version of history is objective, while that of [insert opponent here] is not.
No, they're saying the claims made about Henry Cort in a paper lack historical evidence in favor of telling an ideological story, which shouldn't have passed peer review.
A couple of historians have already done the fact checking.
Edit: Oh I misread your post. You're referring to commenters critiquing ideological opponents, not the author.
These attempts to rewrite history to fit a narrative happen all the time. For example, I see constant attempts to deny that the Wrights' 1903 Flyer was the first powered, controlled flight. The evidence presented is always a pile of fanciful conjecture.
This rewrite occurs with modern events, too. For example, the narrative on the 737MAX crashes. I've posted about it here many times. On a recent trip, I ran into a 737-800 captain on layover, and struck up a conversation. I chided him a bit about remembering the stab trim cutoff switch. Well, that opened the floodgates. He told me in no uncertain terms that those accidents would have never happened to him, because he knew how to fly the 737. He told me one way they could have recovered the airplane was the one I mentioned here many times - trim to normal and turn off the stab trim system. He said there was another way. The EA airplane was operating at full throttle. The overspeed warning could be heard on the cockpit voice recorder. The pilots ignored it. Overspeed is a very dangerous situation, as the air loads on the airplane exceed design limits. All the pilots had to do was just pull the throttle back. Then they could have manually retrimmed the stabilizer, which they tried to but could not because of the overspeed.
He added that all the 737 pilots he knew knew about this, but were afraid to speak up.
Remember all the narrative about how the pilots could not have diagnosed stab trim runaway? This infuriated him, because when the trim runs, it spins two wheels right by the pilots' knees, making a loud and unmistakable clack-clacking sound - something I was told about by Boeing engineers, but it was nice to hear it from a captain.
I suspect that the reason the single point of failure design of the MCAS went through is Boeing simply assumed that the pilots would just turn off the stab trim system if it misbehaved. And this is exactly what happened in the first MCAS incident - the crew turned it off, and continued the flight normally and landed safely.
As far as is publicly known, the failure has occurred three times, and has led to two crashes. It is possible to recover from if you know what you're looking for and you act promptly. However, Boeing did not tell pilots what to look for.
The pilots did not diagnose it as stab trim runaway because it's not stab trim runaway. When MCAS is malfunctioning in this way, it engages nose down trim for 9.3 seconds, and then pauses for 5 seconds. A runaway trim is where you have a stuck trim microswitch or relay or some sort of short circuit. You're expecting a continuous trim input. The confusing thing about MCAS is that it appears to resolve itself, but then waits 5 seconds and comes right back.
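To make the timing distinction concrete, here's a toy sketch (illustrative Python only, nothing like real avionics logic; the 9.3 s active / 5 s pause cycle is the one described above, and the 12-second observation window is an arbitrary assumption for the demo) of why a naive "is the trim moving continuously?" test matches a classic runaway but not a faulty MCAS:

    # Toy illustration only -- not real avionics logic. The 9.3 s / 5 s
    # cycle is the MCAS failure pattern described above; a classic runaway
    # (stuck microswitch, relay, short circuit) trims without pause.

    def classic_runaway_active(t: float) -> bool:
        """Classic runaway: uncommanded trim runs continuously."""
        return True

    def mcas_fault_active(t: float) -> bool:
        """Faulty MCAS: 9.3 s of nose-down trim, then a 5 s pause, repeating."""
        return (t % (9.3 + 5.0)) < 9.3

    def looks_continuous(signal, window: float = 12.0, dt: float = 0.1) -> bool:
        """Naive checklist-style test: does the trim move non-stop for
        `window` seconds? (The 12 s window is an arbitrary assumption.)"""
        t = 0.0
        while t < window:
            if not signal(t):
                return False
            t += dt
        return True

    print(looks_continuous(classic_runaway_active))  # True  -> "continuous"
    print(looks_continuous(mcas_fault_active))       # False -> pauses at ~9.3 s

Whether a crew under stress should read the condition that literally, or treat any repeated uncommanded movement as runaway, is exactly what's disputed in the rest of this thread.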
By the way, the wheels going clack clack is a normal thing at that phase of flight. The 737 has something called Speed Trim and another thing called Mach Trim that automatically adjust the trim as the speed of the aircraft changes. In the initial climbout, this will be running constantly.
Both the speed trim and mach trim systems are documented in the 737 manual and pilots are required to know how they work to get a type rating. The workings of MCAS were deleted from the manual before it was released because Boeing was concerned that the FAA would require more training on the system, which would cause Boeing to have to pay a contractual penalty to Southwest Airlines.
It's very easy as a pilot to say that you would do a better job and something like this would never happen to you. The earth is pockmarked with the smoking holes of these people's airplanes.
> The pilots did not diagnose it as stab trim runaway because it's not stab trim runaway
Runaway stab trim is uncommanded movement of the trim. MCAS failure exhibits as exactly this. The most likely cause of uncommanded movement is an electrical fault of some sort. Electrical faults are often intermittent. When you don't know why an intermittent fault is happening, you turn it off. This is not rocket science. In the first Lion Air crash, the crew restored normal trim twenty-five times. The uncommanded trim happened twenty-five times. In what universe is this not runaway trim?
From what I've seen the definition you're using is one invented after the fact by parsing word meanings like a lawyer. If it's your butt in the pilot's seat, I recommend thinking like a pilot, not like a lawyer.
From your link:
"in at least one simulator session, Boeing pilots took more than 10 sec. to react to a runaway stabilizer"
First off, they don't seem confused about what a "runaway" is. Secondly, 10 seconds was plenty of time, as in both incidences the crew battled it for several minutes.
"Aerodynamic loads on the mis-trimmed aircraft made the trim wheel hard to turn"
The loads were too high because the pilots were at full throttle and ignored the overspeed warning horn.
> It's very easy as a pilot to say that you would do a better job and something like this would never happen to you.
And in the first incident, that's what the crew did. They just turned it off and continued to their destination.
> the wheels going clack clack is a normal thing at that phase of flight
When the wheels are clack clacking and the nose suddenly pitches sharply down, there's the clue that the stab trim is running away.
> The earth is pockmarked with the smoking holes of these people's airplanes
It sure is, and that's why airline pilots are supposed to know things like what the switches in the cockpit are for and what stabilizer trim runaway is, and to read and understand all Emergency Airworthiness Directives. Flying an airplane is not a joke and you'd better pay attention in training. My dad flew 23 years in the AF (including combat), and anything less than 100% proficiency was unacceptable.
Look, take it up with Boeing, not me. Here's the runaway stab trim checklist from 2018, before the accident [1].
Right at the very top, it says:
Condition: Uncommanded stabilizer trim movement occurs continuously.
MCAS produces an intermittent fault, not a continuous one, therefore you are not supposed to run that checklist.
There should be a checklist for MCAS failure, and the system and its failure modes should be explained, like every other system on a 737, but Boeing deliberately decided not to.
To your point, yes, you'd better pay attention in training. But, in order for that training to be of any use, they can't keep aircraft systems that can fail and kill you a secret from the pilots!
edit: Further to your point about the overspeed, the overspeed was not caused by the thrust levers being at full. The overspeed was caused by MCAS pushing the nose down and causing the plane to pick up speed. The pilots were trying to get the nose back up. The difficulty in moving the wheel comes from a combination of the airspeed and how far out of trim the plane is. Both of these problems were caused by MCAS putting in nose down trim. Yes, the pilots could have throttled back and bought themselves a little more time, but that is not the primary reason that they were unable to turn the trim wheel.
> MCAS produces an intermittent fault, not a continuous one, therefore you are not supposed to run that checklist.
Again, if you're a pilot and want to live, stop parsing sentences like Bill Clinton and parse them like a pilot. Remember, I worked on the 757 trim system. I guarantee you that everyone there would ridicule your interpretation. Do you really imagine it has to run all the way to the stops to satisfy you that it is runaway?
> they can't keep aircraft systems that can fail and kill you a secret from the pilots!
The pilot had everything he needed to know to save the airplane. He did not need to know the cause of the runaway trim, just how to stop it.
> the overspeed was not caused by the thrust levers being at full
When you're in a dive, it behooves you to pull the thrust levers back (and they were at full). Overspeed can tear the airplane apart. It just takes a second to pull the levers back, and reducing the airspeed would have made their attempts at manual trim workable.
Frankly, if I was teaching ground school and encountered a student who made arguments like yours, I'd wash him out. I'm reminded of that episode of The Office where the characters were driving using navigator software. The software directed them to drive into a swamp, and the characters were arguing over whether they should follow the directions into the swamp or not. There's a reason that airliners still have pilots in them and not have computers run it all.
I understand your point, but I have a little different perspective.
Pilots are not engineers. Airline pilots are not test pilots. They drill the procedures, and they follow them. We'd all like to have Neil Armstrong flying our plane, instinctually solving unexpected problems in seconds, but there aren't enough Neil Armstrongs out there to meet the demand for air travel. The solution that we've come up with is standardized procedures, and to have a procedure for every critical failure mode.
Boeing, for commercial reasons, did not want there to be a procedure for this failure mode. 737 pilots run the trim runaway scenario in the simulator, and they could easily have added a scenario for MCAS failure, but that would have cost Boeing considerable cash and they decided not to do it.
The simple fact is that you had two crews that were trained to Boeing standards crash, and as far as we know, only one survive.
If it had been just one crash and many successful recoveries, you'd be on a much stronger footing to argue that "oh, it's just those developing world airlines with their substandard pilots." When it's a 67% fatality rate though, you can't just blame the crew.
As someone who did not follow this in great detail, I don't recall anyone claiming it was not possible to diagnose, merely that the signals towards it were very poor and that training for things was bad, and that the crews that experienced crashes had fewer safety features than typical western crews, or something like that.
Your position here seems zany to me. Clack clacking sounds are not sufficient. Assuming that pilots would just turn off the system is not sufficient.
> I don't recall anyone claiming it was not possible to diagnose
I read many times that it was impossible to diagnose it as runaway stabilizer trim.
> training for things was bad
Stabilizer trim runaway is trained for. Pulling back the throttles for overspeed warning is trained for.
> that the crews that experienced crashes had fewer safety features than typical western crews
How would you explain not being able to handle runaway stabilizer trim, and ignoring overspeed warnings? How do you explain the Emergency Airworthiness Directive sent to all MAX pilots after the first crash, that reiterated what I'm writing here, yet the EA pilots either never received it, never read it, or forgot about it?
> Clack clacking sounds are not sufficient
737 captains tell me it is. The wheels are also painted black & white so their movement is easily seen, as it's right there next to the pilot. How much more obvious do you require it to be?
> Assuming that pilots would just turn off the system is not sufficient.
Turning off misbehaving systems is the bulk of emergency procedure training.
> seems zany to me
It would seem zany to anyone who knows only about the popular media narrative.
I've also gotten emails from pilots who told me I had it exactly right.
As for myself, I worked on the 757 stabilizer trim gearbox design. At one point, I knew about everything there was to know about it. The 757 system is not identical to the 737 one, but the method of dealing with runaway trim is the same - turn the mofo off. It's why there's a switch prominently placed right there on the center console. Do you want to believe that pilots don't know what the switches on the center console are for? Do you want to fly with a captain who doesn't know what they're for? There's no excuse for a captain to not know what all the controls are for.
P.S. Why would a 737 captain open up and tell me the truth? I let them know who I am, and back it up with details of how it works that an investigative journalist would never know. Then they're happy to chat with me.
> 737 captains tell me it is. The wheels are also painted black & white so their movement is easily seen, as it's right there next to the pilot. How much more obvious do you require it to be?
I'm imagining Boeing putting you up to talk to the press and you insisting it was the pilots' fault because the plane was making a clacky clacky sound.
I think you sound like a lunatic.
I also recall a key issue was that Boeing tried to make minor changes to the plane such that new training was not required.
It's striking to see an argument from authority, against the dominant narrative, deployed in service of dismissing those who purportedly aspire to "rewrite history to fit a narrative".
The way the press reported on it was just despicable.
P.S. in case it isn't clear, I am not a pilot myself, but friends and family are pilots. One just got his Apache Chopper certification! Wow! Me so proud and so envious.
Honestly, anyone who has domain knowledge about something the press reports on, and doesn't realize that the press is just doing whatever they want and fitting the relevant facts to that, is probably not paying attention much at all.
It's also pretty obvious that the "officially accepted story" around the 737 is better for everyone including Boeing - because it's now been wrapped up and completed; there was a bug, the bug is fixed, the end.
The real story that there are some unknown number of pilots flying planes right now that will not deal correctly with emergency situations - that's a much harder problem, and much scarier to face.
But it's obviously real; any number of air disasters occurred because the pilots didn't follow the basic emergency procedures.
None of that means that the cases where they have to shouldn't be reduced, mind you.
The Wright brothers story is well worth reading up on. The director of the Smithsonian refused to believe them, and championed an alternate "first powered flight" for a long time. This is why at least one Wright flyer wound up in the Science Museum in London. It only got fixed after he died/retired.
The lack of history on display in the Air & Space Museum on this, is quite interesting: it's like the whole "tell the truth, the whole truth" thing completely passed them by (Admittedly I was last there 15 years ago, it may be fixed now)
Air and Space has been undergoing a massive renovation, and reopened the first half last year. That includes a very good Wright brothers exhibit, easily better (IMO) than the one at Kitty Hawk.
Beyond that, all the new exhibits are great. Highly recommended for anyone that has not been back since they reopened!
A visualisation I particularly like, is that the first flight on record they made would fit inside the length of a Jumbo Jet. It really brought home how "one small step, one giant leap" can work out.
I've seen many current attempts on HackerNews to elevate one or another claimant to the first flight throne. It's not just the Smithsonian a century ago!
Isn't one of the legit criticisms of the MAX that the behavior of the cutoff switch and the thumb control was changed without announcing it?
Update from YKW:
Yes, the trim system behavior in the 737 MAX differed from that in older 737 models like the 737NG (Next Generation). In older models, manually adjusting the trim using the thumb switch would generally interrupt and disable automatic trim adjustments, including those by the Speed Trim System, a predecessor to MCAS.
In the original 737 MAX design, using the thumb switches to adjust the trim didn’t necessarily disable the MCAS, which could re-activate and adjust the trim again. This was a significant point of contention and contributed to difficulties pilots faced in maintaining control of the aircraft during the two fatal crashes.
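A toy sketch of that behavioural difference (purely illustrative pseudologic; the names are made up, and this is in no way Boeing's actual control law):

    # Purely illustrative -- made-up names, not Boeing's actual control law.
    from dataclasses import dataclass

    @dataclass
    class TrimSystem:
        is_original_max: bool      # True = original 737 MAX, False = 737NG
        auto_trim_enabled: bool = True

        def pilot_thumb_switch(self) -> None:
            # On the NG, manual electric trim generally interrupted and
            # disabled automatic trim (e.g. the Speed Trim System).
            if not self.is_original_max:
                self.auto_trim_enabled = False

        def automatic_trim_step(self) -> str:
            # On the original MAX, MCAS could re-activate and trim nose-down
            # again even after the pilot countered it with the thumb switch.
            return ("auto trim commands nose-down" if self.auto_trim_enabled
                    else "auto trim stays off")

    ng = TrimSystem(is_original_max=False)
    ng.pilot_thumb_switch()
    print("NG: ", ng.automatic_trim_step())   # NG:  auto trim stays off

    mx = TrimSystem(is_original_max=True)
    mx.pilot_thumb_switch()
    print("MAX:", mx.automatic_trim_step())   # MAX: auto trim commands nose-down

The point being: the same pilot action produced different downstream behaviour on the two models, which is the kind of change that arguably warranted new training.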
Today we have news for conservatives and different news for liberals. That's stupid. News doesn't need built in commentary. Now it seems like history is going the same way. We will have conservative history and liberal history. But I hope not.
It's not really news, though it calls itself that. It's entertainment, and it's written and dramatized to get readers/viewers and ad views. This is true on both sides of the political spectrum. Strong opinions engage viewers, often even if they don't agree with them. Fox News or MSNBC don't care if people are watching them because they love them or hate them, as long as they are watching.
History is story telling about a period of time for which we do not have easy access to all the information, nor full access to the participants and their motivations and self-conceptions. In addition, even with the benefit of hindsight, it is common to be unable to identify conclusively which possible elements of the story are the most significant (and as a corollary, which are cotemporal but irrelevant). Consequently, there are different ways to tell the story, and no "objective" set of rules to decide which to choose. Like all human story telling, history must come with a point of view that is critical in framing the elements used in the telling.
Now repeat everything I've just said, but substitute "news" for "history".
I'll do you one better: replace everything with "bullshit".
History is susceptible to narratives because anything that isn’t clear cut (slavery, holocaust, etc) is open to subjective interpretation.
It’s the conspiracy theorists that are the real wildcard in all of this. They will take the clear cut (slavery, holocaust) and make that open to subjective interpretation.
I tend to think of subjective interpretation as "That was good" or "This is bad".
But there's something required beforehand: defining what this or that is. It's not really a subjective process, but it certainly isn't objective either.
Before you can decide whether or not William the Conqueror's invasion of the British Isles was a good or a bad thing, you first need a description of how the invasion was carried out and "all" the consequences. But there is no "objective" or "clear cut" version of this. What do you include? What do you exclude?
Oh, I meant subjective in terms of your own perspective (informed by world view and life experiences). However you can fit the square pegs into your own round holes.
Regardless, I suppose the bigger point I want to make is the truth for non clear cut things is only as true as the number of people that co-sign it. If there’s simply more people on one side, welp, that’s that. That’s the truth, as far as we know.
Right or wrong or true or false are almost not even in the equation (enter the conspiracy theorist).
I don’t know, based on what I just said, what do you believe?
This reminds me of how historians trying to find out about the historical Jesus have come up with some (objective?) criteria about what to accept as true.
Examples:
* Multiple attestation. Do we have multiple independent sources telling us the same thing?
* Contextual credibility. Do people behave in a way that's plausible given their setting (the languages they spoke, what they knew at the time).
* Embarrassment. Is this detail actually inconvenient for the person relaying it (and not the sort of thing they'd make up)?
I just plucked it out of the air as a historical event that took place a long time ago and occurred in a culture with written history. It has no particular importance in the abstract (though the British and French do still seem to talk about it more than one would expect).
It feels like you just described war propaganda. The Americans lie one way, the Russians the other. The better question is why we are in a war over current events at all (to your point about the news): Left propaganda and Right propaganda, for some invisible war.
The low-hanging fruit, to me, is the fact that propaganda is profitable. ::shrugs::
Just one more war machine that got repurposed for civilian use.
While I don't doubt that some of it is, calling it propaganda implies a deliberateness to me.
Most of this stuff isn't lies. It's different groups of people focusing on different parts of the same events. Most of the time, when you get two competing narratives for an event, neither is actually dishonest; each is just a selection of events that tells a particular story of particular concern to one group. Usually the events all actually happened, but their importance and relatedness get rearranged.
> Just one more war machine that got repurposed for civilian use.
As early as 1928, Edward Bernays' book on PR and marketing was titled Propaganda.
Another interesting point is that the term propaganda comes from the Latin propaganda fide ("propagating the faith"), as in the Vatican's Congregatio de Propaganda Fide. Nowadays the same activity seems to just be called evangelization, in the Vatican and elsewhere.
I didn't read this (and I don't plan to), but:
I thought that the oldest human history was recorded orally, in songs, ballads, and epics, where the more dramatic things were, the better they could be remembered. So to call the incentive to be dramatic "increasing" is to be selective about which time frame you're talking about.
Isn't "ideological emotional-string pulling" what the original historian was after by writing a story that fit the current popular narrative within academia?
Wouldn't holding one fraudulent historian accountable be upholding the credibility of historians, rather than "despising" them?
And aren't you being the nihilist, in being so pessimistic about the intentions of people who agree that the story was fabricated for popularity?
Honest questions that I would be interested in hearing a response to.
Historians aren't hated here; rather, sensationalism at the expense of facts and logic is what's despised. Unfortunately, people lean towards sensationalism across media types and disciplines. Techies here are especially sensitive to click-bait and to articles devoid of evidence to back up their claims.
There's great synergy between critics of non-mainstream historians and the voting apparatus of HN which ensures that unpopular perspectives are rendered unreadable.
Australia's most famous historian, Charles Clark (called himself Manning Clark to be special) admitted before he died that he made things up, because he was more interested in telling a story fitting his own ideas, rather than reporting facts. They still name roads and such after the guy.
This is an exceptionally hot take. Manning was Charles Manning Hope Clark. You can verify this easily. His status in Australian history is mixed because of the strong left/right axis of history as a field in Australia. Manning Clark was unquestionably of the left. The Australian, and its tame philosopher-historian Gerard Henderson, make much of this, and of Manning Clark being awarded an Order of Lenin for history by the Soviets. Guess what? The Australian, Keith Windschuttle, and others reject Aboriginal "black armband" history and working-class history, and dislike Manning Clark immensely. They are unquestionably of the right.
I suspect you're partisan in this war. "Making things up" has been part of what historians do since Thucydides. We don't call it making things up but any time you see the word "infer" you can go there. History is complicated. Even documents don't tell the whole story. Do you think Cabinet Minutes record verbatim what everyone said?
Reading Windschuttle's "Fabrication of Aboriginal History" along with the heavily critical responses in "Whitewash" was a fascinating intellectual exercise. I went in quite neutral on the issue and came out convinced that, while there are some factual issues in Windschuttle's account, the thrust of his argument was correct and that Australian history was in deep trouble as a discipline. The essays in Whitewash show all the weaknesses described in the article above - they are filled with motivated reasoning by people who are very strongly constrained in the conclusions they are able to reach, both by ideology and by social pressure from their peers.
If you want another example of this, try "Haig's Command: A Reassessment" by Denis Winter.
Winter had access to documents held in the Australian War Memorial archives which strongly disagree with the official war history, because they're the input texts before the official edit. He argues hard that this was a whitewash of Haig's command by the establishment. The counter view is that it's entirely normal editing; that he's displaying the not-uncommon Australian/Canadian/New Zealand dislike of Haig, who used the colonial forces as shock troops and then played down their role in the turn-around to attrition warfare; and that he's probably also strongly leftist and anti-establishment. It's a great read, but unquestionably one-eyed.
I don't agree with you about Windschuttle, but I am also very biased against him by his continued association with the News Ltd school of journalism, which has a strongly rightist world view. I could believe he's a good historian. He's not in any sense a neutral arbiter; I think he would be amongst the first to deny it.
His primary vocal opponent in the press here in Oz (Robert Manne) is equally one-eyed and, at times, more passionate than accurate.
I'm not sure the practice of history in Oz is any better or worse than anywhere else, to be honest. I miss Eric Hobsbawm, who never hid his Marxist affiliations. Hobsbawm absolutely hated counterfactuals. I rather like them, but they're fiction, not history.
But let's be honest: regardless of all the work Haig did after the war to support veterans, militarily he wasn't the sharpest knife in the drawer. Sure, we argue from hindsight. Also true, the opposition wasn't much better. Also true, most WW1 commanders were better than we make them out to be today. And Haig was no Cadorna or Hötzendorf.
The big mistake Haig made, or the thing he didn't see, was IMHO how the war on the Western Front was actually fought and won. He wanted a breakthrough; instead he got a war of attrition, one the Entente was winning. Haig went from attempted breakthrough to winning without realizing why his strategy worked. That matters, because one can fight a war of attrition without sacrificing as many of one's own men as the Entente did.
How the Entente reacted to the Kaiserschlacht, though, was the right reaction: let the Germans leave their fortified positions, outrun their logistics, and sacrifice some ground until the enemy runs out of steam. Then counter-attack. That way you don't even have to break through the enemy's lines, as they are already abandoned and unmanned. No idea how much influence Haig had on this, though.
At least the authors you mentioned argue from primary sources and facts, as opposed to people relying on autobiographies and post-war memoirs of WW2 generals.
I haven't read anything by Windschuttle. But Charles Clark famously declared that he was pushing "my kind of history", which in his opinion was a poetic and spiritually uplifting narrative. Shortly before his death, he gave an interview on this topic and admitted he had fabricated many things, but he believed his motive was sufficient excuse for doing so. You can equivocate all you want about inferring and complication, but he actually admitted it.
The "You're probably a [insert anything]!" is one of those pesky little logical fallacies intended to divert attention from the subject.
So don't read him. Call a meeting of the historian club and have him expelled. But be prepared to find out that Windschuttle and all the other "real" historians committed grave faults, sins of omission, and partial readings, because everybody does it.
I'm not trying to divert; I think you've missed the point. Clark was truthful about his craft. You somehow seem to think that everyone else has no faults. Or am I misunderstanding, and you realise that all historians select, sample, and therefore distort?
Maybe the central problem is knowing how much he made up.
> The "You're probably a [insert anything]!" is one of those pesky little logical fallacies intended to divert attention from the subject.
It's shaming language meant to shut down a discussion, so that the predetermined conclusion held by the shamer cannot be disputed.
"If you think $FOO you're probably a Nazi/wife-beater/misogynist" isn't meant to make you rethink your argument, it's meant to shut you up.
After all, what would be the point of calling a Nazi a Nazi? They already know what they are and don't care.
But calling a well-intentioned person a Nazi does make them pause, even if only for self-reflection, and does make them shut up, because they don't want to be a Nazi or to appear as one.
> We don't call it making things up but any time you see the word "infer" you can go there.
If things are not presented correctly as being inferences, then that's the problem. I don't know if that's the case here, but there's a big difference between presenting your own imaginings as facts, and presenting them as theories.
>"making things up" has been part of what Historians do since Thucydides
Historians up until quite recently had nothing to do with the modern academic profession. They were paid writers who made tons of stuff up to sing the praises of whoever would pay them.
https://www.ted.com/talks/tyler_cowen_be_suspicious_of_simpl...