Avoiding Intellectual Phase Lock (books.google.com)
206 points by monort on Sept 30, 2019 | 74 comments



Nice. Feynman also described this in Cargo Cult Science (http://calteches.library.caltech.edu/51/2/CargoCult.htm):

> We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops and got an answer which we now know not to be quite right. It’s a little bit off, because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.

> Why didn’t they discover that the new number was higher right away? It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that. We’ve learned those tricks nowadays, and now we don’t have that kind of a disease.


> We’ve learned those tricks nowadays, and now we don’t have that kind of a disease.

This isn't the case with sufficiently controversial topics in the social sciences.


Oh, please. It's not the case with science in general. This effect arises from human nature, not from pre-quantum-theory science culture. And whatever lesson Feynman may have learned, academia (including science academia) can hardly claim to have universally learned it.


That would be because sufficiently controversial conclusions in the social sciences can upend social stability (endangering the ability of social science work to be performed) and serve as the justification for the infringement of human rights.

That's bad, if there was any confusion.

As another comment puts it, findings that buck conventional thought to the point that they can prompt irrevocable ramifications better have an "unimpeachable, reproducible path." And that's difficult when so much of social science is based on statistical modeling of phenomena that are multifaceted and difficult to control for, and not actually produced by deduction.


>That would be because sufficiently controversial conclusions in the social sciences can upend social stability (endangering the ability of social science work to be performed) and serve as the justification for the infringement of human rights.

This assumes that current, incorrect findings aren't already being used as justifications for infringement of human rights. For example: findings that relate to rehabilitation for crimes and how long we sentence someone to prison, or whether we sentence them to prison at all; findings related to forced institutionalization of someone deemed too mentally ill to be allowed their freedom; findings used to justify existing laws that see people put in prison for things they perhaps shouldn't be.

Demanding such an impeccable standard for changes, when no such standard was applied to the existing social structure, isn't justifiable.


> For example findings that might relate to rehabilitation for crimes and how long we sentence someone to prison, or if we even sentence someone to prison at all. Findings related to forced institutionalization of someone deemed too mentally ill to be allowed their freedom.

Before cargo cult science becomes a problem here, we need to actually accept and adopt evidence based management of these institutions. Currently policies in these areas are primarily driven by tradition, rhetoric and anecdotes, not scientific inquiry.


>This is assuming the current incorrect findings aren't being used as justifications for infringement of human rights currently.

...No, it doesn't assume that. The fact that circumstances are imperfect doesn't preclude a drastic change from making things worse.


There is ample evidence that:

- there is no physical embodiment of "free will"

- everything in the physical world is deterministic, although at the quantum level only "statistically deterministic with random sampling"

Thus, one cannot be said to "commit a crime," only that the particles in one's brain and one's body happened to be in a configuration and receive interactions with other stimuli such that those actions physically occurred.

We already know these things, but we ain't updating our justice system to account for this.


You're making the mistake of projecting the deterministic nature of "machine level" causality onto the ability of the scripting language that is human agency to interact with the world in ways that can be characterized as "good" or "bad." Which isn't to say that compassion for the sinner and acknowledgement of how in thrall we all are to circumstance shouldn't be encouraged. Just, high-level, there is choice, if only because the deterministic switches are black boxed.


>One example of the principle is this: If you’ve made up your mind to test a theory, or you want to explain some idea, you should always decide to publish it whichever way it comes out. If we only publish results of a certain kind, we can make the argument look good. We must publish both kinds of result.

Isn't non-publication of studies/research (for various reasons) a massive problem in a lot of different disciplines now? [1], for example.

[1] https://pediatrics.aappublications.org/content/138/3/e201602...


Mainly because reviewers won't accept uninteresting results. In many fields it's hard to differentiate a negative result from just a clumsy researcher. In computer science a negative result usually generates a flood of questions like "okay, but why would you do it like that, couldn't you try this or that etc." and the conclusion is that the researcher just didn't manage to get it to work because they didn't use the right techniques.

Essentially you'd have to argue that the method you tried should work, based on what we already know, but surprisingly it doesn't work. This type of argument is very hard to carry through.

The opposite is much more publication friendly: "nobody would have thought that this could ever work like this, but we now show that this novel idea does work very well". That's the type of thing for award-winning research.


Publication -- somewhere -- of interesting negative results would be invaluable specifically for the discussion of how the experiment went wrong. Experiment design is tricky and it's not something you get a lot of exposure to except through the experience of others you interact with.


Eliezer Yudkowsky talks about this in Inadequate Equilibria [1] when he analyzes academic research.

Short answer: you need researchers and grant makers to simultaneously decide to do what is not in their own best interest.

[1] https://equilibriabook.com/an-equilibrium-of-no-free-energy/


>We’ve learned those tricks nowadays, and now we don’t have that kind of a disease.

I'm sure the people doing those experiments also shared the conviction that they were smarter and more objective than people in past ages.


I read that (the quote) and thought, surely this is sarcasm.


Feynman is using very dry humor there, I'm pretty sure.

Or maybe I'm revealing my biased image of him...


But we are getting better, right? Despite undoubtedly repeating a lot of mistakes (and oftentimes being foolishly ignorant of that), some collective knowledge is surely being retained; otherwise we would not make progress in basically all sciences.


The fact that we're making progress in acquiring knowledge doesn't necessarily entail that we're getting better at being objective or unbiased. I see no reason to think that we are. (Nor am I particularly convinced that objectivity and lack of bias are major factors driving scientific progress, but that's probably another conversation.)


The very experiences that Feynman recounts have led to the practice of doing "blind" experiments in particle physics and other places.

You simply do your analysis with some extra, unknown factor added, and once you think you've done the best you can, the blinding is removed and you check what you actually measured.
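
Roughly like this (a toy sketch of the idea, not any collaboration's actual pipeline; all numbers are made up):

    import random

    def make_blinding_offset(scale):
        """Secret additive offset; in a real analysis it would be generated
        once and kept sealed until the analysis is frozen."""
        return random.uniform(-scale, scale)

    def blind(measurements, offset):
        return [m + offset for m in measurements]

    def analyze(measurements):
        """Stand-in for all the cuts, fits and cross-checks."""
        return sum(measurements) / len(measurements)

    # --- analysis phase: the analysts only ever see blinded numbers ---
    raw_data = [10.2, 9.9, 10.1, 10.4, 9.8]   # toy measurements
    offset = make_blinding_offset(scale=5.0)  # hidden from the analysts
    blinded_result = analyze(blind(raw_data, offset))

    # --- unblinding: done exactly once, after the analysis is frozen ---
    final_result = blinded_result - offset
    print(f"final (unblinded) result: {final_result:.2f}")

The point being that while the analysis is being tuned, nobody can tell whether the result is drifting toward or away from the "expected" value.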


Sure, but who knows if this has been beneficial to scientific progress or not? It's not like we can wind back time and do a comparative experiment.

Edit: To expand on this a bit, as it might sound flippant. People often talk about "bias" as if it's something that obviously ought to be eliminated. But in fact you can't get anywhere without biases. We only have the time and resources to explore a tiny portion of the hypothesis space in most domains.


> People often talk about "bias" as if it's something that obviously ought to be eliminated

Yeah, bias means our model of reality is distorted; one doesn't correspond to the other as well as it could. An example of bias, from https://en.wikipedia.org/wiki/Space_Shuttle_Challenger_disas... (appropriately enough)

> In the appendix, he argued that the estimates of reliability offered by NASA management were wildly unrealistic, differing as much as a thousandfold from the estimates of working engineers. "For a successful technology," he concluded, "reality must take precedence over public relations, for nature cannot be fooled."

That is bias, and it killed people and didn't do the US any favours. If by bias you mean something else, your post needed to be clearer.


I think that's an odd definition of bias. Bias is just a prior inclination to believe P rather than not P. P may or may not be true.

In any case, that's the kind of bias that's relevant to this discussion.


Okay, decent answer, upvoted. I think what I said still applies, if P = "shuttle is safe 9,999 launches in 10,000" then not(P) turns out to be true. I think that's bias by both our definitions.


Now you are basically saying "what if overtraining is good?"


Overtraining is bad by definition, like overcooking. But "don't overtrain" is about as useful a maxim as "don't overcook".

Naive bayes has higher bias than logistic regression. Is that a good or a bad thing? Depends.

(I don't actually see any analogy with overtraining.)
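
For what it's worth, a quick toy comparison of the two models (my own sketch using scikit-learn's synthetic data; nothing here comes from the book or the thread). Gaussian naive Bayes bakes in a conditional-independence assumption, which is exactly the kind of built-in bias in question; whether that helps or hurts depends on the data and how much of it you have:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    # Synthetic data with redundant (correlated) features, which violates
    # naive Bayes' independence assumption.
    X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                               n_redundant=10, random_state=0)

    for model in (GaussianNB(), LogisticRegression(max_iter=1000)):
        score = cross_val_score(model, X, y, cv=5).mean()
        print(type(model).__name__, round(score, 3))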


That's really interesting. How are the blinding factors constructed and applied?


> The fact that we're making progress in acquiring knowledge doesn't necessarily entail that we're getting better at being objective or unbiased.

Agreed. In your original comment however you wrote "smarter". Granted, it's a rather vague term, but I would say the type of progress we are making in sciences could easily qualify.


> otherwise we would not make progress in basically all sciences

Other than physics, chemistry and, to some extent, biology, how sure are we that we have made progress? Can we say that the CBT psychologists of today are any closer to a theory of human cognition than their Freudian predecessors? Can we say that modern dynamic stochastic general equilibrium models of macroeconomics do better at predicting long-term economic trends than the Keynesian models that preceded them? Even in physics we seem to be spinning our wheels, building larger and larger particle accelerators while waiting for a theoretical breakthrough that never seems to come.


We might be getting better, but it's not hard to find examples of this kind of thing. Nefarious or not, dropping "outliers" because they don't fit the narrative is common; it's one form of p-hacking.
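
A toy simulation of how that drags results toward an anchor (entirely made-up numbers, just to illustrate the mechanism):

    import random

    random.seed(0)
    TRUE_VALUE = 1.60       # what nature actually does
    ACCEPTED_VALUE = 1.50   # the established value everyone trusts
    NOISE = 0.10            # measurement error (standard deviation)
    CUTOFF = 0.15           # results further than this from the anchor look "suspect"

    measurements = [random.gauss(TRUE_VALUE, NOISE) for _ in range(10_000)]

    honest_mean = sum(measurements) / len(measurements)
    kept = [m for m in measurements if abs(m - ACCEPTED_VALUE) <= CUTOFF]
    filtered_mean = sum(kept) / len(kept)

    print(f"mean of all measurements:       {honest_mean:.3f}")    # ~1.60
    print(f"mean after dropping 'outliers': {filtered_mean:.3f}")  # pulled toward 1.50

The filtered average lands somewhere between the true value and the anchor, which is exactly the stepwise creep Feynman describes.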


Maybe we are not making progress.


:kappa:


I remember doing the Millikan oil drop experiment in a college physics lab. I got a charge value about 1/3 of what it should have been. Obviously, I was observing unpaired quarks ;'}


I did it in a lab exercise and I think we were 20% off, with the accepted value somewhere near the end of our error bar (inside or outside, I don't remember). It felt disappointing at the time, like all early lab exercises. Some later ones were much closer to the state of the art.


Interesting, I did so as well, back in - I think - 1990 or so. Did it together with a fellow student, we both reported seeing a quark, but did not get the Nobel prize;-)


I remember reading this a decade or two ago, but I never could find such a plot to confirm this. Fortunately, someone generated one here back in 2016: https://hsm.stackexchange.com/questions/264/timeline-of-meas...


This kind of conservatism is actually pretty important. This isn't engineering, where you need to get that bridge built so people can use it; you're trying to get at some "objective" "truth". If your estimate is way off from someone else's, there's a pretty good chance you're wrong. After all, the stepwise refinement did converge.

The alternative leads to things like the cold fusion debacle. There are still people working on it, but the ones I know know that they are regarded as cranks (at least for their CF work) and are super careful to try to discount any results seemingly contradictory to existing physics. Which is correct: another CF claim had better have an unimpeachable, reproducible path.


> We’ve learned those tricks nowadays, and now we don’t have that kind of a disease.

That’s very optimistic. If nowhere else, polling has a huge herding problem, where outliers are dropped when they are too far off the consensus.


I'm pretty certain that's satire, or would be satire if Feynman were around to see the state of modern science (interesting things off-the-beaten-path were being done in his day).


Isn't that also an appropriately Bayesian approach to new evidence?


No, because the Bayesian conditioning ends up being used twice. If you and a friend do an experiment independently, and he gets 10 and you later get 12, you might justifiably think the answer is closer to 11.

But you shouldn’t publish that you got 11. Because then somebody else will see that the measurements were 10 and 11, and think the true answer is closer to 10.5...


The correct approach would be to say something like "Previous experiments reported X, we observed X + Y, therefore the real answer is probably around X + Y/10." But it's very important to state the observation of X + Y, not X + Y/10, otherwise you'll take much too long to update to the correct value!
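
A small sketch of that bookkeeping (my own made-up numbers, using standard inverse-variance weighting of two Gaussian estimates): if the prior consensus has a 3x smaller error bar than the new measurement, it carries roughly 9x the weight, so the belief moves by about Y/10 while the published value stays at X + Y.

    def combine(estimates):
        """Inverse-variance (precision) weighted combination of
        (value, standard_error) pairs."""
        weights = [1.0 / se ** 2 for _, se in estimates]
        total = sum(weights)
        mean = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
        return mean, total ** -0.5

    prior_consensus = (10.0, 0.1)   # "X": previous experiments, tight error bar
    new_measurement = (12.0, 0.3)   # "X + Y": our raw result, looser error bar

    belief, err = combine([prior_consensus, new_measurement])
    print(f"publish the raw measurement: {new_measurement[0]} +/- {new_measurement[1]}")
    print(f"what to believe afterwards:  {belief:.2f} +/- {err:.2f}")  # ~10.20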


No, it isn't.

The appropriate approach is to accept the evidence, and correct your priors based on it, even if it's not good enough to believe. It's not to fiddle with the evidence until it's something you believe is correct.

Of course, it's much easier said than done. I don't think any group of scientists is safe from repeating this.


I was kind of wondering about this after reading the passage.

On the one hand the traditional Bayesian response is something like "yes, we're making our prior assumptions explicit and then incorporating that into a formal inferential paradigm."

However, this prior is being used to bias the estimates, rather than to avoid the bias. That is, it would be akin to Dunnington taking the current estimates of e/m and using that to shape any new estimates from data. The argument is then that at least he was being explicit about his biases and how they are used to make an estimate.

This has always seemed backwards to me, though. It seems what is more defensible is to use a formal theory about how prior biases affect estimates, and then to leverage that theory to minimize biases. This is basically the idea of the reference prior, to estimate things such that any role of the prior is minimized in an information-theoretic sense. This seems more analogous to what Dunnington was doing.

I really wish reference priors were more widespread, although they can be computationally pretty hefty. It's one of my hopes that quantum computing might make these types of approaches more feasible in general.
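
For the simplest case, a minimal sketch (my own toy example): in one-parameter models the reference prior coincides with the Jeffreys prior, e.g. Beta(1/2, 1/2) for a Bernoulli proportion, so the update stays in closed form.

    def jeffreys_posterior(successes, trials):
        """Posterior for a Bernoulli proportion under the Jeffreys/reference
        prior Beta(1/2, 1/2): Beta(s + 1/2, n - s + 1/2)."""
        a = successes + 0.5
        b = trials - successes + 0.5
        return a, b, a / (a + b)

    a, b, mean = jeffreys_posterior(successes=7, trials=10)
    print(f"posterior: Beta({a}, {b}), mean {mean:.3f}")  # Beta(7.5, 3.5), mean ~0.682

The computational burden shows up once you leave these conjugate cases and have to derive the prior numerically for multi-parameter models.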


Being a good Bayesian would entail taking the evidence as-is, only adjusting your estimate of the "true" value after the fact taking other measurements into account.


What evidence do you have that this sort of thing is not occurring nowadays?


Well, the GP is quoting Feynman, so I'm not sure he's endorsing that part of the quotation. FWIW I think it's another healthy warning sign that we shouldn't receive Feynman's pronouncements about the scientific method too uncritically. However, it may also be that the situation is now actually worse than when Feynman wrote this.


Sounds like the author is describing techniques to avoid Kahneman and Tversky's anchoring effect. Avoiding one's own cognitive biases is important in debugging too; I mutter "listen to the system" as I read logs and error messages, to try to avoid ignoring output that contradicts my preconceptions about root causes.


I use "what does IT think I'm asking it to do?"...


To summarize it for the curious in a hurry: intellectual phase lock is the tendency for people in science/intellectual-endeavors to publish/assert results that are not too far from what other people are getting. With this tendency, it can take a while for a (science) community to drift from a fashionable, wrong belief to a more correct belief. thijser's comment[0] is a good example of intellectual phase lock.

[0]: https://news.ycombinator.com/item?id=21113365


It may be weird to some, but there are a lot of intellectual phase locks today.

A major cause is publish-or-perish. Another is expert-group bias, which goes something like: "Experts in astrology agree that astrology works well."

We can spot these phase locks by comparing theoretical predictions with actual real-world results. I've also noticed that some results are altered afterwards to fit the model.

Another signal is that good (and friendly) criticism is met with attacks, usually personal ones. This often happens when two different experts meet and their respective expertise leads them to different conclusions.

I've noticed that these conflicts are hidden by the peer-review system. Each specialisation is controlled by its own experts, which means the different experts won't touch each other's areas much; they stay in their own territory to avoid conflict, or don't even widely publish their conflicting results.


In programming, so many major decisions are phase locked, because not that many programmers have the time to meaningfully test out new technologies, so many of us are dependent on hearsay to choose what's "best."


IIRC similar care was taken for the gravitational waves paper. They had the measurement team send multiple data sets of observations to the team writing the paper, but only one of them was correct.


As a side note: if you like collecting notes like this but find yourself frustrated that you can't copy-paste from the source, I recommend you download Tesseract, take a screenshot, and run the image through it to extract the text. Tesseract runs really well on these kinds of things and saves a lot of manual typing.
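
For example, via the pytesseract wrapper (a quick sketch; "screenshot.png" is a placeholder filename, and the Tesseract binary has to be installed separately):

    from PIL import Image           # pip install pillow pytesseract
    import pytesseract

    # OCR the screenshot and dump the recognized text to stdout
    text = pytesseract.image_to_string(Image.open("screenshot.png"))
    print(text)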


More commonly called bias, I think; calling it "intellectual phase lock" seems like a weird flex.


The book was written in 1987, which is before Kahneman and Tversky's experiments were popularized (at least outside of academic psychology).

That said, bias is a general term, and "intellectual phase lock" as described here is a more specific example. The modern terms would probably be "anchoring", "confirmation bias", "courtesy bias", which slice up the space in a slightly different way.


It is a particular type of bias with an identifiable cause and specific remedies -- remedies that, despite what this excerpt might suggest, are widely known and practiced. On the other hand, Alvarez's term is idiosyncratic (possibly inspired by his WWII work on radar?) and not, apparently, widely used. Related terms: doctrine, received wisdom?


"Most people are concerned that someone might cheat them; the scientist is even more concerned that he might cheat himself."


Looks like the two good comments were taken (Feynman, Kahneman). I'll just leave a couple of quotes I thought of when I read a few paragraphs further, about the hierarchy of ability in math and how upsetting it can be to discover how far it extends above you when you thought you were pretty high up.

>> [Pascal Costanza] Why is it that programmers always seem to think that the rest of the world is stupid?

> Because they are autodidacts. The main purpose of higher education and making all the smartest kids from one school come together with all the smartest kids from other schools, recursively, is to show every smart kid everywhere that they are not the smartest kid around, that no matter how smart they are, they are not equally smart at everything even though they were just that to begin with, and there will therefore always be smarter kids, if nothing else, than at something other than they are smart at. If you take a smart kid out of this system, reward him with lots of money that he could never make otherwise, reward him with control over machines that journalists are morbidly afraid of and make the entire population fear second-hand, and prevent him from ever meeting smarter people than himself, he will have no recourse but to believe that he /is/ smarter than everybody else. Educate him properly and force him to reach the point of intellectual exhaustion and failure where there is no other route to success than to ask for help, and he will gain a profound respect for other people. Many programmers act like they are morbidly afraid of being discovered to be less smart than they think they are, and many of them respond with extreme hostility on Usenet precisely because they get a glimpse of their own limitations. To people whose entire life has been about being in control, loss of control is actually a very good reason to panic.

–– Erik Naggum, 2004 https://www.xach.com/naggum/articles/3284144796180060KL2065E...

> Fermi and von Neumann overlapped. They collaborated on problems of Taylor instabilities and they wrote a report. When Fermi went back to Chicago after that work he called in his very close collaborator, namely Herbert Anderson, a young Ph.D. student at Columbia, a collaboration that began from Fermi's very first days at Columbia and lasted up until the very last moment. Herb was an experimental physicist. (If you want to know about Fermi in great detail, you would do well to interview Herbert Anderson.) But, at any rate, when Fermi got back he called in Herb Anderson to his office and he said, "You know, Herb, how much faster I am in thinking than you are. That is how much faster von Neumann is compared to me."

-- Relayed by Nick Metropolis

I got the second one from https://infoproc.blogspot.com/2012/03/differences-are-enormo... which also quotes this submission a bit further on; no wonder it was so familiar and these quotes came to mind.


Man, that Naggum quote hits home. I was a much bigger asshole early in life before I went to work at Microsoft. Usually one of, if not the, smartest kids in the room. Go to work, usually the same thing. I had to be insufferable at times. Then off to Microsoft I go to get thoroughly humbled. Much like I didn't know what "rich" really meant until surrounded by millionaires, I didn't know what "really smart" was, either. I still consider myself to be "pretty smart", but I now know that I am a long way from anything resembling brilliant, and always will be. So how about I tone down that attitude, eh?

Lesson #2 was that those super-smart folks I worked with had absolutely no problem saying, "I have no idea what you're talking about, could you explain it?" Probably how they got so super-smart.

So though I learned a lot about the craft in my time at Microsoft, I'd dare say I learned a little bit about how to be a more decent human, too.


Very good! I got an early education in college when I took Real Analysis in my first year (which most programmers do not take) and it kicked my ass so hard I still have impostor syndrome. But I don't make the error that I'm the smartest person around anymore.


Yeah, same here. As embarrassing as it is to admit, I think a lot of us grew up thinking we are The Smartest People Ever because it seems like so much of your self-perception is solidified in your early teenage years.

It takes a whole lot to shake that. If you see a few pieces of evidence that other people are smarter, it's easy to dismiss. However, if you regularly surround yourself with people who can run circles around you and provide so much evidence that you can't ignore it, you're eventually forced to reevaluate yourself.


Would he have used the same approach if there were fierce competition in his field?


Probably; it only costs a couple of days to dismantle and measure, and you get a very valuable source of confidence.

Of course, if the machinist had messed up and made an angle outside of the allowed range, that would probably have meant a few years down the drain (but I expect he checked before putting it in, just not too closely).


Ok, but with competition he probably would have had more pressure to publish sooner and more often.


It's remarkable that in this case he was able to obscure just a single piece of information to avoid phase lock. That's rarely the case. If you wanted to avoid phase lock for most topics you'd have to completely cut yourself off from all the literature in the field. I can't think of a way to do this in any of my fields: robotics, programming languages, or algorithms.


Can intellectual phase lock apply culturally, and if so, what does it look like? For example, I wonder what techniques can be used to avoid it.


>Can intellectual phase lock apply culturally, and if so, what does it look like?

It certainly does. It looks like what appears to be a majority of the population bullying others into accepting their viewpoints. This can be because only a single political party exists in the country (China), because the culture is populist and hence unstable leading to conservatism in individuals (Europe), or because as soon as you voice dissent with the current cultural direction a lynch mob materializes and tries to kill you (modern US, but has happened a few times in the past).

>For example, I wonder what techniques can be used to avoid it.

You need a healthy economy, a liberty-minded society, a strong Constitution with freedom of speech, equal protection, and privacy rights (that doesn't have "except when we don't feel like it" clauses like most nations do), and a lack of places and resources for bullies and those in favor of that phase lock to gain power.

In other cases, you need a good pseudonym or (where culture lock is at a despotic level) a decent pair of running shoes.


I'd say absolutely. That's why new generations tend to do things differently: they start from a somewhat cleaner slate.


Yes, so should we break out of our cultural intellectual phase lockings the way we should break out of our scientific ones?


You're free to break from it, have a breakthrough and share it with the world. On a larger scale it's a drop in the bucket though. You'd have to change how and what things are taught and sometimes the cultural intellectual phase might unlock to a worse phase. Best is to let these things happen on their own.


What does "let things happen on their own" mean in this context? One person can pivot a culture, just as one researcher can pivot a field. What's the difference?


What I mean is not try to "break" the establishment but let it "break" itself. But not in the sense that establishment is bad but in the sense that some practices have crystallized and nobody challenges them.


Of course. Fashion, style, trends are everywhere.

The alternative is to actually evaluate claims objectively, and find out what you like despite what others think. Both of those things are hard to do.


This is a great excerpt.



