If one has been labeled 'smart' at one point or another (maybe when very young), one may also begin to identify with that label. The label becomes the identity. And, at that point, it becomes quite difficult to accept 'being wrong' because it's an attack on one's identity and reputation.
This seems to have become worse in the Internet-age, where a lot of 'smart' people have had their reputations branded into the permanent record. So much arrogance, rudeness, vitriol, etc. in order to 'win' at being 'smart'.
Ironically, I have recently discovered that it's often easier to channel Socrates and just act stupid. It's way too hard to be smart, these days. And, given that there's so much new shit to learn, who really wants to be?
Additionally, once branded "smart," people (the young especially) want to avoid looking "stupid" so much that they shy away from learning new things in order not to look incompetent/inexperienced.
So much this. I recently posted a question on reddit about a technology stack I've been using for 10+ years (a long while), because there were specific aspects of the stack I had always avoided due to the way it forces you to work (and later versions of the tooling on top of the stack moved away from that approach, so my interpretation of it wasn't completely without merit).
So a client of a client asked me to help out with a problem they were having. The problem is, their tech usage falls smack dab into the hole in my experience created by my earlier decision not to use the stack in this way.
So I ask a fairly specific question about it on reddit. The top voted comment was a snarky reply in which the respondent implied I was the one who created the initial problem and because I don't have the experience necessary, I should hire a freelancer with more experience in the tech stack.
The issue is that the person in question was so far off base about what the problem could potentially be that I ended up solving it using exactly the methods they claimed were bad assumptions on my part. This poster colored the entire question such that I got pretty much no good input out of it, and instead simply had to rely on what I did know about this aspect of the tech stack to piece together a solution.
It's a bit long-winded, but I guess the point is that people are amazingly short-sighted with respect to asking questions. The poster in question took my willingness to ask a question as an indication that I had no idea what I was doing.
And it's this propensity that causes everyone to put forth such a persona of arrogance. If my reputation is linked in any way to my financial success, I cannot be seen asking such questions publicly. That's just reality.
Colloquially, "channeling" X means behaving like X, or in a manner inspired by X, in this case, where X == Socrates.
This meaning is a natural outgrowth of the term's original meaning, a purported means of communicating with the dead: a psychic medium is supposedly possessed by a spirit -- i.e. the medium becomes the channel for it -- allowing it to speak through his/her body:
The other incarnation of non-physical mediumship is a form of channeling in which the channeler goes into a trance, or "leaves their body". He or she allows the spirit-person to borrow his/her body, who then talks through them.[22] When in a trance the medium seems to come under the control of another personality, purportedly the spirit of a departed soul, and a genuine medium undoubtedly believes the 'control' to be a spirit entity.
In reply to your edit:
I think ap22213 is referring to applying the Socratic method[1], although it might be a more obscure reference that goes over my head, as I am not well-read on Socrates.
But, Socrates is a guy from Plato's books who is frustrated by all the smart people around him. They all seem really smart to Socrates, and Socrates wants to be smart, like them. So, he goes around asking all of these smart people lots of stupid questions. But, it turns out that they can't answer a lot of his stupid questions.
It may not be a good strategy though, because eventually they kill him, because he's an annoyance.
I assume you're being ironic. It's clear to me that Socrates did not for a moment think that all those other people around him were smart--more precisely, he didn't think they were as smart as they thought they were.
The Socratic method means something more specific than that. Socrates' basic MO was as follows:
(1) Find a thesis that the person you're talking to appears to believe, and get him to commit to it.
(2) Get the person to accept a few apparently innocuous assumptions.
(3) Get the person to realize, by repeated questions and answers, that the apparently innocuous assumptions lead to conclusions that contradict the thesis the person originally committed to.
This could be construed as "acting stupid" to start with, but if you're successful with the method, it will be obvious by the end that you are smarter than the person you're talking to. Which can be frustrating for that person, and since Socrates apparently did it to a lot of people, they eventually got fed up, as ap22213 says.
Labelling it as "smarter" is a bit of a misnomer; they may very well be more knowledgeable and experienced than you in the field you're asking questions about.
You can question someone's assumptions without being 'smarter' than them.
they may very well be more knowledgeable and experienced than you in the field you're asking questions about.
But if they are, then the method will break down at either step 2 or step 3, because either the person will see that the assumptions you're trying to get them to accept would contradict their thesis, or they will be able to use their superior knowledge to convince you that the assumptions don't actually contradict their thesis after all.
You can question someone's assumptions without being 'smarter' than them.
Yes, but the method I described is more specific than just "questioning assumptions". It's a specific tactic with a specific objective: to force the person you're talking to to admit that a thesis they had believed up to that point is false, by making them prove it false for themselves. But to use this method successfully, you have to pick your targets: you don't use it unless you already know, in advance, that step 3 can be reached, which means you already know, in advance, that the person will not be able to head you off during the process. That knowledge is what makes you smarter than the other person.
It's worth noting, btw, that most of what we know about Socrates, and practically all of what we know about how he used this method, comes from Plato's dialogues, which portray Socrates as having a very high success rate with his method. We don't know how accurately Plato represented that aspect of things; the real Socrates might not have been as consistently successful at using the method as the character Socrates in the dialogues. My comments are really about the character Socrates; they are only about the real Socrates to the extent that the character accurately represents what he did and how well he did it.
Well, obviously you're right because you hold the opinion. If someone else held that opinion then it would obviously be wrong, because then you would no longer hold that opinion, thereby nullifying the validity of said opinion.
People who are never wrong are not smart, they are "smartass". Same goes for people who "know it all".
Being successfully smart means you recognize you don't know everything, but you also recognize you know quite a lot. And for things that you do know you are always looking for why these might be debunked. And for things you do not know, you know you can learn and become pretty confident you know them very quickly, to grasp concepts somewhat faster than other people.
Smart people actively seek evidence that they are wrong - finding evidence that you're right doesn't prove anything. A single piece of evidence that you're wrong shows everything.
That's sort of a no true scotsman argument. People can have astonishing amounts of brainpower and still do very little critical thinking on their own beliefs or others' arguments they disagree with.
The point isn't about what all smart people do or don't do. It's the power of counterevidence.
There was a great Veritasium video on this a while back. The schtick was that Derek Muller posed a puzzle to people as a series of numbers. They had to figure out the rule governing the series by proposing series of their own.
The point was that finding other sequences that matched the rule didn't do much. Finding a sequence which failed the rule, that is, one that disproved what they thought the rule was, was much more informative.
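The game can be sketched in a few lines of Python. The hidden rule and the guesses below are illustrative, not necessarily the exact ones from the video, but they show why confirming tests are so much weaker than a single disconfirming one:

```python
# Hidden rule (illustrative): the numbers are strictly increasing.
def fits_rule(seq):
    return all(a < b for a, b in zip(seq, seq[1:]))

# Suppose your hypothesis is "each number doubles the previous one".
# Every confirming test passes, yet none of them distinguishes your
# narrow hypothesis from the (much broader) hidden rule:
for guess in [(2, 4, 8), (3, 6, 12), (10, 20, 40)]:
    assert fits_rule(guess)

# One disconfirming test is far more informative: (1, 2, 3) violates
# the "doubling" hypothesis but still fits the hidden rule, so the
# hypothesis is falsified on the spot.
assert fits_rule((1, 2, 3))
assert not fits_rule((3, 2, 1))  # and a sequence that truly fails
```

You can propose confirming sequences forever without learning anything; the only way to narrow down the rule is to propose sequences you expect to fail.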
If you've got lots of brainpower but never push the envelope of your beliefs, that brainpower's not going to be of much use actually expanding your understanding.
Which is why really smart people try to break things: software, hardware, systems of understanding. By pushing them beyond what they feel their limits are and seeing what happens.
While I agree that trying to actively break one's theories is a thing that should be done by smart people, I think that in the general sense of smart meaning "possessing raw intelligence", it is not something that I can say that smart people do in general, nor is it something you have to be that smart to do.
Intelligence and education greatly increase the number and variety of "things you can think", exponentially so. Unfortunately, the exponentially vast bulk of such new thoughts are also wrong. Many of them are grossly and obviously so (the vast bulk are new ideas that are syntactically correct gibbering nonsense), but there's a much more dangerous and still very large fringe of wrong ideas that require one to dig into reality to determine are wrong.

And what I often find myself calling something like the Prime Danger of Intellectualism is that our education system is happy to introduce you into that vast space of new ideas, but, while flirting for a while with ways of determining their truth, a lot of intellectual tradition in the last 50 years or so has given up on it (in many cases wearing this surrender as a badge of pride!), leaving many modern intellectuals adrift in a massive sea of ideas with no compass, no sextant, and no map. It's a sad and frankly dangerous state of affairs, and it is the default in the intellectual world right now, unfortunately. I know that's a bold statement, but once you learn what to look for, you, alas, see it everywhere.
I'll condition my definition as "those exhibiting smartness", not merely those possessed of an innate mental capacity and facility.
Knowledge is a container. You can be knowledgeable (full of information) and either smart or not, though I'd consider someone filled with facts and the lack of capacity to connect and relate them sensibly not particularly knowledgeable.
Intelligence is somewhere between a talent and a skill. It's trainable, but requires an innate foundation. #INCLUDE GLADWELL_10000_HOURS
Wise is a mix of both. You need the information (much of it acquired through experience, or at least direct observation), and the capability to use it. And it's the wise who will almost always exhibit the trait of questioning their most closely held facts, beliefs, understandings, and mechanisms.
I've been reading up on Hyman G. Rickover, father of the US nuclear navy. His approach to technology, organizations, training, and more, are very much what I consider to be the workings of an absolutely first-rate mind. I'd like to find a good biography of him.
>Smart people actively seek evidence that they are wrong - finding evidence that you're right doesn't prove anything. A single piece of evidence that you're wrong shows everything.
This only holds if you define "smart people" to be this way. Which is kind of a tautology.
If you use the common meaning of the word, or even just check what other people consider "smart", you'll find that they don't always fit your description.
I think to be practical it's best to see who people label as smart, and then note what attributes they have. Instead of starting from the attributes that you think a smart person should have (because then you just made your own definition, and it doesn't describe what most people see).
Seeking evidence that you are wrong is the same as seeking evidence that you are right. If nothing comes up to prove that you are wrong, that's stronger evidence that you are right.
But neither this, nor what the author is talking about is "the curse of smart people".
Conspiracy theorists, evolution deniers, religious fundamentalists, alien abductees and so on are examples of people getting together, and successfully rationalizing their belief in some quite twisted ideas. We don't call such groups "smart people", typically. They're not necessarily stupid either. They're just people.
So it's just the "curse" of being a human, I guess.
Whether we use our mind superpowers for good, or in order to fool ourselves and the people around us is completely independent of one's smartness, and has more to do with wisdom.
Many people believe that if you can wrap a convincing narrative around a set of selected facts, the narrative must be truth itself. Stories posing as insight are all around you.
Experience leads to wisdom, and wisdom teaches us to be more open-minded and less sure in the stories we hear or tell ourselves.
The curse, specifically, is that the smart people make their mistaken theories sound good, build them up to the point that it's hard to break them without being equally smart AND also primed to look for subtle glitches in reasoning or technique.
To me, intelligence is like a fast truck with a lot of cargo space: you can use it to help other people move into a nicer home, you can use it to make great trips, you can run it into a wall and kill yourself much better than you could with a slower truck, and you can load it up with children and drive off a bridge. Intelligence is to stupidity what a power tool is to a screwdriver, you can do more powerful good and bad things with it, but the fact that it's more powerful says nothing about how it will be used. Sadly.
As someone who secretly fears they suffer from exactly what the blog author stated, I can tell you unequivocally that your interpretation is not what I fear.
Plus, I've noticed that with some people, their great knowledge and/or intelligence concerning a particular issue, combined with confidence (and maybe a dash of social 'deafness'), can come across as arrogance. Sometimes it took a while to figure out that they were open to correction, and hella smart.
I would've missed out on their insight had I applied a heuristic as simple as "arrogant people aren't smart".
You need a certain level of arrogance to think that you are right despite everybody thinking you are wrong. If you are treading new ground, you need it to succeed. For example, all those startup creators are arrogant. They think their ideas will succeed despite knowing that they have a 99% chance of not succeeding.
Your reaction is an irritating trait of our Western societies. Almost all our hero figures, all our geniuses, are arrogant assholes. Almost everything on TV is about a guy winning against all odds, our business geniuses all had crazy business plans, ... Yet we despise the same people if their plan does not succeed.
Take a guy like Zuckerberg. Drop out of school to build your startup with borderline illegal business tactics, and that's genius. Do the same and fail, and you are an obvious loser who deserved what he got and should be happy not to be in jail. The mixed signals society is sending are annoying.
Being a parent to two kids, I can definitely see the 'evolution of rationalization' taking place as kids grow older. Your child will feel angry about certain events, or about other people and so forth, for whatever reason, but usually, it's simply because they are hungry or feeling tired.
What's interesting is when you observe how they are trying to find out reasons why they are upset, when basically, it's their instinct/emotion that caused them to get upset. So you will notice the lack of consistency or relevance of their evidence in backing up their argument.
But this changes gradually as they grow older. They become more logical and more consistent. I think this is what most adult, or smart individuals learn to do as well.
It amazes me every day to find myself rationalizing so many things and, at the same time, persuading others using the same logic, while the other part of my brain secretly acknowledges/recognizes that I'm making it all sound logical and trying to get others' approval for my own sense of security.
Your observation of kids blaming events or people for their 'bad' feeling, when they are actually hungry or tired is spot on. However this kind of causal relationship exists in far more subtle and extensive ways and stays operational all your life! Examples from research: judges give harsher sentences right before lunch vs. right after lunch. You are less willing to help others when you are cold. The list goes on and on.
Your kids becoming 'more logical and more consistent' is only them becoming so good at rationalizing and explaining their rationalization that you believe them. In fact it is still being hungry that made them lash out.
I am absolutely terrible at noticing when I rationalize things in this way, and I think a big cause of it is that I spend most of my day stuck in a world of ideas and abstractions that keep me from even feeling hunger sometimes.
Working on balancing that out is what I consider one of my primary challenges as I enter my thirties. I hear meditation can help, but does anyone here have any other tips?
> Your child will feel angry about certain events, or about other people and so forth, for whatever reason, but usually, it's simply because they are hungry or feeling tired.
It's amazing how much this affects our own behavior as adults, too.
It is my own recognition that I will not be correct or make right decisions that leads me towards faith in long-term knowledge building: on any given day I will most likely make a bad decision, but in the aggregate of many days, like any other practiced skill, I will be able to recognize mistakes, correct course, and develop a way to "practice out" or "systematize out" a bad habit.
This is an interesting angle on something I've noticed, and perhaps a better explanation of it, but I'll put my observation in the ring in case it offers any other insights: smart people can end up developing faulty world views because they are good at arguing. Every time they win an argument, they convince themselves more and more of the infallibility of their world view, when in fact all that happened was that they encountered yet another person who couldn't spot the flaw in it.
An example of this is extreme libertarianism. For a long time, I had convinced myself that it was totally sound because if I argued politics with most people I knew, I could shut them down with logic and "prove" my case. The logic and robot-like consistency of it just reassured me that I was unassailably correct. It took me getting to the point where I realized that I didn't truly believe some of my own arguments, despite the fact that I had successfully made them countless times, to snap out of it and realize that my views, as rigidly consistent as they were, were probably disastrous from a public policy standpoint. I rationalized a position to others, which ultimately rationalized it to me as well. Don't do that.
> "I didn't truly believe some of my own arguments"
One of my core philosophies of argument is that I always argue honestly. That is, I give my real reasons for believing things (as well as I understand them) even if I know of other arguments that sound persuasive but that I don't believe as strongly.
I find this means I lose arguments slightly more often (which is a good thing; I end up changing my mind), but I also win handily quite a bit more often. I see far fewer stalemated arguments, and far fewer arguments that devolve into quoting my side's famous thinkers at people who quote their side's famous thinkers back.
It was difficult at first, but with practice it's become fairly easy.
I think the problem is that you never encountered anyone who actually understood the best reasons for the policies you opposed, or even had a model (however ad hoc) in which the relevant benefits could be expressed.
I see that kind of thing all the time: I heard bad arguments against the minimum wage for years before I was even aware of a purported disemployment effect. Ditto for the relationship between traditional morality and STD/population growth concerns.
I don't think you can blame yourself for that. To the extent that an idea makes sense, there should be someone that appreciates it, and can convey its logic. To the extent that your acquaintances were not those people (and did not subject their beliefs to the same checks you did), neither did they have any business being more confident than you in their views.
Sure, your ideas may have been disastrous as policy, but how could you have known that? You can't update on evidence you haven't gotten -- let alone evidence which resists revelation by interested adversaries!
The issue that hamburglar seemed to have is that he was relying too heavily on social proof. Something like "my ideas are the best out of the people I know, so I should bet heavily on them" does not work well when your friends do not cover most, if not all, of the relevant solution space with well-thought-out arguments.
That's why I said that hamburglar might not have a basis for being confident in an absolute sense, but definitely had grounds for being more confident than people who can't even spot a flaw in the argument.
(And FWIW, "trusting social proof", as the term is used, would mean deferring to the group beliefs; I think you mean to say s/he was overestimating the peer group's representativeness. But even then, I think the group was representative of the population at large: very few people possess the kind of understanding to do that, nor hold themselves to the standard that would make them seek it.)
> And FWIW, "trusting social proof", as the term is used, would mean deferring to the group beliefs;
I did not know the phrase had a commonly held definition. I meant more along the lines that hamburglar's heuristic for deciding when he was more likely to be correct was overly dependent on not just the people he happened to be around, but on people in general.
> But even then, I think the group was representative of the population at large: very few people possess the kind of understanding to do that, nor hold themselves to the standard that would make them seek it.
My comment was not about a group representing the population, but rather representative of the possible solutions weighted by relevance or similar metric.
> I think the problem is that you never encountered anyone who actually understood the best reasons for the policies you opposed,
hamburglar should have used (and should use) heuristics that do not depend on encountering the right people to gather and judge evidence, when possible.
>I did not know the phrase had a commonly held definition.
"Social proof" refers to the general phenomenon of people forming opinions in alignment with what they observe others believing. I understand what you meant, but it's confusing to frame that as a social proof issue, given its common usage.
>My comment was not about a group representing the population, but rather representative of the possible solutions weighted by relevance or similar metric.
I know, but that's where it's wrong: the group was representative of the typical arguments and reasoning offered regarding how to model government policy. Most people lack the understanding to articulate what's wrong with unfamiliar or "absurd" proposals; casting his net wider wouldn't have helped much, even in the internet age.
>hamburglar should have/be use heuristics that do not depend on encountering the right people to gather and judge evidence when possible.
Then what should they depend on? Again, you can't update on evidence that even interested adversaries can't find! At most, it means you should lower your confidences all around, but even then, the people he argued with should have lowered them even further, for the very same reasons.
This reminds me of a story about an old composer, John Cage.
He's notorious for some really bizarre performances. For instance, he once sat for four minutes in front of an enormous crowd only to stand up and leave before making any sounds at all. This action, as he intended, took advantage of people's expectation to listen while in the concert hall. It built up a sense of anxiety, as he walked in, sat at the organ, and flipped pages atop the music stand. But then, suddenly, there was silence. And those with open ears were forced to hear minute sounds in the room, rather than the presupposed "music".
On another occasion, he composed what ideally would cause expert musicians to sound amateurish - wretched, actually. But after many trials with different orchestral branches, he became overwhelmed with failure. These people, having been trained for excellence, literally could not play nonsense. Any sheet music you'd give them would result in reasonably coherent output.
Like the smart people of this article, these musicians rationalized Cage's music, no matter how nonsensical. They could not play bad music.
Not just impostor syndrome, but the willingness to see what is actually there.
I think this is why art (particularly drawing classes) can be so hard for most people starting out at a later age. One of my teachers would constantly say 'don't draw an eye - draw what you see'. While well baked into our "memory" and knowledge sphere, our mental model of what is an eye or a human head isn't actually what is there.
See also "Gorillas in our midst: sustained inattentional blindness for dynamic events"
If you draw eyes using white paint, they will always look wrong until you acknowledge that the actual appearance of an eye is almost always some subtly colored shade of gray.
Most of the visible part of the eye isn't the eyeball, it's the iris. Take a look at some photographs: the iris and pupil fill more than the entire vertical height of the eye, and about half of the horizontal width. Given the shape of the eye, that's a majority of the area.
Take a photograph, properly color correct it, and see what color the "white" actually is. It will vary a lot, depending on the color of light falling on the eye, and be tinged by the blood vessels in the eye.
I think he/she took offense to the "darkest part of the face" portion of the comment; which is clearly false for the majority of the earth's population, particularly since no one draws the iris white so the original comment was clearly referencing the "white" portion of the eye.
The eyeball is white, but the eye is not, especially if the eye is considered to include the eye socket. Unless you are lying on a dentist's table, your eyes are dark compared to the rest of the face because the eye socket creates shadows. Yes, the eyeball is white, but the eyes are in fact dark.
The author anecdotally notices a great many workplace fallacies, such as "confirmation bias", "the bandwagon effect", etc. Then, seemingly unaware of these separate categories, lumps them together and ascribes that lump as an innate characteristic of "smart" people.
Obviously an organization that succumbs to these fallacies is going to have trouble. Obviously adding "unsmart" people to the mix will only exacerbate it.
But that's not really surprising.
What's interesting is a trend, of which this article is an example, to ascribe the problems of the alleged meritocracy (which are real) not to the system itself but instead to clear, reasoned, well-informed thinking. It's possible that this strange dislike of reason is triggered by pseudo-intellectuals on the far right cloaking themselves in the language of rationality to argue against global warming, minimum wages, the cigarette-cancer link, etc.
But I'd like to see more conjecture/evidence about what's causing it.
I agree with the author 100%. What a beautiful article. The experiences described almost perfectly match my own experiences at Google, and they were one of the reasons I left. (I assume the author is talking about Google here.)
If there's one thing to add it would be that: Some smart people (especially the insecure kind) prefer pre-chewed answers by supposedly smarter or more senior people over their own scrutiny if the two disagree. I have seen this countless times.
In fact I have met surprisingly few people at Google that have actually displayed critical and unbiased thinking when it came to the project they worked on or Google in general. This made the whole workplace feel a bit cult-like.
Your experience is directly contrary to my own :-)
Of course I worked in the Chicago office so maybe it was due to being a remote office. But I and my colleagues regularly disagreed with senior people. Sometimes we were wrong and when we got more information realized it. Sometimes we were right and when we presented more information on the problem those senior people changed course.
Both of us are working off purely subjective experiences, though, so neither one of us can really comment on Google as a whole.
A person who I very much respect once told me there are two kinds of people in the world, stupid and aware of it and stupid and unaware of it. The implication is that everyone is stupid on some level, meaning there is always someone smarter than you and there are always things you do not know or have not yet learned. People who are aware of that are wise. Wisdom and intelligence are not the same thing. I think that is sort of the key meaning of the post, and if one can learn that at an early age as the author did, then they learned it earlier than I did.
This isn't true unless you redefine one of the two comprising traits. You could make a good case for saying that wisdom is intelligence plus experience, but to me that too misses a certain essential element of wisdom.
I don't think you can 'buy' wisdom, you either have the capacity to get it or you don't. I was recently at my dad's retirement ceremony, 30 years in the Marine Corps. One of his good buddies was there, a similarly seasoned leatherneck, let's call him Tom.
I have a habit of addressing people I don't know well with 'mister'. This time he asked me to call him 'Tom', with the explanation that 'you're probably more grown up than I am, anyway'. It was a joke, but the kind that makes your heart go out to him.
He'd had a successful career, but his personal life was always a mess. He was in a particularly reflective mood that day as he spoke at the ceremony, and I learned a lot about the many mistakes he'd made and all the things my dad did to help him out.
This was a fairly intelligent man; my dad didn't make friends who weren't intelligent. He was also very knowledgeable, something you can't build a 30-year career in the Corps without; they'd kick you out at 20. He was definitely experienced. But he wasn't particularly wise, or rather, wisdom was particularly hard-won for him.
He was one of those people who was just really hard-headed about certain things. Had he been wiser, he would have been able to let go of perceived slights and been easier with his family and spent more and better time with them. He would have picked better women to get involved with. Now he's got a relationship that works, and it's about time.
Wisdom is the thing that lets you see the essential quality that makes the important things in life important. Wisdom makes life less complicated and more pleasant. Intelligence tends to complicate things, and knowledge and experience move in one direction. Wisdom and experience makes you more perceptive, knowledge and intelligence by themselves don't.
First thing, your Dad sounds like a great man. As for Tom, his story reminds me of the old saying, regrets are just lessons not learned yet. I can relate. Cool conversation, thanks for sharing.
I'm tired of this meme; it is way overstated. You could just as easily say "the curse of smart people is arrogance, because they know just how stupid the people around them are." I can assure you, I've known plenty of incredibly smart people who were quite arrogant.
Depends how you define "smart" then. Viewed differently, nominal smarts + arrogance = lack of big-picture smarts = at the end of the day, just not all that smart.
One of the absolute worst attributes of smart people is their willingness to argue over the semantics of a word rather than simply accept the overall premise being communicated by the other person.
Except when, as in this case, the underlying semantic confusion of their premise (or how they articulated it) has a lot to do with why it was wrong in the first place.
This does seem to make sense, the more aware of your environment you are (the more you know), the more modest you become when you realise you don't know it all.
'Smart People' certainly seems like an ego-stroking classification we enjoy being placed under (especially here on Hacker News).
I also have trouble seeing how impostor syndrome is viewed as something wrong. I always thought it was just the new buzzword replacement for the age-old concept of humility.
>Smart people have a problem ... That problem is an ability to convincingly rationalize nearly anything.
My apologies to all us 'Smart People' but this can be attributed to 'People' in general. When we want to believe something, we will seek out objects, arguments, 'evidence', etc. that support that belief. If we find enough pieces (that also seem to complement each other) we will be satisfied.
Maybe 'Smart' just means 'more-equipped' (with ideas, studies, etc.) to rationalize something (more quickly or more 'effectively'). Which also makes me wonder if we are just equating 'Smart' (as a classification) to people who have a relatively wider knowledge/experience base to pull from relative to someone else? How else could we 'rationalize' something new other than using/rehashing ideas we have come across already?
>But I think Impostor Syndrome is valuable
I agree. Humility is extremely valuable. It is especially valuable when solving problems, which is what most 'smart people' do, isn't it?
I didn't read this as a critique of the importance or relevance of logical thinking, nor of smart people's arrogance, but as a warning about the dangers of rationalization as a tool for self-delusion.
Depending on how you feel, how your emotions are, you might want to protect yourself from changes, from disappointments, from "everything-is-wrong-start-all-over" situations. So when you are smart, in this definition, when you have highly developed rational thinking, then you use this to create an illusion that keeps you in your comfort zone. Because you can rationalize pretty much everything that happens in the world.
I would say this is not a "curse of smart people", just the particular tool these smart people, according to the author's definition, use for self-delusion. Other people, with other intellectual resources, use those resources for self-delusion all the same. The point is, you have to try to find out when you are in a state of self-delusion, whatever resource you use to prolong it. In other words, smart people may not be wise. Just as any other "kind" of people.
In short, rationality is not the same thing as intelligence and general intelligence does not correlate with rational behavior nearly as well as we might like it to. Individuals with high intelligence can and do put their mental endowments to work to argue for mistaken ideas and execute irrational plans.
I'm now thinking of an intelligent individual who made a small misstatement in conversation, then, when challenged on that point, proceeded to explain why he was not mistaken for several minutes... then finally wised up and admitted that his original statement was wrong.
Respectfully, I think this is the main contribution of the referred post:
"Working at a large, successful company lets you keep your isolation. If you choose, you can just ignore all the inconvenient facts about the world."
Having an established revenue stream is a superpower that lets an organization absorb ridiculous amounts of bad management, poor decision making, lackadaisical execution and so on. Compared to a startup or small company - goodbye scrawny Bruce Banner, hello Hulk. See that bus coming? Who cares, now you can take it, no need to do anything smart about it.
People moving to large established organizations should be keenly aware of this. In large areas of such an organization, maintaining the established product can come to seem like an odd side-effect of all the other busywork done at siloed, "scientifically" managed organizations.
My day job is working with programmers to find better ways of doing things. As part of that, I share with them things that folks have found make their lives better in 70-95% of the time. Not all the time, but most of the time.
You would think this would be easy, but it is not.
Instead of accepting the heuristics I'm sharing, many programmers want to argue about whether something "works" or not. They want a logical, rigid proof of why it works. They have tons of objections as to why it wouldn't work. They have anecdotes about other people who tried the same thing and it didn't work (remember, I'm sharing things that mostly work, not Newtonian laws). Anything at all besides actually trying it out to see for themselves!
Smart people, indeed all humans, reason emotionally and then use logic to justify their conclusions. People who aren't so smart are used to deferring to others for answers -- a priest, a counselor, a politician, a teacher. After all, they've learned they're not so smart. Society has told them that folks in authority are there for good reason.
People who are smart will also defer -- but to somebody who already agrees with them. And the internet is full of authority figures taking every position imaginable.
I really love working with extremely smart people, but you don't ever want to get in an argument with them. It's not good for either of you. They'll be able to convince themselves -- and you -- of just about anything they want. It's truly a terrible thing to be super smart, have blind spots, and be completely in the dark about it.
EDIT: And to the other commenter's point, this is not about arrogance. Even nice, humble people do this. This is about smarts -- the ability to use resources and solve problems creatively. When you put that to work to justify something (or avoid something, as in this article), it's amazing the things you can come up with. Even if you're the most humble person in the world.
> They want a logical, rigid proof of why it works. They have tons of objections as to why it wouldn't work. They have anecdotes about other people who tried the same thing and it didn't work (remember, I'm sharing things that mostly work, not Newtonian laws). Anything at all besides actually trying it out to see for themselves!
The problem with "actually trying it out" is that it's extremely costly. They might be afraid of wasting two weeks on something that ultimately brings no benefit. That's understandable. Part of this is also because a lot of "better ways of doing things" probably actually don't work. People think they do, but that's because of a mix of popularity, good marketing, just-so stories, and the fact that it's often incredibly hard to measure the actual impact of such changes on work performance. Add to the last point that the majority of people seem not to understand data and statistics, and that 86% of the population believes any number they read on the Internet. So I completely understand that they ask for a "logical, rigid proof of why it works" - because most likely it doesn't, otherwise you could provide a reason why it does.
The author seems to attack logical and rational thinking a lot, however, their argument is not convincing.
"But I think Impostor Syndrome is valuable. The people with Impostor Syndrome are the people who aren't sure that a logical proof of their smartness is sufficient. They're looking around them and finding something wrong, an intuitive sense that around here, logic does not always agree with reality, and the obviously right solution does not lead to obviously happy customers, and it's unsettling because maybe smartness isn't enough, and maybe if we don't feel like we know what we're doing, it's because we don't."
Most importantly, consider the quote "logic does not always agree with reality". Logic neither agrees nor disagrees with reality. Logic itself is flawless, but the assumptions you put into it can be wrong. The problem is not with thinking logically, thinking logically is good. Rather, it's an issue with holding incorrect assumptions.
Leaving aside the problem of defining smart, the claim, then, is that the smarter you are, the more false assumptions you hold. That's quite a bold claim. It needs a lot more evidence than personal anecdote for me to accept.
There is no quote that "the smarter you are, the more false assumptions you have". I can't even find the words false or assumption in the article. You are strawmanning or reading incorrectly.
This is what he says:
> Here's the problem. Logic is a pretty powerful tool, but it only works if you give it good input. As the famous computer science maxim says, "garbage in, garbage out." If you know all the constraints and weights - with perfect precision - then you can use logic to find the perfect answer. But when you don't, which is always, there's a pretty good chance your logic will lead you very, very far astray.
My personal experience in life has been that this is the case. No matter how much I have studied, it's not possible to logic your way out of insufficient or misperceived data, which in most domains other than programming is the rule.
I've also experienced the same over-confidence of logical thinkers kept in bubbles - they adjust to the quantity and correctness of data they have in one part of their life and then use logic and cognitive biases to rationalise the outcomes they have in life.
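The "garbage in, garbage out" point can be made concrete with a toy sketch (the scoring rule, option names, and weights below are all invented for illustration): a perfectly deterministic, "logical" decision procedure whose conclusion flips entirely under a small error in one input weight.

```python
def best_option(options, weights):
    """Flawless logic: deterministically pick the option with the
    highest weighted score. Any flaw must come from the inputs."""
    return max(options, key=lambda name: sum(
        w * f for w, f in zip(weights, options[name])))

# Each option scored on two made-up criteria.
options = {"A": (9, 1), "B": (5, 5), "C": (1, 9)}

print(best_option(options, (1.0, 1.1)))  # -> C
print(best_option(options, (1.1, 1.0)))  # -> A: a ~10% error in one
                                         # weight flips the conclusion
```

The deduction is airtight both times; only the estimate of which criterion matters slightly more changed, yet the "logically perfect" answer swung from one extreme to the other.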
> My personal experience in life has been that this is the case. No matter how much I have studied, it's not possible to logic your way out of insufficient or misperceived data which in most domains other than programming is the rule.
> I've also experienced the same over-confidence of logical thinkers kept in bubbles - they adjust to the quantity and correctness of data they have in one part of their life and then use logic and cognitive biases to rationalise the outcomes they have in life.
Well, if they use logic then they are dumb, not smart. Centuries ago we invented better tools, called statistics and probability theory, that are designed to deal with uncertain and incorrect data, and to do this job optimally, as opposed to the intuition and cognitive shortcuts humans normally use.
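As a minimal sketch of what "dealing with uncertain data" means here (all numbers are hypothetical), a Bayesian update revises a belief in the light of noisy evidence instead of declaring it true or false:

```python
# Bayesian update: belief in hypothesis H after seeing evidence E.
prior = 0.5               # P(H): initial belief, a coin flip
p_e_given_h = 0.9         # P(E | H): evidence likely if H is true
p_e_given_not_h = 0.2     # P(E | not H): false-positive rate

# Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior))

print(round(posterior, 3))  # -> 0.818: belief rises, but stops
                            # well short of certainty
```

Unlike a two-valued logical inference, the conclusion carries its uncertainty with it, which is exactly the property the comment above is appealing to.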
> ... the claim, then, is that the smarter you are, the more false assumptions you hold ...
I disagree, the claim wasn't that smart people are wrong more often. But that smart people, even when they are wrong, are too good at rationalizing their wrongness to ever realize they were wrong in the first place.
Not recognizing you are wrong and heading down the wrong path is a sign of a lack of experience. Experience is what teaches you how to recognize 'wrongness'; whether the reason for not recognizing it is rationalization or obtuseness doesn't matter. And a smart person would learn these lessons faster and thus suffer from it less in the long run.
As the author was speaking of his experience as a founder, I read his "logic does not always agree with reality" to mean that in an environment of sufficient uncertainty logic is not that useful, because you can think logically as much as you want, but you still cannot predict your environment. No thinking-based solution, be it logic or something else can work if you cannot predict your environment. It does not matter if logic is flawless or not. It is misleading to think about it as putting in the wrong assumptions, rather it is that in an unpredictable environment you cannot know what the right assumptions are.
> No thinking-based solution, be it logic or something else can work if you cannot predict your environment. It does not matter if logic is flawless or not. It is misleading to think about it as putting in the wrong assumptions, rather it is that in an unpredictable environment you cannot know what the right assumptions are.
Of course there is a solution that can do this; it's called probability theory and it was invented exactly for this - for dealing with uncertainty and unpredictability. Moreover, I don't know where this meme that IT people/geeks/smart people use "logic" came from; it's nonsense and it must die now. Reality runs on probability theory, not on high-school logic.
Also I think that those smart people are getting things right; the "curse" seems to me to be partly jealousy from the less smart (so they say "those smarter guys are actually dumb" to boost their self-esteem [0]) and partly the fact that many things in this world are actually pointless nonsense that society cultivates because, well, not everyone is smart. Many people get labeled "geeks" or "introverts" because they couldn't possibly care less about the soap operas the average person plays out in their life.
[0] - I've seen this behavior countless times when objectively better schools, universities, courses, or even TED get called "elitist" and thus become bad/stupid in the eyes of those outside the "elite" group.
Probability theory is great, but there are still environments that are unpredictable enough that experimentation is the only alternative. Think of science, for example; If probability theory combined with past experience was enough, there would be no point in doing experiments.
Probability theory can only give you the best estimate from the past. If the future is in any way different from the past, for instance because the past gives insufficient information about the process in question, the prediction will still often be wrong, and you are back to experimentation rather than thinking.
Mr. Spock was always illogically neglecting the reality of emotional behavior in humans. I always thought Spock was the most illogical person on the Enterprise.
But look at what they did on ST:Next Generation! They went the other way and replaced him with a psychologist who would proffer such advice as "The Klingons are firing at us, I sense anger."
It also repeats the cliche that emotions are opposed to reason. That's kind of a self-fulfilling prophecy for most people, but it just doesn't have to be that way.
All you have to do is stop repressing your emotions but also stop appeasing them, two opposite sides of the same mistake. Rather you listen to them and treat them as an input and an output just like all other data (except they are special in being both input and output). (In a nutshell---in practice there is a lot more to it.)
> the smarter you are, the more false assumptions you hold.
I read a bit past what the author wrote. I think he observed that smart people in that context externalize blame. This is extremely natural, and something only seasoned individuals learn to avoid. Thinking that a project fails purely for external reasons prevents correction of personal actions on future events. As such, it is better to err on the side of blaming oneself than to err towards blaming the environment.
I personally believe this behavior is pervasive across the entire intelligence spectrum. Personal accountability is something that is socially learned.
> Personal accountability is something that is socially learned.
But the problem arises when you then need to take things into account which you simply can't anticipate.
Consider this (overly simplified) example: I have to choose one out of A, B and C. I know that A is ruled out because of this and this. So I only have B and C as realistic options. I choose C, as it is furthest away from A, and I could use this as a reason for it being the safest bet. So, C it is. But then it just happens that C is ruled out too, for whatever reason. Thus, by choosing C out of the three, I made no mistake. I had solid reasoning; for all I knew, C should have been the safest bet. The fault is not mine, the fault is external.
How can someone ever just suck it up, when they follow a precise, definite and convincing procedure to come to a situation in which external influence just makes it all fail for them? How could there be any personal accountability in an example described above? There is none, the reasons for failure are absolutely out of my control. I can not be held personally accountable (in a philosophical sense, world out there doesn't give a fuck) for things I have had absolutely no way to anticipate.
How could you just go all "Oh well, you should have picked B instead, because had you done so, your choice wouldn't have failed!". Yeah, that's true, but there's nothing to learn about personal accountability there.
Personal accountability is about more than just admitting you were wrong. Sometimes it's about admitting that you didn't have enough information to make a definite choice. In your hypothetical situation, while you didn't do anything wrong, you still had something to learn.
If you don't look back and see what information would have allowed you to pick B, and instead go "oh well, there was nothing I could have done differently, guess it was just inevitable," you will miss an opportunity to learn and possibly do better next time. That missed opportunity is an example of arrogance in action.
I also think it's ok to look back and believe you made the best decision you could have given the information, and that you simply got it wrong (and that it's ok and understandable). And that while you may have gotten it wrong, going another route still wouldn't have necessarily been the best decision due to risks vs rewards.
Logic is very much flawed in the sense that you can't necessarily model reality using logic, and good paradoxes point out examples (for example, look at the tortuous "solutions" to http://en.wikipedia.org/wiki/Sorites_paradox ).
And that is a blatant lie. By a famous philosophy professor, no less, which just makes it a worse lie.
All statements are true, false, arbitrary, or nonsensical.
Contrary to Priest, a statement can't be true and false. (It can be neither true nor false, by virtue of being arbitrary or nonsensical.)
A self-referential sentence, like "This sentence is false," is simply nonsensical. Famous philosophers have written whole books on the premise that this is an important problem, but it isn't.
Graham Priest would probably protest that I just don't "get it," but no, I do get it. I've read the article, and it's bullshit.
You can define any consistent logic system you want. It would help if you think of "T" and "F" merely as two symbols rather than as having the semantics that seem familiar to you in the real world. But once you assign the meanings "true" and "false" to those symbols, you venture beyond mathematics into the realm of philosophy (in particular, epistemology).

The problem is that while you can define "true" and "false" in a common logic system, taking the same definitions into the real world isn't so smooth. The main problem is that in the real world you don't have axioms -- you have assumptions which might be "true" or "false" (whatever that means once we go beyond our measurement range). This alone pretty much precludes almost everything in the real world from being "true" in the mathematical sense. These assumptions are so pervasive that even the most basic quality, or concept, underlying all of science -- namely, causality -- is a mere assumption.

Therefore, your statement "All statements are true, false, arbitrary, or nonsensical" can only be defined to hold in a system of logic, but doesn't necessarily translate into the real world. Mathematically speaking, nothing we know of the physical universe is known to be either true or false. In fact, you can't even conclusively (again, in the mathematical sense) say that two statements about the physical universe contradict, because we don't "know" anything about the physical universe in the same sense as we know something in a mathematical logic system. In short, your opinion is neither true about mathematics (where you can define any system you want) nor about the physical universe, where you can't directly apply any mathematical logic system, and in particular the one you think is "the one true logic".
This is pure skepticism. Your position is: "We don't know anything about the real world."
That is absolute hokum. The Earth orbits the Sun, living things need fuel to survive, two units combined with two units sums to four units. I could go on and on.
You need good epistemology to explain why this is, since while it's common sense, I agree that common sense is not sufficient when we're talking about philosophy. Clearly you have not discovered good epistemology.
Isn't his position that mathematical (2+2=4) and logical (!T=F) knowledge is a different kind of knowledge than empirical (temp=20C) or scientific (F=ma)?
I took this to be similar to what Einstein [1] said:
"As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."
Thanks for the Einstein reference. I wasn't familiar with it. However, my comment expressed little more than well established principles in epistemology and the philosophy of science. Einstein in his essay discusses the same general issue. My favorite demonstration of the difficulty of defining (non-mathematical) knowledge is the Gettier problem. The coolest thing about it is that if we try to come up with a definition of knowledge that is always agreed upon by everybody -- one that fully reconciles what we can intuitively and "logically" consider to be knowledge with all forms of the Gettier problem[1] -- we find ourselves with such a narrow definition that hardly covers anything we consider to be "known". This means that in science, as with everything else, we must adopt more lenient -- or pragmatic -- definitions of knowledge, but those definitions are not rigorous in any mathematical sense, and are always lacking.
> Your position is: "We don't know anything about the real world."
This is absolutely not my "position". The word know requires definition, which is a very hard thing to do, and is the subject of the whole of epistemology. All I'm saying is that "know" in the real world and "know" in logic are not -- and can never be -- the same. We cannot "know" something about the physical universe in the same sense that we "know" something to be true in mathematics (but we can certainly know a lot about the physical universe, if only we would define knowledge differently than in math, which is pretty much what every philosophy does). The point I was making is that you can't blindly overlay mathematical concepts on the real world merely because they use similar terminology. Particularly, the definition of "truth" is inevitably different in math and in science, if only because science does not have the concept of the axiom, which is a core principle in the definition of mathematical truth.
You cannot directly apply your notion of truth from logic to the physical universe and vice versa. Even in your examples you're confusing two different notions of truth. We do not know that the earth orbits the sun in the same sense that we know that 2 + 2 = 4. The latter is simply an algebraic axiom of the field of real numbers, while the former is an application of scientific assumptions to observations. We can be content with both types of knowledge, but we cannot equate them. We can treat both as true, but they are not true in the same sense.
I didn't just say that to take a cheap whack at you. I'm not interested in that kind of low-brow discussion. But I see why you don't agree with that characterization. So I'll back off on that point.
I do think you're making a very serious error (albeit one common among philosophers and mathematicians).
What you are saying is that mathematics is derived from a set of axioms, and therefore is true "by definition," whereas truth about the real world is something else. And you have a harder time (in various ways) with non-mathematical truth. The foundations there are shakier.
However, the foundations actually are not shakier. They just aren't as widely known among philosophers (or common people). I'm not going to try to explain it here. That would be ridiculous. But to say one important thing that most people miss: You have to realize that knowledge is only valid in its context. It is contextual. So it is true to say, for example, that Earth orbits the Sun. It could turn out that we all live in the Matrix and that Earth doesn't orbit the Sun, but that claim would still have been true in the current context (where we have no reason to assume we live in the Matrix). Likewise, Newtonian mechanics can never be invalidated by later discoveries, because it is true in its context, i.e. at a macro scale. I don't know if this point will help you, but it removes one barrier to getting scientific knowledge onto a non-shaky foundation.
A separate point: while it's true that you can have a mathematical system with arbitrary axioms, you can also have a mathematical system that describes reality, developed using philosophical (as opposed to mathematical) induction (just like all "scientific" truths), and thus is actually true in a scientific sense, and which is NOT based on axioms.
For instance, you can define a mathematics where 1 + 1 is 2, but you can also induce this rule from reality by looking at objects.
You can define a mathematics where 1 + 1 is 1, but you can't induce that from reality, because it isn't true in reality.
Really, the point of mathematics is to improve human life, not to be an idle pastime for professors who have no moral qualms about getting paid by taxpayers while not rendering them a service. So the point of mathematics should be to look at mathematics as it can be induced from reality, and figure out things that are not only true in what you call a mathematical sense, but also true in reality. It's also fine to look at made-up mathematics, but only insofar as that sometimes turns into useful discoveries for real mathematics.
That anything can be induced from nature is a scientific assumption, and cannot be readily reconciled with logic. This is the "problem of induction".
In any case, even if you decide to accept induction (and therefore causality) as "axiomatic truths of nature", even then you cannot reconcile mathematical deduction with scientific induction. Worse: induction is a constant "working assumption", as laws of nature "induced" from observation have often been shown to be false, or at the very least inaccurate, making even axiomatic induction (if you subscribe to that notion) a theoretical, yet unattainable, ideal. This holds regardless of whether or not we live in the Matrix.
I don't quite get your distinction between "real" and "made-up" mathematics, but I think you're affirming the consequent by presupposing a "true math", and then arguing that some set of axioms is "false" simply because it is not your own. In particular, the concept of truth and falsehood are not induced from nature because they are properties of statements about nature. Therefore, your own claim about the only possible truth values is not any more "real", or even induced from observation, than any other.
First, you absolutely can do scientific induction. For example, if I drop a ball a bunch of times, I know that balls fall, as long as the context does not change. If a primitive man stands atop a tower, looks around, and says, "The land is flat" (as far as he can see, it is), that is true, in that context. Sometimes you don't know the boundaries of your context, as in these simple cases. (I am well familiar with the problem of induction.)
But the thing is, with modern science, you can say "In the full context of modern thought, gravity behaves thusly," and the context is absolutely enormous.
You can't ask for more than that from induction. You can't ask for omniscience, and then, not getting it, throw the baby out with the bath water.
Note that there are no mathematical axioms here. I do assume that existence exists, that A is A, and that I am conscious. So those things have special status (for reasons I won't get into). But they are implicit in all claims to knowledge---even if you deny them, you are assuming them. They are not like mathematical axioms, which are just assumed and could be different (and are in different systems).
I don't know why you would say that deduction cannot be reconciled with induction. Let me give you an example.
Premise 1: All men are mortal.
Premise 2: Socrates is a man.
Thus: Socrates is mortal.
This is a deductive argument. However, how did we arrive at Premise 1? By induction. And it's true in a certain context. (And we all know what the context is; it no longer applies if Aubrey de Grey "solves death," for example, which is changing the context.) We also got Premise 2 by observation, which is kind of like induction, but simpler.
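For what it's worth, the deductive half of the syllogism above can be written as a machine-checked derivation. Here is a sketch in Lean (all names are my own invention; the two premises are declared as axioms precisely because, as the comment says, they come from induction and observation, not from the logic itself):

```lean
axiom Person : Type
axiom Socrates : Person
axiom man : Person → Prop
axiom mortal : Person → Prop

-- Premise 1, arrived at by induction from experience:
axiom all_men_mortal : ∀ p : Person, man p → mortal p
-- Premise 2, arrived at by observation:
axiom socrates_is_man : man Socrates

-- The deduction itself is a one-line application of the premises.
theorem socrates_mortal : mortal Socrates :=
  all_men_mortal Socrates socrates_is_man
```

The checker guarantees only that the conclusion follows from the premises; it has nothing to say about whether the premises themselves survive a change of context, which is the point being made.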
Let me address your second paragraph.
I'm not making the error you suppose, which strikes me as a bizarre supposition, but that's because we are coming from such fundamentally different approaches. Anyway, I didn't say anything about "true math" or "false math"; that is your interpretation, but it is not accurate.
What I am saying is that if I take one seed and add another seed and count them, I have two. In fact, if I have any number of seeds and add one and count them, I have one more than I did before. Moreover, if I plant five rows of seeds with five seeds in each row, it requires 25 seeds. Here, I am inductively discovering mathematical rules, instead of deducing them from axioms.
So you can have mathematical rules that are (or could be) gotten inductively from reality, and thus correspond to reality.
It would also be possible for mathematicians to come up with axioms that deductively lead to the same conclusions, and a lot of that goes on in mathematics. So if you start with Z-F set theory or Peano numbers or whatever (I'm not a mathematician), you can end up figuring out a lot of stuff (like calculus) that actually does correspond to reality. Though it's no coincidence that Newton did not discover calculus by reasoning from basic axioms, but in trying to solve actual real-world problems using algebra.
Mathematicians also can and do come up with axioms that do not correspond with reality. That's fine, too, but it's not directly useful, though it can shed light on mathematics that DOES correspond to reality and be useful that way, and/or be used to develop techniques that are logical and then can be used to work with math that does correspond to reality, or whatever. So it's not totally pointless to do this. I'm not saying it shouldn't be done. I'm just saying, for example, that if I create a mathematical system where 1 + 1 = 1, or something can be true and false at the same time, that's fine, but it won't correspond to what we see in reality. I won't put two seeds into a pile and only have one seed, and I won't be able to have my cake and eat it too, even if that would be possible under a certain mathematical system.
> First, you absolutely can do scientific induction. For example, if I drop a ball a bunch of times, I know that balls fall, as long as the context does not change.
You absolutely can do it, but that the result of scientific induction is meaningful is, itself -- at best -- a proposition that rests on scientific induction. Specifically, the proposition that "my memories of patterns of things which I have observed are a useful basis for predicting future observations from other acts or observations" is a proposition that is, itself, either unsupported or a generalization from past experience of exactly the type it supports.
> What I am saying is that if I take one seed and add another seed and count them, I have two.
You can't count them and get "one" or "two" until you define what one and two are, at which point you have already made one plus one equals two deductively true. The same is true of any other supposedly induced mathematical truths: in order to induce them, you must first have definitions, from which deductive mathematical truths follow of necessity.
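To illustrate the claim that the definitions come first, here's a minimal Peano-style sketch (my own illustration, using nested tuples as a stand-in for numerals): once "zero", "successor", and "addition" are defined, 1 + 1 = 2 unfolds deductively from the definitions alone, with nothing to observe.

```python
ZERO = ()

def succ(n):
    """Successor: wrap n in one more layer of tuple."""
    return (n,)

def add(m, n):
    """Addition by definition: m + 0 = m; m + succ(k) = succ(m + k)."""
    if n == ZERO:
        return m
    return succ(add(m, n[0]))

one = succ(ZERO)
two = succ(one)
print(add(one, one) == two)  # True, by unfolding the definitions alone
```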
Let me state your argument in the most general form: You need reason to validate reason.
That is true, but it's completely fine.
Ultimately you have to assume certain things. In Objectivism they are stated as: Identity (A is A), Existence (Existence exists), and Consciousness (And I know it). They are called "axioms," but they are not like mathematical axioms. Rather, they are rules that must be assumed in any claim to knowledge, including in any claim to deny them. For example, if identity is invalid, a claim like "Identity is invalid" would be meaningless.
Another difference from axioms as they are used by mathematicians is that nothing is deduced from these axioms; they are just the prerequisites for induction, any induction at all, from reality.
To address your second point.
First you need to induce the concept of "unit." Then, you can induce the concept "one," "two," and, if you want, other numbers.
Then you can come up with the concept of "addition." I'm not sure how you could deduce addition from the notion of counting numbers. If you'd like to explain it to me (without presuming it in the argument, or any other mathematical knowledge that we haven't gotten to yet), I'd like to see that. I'm skeptical that it can be done.
However, you certainly can inductively observe that groups of things denoted by counting numbers have a certain relationship, and then come up with addition that way, which is induction. And then you realize the causal reason for it being that way. But that is a normal thing in induction. Induction doesn't mean, "there is no cause for this to be true."
To summarize, my point is that the logic is something like observing "If I put one and one together, I have two," but that is induction, not deduction, prior to having addition defined (and then you do it deductively from then on because you already know the rule and you are just applying it).
Though at this point we only know about 1 and 1, we don't know about 2 and 1, for example. So to make it general, and have a general rule of addition, would require more inductive work.
This is all just me thinking through it. I don't claim to be representing Objectivism perfectly on this point, though I do on the first point.
The only difference is that Math adds a few extra requirements, in that it is required to stay internally consistent using a given system of logic (in this case strictly Boolean logic, which is the entire reason why proof by contradiction is even possible).
No it's not. Not any more than chemistry is a philosophy or history is a philosophy.
> The only difference is that Math adds a few extra requirements, in that it is required to stay internally consistent using a given system of logic (in this case strictly Boolean logic, which is the entire reason why proof by contradiction is even possible).
Philosophy is also required to be consistent (properly, both internally and with the external world), and allows proof by contradiction without relying on strictly Boolean logic.
As someone with a pretty strong Math background, I certainly didn't take that from his comment. I took it to mean he understands Math fairly strongly as well, and that most of your grievances with him have more to do with your lack of familiarity with the deeper underpinnings of Mathematics.
>All statements are true, false, arbitrary, or nonsensical.
And which was that?
AI depends heavily on statements that are somewhat true and somewhat false at the same time.
How distorted does a letter shape have to be before it stops being readable? How much does readability depend on context? How much does meaning depend on context? What if input is inherently ambiguous and noisy but there's a signal under there somewhere?
As for smarts, there are different kinds:
Abstract symbolic reasoning (e.g. pure maths)
Using abstract symbolic reasoning to predict possible futures (physics, engineering, finance, AI)
A talent for optimising choices towards an explicit goal through abstract reasoning about possible futures (all of the above, with brain-powered decision making)
The same, but using unconscious processes ('intuition') that are not explicitly abstract or logical (all brain, no AI so far, but still effective)
The author seems to be suggesting that the first two kinds of smarts don't necessarily translate into the second two kinds, for various reasons. And not all smart people realise this.
>There is also “awareness”, but awareness is not a thing or localized in a particular place, so to even say “there is also awareness” is already a tremendous problem, as it implies separateness and existence where none can be found. To be really philosophically correct about it, borrowing heavily from Nagarjuna, awareness cannot be said to fit any of the following descriptions: that it exists, that it does not exist, that it both exists and does not exist, that it neither exists nor doesn’t exist. Just so, in truth, it cannot be said that: we are awareness, that we are not awareness, that we are both awareness and not awareness, or even that we are neither awareness nor not awareness. We could go through the same pattern with whether or not phenomena are intrinsically luminous.
-Daniel Ingram
You're too quick to reject it. The classic dichotomy works well in the objective sphere, but not so well on the boundary of objective and subjective.
There is awareness---I am conscious, so are you. It's a fact. It's true. There is simply no problem, here. It is not a great mystery. We don't know precisely how consciousness works, of course.
There is no boundary of the objective and subjective, because there is no such thing as subjective truth, which is an oxymoron. For example, if vanilla is my favorite ice cream, that is an objective truth about my personal preferences.
How is it a fact? It's your fact, subjective fact. That we both have an agreement doesn't make it objective. You can't prove that there exists no man that will say he doesn't possess awareness. Even if everyone agreed that vanilla is their favourite flavor, it won't make vanilla objectively the best flavor of all.
Godel constructed statements S that are neither true nor false (more precisely, you could add an axiom saying "S is true", or you could add an axiom saying "S is false", and you'd get a consistent logical system either way).
It turns out that if you can prove something is both true and false, then EVERYTHING is both true and false. For example, let's say you want to prove X. Note that Y is true. But then you obtain a contradiction (since Y is false)! Hence X is true.
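The "everything follows from a contradiction" step (the principle of explosion in classical logic) can be checked mechanically. Here's a small truth-table sketch of my own, not from the thread:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# (Y and not-Y) -> X holds in every row of the truth table, for any X:
# a contradiction entails anything.
explosion_holds = all(
    implies(y and (not y), x)
    for x, y in product([True, False], repeat=2)
)
print(explosion_holds)  # True
```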
"Godel constructed statements S that are neither true nor false"
No, that's not what he did.
He constructed a statement that is true but can't be proved in a certain logical system. The statement was basically "This statement is unprovable with these axioms", which can either be proved - meaning you've proved something false - or can't be proved, meaning the statement is true but unprovable.
Note that this uses two notions of "true" - provable (can be derived from axioms), and actually true.
> Note that this uses two notions of "true" - provable (can be derived from axioms), and actually true.
Even the second notion of truth is really just semantic consequence in the second-order theory of the naturals, which is a formal mathematical concept, not quite the same as actual (ontological) truth.
For e.g. the continuum hypothesis, there are models which obey the ZFC axioms in which it is true and models which obey the axioms in which it is false. The Gödel sentences are less interesting; any model in which those sentences were false would be a model in which PA was inconsistent. Which, sure, can exist (e.g. the "self-hating theory", PA plus the axiom that PA is inconsistent, is a somewhat legitimate theory of arithmetic with some interesting properties). So saying the Gödel sentences are true was not entirely accurate; rather, it depends which model we're working with. But a model which declares PA inconsistent doesn't seem like the sort of model that we'd want to do physics with.
You're right that there are "larger" theories that can prove the consistency of PA; e.g., ZFC proves the consistency of PA. But, per Gödel, no consistent axiom system large enough to contain PA can prove its own consistency. This is probably a lot less bad than it sounds, though; after all, if we have some unknown axiom system T, and we have a proof in T that T is consistent, does that really tell us anything? Because if T isn't consistent, then it can prove anything, including that T is consistent.
Ok, now here's a question: are there any textbooks introducing model theory and talking about these sorts of topics that don't rely on preexisting knowledge of abstract algebra? Not that I don't want to learn abstract algebra, but I don't even know a good textbook to start that from.
All statements are true, false, arbitrary, or nonsensical.
I'm missing something here. Is the statement "the three angles in a triangle add up to 180 degrees" true, false, arbitrary, or nonsensical? What do you call a statement that lacks (or reveals the lack of) crucial context?
In that particular case, any proof that the angles of a triangle add up to 180 degrees has to start from the relevant axioms - and if you include the Euclidean parallel postulate then yes, that can be proved. What would the "crucial context" outside of the axioms you start with and inference steps to get from the axioms to your statement be?
The statement given is equivalent to the parallel postulate, and thus is true in Euclidean geometry. It is false in any non-Euclidean geometry.
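A concrete illustration (my own numbers, not the commenter's): on a unit sphere the parallel postulate fails, and a triangle's angles sum to more than 180 degrees. The "octant" triangle -- the north pole plus two equatorial points 90 degrees apart -- has three right angles. Girard's theorem says the excess over 180 degrees (in radians) equals the triangle's area on a unit sphere, and an octant is one eighth of the sphere's area of 4*pi:

```python
import math

angles = [90.0, 90.0, 90.0]   # each pair of sides meets at a right angle
angle_sum = sum(angles)       # 270.0, not 180

# Spherical excess in radians should equal the octant's area: (4*pi)/8.
excess = math.radians(angle_sum - 180)
print(angle_sum)                               # 270.0
print(math.isclose(excess, 4 * math.pi / 8))   # True
```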
still, logic is simply a system of thought, it's philosophy, nothing more (which is why Mathematics is considered a branch of philosophy).
So the idea that logic isn't reality is a completely valid idea both because logic is simply a system of thought, and because logic cannot reasonably take into account perspective.
you can reason about perspective, but reasoning by itself is not enough for logic; logic dictates that you reason in a specific manner, and sometimes it's better to reason in other manners.
This doesn't seem to be the curse of smart people. Many bureaucrats don't care if a project succeeds, instead focusing on whether or not it is approved and whose name is attached. That isn't because they are smart. It's because their environment rewards this behavior.
People can rationalize anything. It has nothing to do with intelligence, merely experience and current environment (which is most of what's written in the article).
Most bureaucrats are smart. Most politicians are smart. When you lament why the people in charge seem so dumb, it's probably worth reflecting that they're still in charge. They understand something you don't.
It seems highly likely to me that power brokering (i.e. social intelligence) and analytical intelligence are orthogonal skills (and not the only orthogonal axes on which one may be "smart").
"Smart" is inadequate as an adjective, because it leads people to say things like, "If you're so smart, why aren't you the boss/senator/president?" If we separate these forms of intelligence in our daily conversations, we might make it easier for the oft-exploited analytically intelligent to appreciate and take advantage of social intelligence, and gain more recognition from those who have social but not analytical intelligence.
> "If you're so smart, why aren't you the boss/senator/president?"
I think it mainly has to do with comfort. Most people don't want to step outside of their comfort zone to pursue political power. There's too much risk and not enough to gain from the time commitment.
Speaking as someone who helped with someone's unsuccessful political campaign, it takes a lot more than just discomfort to be successful in politics. I think the "just don't feel like it, but if I did, I could win" excuse for avoiding politics is like the "I could probably get in, but I don't want to be a member" excuse for avoiding Mensa. Both things are probably harder than they seem (though I only have experience with one).
They know how to lie, play people against each other, abuse human emotions and so on. It is quite simple to understand this without being in power for various reasons, e.g. different moral standards.
And no, most of them are not smart. Just smart enough. Not the same thing.
Having worked in politics, I tend to disagree with you. Still, I'm not lamenting why the people in charge are dumb, I'm saying you don't have to be smart to rationalize things logically.
As I have grown (just a little) older, one of the best lessons I have learned is just about the limitations of intelligence, of rationality, even, of language. We are intelligent animals. However, we are more animal, than we are intelligent. And that's not even necessarily a bad thing. I grew a lot when I realized that 'being intelligent' is not the only important way to be a good person.
> Smart people have a problem... an ability to convincingly rationalize nearly anything.
This has a simple solution, really: they should be actively trying to prove their own assumptions false. Rationalize that their premises are wrong until compelled otherwise by extraordinary evidence. Be a contrarian.
It will also help if all the smart people in the group have a diversity of beliefs so that everybody gets his assumptions cross-examined.
That's possible, but given that there are nigh-infinite ways for them to be wrong, what's the probability that they're right?
This reminds me of a kind of intelligence scale I've seen before:
First there are stupid people, and then there are stupid people who know they are stupid, then there are smart people who think they are smart, and finally there are smart people who know they are actually not that smart.
Self awareness is the differentiating factor at every level.
Intelligence comes in many forms; practical, creative, mathematical, logical, social, emotional, etc. As members of society we are more or less forced to specialise and must accept that being really good at something usually means we are terrible at something else. So if you are half as smart as you think you are, you will learn to recognise all the areas where you really are not, which vastly outnumber the ways in which you are.
Which is a bit like saying: "I don't know half of you half as well as I should like; and I like less than half of you half as well as you deserve."
I for one am not that smart, I'm just a lot less stupid than most ;)
You make it sound like it's good - it isn't. It's not something you enjoy, it's something you suffer. I'm trying to get out of that particular tar pit right now.
The real cure for Imposter Syndrome is objective, hard data concerning a) something real-world that you are trying to accomplish and b) your actual ability relative to some social norm that matters for the ability being measured. The issue with Imposter Syndrome is that "social proof" is often not very good proof. There are forms of social proof which can be valuable, but they are also often painful (à la the Bill Gates quote about "your unhappiest customers" being your best source of learning). But a lot of that "rah rah, we think you are awesome and will thus kiss your butt and not give you meaningful feedback" stuff is something to very much worry about.
>Answer: I think it's really true. A surprisingly large fraction of the smartest programmers in the world do work here.
Not so sure about this. There are people that are smart for solving math problems and IQ style quizzes and such, and there are also people that are smart with language, patterns, empathy, creativity etc ("Emotional Intelligence" also doesn't cut it to describe this kind of smart completely).
I've met several IQ-140+ people who are basically stupid in lots of ways and incapable of doing anything with their lives, much less contributing to society. Some of them thrive only in some small niche areas (like chess or speaking many languages).
I think there are smart programmers in both of the categories that I have described, and a process like Google's is mainly getting the first kind.
> a process like Google's is mainly getting the first kind
They might have decided long ago that they largely need just the first kind, but how are you going to communicate that sort of preference, and why should you?
What the author describes is a US, and in particular a Silicon Valley, attribute: that of a concentration of smart technical people in their own bubble in a nice climate, outside of a wider community.
It's self selection filtering, it's team belonging and humans in groups. It's earnestness, it's bright eyed optimism.
It's not social awareness, and it's not a healthy cynicism. It's not cosmopolitan. It's isolation.
It's why the largest purely technical efforts and successes come from SV, and why more socially conscious, digital-humanities, and humanitarian technical efforts occur elsewhere in the world.
Isolated community bubbles are present in many industries with equally damaging effects. It's arrogance to assume it's just Tech because the 'unique selling point' of tech seems to be intelligence (ironically, probably the most common 'unique' selling point).
Fashion, Film, Finance, Politics all have their own problems stemming from a self-selecting isolated elite.
Correct me if I'm wrong, but it seems that both the author and a few of the commenters here suffer under the misapprehension that impostor syndrome is equivalent to a lack of confidence or arrogance, or to humility.
As far as I understand the phenomenon it is rather an inability to internalize accomplishments, resulting in a skewed perception of one's own competence.
Humility is, of course, important; but it is altogether something different from an irrational amount of self doubt, and impostor syndrome is thus clearly not valuable.
Please excuse any errors in the text above, as English is not my first language.
If English is not your first language God help the rest of us :)
I think your observation and analysis is spot on. And I think I may add to your assessment with a couple of other points.
1) "Imposter Syndrome" is an 'in' phrase at the moment, its use as currency marks out the speaker as being part of an 'in' crowd. Similarly when people use the expressions "onboarding" or "I'll just leave this here" or "ping" or "tl;dr" or any of the countless other expressions that serve as social status plumage.
2) The entire article implies being the author is smart. Look at me, I'm smart -- this is an unwritten all-but-explicit large part of the message of this post.
3) This is the type of post that HNers lap up. It's hip, it's got the right bounce, it talks about trendy social phenomena, weaves a satisfyingly yummy narrative. You have to ask yourself why these types of posts get so many comments and upvotes.
Thank you!
I wholeheartedly agree with your points.
How ironic it is then that the article lamenting the ability of "smart people" to "convincingly rationalize nearly anything" may, in a sense, itself be a rationalization of the author's own perceived smartness (as well as the smartness of those that might identify with him).
This is, of course, not to say that there are no good points in the article itself - there certainly are.
I honestly identified with it because it's been my observation that too many people decide on something and then rationalize it without considering their thought process for getting there.
And I'm always secretly afraid that I do it more than I think I do. So when I see the blog author describe something so exactly close to how I feel about it, it's gratifying.
good grief, yes, smart people can be arrogant, yes, smart people can be many other negative things.
but my experience walking around the world is that most people are too stupid. Too stupid means, for instance, that they could never begin to fathom programming a computer in C or C++. So pursuing that narrow example (it doesn't mean you can't choose quantum physics instead; if you are distracted by that, re-read what I said) and restricting ourselves to the population of people who can fathom programming in C and C++: again, my experience is that even they are mostly too stupid to do a good job of it. These are difficult and dangerous languages, and we as a society do not have enough people who can actually use them safely.
So, from where I'm sitting, the arrogance of the world is not concentrated among the small numbers of truly exceptionally smart people, the arrogance is mostly found among all the other people--as we see in this discussion--who focus all their criticism on smart people.
Yes, we smart people are arrogant, but we are not stupid, so shut up in order for you to get what you can from us, and do not wish that we could somehow be more like you because that would entail us being more stupid. Don't be insulted; just as I am not insulted that you can't see what I can see, I'm simply saying stop complaining about this, it goes with the territory. Do you complain that the brilliant brain surgeon who saved your life is also very arrogant, or do you kiss his ass for saving your life? Would you trade the pleasure of watching Michael Jordan play basketball away because you don't like the arrogant way he acts off court? No, you wouldn't.
Yes, there can be super smart people who are also super humble, but so what? there can also be super smart people who are not humble, just as there can also be super humble people who are not very smart, and the worst of all, arrogance that is not accompanied by extreme skill, which is what I am complaining about. It is arrogant of all of you (no, not you personally, just look down the discussion) to all jump into the same discussion on the same side--ooh arrogance is bad--as if what you are saying is interesting. And this is not a topic of great interest to me, I'm simply trying to represent my side since I didn't see it represented anywhere else here.
"I have already proven to you, I make mistakes like the next man. In fact, being – forgive me – rather cleverer than most men, my mistakes tend to be correspondingly huger"
Depending on which side of the pond you reside, one could be forgiven for thinking that this is an issue with the sartorial sensibilities of people, rather than with how intelligent they are.
I think a lot of the discussion here assumes that smart people are unable to make proper decisions under uncertainty. That has not been my experience. I've found that truly smart people are able to discern when there is uncertainty and apply the proper analysis using probabilities. They're also able to sanitize the inputs to avoid the garbage-in, garbage-out problem. Perhaps the "smart" people he is referencing are not so smart?
From The Jargon File, Appendix B, “A Portrait of J. Random Hacker”, section “Politics”¹:
[…] Hackers are far more likely than most non-hackers to […] entertain peculiar or idiosyncratic political ideas and actually try to live by them day-to-day.
I am guilty of imposter syndrome, however I recently heard myself speak about a subject I care a lot about. And I giggled because I actually sound smart. That was a revelation, although I am still not convinced.
Smart people I know also think critically. They verify their ideas and learn from their mistakes. What the author describes seems more like a management issue.
They're also better able to avoid backing the wrong horse to begin with and better at rethinking and cleaning up after their mistaken rationalizations.
> A surprisingly large fraction of the smartest programmers in the world do work here. In very large quantities. In fact, quantities so large that I wouldn't have thought that so many really smart people existed or could be centralized in one place, but trust me, they do and they can.
There are certainly other places with lots of smart programmers, but it seems every time I read something self-congratulatory like this, it's Google.
I am not sure if breadcrumbs means low compensation, but the comp at Google (especially for people succeeding) is high and low risk.
If you just want to write software (as opposed to run a business), being an engineer at Google is really nice. You get a ton of resources, work on projects that have millions of users, etc. Don't get me wrong, entrepreneurs are awesome. But it's a much, much broader skill set than what is required to be a successful software engineer.
As for academia, my personal problem with it is that it's too focused on elegant/complicated/clever solutions to problems that don't really exist. And the compensation is terrible.
"... an increasing relative level of IQ brings with it a tendency differentially to over-use general intelligence in problem-solving, and to over-ride those instinctive and spontaneous forms of evolved behaviour which could be termed common sense. Preferential use of abstract analysis is often useful when dealing with the many evolutionary novelties to be found in modernizing societies; but is not usually useful for dealing with social and psychological problems for which humans have evolved ‘domain-specific’ adaptive behaviours. ... when it comes to solving social problems, the most intelligent people are more likely than those of average intelligence to have novel but silly ideas"
Haha, this is great. I'm not claiming to have a high IQ; however, the article you linked resonates a lot with me.
In my previous relationship, I had a difficult time figuring out if what my (ex)-girlfriend said corresponded with what she actually meant, and I often thought she was playing "mind games" of some sort with me. So I began recording the intervals of time between our text messages and plotting them to search for discrepancies in our texting patterns.
Lately, I've also been putting on different personas at parties I attend (I'm a grad student) and taking notes on the varying responses I get to each type of personality. I'm attempting to optimize charisma with this data, but I'm not having a lot of success yet. People always say to "be yourself", but if that were the case I simply wouldn't interact with others at all, so I have to develop some kind of personality to use.
I was telling my sister about these types of social experiments, and she told me that overanalyzing everything to a fault produces the opposite result of what I'm trying to achieve. So far, she seems to have a point, because I still have difficulty deciphering how most people interact socially with each other.
(I hope none of my friends know my HN username...)
When arguing with someone, or even just trying to understand their advice, I often find myself thinking:
"If I actually understood this topic as well as you claim to, I would reveal that understanding much better than you currently are."
The advice to "be yourself" is an excellent example: you can pretty trivially reveal "yourselfs" that the adviser recommends you not be, and "non-yourselfs" that you should be.
And yet ... most people recommending that, cannot open the black box and reveal any greater precision to that heuristic, which I expect I would do if and when I gave such advice (or any heuristic advice).
It's why I distinguish between levels of understanding[1]. There's the level at which you can emit the correct answers with no ability to introspect on them, and the level at which your understanding "plugs in" to the rest of your world-model. If you acquired the former without ever having to reflect on it, you may never turn it into the latter, nor need to. But neither will your advice make much sense.
Being yourself doesn't mean what you think it means. It doesn't mean doing what feels natural or easy, it means that you should be honest and open about who you are, how you feel and what you want. Just knowing the latter things is hard and being honest and open about them is also hard.
Considering that I didn't say what I think it means, it seems you may be under the very same illusion of understanding I was criticizing, in which one thinks one's explanation, metaphor, or heuristic has caused a specific model in the other's mind, when it has not.
In the same way, about half of the people who have offered that advice, operationalize it oppositely from how you just did, because they (mistakenly) think that the correct behaviors do result from doing what feels natural or easy.
Furthermore, for many people, "being honest and open about who they are and what they want" means directly conveying things that are socially inappropriate to convey, which leads them (including you, probably) to immediately update the advice in unpredictable ways, revealing it to have the same lack of content or understanding that I claimed.
Well, I assumed you held the normal interpretation. I'm OK with being wrong occasionally.
If who you are and what you want is socially inappropriate to your friends or romantic interests then I suggest finding new ones. Seriously. Similarly with everyone you can choose. Of course, with colleagues and family, certain masks have to be worn... But minimize mask-wearing!
Someone once told me that "If my friends didn't know me, they wouldn't be my friends". Best thing I ever heard. After I became more honest with my friends I feel a lot less lonely and it turned out they were very accepting.
So you advise that, in at least some cases, people go to those whom they haven't met and express an explicit desire for sex, exactly as it comes to them in their mind?
No? Then this is exactly what I'm criticizing: you have some classifier that you have black-box access to, with boundaries you can't really specify when prompted. And I would not have the confidence to convey something as advice unless my understanding yielded an intuition for these boundaries, which yours does not. Hence:
"If I actually understood this topic as well as you claim to, I would reveal that understanding much better than you currently are."
Now, the black box, for you, may certainly emit the correct answers, but not in a way that allows you to convey it as meaningful advice.
Consider that you might be using the wrong hemisphere of your brain to analyze these situations and that chemically nullifying this tendency of one hemisphere to dominate your reasoning may improve your results.
For example, consider the experiment where you get blindingly drunk and see how you react physically (calibration). Then, tone it down a few drinks and see how you react socially. Give someone else the keys to your car first.
Alternatively, observe how you breathe through your nose. It's known that the nostril through which you breathe is related to which nervous system is active (you have two: sympathetic and parasympathetic, and they are related to increased blood flow in the right and left brain). Observe how you react socially when different hemispheres are active.
I've worked with a lot of engineers with the described "logic bubble", who often come off as rude or somewhere on the spectrum. It's rampant. Engineers just don't know how to admit ambiguity; everything is solvable with logic, and this comes across as arrogance.
Like the author, I also experienced early on in life that the real world is much more unpredictable. Day trading and reading books have opened my eyes quite a bit. Starting my own business selling my own software has also reinforced the view that the only certainty is uncertainty.
Such a beautifully written article. I'd put this right up there with Dr. Nassim Taleb's article on smart people and experts who have no respect for the fallibility of the human condition: our innate limitations as humans, the near guarantee that we will incorrectly perceive our point in time and space, make poor decisions, and still walk off with a large check because society's pecking order rewards chutzpah, or blind confidence in one's own (or one's group's) ability. It probably comes from the evolutionary history of hunters and gatherers.
Funny ... I had the same impression of some managers. They simply could not accept that certain things are unpredictable. The other thing is that there are places where ambiguity is wrong, e.g., documentation, which was hard for some managers and analysts to accept. I cannot write ambiguous code; it ends up doing either one thing or another.
Notice the "some" in my sentences. I managed not to offend a whole profession of people.
Realistic people are best at predicting reality, by definition. Street smarts -- realism -- belongs to those who have deep domain expertise rather than pure raw IQ.
Anyway, logic itself, as a system of reasoning, is known to be <strike>inconsistent</strike> incomplete (Gödel, etc.).
But using logic alone is a trait of a dumb person anyway; the world is much better modeled by probability theory, which is designed to deal with uncertain, incomplete, and incorrect data. It's basically what your brain does and what "street smarts" is, although the brain uses many cognitive shortcuts and heuristics to avoid computing things exactly.
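To make the contrast concrete, here is a minimal sketch (with made-up numbers) of the probabilistic reasoning described above: instead of treating a claim as true or false from clean logical premises, you hold a degree of belief and update it as noisy, conflicting evidence arrives, via Bayes' rule.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) from a prior and two likelihoods."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

belief = 0.5  # start undecided about some hypothesis
# Each observation is (P(seeing it | hypothesis true), P(seeing it | false)):
# uncertain, partially conflicting data points, not airtight premises.
observations = [(0.8, 0.3), (0.6, 0.5), (0.9, 0.2)]
for likely_if_true, likely_if_false in observations:
    belief = bayes_update(belief, likely_if_true, likely_if_false)

print(round(belief, 3))
```

Note how the second observation (0.6 vs 0.5) barely moves the belief because it is nearly uninformative, while the stronger observations shift it substantially; that graceful handling of weak evidence is exactly what binary logic lacks.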
I'm very sceptical when reading quotes like "smart people [...]" because IMHO it's extremely complicated to define smart. That said, I believe the following statement is 100% wrong:
> "Smart people have a problem, especially (although not only) when you put them in large groups. That problem is an ability to convincingly rationalize nearly anything."
Rationalizing is not something smart people do. IMHO it's more of a cultural thing at large scale. Take a northern European (or American; it must have to do with the weather) and you'll get a huge percentage of the population rationalizing things, instead of being emotional like the southern Europeans/Americans.
I understand that most HN readers are programmers with strong mathematical backgrounds, but try to picture this: Gandhi didn't know any math compared to Gödel, Einstein, or Von Neumann. Gandhi achieved a gunless revolution by convincing poorly educated people to follow his lead. To me this is EXTREMELY hard to do. Nearly impossible. Gandhi was a social genius. Can we say that he wasn't smart because an average mathematician would probably show more analytical thinking when talking about algorithms?
The author (and not only the author) confuses the analytical/non-emotional thinking that he does with smartness. I believe that's a bold mistake.
ps. Of course there are situations where school teachers think of introverted children as not that smart. This thing plays out both ways, unfortunately.
Convincing poorly educated people of something is not hard to do. A large part (or side-effect sadly) of education is critical thinking skills. Lack of education leads to easy followers; something that has been preyed upon numerous times throughout history.
It also helps that Gandhi was highly educated (for his time), was a lawyer, and was quite a good writer, orator, and politician. Not exactly the characteristics of an unsmart person.
Bah. I raided an Egyptian tomb, and ever since I've been followed around by smart people. They talk complex philosophy, discuss string theory, rave about Haskell, and go on about some boast a French fella or another wrote in some book (in the margins, I think). It's really annoying. Now that's a curse.
This article is just about how smart people have issues just like everyone else. Gee.
Edit: Ooh, some of the "smart" people can't take a joke.
Could you explain why it is standard practice to downvote jokes? I don't get it. If it's funny, then people might want to read it. Over on /. a lot of the best comments are modded funny.
It's tradition on HN, in an attempt to lower the chaff and keep things on-topic. It used to be that any attempt at humour would get you canned, but now it seems the pendulum has swung to 'if there's no other content in the comment'. The same goes for comments about people not being able to handle things like in your edit - they don't really contribute anything. Snark is frowned upon.
I don't think pure jokes automatically get downvoted, if they're funny enough, though I have no recent examples in mind. But this particular example is more condescending than funny.
Jokes beget worse jokes, which beget bad jokes, until the majority of the comments consists of people trying to be funny and it's hard to find the insightful ones. That's my major gripe with /. and Reddit.
Always allow the possibility that you could be wrong.