
I think this article is saying:

  - 1st order thinkers primarily see causes and *direct* effects.
  - 2nd order thinkers frequently see causes and their *indirect* effects.

I guess that seems like a reasonable idea.

For what it's worth, the truly exceptional people I've met in life had a different quality.

* When most people are presented with a difficult/challenging problem, they soon give up.

* The most exceptional people that I've met just kept hammering away after the rest of us had stopped. Most of the time they failed, but if you have some aptitude and keep hammering, you have a better chance of making breakthroughs that the rest of us don't.

Just as an example, I worked for a company that used X-ray crystallography as a tool for drug development. I would be in meetings with crystallographers where we discussed the technical problems they were having in trying to grow crystals. The crystallographers were all smart and talented, but when we had group meetings, there was only one guy who would float suggestion-after-suggestion-after-suggestion, long after everyone else had run out of ideas. I don't think he was any "smarter" than anyone else in the room, but he just could not shut himself off. He was relentless. He went on to make some important contributions to the field.




You have to separate the title, which is marketing mumbo jumbo, from the article, which is fundamentally about decision making.

Smart decision making and persistent work are the X and Y axes of achievement.

Decision making (which broadly includes subjects like efficiency, policy, systems, problem solving, and design thinking) is the heart of efficiency gains for large organizations (not necessarily individuals, although there really shouldn't be a separation between the two).

Generally speaking, decision making falls into two categories: prioritization and policy. Prioritization is deciding what best to do out of the given options, while policy (systems building) is building ecosystems that enable activity or work output.

2nd order thinking is a necessary requirement for policy/systems building. However, there are a lot more skills needed to enact good policy decisions, so this is just scratching the surface of the subject.


I humbly presume that you are familiar with the topic at hand, so, if you please, can you elaborate on the 'scratching the surface' part? I'm genuinely curious about other aspects of policy/systems building. Pointers to blog posts or books would be enough as well. Thanks.


The Seven Habits of Highly Effective People.

The Peter Principle

Books or articles on "planning backwards." Gantt charts get used a lot in, for example, construction. You need certain things to precede certain other things. You set a goal and end date and then start asking "What has to happen just before that? And just before that?" It is the reverse of asking "So, then what?"
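If it helps to see that backward walk concretely, here's a minimal sketch in Python. The task names and prerequisite map are made up purely for illustration; real Gantt tooling adds durations, resources, critical paths, and so on, but the core "what has to happen just before that?" loop looks roughly like this:

  # Toy "planning backwards": start from the goal and keep asking
  # "what has to happen just before this?" until you hit tasks with
  # no prerequisites. All task names here are hypothetical.
  prerequisites = {
      "move in": ["final inspection"],
      "final inspection": ["interior finish"],
      "interior finish": ["electrical", "plumbing"],
      "electrical": ["framing"],
      "plumbing": ["framing"],
      "framing": ["foundation"],
      "foundation": [],
  }

  def plan_backwards(goal, prereqs):
      """Return tasks in the order they must actually be executed."""
      ordered, seen = [], set()

      def visit(task):
          if task in seen:
              return
          seen.add(task)
          for prior in prereqs.get(task, []):  # "and just before that?"
              visit(prior)
          ordered.append(task)  # a task comes after all of its prerequisites

      visit(goal)
      return ordered

  print(plan_backwards("move in", prerequisites))
  # ['foundation', 'framing', 'electrical', 'plumbing',
  #  'interior finish', 'final inspection', 'move in']

You ask the questions in reverse, but the plan that falls out reads forwards, which is the whole point.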


Backward state-space planning in classical AI/automated planning


Sorry for the late reply.

Many people have recommended it, and I agree: Thinking in Systems is a great primer.

Here's an excerpt from the intro of that book:

> So, what is a system? A system is a set of things—people, cells, molecules, or whatever—interconnected in such a way that they produce their own pattern of behavior over time. The system may be buffeted, constricted, triggered, or driven by outside forces. But the system’s response to these forces is characteristic of itself, and that response is seldom simple in the real world.

> The system, to a large extent, causes its own behavior! An outside event may unleash that behavior, but the same outside event applied to a different system is likely to produce a different result.

> Think for a moment about the implications of that idea:

> • Political leaders don’t cause recessions or economic booms. Ups and downs are inherent in the structure of the market economy.

> • Competitors rarely cause a company to lose market share. They may be there to scoop up the advantage, but the losing company creates its losses at least in part through its own business policies.

> • The oil-exporting nations are not solely responsible for oil price rises. Their actions alone could not trigger global price rises and economic chaos if the oil consumption, pricing, and investment policies of the oil-importing nations had not built economies that are vulnerable to supply interruptions.

> • The flu virus does not attack you; you set up the conditions for it to flourish within you.

> • Drug addiction is not the failing of an individual and no one person, no matter how tough, no matter how loving, can cure a drug addict—not even the addict. It is only through understanding addiction as part of a larger set of influences and societal issues that one can begin to address it.

Once you start to understand the principle, you'll see systems everywhere. Good case studies in business are studies of systems. Annual reports are windows into the systems companies build to run effectively.

Everything is a system, really. Losing weight is no longer an extended act of willpower; it's building a system into your schedule and habits that produces the outcome of less weight (well, to be fair, it's a bit of both). Writing a book is a series of behaviors sustained over time, induced through scheduling.

A lot of people's traits are system-induced, and instead of blaming a culture for a negative trait, a system (often just looking at income level) can explain a lot of it. For example: why do Chinese people copy Western IP so much? Before getting into "it's part of Chinese culture", which is a slippery slope, consider how much can be explained by systems - just look at the national income average and compare education levels to see how little skilled labor they actually have to work with, and how much economic growth the country needs.

In Thinking, Fast and Slow, there's a great little section about professors wanting to write a book and budgeting 2 years, only to find out later that it had taken everyone in the history of the school 8-10 years to write a book of similar scope. They indeed ended up taking 10 years to write the book. There's a system hidden in there too.


Thanks a lot!


+1, really would love to learn more


I'm reading Gerald Weinberg's general systems book, and it talks about second-order thinking. Also, Thinking in Systems by Meadows is a great primer.


Agreed with ignoramous' reply - interested in reading more literature around what you're describing.


It's worth pointing out that blindly hammering away on the wrong problem is a failure mode for many intelligent people.

The key is not just sticking to things, but also having enough taste to know when to drop a problem.


> blindly hammering away on the wrong problem is a failure mode for many intelligent people. The key is not just sticking to things, but also having enough taste to know when to drop a problem.

Agreed. In my original comment, I was trying to imply that those exceptional people did (as you say) have enough "taste" to know when to stop, but I don't think I made that aspect clear.

The crystallographer I was talking about definitely failed more often than not, but I was always struck by his ability to keep floating reasonable ideas after the rest of us had reached what we thought was an intellectual cul-de-sac.

If you have ever read the transcript of Richard Feynman's speech "There's Plenty of Room at the Bottom" [1], you get the same feeling--here's an incredibly bright person who can't seem to stop when he's told that "x is impossible."

I love this part of Feynman's speech: "The reason the electron microscope is so poor is that the f-value of the lenses is only 1 part to 1,000; you don't have a big enough numerical aperture. And I know that there are theorems which prove that it is impossible, with axially symmetrical stationary field lenses, to produce an f-value any bigger than so and so; and therefore the resolving power at the present time is at its theoretical maximum. But in every theorem there are assumptions. Why must the field be axially symmetrical? Why must the field be stationary? Can't we have pulsed electron beams in fields moving up along with the electrons? Must the field be symmetrical? I put this out as a challenge: Is there no way to make the electron microscope more powerful?"

[1] https://www.zyvex.com/nanotech/feynman.html


I think the deeper explanation here is that unusually clever people know how to locate the unquestioned assumptions and have no problem throwing them out. Or put another way, they ask questions of the form, "I know this sounds dumb at first, but hear me out. Why don't we..."

Most problems come with a slew of constraints we take for granted, and then a set of constraints we consciously impose on the solution because we think it helps. Most reasonably bright people try lifting those conscious constraints, but rarely touch the less apparent ones.

I think that more neatly explains what you both are going for with "gives up too soon" (not identifying all constraints), "hammering away" (also investigating non-obvious constraints), yet "have enough taste to stop" (there are no more constraints to lift).


Couldn't agree more here.

I've been encountering this effect in rapid succession in the last couple of years as a new parent.

First few weeks: Everything is new and you have no firm assumptions other than what you've observed from the outside in other families, so you try to build a new model of how your child behaves, how you should react, and the routines needed to function as a family.

A month in, and every other month going forward for the next year: Everything you think you know about your child has changed, and all (most) assumptions about what works when comforting, putting to sleep, and feeding go out the window; you start over building a new model.

Of course, not everything changes, but this cycle of developmental changes has really ingrained in me how easy it is to make assumptions (deliberately or not) and then treat them as static truths. It has made me go back and reexamine everything, from old wives' tales I learned through osmosis as a child, to my politics, to technical decisions in my work.

Whether you consider Jobs, Musk, Wozniak, Brin, Page, etc. creators/innovators or something else, I think a lot if not most successful ventures have come from reevaluating assumed truths, whether about the market, the state of technology, or prevailing paradigms. I'm not saying you should throw out everything old, merely that learning from history should be a scheduled process, not a one-off.

Sorry for the unwarranted rant, but your comment just resonated with my own experiences.


Musk, in particular, appears to me to be refreshingly naive when confronted with a problem. His suggestions sometimes sound like something a kid would say, and that's not a bad thing.


I wonder if there is a way to train yourself to have the intuition for quickly lifting those less apparent mental constraints in all situations.

Anyone have any thoughts or resources on achieving this? I recall a dual n-back game posted on a website called gwern that supposedly helps speed up your recall time or increase how many items you can recall.


Doing mathematics is probably one way to train. Especially the counter-intuitive stuff, like higher dimensions, statistics, weird geometry, and such.


Sometimes. Forward motion is always important. In incident command scenarios, I’ve seen situations where people will sit around and wait... you need to move forward at all times or you get used to nothing happening.


> It's worth pointing out that blindly hammering away on the wrong problem is a failure mode for many intelligent people.

Also, being right about something, but at the wrong time. That's the worst failure mode of all IMO.


I somewhat disagree with this idea -- it's important to drop unrewarding lines of enquiry, but more because you've reached the point where it becomes clear the answer would be boring than because it won't pay the bills. I mean, you have to get with the program sometimes and drop an interesting problem because you can't afford to continue working on it, but at that point it's not really dropped. It's just become an itch you can't scratch. If the problem is interesting enough, the pursuit is its own reward.


Agreed. I have seen this plenty of times.


>there was only one guy who would float suggestion-after-suggestion-after-suggestion, long after everyone else had run out of ideas. I don't think he was any "smarter" than anyone else in the room, but he just could not shut himself off. He was relentless. He went on to make some important contributions to the field

Whilst I admire determination, enthusiasm, and persistence, if none of the ideas had come to fruition or contributed anything meaningful, it could have been misconstrued as a Gish gallop. It probably requires second-order thinking just to manage such a scenario, which is not uncommon.


What I prefer, rather than blunt endurance, is efficiency in testing hypotheses. A radar for aesthetics.


Persistence is it. Quitting, by definition, takes you out of the potential winners' domain.


Quitting a losing effort permits you to place bets on an effort more likely to win. There is a huge opportunity cost to using bad judgement to continue a failing project.


It has to be intelligent persistence, though. Only persist if you firmly believe that the potential reward outweighs the cost of continued effort.

Quitting a hopeless avenue of exploration frees you up to pursue greater chances of success elsewhere. It all depends on context.



