Heh, I get this a lot in my daily sysadmin/developer duties. When you need to turn on a knob somewhere, you don't just want to know that you have to turn it on; you want to know why you need to turn it on, and follow the chain up until you get to facts you already know. But it's not always possible to get that far: there are too many layers of abstraction, the source code isn't available, or you just don't have the time.
I can draw a parallel to this in software development. Some product features require development "from scratch", where you can get down to the original code and logic - this is where "taking time to think" really pays off.
But when you are basically composing a final product from components, libraries and features, figuring something out may take a really long time and a lot of effort. In today's world many libraries are open source, so you actually can get to the bottom of many issues. But the time and effort cost of that is almost never acceptable.
My conclusion is: if you are a "slow" thinker who prefers getting to the bottom of things and figuring stuff out, try to choose the "fundamental" type of work. If you are a "done is better than perfect" kinda person, you'll thrive in the upper layers of the development stack, where shipping stuff out is of utmost importance. Focus on your strengths.
One example would be analytics libraries that require precise calculations and a background in math, as opposed to the UI that displays a bar chart with the results. The former is what I consider "fundamental", while the latter is much higher level and closer to the end user.
Other examples: audio / video codecs vs. media player app; game engine vs. intro screen and menu stuff, etc...
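To make that contrast concrete, here is a toy sketch (Python; the function name and the numbers are invented for illustration). The calculation stands in for the "fundamental" analytics layer, where you own every step of the math; the charting step, noted only in a comment, stands in for the upper layer that composes someone else's abstraction to display the bar chart.

    import math

    def mean_and_ci(samples, z=1.96):
        """'Fundamental' layer: a precise calculation you can reason
        about from first principles (sample mean plus a normal-approximation
        confidence interval)."""
        n = len(samples)
        mean = sum(samples) / n
        # Unbiased sample variance; needs at least two samples.
        var = sum((x - mean) ** 2 for x in samples) / (n - 1)
        half_width = z * math.sqrt(var / n)
        return mean, (mean - half_width, mean + half_width)

    if __name__ == "__main__":
        data = [4.1, 3.9, 4.3, 4.0, 4.2]
        mean, (lo, hi) = mean_and_ci(data)
        print(f"mean={mean:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
        # The 'upper' layer would be a charting call that displays this
        # result as a bar with error bars; there you trust the plotting
        # library rather than re-deriving what it does.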
Swimming in uncertainty and (conditionally) trusting abstractions is one of the skills I have to teach my interns. I love their desire to understand and I try to be careful not to kill it. But at the same time, software engineering in the real world means contributing to a picture that is larger than will fit in your head.
Agree on the aspect of time. As we have progressed, we now have so many levels of abstraction that it is hard to think deeply about the problem. Almost like a codebase that has grown too deep to understand every "bit" of it.
Moreover, I think people now work in teams rather than as one individual thinking about the system holistically.
It’s possible. It just takes time; very few people are in a position to devote that time.
This gives an advantage I haven’t seen discussed: when you put in the time, you make connections no one else thought of. It happens time and again, and it’s a clear pattern at this point.
It takes months of daily study, often tedious, with no clear benefit. But the benefits sometimes come. (I wrote “usually” rather than “sometimes,” but that’s not really true. The usual result is that you go to sleep more confused than you started. It’s not till much, much later that the connections even seem relevant.)
I like to imagine that at some point we might collectively have a large enough software development population to solve most significant problems comprehensively enough -- and fairly and equitably enough -- for most people that we begin to see developers with increasing amounts of free time.
At that point I think we could collectively really begin digging into some of the huge backlog of software bugs and errors that we've built up over time and make everything more reliable, seamless and consistent.
It'd be a massive undertaking, especially to solve each issue thoroughly and without causing negative externalities elsewhere. But it'd also be a great puzzle-solving and social challenge, not to mention an educational and useful one.