"Smart and Gets Things Done" is necessary, but not sufficient (github.com/raganwald)
96 points by raganwald on Dec 6, 2012 | 46 comments



To be honest, I didn't think there was much of an argument or a point here. The conclusion that "there's no substitute for experience" doesn't follow from the rest of the article, and most of the rest of the conclusion falls into "getting things done" anyway.

You were able to think of an edge case for a language feature that neither of your friends had considered. Awesome. Smart. Realising that up front is the kind of skill that saves you hundreds (anecdotally) of hours of debugging, because you solve it in the design phase.

Saying "smart interferes with gets things done" is exactly what Joel was saying when he tacked on "and gets things done" - so you can't really use that to argue he left something out.

They would rather mull over something academic about a problem rather than ship on time. These kind of people can be identified because they love to point out the theoretical similarity between two widely divergent concepts. For example, they will say "Spreadsheets are really just a special case of programming language" and then go off for a week and write a thrilling, brilliant white paper about the theoretical computational linguistic attributes of a spreadsheet as a programming language. Smart, but not useful.

http://www.joelonsoftware.com/articles/fog0000000073.html


This is an interesting reply, thanks. I possibly didn't explain this well. My reasoning didn't actually help, unfortunately, because I convinced myself that a proc MUST behave like a lambda with respect to "return" semantics.

So had I been left to my own devices, I might have written some Ruby code that assumed it was perfectly safe to return a proc from a method. This is a real danger for me, as I like programming with combinators. And I would have been bitten by my mistaken beliefs.

The only thing I'll say in my own defence is that having made this mistake many times, I had the good sense to check how things actually worked when I got home. I used to know, I'd forgotten, and now I know again.

I don't know if that's "getting things done." It may be spinning wheels. Perhaps someone "getting things done" never wastes any time worrying about the difference because they never bother to wonder what happens when you return a proc from a method.

They simply avoid the problem in the first place, so not knowing is irrelevant.
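
(For readers following along, a minimal sketch of the behaviour in question - this example is not from the original post, but in MRI a non-lambda Proc's "return" returns from the method where the Proc was created, so calling it after that method has exited raises LocalJumpError, while a lambda's "return" only exits the lambda:)

    def make_lambda
      lambda { return :from_lambda }   # "return" exits only the lambda
    end

    def make_proc
      Proc.new { return :from_proc }   # "return" tries to exit make_proc itself
    end

    make_lambda.call  # => :from_lambda
    make_proc.call    # raises LocalJumpError: make_proc has already returned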


I had the good sense to check how things actually worked when I got home

Sounds like you fit both criteria for "Smart and Gets Things Done".

I like programming with combinators -> They simply avoid the problem in the first place, so not knowing is irrelevant.

I think that a general awareness of the tools available to you is extremely useful. Specific knowledge isn't, unless you're actually using the tool. Given that your friends "read about it today", I assume they don't often make use of this particular language feature - in which case it's fine not to know. In your case, not knowing was dangerous, and when you realised the limitations of your knowledge (unknown unknowns and all that), you corrected it.

My point is that I think all of this is covered in Joel's original essay, so your title isn't really justified.

Finally, I think there's a good essay to be written with a similar title, where you explore the point that "sometimes you need an expert". luu touches on this wrt hardware engineers: http://news.ycombinator.com/item?id=4882560


I think all of this is (actually|also) covered in another of Joel's essays, "Lord Palmerston on Programming:" http://www.joelonsoftware.com/articles/LordPalmerston.html


"They simply avoid the problem in the first place, so not knowing is irrelevant."

I always see this as a quality that I'm missing compared to other coworkers. They'll go for the simple implementation with the complicated usage pattern while I always aim for the complicated implementation with the simple usage pattern (to a point).
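
(A generic illustration of that trade-off, not from the parent's code - Ruby's own File.open shows both shapes:)

    # Simple implementation, complicated usage: every call site owns the cleanup.
    f = File.open("audit.log", "a")
    f.puts("event")
    f.close                      # forget this (or raise before it) and the handle leaks

    # More involved implementation, simple usage: the library owns the cleanup.
    File.open("audit.log", "a") do |f|
      f.puts("event")            # the block form closes the file even if this raises
    end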

We've had a lot of discussion of your simplicity-is-not-simple article which did a great job articulating the trade-offs.

http://raganwald.posterous.com/simplicity-is-not-a-simple-co...


I also learnt the hard way that answering a message differs from returning from a closure. And I was using Smalltalk, whose designers chose distinct syntax for those actions, a good idea that was apparently broken when Smalltalk became Ruby. "Return" should mean the same thing in different places; scepticism about how tools behave has to stop somewhere.


Steve Yegge wrote a great piece on the subject:

http://steve-yegge.blogspot.ca/2008/06/done-and-gets-things-...


I'd say your argument was convincing, but you forgot that Ruby's language design is full of sharp edges where features were forced together in poorly thought-out ways. For example, yesterday I learned that you can't determine the order of invocation when chaining super through multiple mixins that override the same method of a class. It's as though the designers thought that saying "we don't do multiple inheritance, just mixins" relieved them of actually having to define their semantics properly.
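
(A minimal sketch of the kind of chaining being described, not from the original comment; as observed in MRI, the most recently included module is reached first and super proceeds down Module#ancestors:)

    module Logging
      def save; puts "logging";  super; end
    end

    module Auditing
      def save; puts "auditing"; super; end
    end

    class Record
      def save; puts "saving"; end
    end

    class Account < Record
      include Logging
      include Auditing   # included last, so its method is found first
    end

    Account.new.save    # prints "auditing", "logging", "saving"
    Account.ancestors   # => [Account, Auditing, Logging, Record, Object, Kernel, BasicObject]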

So it's not that your reasoning was wrong. It's that your conclusion was. The right conclusion was, "What the hell were those amateurs thinking? That's idiotic!"

I'm going to go fondle my ALGOL 60 report now...


Is there an ALGOL compiler for Linux these days? Brief poking reveals there's an ALGOL 68 compiler out there for Ubuntu.


Hmm, this led me down a different path of thinking. This is exactly how rough edges in language design, like the proc/lambda differences in Ruby, impose a considerable cost on the community as a whole when you add it all up.

As I get more experience I appreciate the benefits of languages that try very hard to minimize cases like this, as well as how difficult that is and the trade-offs that must be made.

It's hard to sustain flow-like states of high productivity when your mental model of execution either keeps disagreeing with reality or is so full of previously discovered edge cases that it's too complicated to simulate code usefully in your head.


I think this conflates "smart" in the sense of "raw ability to reason" with "smart" in the sense of "knows the intricacies of how things work". Or, as my uncles might have said, "book learnin' versus real knowin'".

In fact, even in the example given with respect to Ruby procs and lambdas and returns and whatnot, clearly the author hadn't "gotten things done" in that area (by his own admission, this isn't snark).

So to me, the real point is that thinking you've worked everything out from first principles doesn't mean you actually understand all the implications of the interactions in a complex system. That's the mistake I made as a young man, certainly, and it took me years of learning the hard way that I wasn't always right.

Also: there's always a relevant XKCD. http://xkcd.com/793/


"smart" and "gets things done" are sufficiently vague that they can mean whatever you want. If I say "So-and-so is smart, however X," the reply can always be "Well, if X then so-and-so isn't truly smart" in the No True Scotsman sense.

This is why I tried (perhaps poorly) to be specific and talk about smart in the sense of mathematics.


I think in the instance you described you were and weren't being smart in the sense of a True Smart Scotsman.

Mathematical smarts aren't always enough to know how something really works. But I still think you were smart enough to realize you might be wrong and to double-check how it really works.

Also, you have the GTD mindset to accept how Proc works and try to learn from the experience, instead of obsessing over how Proc is broken and Ruby sucks.


I pretty much agree, but I'd like to point out that, as a field, programming emphasizes the importance of "Smart and Gets Things Done" more than pretty much any other field does.

I've been interviewing lately, both at hardware and software companies. At microprocessor startups, I ask about the pedigree (past experience, not educational credentials) of pretty much everyone I meet. I don't do that at software companies. The reason is, you pretty much never hear about a new team successfully making a high-performance microprocessor. Apple bought PA Semi, which had a moderately successful exit, but PA Semi was basically the SiByte team, which left after SiByte was acquired by Broadcom, and SiByte was composed of key people from DEC who had been working together for over a decade. When you hear about a new team where most of the people are smart new grads, they usually spend ~$100M over five or six years and find that they don't have a competitive product (or, more likely, don't even have anything that's close to working). If they have funding left to burn they may pivot a couple of times over the course of a couple of years before they burn through their cash and implode.

And that's despite microprocessor design being close to pure reason, in the grand scheme of things. Experience and wisdom matter a lot more in most endeavors. Something you see a lot, even here on HN, when non-CS/programming/engineering topics make it to the front page, is people who try to extrapolate from a few "obvious" facts and then come up with a conclusion that's completely and totally wrong.

In software, you hear about successful companies founded by people just out of school, or even people who have dropped out of school, all the time. "Smart and Gets Things Done" can be good enough to make a decent product. But, something you'd never want to hear from a plumber or a carpenter is "well, I've read some books on the topic, and tried out some tools at Home Depot, plus I'm smart and good at hacking".

Things like carpentry, manufacturing, etc. are hard. They're done so well here (in developed countries) that it's easy to forget how hard they really are. If you want to get an idea of how hard they are to learn from scratch, consider South Korea after WWII. Its GDP per capita was lower than Ghana's or Kenya's, and just barely above the Congo's[1]. For various reasons, the new regime didn't have much baggage, and they wanted Korea to become a first-world nation. The story I've heard is that the government started by subsidizing concrete.

After many years making concrete, they wanted to move up the manufacturing chain and start making ships (among other things). They pulled some of their best people with experience in business (having learned skills like management, operations, etc.) from working in concrete to try to build ships. They knew they didn't have the expertise to do it themselves, so they contracted out the plans and got detailed instructions on how to build ships. They got plans from Scotland, because Scotland has a long history of shipbuilding. Makes sense, right? But when the Koreans tried to build ships with Scottish plans and detailed step-by-step directions, the result was two ship halves that didn't quite fit together and sank when assembled. For historical and geographic reasons, Scotland's shipyards weren't full-sized, and they built their ships in two halves and then assembled them. Worked fine for them, because they'd been doing it at scale since the 1800s and had world-renowned expertise by the 1900s[2].

The Koreans eventually managed to start a shipbuilding industry by hiring foreign companies to come in, build ships locally, and show people how it's done. It took decades to get what we would consider basic manufacturing working smoothly, even though all of the requisite knowledge existed in books, was taught in university courses, and could be had from experts for a small fee. "Smart and Gets Things Done" goes so much further in programming than it does in virtually any other non-academic field.

Today, anyone with a CS 101 background can take Geoffrey Hinton's course on neural networks and deep learning[3] and start applying state of the art machine learning techniques[4] to research grade problems within a couple months. But, if you want to build a ship, and you "only" have a decade of experience with carpentry, milling, metalworking, etc., well, good luck. You're going to need it.

[1] http://www.nationmaster.com/graph/eco_gdp_per_cap_in_195-eco...

[2] In 1913, 20% of the world's ships were built on the Clyde.

[3] https://www.coursera.org/course/neuralnets

[4] About 1/3rd of the way through the course, he talks about a new technique that was published after the course actually started. The future of education is going to be awesome.


The reason this is mostly true is that most developers are doing CRUD apps. It's been like that from COBOL through VB through to web apps; nothing really changes. Learn it once and then just repeat. Once you stray into a field where some domain knowledge is needed, experience counts for a lot more.


I would like to point out that software comes in different flavors of complexity - for example, kernel programming, which requires the extensive experience you speak of.

I think it's more about the technical difficulty of the project you undertake rather than its form (software or hardware).

Many startups are technically very simple.


Kernel programming isn't some mystical art; you can just hack on it and see what happens, like anything else. It takes more work to get any particular thing done and it's harder to debug, but I wouldn't discourage someone from making a startup just because it required kernel development and they didn't know anything about it.


I've only written a simple LKM to patch the interrupt descriptor table and patched the kernel pool memory allocator on the true operating system of the proletariat (that is to say, Plan 9), but I think I can safely say that you don't just hack on it and see what happens.

The knowledge required to write device drivers or to understand and manipulate kernel behavior is totally distinct from what is typically taught in Computer Architecture/Computer Science and Software Engineering. A massive amount of knowledge is required that I could see no way to acquire short of building your way up designing and reading kernels of increasing complexity, or being interested in some field where successively deeper levels of knowledge are an exponentially more profitable use of your time, such as information security.

The documentation on ALL modern kernels' current implementation details is sparse (yes, even Linux), and the knowledge is assumed, which gives some truth to the statement "you just hack on it" in the sense that there's no way to learn it save actually doing it. In truth, however, to really get anywhere you MUST know x86 (or whatever architecture you're developing for; and if you think this will be easy because you coded MIPS or SPARC in school, be prepared to feel dumb), you MUST know C deeply, and you must know a slew of other topics that the typical CS education leaves you remarkably ill-equipped to even get started on.


I agree, but that's because you're starting with an existing kernel you're trying to improve. If you were a startup-sized team with no kernel programming experience and you had to write an entire OS from scratch I think you'd be pretty likely to end up after five years with the software equivalent of the non-competitive microprocessor designs the grandparent describes...

[Note the caveats: obviously history provides us with examples like Linux of OSes written from scratch that did get from zero to competitive in five years, but that one had rather more people working on it.]


I never thought I'd see verilog-mode, haskell-mode and scala-mode in one person's .emacs file.

I'm in telecom semiconductors, and I know companies like Huawei are using a model where they have hundreds of engineers in China with less than one year of experience, with senior engineers in other countries teaching them.


Well said, dan!

look forward to catching up later this month! :)


We pondered this for a while and the conversation meandered elsewhere, no doubt because my companions were embarrassed on my behalf. I had committed one of the basic errors of inexperience with the state of programming: Assuming that you're smart and you can work things out from first principles.

I used to have this problem. Not the "problem" of assuming you're smart. I mean the problem of assuming that if someone moves the conversation away or disagrees with me, it must be because of something I'd done. Psychologists call this "intrapunitive behavior".

Here's what I would posit happened: you said something intelligent that your companions had no way of countering. Thus, you showed them up and rather than fess up to it, they decided to change the subject.


Thanks for the support, but that was a literary device.


Then I suppose I'm saying that I support you, but not your literary devices. :-)


I feel like this post might be talking about the arrogance/humility axis, which is important but orthogonal to both dumb/smart and lazy/gets-it-done (depending, I suppose, on how you define those things).

Humility is necessary to say "yes, I'm smart, but there are limits to that intelligence." Any smart person can learn enough C++ to get things done if they know Java, but it takes humility to say "I will learn C++ idiomatically and approach it on its own terms, rather than coding up Java paradigms with C++ keywords and libraries".


This is especially true and contradicts the generally accepted opinion repeatedly expressed around here, that knowing about programming in general is enough to be an expert in any technology in a short amount of time. The reasoning then goes that when hiring, one shouldn't rule out seasoned developers that have no experience in the technologies in actual use. This is an arrogant recipe for pain for the whole team. The myriad combination of interfaces between tools/stacks/technologies cannot be understood expertly in any small amount of time.

I suffered at a polyglot consultancy that shifted people around rapidly between stacks and technologies due to overconfidence in this belief; it is very painful for any programmer with a sense of craft and client stewardship. Of course, if all you care about is banging out a piece of garbage that will hold up until the checks clear, then I guess this doesn't matter.


The great conceit of thinking you are "smart" is believing that because you are very good at working out consequences from axioms, you needn't know everything: if you know the axioms and the rules, that's it, you know everything.

I feel like I have to deal with this all the time -- people who think they're smart and reason their way to tenuous conclusions based upon a ton of seemingly solid beliefs. This rarely ever works, and it surprises me how people who base a significant portion of their identity on being "smart" don't grasp that basic point.

There are a lot more confounding variables than there are times when things go perfectly. Six months of direct knowledge/experience in a particular domain can go a lot further than a year of theory.

In theory, anyway.


I think the underlying point in all of this is just that "smart and gets things done" isn't something someone can be trusted to observe about themselves.

The example he gives about FizzBuzz in the Lambda Calculus is perfect. If you're truly working from first principles, then you ought to understand the difference between a theoretical and an acceptable solution.

I'd say that in the context of software, "smart and gets things done" is at a minimum someone with a solid ability to extrapolate from first principles, coupled with the ability to problem-solve and think critically about what it is they're actually building. Experience helps tons on that front, but it's no better an indicator of future success than grasping first principles.


I think the counter-argument is this:

We have control over the amount of real reasoning we can do about our code. The hassle of the real can be minimized. For example, I like Clojure, and there are real benefits to theoretical things like immutability by default, or garbage collection. This ability to reason mathematically while being practical is worth promoting in computer science and engineering. If I couldn't reason this way about programming, I wouldn't be a programmer. Engineering is the intersection of theory and practice, and I appreciate that in computing, incidental complexity can usually be traced back to someone's neglect. Without this, we only have heuristics.

There is no silver bullet, but things can be made measurably better. We can accept reality without giving up on reason. I think to prove it, we can look at the influence of mathematics on real discoveries in history, and maybe ponder if humans are just meant to think this way in order to make progress. I hope my code reflects more order (purity, generality, conciseness) than chaos (special cases, entropy, technical debt).

I would just call the procs thing a 'wart' and avoid using it. I don't think my opinion favors the mathematically inclined programmers over the empiricists.

So, let's think about the kinds of things that get done as a consequence of certain types of thinking.

Tradeoffs.


An interesting read as always from raganwald, but I think the title is off. It seems to me that the reason Joel added "gets things done" to his creed was to avoid exactly these kinds of problems, and by "these kinds" I mean all the myriad ways that smart people shoot themselves in the foot by either overthinking, or making assumptions, or just plain being used to gliding through on minimal effort.

Now obviously the definition of "smart and gets things done" is open to a huge amount of interpretation, but I think it's clear enough that he means getting things done in terms of shipping useful product. If one does that, then the problems that come with inexperience are self-rectifying.

The problem is that if someone is not "smart", they won't learn from their mistakes, and if someone doesn't "get things done", they may lose sight of the final goal and fall into mental tarpits as they gain experience.


To me, constantly learning is part of being smart and getting things done, especially in the tech industry. You can't be smart and getting things done unless you are learning the latest tools and techniques.

I have always liked the saying "get things done, smart" better, which perhaps captures this post's thesis better than Joel's original wording.


This was a lot of words about how you have to just try something to really know how it works...

It's like Voyager 1: we thought we knew what the edge of the solar system would be like, but until we actually reached it, or at least reached the zone we're in now, we didn't know it would behave the way it does...

With code, that is also true: you can't assume it will behave as you'd expect. What if, under the given conditions, there is a bug in the assembly code generated by gcc when it compiled the Ruby interpreter, such that the lambda behaved incorrectly? Or a bit of memory was flipped into an invalid or unexpected state... Shit happens, which is why we must type, test, type, test, type, test... all day to figure out how it really works...


+1 because of the up-front TL;DR.


Where I come from, we just call it an abstract.


[deleted]


I, on the other hand, suspect that many people buying these books don't even want to get a job in programming... maybe they just want to learn, or build their personal projects. http://planoa.wordpress.com/2012/12/05/why-google-and-others...


I am a professional programmer who's been working in the field for over a decade, who went to a top-40 school and uses multiple languages professionally, yadda yadda.

I buy those books and the "best" advanced book on the language when I want to get up to speed fast.

Going through and doing all the exercises, and the stupidly simple setup they always have, is so worth it. Typing in all the code takes almost no time at all, and it gets you the basic understanding down pretty fast. Then the advanced book smacks down any bad habits the silly book tried to give you, and you can start making something real with it.


Can someone take a shot at listing a few math languages vs non-math ones?


You want second-order smart. Not just experienced in the technologies you happen to use (as OP alleges), and not just generally quick learning (first-order smart). You want introspective and seasoned second-order learners: people who've learned how to learn, even under adverse circumstances, how to figure out what's worth learning, and how to tell when they've learned enough that the right next step is to start doing.

One of the most useful things I've gotten out of my machine learning study and work on MOOCs is second-order learning. I've learned a lot about how I learn, especially under the added and somewhat artificial difficulty of having a full-time job. Very valuable second-order insights.

"Gets things done" is ridiculous. It's a good size-up for very junior level developers, but beyond that... you want "does the right thing". The idea that there's a large number of perfectionists or lazy people who are constitutionally incapable of "delivering" is nonsense. When to stop perfecting and optimizing is one of many important skills, but not a hard one to learn.

So... let's make it, second-order smart and does the right thing.


As someone who has taken engineering courses and holds a BS in applied physics and a minor in math, while working as a developer (for physics experiments), I'd have to say that physics, more than any other major, teaches you how to learn. In reality, the main reason a BS in Physics even exists is to teach you how to learn so you're prepared when you get to graduate school to work on new problems people haven't had to face before.

The unfortunate thing is that corporate America tends to highly undervalue the BS in Physics. Granted, I graduated at the worst possible time (2008/2009), but it took me over a year and a half to find a "real" job. No, I didn't have two-plus years of experience developing with .NET. I did have 4 years of experience in everything from LabVIEW to Verilog, photomultiplier tube calibration and particle detector design to advanced electrodynamics to CADD and machining, and all the calculus and linear algebra necessary to get there.

As someone who was perpetually frustrated by a lack of consideration when applying to jobs, I'd just like to recommend that anyone reading this comment always consider a candidate with a background in the hard sciences, even if they don't have the relevant experience.


I think that may be the shitty economy more than the BS in Physics. When all the on-campus recruiters were visiting my senior year in 2004, I'd go to these info sessions for investment banks or consulting firms and they'd ask my major, and then when I said "Physics", they were like, "Oh, you should have no problem. I was just worried you were an English major or something."

I ended up switching to CS my last semester, so nobody knows I was a physics major for most of my college career and that basically all of my programming knowledge is self-taught, but I've never felt at a disadvantage to all the folks who bulked up on CS courses in undergrad.


I was in a similar situation (majored in pure math). The best answer is to learn something like SAS. It only takes a few weeks to learn the language and get certified, but it gets you past a lot of the bullshit keyword filters and lets you start actually doing interesting statistical work in a corporate environment, which also opens the doors to a lot of jobs that seriously value your scientific background.


Learn R and machine learning (this will take a while). Move to a major city. One of the biggest career benefits you have as a scientist is experience with real data. If you can't get a job in the sciences, get a data science job. You are very qualified for a better job than most companies will give you at the entry-level with a hard-science BS.


This is a perilous route. Learning R and "machine learning" (probably meant as a catch-all for graphs, statistics, machine learning, and a dash of distributed systems) is necessary but not sufficient, since you'll be competing against people who know all of these things and have significant research experience, given the oversupply of PhDs in various hard sciences (the Python astrophysics community is booming, for example--they'd be immediate "data science" hires).

The net result is that you might end up in a BD-type analyst role, which may not be bad depending on your goals. It wouldn't really hit the engineering target, though.


Hah, yes. I don't know if the comment before yours was intended to be advice to me, but I'm one of those Python astrophysics people (professionally). I deal much more with the data and distributed systems side than the analysis side, or really... I write software and frameworks that the analysis guys (from several very large experiments) use.


I think it depends on the company. There are a lot of terrible companies that won't look at you for an ML/data science role without a PhD. That's true. There are others that will hire smart people and give them a chance.

What will probably happen is that the pool of "data science" jobs will grow, but the bottom half won't be real DS.

I don't mean to imply that a person can become a decent data scientist overnight. It takes years, but it can be done.

On Wall Street this sort of role is called a "quant", although there's less machine learning in finance than one would expect (hard to audit). "Data scientist" is a startup quant. Just as with quant jobs, some firms will only hire PhDs with research experience, while others will take chances on smart people without it.


There are a lot of terrible companies that won't look at you for an ML/data science role without a PhD.

I'm arguing that it's more serious than this: there are so many PhDs out there right now, companies will have no lack of skilled, well-credentialed candidates to choose from. Going down the "do it yourself" DS route is thus (a) very difficult, and (b) not very likely to pay off.



