Most of these have wide acceptance among the HN/Startup crowd; I might expect to see more disagreement if the list were exposed to corporate programming environments, or especially to their managers. I have a couple of quibbles, though:
- 1: I'd modify "Programmers who don’t code in their spare time for fun will never become as good as those that do", to be "Programmers who don't code in their spare time for fun will never be as good as they would if they did". I definitely believe coding for fun helps your skills, but I've seen too many "just-a-job" programmers code circles around others on the same teams who had side projects and kept up with the trendy languages. It's not a clear differentiator, just a data point.
- 2: Unit tests don't help you code in the same way that a safety net doesn't help you walk a tightrope: this is technically true, but not a helpful statement in reality.
- 10: Print statements are "valid" in that they often work, but when a debugger is available it's almost always the right way to go.
>Print statements are "valid" in that they often work, but when a debugger is available it's almost always the right way to go.
I don't know about this one. I think that having to write print statements makes you do actual thinking about your debugging strategy and therefore makes you understand the flow of your code more. A debugger is certainly useful, but I was surprised the first time I had to go without one to find that I didn't miss it that much and possibly was even more efficient.
Debuggers are great and superior to print statements when they're usable. Use a debugger when you can. Sadly, it is increasingly the case that you cannot use a debugger when you want to use one.
Print statement binary search is primitive, but it's survived as a tool because it's almost universally applicable.
Debuggers are more efficient at locating and fixing specific bugs. However when you debug with print statements you get the opportunity to review the code and find bigger design problems that need to be fixed.
Furthermore, and more controversially, if you debug with debuggers, you will be driven to use programming techniques that are friendly to your debugger. But when you debug with print statements, you are free to use whatever programming techniques are best for human comprehension.
Now before you raise your keyboard and rush to disagree, consider carefully that the position I just described is shared by well-known programmers such as Linus Torvalds and Larry Wall. There are equally well-known programmers who disagree.
The true merits of the case are hard to determine. But if you think that one side is trivially wrong, then you should view this as a learning opportunity. Because you're certainly mistaken.
The only time I need to pull out a debugger is when I can't reason through the problem on my own (possibly with a couple of print statements).
By and large, the only time I can't reason through the problem on my own is because the code in question is not properly unit tested, and isolating the problem itself is complex.
The only time it's worth my time to bring out the debugger is when the issue is so opaque as to require that level of in-depth investigation. I've spent days tracking down esoteric heap smashers in gdb.
However, most of the time, I just watch my unit test assertions fail, and then fix the issue.
Print statements, basically a form of logging, can be superior to debuggers, too (especially "when they're usable" :-) )
IMO, all three - logging, printing, and the debugger - have their place. In theory, print statements could be 100% replaced by debugger macros or dtrace, but the tooling for print statements is just better (it is easier to keep your output the same across runs) for a class of "takes a couple of days to find" bugs.
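A minimal Python sketch of that print-as-logging idea (my own hypothetical example, assuming the standard logging module): the format is fixed and timestamp-free, so output diffs cleanly between runs and can be silenced without deleting the lines.

    import logging

    # Fixed format with no timestamps, so diffs between runs show only real changes.
    logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(name)s: %(message)s")
    log = logging.getLogger("parser")

    def parse(line):
        log.debug("raw line: %r", line)
        fields = line.split(",")
        log.debug("split into %d fields", len(fields))
        return fields

    parse("a,b,c")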
Debuggers have features like watchpoints, which cannot be emulated in any form using print statements. Also, many codebases are not set up to provide a decent stack trace without using a debugger. Features like watchpoints allow you to think about the debugging process in a very different way and can turn many otherwise difficult bugs into 30-second fixes.
That was my feeling too. I kept reading through the list and nodding, wondering where the "controversy" was supposed to be.
Regarding your criticism of #2, I think you're arguing orthogonally. The point to me seemed to be against TDD and the idea that code written "to" a test is inherently better. The point is to write and execute tests, not to fetishize how those tests are written or run.
And your point about #10 is just plain wrong, sorry. Arguments of the form "debuggers are easier than hacked-up logging" always presuppose that the "hacked-up logging" is hard to do. The fact that you feel more comfortable typing into gdb or whatever than you do adding a line of code and rebuilding tells me that you don't feel comfortable building and running your code. If that's the case, then you have already lost. Debug via printf works better because projects designed for debug via printf are inherently better. This applies to fancy tracing tools as well. They have their place, but if you can't whip out a printf then you need to fix that first.
And, you can look at logs from production when something goes wrong. In my experience the debugger can only take the place of logs if you have access to the same data, and in many organizations you don't get that luxury.
"I'd modify "Programmers who don’t code in their spare time for fun will never become as good as those that do", to be "Programmers who don't code in their spare time for fun will never be as good as they would if they did". I definitely believe coding for fun helps your skills, but I've seen too many "just-a-job" programmers code circles around others on the same teams who had side projects and kept up with the trendy languages. It's not a clear differentiator, just a data point."
Yeah, as originally written it basically reduces to "programmers with more experience are always better than those with less experience" and that's just not true. Side projects are a good sign that someone cares, and a good way to get experience faster than others and (potentially, unless you're just repeating stuff you already know in them) keep your experience diverse, but that's all.
Somewhere along the line, it changed from "do you program in your spare time?" to "show us your side projects."
I program in my spare time. I like it. But when interviewers want to pore over my "spare time" code before I get a job, it stops being fun. It becomes more work.
Without the ability to look at code you've written "on the job", side projects are the next best thing. Get a license for code you've written at previous employers and showcase that during the interview.
I'd much rather simply be asked to code (at the interview, or as a "homework" thing before the interview, whenever... just not over the phone, please!) something similar to what they'd want me to do at the job.
My side projects are for fun and playing around with new ideas, and don't really represent the style of code I'd be writing in a professional engagement. Here's an example from the past: I wanted to crunch some baseball stat numbers using a Matlab library I'd found. So I hacked together some minimal parsing of publicly available game logs in Python, since I wasn't that interested in learning Matlab IO, then dumped them from Python to a format Matlab could read. Iterated that part until it worked for all the files I was interested in (but still in a rather ugly state), then hacked together some analysis in Matlab. DRY, or good design in general? Nah, it was a one-off thing.
You can't please everyone. There are numerous HN posts about how people don't want to spend a lot of time doing coding during an interview. Or are offended that they'd be asked to do a code test as part of an interview. And doing work for hire like this is considered spec work in other industries, which is often frowned upon too.
My point was that side-project code is better than no code at all. And it's not so much the quality of the code that's at issue, it's being able to talk intelligently about it. Hopefully you'd want to work with someone who understands that a one-off is going to be lower quality than something you might write professionally. Having code to look at and examine that you wrote, that you're an expert on (because you wrote it), gives context to discussions about how the code might be improved or why you made the decisions you made for a one-off.
There are numerous HN posts about how people don't want to spend a lot of time doing coding during an interview. Or are offended that they'd be asked to do a code test as part of an interview.
Those people are instant no-hires. You aren't going to write code when I ask you to write it? That is your job!
(I would grant exceptions in very exceptional cases, like the candidate having physically lost his hands. Or being blind.)
And doing work for hire like this is considered spec work in other industries
Are you talking about the company giving the guy a problem, and then taking his answer and putting it right in their codebase without hiring him?
"Those people are instant no-hires. You aren't going to write code when I ask you to write it? That is your job!"
Well... typically, my 'job' involves people paying me as well. If you want me to 'code when you ask for it', then you need to pay for that. Or... make it easier to judge how I do stuff (like look at my side work).
OP was talking about companies judging the quality of your one-off side project play for-fun code in the same way that they might judge the quality of your professional work code.
It's not necessarily unheard of. People who work at architecture firms routinely have portfolios of stuff they've worked on. If you don't have a portfolio of work from previous employers handy, what is in your portfolio? And if one doesn't have a portfolio of demo code that showcases style and experience and skill, one doesn't really get to complain that someone else doesn't have context to judge appropriateness for a job, or that they are expecting some context to be able to make an informed judgement.
10 - a complex system is likely to fail in the field, not on your desk, and print statements, redirected to a log file, are your last line of defense to iron out those rare or production-only bugs.
>Unit tests don't help you code in the same way that a safety net doesn't help you walk a tightrope: this is technically true, but not a helpful statement in reality.
The author mentioned that unit tests are useful to check for broken code, but he's saying TDD doesn't actually help you write better code--that there are plenty of other ways to screw up that unit tests won't catch, so the extra cost of writing them isn't worth it (in his opinion).
He's saying that it's an ineffective safety net, not worth the extra time involved setting it up.
I started out as a Ruby developer, and I personally think TDD can be useful, but it's not the panacea that the TDD cargo cult proclaims it is.
>Most of these have wide acceptance among the HN/Startup crowd; I might expect to see more disagreement if the list were exposed to corporate programming environments, or especially to their managers.
Hi, corporate code monkey here: that list looks entirely fine and not particularly controversial. Thanks.
It's telling that these are the 20 most upvoted "controversial" opinions. I don't think they should have taken the 20 most controversial (according to the voting system), or the 20 most downvoted, but the system chosen does have obvious consequences. Looking at the results, they break down a few ways: reactions to corporate processes and "things management likes", "old and unpopular things are no good", "programmers should be more free to express themselves", and so on.
The reason people think these things are "controversial" is that the sources of information these programmers are getting are often limited to their immediate environment, their university profs, and what they read online. That naturally excludes the bulk of competent programmers who don't blog or speak in conferences[1]. It's impossible to constantly remember to account for the biases in the media you consume, so people are naturally (and gradually) led to believe that the trendy ideas of yesterday and today are ubiquitous. They forget about the masses of programmers doing heads-down development on non-web, non-mobile platforms because the web and mobile guys are trendy and they all have blogs.
> The only “best practice” you should be using all the time is “Use Your Brain”.
This is true, in that very few best practices are universally applicable and you should never stop thinking. The author is also totally right about people jumping on bandwagons and being cargo cult members.
So I'm not really disagreeing with him, but just adding that a lot of best practices actually aid your ability to reason through code. They can help to push repetitive things off to the automatic portions of your brain. They can help to make patterns in your code that are visible and familiar, so you can spend less time thinking about how to implement (or read) some simple thing and more time thinking about how the simple pieces fit together.
Which is not to say all "best practices" are great. But even the questionable ones usually have some interesting problem they are trying to solve that explains why they sprung into being. If a "best practice" is popular enough that it gets called a best practice, it's probably worth paying attention to and thinking about even if you ultimately decide not to use it.
If you face the exact same problem 10 times, and you think it through from the beginning every single time, you're going to get 10 different answers. You just can't remember all the little details every single time.
Once in a while I surprise myself by independently coming up with exactly the same designs for similar problems (or, in fact, the same problem that I forgot I solved already few months ago). Most of the time I don't remember any details of the problem besides the general idea that I had it before. And then when I finally dig up the code it looks exactly like I was going to write (or was in the process of writing :)).
I think experience can actually become a liability. Things have changed and you don't have to write assembly code to optimize your program. Compilers will probably do a better job than you. And if you have a lot of experience writing JavaScript for IE6, you may actually be wasting time optimizing in ways no longer relevant.
True, but this is why you tend to migrate towards an architect/lead role later in your career, where the general insights about systems and design are still applicable without needing to overly focus on present-day implementation minutiae.
Well said. Normally, a best practice becomes known as a best practice because it is the right choice most of the time. It probably won't be the right choice all of the time, and the truly successful can excel by knowing when to creatively deviate from the "best practices", but they make for a good baseline to adjust and customize from.
>They can help to push repetitive things off to the automatic portions of your brain.
I agree; further, anything that can be automated should be. GTD and most similar systems stress getting everything possible out of your head and into the system, so you can concentrate on what is most important. Many "best practices" try to do the same; the trick, as you point out, is to use the ones that are appropriate to your problem.
It seems to me that most of these aren't controversial at all.
I disagree with one of the points supporting "Unit testing won’t help you write good code.", though, where he implies that writing the code reveals the edge cases. Surely the edge cases are something you should have carefully considered before starting to code? I've found that crystallising the edge cases as a test before writing the function can be really helpful.
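A tiny pytest-style sketch of what I mean (hypothetical function, not from the article): pin the edge case down as a test before the function exists.

    # Decided up front: an empty list is an edge case, and it should return None
    # rather than raise ZeroDivisionError. Written before average() itself.
    def test_average_of_empty_list():
        assert average([]) is None

    def test_average_of_values():
        assert average([2, 4, 6]) == 4

    # The implementation then has to honour the edge case from the start.
    def average(values):
        if not values:
            return None
        return sum(values) / len(values)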
Surely the edge cases are something you should have carefully considered before starting to code?
There are two kinds of edge cases: the essential ones that derive from your requirements and problem domain, and the accidental ones that result from implementation details like your choice of programming languages or which libraries you use or your software architecture.
You can try to identify the essential ones and write tests for them in advance, but for the accidental ones there’s only so much you can do with a black box approach to testing.
Indeed. Writing a unit test makes you think about what kinds of input your code might get. It's one of the few situations where you are specifically prompted to think of possible edge cases.
TDD doesn't work well in every problem domain. Sometimes you have to accept that parts of your code can't reasonably be unit tested, and isolate the bits that are testable. But where it is a good fit, it's magic. My favorite book on the subject:
Public fields are still debatable too, at least in some languages. I've worked with / talked to some people who insist that you can never in any case have a public field. I personally believe there are a lot of factors to look at before making that call.
I found the "programmers should be able to write code" off putting, but not because I dont think that programmers should be able to write code. Too many hiring development managers expect that someone can just walk into a room and without hesitation expose the way they think to them. I also think that most of these people fall victim to their own hubris. So what if a problem can be solved with "3 lines of code". I guarantee that many problems are not immediately solved in 3 lines of code and its only after iterating through the problem domain that an elegant solution can be found.
Being put on the spot is a different kind of stress that can take many people out of their game. Some managers may argue that they want to weed out people who cannot act under pressure but once a "team" is formed, the pressure of having to act quickly doesn't involve some of the socialization that is required in a job interview.
If you really want to know if someone knows how to code, have them write something for you. If you are afraid that they would cheat, look at their GitHub account. Ask to see a portfolio if possible. If none of that is possible, make the hiring process take long enough that a level of rapport can be developed between the team and the candidate.
The point of Fizzbuzz is that some people simply cannot write any code at all.
It's not necessarily looking for the best possible answer. In fact, I completely expect (as both interviewee and interviewer) that the candidate will write a first pass on paper, we will look at it, figure out its big-O, and then go on to improve it to be better.
That is a fair point, too. I have done some interviews where I questioned how the person got far enough into the process that I was involved. However, I have also seen the flip side of the coin, where a developer/manager interviewed and subsequently rejected someone who was likely a great candidate over an on-the-spot brain quiz, and one that the candidate would likely NEVER see while performing his/her job duties.
I think developers are putting themselves at a severe disadvantage and are being taken advantage of because of this whole 'hacker culture' or 'startup life' idea.
We are making it the norm, and raising everyone's expectations of us which end up hurting us in the long run.
Stop for a second, and consider other professions and industries. Do you see lawyers or financial analysts playing with side projects on the weekend, pulling all-nighters and running hackathons? Or working on open-source-like equivalents of their professions?
Sure they are different industries, but there are similar things they could do if they really wanted to, but they don't.
When we talk about 'hacking for fun' etc., we are sending a message to other people which changes their attitude to "hey, I only pay you this much because you'd be doing similar stuff in your spare time anyway..."
People value your time based on things like this. Lawyers play this game very well, they pretend like they don't have a single extra second to spare, and hey they don't talk about law being fun either, and hence they can charge $200/hour fees and others will happily pay for it.
I think we need to be smarter and adjust our attitude to account for the economic goals and political games of the rest of the society.
As a senior engineer who has done tons of side projects that I really enjoyed, when I approach a customer for a contract it is not rare that the customer tells me: "We saw your code online / your involvement in this or that project", so it is a good showcase for your skills.
Consider it an investment:
1. because it is an opportunity to learn new stuff compared to your daily professional routine
2. because it is a way to demonstrate your skills.
Remember that you sell your expertise, not just your time.
BTW: yes, lawyers do pro bono work! For exactly the same reason: for the fun of the case and their image.
I actually think #17 is the odd one out here, in that it directly conflicts with #1. "Programming is just a job", yet "Code in your spare time"? I think you're right that #17 conflicts with the Silicon Valley mentality, but while this mentality isn't for everyone, I think it's absurd to claim it's wrong.
Ask yourself the following questions, keeping in mind that different people will answer differently and that's okay: Do you have a passionate desire to do what you do (whether your motivation be to build a better future, change the world, or create cool technology, etc.)? Or is programming really just a way to make money (i.e. "just a job") to afford the things you really enjoy? (Or some balance in between the two, usually.)
IMO for those who are truly passionate about their work, their work is NEVER "just a job". Thus while #17 is true for a great many people, it's not true for the few who tend to accomplish the most - at least judging empirically from historical results.
For these people, it's much more than "just a job". For every highly successful person who made a huge impact on the world, it's never "just a job." Just ask Elon Musk, or countless others, if they consider their career "just a job."
Lawyers and financial analysts don't play with side-projects, but many of them do pro-bono work for friends and charities, and I think that's their equivalent of this idea. Engineers and craftsmen often do side-projects for fun... engineers create fighting robots, and craftsmen create furniture and art for their own use. That's pretty much exactly the same as programmers doing side projects and participating in hackathons.
Lawyers and financial analysts are not similar to programmers at all. I don't disagree with you - I too think that how we're perceived could be damaging. But your examples are not well chosen.
In short: find me a journalist or a writer or a musician or a sportsman who does NOT do some kind of side-project for fun or without being paid. They probably exist, but I bet there's not many of them - and I can suspect that they're viewed just as 9-17 programmers by the rest of their fields.
Most professional athletes are contractually prohibited or discouraged from engaging in very much physical activity outside of training and official games due to fear of injury.
For musicians and writers (journalists are a type of writer), there's no actual distinction between what they do for work and what they do on the side, because anything useful they produce, they're going to publish anyway, in a way that doesn't substantially differ from the rest of their professional output. I'm sure they noodle around for fun, but a programmer "noodling around for fun" isn't going to have very much to show off on Github either.
> Most professional athletes are contractually prohibited
I didn't know this. It makes sense, I suppose. But then again, why do you think those clauses are included in athletes' contracts? Is it because they are unlikely to do any physical activity outside of their regular training or, on the contrary, that they would be active anyway and this is to prevent them from overworking themselves?
> [for musicians and writers] there's no actual distinction between what they do for work and what they do on the side
I disagree. Either one is paid to do something or is not. It's a job if they are compensated. It's not if they are not paid. This is how I understand this. Do you think that everything a writer ever writes is going to be paid for? Or that everything a writer writes he does in hopes of getting paid? I think that it is not the case.
I think this is exactly the same for programmers. We're writing code just as writers write prose. Sometimes we're compensated for what we wrote, sometimes we're not. Sometimes what was started as a quick letter to a friend becomes full fledged essay, and sometimes what was started as a 'hello world' in that new and fashionable language becomes useful product. More often this does not happen, of course, and that is how it should be.
But that's not the point. I wanted to say that being a programmer is more similar to being a writer or a musician than being a lawyer (maybe not - pro bono cases, as others suggest) or accountant.
This doesn't reflect how writers and musicians are actually paid, though. You might be in a band but have a side project or a solo album, but you still receive royalties or concert revenue for it, same as for your role in the ordinary band. Most writers make nearly all their money taking the risk that someone will publish their work and pay them for it. In either case, there's no real distinction and the financial returns are uncertain either way.
There are authors who have regular columns or something, but even in their case it's a little disingenuous to say that writing books is a side project. If you wanted to describe Thomas Friedman's profession, it would be something like "New York Times columnist and author". You wouldn't say he was a NYT columnist who happened to write best-selling books in his free time as some sort of hobby.
> Most writers make nearly all their money taking the risk that someone will publish their work and pay them for it.
You are right, of course, but this is not what I was asking about. Let's leave 'nearly' (for clarity) out and let's say that 'all the money writers make comes from publishing their works', which is true. I'm asking if it's also true that 'all the writing writers do is supposed to/has to/they hope it will make them some money'?
Assuming that "some writers write letters to their mothers" and "no one writes a letter to their mother expecting it will make them some money"[1], then it's not literally true, but we're splitting hairs. We're talking about hiring programmers based upon a portfolio of professional-quality work they've done in their spare time. How many writers would get a book contract based on a portfolio of their unpublished work?
[1] This requires more qualification. For instance, if one was a struggling writer who didn't make enough money from their writing but had a wealthy mother sending them money for their living expenses....
> How many writers would get a book contract based on a portfolio of their unpublished work?
Certainly not many, because how could anyone know they actually wrote it? But I was speaking about published but not paid-for works. There is more than one way to publish your writings and many of them do not involve financial compensation. I know, because quite a few years ago I was publishing short novels and articles in a (real-world, made-of-paper!) magazine about pen&paper RPGs. I didn't earn a penny (and the magazine went out of business quickly), but I was published.
I wasn't the only one who submitted texts to the editors of said magazine. Mine were of poor quality, but there were a few authors that I was not surprised to find in the bookstores some time later. They got a book contract (I don't know, I'm guessing) from a real publisher, probably with less hassle than other debutants, probably because of what they published for free earlier. You can argue that they became writers only after they were paid to write a book, but I don't see it that way. Also, many of them continue to contribute their writings (mainly reviews, essays) to on-line magazines (about RPGs; sadly, there is no such magazine still being published on paper in my country) for free.
How is this different from publishing side-project on github and getting hired based on that? I really can't see the difference.
Either way, thanks for interesting discussion, I enjoyed it, especially footnote about writing a letter to one's mom :)
As an example, tech journalist Andy Ihnatko is also an avid writer of fiction in his spare time, most of which he has yet to publish. This doesn't resemble his professional output at all, which are articles that get published in a newspaper.
I think the point is that when you're doing something you love, you want to keep doing it outside the constraints that exist when someone else is dictating the type of creative work you can do.
Well, that's up to him. I don't really think any editor would think Andy Ihnatko was a worse tech journalist if he spent his free time playing with his kids or riding jetskis.
When you're doing something you love to the best of your ability, sometimes it's mentally exhausting and you need to do something totally different to recharge your batteries. Monomania isn't always the best way to go. I mean, it definitely works for a lot of people but it's not a reasonable expectation.
Sorry, I still can't agree with #1. It assumes that all workplace programming is gluing things together, fixing/maintaining legacy crap, or writing generic CRUD apps, which is untrue. I know some great programmers who never code in their spare time, and some that do that are, quite frankly, very uninteresting people to be around due to their lack of other hobbies.
I know the current trend in startups is all about "show me your github", and I admit that it has some value as a filter, but I feel like I'm seeing people writing fairly uninteresting code snippets and blogs just to put it on their resume, in the same way that high schoolers join a bunch of student groups to beef up college applications. There are plenty of programming jobs that require writing complex code and having deep domain knowledge, and to discard those candidates because they can't show you their code and have other hobbies outside of work is just not smart.
I think "show me your github" might be the most toxic practice for the well-being of software developers' lives that is popular on HN.
As I said elsewhere, I really like programming and will often do it in my spare time because it's fun. But needing to maintain some public repo or an open-source project in order to get a job makes it no longer fun.
> needing to maintain some public repo or an open-source project in order to get a job makes it no longer fun.
I completely agree, and I already have an above average amount of open source code available on Github. Some startups have found my projects and mentioned them in interviews, which was great, but I've never been directly asked for my code before an interview, and I like it that way. Writing code for the purpose of showing to potential employers would ruin the magic.
I actually have more respect for programmers who do other things in their spare time. Cooking, traveling, photography, sports, volunteer work... Doing other things adds knowledge, builds character, and improves your level of happiness.
It's important to be open-minded and possess a large array of skills. Specialization is overrated, an invention of the industrial era. Ideally, everybody should be a da Vinci.
I don't understand why this is either/or. Life is long. It's possible to write code for fun AND race motorcycles, travel the world, cure meats, play soccer, sail boats, read books, etc...
Topic 4: So comments are a problem because people do not update them? I believe the blame is misdirected there. I agree that one should make code very readable but comments can be very useful. Just because someone uses the tool incorrectly doesn't make it a useless tool.
Topic 18: His questions are not about writing code, they are about writing algorithms. The kind of code I write on a day-to-day basis does not require working with math equations of this type. I could eventually do it but I would probably have to research and test before I would be happy with it. Most definitely not something I could do on the spot during an interview, but I guess that means I can't write code.
I don't think I have much of a problem with the rest of them and agree with most of them.
The author said that most comments in code are duplication, not all. And that is true in the average case. Most code comments explain what the code is doing and how it does it, and so need to be updated when either of those changes. Then they get out of date, etc.
Typically, comments can just be merged into the code directly by extracting well named methods, and using better names for variables.
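A tiny made-up example of what I mean (hypothetical names, Python just for illustration):

    ALLOWED_COUNTRIES = {"US", "CA"}              # hypothetical example data
    user = {"age": 30, "country": "US"}

    # Before: the comment restates the condition and will rot when it changes.
    # check that the user is an adult in an allowed country
    eligible = user["age"] >= 18 and user["country"] in ALLOWED_COUNTRIES

    # After: the comment's content is folded into a well-named function instead.
    def is_eligible_adult(person):
        return person["age"] >= 18 and person["country"] in ALLOWED_COUNTRIES

    eligible = is_eligible_adult(user)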
Good comments tend to not be redundant, they tend to tell you "why" information and be written at the level of intent rather than at the level of implementation. And thus they don't need to be continually updated when the implementation changes.
> Good comments tend to not be redundant, they tend to tell you "why" information and be written at the level of intent rather than at the level of implementation. And thus they don't need to be continually updated when the implementation changes.
This. The exception being when things have to get messy for reasons of performance or other non-obvious constraints - at which point, the comments should document why it's messy, but may also provide some guidance around the messy code.
While I can't disagree with your statement, that's not what I got as the focus of his comment. The part about code duplication is the last sentence of the comment. Before that he does state that it would be better to write precise readable code, which no one disputes (I hope), but I don't see that as a reason to discount proper comments.
I have written comments as he described where they practically duplicate the code they are describing. But when I do that it's because I'm working with multidimensional arrays and I use comments to remind me what section of what array inside what array I'm currently working on. If I change the structure of the array and don't update the comments then that's my fault.
I was reacting more to the first sentence, where he said that "poor, incorrect, outdated, misleading comments" must be the most annoying aspect of comments. I believe he is right in this regard, but this is where I feel he is blaming the tool for someone misusing it.
> His questions are not about writing code, they are about writing algorithms.
What's the distinction between those? He isn't asking the candidates to invent algorithms, just to implement simple ones. The answer to the second question is "return Pi * radius * radius", isn't that just writing code?
The first one (find pi to 5 decimal places from the sequence) is asking candidates to remember or invent algorithms - in fact, I'm not immediately sure how to generate the error bound there.
No, he provided the sequence. There remains the question of how you make sure that calculating n terms of it puts you within 10^-5 of the true value of pi. If he provided that as well, then yes, it should be a trivial exercise. If not, it's nontrivial (although not, I think, hugely difficult) mathematics followed by trivial programming.
The sequence is enough; you just keep going until your fractions (multiplied by 4) no longer change the result by more than 10^-5. You do not need the actual value of pi.
As soon as 4 * 1/(2n+1) < 0.00001, then you can stop looping. The sequence is the algorithm.
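A minimal Python sketch of that stopping rule, assuming the alternating series 4 * (1 - 1/3 + 1/5 - 1/7 + ...) from the question:

    def approximate_pi(tolerance=1e-5):
        # Alternating, decreasing terms: the error is smaller than the first
        # term left out, so stop once that term drops below the tolerance.
        estimate, n = 0.0, 0
        while 4.0 / (2 * n + 1) >= tolerance:
            estimate += 4.0 * (-1) ** n / (2 * n + 1)
            n += 1
        return estimate

    print(approximate_pi())   # roughly 3.14159, after about 200,000 terms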
This is not true for the general sequence, and I would be hesitant to give that answer without an understanding that it is true for the given sequence. It is, and I sketched the proof in reply to your sibling comment, but it takes more figuring out than I could do before coffee.
You are correct, but I think underestimating the insight required for it to be "obviously" correct.
That the error must be less than the nth term does follow from 1) the fact that he says "more terms giving greater accuracy", which means the error must be monotonic, and 2) the fact that the sequence is alternating. Together, this means the two must be bouncing around the answer instead of inching toward it, and so the sum of the tail must be less than the current increment.
It won't work, however, for the general sequence (or even the general convergent sequence).
This is a worthwhile point. The non-alternating sequence, 1/(2n + 1), when summed, diverges. So if you cut it at the 100th term, or the 1000000th term, the sum of the tail will not be negligible, it will be infinite.
It's only the fact that it's alternating that makes it summable. The alternating series converges like 1/n^2.
This is the kind of thing that, if it's noted in an interview situation, marks a better than average candidate. It could also allow such a candidate to go on about problems with summing series, and really show off. (One secret to interviewing well being to make the given question into a question you know something about already.)
Ah yes, the old "it's simple to me therefore it must be simple to everyone else" thought.
I didn't say "invent" algorithms, I said write them. If I didn't happen to have that "simple" algorithm in my head already then it would not necessarily be trivial to implement it. Especially if it's been a while since you've worked with similar equations.
Now, if the goal is to see how the person would try to work it out allowing for questions to take the place of research then I could possibly see the benefit.
I'm still fuzzy on the distinction you are trying to make between "writing algorithms" and "writing code". If someone told me they were "writing an algorithm", I'd understand it as them trying to come up with a new algorithm, either on paper or in code.
If you're given the outline for an algorithm, and asked to write code to implement it, is that writing algorithms or writing code?
This is just me, but of the two examples the first one is an algorithm and the second is code. The difference being that in the first one he starts to explain the sequence he wants but apparently expects you to complete the sequence to code the function. Granted, he does give you enough of the sequence that you can probably see the pattern. In the second he basically tells you the equation he wants to see in code.
The first one requires an ability, even if minor, beyond just coding, while the second is all about coding. I also find it interesting that he says the second version was more difficult for his applicants than the first. Maybe instead of "algorithm" it's more about pattern recognition, which leads to minor math skills, and then to coding.
> The kind of code I write on a day-to-day basis does not require working with math equations of this type.
Normally I agree that algorithm-heavy CS problems are not representative of day-to-day software development, but this is pushing it. We're talking about a "return pi * r^2" here, in the latter example at least. The former example is a bit less realistic, but I could still see that type of logic being required occasionally.
Yes, this was my reaction, too, but after some thought, the first question is about seeing a pattern and converting it to an algorithm. I don't have a background in CS or math, so my initial reaction to a problem like that is "math! No fair!" but after some thought, it is just about patterns.
And, yes, if you've had high school geometry, you should be able to answer question #2 without much trouble.
Unless one graduated high school twenty years ago and needs some research time to recall equations that haven't been used since college.
Although, after reading the second one again I can agree that it shouldn't be too difficult to complete since he essentially tells you what to do; you just have to write the actual code. Well, assuming you don't have to write code to estimate for Pi and can just have 3.14 in there.
> I agree that one should make code very readable but comments can be very useful. Just because someone uses the tool incorrectly doesn't make it a useless tool.
Actually, I think the problem is people tend to make comments which attempt to explain the function of the code at more than a high level. This is the mistake.
Your comments should explain the "why" of your code but seldom go into the "what", because the "what" will almost certainly change. And in fact, in many cases the "what" may not be entirely determinable from a single point in time with properly uncoupled code.
You can sit here and say "Oh yeah, well it's the developer's problem to update the comments, don't blame the comments," but the observation that "what" comments get out of date is so universal that we should probably accept it as the norm.
I can't disagree, but I still feel that the comment blames the tool instead of the misuse. If you feel explaining "what" in the comments is bad practice, then it is the fault of the developer that wrote the "what" instead of the commenting.
Based on responses to my comment on this, I'm beginning to wonder if I need to approach explaining my thinking on this in a different way.
> If you feel explaining "what" in the comments is bad practice, then it is the fault of the developer that wrote the "what" instead of the commenting.
Do you think misleading comments should stay in place then? I'm confused why this matters.
I don't see how it could be seen that I was saying that. I thought I was clearly saying that misleading comments are the fault of the person who wrote them, not of the idea of comments in the first place. The original controversial thought was that comments were too annoying to be useful; my reaction was that this blames the tool instead of the person misusing the tool.
To answer your question, a misleading comment should of course be removed or changed. Again, I fail to see how I was suggesting otherwise.
I'm interested if you could solve it if you gave it some thought. The interviewer is presumably not expecting you to write it instantly and perfectly, but to reason your way through it over a few minutes.
(I will try it now.)
"It can be answered in about 10 lines of C#" is an unhelpful way of expressing how difficult a question is.
I remember very well the first time I tried a programming exercise like this. I probably spent 2 minutes writing the loop, and then I must have spent another 20 minutes wondering what I’d done wrong, until I realised that the sequence I’d been given really did converge that slowly. :-)
In the spirit of helping anyone who’s at that stage now, if you’re running on a slow machine, I suggest you try this sequence instead:
One of the weaknesses of dynamic languages is that certain classes of errors aren't caught by the machine until run time. Now think about comments, where errors are never caught by the machine.
I agree that developers should be able to code. I just got off the phone with a "web developer" who specializes in adding Joomla to a website, installing a template, and sticking in some text.
For all intents and purposes this man is a glorified text editor.
He doesn't know the smallest bit of PHP (the stack with which he claims to work), CSS or HTML - in which world is he a web developer?
Developers damn better be able to write code. But writing code and writing code during a job interview are two very different things!
If you only hire people who can write code during job interviews, you might get a great guy who was completely relaxed during the interview. Or you might get someone who's OK at writing code, and miss out on someone 100 times more productive only because they were nervous during the interview.
Nervousness is poison for thinking, fear is the mind killer, etc. I've always been most nervous at interviews for jobs I really, really wanted. Not because I needed a paycheck, but because I was passionate about the job.
Advice for job candidates: Drill coding under pressure.
Could not agree more. I attend a fortnightly hackathon for just this purpose; one day, when my dream job comes along (but don't tell my employer...), I want to be ripe and ready to be drilled by whatever they have to offer.
If I can't write code during an interview, neither can I coordinate a staged release of a fix I just wrote during a 3 AM outage. That shouldn't have happened but sometimes it does; pressure isn't always artificial.
Indeed. But if I find out that a company never does 3AM fixes, or death marches, then I know their management is damn great. And I really want to work for them!
It depends what your company does. Where I work we write CAD/CAM software which gets released roughly twice a year. It is pretty inconceivable that a 3AM fix would be necessary. Even 0 day security flaws in the software (pretty unlikely since there isn't any need for network connectivity) really shouldn't be fixed at 3 am, they can wait 12 hours and be handled under less stress and tiredness.
I agree, and perhaps handing a coder some code and asking them what's going on is a better test of their skills than asking them to code on the spot.
That's true; however, asking someone on the spot to write a function that requires domain-specific knowledge isn't a good solution. What are you in fact testing - do they know how to code, or do they know how to calculate the area of a circle? I can code, but I'd fail horribly on the area-of-the-circle thing simply because I haven't done anything like that since grade school math class, probably 30 years ago.
So his comment is valid in that coders should code, but his example of how to tell if someone can code is, I think, flawed.
#2 contradicts itself. It says that unit tests will make sure that code that already works doesn't break, but at the same time won't help you write good code. Well, as far as I'm concerned, code that breaks can't really qualify as good code. Writing tests first, or writing code to the tests, is ridiculous. Not sure what writing code to the tests actually means, is that even correct English (I'm not a native English speaker)? The main benefit of writing tests first is that tests do get written. It's too easy to say we'll write tests when we have time. If you discipline yourself to write tests first, by the time the code is written, the tests are written as well. Writing tests first also saves a lot of manual testing time (e.g. clicking over and over in a UI), which is a repetitive and stressful activity. Overall, testing and TDD help a lot to achieve #11.
#19 is an amusing caricature, but misleading. Design patterns are not limited to GoF patterns. They show up at different levels of abstraction and give developers useful vocabulary to communicate about stuff they do anyway. #19 mentions 2 patterns that are not very useful in our daily lives, but there are patterns like MVC, observer, or iterator that we use all the time without even realizing they are patterns. The point of patterns (design, coding, or architectural patterns) is not to memorize the GoF list but to learn to identify, label, and share recurring solutions that we can reuse in various situations.
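For instance, here is the observer pattern in the bare-bones form most of us write without naming it (a hypothetical Python sketch, nothing GoF-ceremonial):

    class Subject:
        """Holds a list of callbacks and notifies them when something happens."""
        def __init__(self):
            self._observers = []

        def subscribe(self, callback):
            self._observers.append(callback)

        def notify(self, event):
            for callback in self._observers:
                callback(event)

    clicks = Subject()
    clicks.subscribe(lambda event: print("logging:", event))
    clicks.subscribe(lambda event: print("refreshing view for:", event))
    clicks.notify("save button clicked")   # both observers react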
Not sure what writing code to the tests actually means, is that even correct English (I'm not a native English speaker)?
It’s idiomatic, and it usually has a negative connotation. An analogous example is criticising a school for “teaching to the exam”, usually implying that the school’s teaching is inappropriately prioritising getting the student a good grade in their exam, at the expense of giving the student a good general education in the subject.
Here’s a contrived example of writing code to the tests, in the negative sense. Given this test:
    def test_add():
        assert(add(1, 1) == 2)
we could write this obviously broken implementation of the add function, which is nevertheless the simplest code that passes the test:
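    def add(a, b):
        return 2   # passes the single test above, yet clearly isn't addition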
In Kent Beck's "Test Driven Development by Example", he starts just like this, with an insufficient failing test. Then he writes a minimal function that passes it. Then he adds another test, then improves the function until it passes both. By the time he's done, he's got tests for all the edge cases he could think of.
I think it's a great way to make sure you cover all the edge cases, for a sufficiently complicated function. I don't have the patience to write all my code that way, though.
Yes, it is. The point is that even given many additional test cases of the same type, they will remain insufficient, unless of course you’re planning to test the entire input domain, which I imagine is going to take you a while.
Clearly at some stage you have to replace satisfying isolated cases with some form of generalised understanding. At that stage, you’re no longer coding to the tests.
This is not to say that unit tests can’t be useful. On the contrary, I find them a valuable tool for many programming jobs.
However, as I have observed before, the plural of “unit test” is not “proof”. I do find it frustrating when I see overconfidence instilled in too many developers just because they have a suite of unit tests available. Using TDD can’t magically guarantee that a program actually satisfies all of its requirements. Refactoring without thinking isn’t magically safe as long as the code still passes the test suite. Writing code to the test is not a substitute for understanding the problem you’re trying to solve and how your solution works.
> #2 contradicts itself. It says that unit tests will make sure that code that already works doesn't break, but at the same time won't help you write good code.
No, the only thing you can say about the quality of code through tests is that if you change some code and it still passes the tests, then you didn't break it. The tests say absolutely nothing else, because their output is binary - pass or fail. They say nothing about the efficiency, readability, complexity, or style of the tested code.
Tests can help take you from broken code to working code, and not a step further. To make it good code, you have to look at the code, not the tests.
There are times when clean code that almost works is better than messy code that actually works. Generally when the requirements are still fuzzy. But also when you're still deciding where to handle edge cases and such.
I agree with #18 wholeheartedly, but I disagree with the examples because those are more math questions than programming questions. I have never needed to write a program that dealt with circles/radius/area/pi, so I'd probably fail that test, as I'd have to look up a few things that I haven't thought about in over twenty years.
I prefer to have about 15-20 index cards with problems that have open-ended solutions ("Build an RPG combat system using 4-, 6-, 8-, 10-, 12-, or 20-sided dice", "Create a system to store Books and allow people to search by author, title, or genre").
If you can't answer those questions in #18, and you can code, then it isn't mathematical skills you lack - it's reading comprehension (a skill that seems to be woefully underdeveloped amongst developers). I suspect you can actually pass both of those without difficulty though.
I have to agree. The formula is given right there in the problem description. This isn't about knowing math. It's about whether you can translate some simple text, "Pi times the radius squared," into code. This particular example is trivially easy; even easier than FizzBuzz.
I would just write the code to compute it to arbitrary precision and then tell the interviewer that I would test how many iterations are required and hard-code it.
If they insist that it must be solved at runtime, I would check the size of the computed term (1/(1+2x)) and repeat 'til it's less than 10E-6.
If you are told the formula to use, you use it. No, it's not the bestest way ever of finding pi. It's to make sure the guy sitting across from you can actually write code, which a scary number of people actually cannot do.
It's an alternating decreasing series so the sum of the remaining terms is always less than the current term. Also, each pair is 1/n - 1/(n+2) = 2/(n*(n+2)) which converges like 1/n^2 (all successive terms sum to the same order as the current term). That this pairing converges much faster than the original series is part of what makes the question good: a good solution will exploit this pairing for numerical stability and to converge in much fewer steps (square root of the number needed by the naive algorithm).
I was thinking that you'd want to pair the terms, but going further didn't happen to tickle my curiosity. Of course after seeing your answer, it looks quite obvious.
My larger conclusion is that either the OP didn't intend to require this analysis or he's surprised that people in an interview+coding mindset are going to get stuck thinking there is a straightforward algorithm. Since he refers to the question as simple for any "seasoned developer" (read: has forgotten algebra ;), I presume he actually misstated the problem and really expected one of the simplifications people have come up with. Either way, I think it says more about the interviewer than the interviewees.
The interviewer can ask leading questions if the interviewee gets stuck or doesn't consider certain attributes. Seemingly simple questions that actually have depth are good for an interview because it stimulates discussion and can be carried further for more advanced/competent people. Quizzing someone on "trivia" is a terrible test of competence.
Yeah, it's true that good technical questions are an interactive process. But I'm not too confident that the OP actually saw the complexity in that question, and the many naive responses certainly did not. Also in the context of trying to come up with code on the spot, knowing how that infinite sum is going to converge strikes me as a quite helpful piece of trivia.
... although maybe my gripe only came about from knowing enough to immediately see that there had to be some kind of convergence bound, but not knowing/figuring exactly what it was.
I wonder if they really do mean to break when the answer hits the known value. If so that's kind of a pathetically useless thing to do. While I know most whiteboard coding problems aren't things you ever need to write in production code, the change to make it break when five decimal places stabilise is not much more difficult and provides an actual nontrivial output.
I'm sure it would be fine as a first pass, but there's no way anyone would end it there. Two obvious immediate objections: the point where it first rounds to the correct value is not necessarily the point where it stabilises there, and you're using the value you're trying to calculate to calculate said value. At that point, a good interviewer leads them toward fixing these issues rather than just saying "wrong", of course, but I'm sure the end result is not to use pi in the algorithm.
Also, his next step down isn't really one jump down in difficulty; it's presented as "if you don't even know how to begin to approach that problem I need to make sure that you can code something trivial." That is to say, it's an arbitrary number of steps down that implies little about the initial question's intended difficulty.
The only part of the question that required any math knowledge beyond basic arithmetic was knowing the value of pi to 5 decimal places. I'm sure they wouldn't have minded if you'd asked what exactly that was.
You could be right though, in that the high number of developers who failed it freaked out at the sight of pi and flubbed it without really thinking further. Although I think that says more about our horrible math education than anything else, i.e., math education is so bad that the mere sight of something "mathy" elicits terror in the hearts of otherwise competent programmers.
You don't even need to know that. Once the fractions update your estimate by less than 0.5x10^-5 (half of 0.00001), you can be sure that your estimate is accurate to 5 decimal places. I think it's reasonable to expect a programmer to be able to figure that out. If the programmer wants to use a REPL to find the pattern in the numbers, I think that's fair as it demonstrates using one's tools to solve a problem.
I guess that demonstrates why it's important to have a REPL or a compiler available for the interviewee. (My computer wasn't available at the time that I wrote the comment.) I just wrote up my solution and confirmed that its terminating condition isn't correct.
It doesn't matter that it's not correct; the sad truth is that if you get that problem and write a loop with some math operators, updating a variable on each pass, you're in the top 10% of applicants!
There's a mindboggling amount of people applying for programming jobs who can't even do that!
How can people code without knowing math? At least to the extent of knowing the value of PI? (I'm not asking for 10 decimal places; how about just 2?)
I'm with you there. I didn't mean to imply that programmers don't need to know math. To the contrary, I find it very useful and keep a graphing calculator on my desk. I was just pointing out that knowing the value of PI isn't strictly necessary for this example, because by writing a quick script, one can determine the behavior of the sequence and figure out an appropriate stopping condition. The rest of it is just translating a formula into code.
How many other constants have you memorized? e? phi? sqrt(2)? And you know all those to 5 decimals? When I conduct interviews I try really really hard to set aside the things that I know and give candidates an opportunity to tell (or show) me what they know. Someone not having a constant memorized is not a show stopper, as I am after the logical thought process.
I have been a code producing developer for more than 40 years. I hate wasting time proving that I can do trivial problems. Sometimes I have spent half an afternoon writing one trivial problem after another. I would much rather spend the time looking at the company's problems and discussing solutions.
If you cannot read and write a trivial math equation as code then I really question how you got here. That formulation for approximation of irrationals is one of the reasons computers exist! If you can't do this, go into a cave on a mountaintop behind a waterfall in a hidden vale and proceed to train until you can. It's not an unreasonable or un-useful thing.
Of course, it's actually trickier than you might think on modern computers if you go out much further than 10 decimal places or so: this series converges so slowly that you'd need on the order of 10^10 terms, and at that point both running time and accumulated floating-point rounding error start to matter.
The problem with most of these opinions is that they are absolutes, and therefore by definition many of them are wrong. They could mostly be improved with the addition of "sometimes", "in most circumstances", "depending on the goals of your project", etc. For example:
- "Programmers who don’t code in their spare time for fun [frequently won't be] as good as those that do.
- "Unit testing [may not] help you write good code [in many situations I have encountered]."
- "[Possibly the most useful] “best practice” you should be using all the time is “Use Your Brain” [though for some teams in some circumstances there may be more useful best practices]."
- "If you only know one language, no matter how well you know it, you’re [almost certainly] not a great programmer."
- "Readability is the most important aspect of your code, [depending on your company's goals and method of achieving those goals at this point in time]"
Of course, these are [mostly] opinions, and adding all sorts of disclaimers is [almost] never fun, but opinions stated as absolutes are [almost] always wrong.
I agree with #8. Learning a different programming language is not that hard, because I think most programming languages are related to each other. They have the same "structure", just different syntax. So if you only know one programming language, I think programming is not for you.
While knowing one will certainly ease learning the others it's rather misleading to say that Java, Haskell, Prolog, and Brainfuck all have the same structure. Their differences are significantly beyond syntax.
I would go so far as to say that almost all the most important revelations I had programming-wise were when I learned a new language and suddenly gained a new way of looking at everything I knew. And the nice thing about this is, you will keep these insights whether you stick with the actual language or not.
That said, there have been a few languages that have failed to do so. Almost invariably, I loathe working with them.
I agree with the first point, "Programmers who don’t code in their spare time for fun will never become as good as those that do."
However, out of curiosity, how many people actually work on side-projects?
Because, after ten years, I have only worked with a couple of people who actively work on side-projects. And I know that some people treat programming as "only a job" and they're done at 5:00 PM, which is fine. But is it really uncommon that programmers work on side-projects?
[Shameless plug: I have a few small projects on GitHub (https://github.com/mattchoinski) and I also work on freelance projects for various clients.]
I don't have time for side projects. My work consumes all of my productive time. Granted, I am a researcher, but I think this should apply to others as well.
In work I'm concerned with writing code my colleagues can maintain. Often the most sensible design decision is to use the technology stack my team mates and I already know. If everyone else uses Linux to write Python web apps I'd need a very good reason to write a Windows only GUI app in C#!
Five years from now, my employer's standard technology stack might be obsolete. If I only gain skills at work, my skill set would be obsolete too. This would harm my future career prospects.
Could an employer pay me enough that I'd give up my side projects, sabotaging my future career prospects? Probably, but it would have to be a lot of money.
Of course, I also maintain a healthy work/life balance and have hobbies that let me get away from the computer and meet other people. This is also important.
I realize that my intended meaning was weaker than my wording. I meant that surely there are others that this should apply to. That is, surely researchers are not unique. But I don't mean that it should apply to everyone.
I'm much like you. But there are enough side projects at work to keep me challenged. In fact I am working on one now. It started as an idea I had a couple of months ago and it's grown into a team. I have another skunkworks project up my sleeve once this thing is in decent enough shape...
As a researcher, your work day is your free time to do fun stuff! Similar for people who have 20% Time. The rest of us need an occasional outlet for creativity that isn't part of the narrow junk we get paid for.
Trust me, I have overhead as well. Meetings, traveling, presentations, and sometimes tasks that are necessary but not research. I consider the actual research "real work," and some weeks I can get more "real work" done than others. And certainly I have interests that I can't satisfy with the research I'm paid to do. But, there is so much interesting research to do that I am paid for, that it's just not a good use of my time to spend it on those other things. The way things are set up for me, the best time investment for interesting result return is going to be with my paid work.
That's me, though. My overall point was that surely there are developers out there who feel the same about their job, even if it's not labeled "research."
I really like this list. I think the only opinion I don't share is number 20, "Less code is better than more". I've been coding in C++ for a while now and approaching a problem without an object-oriented focus can yield substantially less code, but it's not very readable or flexible for quick changes later on. Also, if performance is key, you may want to dig deep and that usually yields a lot more lines.
I started using Ruby this summer and it was fun turning 10 lines of code into 1 or 2. But it was a big pain deciphering it weeks later =(
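To make the trade-off concrete, here's a toy Python comparison of my own (not from the thread): fewer lines isn't automatically more readable.

    words = ["foo", "spam", "a", "eggs", "hi"]

    # Terse: group words by length in one line -- clever, but takes a moment to unpick later.
    by_length = {n: [w for w in words if len(w) == n] for n in {len(w) for w in words}}

    # Longer, but each step is obvious at a glance (and it only walks the list once).
    by_length = {}
    for word in words:
        by_length.setdefault(len(word), []).append(word)

Both produce the same dictionary; which one is "better" is exactly the readability-versus-brevity question being argued here.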
I reckon #15 is one of the ones that's very easy to forget, and easy to lose track of between starting to write some code and committing it. I wholeheartedly agree that it's more important than correctness though!
Readability is the most important aspect of your code.
Even more so than correctness. If it’s readable, it’s easy to fix. It’s also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.
There is a reason why programming chops are not usually measured in lines of code.
In my opinion the "art" of programming is to keep things simple yet still solve the complex problems.
I'd say the guy was being humble for sure, as it seems extremely unlikely he would have the discipline to sit down and write something SIMPLE from scratch if he really was a bad programmer.
I think most of these are spot on, but I disagree with number 12 (especially if you're writing code in a library).
It's worth having getters and setters for variables that need to be publicly accessible, even if there's no logic in them, because it gives you the option to change how that data is stored in the future and not break all the code that uses it. You want to be able to change the implementation without breaking all your clients' code!
I'm inclined to side with the author. In the 13+ years I've been writing Java, there's only been a handful of cases where I switched a field to a more complex implementation. And in all of those cases, the IDE would have helpfully pointed out where I need to update client code. I can understand the value of strong encapsulation when publishing library code, but for internal code getters and setters are usually massive overkill.
Fortunately projectlombok takes away most of this pain, and pretty much every modern language post-Java makes attributes/getters a non-issue.
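For what it's worth, a rough Python sketch (my own toy example, not anyone's actual code) of why properties make this a non-issue: you can start with a plain attribute and later add logic without touching any call sites.

    class Account:
        # Original version: just a plain public attribute.
        def __init__(self, balance):
            self.balance = balance

    class Account:  # redefined here only to show the before/after in one runnable snippet
        # Later version: storage switches to integer cents, callers are unaffected.
        def __init__(self, balance):
            self._cents = round(balance * 100)

        @property
        def balance(self):
            return self._cents / 100

        @balance.setter
        def balance(self, value):
            self._cents = round(value * 100)

    acct = Account(10.50)
    acct.balance = 12.25   # same syntax as the plain-attribute version
    print(acct.balance)    # 12.25

In Java, changing a public field to a method breaks every caller, which is the standard argument for writing getters/setters up front; with properties you can defer that decision until you actually need it.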
How can the 20 most upvoted opinions be the 20 most controversial? If the opinions were upvoted based on "I agree" (and this is what I assume), then they would be among the least controversial to the stackexchange crowd. Maybe the middle opinions would be better.
Are these opinions really that controversial? My favourite, "Loops considered harmful", isn't even on that list and I consider it more controversial than most of them.
My most controversial programming opinion ... there are too fucking many programming languages already. Proposing a new one should be grounds for immediate dismissal.