I largely agree w/ the argument here, but "slow" is going to be self-defeating nomenclature, and is also inaccurate. Business doesn't want slow. So if we're pitching slow, we're setting ourselves up to lose and the speed-hackers are going to win.
Our goal is architectural soundness. I believe the biggest fallacy of our industry is we think the only way to get these is to go "slow". Not true.
What we're really saying is our industry is short on skill sets. With specific skill sets, you can build architecturally sound systems at no extra cost.
If I were to build a house today it would take me much longer than someone else because I don't have the necessary skills. I might hurry in which case the house would be shoddy. Is the shoddiness of the house necessarily because I hurried? No. It's because I didn't acquire the required skill sets first.
The software industry has no such (practical) concept of the skill sets required to build architecturally sound systems. We have a bunch of well-meaning hackers, and as a result shoddy systems that decay into technical liabilities.
Our industry needs to solve this skill set problem. The challenge is that academia has a hard time teaching these skill sets, because they are so removed from the practitioner. And businesses can't teach it b/c it takes years and special experience to actually teach it. So it's not advantageous to a business to teach those skill sets.
So how do we do it? And how do we organize an industry around professionals who know how to build architecturally sound systems and code? This is a very difficult problem for a world that has such high demand for code and such little understanding of what the professional skill set would afford them.
> I largely agree w/ the argument here, but "slow" is going to be self-defeating nomenclature, and is also inaccurate. Business doesn't want slow. So if we're pitching slow, we're setting ourselves up to lose and the speed-hackers are going to win.
German has a very nice word for that: "zügig". It means "speedy" as well, but goes a bit towards "stable", "steady" and "friction-free". It's the good kind of fast, which sometimes needs a step back and a look at things.
So, if I came to work "zügig", I didn't speed, but there was no traffic jam, I didn't stop for a coffee somewhere, etc.
I like this word, we should adopt it. It says a lot about the German mindset that they have a word for this.
I don't think English has any equivalent (but it's a big language so I wouldn't be surprised to find out I'm wrong). We do have a very similar and pretty common idiom though: "slowly-but-surely", often used in the phrase "slowly but surely wins the race". This of course comes from The Hare and the Tortoise in Aesop's Fables.
Another analogy that works in programming terms is comparing the speed of a container ship to the speed of a Ferrari. One of them zips about a lot and goes round corners really fast. The other one takes a while to load up and takes an hour to turn, but it will shift a hell of a lot more cargo, a much greater distance, in the same time.
There's a place for both and the most important thing is knowing the difference; not sticking to one or the other dogmatically.
Expeditious is used rarely enough, and 'expedited' services are spoken of frequently enough, that I don't think expeditious would capture the notion at all.
Things that are expedited are put on a 'fast track,' obstacles removed -- and, frequently, corners are cut.
I like to tell people that "vitesse" (noun), French for speed, comes from "viste" (adj.) and Latin "vistus", rooted in vista, to see. The only way to go fast is to see, to know; and for that, one often needs to go slow.
As said below Latin has it in festina lente -- "Make haste slowly."
Zügig is just so... Germanic!
It's exactly how I imagine the stereotype of German efficiency.
In the anglo-saxon world we pride ourselves on how many hours we work.
In Germany they work fewer hours and produce more and of better quality.
At least that's my impression.
Before anybody else gets funny ideas: the claims in that article aren't entirely correct.
First of all, I find the claim about 35 hours being the average a bit dubious. It may have been the average some years ago, but today it's probably closer to 38-40 hours, with service jobs (from retail to agencies) often adding unpaid overtime to that. On the other hand our work laws are pretty employee-friendly (e.g. a mandatory uninterrupted resting period of 24 hours per week in case you have to work on Sundays) and larger companies are more likely to follow the letter of the law.
The rules regarding Facebook and private e-mail have more to do with German privacy law: if the private use of work computers is explicitly forbidden, there are fewer legal landmines involved with intercepting or monitoring Internet use. In practice many places have an informal policy that allows these things, just not officially.
The bit about employees not hanging around after work is also inaccurate. For many people a lot of friendships (and relationships) involve co-workers. In fact, this is one of the reasons there was such an outrage when Walmart tried to implement its US work policies in Germany (which forbade romantic relationships between co-workers -- not that that rule would have held up in German courts to begin with).
There are indeed more days of paid vacation and families do receive preferential treatment when it comes to scheduling vacations during the major holidays or summer break.
The difference when it comes to "Handwerk" is also very striking. Mediocre pay (and rampant moonlighting) aside, craftsmen are generally held in high regard and like most professions take a lot of pride in correctness and precision. This probably again goes hand in hand with Germany having a lot of laws, rules and standards for various fields of work (e.g. you can't just set up a shop as a car varnisher, you need a formal qualification for that).
Also, bureaucracy. Although the sibling is right in that you do usually get the expected result if you follow all the rules, the paperwork can be daunting. Most people joke about the tax law in particular, but we have laws, rules and standards for everything. German law generally tends to define even edge cases clearly rather than leave them up to interpretation by the courts, consequently the legal system tends to be less shady than in the US, but trickier cases can still take years (but on the plus side, they are much cheaper than in the US).
That article reads like the inverse of the Japanese "salaryman" (company as family, patriarch, protector, provider of all material comforts), which is also sometimes fetishized in Western circles (in terms of results) as "more efficient" or somehow better than American working culture.
Please note that another part of German culture is being overly pedantic about following rules, no matter whether their original intention still applies or whether we just set them five minutes ago. Rules are followed just because.
This culture makes administrative processes bulky and tedious and tiring.
It often fuels my Ungeduld (impatience), because I want to get stuff done zügig.
Indeed. I needed some paperwork when I married a German. I'm from Sweden. I called the tax office in Sweden that was handling the papers I needed to file for a marriage certificate in Germany, and the woman on the phone laughed heartily, saying "Then you know how bureaucracy feels!" :-) They happily mailed all the papers I needed to me in Germany, translated into English. The clerk in Germany eyed them suspiciously, asked some colleagues, and took quite some time before finally putting down his stamp.
Zug does mean train, but it comes from ziehen (to pull), as others mentioned. So I am not sure whether zügig comes from the original meaning or from 'train'. The English cognate for Zug/ziehen is 'tug', by the way (following the common t -> z shift in German).
Not quite. "Steady" usually implies a certain slowness. "Zügig" generally implies the opposite.
For example, if I asked you to leave a hotel in a "zügig" manner, I would be asking you to quickly pack up your things and then leave. In other words, it's about doing something "faster than normal", but not so fast you can't take proper care.
If I told you to leave "zügig" and you'd first finish watching a TV show, I would probably be slightly irritated.
In this case "zügig" is better translated as "in a timely manner" (rather than with haste, which would imply dropping what you are doing and running away).
In every explanation of the word 'zügig' here on HN, the word that pops into my head is 'directly'. Not sure how to make that work beside the word 'programming' but it matches the same concept.
It implies that you are following that one named goal and considering nothing else.
> German has a very nice word for that: "zügig". It means "speedy" as well, but goes a bit towards "stable", "steady" and "friction-free". It's the good kind of fast, which sometimes needs a step back and a look at things.
We have a term for that as well -- cruising speed.
The IPA would be /tsygɪç/. Pseudo-English would be more like "tsoo-gish", though obviously not completely accurate.
You can enter it in Bing Translate[0] to get a text-to-speech rendition. I can't figure out how to make a direct link with text already entered. (You can type "zuegig" if you don't have easy access to the ü key.) Interestingly enough, the translation there is "swift".
I am not so sure. The point of "slow programming" is the same as "slow food" and "slow democracy." I think you have to see the three together. The point is to slow down, deliberate, and then do. That doesn't mean long iterations, nor does it mean deferring progress. It means ensuring solidity of the progress as you make it.
I once interviewed for a job at one of the most recognized and biggest Perl shops in the world and didn't get invited to a face-to-face because I "took my time" with their puzzles. My response was that my first drafts were faulty, and that I preferred to program slowly because I could deliver more code, faster and better, by taking my time with it. In the end, their loss.
Deliberation is important. That often means slower apparent progress but faster progress overall.
Yep - there are benefits to a slogan being counter-intuitive, because the whole point is to challenge the intuitive way of approaching a problem and make you see it anew.
In the reverse case, Facebook's "move fast and break things" works because it suggests you move fast enough to dare to break things, to try and overcome the caution that holds people back from doing great things.
In this case the equivalent slogan might be "Do it right and ship it late", i.e. it's so important to get it right that it's worth risking missing a shipping date. Practical principles, like the agile manifesto, describe a preferred trade-off rather than just naming a good attribute like swiftness, because who doesn't want the good attribute? It would be irrational not to.
I think this also highlights why moving faster is nearly always better, even if uncomfortable for some: it's a more effective way of handling uncertainty and learning faster, which in most commercial environments is more important than quality. Quality in the wrong places is waste.
This is also why I don't think moving faster is almost always better. There's a certain point at which moving faster slows you down.
A better way of saying it might be "optimize your progress over the next 10 years, not over the next week."
I think there is a false choice usually presented between agile / "move fast and break things" and the old waterfall approach. I don't think waterfall is the right alternative. The waterfall approach is really difficult to get right because to some extent it insulates design from coding. Integrating the two is something that agile gets right.
But what agile gets wrong is de-emphasizing design too much. Design is important, and it matters for long-term progress. And if you don't design, you will pay for it a hundred times over. But the best design is where technology is designed bottom-up and UX is designed top-down. Done right, this integrates design into coding, but it also emphasizes design and code contracts.
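For what I mean by a code contract, here's a minimal Python sketch (the names are hypothetical): the interface is agreed up front, and both the callers and the implementers code against it.

    from abc import ABC, abstractmethod
    from typing import Optional

    class UserStore(ABC):
        """The contract both sides code against: find() returns None for a
        missing user instead of raising; save() returns the stored user's id."""

        @abstractmethod
        def find(self, user_id: int) -> Optional[dict]: ...

        @abstractmethod
        def save(self, user: dict) -> int: ...

    class InMemoryUserStore(UserStore):
        """One implementation; a SQL-backed one could honour the same contract."""

        def __init__(self) -> None:
            self._users: dict = {}
            self._next_id = 1

        def find(self, user_id: int) -> Optional[dict]:
            return self._users.get(user_id)   # missing user -> None, per the contract

        def save(self, user: dict) -> int:
            user_id, self._next_id = self._next_id, self._next_id + 1
            self._users[user_id] = user
            return user_id

The point isn't the class itself; it's that the contract is designed deliberately and then left stable while implementations change underneath it.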
I am not sure that ship it late necessarily becomes a part of slow coding. The point is to code deliberately and with quality. Once you have a production codebase, if you do things right, there's no reason you can't ship on a fixed schedule. You might ship less initially than you expected, but in the long run, you will ship more than you expect, because the delay of design pays back later.
It takes a certain amount of time to build a house (that will be durable and to code) no matter how much people want "skill sets".
If it takes a longer time than management wanted for something to be built, that isn't necessarily because "skill sets" are lacking.
If the product is shoddy, that is probably not because the team lacked the magical ability to make sound products instantly, but because the team was rushed (on the theory that they are not professionals and have to be micromanaged)
The odds are that the feature spec was bloated and management was bad at estimation (perhaps because they demanded estimates out of programmers who are also not good at estimation, in a bid to squeeze as much output from them as possible).
Hackers aren't the problem. A pogrom on "hackers" is unlikely to solve anything. "Professional" in the sense of wearing this or that, or following whatever you personally consider to be best practices isn't going to fix scope creep, wrong estimation, fundamental facts about project coordination, or the laws of physics.
Building serious stuff takes time.
It's management's job to handle that without flipping its shit.
I'm not saying that it doesn't take time to build software and I'm not making a case for how to do management.
In my decade of experience on different teams and products, the main impediment to productivity has by far been bad code and architectural choices. The justification is always "speed", but my point is that this is wrong. It's not speed that is at fault; it's that most software engineers don't know how to properly engineer software -- they haven't been taught it, nor have they had the time to learn it. It's absolutely an issue of skill sets.
Yes, management and project estimation are other issues to work out. I'd say we're actually much further along on these fronts than we are on the technical skill sets front. That is, the skill set of building architecturally sound systems.*
* Note: I do believe there is a time trade off between features and feature flexibility -- but there is no time trade off necessary for achieving architectural soundness, again so long as the right (presently rare) skill sets are present.
> The justification is always "speed", but my point is that this is wrong.
Not necessarily. Proper architecture takes time and planning and reasoning. If it didn't, we could make a program do it for us. There is absolutely a time tradeoff between coming up with the proper architecture for the application vs just using something that looked cool from a google project. The latter will always be faster at first, but not in the long term.
There are three types of "good architectural choices":
The first: the ones that can be made after the original, bad ones. The second draft, which is an improvement on the first draft in every respect.
The second: His (or her) own. We all know there are developers out there who think they're the greatest thing to ever write code, and that everyone else's decisions are bad.
The third: The ones that really should have been made, in place of the ones that really shouldn't have been made. As in, "You wrote your own SQL parser using hardcoded strings and no grammar parsers?"
You've probably heard it as the choices that lead to low coupling and high cohesion, which is correct. But in my experience this needs to be made more concrete -- many people can say these words, but couldn't tell you accurately to what degree a system is cohesive and uncoupled.
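One way to make it concrete is to ask where a change would ripple. A toy Python sketch (hypothetical schema and names): in the first version the report formatting is coupled to the database access, so a storage change ripples into presentation code; in the second, each function has one cohesive job and they meet at a plain data structure.

    # Coupled: the formatter reaches into the persistence layer directly.
    def render_invoice_report(db_conn):
        rows = db_conn.execute("SELECT total FROM invoices").fetchall()
        return "\n".join(f"Invoice total: {r[0]}" for r in rows)

    # Decoupled: one function owns the query, one owns the formatting.
    def fetch_invoice_totals(db_conn):
        return [row[0] for row in db_conn.execute("SELECT total FROM invoices")]

    def render_totals(totals):
        return "\n".join(f"Invoice total: {t}" for t in totals)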
100% agree. Though a lot of the churn here could be reduced with the right education and mentorship. There is a mathematics to good architectures -- a mathematics that can be taught. But currently those of us with initiative are left to find the way ourselves. We need to find a way to formulate and teach this mathematics to the next generations so they can get there faster.
Practice. As well as trying to notice when things work or don't (I have colleagues that see "firefighting" as a normal part of a running system. If something keeps needing attention, it is probably badly written or architected).
Linus said something along the lines of bad programmers worrying about the code, and good ones worrying about the data structures. I think there is truth in this, even though I develop Django database applications most of the time. If I find myself doing too much work at the application level, taking a step back I can usually see a better approach when I rearrange the database.
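To illustrate with a rough, Django-flavoured sketch (the Order model and its fields are made up): the first version does the work at the application level; the second rearranges it around the data and lets the database answer the question directly.

    from django.db.models import Sum

    # Application-level work: hydrate every row into Python just to add numbers.
    def monthly_total_slow(orders):
        total = 0
        for order in orders:
            total += order.amount
        return total

    # Data-level work: ask the database the question you actually have.
    def monthly_total(Order, year, month):
        return (Order.objects
                     .filter(created__year=year, created__month=month)
                     .aggregate(total=Sum("amount"))["total"] or 0)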
Good point. How many "rockstar" developers show up at Company X, build the initial versions of a product or architecture, then move on to the next company never to understand the longer term consequences of their choices?
This is exactly my experience. The vast majority of software failures are the result of dysfunction outside of the software team - this is why more best practices or process rarely fix the issue; it was an organizational one to begin with. In my career there's been a strong correlation between healthy, functional organizations and healthy, functional software.
> I largely agree w/ the argument here, but "slow" is going to be self-defeating nomenclature, and is also inaccurate. Business doesn't want slow. So if we're pitching slow, we're setting ourselves up to lose and the speed-hackers are going to win.
Agreed. The ideas in this article are sound, but I'm not sold on this "slow programming" term. It's more like "careful" programming, or how about "care-full" programming, in which you put your full amount of care into every character you write.
We've been able to speed things up because of these tools, like CI systems and test frameworks. Faster iteration means getting results back faster, and that means you can spend more time thinking about the problem and less time implementing it. That is the major benefit of what the author calls "fast programming", and it's something he glossed over completely. Just because you move fast doesn't mean you can't be careful along the way. You just need tools that extend your own visibility in order to catch problems faster and solve them more accurately.
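That visibility can be as small as a test you can re-run in seconds while you think about the next step. A tiny pytest-style sketch (slugify here is just a made-up example):

    # slugify.py -- the code under scrutiny
    def slugify(title: str) -> str:
        return "-".join(title.lower().split())

    # test_slugify.py -- runs in well under a second, so it can stay running
    def test_collapses_whitespace():
        assert slugify("Slow  Programming ") == "slow-programming"

    def test_preserves_existing_hyphens():
        assert slugify("move fast-ish") == "move-fast-ish"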
> Wu wei is an important concept in Taoism that literally means non-action or non-doing ie. "effortless doing"
Set goal, think and explore a little, rest a lot and wait until you get clear picture. Then and only then start coding.
Pros: no burnout, less stress, etc., and constantly moving forward (in contrast to the two-steps-forward-one-step-back cycle), which results in faster overall progress.
Cons: personally it's sometimes a pain in the ass to wait, but it's important not to become obsessed with the job. Get a hobby, do something else and let the unconscious part of your brain deal with what needs to be done -- a mind-blowing discovery for me that actually works, though I still have to get used to the concept :-)
Unfortunately, the process of building houses can't be compared to building software. Houses don't get bolted sideways onto skyscrapers 5 years after they're built, or suddenly need to accommodate multiple orders of magnitude more people than they were originally built to hold. These are regular occurrences in the software world.
You are right that most people that come out of university are woefully unprepared to design software systems. However, that doesn't change the fact that what ends up being the "best" design in the end is rarely apparent (or appropriate) in the initial planning stages of a software project. Requirements change, and the architecture needs to adapt with them. These architectural changes do have a very real cost. The tough sell is trying to show management that the alternative creates technical debt that will strangle the project if it isn't addressed, and that the longer the refactoring is deferred, the more expensive it will be.
> Unfortunately, the process of building houses can't be compared to building software. Houses don't get bolted sideways onto skyscrapers 5 years after they're built, or suddenly need to accommodate multiple orders of magnitude more people than they were originally built to hold. These are regular occurrences in the software world.
Granted, not a lot of things compare to building houses in all respects.
The point is, if developers are having problems wrt. scaling, then we need to build tools that abstract away scale. Define a language that allows the runtime to auto-scale, depending on load. As far as I can see, it's "just" another engineering challenge, not essentially different from figuring out how to make a skyscraper withstand heavy winds.
> The tough sell is trying to show management that the alternative creates technical debt that will strangle the project if it isn't addressed
If you're worried about that, then the problem is that you have someone managing a software project who doesn't know how to manage software projects. That's not a failure of the methodology; it's a failure of organizational design.
You are probably right about the marketability of "slow" as a movement, but in hacker circles these ideas need to be discussed honestly and directly.
The analogy with house building is flawed. Home construction generally follows a preplanned architecture, schedule, and process that has been refined and tested over a long time in an environment which doesn't change much over time. The process of building houses is well understood and if a delay occurs it can be attributed to a lack of skills as you say (or funding problems, etc).
Software has a different nature. If you have thoroughly solved a problem, the solution can often be automated. There is no button you can press to build the same house you built once before. It still takes time, money, and materials. With software, if you are not spending a large portion of your time working on a problem that has a novel component or two, you or someone else has probably failed to automate enough.
As a result, you never quite possess the necessary skill set to solve your current task. You are always pushing some boundary, even if small. Gross skill set mismatch is a separate problem, of course, but a transient skill mismatch is an inherent aspect of good software development and I don't think it's justified to place any blame on moderate and transient skill mismatches as a result.
Whenever you are dealing with something novel, you will need to go slower than usual. It's the same for new types of buildings that use newer construction tools or technology.
I see it like a craftsman woodworker vs a high school shop class. At the end of the day they can both build a birdhouse, though the quality of the final result will be very different.
Yep -- and I argue that the craftsman can finish at least as fast as the shop student. When someone says "I don't have time to make it sound", they're really saying "I don't have the skills."
By the way, I think these skills can be acquired and even taught. They just aren't yet being taught afaik.
I would also bet that the craftsman will show less overt progress as he works. He will appear to be moving slower, but will in fact complete the work to requirements faster. And in the initial phase, in particular, the craftsman may take a lot more time. The speed will show not over the initial design, but over the whole production cycle.
From what I understand the slow movement came out of the hard left in Italy -- and from what he told me, the internet is still regarded as a Yankee/capitalist plot, based on his experience trying to sell the idea of .coop to Italian co-ops.
How do developers currently learn to build scalable, maintainable, well-architected systems? Certainly there are some books on the subject, and that might be a good start, but I'd be willing to bet that these skills are largely learned "on the job" through a bunch of trial and error.
Are there any good ways to dive in and get experience with a lot of smaller examples, similar to the "code school" approach but for more advanced topics and systems?
Maybe we need something like that. Code School: Advanced. There's a lot of thought going into training the next generation of software engineers from the ground up, but there doesn't seem to be much work toward improving the skills of the ones who already have a bit of experience.
It is all trial and error from my perspective. I like this line of inquiry. The problem I see is that the technology is constantly changing, and the meaning of 'scalable' is always expanding. I.e., the 'Enterprise' architecture of the past didn't scale to 'internet scale'; will 'internet scale' scale to 'internet of things' scale?
That's a good point -- things are constantly evolving and current tech becomes outdated so quickly.
At the same time though, I think it supports my point that the training for these kinds of things should be faster than reading a big thick book followed by a bunch of trial and error.
That's actually the problem! Many current engineering disciplines were once handled by craftsmen, until scientific principles were applied to them, making them engineering.
There have been very few scientific studies about the basics of building software, like how much unit testing really affects bug density, and those few studies often go unnoticed when teaching future software engineers.
It would require a major effort to first study the existing emergent software craftsmanship practices to find common patterns and to scientifically evaluate their effectiveness, and the results would have to be taught in the hypothetical future curriculum where future software engineers would be made. Then we might have software engineering instead of just software craftsmanship.
It's almost a bit amazing that software "engineers" who can be very obsessive about getting the data or A/B-testing will craft software based on methods they have no data about.
> There have been very few scientific studies about the basics of building software...
There hasn't been a lot of great quantitative scientific study on how to productively do science either. Academic science is a craft, and is more or less learned through apprenticeships.
I'm trained as a physicist, and I'm certainly a fan of quantitative scientific methods, but I think it's important to realize that there are some areas where quantitative science is effective and rewards the required work well, and other areas where it is largely an ineffective waste of time.
I suspect trying to quantitatively study things like "how much unit testing really affects bug density" is one of these cases with a poor reward-to-effort ratio.
There's a danger in trying to apply quantitative science to complex phenomena, when many or most relevant factors are actually uncontrolled and unmeasured. Once you have a number--any number--the tendency is to fetishize it at the expense of broader thinking. Make no mistake: quantitative measurements of the behavior of people performing complicated tasks do not carry the same epistemological force as quantitative measurements of the magnetic dipole moment of the electron.
Scientists themselves more typically try to understand their own craft through case studies and "professional wisdom" passed down from practicing experts to apprentices. Software developers would probably do well to (continue to) do the same.
It's not just that not a lot of science is applied to the craft of programming; it's that many programmers are actively hostile to the concept. For example, the rules of programmers.se forbid asking for statistics. So if you want to know objective facts about the craft of programming, you're forbidden from asking for them on the most popular forum about that craft. When I asked why this rule was in place, it became apparent that the only real reason was an irrational dislike of statistics. With that kind of attitude our craft will never become an engineering discipline.
Maybe that future isn't too far away! Have you checked out Code Complete? My university used it as a textbook for a Software Engineering course. To borrow your example, it cites multiple studies about the effectiveness of unit testing on errors, as well as many other software engineering topics.
Career vocational software craftsman here. An academic background in math and science has served me well as I self-taught all the CS and application development skills I have collected over the years. Vocational training probably would not have hurt, but I don't see how it could fit a traditional curriculum; I have only accelerated my pace of learning as the internet and the quality of free tools and instructional material have grown, and I attribute that entirely to the buffet nature of online training.
Well, what I wanted to say was that the "craft", i.e. skills learned when working on a real project with experienced developers, is something I use all the time. Academic knowledge less so. Take, for example, complexity theory. I use that once a day or even less. And in 95% of those cases I could just consult a simple cheat sheet (stuff like "sorting is O(n log n)").
> I largely agree w/ the argument here, but "slow" is going to be self-defeating nomenclature, and is also inaccurate. Business doesn't want slow. So if we're pitching slow, we're setting ourselves up to lose and the speed-hackers are going to win.
I like to use the word "deliberate" when I mean "slow", but think that "slow" will be misinterpreted.
This seems similar to michaelochurch's thinking about 'guilds' - which I guess is a precursor to formalised 'engineers' or master craftsmen, which is something Software 'Engineering' seems to be in dire need of.
When was the last time you engaged a journeyman carpenter? This problem runs much deeper than just software. Any skilled craft is always at odds with a manager's/employer's/client's view that you are a fungible asset.
The best and most concise way I've found to explain this is a systems focus.
We aim to focus on systems, because it is the most efficient way to work. By using good methods, designing complex systems well, and spending adequate time designing, we achieve a net speed increase, productivity increase, cost decrease, and additional positive benefits such as programmer morale and overall quality.
In my mind, quality is the genesis of everything, but you do have to word that really well so that the overlords comprehend the consequences correctly.
Instead of slow, continuously evolving the domain abstractions is key. Faster & more effective is better. If slow creates effective judgement instead of prejudice, then go slow.
There are two systems in the brain at play. "Fast thinking" involves gut reactions & intuition. "Slow thinking" involves logical analysis & formations of models. Both contribute to effective solutions.
Recognizing iteration as the cycle of evolution & growth creates working systems in congruence with natural law.
Deliberate Programming might be a better term; it encompasses the idea of steady progress, and solid intent.
- update - I see this has already been suggested. Still like it...
Today, yes. But that's a problem our industry needs to solve. A lot of our labor force has already paid in money and time for education. Our education system has simply failed to deliver the full skill sets. Not necessarily the education system's fault -- our industry needs to know what skill sets to ask for and what is their value.
I would like to respond that the maintenance costs and thus TCO will be lower for the better designed system - however I have no idea how to formulate this into a tenable economic argument since I have absolutely no data to back this up.
There's also the question of what you work on, not just the quality of your work. You can do a barely-works version of something that it turns out people really want and it's better than if no-one had built it...
Maybe slow programming means finding important things to work on.
"Well considered" is too subjective, I think. There are specific, concrete skill sets that lead to systems with specific, well-defined properties (i.e., architectural soundness).
Your system may be well-considered but if it's not architecturally sound it's going to increasingly tumble over time.
It's mostly statistical and based around comparing your skill level to other players. While a 7kyu level is not necessarily that well defined, and might differ between different ranking systems, there's still a meaning to the difference between the levels. When a 5kyu (stronger) is playing a 7kyu, the 7kyu player is given 2 stones in advance (handicap) to make the game even. The number of stones given as a handicap is determined by (or defines) the difference in levels.
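To make the arithmetic concrete (a rough sketch; real servers and associations differ in the details):

    def handicap_stones(weaker_kyu: int, stronger_kyu: int) -> int:
        # Rough convention, as described above: one stone per rank of difference.
        # Kyu ranks count down as you get stronger, so 7 kyu vs 5 kyu -> 2 stones.
        return max(weaker_kyu - stronger_kyu, 0)

    assert handicap_stones(7, 5) == 2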
It's not immediately obvious how such a system could be adapted for programming skills. But the idea of having levels, and having ways to measure one's level, is very interesting, and might be quite useful.
Japanese (and probably other languages) has standardized tests[1] to determine one's language abilities.
You don't take a test to determine your level. Rather, you take a test designed for a particular level. You either pass or fail.
N5 is the lowest level (beginner)
N1 is the strongest level (near-native or even better maybe)
When you feel you are at say, level N3, you take the N3 test, and if you pass, then you can say that you have passed the N3 test. If you fail, you just fail. It doesn't mean you are N4 or N5. (As far as I know - I might actually be wrong here).
I wonder if some standardized tests can be, in principle, constructed to measure programming abilities.
The tests must be crafted such that, even if all the questions are known to the public, then studying for the test is the same thing as studying to build up your skill level.
> I wonder if some standardized tests can be, in principle, constructed to measure programming abilities.
This absolutely misses the point of the article. It is definitively not about programming abilities, or speed of these abilities.
Many authors have hit TFA's points from different angles. For example, Tom DeMarco's excellent Slack[1] introduces the distinction between "efficiency" (what speed is the organization moving at?) and "effectiveness" (is the organization moving in the right direction?). DeMarco uses these to illustrate that many organizations optimize for the wrong parameters: they want to move at breakneck speed ("efficiency", "butts in seats", etc.), but in doing so trade away their vital strategic ability to think, to design, to steer the ship!
In an organization without slack (briefly, the ability of knowledge workers in an organization to engage in vital reflective tasks, a key part of TFA's design process), everyone's so focused on Getting There that no one remains to decide if There is the right place to be going. This can and does result in everything from classic technical debt to strategic business failure. DeMarco describes a pile of organizational anti-patterns that stem from this philosophical damage.
I like that TFA's author connects these concepts explicitly to more recent thinking on the importance of design process.
It is the MOST trivial thing to measure performance in a Go game.
It is the LEAST trivial thing to measure the "code quality"/"unit of time" metric in a programmer. For example, you might bang out an implementation that looks fine, but 1 guy will say "that will become unmaintainable in 1 year" or "that will be a problem if we ever switch databases" or whatever and sure enough, a year later, the team has to do that... and that 1 guy was fucking RIGHT.
Who's to say how you measure such a thing?
The longer you spend in a career that has ANY creative component, the more you will come to loathe the importance placed on "performance evaluations." In fact, I'd argue that the more advanced you are in those creative jobs, the SLOWER and BETTER you work... the problem is that the newbies won't be able to recognize the "better" portion.
"that will become unmaintainable in 1 year" or "that will be a problem if we ever switch databases" or whatever and sure enough, a year later, the team has to do that... and that 1 guy was fucking RIGHT.
And sometimes YAGNI, we'll cross that bridge when we come to it, nice problem to have, good enough is good enough, do the simplest thing that could possibly work, premature optimization, perfect is the enemy of good, grass is always greener somewhere else, better a bird in the hand than two in the bush, he's right but it's still not worth the cost right now, etc.
That is a perfect mindset to have for problems of the type "that will be a problem if we ever switch databases". However, "that will become unmaintainable in 1 year" is a "bridge" you want to cross sooner rather than later. If you have a deadline to meet, fine, but come back and fix it asap if the software is something that needs to be maintained. I really cannot explain it well; it's just something most developers realize after a while, once they've had to maintain an old codebase.
The key point, however, is that that one guy's insight shouldn't be ignored. You can take the words of wisdom and proceed to not do anything about it, but at least you know the cost down the road. All the experienced managers I know personally know the cost of unmaintainable software and are more than willing to do something about it if resources allow. They need to be; otherwise it's their ass that is in the line of fire when it costs a factor of 10 to implement a new feature and bugs keep creeping up at the customer's, even though they spend a factor of 10 more on QA.
> It is the MOST trivial thing to measure performance in a Go game.
Actually, in terms of very old games, Go is one of the harder ones to measure performance in during the game. Measuring overall performance, however, is as easy as in any 1v1 game.
I'm more interested in measuring abilities relevant directly to software as a profession (or craftsmanship), such as: how well can you architect a system to solve a complex problem?
This is important: you can separate handling complexity from the profession of building architecturally sound systems.
We can teach people how to do math correctly. That doesn't guarantee that they will be able to solve arbitrarily complex problems. But to solve math problems, you MUST at least have the skill sets.
So a test can measure whether you have the skill sets. How much complexity one is willing and/or able to handle is a separate thing to consider.
> I largely agree w/ the argument here, but "slow" is going to be self-defeating nomenclature, and is also inaccurate. Business doesn't want slow. So if we're pitching slow, we're setting ourselves up to lose and the speed-hackers are going to win.
I think a better adjective would be robust. But I'd hate for "Robust Programming" to become as much of an abortion as "Agile" -- now an excuse for micromanagement and business-side mediocrity -- has proved itself to be.
I agree about what the term 'agile' has become, but in many situations (especially in startups), spending too much time and energy to make something robust in version 1 is worse than hacking it together with spaghetti code. The key is to know which mode is appropriate for what you're doing.
If we want to create more tolerance for good engineering practice, we can't act like primadonna artists obsessed with building our perfect masterpieces on someone else's dime. We're hired by businesses, so our arguments need to be grounded in business value.
"It's a shitty architecture" isn't going to get you anywhere.
"It's going to fall apart and cost you millions if you do it that way, but if you take out these less important features and spent more time on what matters, we can do it right for the same cost..." People might just listen to that one.
I was once a "fast" developer, like thousands of lines a day. I could knock things out at an amazing pace but they always had problems and were rarely testable. That actually worked out OK where I was where we basically built things and ideally never touched them again.
Now, 8 years later, I write maybe 50-100 lines a day. I can see all the vectors of things that could go wrong and take the time to mitigate them. I work with "hotshot" young developers who write like my old self, maybe a thousand-plus lines a day. I'm the grumbly old man in code reviews who forces them to break their work into multiple smaller pull requests. The difference is that my 100 lines will stay in the codebase for years, whereas their thousand will likely be rewritten several times over in the lifetime of my code. There are tradeoffs here, to be certain.
I see so many younger devs thinking that just because their code is tested, it's "good". This is certainly not the case. There is so often so little consideration of how their code fits into the "big picture". We end up with 5+ "widgets" where one general widget would have sufficed if they'd thought things out ahead of time. Sigh. Shakes fist. Get off my lawn. Old man grumbles... I'm 28.
But how could I? My task must be done in time for the sprint. Everything I do must be logged into the right ticket, I need to time everything I do. The enemy is time, and I must defeat it. I wish I could output better quality code. Right now, I'm working on a project that was finished at 90% and I need to finish it. The code is horrible, but I have no time to fix it. So I'm simply going to hack at it as fast as I can and make my agile manager happy.
This conversation reminds me of an electronic relay assembly line I once observed over a period of time. The workers hand-soldered relays, their numbers were being measured/judged, so many cold welds were shipped.
QA sent them to soldering classes, where temperatures and flowing of solder were discussed/practiced. They came back with nicely soldered joints, and production numbers dropped to the floor.
Management came back in, exhorting higher numbers, so out came the cold welds again to make the numbers.
It's your (shared) responsibility to determine what done is, and you should have a hand in estimating how long tasks will take. In my opinion, you as the engineer are ultimately the one who decides what quality you will accept. Instead of saying/thinking "it's done, but I'm not happy with the code quality yet", just say "it's not done yet".
I also firmly believe you should review all code before you commit it (would you skip proof-reading a paper before turning it in?), and that's an excellent time to spot easy improvements, places that need documentation, code that doesn't make sense, etc.
- Well it sounds silly but there will be fewer tickets when you slow down.
- If you work for a company where you're not free to question the legitimacy of tickets, get out now.
- Not all tickets need to be done by the end of the sprint -- only what can be done well, or what legitimately NEEDS to be done for business reasons. I know your manager might say everything needs to be done, but that is a bald-faced lie.
- If you're throwing together garbage just to get the tickets done by the end of the sprint you are doing Agile wrong and just building technical debt rather than a product.
A couple years ago I worked on a project that tried to put a 40+ page printed form online. The form is complex. The form has a lot of intricate guidance and notes, and sections that must or must not be completed based on previous sections or fields.
I threw together a form builder by re-purposing old code from a side project that took me two years to polish. My proof of concept let an admin change guidance on the fly, without touching HTML. Fields could be re-sequenced, data types changed. Business rules could be linked to fields dynamically, and applied to single fields or entire form sections. Data was saved to a SQL Server (already in wide use in the estate). My proof of concept took 4 weeks, and on the basis of that I estimated a 6 month project. When I presented the idea it was viewed as being too complex, and therefore too risky.
So the dev team just started hacking it out, field by field, and saving the whole form into Mongo as a single, nested document. Guidance was hard-coded into the HTML. It was fast, and at the start everyone was awed.
Of course soon after the customer decided that the guidance for the printed form wasn't appropriate for the online version. And of course the customer wanted to analyse data across all forms. What was a 6 month piece of work took 3 years. And still no ability to analyse data nested in Mongo documents. Any change requires hours of re-work, regression tests and sign-off.
All because agile.
In my 20's I'd have been emotionally destroyed. Ego like an aeroplane into the side of a mountain. No survivors, call off the search. Happily I'm in my mid-40's. I know better, and I know better than to challenge an inexperienced project lead who has too much authority. So I move on. There's always a project that's more suited to my style of work (slow).
This post resonated with me as well, but I think for different reasons.
At the beginning a team doesn't really know exactly what kind of flexibility and functionality will be required as they iterate. Teams can of course leverage experience with similar projects to come up with possible future requirements, but these are just educated guesses at best, and self-inflicted scope creep at worst.
Because very little is known about the stakeholder's needs a team needs to iterate to become more familiar with the domain and the problem space so they can make informed decisions on how to proceed. And in my experience it's easier and less costly to evolve (or replace) something that's dead simple and wrong than to iterate on something very complex and sort-of right.
Attempting to anticipate requirements at the outset is always a gamble. If you get it exactly right then you can save months of development time. However, if you get it wrong, even a little bit, then you may find yourself saddled with a not-quite-right solution requiring compromises with every enhancement request. I think part of "slow" (let's say "deliberate") software development is the willingness to put up a straw man for the purposes of getting feedback from the stakeholder and then going back to the drawing board with information gained to build a more appropriate solution.
And that's what agile is about--timely feedback and course correction. If in the span of a sprint a team can get a hard-coded form out and learn all of the reasons it isn't a viable solution, then that's valuable information gained at relatively low cost. The alternative, investing many sprints in a more complex solution, may (and often will) yield more technical debt and cost, especially in the long term.
That being said, once the project takes form and everyone has a good handle on what the needs are, there is definitely value in shifting priorities from features to design. At that point the insights gained from stakeholder feedback will ensure that the system can be as simple as possible, but no simpler, i.e. easy to understand and test while being abstract and extensible (only) where needed.
I come to coding from a Graphic Design background and I wish I had computer science under my belt as I test the waters of programming, but what I'm learning reading HN is that many programmers could benefit from the thinking they taught us in design school.
What's the difference between an 'artist' and a 'designer'? One is employed! When you are designing something (even with code) your measure of success is how well you deliver what needs to be built. With 'art' there is no such requirement—art can become whatever your whims desire.
One thing they warned us very strongly about in Design school was never to become 'married' to our designs. If a client pays you to make something and doesn't like it it's NOT good design because it didn't meet their needs. Don't be personally offended, it's not art, it's design. You can do art when you're paying yourself for your time.
I write code and add things like subtle text-shadows and gradients to enhance legibility (scientific reason) and make my designs more accessible, but if I look at the commit logs I often find that my supervisor goes back and removes all text-shadows, box-shadows, and gradients just because he believes gradients and drop shadows are a passé design trend from Web 2.0 -- something he probably read in an article. I could get offended that he's essentially neutering the accessibility of the designs I'm making and that nobody will ever see the full design as I intended it -- but I got paid for writing it just the same, so what do I care?! He has the website he wants and I have employment. Bottom line is I'll keep delivering value whether he wants it or not, and once that handoff is made it's his to do with as he pleases.
> subtle text-shadows and gradients to enhance legibility (scientific reason)
Honest question: can you provide citations for this claim? I have been trying to produce a compendium of all the empirically-supported claims about visual design affecting usability. I have yet to find citable claims for these cases, and would greatly appreciate a leg up in the effort :)
Let me disagree with that one sentence. Martin Fowler has described how design and agile can work together well [1]. What many people describe as "agile" is actually what the agile community names "cowboy development".
> I threw together a form builder by re-purposing old code from a side project that took me two years to polish. My proof of concept let an admin change guidance on the fly, without touching HTML. Fields could be re-sequenced, data types changed. Business rules could be linked to fields dynamically, and applied to single fields or entire form sections. Data was saved to a SQL Server (already in wide use in the estate). My proof of concept took 4 weeks, and on the basis of that I estimated a 6 month project. When I presented the idea it was viewed as being too complex, and therefore too risky.
Was your flexibility in the right direction? I could tell you exactly the opposite horror story, where an enterprise architect built in a bunch of flexibility that we never used, but we still spent a load of time changing everything as the customer requirements changed. Only we had to spend twice as long because we had to change everything in three places because the "flexible design" required that.
> Of course soon after the customer decided that the guidance for the printed form wasn't appropriate for the online version. And of course the customer wanted to analyse data across all forms. What was a 6 month piece of work took 3 years. And still no ability to analyse data nested in Mongo documents. Any change requires hours of re-work, regression tests and sign-off.
Why would your way have been different? If you are using the database to store logic in, it's still logic and still needs the same level of testing as it does in code. And you don't get existing tools like VCS, release systems, staging environments to help you with that.
I know we're just comparing anecdotes, but everywhere I've worked the business effectiveness has been directly proportional to the extent to which they actually followed the agile principles.
You know, a lot of failed projects that I've worked on were my fault, because I was that guy. The guy who questioned the seasoned greybeard. The guy that knew JavaScript is the only answer. That agile is the only way to manage development. And if agile didn't work, it wasn't because agile was a bad choice, but because you were doing it wrong. That document databases can replace relational databases. That dependency injection is good. You get the idea.
What sounds good, looks good and feels good is sometimes, and sometimes very often, very bad.
The thing someone in their 20's doesn't know is that people in their 20's don't know. People in their 30's that are even vaguely in touch with themselves start realising they don't know. In my late 30's I questioned whether I'd wasted my entire career by not learning anything. Today I just know. I know an idiot before he opens his mouth. And I see a bad design decision happening when the team is assembled, before anyone has even started to design anything. That the 5th sentence in the 5th paragraph of this post is badly structured. Most importantly, today I know when I don't know.
I still fuck up. Spectacularly so, sometimes. But not as often as 30 years ago. And not as often as last year. Or last month. I happen to have been right in my post above. Hindsight proved that.
I have a text editor that creates forms. I have a good abstraction for what a form is, meaning I can represent one with a simple class that expresses only the things that make that particular form unique. And I have a whole bunch of standard tooling around managing this representation of forms, like my VCS.
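Roughly this shape, in Python terms (a sketch with made-up names; the real thing also carried sequencing, sections and persistence, but the idea is that only what makes a particular form unique gets spelled out):

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class FormField:
        name: str
        label: str
        guidance: str = ""                      # editable copy lives here, not in HTML
        show_if: Callable[[dict], bool] = lambda answers: True

    @dataclass
    class Form:
        title: str
        fields: list = field(default_factory=list)

        def visible_fields(self, answers: dict) -> list:
            return [f for f in self.fields if f.show_if(answers)]

    # Only the unique bits are declared; rendering, storage and versioning
    # live in shared tooling (and the declaration itself lives in VCS).
    intake = Form("Intake", [
        FormField("has_pet", "Do you have a pet?"),
        FormField("pet_name", "Pet's name",
                  guidance="As it appears on the licence.",
                  show_if=lambda a: a.get("has_pet") == "yes"),
    ])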
It sounds like you haven't really lost your certainty. "I just know. I know an idiot before he opens his mouth." - no, you don't, you can't. Being older doesn't prove anything - some of the worst decisions I've had to work with were made by the most senior people, and vice versa.
Indeed it doesn't. I know because I pay attention.
I'll put it another way - you can be young and not know. You can also be old and not know. However, you cannot be young and know. But you can be old and know. Old is usually, but not always, measured by time.
First, I sympathize with your reaction against building in functionality that is not required. This typically causes a lot of problems. However, for a library, that is less of an issue because you aren't maintaining it. It sounds to me like the guy used what was essentially a library he had previously developed to do this and that's a step towards having the flexibility in the right place.
> Why would your way have been different? If you are using the database to store logic in, it's still logic and still needs the same level of testing as it does in code. And you don't get existing tools like VCS, release systems, staging environments to help you with that.
My largest open source project does a lot with database stored procedures. We use all the tools you mention above, and we have written some of our own tooling to make that easier. So I don't think there is any reason why those tools don't work. So you have to spend a little time on tooling? That gets paid off many times over.
> I know we're just comparing anecdotes, but everywhere I've worked the business effectiveness has been directly proportional to the extent to which they actually followed the agile principles.
There are many things that agile gets right. As a reaction against the waterfall engineering model, the emphasis on integrating design into coding is a welcome advancement. There are many other things as well.
But agile development, because of short release cycles, tends to de-emphasize design and code contracts because it comes from the assumption of changing requirements, and here it gets a lot of things wrong. You can't do agile development if you don't have a stable platform to develop on because otherwise the ground is always being pulled out from under your feet. That platform needs to be well designed, and I am not sure you can do that iteratively without a fair bit of up-front thought. This doesn't mean a waterfall (except in the area of UX). Rather it means getting the right amount of design done at each level and having the right person do it.
> My largest open source project does a lot with database stored procedures. We use all the tools you mention above, and we have written some of our own tooling to make that easier. So I don't think there is any reason why those tools don't work. So you have to spend a little time on tooling? That gets paid off many times over.
Programming logic in the database, or config files, is a huge antipattern I've seen many times. People seem to have this huge blind spot where you can take the exact same piece of logic and label it "code" or "not code" and in one case it will be subject to review and signoff and all that and in the other it won't be.
You can always write your own programming language and tools, but you're unlikely to do better than the existing ones. So why not just use one of them?
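To illustrate the blind spot (a made-up Python example): the same business rule, once dressed up as "just configuration" and once as code. Only the second version tends to get review, tests and tooling, even though they do exactly the same thing.

    # "Not code": a rule hiding in a config blob, often edited casually.
    PRICING_CONFIG = {"discount_rule": "0.2 if quantity >= 10 else 0.0"}

    def discount_from_config(quantity: int) -> float:
        # Evaluating strings out of config is the part nobody reviews or tests.
        return eval(PRICING_CONFIG["discount_rule"], {}, {"quantity": quantity})

    # "Code": the same rule, now subject to VCS, review and a test suite.
    def discount(quantity: int) -> float:
        return 0.2 if quantity >= 10 else 0.0

    assert discount_from_config(10) == discount(10) == 0.2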
I hate that the term has seen total dilution. I have been on outstanding teams that called their methodology agile, but it was an adjective at the time. In that era.
Not the fault of the word, though. I'll keep using it.
> Fast programmers build hacky tools to get around the hacky tools that they built to get around the hacky tools that they built to help them code.
This resonates so much with me. Sometimes I worry that I'm falling behind on keeping up with technology, but whenever I try to learn something new it seems like I have to install a package manager, then another package manager inside the first one, then pull a million other tools and libraries and frameworks before I can even begin doing something with this stuff I'm trying to learn. No, thanks.
I love computers, I love computing, I love thinking about solving problems using computers, but I hate the direction things seem to be taking. I'm seriously considering a career change in the immediate future because all this crazy tooling truly burns me out.
Try Go, seriously. There's just the one tool. No package manager, no dependencies... And the code it produces is the same. Just a binary. "Here, have this tool, just run it." It's small, simple, and the community actively searches out simple solutions.
I found Go to be quite cumbersome in comparison to npm. Magic folders in the filesystem and such. With Node you never have to do more than `npm install` and everything is ready.
npm (and the CommonJS module system) is the #1 reason I use Node.js. I don't care if Javascript is ugly, npm totally makes up for it. I don't get why most module systems implicitly import symbols into the scope (Python, Ruby) or, worse, pollute the global scope (PHP).
Brew (Ruby) is not that bad but Python's package ecosystem is a complete mess (distutils, setuptools, pip, etc.). Cabal (Haskell) is pretty good too in comparison to C/C++ (installing dependencies usually involves following instructions in a README file, that is, if there are available instructions for your OS). I haven't tried Go.
I believe the language of the future will be built around, and win mainly because of, its package system/ecosystem.
npm is easily the worst. Python is a lot better. I hold the C library model as the gold standard, so we're not going to see eye to eye.
The worst part of npm is the sub-dependencies. Once enabled, they proliferate endlessly. Each time I improve the efficiency of deploying a strictly controlled tree of modules, the front-end devs manage to double the number in use. 200, 350, 650, over 1000... oh, and these improvements come from realizing that npm-shrinkwrap isn't 100% reliable, and ending up with scripts that compare `npm ls` output and do a full wipe and replace with a tarball for even the slightest change to the dependency tree.
If some serious bug is discovered in one of these things, there's no way the 20 copies of it at various levels of this tree of 1000 modules are going to get patched. No one really has a handle on what's in these hundreds of megs of various versions of modules. This is the true fast code movement - serious problems can't be fixed in there, they'll just be ignored and replaced with other bugs in the twice-yearly full rewrite. Don't get me wrong, our frontend devs are among the best I've seen, but the pressures and environment they're in make them focus on churning out the latest fad in web design and skimping on engineering quality everywhere possible.
Python doesn't do implicit import of symbols into a scope if you don't use `from blah import *`. Don't use the `*`.
Managing, and using, a list of Python modules, each at a particular version, is way simpler, more reliable, and more efficient than the npm style. Pip can reliably list installed module state (freeze) and install from source tarballs or a git tag checkout.
You need to fully control and understand the versions of all libraries installed and used on a system in order to have a fully reproducible deployment state, and be able to reliably roll back to a previous state. Given that, the Python or C model is much easier to work with.
We definitely have a philosophical disagreement. The fact that an app (especially a server-side one, where storage is usually cheap) has hundreds of dependencies signals to me that the package system is successful. It tells me that there is a lot of code reuse going on and that it's presumably easy to pull in new dependencies. By now, I believe the number of npm packages outstrips that of any other package manager by far, despite npm being relatively new.
So basically, I think your point is that a project should have as few dependencies as possible whereas I think the opposite is good. Both philosophies have their pros and cons. If you are writing mission critical software that handles financial data, it's probably a good idea to know about every line of code that gets executed. However, if your goal is to release something as fast as possible, have a more manageable code base and security is not as critical, using lots of dependencies makes sense.
Go's model is pretty good. This is why there's just a single $GOPATH and every package has exactly one location on disk, so even if 100 dependencies use the same subdependency... you only have to update one spot if it needs a patch. And again, this only needs to be done at development time. It's baked in when you compile, and from there on, deployment is just a file copy of your binary. No dependencies during deployment.
Maybe I'm missing something, but this seems like a bad idea to me. What if projects are relying on differing versions of the same package? What if two developers collaborating on the same project have different versions of that dependency installed?
Sure, it's space-efficient, but disk is cheap, and developer hours are expensive. I certainly wouldn't want two employees burning man-hours trying to figure out why some app is behaving differently on their respective machines, only to fix the problem and unwittingly break another project in the most expensively subtle of ways.
I'd really like to think I'm missing something though, I certainly don't presume to be more insightful than the collective Go team.
So, in theory, any particular import path should be stable. There are no "different versions" of the same import path. A different version would be a different path.
So, for example, github.com/natefinch/lumberjack is an import path. When I want to change the API, I have to put my project at a different import path so I don't break projects that use this version. I could just make a new github repo called lumberjack-v2 and that would work fine.
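For what it's worth, gopkg.in bakes this same convention into the URL: the major version is part of the import path, so a breaking change means a new path and old importers keep compiling. A minimal sketch, assuming gopkg.in/yaml.v2 and its Marshal function (nothing here is specific to lumberjack):

```go
package main

import (
	"fmt"

	// The major version lives in the import path itself, so a
	// breaking v3 would be a different path, and code importing
	// v2 keeps building untouched.
	yaml "gopkg.in/yaml.v2"
)

func main() {
	out, err := yaml.Marshal(map[string]string{"speed": "steady"})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // prints: speed: steady
}
```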
Of course, reality and theory don't always see eye to eye, and the Go community realizes this. That's why there are several community-made tools to help out.
The most well-respected one is godep, which has the ability to pin revisions on your dependencies. That is, it'll look at the git commit hash (or equiv for hg/bzr/svn) for every repo that you depend on, and record it in a file in your repo. It can then reset those dependencies to those specific hashes. Then all members of your team just need to use godep to make sure they're all using the exact same git commits for all dependencies. Bam - now you're insulated from someone updating a dependency with breaking changes, and your team can test new revisions and decide when they want to start using them.
Godep has a second function that lets you copy all your dependencies into your repo and renames all import statements to reference the copied paths. This is an even more extreme version of the above, which not only insulates you from changes in dependencies, but also insulates you from those dependencies disappearing entirely.
Godep is definitely a good tool. One thing that gives me pause, though, is that this functionality is not built into `go get`. Failing to put hard versioning into the canonical package manager seems short-sighted to me. I'd much prefer that over something like the race detector.
There's a philosophical disagreement here. It's true, developers are expensive. But I think it's a fallacy that what's less efficient for a computer is better for a developer. The npm-style dependency tree enables and encourages much more complexity, which developers then have to deal with when debugging or deploying. They need new tools to help them get a handle on the huge number of modules. That's more expensive than a system kept more "under control".
> I think it's a fallacy that what's less efficient for a computer is better for a developer.
I don't think that was ever said; I claimed that trading disk space for developer hours is a good trade-off in this case.
> The npm-style dependency tree enables and encourages much more complexity, which developers then have to deal with when debugging or deploying.
I don't believe this is the case. When developing an app (as opposed to a library) it's considered a best practice to check your node modules into source control, ensuring that there's never a mismatch between installed dependencies on developer machines. If you don't want to do that, you take what comes - but even before I picked up on that, I never ran into issues even when collaborating with 5+ developers. You just need to make sure your package.json locks your versions appropriately.
FWIW, Go namespaces everything by default, so if you import a package called foo, all types, functions, etc from that package are namespaced by foo, i.e. "foo.whatever"
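A tiny, self-contained illustration with a standard-library package (nothing beyond the stdlib assumed):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Everything from the imported packages is reached through the
	// package name; no symbols are dumped into this file's scope.
	fmt.Println(strings.ToUpper("namespaced by default"))
}
```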
The thing I love about Go's packaging is that your VCS and package manager are the same thing. There's no need to wonder where the code from package foo comes from... because you know you imported it from github.com/jimbob/foo. It also means there's no fighting over namespaces, since it's just done by domain name (i.e. I can have the package npf.io/foo because I control that domain, and I don't have to fight over who gets to use that name).
npm is without a doubt the best module system I've come across in my years of development. RubyGems is kind of a pain, .NET dependency management is a joke, Python is super fragmented, and Go doesn't have one. Godeps is an okay tool, and `go get` is cute, but I have yet to use a package manager that couldn't stand to borrow a thing or two from npm.
In Go, the dependencies only matter during development, not during deployment. Let the developer hash out what specific version of each library works, and then at deployment it's all baked into a single statically linked executable.
That's the nice thing about using tools written in Go. There is no npm install. There's a file copy. Bam, done. You don't even need to know it was written in Go unless you're using the code for your own development.
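To make that concrete, here's about the smallest possible sketch; the file name and the printed string are just placeholders:

```go
// hello.go: `go build hello.go` produces a single statically linked
// binary. Deployment is copying that one file to the target machine
// and running it; no runtime, package manager, or install step on
// the other end.
package main

import "fmt"

func main() {
	fmt.Println("copied, not installed")
}
```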
The magic folders in the filesystem exist in every package manager, it's just more obvious in Go. So, for example, npm puts code it downloads into /usr/local/lib/node. I'm very far from a Node expert, but this doesn't sound terribly different from GOPATH.
Go has a couple wonky conventions, but it's a very very simple system. Once you accept and learn to deal with the GOPATH quirks, that's pretty much the only tooling weirdness in the whole ecosystem.
There's no required package manager. By default, your VCS is your package manager. You can write code for a long long time with just the default go tool. Those are optional tools that you can use once you become comfortable with the language and understand their tradeoffs. You really don't need any of those tools until you want to write a professional project with multiple people on a team.
Yeah, I was also about to post a comment recommending Go. It's not completely immune to tooling bloat, but it's actively reductionist in that respect. It's relatively pleasant.
I don't know that I agree with that. Tooling bloat was specifically mentioned, and Go is quite good with respect to that.
There's just the one go tool to install, which can be installed just by unzipping a zip file. From there on in, you don't need any other tools. It just uses VCS for package management. You don't need an IDE... the single go tool has support for pretty much every part of the development cycle - compiling, testing, code formatting, profiling... there are no other tools you need to create real deliverable software. I've been writing Go for over two years and still haven't added any other tools to my standard workflow - git and go are the only two commands I type while writing go code. (At work we use a revision pinning tool, but I haven't needed that on my side projects.)
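As a small example of the no-extra-tools point, testing is just a file-naming convention plus `go test`; the Double function below is a made-up stand-in, kept in the test file only so the sketch stays self-contained:

```go
// mathutil_test.go: the stock `go test` command discovers and runs
// this. No separate test runner, build system, or IDE plugin needed.
package mathutil

import "testing"

// Double lives here only to keep the sketch self-contained; normally
// it would sit in mathutil.go alongside this file.
func Double(n int) int { return 2 * n }

func TestDouble(t *testing.T) {
	if got := Double(2); got != 4 {
		t.Fatalf("Double(2) = %d, want 4", got)
	}
}
```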
It is not the fault of these tools. You can think of code as data, and programmers as producing this data, just like many other professions produce data too. Programmers often have a variety of far better tools to handle this data. I think the reasons for this are, of course, programmers dogfooding, but also the culture of life-long learning, which shrinks UI design down to just API design. An intuitive GUI for a crowd less open to learning new things is much harder to make.
I nevertheless see this as a gift. It is your attitude toward these tools that makes you unhappy. I often have this strange feeling of missing out on an even better technology the cool kids might be using. I think the key here is to be confident in your choice of tools, improve them when there is pain, and identify when something is good enough for some time (measured in years) before you reconsider again. In the end it is also about how you handle the tools. This sounds very reasonable, but in practice it is often hard to overcome the urge to use the new shiny tool that seems to make you a better programmer, when you define yourself as one.
I don't think the extremist view of skipping them altogether and condemning the whole thing is the solution. I believe people who feel they can't keep up in this imaginary race either search for ways to define themselves differently, e.g. in alternative careers (like you) or in an opposing way of programming (like the article's author), lumping everyone else together as people who do it wrong. There is a middle ground between the Node developer with his weekly-changing recursive package managers and the ColdFusion shop not doing SCM.
> Curious which languages you use that don't have this problem?
None. It's why I'm burnt out and want to do something different. Not something different from my current job. Something different from developing software.
I've lost count of how many times I've spent entire work days battling against the environment that is supposed to help me write the code I need to write, to make a customer happy. Entire days of trying to figure out why IIS is acting crazy, or why Visual Studio is crashing, or why Windows is so damn slow all of a sudden. Entire days of trying to align the planets so that service A can talk to service B, because for some reason there's a cryptic SSL error that gives me no helpful information about what's happening. Or entire days just pulling things from a dozen different sources and figuring out where to place each of them so they can work together.
I've recently been to the Living Computer Museum in Seattle. You can play around with old computers there. I was fascinated by the old machines where you'd boot into a REPL. The shell was a REPL to the language you programmed the system in. I think despite all the advances we've made in the past decades and all the great performing technology we have nowadays, we've lost that essence of simplicity along the way.
VS is an awesome development environment, but I was having your problems with Windows and programs running under Windows for most of my adult life. I switched to Linux full-time about 5-6 years ago, and I've been happy ever since. I can leave the OS running for weeks with no issues, and it would probably go for months if I didn't turn the power off.
Oh, and with Linux, if you don't run a graphical desktop environment like Gnome or KDE (which most distros have the option of doing), then you boot into a REPL you can program the system in (Bash shell). Windows used to have the same with DOS, but no more...
Simplicity for simplicity's sake isn't a virtue. I often feel that Go chooses that path ideologically, whereas real-world use cases should get more of a say (yes, it's the generics and shitty type system thing again; I'm not expecting us to agree, I'm just pointing it out).
To that end, Go doesn't work for me. I use a fairly straightforward stack atop the JVM because I can hold the whole thing in my head (nothing in either Dropwizard or Play is deep magic) and have the expressiveness in Scala to be clear with my code.
And the issue isn't so much whether or not a programmer can hold the "entire system" in their head so much as, if they don't need to, they can be doing a whole lot more. This is, after all, what computers are good for.
I can do a lot more when I can trust my system to evaluate my code and throw compile-time errors. I can't trust Go's type system in the same way I can Scala's.
I noted the simplicity of the stack mostly to forestall the usual tired complaints about complexity, nothing more. What you call "academic", I call "building at scale."
I feel the same way you do about everything you said, including considering a career change for this same reason.
I'm now fiddling with Tcl/Tk as a way to quickly throw some visual ideas around. I'm using it to build an idea for a tabular programming language that is more visual and allows for letting go of incidental details like syntax, argument order, etc.
> My wife often comes out into the yard and asks me: “are you coding?” Often my answer is “yes”.
I think I need this quote in my kitchen! My girlfriend often pounces on me with the accusation "YOU'RE NOT WORKING" if she sees me out of my chair moving around, doing mindless house chores, half-vacantly tossing a cat toy around the house, or running errands, but the reality of _design_ is that it happens all day. And when you're done 'work' for the day, ideas and solutions still happen.
I'm working in the shower, I'm working all evening. I'm working waiting for the bus. I can either let it pass or write it down, but I can't seem to shut it off.
But I don't mind. I've been learning to slow down too. I used to be a perfectionist and when I was billing a client I would only bill my butt-in-chair-and-head-down time as billable hours, but lately I realize if I sit and think, plan, and research for 2-4 hours, and spend 2-4 hours implementing the results of that thinking, I get more done each day than 8 hours of butt-in-chair-head-down typing.
The key is to approach it as design, and that's where my training as a graphic designer comes in handy. The Design Process was drilled into our heads and I've only recently been applying it to my coding :)
"Sunday: Lay on my back most of the day, reading, sleeping and day-dreaming. Very literary. Some women, however, resent it, so young writers should choose their wives with care. Many a promising career has been wrecked by marrying the wrong sort of woman. The right sort of woman can distinguish between Creative Lassitude and plain shiftlessness."
> The casualty of my being a slow programmer among fast programmers was a form of dysrhythmia – whereby my coding rhythm got aliased out of existence by the pummeling of other coders’ machine gun iterations. My programming style is defined by organic arcs of different sizes and timescales...
Boy oh boy. Look, I'm all for coding slow, taking time and understanding what you're doing. But there is something to be said for "playing well with others". We're all trying to make things, not be the backdrop for your exquisitely-crafted sense of craft.
To me that reads a lot like "I would have done something really good if I weren't surrounded by those philistines". Well, maybe so, but you are surrounded by them. So choosing a method of working that does not result in "dysrhythmia" or whatever strikes me as rather prudent.
"Everyone around you is building this bridge in clay.... what are you doing fooling around with this 'steel' crap for?"
Of course you can spin it the other way too, which just goes to show this isn't a useful comment. Don't do what everybody else is doing just because they're doing it, do the right thing. If that does happen to be clay, great, but there's a great deal more people using inappropriately sloppy engineering than people using inappropriately careful engineering, so I'd expect to hear a lot more about the former.
The only people in real life that I've met that I would consider saying were being inappropriately careful were really just masking fundamental incompetence under a veneer of concern about process. I'm not sure I've ever seen anybody correctly apply massive overengineering to a programming task in real life. I'm absolutely sure it has happened, but it is not common enough to be worth talking about.
Don't do what everyone else does if you can do something better that leads to better results. The last part matters -- otherwise you're just crapping on everyone who can go faster than you and attributing the difference to "quality" (conveniently left nebulously defined).
Put another way, how do you know what you're working with is steel, and others clay? What if your "steel" is really just clay that is slower to produce and more brittle? How do you reassure yourself that this isn't the case? The answer is, make a big deal about going slower and drink tea and stroke your beard and say "hmmmm." and lots of other things that connote wisdom but do not actually bring it to bear.
I am extremely skeptical of all this, not because I am unsympathetic but rather the contrary. We can get extremely wrapped up in our signifiers of skill and wisdom, to the point that we mistake the map for the territory. Instead of cursing the punk kids, try to learn from them and beat them at their own game.
I find this sort of thing fascinating. I think it nicely showcases how much perspective matters here. I almost feel like the whole fast/slow dichotomy is missing the point, since two polar opposite views can agree. What the point is... well, I guess that's why there are hundreds of replies in this thread.
One of the things I have had to learn is how to code slowly in a fast code organization. This means understanding where I am going and doing lots of small iterations to get there.
One of the reasons for small iterations of a working system is that it gives you a limited QA footprint. This is far underrated. But it means that over the course of many iterations, my code ends up coming out well designed and elegant, and playing well with others.
Short iterations and fast releases of a production web site, for example, are not the enemies of slow coding. In fact they have a role to play as well. What is the enemy is trying to do everything at once within a fast release cycle.
> But there is something to be said for "playing well with others". We're all trying to make things, not be the backdrop for your exquisitely-crafted sense of craft.
I had the same reaction here. Everybody wants good design. This article seems to walk the line between providing valuable insight and just complaining about how other people work.
The problem is that clean code is only a net gain when it sticks around, and some teams are toxic to clean code. Let's refactor to use library X, no wait, library Y. There is a conflict with foo unless you use bar version 0.37c. Or the classic: "Works on my machine."
I used to think it had to stick around for some time. In practice you start paying for bad code after about a month and you pay back many times over. If a bunch of people are throwing poor code in, the payback becomes exponential before you know it.
Most of my work at the moment is done for a client which had exactly this problem, and it nearly destroyed their business. I came in about two years into the cleanup and things were still a mess. With a lot of work all around (and some pushing by me to focus on code contracts more), things have continued to steadily improve.
But the problem you mention is real and it can sink businesses.
I hesitate to recommend my process to other people, because I don't think I'm a very good programmer.
But for the past year or so, I find that I program best by actually writing out my program in a notebook (in my case a quad-ruled lab notebook). I don't even start typing until I have it laid out pretty much in its entirety on paper.
This sounds ridiculous (and I can imagine it's not practical for all types of programming), but I've found that it's been tremendously helpful in getting me to understand what all the code that I'm writing does.
Most of the code I've written this past year has been in Haskell, so that helps somewhat by not having a lot of syntax to write down, but I'm sure I'd be doing the same thing even if I was writing in Java.
I find it very hard to convince my students to pull out a pencil and a notebook before they start coding.
"Draw some pictures. Make some notes. Write a function that you suspect will be tricky. Make a flowchart. List the methods that will have to be in that class. Write out how the user will interact with it. Try to list all the pain points for the user. You can write it any way that helps you think. Make up a notation if you want."
Yes, the assignment is due and they need to get code on the screen. But one of these days I'm going to give them a programming assignment and tell them that I only want to see their notes.
I had to add a new feature to a decades-old user-facing codebase which had to interact with all of the existing system. The implementation of the core of the new feature took only a few hundred lines of code, since it was basically just a bunch of graph operations. Interfacing with the rest of the system was hell.
I began by just investigating the requirements and designing on paper. I realized quickly from my paper designs that a few graph operations would be sufficient. Understanding how it fit with the rest of the system was only achievable for me by documenting on paper, for myself, how the bits and pieces I had to touch actually worked.
Took half a year to get the sucker ready.
Worked just like it was supposed to and my colleague praised how easy the codebase was to extend later on.
As a beginner I recall myself asking other programmers "is using pen and paper cool before writing any actual code?" and they giggled before nodding in agreement.
I had this false expectation in my head that when programmers tackle a new problem (be it writing a small program or solving a challenge) they're able to think about it for a few minutes and then straight up write the code, and that if I can't, it's because I suck and this job isn't for me.
Then I realized that what I expected was unrealistic and the "no pen and paper" stage only occurs to people who already met the same or similar challenge before and can recall even the slightest hint of what they did back then.
I think most people would agree that writing things on paper first is a total necessity if you are writing mathematical code (I mean, are you going to do the equations in your head and type them out?), and I think the benefits to paper increase in proportion with the mathiness.
On the non-mathy side, I find it useful to take the concept of "rubber duck" debugging (which is totally applicable to original design and creation, and not just debugging) but write a "duck" document rather than talk out loud. Aside from not looking crazy, I find writing easier for thought organization.
Sometimes preparing slides can also be a nice way to really force your thoughts into simplest and clearest form. And hey, when you're done, you can present it to your co-workers.
Paper is my favorite programming tool. The beauty of paper is that it's easy to find out when you're wrong, when you're pursuing the wrong path, without going deeply in and getting distracted by the small-scale details of the code.
No, I need to get away from the computer completely. Word docs (or any other computer typing) are still bound by the computer's structure and the computer's distractions.
With paper, I can draw graphs of relationships, write tables, add notes to things I drew earlier, etc. It's far more expressive, and far faster, than any computer tool I've tried.
This is the way I work too. I don't handwrite pseudocode, but I draw data structures and relationships in varying degrees of granularity at least until I'm sure I understand the problem. Only then do I begin writing tests and coding.
Paper is awesome. I find I rarely visit my notes afterwards. I just need a medium to spit out my thoughts, or I end up going in circles. As soon as I flush to paper, then my mind is unblocked to go solve the next problem. Most of the time, I remember most of what I've written down, so the paper is write-only.
My best work I do while half-napping with a notebook by my side.
You get shitty fast programmers and shitty slow programmers.
You get good fast programmers and good slow programmers too.
'Fast' or 'slow' in isolation are not really a measure of anything valuable, except perhaps how an individual fits into the team culture.
The crucial thing to measure is how long it takes to get to good, robust software that does what it needs to do.
There are many strategies to achieve that (agile or big design up front or others) but software engineering as a medium is too immature to have found the one true way (maybe there will never be a one true way). Up front design and emergent design have both succeeded and failed on many occasions.
This post reads to me as someone who doesn't like working with younger teams as the work style often doesn't fit, and therefore concludes the team's working style is wrong, rather than he just subjectively doesn't like it.
Most of this thread sounds the same to me too.
Maybe the benefit of a fast programmer is it is quicker to find out if they are shitty?
Agreed. I am fast, but refactor constantly, so there really is no "final" code or "finished" result. The current form is continuously honed and refined. My cycles are very quick, but the design is not rushed; rather, it is well thought out over many iterations.
He makes a reasonable point, but this piece is largely a strawman. And then there's this:
> For the same reason that many neuroscientists now believe that the fluid-like flow of neuronal firing throughout the brain has a temporal reverberation which has everything to do with thought and consciousness, good design takes time.
Yea, the article kinda reads like "bad programmers are bad, and I'm a code master, and the kids don't program like me, and they're bad-bad-bad. their fast is so slow. my slow is so fast."
It's weird how he like does this thing where he equates their fast with thoughtlessness, and his slow with actual fast.
That being said, I'm all about white-boarding, project planning, and clearing out dependencies, but if an engineer is slowly coding something without any deliverables, chances are they're going to hit a wall near the deadline. I've seen this happen too many times.
> As long as everyone makes frequent commits, and doesn’t break anything, everything will come out just fine.
He is presenting the arguments and beliefs of his intellectual opponents in a way they probably have not or would not, and then attacking that argument.
Now in a piece like this there is nothing wrong with that -- but in terms of the strength of an argument, it is no greater than just enthusiastically stating his preference.
Yes, typing speed, when looked at in isolation, does not cause the production of high-quality code. But there is an argument to be made for reducing the friction between one's mind and the code on the screen. Insofar as we can ignore that transference of data from biology to technology, we can think more fully and clearly. So, emacs commands that are fully ingrained in one's subconscious may be good, but probably not the C-u 12 C-x TAB; at least for somebody simple like me.
I used to believe that. It's why I picked up CoffeeScript. I thought the less there is to type, the more you can focus on the problem rather than the code.
I found out I was wrong.
The redundancy actually helps. There's somehow a calming effect in it. I don't know how to put it into words.
I found out that I never actually think while typing per se. I have to first jot down my thoughts on paper. Typing out the code becomes almost a mechanical process. I already know what I'm typing, I've figured it all out and put it in paper (or some plain text note in another text editor or note taking app).
There's stress in trying to hold it all in your head. That stress is released when you jot things down. Then you can just mechanically write out the code with far less stress.
The explicit syntax means you have more to type but less to think about.
I know it sounds paradoxical. But that's been my personal experience.
It's not paradoxical at all. I think that the denser the syntax (and thus the shorter the code), the less you can reason about what you write while writing it, because the mental overhead of the writing itself goes up. Compare stenography to ordinary Latin script in that regard - stenography is almost never used to compose ideas, only to transcribe them.
My hands sometimes hurt, and there's a hardwired response that makes me feel physically awkward if I start typing out a lot of repetitive code. When I read Java or most C# code, I just cringe at all the extra crap, at all the pomp and boilerplate. Every time I have to write it, I feel annoyed that I need to type more stuff out like an idiot, because some compiler writer had misguided ideas, or because they'd already dug themselves into too much technical debt to allow more expressivity.
It doesn't work that way for me - for some reason the syntax makes a huge difference in my productivity and level of energy.
Coffeescript works way better than JavaScript, Elixir better than Erlang, Slim better than ERB. It shouldn't make that much of a difference, but it does.
Actually, I've found that the act of moving what's in my mind to a source file is in itself an act of revision for my ideas. I don't need to type fast for it to happen, because I'm not going to write anything at all until I have a solid idea or path in my head. The actual act of typing consumes less than 10% of my programming time, and is for the most part a mere formalization of my thoughts. Would a 50% increase in typing speed make me finish the feature in 4 days instead of 5? Nope.
I agree with you to a point. I am a touch typist. There's something to be said about a lack of friction.
However, at the same time, there is also something to be said for friction and for learning to work with it. It is about time and deliberation: the concentration of holding a thought in your mind for longer.
I am a touch typist and I appreciate that I am. However, I also find that spending some time periodically with a quill pen, writing thoughts down with that, is extremely valuable for the opposite reason.
One of the core challenges is how to slow down thought. This is something which has many, many benefits, and touch typing at the speed of thought would not be conducive to quality code.
Have to disagree with this 100%. A touch typist can always hunt and peck if they want to or need to for some reason (would love to have a reason for this being a benefit though). Being able to touch type helps greatly with just about everything when you are using a keyboard. Sending emails, writing letters, filling out forms, coding, the command line and so on.
Not only that, but it's extremely easy for the average person to learn to touch type (I learned from a book and was proficient after about 3 weeks).
I really can't think of a good reason (other than taking the time to learn which as mentioned is easy enough) to not learn to touch type. I think it's one of the least talked about productivity boosts out there.
I agree but the concentration that comes with slowing down thoughts is also a very valuable skill. I suggest learning to write with a quill pen, or at least a dip fountain pen.
Coincidentally many of the better coders I have known have valued dip fountain pens.....
First, I think that the key is a mindset, not an age--I'm the youngest person at my company and yet I'm consistently the one pushing for smarter, smaller, better-designed solutions to our problems. It seems that a lot of people, especially those with an academic background, forget one of the three qualities of a good programmer: extreme laziness.
Developers (often younger ones) that manically hop from framework to framework and sprint to sprint because design work is "too slow" end up expending more energy than if they'd just been lazy and really thought about what they wanted to accomplish before they worked on it. This is especially true in business, where that mindset is invaluable for skipping work you'd otherwise do while finding fit.
Second, I disagree that on a good team engineers aren't fungible: the fact is that, if you have one person who "owns" a part of the code, you are inviting disaster and bad design. This, and not its converse, has been proven to me over and over the last decade. Sure, one person might have the domain expertise or familiarity with something to really nail it, but they should never be the only one on the team who can do so.
If they are, you slip them pizzas under the door, make them happy and productive, and then sack them as soon as you can extract their knowledge, for they are a liability waiting to turn into a problem.
Third, and perhaps most controversially, the author supposes that good design is important. For almost any business, sadly, it isn't. Nobody gives a shit when you have deadlines to meet and bills to pay--and if you're lucky, you can push that burden up onto the next poor bastard to inherit the codebase after you've cashed out.
I really, really wanted to believe that good design mattered, that somehow it had intrinsic value. Sad thing is, it doesn't. BeOS failed. Plan9 failed. Inferno failed. Transmeta failed. Sun and SGI failed. Smalltalk failed. Lisp failed. Erlang failed (and cleverly stood back up, but not because the majority knew or cared about design).
But NextSTEP didn't fail. It succeeded, in the biggest, most spectacular way I can imagine an operating system could. So, design does matter. Design seems to matter a great deal at Apple, which backs up your other point that most other businesses couldn't care less about their software's design and that NextSTEP at Apple is an exception.
I brought up NextSTEP because it was a notable success that was not mentioned by the parent comment.
And arguably, NextSTEP is a success story because of its design which enabled it to crossover successfully into mobile through iOS. NextSTEP's blueprint allowed iOS to operate at a performance level that other OS'es and architectures could not match when it debuted.
BeOS was fine, but it was rightly or wrongly seen as a spiritual successor to Amiga, which only had a niche market.
NextSTEP wanted to be a spiritual successor to MacOS, but never really could until Steve Jobs forced it in there.
Even so, the technical design points of NextSTEP were well-suited for a resource constrained mobile platform, and that is one reason iOS has been such a success.
Agree with you that NextSTEP is well designed (at least in the upper layers, not necessarily the Mach-based kernel...) but the reason why it succeeded where BeOS failed are not technical: BeOS failed because of network effects (market too small for ISVs) and Microsoft actively preventing OEMs from installing it, and NextSTEP avoided both of these problems by being bought by Apple.
iOS is not really running on a "resource-constrained" platform; the first iPhone already had 128MB RAM, which was a lot of RAM for a PC in 1999, in particular considering the low screen resolution. Apple just got the timing right; the mobile hardware became powerful enough to run "real" OSes (instead of things like Symbian or WinCE that were actually developed for resource-constrained hardware) around 2005.
" if you have one person who "owns" a part of the code, you are inviting disaster and bad design. "
I think we have different definitions of ownership. To me, ownership signifies that a certain person is aware of the history of a particular area of code, is responsible for keeping it in good shape and is usually the go-to person for all modification, or at least code reviews all work done to the code.
This does not mean that they should be the only one who understands it and has the ability to modify it.
I think we both agree that the understanding of the code should be dispersed, so that any other employee can get up to speed with the code just in case the prior owner exits the organization.
I'm working at an organization where some parts do development with the "share all" mentality and some try to maintain code ownership in the way I described. The code ownership strategy seems to work much better, but, YMMV as always.
When code ownership is not made explicit, pockets of "accidental ownership" can actually develop even in the "share all" codebase: something that only one person has ever touched, who ends up doing all the modifications because all the organization ever needs is a tiny, quick tweak, until the codebase is a hot mess after an endless string of these "quick tweaks".
"I really, really wanted to believe that good design mattered, that somehow it had intrinsic value. Sad thing is, it doesn't. BeOS failed. Plan9 failed. Inferno failed. Transmeta failed. Sun and SGI failed. Smalltalk failed. Lisp failed. Erlang failed"
Those all are large scale systems, so I cannot speak for them.
However, at the scale of day-to-day coding, I've usually attempted to apply good design to all of my production code, and usually the only feedback I've gotten on my way of working has been positive. I've shipped on time and the bug count (for all I know) has been low.
So, based on my experience, I really cannot concur with you.
Perhaps our experiences of software development come from completely different business areas. Professionally I specialize in performance-critical C and C++, 8 years and counting, so perhaps I've just not been in the field long enough to witness the failure of good design.
I've had the good fortune of working with good programmers doing performance critical C and C++ for a hobby project--the time spent in design has paid off, and the codebase is a joy to work with. Sadly, the code has yet to see the light of day, because reasons. For the purpose of "get game project finished", more bad code faster probably would've been better, though we would've had to rewrite it anyways.
As for the rest, I've been in a few shops now where people who should've known better (or who didn't!) have written klocs and klocs of unmaintainable poorly-performing garbage and who subscribe to the "code owner" philosophy. The result is, of course, that you can't fix or even organize their messy fiefdoms, and so there is no peace-of-mind to be had that the code works. And yet, the businesses still trundle along to this day despite what is probably stage 4 cancer of the IP.
Maybe it's just Stockholm syndrome. I kind of feel like an abused puppy every time I have to look at that code and fight, for hours and weeks, to do something as basic as "Hey, maybe we should try both debug and release targets!"
You sack them as soon as you have their knowledge? Seems a bit extreme. Or you are simply saying that the person in question is aggressively keeping people off of their turf? That must be it :P I have been working at a small startup for the past 6 years. The code is all mine so sometimes it is a bit hard to relate to all of the internal struggles that programmers seem to have.
Actually, because you have deadlines and bills, that's exactly why design does matter. There are definitely echo chambers where it doesn't, or at least seems like it doesn't, but when it comes to making real money, being able to deliver on time with quality matters more than anything else.
I probably spend just as much or more time sitting, thinking, and staring at partially written code, than I do actually writing the code. It's frustrating when you have superiors who don't understand that time spent thinking is just as productive as time spent typing.
That's just my personal style, although I've never worked in "large" teams on a single codebase so can't comment on what styles work best in those situations.
The best analogy I heard for communicating this to superiors is that programming is like doing a crossword puzzle. 95% of the time you're doing a crossword you're not writing but that doesn't mean you're not intensely working on solving the puzzle.
And yet, most coding rounds in interviews are rigorously timed. Almost always, the importance is given to code completeness rather than code design/elegance. I'm sure a lot of talented engineers lose out here.
Most interviews I've done have expected me to write code on a whiteboard. This is pretty unnatural for me. I'm a very fast typist, so it's frustratingly slow to have to draw letters manually on the whiteboard.
At least in my experience, the main point of coding interviews has always been to expose and analyze the way that you approach and solve problems, not to see how fast you write code. It's usually better to take a step back and re-evaluate the design of your solution rather than dive into the first solution you create, since you'll usually find a better way to express the solution that is short enough to write down on the whiteboard without pressing against the time constraints.
It's also important to see how reliant people are on their tools.
I don't think the whiteboard interviews expect an executable at the end of the process. It's not a complete test, but a partial one: testing if you can come up with suitable algorithms within a reasonable time - and perhaps how you work/your process. None of which is dependent on any (software/IDE) tools
If you're asking them to design something rather than build something off a pre-existing design, you might like to know that they actually understand the principles of design, and aren't relying on a tool to pretend they do.
It is completely reasonable to rely on a text editor and the language interpreter. Hell, even the language documentation.
This could be an issue with really fancy IDE features, but there's a world of difference between a whiteboard and an IDE - namely, a text editor and an interpreter/compiler.
I joined one of those 7-day coding competitions for a game. I ended up with a framework for making games, but no game. Needless to say, I came in last place.
If you can't write a simple function to reverse a string, or words in a string, in a rather short amount of time, is that a good sign? Sure, everyone can pop out the "well the most elegant is use the stdlib's reverse function". But then you say well implement that as efficiently as possible, and we know the end result is a simple little loop -- what should taking more time mean? Someone that takes 10 minutes to come up with the function makes me a bit more nervous than someone that takes 1 minute. Is that wrong?
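To put a rough bound on it, here's about what I'd expect within a few minutes (a sketch in Go, rune-by-rune so multi-byte characters aren't mangled; the function names are just mine):

```go
package main

import (
	"fmt"
	"strings"
)

// reverseRunes reverses a string rune-by-rune so multi-byte
// characters come out intact.
func reverseRunes(s string) string {
	r := []rune(s)
	for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
		r[i], r[j] = r[j], r[i]
	}
	return string(r)
}

// reverseWords reverses the order of whitespace-separated words.
func reverseWords(s string) string {
	words := strings.Fields(s)
	for i, j := 0, len(words)-1; i < j; i, j = i+1, j-1 {
		words[i], words[j] = words[j], words[i]
	}
	return strings.Join(words, " ")
}

func main() {
	fmt.Println(reverseRunes("slow programming")) // gnimmargorp wols
	fmt.Println(reverseWords("slow but sure"))    // sure but slow
}
```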
> most coding rounds in interviews are rigorously timed
In the real world you will have to work against a deadline. As much as code gardening is fun and does produce better software, in the real world shipped software always wins over well designed software.
"Can you get it done?" is a more important question to answer than "can you make it beautiful?"
In the real world, you'll be spending more time on code design than on typing it out. The race against a deadline mostly affects design choices, e.g. whether to implement a certain feature yourself with rigorous testing, or to introduce a new technology that has been tested before. There are pros and cons to each. You go ahead based on the constraints involved (time, cost, expertise, etc.). I'm not talking about making the code beautiful. I'm talking about the /right/ way to do it.
John Draper, AKA Cap'n Crunch, made a great case for it:
"It was a perfect coding environment, coding in jail. [...] Those long nights without the computer really got my smarts in top gear, as I really focused in getting the code perfect and bug free. Not having a computer some of the time, got me to thinking more about writing good code, and less time debugging. During this time, I wrote a really cool FORTH debugger that allowed single stepping through FORTH code (Totally unheard of in those days).
I also write a De-compiler that would take the compiled FORTH code and re-generate source code. This was invaluable in tracing down some gnarly compiler problems in FORTH. You see, I was not only writing a word processor, but I was also developing the language on the fly as well. Modifying the compiler, interpreter, and I even write a DOS (In forth) to manage the easyWriter text files, because EasyWriter didn't need DOS. So I implemented one, using a FAT (File allocation table) and all that other Gnarly Disk Operating system low level code. I found out that FORTH allowed me total flexibility. If the language didn't have a feature, I implemented it. Simple as that.
The day finally came when I was to be released from jail, and Matt had already rented a fully furnished apartment in West Berkeley for me, and met me at the jail when I was released. That evening, we met at the IHOP on University Ave to sign the contract YAY!! and the incorporation papers YAY! Now we can call ourselves Cap'n Software Inc. We rented office space on Telegraph avenue a block from the UC Berkeley campus and called it our "Corporate Headquarters".
Soon we got our first royalty check of $3500, and I gave Matt $1000 of it and put him on a salary. Michelle, Matt's roommate and holistic friend was hired on as our Secretary, and handled all of our bookkeeping. WOW!! I get out of jail and in 24 hours, am president of my very own software company. SUPER COOL!!"
http://www.webcrunchers.com/crunch/Play/ibmstory/
The best solutions don't come from hacking. By grasping the problem and its full scope, you raise the chance of finding a solution that can be used not only for the current problem but also for other, similar problems. A good programmer always tries to find similarities and to reuse old solutions in as many other areas as possible, but no further than that.
Such behavior also boosts the maintainability of the solution. On the other hand, when it is ignored, a solution can quickly become unmaintainable, and the solution that was implemented so "quickly" can become a black hole of manpower.
I have personally experienced more than one project where the source code became, in a very short time, so complex and so bug-ridden that development times exploded.
Very well put. I have a similar (slightly younger) age, and often pose myself the same set of questions on speed, tooling, typing. Needless to say, my answers are very similar to yours.
As for the Design Process, it works in the exact same way you describe in other fields like Architecture, which I studied and practiced ages ago at a more than decent level.
Money (VCs'), the myth of the young billionaire, and a few other things are probably what make developing different.
This article reminded me of something I read in the Vanity Fair article about Sergey Aleynikov, the Goldman programmer[0].
>He’d been surprised to find that in at least one way he fit in: more than half the programmers at Goldman were Russians. Russians had a reputation for being the best programmers on Wall Street, and Serge thought he knew why: they had been forced to learn programming without the luxury of endless computer time. “In Russia, time on the computer was measured in minutes,” he says. “When you write a program, you are given a tiny time slot to make it work. Consequently we learned to write the code in a way that minimized the amount of debugging. And so you had to think about it a lot before you committed it to paper. . . . The ready availability of computer time creates this mode of working where you just have an idea and type it and maybe erase it 10 times. Good Russian programmers, they tend to have had that one experience at some time in the past: the experience of limited access to computer time.”
Even while he was in prison he managed to code:
> A few months into Serge’s jail term Masha received a thick envelope from him. It contained roughly a hundred pages covered on both sides in Serge’s meticulous eight-point script. It was computer code—a solution to some high-frequency-trading problem. Serge was afraid if the guards found it they would deem it suspicious, and confiscate it.
That kind of discipline and meticulous thought is very hard to instill in someone who has never had these kinds of constraints, but it is certainly an admirable goal.
This is a good point. I am not Russian, but I learned programming the same way because I started out doing embedded work, where a build/deploy/test cycle to an emulator could take hours and debugging was very unfriendly. If you did not do rigorous thinking about your program and what you were doing, you could waste huge amounts of time. I no longer do embedded work, but the mentality has stuck. When I commit my work, it tends to have fewer bugs than what my co-workers produce.
I kind of disagree with the premise here, from my own experience, it's much better to work fast but ruthlessly refactor and never be afraid to delete your own code. I find I rarely nail things on my first attempt, but by being quick and failing fast, I learn more about the problem domain and I end up with a nicer end result than the person that agonized about their decisions rather than trying things out. (Of course, the corollary is that I usually do these things on a branch so I'm not inflicting it on my coworkers)
The problem with doing a big design up front is that you're generally going to run into something that's surprising, that you didn't account for (unless what you're doing isn't novel -- but then, why are you writing code rather than reusing it?) I think when people get all ponderous about these things, they're not really learning about the problem, they're just procrastinating. We're not building bridges here, if your first attempt isn't brilliant you're not out a million dollars of concrete.
It depends a lot on the problem domain. Sometimes rapid prototyping is a useful tool, but the benefits rapidly diminish in the face of complexity. In my experience with moderately complicated scientific and mathematical software, there isn't anything to prototype - either it works, or it doesn't. If you aren't deliberate about design choices, the bugs can be really devious to find and squash.
I think the implication of this article is that fast = reckless, but I disagree with that. Sometimes you recognize that you need more data points, and writing something that works will move you closer to the correct solution, even if the first version isn't perfect.
But yeah, it depends on the problem domain. If you're asking me how to build a website, I can probably have a pretty solid idea of what that code should look like. It's not a novel problem, usually. If you're asking me to build, I don't know, a voxel renderer and you're trying to figure out if an octree is the right way to go for representing the data, or you need to do something more exotic, I'm gonna have to play around a bit.
The author of the article also says he sometimes starts over, that his first attempts just explore the problem domain. There was a post about John Carmack throwing out some "gross" code because, as he worked on it, he found the true crux of the problem and it reduced the complexity significantly. Almost an aha moment. Different problems take different styles to solve, but I see a lot of people on here with the same experience: quality takes time, and whether you get it on the first pass or the third depends on whether your environment can tolerate the intermediate steps.
The author lost me at "I'm glad I'm not a touch typist". Coding isn't about typing, but at the same time, when programming you should be writing code, not thinking about where the semicolon is. I type and interact with my IDE incredibly rapidly. Generally my ability to type quickly is not the bottleneck. Sometimes it is. When it is, I'm glad I can do it as rapidly as I can.
Meanwhile, I agree with other comments that slow programming is the wrong term. You can rapidly write quite a bit of code without needing to resort to hacking or avoiding best practices. It depends on your familiarity with the space and your ability to quickly frame the problem in terms of an implementation.
The point here is that you work at the appropriate pace and put some sense of craftsmanship into your work. I doubt that over the course of a sizeable project the diligent approach, versus the hasty one, is going to result in the diligent programmer taking longer to get the same amount of work done.
You have decades on me, and I still wish to work your way. It may be anecdotal, but the more time pressure a codebase has been under, the less I've liked it, roughly speaking.
I usually really enjoy reading Steve's posts, but I have a hard time with this one. Many great programmers admit to being clumsy typers (IIRC Joe Armstrong mentions he's a poor typer in his interview in Coders At Work).
A lot depends on who you are programming for. Usually it is for a company that places more importance on a deadline than anything else, including quality. Other "ilities" are even less relevant. So you just hunker down and produce something that "works" very fast. For, say, a small dataset.
Then either the project finishes (abandoned sometimes), or it grows (exponentially sometimes), and what was created fast no longer is usable. So you do it once again, now to support the new requirements. And so on.
We are constantly rewriting systems, as we continuously copy data from different media. It keeps us employed. It is much easier to be remembered for doing something fast than for creating something that lasts.
Only a few businesses want well-crafted software. Today's startups are treasure hunts. You build a ship just good enough to hit the island. If it doesn't work out (which in all probability it won't), you dump the ship. If it does, you still dump the ship. There is an element of expected failure in the current enterprise of writing software that calls for and pushes people to knowingly write bad software fast. As the author says, it will be faster to write slowly if the goal is to write good software. But the goal is rarely that.
As women we should be able to birth the same babies?
After the baby is born, anyone can raise the child, some better than others, but it's easy. We call that adoption. Until it's born, though, the baby and its mother are symbiotically connected to each other.
I think this is similar to the process of programming. Break down an interface and work as a team at that level. Behind the interface, though, keep it small: one or two people max if possible, and those people have to be on exactly the same brainwave as each other. Even well-planned code takes some time to stabilize, and it's hard to even get there if other people are actively changing interfaces, etc.
That analogy only seems to work well if the software you're developing can autonomously interact with other software, like how humans can interact with other humans.
Also in the largely-agreed camp, and probably unsurprisingly I'm about the same age. But unlike some I don't have a problem with the word "slow." Yeah, it has negative connotations, but it's clearly meant as an ironic counterpoint to the culture of speed. Whatever, as long as it will get the young guys I work with to figure out what the problem is before trying out a solution, I'll be happy.
In general I think there is a case to be made for "slow" programming, but this article falls short for me. I happen to think that software development is a knowable discipline, that we're in the process of figuring out how to build the kinds of systems we need and struggling with a new kind of engineering where there are no physical constraints.
Therefore, I think we have a lot of exploring to do in order to come up with practices that reliably lead to better software and certainly the speed of the development process and the number of iterations is an important thing to experiment with.
However, the post is too wishy-washy to teach anything meaningful. What does "dot my i's and cross my t's" mean in the context of software development? What does "something like implementation-ready code" mean and why is it useful? How do I, as a person separate from the OP, get from where I am today as a developer to the super effective zen-master you're telling me I could be? I'd love to read that post, because the one I just read makes it seem like I should wait to get older and take up gardening.
This can be appropriate in certain scenarios. It royally sucks for other scenarios, though. Measure-twice, cut-once is a great development strategy. However, iterative-fast can have as many benefits as slow-and-steady.
Are you certain about your feature set? Must the feature be everything we can imagine when it's released for the very first time? Are there umpteen other things that also need to be addressed in the allotted amount of time? Are you sure you're not over-engineering a solution?
The biggest issue with slow-and-steady: cost and time-to-market. Of course one can argue "but cost is actually lower" and "time-to-market for a solid product will be the same or less", but (for me) that equates to any iteration having zero value along the way. And for us, that's just not true.
I'm default-wired to well-thought-out systems and architecturally sound operations. I would certainly like to deep-dive into our codebase and bring it to a beautiful state of robust bliss. The only problem is -- I can't afford it right now. Maybe later, but early in our lifecycle, speed is more valuable.
This is exactly how I work. It can seem slower when you tell your boss you just scrapped everything to start over, but it's actually super efficient. It never takes nearly as long to rewrite an application. On my latest project, for example, I took 1 week to think about the problem and play around with different data schemas, then I spent 3 weeks writing the program. It worked well enough, but there were a lot of holes, and it was a god damn mess to read.
I rewrote the entire project in two days. That's right: what took me 3 weeks of playing around only took me 2 days the second time around, and it was well worth the effort. The refactoring increased security tremendously and made many parts of the code so much cleaner and more readable that it brings a tear to my eye.
What's even more important is that I was able to easily add new functionality which would have taken a lot of effort to implement because of the way things were written before. Had I left things the way they were, or tried to refactor without a complete rewrite, I would have just introduced more bugs and got really frustrated.
"There's never time to do it right, but there's always time to do it over".
This reminds me of something I've been saying for a while. I'm a practitioner of TDD - Test Driven Development, not Test Driven Design. I like test-first coding not because it's faster, as many proponents claim, but because it's more comfortable and thoughtful. It forces me to slow down and think about what I'm actually doing. I don't think I'm any faster when I'm coding to tests. I do, however, feel much safer. I hate that feeling of cranking out a bunch of code, then it breaks, and I don't know what broke or why.
But good design is absolutely not an emergent property of the process of coding! Thoughtless coding leads to poorly structured spaghetti. Well-tested spaghetti is still spaghetti! Good design is always a matter of compromises. You can either make those compromises in a conscious and thoughtful way, or you can close your eyes and hope for the best. Good luck with that.
>> But good design is absolutely not an emergent property of the process of coding!
>> Good design is always a matter of compromises.
You need to have coded something to have shined the flashlight into the darkness so you can put a few dots on the wall and start envisioning a path. I have small scrap projects to let me shine the light and see what reflects back.
It's not about being fast or slow but about being in a flow. You don't want to go faster than you can or intentionally slow down. Unnecessarily long design cycles inhibit experimentation and may cause you to lose touch with reality. An artificial push for speed is probably even more harmful. So I would vote for "flow movement".
Lack of design in software nowadays was cited as a problem in the post. I agree, and I think there are a lot of not-classically-educated hands out there who wouldn't know a good design document from a bad one. Nor would they have any practical knowledge of some combination of: how to structure code in large code bases, how to test their code, how to discuss and apply software patterns, how to document their APIs for public consumption, RAII, SOLID, TDD, etc. So I think there's little hope for improvement in software quality/design/success metrics/etc.
Big up-front design only satisfies document weight tests and typically does not stay up to date with the software. So, just simply saying "we need more design" is not going to get us anywhere but hell.
Self-documenting code is a nice idea, but scales only so much in a large codebase. UML is a bear to produce and maintain, and I only really like it in whiteboard discussions, not actually drawn in a tool.
Wikis devolve into a mess of disconnected ideas.
Successful teams I've been on tend to rally and organize designs around the already-generated documentation (like JSDoc or JavaDoc), but those docs often only cover the user-facing APIs and leave the internals as a partially documented wasteland.
A few teams I've been on had a scribe, which was sweet, but not usually.
What patterns of design and documentation actually work well for you in real-world, large project/team scenarios?
Regarding needing an aged "adult in the room" to guide the fledgling developers to glory, which I think the OP suggests is an answer to the problems ... Well, in my many experiences where imparting wisdom from on high was the goal, it has been a miserable failure. That was because the "senior architect," or whatever the title was, invariably cared only about blessing big decisions at random (and destructively), and had otherwise checked out to walk his dog after lunch and coast through his pre-retirement. All I can say is, God bless 'em.
I do NOT count on age as a factor, only that a developer has reached a certain point of experience in their career. What I DO count on is one person who can enforce a clear vision with passion. Who holds the torch for your team? Is their grip shaky or firm? Where are we headed? Are we headed there with conviction? Does the end goal still seem plausible to all hands after every step we take? If you have proven you've got the chops and can back up all your assertions with creditable past successes and/or literature, you can rule your team and demand excellence - YOUR excellence! Pick a darn direction and let's go! I have total respect for someone who has a coherent vision AND can communicate it.
Agile methodologies sound plausible to address design and software productivity issues, but I have yet to see exceptional results from an agile project vs waterfall. This is probably just because software is a hard thing to do well in any case, and is subject to political winds, funding, ineffective product owner, etc.
I would agree but I would also say that good design takes a lot of time and practice to get right. You don't want to design too much up front, but you don't want to design too little. The top-level design really shouldn't be that detailed. The component design should get progressively more detailed.
One of the best things programmers can do is to learn software design and I don't mean UX design but rather API design. Be conscious of what you expose to the world. Be conscious of your interfaces. Design your interfaces. Think through error conditions etc. This allows you to push a lot of component design down to the people actually doing the work.
I have been instrumental in pushing my largest client at the moment from a fast programming mentality to a contract-oriented design mentality. The goal is to speed up the development of those components that need to be developed in a more agile manner by providing greater stability in a platform they can rest upon. You can't program fast if the ground changes under your feet (or rather you can but you will never get anywhere).
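To make "design your interfaces, think through error conditions" concrete, here is a small, hedged Python sketch of contract-first design; all of the names (DocumentStore, DocumentNotFound, etc.) are hypothetical and not from the client work described above:

    # Contract-first design: the interface and its error conditions are spelled
    # out before any implementation exists, so faster-moving components can be
    # built against a stable surface.
    from abc import ABC, abstractmethod

    class DocumentNotFound(Exception):
        """Raised when the requested document id does not exist."""

    class StorageUnavailable(Exception):
        """Raised when the backing store cannot be reached; callers may retry."""

    class DocumentStore(ABC):
        """Contract for any document storage backend.

        An implementation may be an in-memory dict today and a database
        tomorrow; callers only ever depend on this interface.
        """

        @abstractmethod
        def get(self, doc_id: str) -> bytes:
            """Return the document body, or raise DocumentNotFound / StorageUnavailable."""

        @abstractmethod
        def put(self, doc_id: str, body: bytes) -> None:
            """Store the document, overwriting any existing version."""

    class InMemoryStore(DocumentStore):
        """Trivial implementation to develop against while the real backend settles."""

        def __init__(self):
            self._docs = {}

        def get(self, doc_id: str) -> bytes:
            try:
                return self._docs[doc_id]
            except KeyError:
                raise DocumentNotFound(doc_id)

        def put(self, doc_id: str, body: bytes) -> None:
            self._docs[doc_id] = body

The exceptions and signatures are part of the contract: components that iterate quickly on top of it don't have the ground changing under their feet.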
I have to agree that age alone doesn't necessarily mean anything. I know plenty of lazy, mediocre old programmers. But in order to run a project it does help to have gone through many project launches using different methodologies. That usually takes a few years of professional development, seeing fads come and go, learning to work with different personalities and, honestly, even screwing up a few times.
I definitely see the value in taking the time to design things correctly and making sure you're evaluating all angles — I'm not sure if it's 100% necessary to say that to do that you always need to be slow though.
I think the difference between a fast programmer and a slow programmer often isn't that the slow one is methodically designing and making everything perfect, it's that the slow one has so many more inefficiencies in their workflow.
A good and fast programmer generally knows their tools inside and out, and they're willing to learn new tools when they need to (rather than dismissing them because their current setup works well enough and it's what they know).
Speed isn't an indicator of good or bad quality. I would say that mastery and improvement of one's tools are more of an indicator.
One of the key points in the article is that the OP had lots of experience with Bay area startups. This could be part of their culture. If you use agile you can achieve a code velocity a touch higher than standard, and if you really want to push the limits you can. I agree, however, that good design takes time. I was at a conference in Sweden last year, and the conference App was a bit problematic. The poor dev team was chained to their keyboards right in the middle of the breakout area trying to fix it in real time. I felt like saying, "Guys, it failed, go enjoy the conference."
My experience lately in "normal" shops (non-startup) is that the skill sets are just plain missing. It's hard to find anyone who can code at all, let alone code fast.
The problem is that Software is in fact a new kind of object, i.e. it has a different ontology from most other objects which is defined by what is called Hyper Being by Merleau-Ponty and Differance by Derrida. This is in contradistinction to most objects which are either Pure Being (present-at-hand) or Process Being (ready-to-hand). Hyper Being objects are defined by Derrida in terms of differing and deferring, but we can talk about instead decoherence and delocalization of Software. Delocalization has to do with the fact that design elements are spread out within programs and in spite of object oriented design are not self-contained at the program level. Decoherence has to do with the fact that programs lack internal coherence intrinsically due to the nature of general purpose programming languages and their multi-paradigm nature and the solution to this is aspect oriented programming which attempts to render coherent elements that are non-localizable. But neither aspect oriented programming nor object oriented programming completely solve the problem of the intrinsic quantum like properties that occur in software. Software designs are like classical physics in relation to the quantum like nature of software. See my original paper on this called Software Ontology at http://kp0.me/SoftOntos. There are in fact various ontological levels identified in Continental Philosophy and beyond Hyper Being is Wild Being and Ultra Being which both have implications for our understanding of Software.
Also, I don't see any reference to Thinking, Fast and Slow by Daniel Kahneman. There is some basis for understanding the difference between Fast Thinking and Slow Thinking in his work. Fast Thinking makes up narratives on the fly, while Slow Thinking creates arguments more ponderously. Actually, in software craftsmanship we need both, and we need to balance these two forces in the patterning of our development processes.
To all the web agencies who expect a job (design it, code it, and WordPress it with custom post types; a mid-size website of 20 to 40 pages) that takes 60 hours to be completed in two to three days (and who pay you for only 24 hours of work): you can go take a flying leap.
This industry, especially the web agency world, has me burnt out. The agencies are all in competition with each other, lowering costs, and thus expect their people to get things done in a manner where their employees have no life outside of getting their work done. It's exhausting, and the work is not very fulfilling.
Now, I have done the same work at Fortune 500 companies, and for someone with a family who wants to balance work and family/social life, that is where you want to be!
> And the latest clever development tools, no matter how clever, cannot replace the best practices and real-life collaboration that built cathedrals, railroads, and feature-length films
There is really one design process behind all of the great cathedrals, railroads, and films: one person is designing and running the show. It really just reads like this guy doesn't like to work with people who like to work by committing a lot. I work with a programmer who is in his late 50s, and he likes to work the way the writer does. I can get his same level of results with my method, which is much "faster." It's not that I am much smarter than him; we just have different ways of doing it, and we get the same success rate.
Glad he doesn't touch type? What a lame excuse for not mastering the tools of his trade! That's like a carpenter saying he's glad he never properly learned to use a hammer. There is a time and place for a steady pace and a thorough design process, but not in the early stages of a startup. If you can't keep up with your users' feedback and market demands, then you'll lose to a scrappy team that can, no matter how beautiful and elegant your code. 95% of software startups that become wildly successful were built on an initial base of excruciatingly ugly code. Gold-plating code for a business that is bound to change directions is a great way to ensure failure.
Great intent, but modulating speed is only one of several important vectors toward good code architecture.
I would add planning out on paper, in (small) groups. Also, reviewing designs, objects, protocols, schemas, etc., before writing code. The author mentions thinking while gardening, but that doesn't get shared with others. Also, planning for the future: for maintenance, debuggability, verification, validation, re-use, modularity, upgraded hardware, and in some cases regulatory requirements, etc. And developing documentation before, during, and after coding.
So yeah, sneaky trick to get us all to talk about our best practices by hyperfocusing on speed :)
I recently wrote off the competence of a coworker (who is in a programmer position) because I took a class with them and saw that they could not type. They were hunting and pecking. (This guy is 50 years old, so he has had plenty of time to learn.)
Is that wrong? Does being bad at typing force you to be more thoughtful and actually make you a better programmer? Or does it just mean it takes you longer?
My thinking is that you are not doing much thinking while you are actually typing, so not being able to type proficiently just means you're slower and that's it. So all else being equal, a programmer that can't type well is a red flag.
There's no correlation, but at least it makes you more productive. Using an IDE or a good editor does not make you a good programmer either, but it can improve your productivity in the right situations. For example, I don't want to waste a lot of time hunt-and-pecking emails or IM. Touch typing helps.
I type about 105 wpm using four fingers (my index fingers and my thumbs); I don't touch type at all but I also don't look at the keyboard, I just know where the keys are from muscle memory (when switching to a keyboard that is unfamiliar to me it takes me a minute or two to adjust if the keys are particularly different in shape/size from ones I usually use). I'm not sure what that says about me as a programmer.
In any case I don't think it makes sense to use typing speed as a measure of programming competence because typing speed is never the bottleneck in producing real, useful code.
I guess it is touch typing if you define touch typing as any kind of typing without looking, but it isn't what most people think of touch typing, I don't have any sort of "rest" or "home" position.
I never really considered a "home" position, but I think my fingers do tend to end up around roughly the same keys between thoughts: [L-shift]SDF / KL;'
Things get really weird when you consider the effect of template / tab completion systems like yasnippet in emacs and numerous non-free alternatives.
(Template systems are editor extensions where you type "i", "f", "tab" and magically an entire if-statement stanza instantly appears, all perfectly formatted and ready for the details to be filled out.)
So if I type at 40 wpm, but 4 keystrokes of mine do more than 16 keystrokes from someone who doesn't use a template system, does that mean I'm effectively typing at 160 wpm? I guess so.
I also use an extension that aggressively auto-indents, so I don't indent at all; my editor takes care of it for me. I guess I'm typing at infinite wpm when formatting code, compared to someone who hand-indents.
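To spell the arithmetic out (Python; the numbers are just the ones from the comment above, not a measurement):

    # Effective typing speed when templates expand a few keystrokes into many:
    # raw wpm scaled by (characters produced / characters typed).
    def effective_wpm(raw_wpm, keys_typed, keys_produced):
        return raw_wpm * keys_produced / keys_typed

    print(effective_wpm(40, 4, 16))  # 160.0, matching the estimate above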
I would say it's wrong; typing faster will not produce better software. Most of the time involved in writing software is not actually spent writing it. You may type 3x faster, but if he is 5x faster at coming up with a quality solution, then your gains will be lost.
Think of it the other way around: if someone can come up with a solution 5x faster than you and also type 3x faster than you, then how are you ever going to catch up with him?
Sure, if they are hunting for the correct keys that is a bad sign. However I don't equate super fast typing with actual speed of progress. Very rarely am I coding as fast as I can type. In fact, that never happens.
I feel like "agile" has come to mean "get shit done fast" in today's programmer culture. But just as "agility" and "speed" are not the same, agile development does not necessitate a race to the finish and similarly is not the antithesis of the slow programming style (as some commenters seem to be suggesting).
I think people have wrongly used agile as an excuse to implement first, reflect later, refactor never. In the original agile movement, there is a lot of time devoted to refactoring. You hack something together that accomplishes your goal. You get to see how it's working, get user testing up and running, and then reflect. But underneath the hood the program is a complete mess. This is the time for an agile developer to go back and majorly refactor the program into a neat, well-organized system. Maybe even rewrite. The problem is people on the business side of things see a near-complete project that they want to launch ASAP. It may even feel that way to the developers. I think the refactor part of the process gets cut short because of the desire to push the product out the door or work on new feature sets. You're only doing half the agile process in this case.
I'm definitely not in the "plan first, build later" vein of thought. It's not how I program or do artwork (I'm a hobby artist as well). I need time to play around and experiment with different ideas in real-life implementations before settling on my plan of attack. There are so many ideas that may seem great in your head, but don't do well IRL. Additionally, you may stumble upon a new idea in the process of experimenting.
The thing I long for the most as a programmer is more time to refactor. Time to refine my code. There is something to be said for craftsmanship as opposed to mere production. One thing I've enjoyed about my own personal programming projects outside of work is that I have time to sit back and reflect. Is this the best way of modeling X? Is there a simpler way of expressing X? And then I can do major refactors/rewrites that would never fit into a sprint at my workplace. But I do believe the investment will pay off in the future when it comes to maintenance. An extra day refactoring now could save weeks of debugging later on.
To me, the confrontation posed there is not between speed and quality, but over which constraints affect the system design most: coherence, or overall simplicity. I would claim the "worse is better" school of design aims to lower overall complexity by cutting corners where the added complexity of maintaining design coherence does not seem to add any value to the implementation or to the end users.
"This advice is corrosive. It warps the minds of youth. It is never a good idea to intentionally aim for
anything less than the best, though one might have to compromise in order to succeed"
The main issue is not the developers. In my experience it's the pressure the company puts developers under. Most developers would love to take time to develop well-designed software. But most companies push, push, push for more features in x time. The first thing that gets cut is design discussion and the ability to go back and improve old code.
The older the developer, the harder they are to push around (they've seen it all before, and have probably been a manager before), which is probably why we have an ageist industry: companies like developers who are easy to manipulate.
Speed is a constant demand. The rate at which we are creating new frameworks, languages, and platforms is at an all-time high. We toss them away as quickly as we build them; by the time something is reaching mainstream adoption, we are refactoring its core and expecting developers to adopt the changes before they have mastered the first mature iteration. Born-to-die software is here, and it's going to cause massive fracturing. If we adopt backward compatibility, maybe we can offset the repercussions. I expect my code to start to become deprecated in some manner within 6 months.
While it's true that most nontechnical managers over-prioritize speed and underestimate the dangers of technical debt, it's also true that many otherwise great engineers are inclined toward premature optimization and tend to lose track of business goals.
In most contexts, an engineer that understands both when solid, thoughtful, slow engineering is appropriate and when it's NOT is worth immeasurably more than an engineer whose only gear is extreme rigor.
IMHO, writing a good program involves a lot of thought. You keep thinking about the solution to the same problem until you get the best solution (or are at least satisfied with it). So it is going to take some time. But I guess after gaining some experience, you will find a way to create well-designed code more quickly than before.
One of the first things I learned about programming was to write the problem down on paper. Diagram it. Don't even start coding until you have a good map of where you're going, at a couple of levels of detail at least.
Most of the problems I've encountered myself or seen with others is directly related to coding too soon.
wow, this is great! I definitely have always fallen into the "slow" camp. Though I've always considered it more a matter of being careful and with purpose, and not being haphazard.
Quality over quantity. Painting a wall is easy; painting a painting takes much more precision and care, especially when you have to exercise your design skills as a developer.
Though often, a lot of the time goes into understanding what it's supposed to be doing in the first place, and from there it's a matter of making it more resilient, isolated and friendlier to work with.
I'm the slowest dev on my team, but I squash the bugs that nobody could figure out, or knew were there. Taking things apart takes time and patience, but when you're done you know how it all works and can refactor it to be clearer/simpler, and appropriately comment on what it does and how.
I like iterative development but I also really like the author's "cauldron of soup" metaphor. I've definitely worked on projects where I feel like I'm rebasing/updating more than I'm actually doing work to add any type of new value.
"Yes, new software and new businesses need to grow. But to be sustainable, they need to grow slowly and with loving care. Like good wine. Like a baby."
Why can't I grow quickly "and with loving care"? :-)
Really liked this part:
" And the latest clever development tools, no matter how clever, cannot replace the best practices and real-life collaboration that built cathedrals, railroads, and feature-length films."
Historically, cathedrals might stand unfinished for decades at a time. Also, the early railroads were famous for brutally exploiting labor in order to get built as soon as possible. Haven't you heard of John Henry?
I haven't heard of him.
I don't understand you. It seems you believe there wasn't collaboration in building cathedrals, railroads, and feature-length films.
This is a good article, and the constant need to code bigger bowls of spaghetti faster is definitely a problem. However, I think that it's a stretch to imply that only older engineers do this.
* Try to assimilate a mental model of the bit I'm working on, and as much of that bit's dependencies as is practical, into my head. This may involve fiddling with the code and seeing how it breaks when I do certain things to it, or hammering at it inside a REPL or similar. The bit that I'm concerned about could be a single method, but is usually larger.
* Try to develop a mental model of the bit as it _should_ be. Do a diff between the 'is' and 'ought', make code changes as necessary, fiddle with the new code to make sure the shape of its function is roughly the 'ought' shape.
* Update the unit tests with specific tests for the 'ought' case, run tests, fix failures.
* Check in or submit code for review.
This is not how software is written now. How software is written now is a hill-climbing algorithm where each cycle is one "red-green-refactor" sequence. Write a tiny failing test which, no matter how small or trivial, still has a few lines of setup/teardown boilerplate; make the smallest possible code change to make the tiny test pass (change a minus sign to a plus sign?); run the ENTIRE test suite to make sure that your sign flip didn't break anything anywhere else; if the code needs to be refactored, do so and run the entire test suite again; fucking repeat. There are a number of theoretical advantages to this from a management perspective:
* Developers produce working code quickly in the early stages of the lifecycle, whereas with the slow approach you do a lot of sittin' and thinkin' up front without much to show for it -- maybe some design documents or UI mockups, I dunno.
* Development no longer relies on developers' internal mental models of the code. Developers now code against a model which is incarnate in the test suite -- itself a deliverable, documentable artifact.
* Because programmers no longer work from internal mental models but the single-bit state of "do the tests as written pass or fail?", the concept of "code ownership" -- along with related management problems of coders waxing territorial over the pet modules they've spent days or weeks ruminating over in order to understand deeply enough to change -- evaporates.
* Programmers look busy all the time, because they are constantly typing out test code to exercise even the most trivial of changes.
* Thanks to pair programming, you no longer need worry about the "guy in a room" problem. Your dev team is always seen to be interacting with one another.
But I hate it because it is incompatible with my temperament. I like to think about _what_ it is I intend to write, before hammering out test cases to exercise it.
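For readers who haven't seen the cycle spelled out, a minimal sketch of one red-green-refactor iteration (Python, pytest-style; the function and its behaviour are made up for illustration):

    # Step 1 (red): write a tiny failing test first.
    def test_discount_is_applied():
        assert price_with_discount(100, 0.1) == 90

    # Step 2 (green): make the smallest change that passes -- here, writing the
    # function at all, since it didn't exist a moment ago.
    def price_with_discount(price, discount):
        return price - price * discount

    # Step 3 (refactor): reshape the same function while the whole suite stays
    # green (e.g. guard against nonsense inputs), then run every test again
    # before starting the next cycle.
    def price_with_discount(price, discount):
        if not 0 <= discount <= 1:
            raise ValueError("discount must be between 0 and 1")
        return price * (1 - discount)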
This rings true with me. Some of my best work has been done lying on my bed (working from home), staring at the ceiling.
On one project, we initially gave ourselves a ridiculous deadline and subsequently made a lot of ad-hoc "fuck it" decisions without properly thinking out the consequences. The system is sufficiently complex that we ended up with something that _barely_ did what it said on the box (if it worked at all). I dreaded even adding the simplest of features, because I could just feel how fragile the whole thing was.
We ultimately made the decision to scrap the project (and piss off or outright lose some customers waiting to pay us money) and restart from scratch, and I spent a few months part-time just building up a mental model of the system and stress-testing it against all of the different requirements and scenarios we had to handle. I went through a couple of notebooks in that time, but I didn't write a single line of code.
For me, it was a lesson in the importance of deliberation and knowing what you should be building before building. As my uncle liked to say when we built houses together, "measure twice, cut once."
When DHH lambasted TDD at Rails Conf earlier this year I felt vindicated. TDD enthusiasts market the practice like TDD is a one stop shop for better code. Fad diet style. IT WORKED FOR ME, IT'LL WORK FOR YOU. THIS ONE WEIRD TRICK WILL GET YOU PROGRAMMING BETTER.
I think TDD works great for certain personality types. Definitely not mine. (I'm like you, as far as I can tell.) I find that writing tests first inherently makes assumptions about the shape of the code that I'm going to write. I so often zoom out and say "no, no, no, I'm going at this all wrong" and completely change the way I'm structuring my code. If I've written test-first, then that's two places (my code and the tests) that I need to completely redo the structure. And 2x the programming time for refactoring.
I will advocate TDD for debugging. Test-first is wonderful for isolating software bugs and resolving them.
> - Development no longer relies on developers' internal mental models of the code. Developers now code against a model which is incarnate in the test suite -- itself a deliverable, documentable artifact.
And yet I have run into too many people who valued using the test suites for documentation.
IMO test suites should test that the software implements the documentation. Nothing more and nothing less.
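One modest, concrete way to tie the two together in Python is doctest, where the examples in the documentation are the tests; a small hypothetical sketch:

    # doctest runs the examples embedded in the docstring, so the test suite
    # literally checks that the code does what the documentation says.
    def slugify(title):
        """Convert a title into a URL slug.

        >>> slugify("Slow Programming, Fast Results")
        'slow-programming-fast-results'
        >>> slugify("  extra   spaces  ")
        'extra-spaces'
        """
        words = title.lower().replace(",", " ").split()
        return "-".join(words)

    if __name__ == "__main__":
        import doctest
        doctest.testmod()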
It seems like most of his problems would be alleviated by working on a non-volatile version of the code in a private branch. The kids these days tend to favor a VCS that has this feature.
Furious activity is no substitute for understanding.
Before you start writing the code, YOU HAVE TO SPEND A LOT OF TIME STUDYING YOUR PROBLEM. (Brian Harvey, CS61A)
If you can’t write it down in English, you can’t code it.
Programs must be written for people to read, and only incidentally for machines to execute. (An immortal classic.)
The sooner you start to code, the longer the program will take.
Details count. (Devil is in the details).
Get your data structures correct first, and the rest of the program will write itself. (Data structures suggest algorithms; see the small sketch after this list.)
Premature optimization is the root of all evil.
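A tiny sketch of that data-structures point (Python, with a made-up word-counting example): once the data structure is a count-by-word mapping, the algorithm barely needs stating.

    # "Get your data structures correct first": choosing a dict keyed by word
    # makes the counting algorithm almost write itself.
    from collections import Counter

    def word_frequencies(text):
        return Counter(text.lower().split())

    print(word_frequencies("the quick brown fox jumps over the lazy dog"))
    # e.g. Counter({'the': 2, 'quick': 1, 'brown': 1, ...})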
There is nothing new here, of course. In the "old times", even if "old programmers" didn't have such marvels as Java or JavaScript, being mostly scientists, they had figured out "how their minds work" and what kinds of processes they are mere parts of.
A one-sentence summary could go like this: "Programming is an engineering discipline; coding is a translation skill." To write things down quickly, one has to spend a lifetime thinking and doing.
Yes, someone in another thread is discussing the skill sets that need to be learned to engineer software, but I think academia already does a decent job.
To engineer a good architecture you need time to thoroughly think and explore alternatives, coupled with deep field knowledge and an even deeper know-how, gained by means of experience. And to build the best is impossible by definition: you can only hope to build a better one.
I didn't know what a fast programmer was until I had to work with one. The king of copy and paste. All his neurons in his fingertips. He showed me that thinking with one's hands isn't limited to manual workers. Luckily for me, his awesome speed allowed me to become his "boss" a couple of months after being hired. And I'm not being sarcastic here. The manager saw he was better left typing and I was better left thinking.
That's why, perhaps, in the pre-Java era, good schools taught "big ideas in CS" instead of Java syntax and how to OO everything.
Algorithms, data structures and the reasoning behind them, Scheme-based courses on CS fundamentals (based on SICP), and operating systems, so a student could understand the principles and design decisions.
Then such a student could code in any language, because syntax and common idioms are much less important than appropriate data structures and processes. Then, perhaps, they would use a modern file system as a database (like this very site's back end does) instead of bullshitting each other about which re-implementation of optimized in-kernel routines in Java is more popular, etc.
I know the syntax of about a dozen languages, but I am an inferior programmer due to a lack of appropriate theoretical background. I could spend days trying to figure out whether a request should be a closure, an assoc, or this fancy but costly "persistent-map", and nuances like aliasing and locality in Clojure and CL.
Of course, one could always Ctrl-C/Ctrl-V from millions of online tutorials and "get shit done", but I cannot write anything without understanding why this instead of that, not because they say so in some narcissistic, over-confident blog-post. Thank god programming is my hobby, not my job.
So, it is not about syntax or how pointers are arranged (in structures and ADTs, or in classes and objects); it is about the whats and whys instead of the hows. Coding is easy. Programming is difficult.
> I know the syntax of about a dozen languages, but I am an inferior programmer due to a lack of appropriate theoretical background. I could spend days trying to figure out whether a request should be a closure, an assoc, or this fancy but costly "persistent-map", and nuances like aliasing and locality in Clojure and CL.
I have a deep understanding of various areas of CS and maths, and I never resort to "Ctrl-C/Ctrl-V" to get things I don't understand done. Instead, I spend enough time to understand them. Every. Single. Time. Ah, and I barely attended university at all.
You could do the same, you know? If I could learn all of this, without any help from friends or from lecturers or from senior programmers (until very late in my learning), basically with books alone (from a library - that was before the Internet!), then you could have done this easily. But you didn't and now you're whining and shifting the blame and making others responsible for your personal failure to learn.
Really, stop that and start learning for yourself, it's not too late!
For every scientist involved with computers in the "old times" (at least after personal computing took off), you could easily find 10 hobbyists and enthusiasts who actually built stuff.
There are AIMs (lots of them), for example. And besides MIT and Stanford and CMU, there were also research facilities staffed with scientists, like Bell Labs, Xerox PARC, etc.
Early programming (Lisps, pre-winter AI, early OO and pre-OO languages, UNIXes) was done by much brighter people than in the modern J-world.
>This is why I believe that we need older people, women, educators, and artists
Remember, if you're a young man, or you don't have the title of artist or educator, you're obviously not a people person and you're fucking up software, shitlord.
The dupe detector is left porous on purpose to allow good stories multiple cracks at the bat. A small number of reposts is ok.
It's true that who ends up getting the karma on a multiply-posted story is just a roll of the dice. On the other hand, you can stack your long-run odds as high as you want by submitting more and better stories. So it tends to even out in the end.
You're not really stacking your long-run odds any more than you're stacking your odds by playing the lottery a lot. The odds are stacked in favour of anyone who submits a lot and submits early rather than in favour of someone who submits well.
I'm not sure what you mean by submitting "early", or what experience you're drawing on here (your account has submitted zero stories!), but I am pretty sure that the odds favour a user who submits a lot of good stories [1], and that the latter is where the emphasis should be. We're also working on systems to amplify this effect, so it is going to get more true with time. The point as far as karma goes is not to be attached to any one post, but rather to play for the long run.
1. What counts as good on HN: stories of intellectual substance that many good hackers would find interesting.
I guess I just found the response "submit more, it evens out in the end" a bit glib (as is, 'your account has submitted zero stories'). I do have some experience submitting stories as well as plenty of just reading and commenting on the site. It seems to me the current system amplifies negative tendencies like being 'first to post' or reposting variants of the same story.
I'm perfectly happy to take your word that you're working on something better.
I don't think it has a direct correspondence in those terms, as you might be implying; i.e., the "fast" in this context is mostly still the slow Type II reflective thinking Kahneman refers to. The "fast" coders are making small iterative changes, but that doesn't mean the thinking is automatic, just localised and frequent.
The deeper analytical and design-oriented thinking, where someone takes extended time to identify goals, prototype, and review different implementations, is an important variation of Type II but not really addressed in TFA. There might be plenty of Type I micro-decisions involved in that process too.
I'm pretty interested in how cognitive biases rear their heads in design problems. I'm scared silly by framing effects where, given options A and B, an individual chooses A, but add a third option C and now they would choose B. There are so many decisions that crop up during design/development that are vulnerable to those issues.