"Our intellectual powers are rather geared to master static relations and ... our powers to visualize processes evolving in time are relatively poorly developed. For that reason we should do (as wise programmers aware of our limitations) our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible."
As Systems Engineers we're probably better than 99% of the population at visualising the dynamics of the system, and even then we're still pretty bad at it.
Humility in the face of this human limitation helps enormously I find.
Humility should not preclude aggressive problem solving towards eliminating the gap. Be humble and be open to the possibility that this problem can be utterly destroyed.
The list of "things I believe" the OP cited mentions functional programming as a fundamental belief as well, because it "eliminates side effects" and doesn't "mutate state". And yet, these traits give us little or no help in troubleshooting problems in distributed computer systems, where side effects are inherent and state is in constant flux. Oh sure, to within the CAP theorem, you can keep your data store relatively consistent and immutable, but that's only helpful so long as all the rest of your infrastructure is infallible. Which it isn't.
Specifically, it’s about controlling side effects by isolating them. That is, code with side effects has to declare explicitly that it has them, and calling any code with side effects lets the compiler infer that the calling code has side effects too. Typically, the isolation mechanism is either explicitly monadic or is equivalent to monads.
This is incredibly useful for debugging, as is single assignment style code, although single assignment is not necessarily implied by “functional programming.”
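To make that concrete, here's a minimal Haskell-flavoured sketch (the function names are made up for illustration): the effectful code is forced to say so in its type, and the pure code can't hide anything.

```haskell
-- Pure: no side effects are possible here, and the compiler enforces it.
-- Calling this twice with the same inputs always gives the same answer.
applyDiscount :: Double -> Double -> Double
applyDiscount rate price = price * (1 - rate)

-- Effectful: the IO in the type is the explicit declaration of side effects.
-- Anything that calls this must itself live in IO.
logAndDiscount :: Double -> Double -> IO Double
logAndDiscount rate price = do
  putStrLn ("discounting " ++ show price)   -- the side effect, visible in the type
  pure (applyDiscount rate price)           -- the pure core, reused unchanged
```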
>"utmost to shorten the conceptual gap between the static program and the dynamic process"
I'm mostly referring to this part of the quote. Pure FP is basically a static program and not a dynamic process. Nothing is changing in FP, nothing is dynamic.
FP makes this gap zero. Outside of FP (the distributed system) is a different story of course.
The abstraction of FP, which is the same as mathematics, is that all functions can be treated as resolved instantly in time and from a given starting point you'll always get the same answer.
However, in the real world things take time, time flows in one direction, and processes have a nasty habit of being non-linear.
FP abstracts away these complications with the notion of 'actions' which are then dealt with by some outside 'runtime system'.
Yes I know about the IO loop. FP reduces the gap to zero when segregated away from IO.
However I literally said for distributed systems this does not apply. This is IO. If you're referring to something like the DOM, the architecture is similar for redux. Within the bounds of IO your program is not dynamic.
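To sketch what I mean (hypothetical types, a redux-ish shape rather than anyone's real code): the pure core is just a fold over events, and the only dynamic, time-dependent part is a thin IO loop at the edge.

```haskell
-- Hypothetical event/state types, for illustration only.
data Event = Increment | Reset deriving (Read, Show)

-- Pure core: the whole "program" is a fold over events; nothing here changes
-- over time, so the static text and the process it denotes coincide.
step :: Int -> Event -> Int
step n Increment = n + 1
step _ Reset     = 0

-- Imperative shell: the only dynamic, time-dependent part.
loop :: Int -> IO ()
loop n = do
  print n
  event <- readLn        -- reads "Increment" or "Reset" from stdin
  loop (step n event)

main :: IO ()
main = loop 0
```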
I think Dijkstra was talking about our limited ability to reason about what a program actually does by looking at its source code. FP helps, but there is still a long way ahead of us. For example, I can create spaghetti code in any language and any programming paradigm. FP helps me eliminate state; the next big thing should help me eliminate unclear and complex control flow, bad abstractions, and clunky APIs.
The state of the art in distributed programming is Erlang, which uses functional programming to enable levels of scalability very difficult to achieve with any other development stack.
This also applies specifically to the problem of lacking "infallible" infrastructure. Because of the largely stateless nature of Erlang processes, they can be killed, migrated to other machines, and quickly scaled up and down to meet changing demands.
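Erlang/OTP supervisors are the idiomatic way to express this; as a very rough, language-agnostic sketch of the restart-on-failure idea (here in Haskell, not real OTP code):

```haskell
import Control.Exception (SomeException, try)

-- Not Erlang/OTP, just a sketch of the "let it crash and restart" idea:
-- because the worker holds no shared mutable state, restarting it
-- (here, or in principle on another machine) is cheap and safe.
supervise :: IO () -> IO ()
supervise worker = do
  result <- try worker :: IO (Either SomeException ())
  case result of
    Left err -> do
      putStrLn ("worker crashed: " ++ show err ++ "; restarting")
      supervise worker
    Right () -> putStrLn "worker finished normally"
```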
> Most measures of success are almost entirely uncorrelated with merit.
A more important and useful thing to understand is that merit is measured by rules that bubble out of social systems.
A mistake you can make is to believe that the rules of merit you believe are the same ones others believe, that the ones they state are the same ones they actually believe, that the rules don't depend on context, etc.
It's hard for people to accept this because we're taught that merit is an inherent quality; we may internalize the rules of merit we learn, treat them as constants, and then build our self-images on those rules.
(I'm not suggesting to give up on the idea of merit altogether, but you'll do better to think of it as something you achieve for personal satisfaction and not necessarily expect external rewards.)
BTW, the rules of success (beyond the most basic stuff) are also social constructs, but they are different than the rules of merit. It's a mistake to expect them to necessarily be harmonious or consistent.
> Our schooling and educational systems are very merit based.
I'm not sure where you're based, but that's certainly not the case where I live. I once worked for a company that made grading software. I quit after 3 months because they wanted to implement features like, "Teacher wants to be able to change a grade to a specific letter, even if the student's scores add up to a different letter grade." (In other words, even though this student received an "A" based on test scores, I want them to have a "C" instead. Or vice-versa.)
I took Calculus AB in high school and the teacher was notorious for giving students D's and F's because his teaching style was poor and the exams were difficult. The only saving grace of that class was that you would be given a C if you scored a 4+ on the AP exam.
He still kept his job because the majority of the class got 4+'s on those exams, despite at least a third of his class getting D's and F's.
I wouldn't generalize editing letter grades as "cheating the system". There are usually good intentions behind every act.
In general, a student should be able to calculate their own grade from the available information, which acts as a buffer against this sort of "grade editing." In general, teachers won't do it because it can easily be found out.
>One could say that in the work place, success is uncorrelated with merit.
Do you truly believe this? In my career experience, success is highly correlated with merit. Sure, I have seen plenty of incompetence at high positions, and I've seen brilliant people who received little success in their careers.
But overall, I'd say the correlation is very high.
I would be willing to bet that advancement to any given level correlates with a certain base level of merit. However, I would also guess that the higher up one goes, the thing that correlates best would be luck. I have no empirical evidence for this other than myself and what I know of people around me, but it definitely rings true to me.
> Being aligned with teammates on what you're building is more important than building the right thing.
So, let's cave in to peer pressure or management dictates and continue down the wrong path (technically/socially)?
Strongly disagree with that.
> Peak productivity for most software engineers happens closer to 2 hours a day of work than 8 hours.
Here I mostly agree. The thing is that you don't know in advance when these hours are going to happen. Also, sometimes it's 2, sometimes 0, and sometimes 8.
> The amount of sleep that you get has a larger impact on your effectiveness than the programming language you use.
> So, let's cave in to peer pressure or management dictates and continue down the wrong path (technically/socially)?
Seems like a very unfair restatement of what the author said. I would read that as "it's more important that everyone has a common understanding of what you are building than it is to spend time worrying about whether you have the exactly perfect goal."
Or to phrase it another way: if the understanding of the product isn't clear across the whole team, it doesn't matter if any one individual has a bunch of brilliant ideas for/about the product, because you aren't all working towards the same goal.
> I would read that as "it's more important that everyone has a common understanding of what you are building than it is to spend time worrying about whether you have the exactly perfect goal."
The fact is that unless you have a functioning crystal ball, you don't actually know what the perfect goal is; you only have hazy guesses at best. At some point, extra time spent looking for the perfect goal is wasted, because one hazy guess is no better than the next. It's much better to get something that is well-structured and functioning out the door. If you've done your due diligence, it has as much chance of succeeding (or failing) as the next thing. And to get something well-structured and functional out the door, you need everyone to be working towards the same goal, even if they don't all necessarily agree that that's the perfect goal.
There's an argument that this is how ancient divination worked in practice. You don't know whether you should attack from the north, attack from the south, or hunker down and build defenses. You definitely can't do all three. It's much better to pick one and do it boldly than to dither around in indecision. Divination gets everyone fully behind one option; that option may not be the optimal one, but it's certainly better than "do nothing".
Which makes me wonder if maybe modern product management would be better with some oracle bones...
Sure, but hopefully the choice isn't a binary one between "lack of alignment" versus "management/leadership dictates", since that would be evidence of a rather toxic team culture.
> So, let's cave in to peer pressure or management dictates and continue down the wrong path (technically/socially)?
That's not alignment. For alignment to happen, everybody should be on the same page. And any party can tank this if they refuse to negotiate.
I read this as getting alignment between all parties is something that leads to successful projects and therefore is something that should require attention and effort to make happen.
I've ruminated on Jeff Bezos/Amazon's "Disagree and commit" leadership philosophy a bit and think it pertains here.
Leaders are obligated to respectfully challenge decisions when they disagree, even when doing so is uncomfortable or exhausting. Leaders have conviction and are tenacious. They do not compromise for the sake of social cohesion. Once a decision is determined, they commit wholly. [0]
Where I think "leaders" is a quality of a person's actions, not a job title.
Just make sure you're negotiating the thing you're supposed to negotiate. I've been in meetings where junior or mid-level developers with minimal experience are arguing the merits of the business model with executives. I'm all for asking thoughtful questions but if you have 2 years of experience at a different company in a different industry, saying you don't think Product X is going to make money doesn't really qualify.
Likewise, I wish "business folks"/product owners/whatever you want to call them would give a little more leeway on technical decisions to their technical team. We shouldn't be discussing choice of database tech or service architecture with a VP Finance.
Whenever this happens (juniors questioning decisions which have a lot of context requirements that they don't have OR VPs from other departments questioning something core to my world) I look at it as a problem with myself not presenting the case in a convincing manner.
The junior engineer is probably trying desperately hard to apply something he read in a book somewhere, or on Hacker News the previous day. You will have to invest time to tease out the differences and ensure they feel listened to.
The VP is probably either trying to catch you off guard to make a point he is itching to make in order to belong, or trying to ensure you know what you are doing. Good senior folk build knowledge like a tree: some branches are full and dense, some are bare - but overall they make up the whole tree. The bare branches here would be your core world, and once they get a sense that you know what you're talking about, you should usually be fine. Bonus if you add to their knowledge without being defensive.
RE time taken for this: I look at it as future investments. If it is a junior dev in my team OR a VP who can actually impact my day to day life, it is worth building that relationship where they get to a trusting spot. If it is a random person, I might just agree / nod my head and move on to wrap up the conversation.
And yes there are bad actors everywhere - tough luck if it is the VP (happens), hopefully you can manage out the junior fella.
> I've been in meetings where junior or mid-level developers with minimal experience are arguing the merits of the business model with executives
Oof. I'm a big supporter of questions, not statements, in these situations. I've witnessed people with this behavior, where a person makes claims and argues a point but doesn't have a clear understanding of the situation. They don't realize they're ignorant of a certain portion of the information, which they need to ask questions to learn about. This happens both with those with minimal experience and with leadership. Though in the latter case I suspect it is not wanting to be perceived as not having a grasp of the situation, or being perceived as lacking in knowledge. Which is sad, as people who ask smart questions are generally the best leaders.
> Though in the latter case I suspect it is not wanting to be perceived as not having a grasp of the situation, or being perceived as lacking in knowledge.
Ha, amateurs! One of the best techniques for making an argument is to ask a leading question that challenges some unstated assumptions. Allows the other party to provide clarifying information. Or if they don't have a good response, that makes your argument for you far more strongly than directly asserting your position would.
So to further emphasize your point, nothing makes you look smarter than asking a really insightful question.
>So, let's cave in to peer pressure or management dictates and continue down the wrong path (technically/socially)?
He didn't say cave in. Your first step in the above scenarios is to get buy in and alignment with management and peers. If you cave in then you aren't in alignment. He is saying the latter event is worse.
Here is my (unpopular? original?) opinion about software engineering: software engineering is more like writing a novel than building a bridge. Bridges are fairly well understood, and the parameters necessary to build them successfully can be approximated very closely once it’s decided what materials to use. Software will surprise you, in the sense of “no battle plan survives contact with the enemy.”
You will have bad data in prod. Users will do unexpected things. You won’t always be able to reach S3. And, this is all true even assuming the software is written perfectly to spec.
Knuth has famously said he thinks reusable code isn’t all it’s cracked up to be. It’s re-editable code that we ought to all be striving for. There are two aspects to this. First, how often do you get to write entirely new code inside a mature software project? Not often, I would wager, so why not make the job as easy on future you as possible? Second, re-editable code needs to be understood before it can be edited to extend it. This means we need to write code for humans to read, and computers, only incidentally, to execute (I think EWD might have said something to that effect).
> You will have bad data in prod. Users will do unexpected things. You won’t always be able to reach S3. And, this is all true even assuming the software is written perfectly to spec.
Surprisingly, this nonsense happens in a discrete setting where we can (in theory, of course) enumerate all possible inputs. Also surprisingly, we cannot find an effective way to automagically partition the input space into equivalence classes to assist us in testing.
A novel works in conjunction with our brains, so it is not surprising that there are no definitive rules or "good novel equation". But programs implement defined algorithms and are super predictable. Yet creating them feels like writing a novel.
I agree with your statement, but at the same time it feels wrong that we find ourselves in such a position.
Couldn't agree more with every point listed. These are fantastic points.
I do not know the author, but reading these points makes me suspect he/she is an experienced software engineer that has been doing this for many years now.
I expect that especially his first point (about being humble in the face of software systems complexity) will provoke many hubris-filled comments. I fully agree with the author: we are incapable of building complex and correct software systems. This is why I am afraid of the hype behind self-driving vehicles.
I have a different expectation of correctness; absolute correctness is impossible, in practice correctness is not a binary condition. Even if something provably executes correctly according to spec, the spec may be wrong (and probably is, since everything interfaces with humans eventually). Everything is in a process of becoming; 100% correctness is not achievable, but it's not desirable either - it costs too much; it just needs to be better than the next best alternative.
The proper goal is robustness.
It is achieved either via a clear indication of failure and guidance on alternative solutions - so that the user can handle it -
or by actually failing gracefully and handling as many error conditions as possible in a reasonable way.
Usually letting the user handle it is more general, as long as the failures are rare enough.
Internal correctness is perfectly achievable though. External (correct spec) is not.
There are always some unhandled conditions, due to hardware or external component failures...
Even with 100% correctness, your solution still has the capacity to fail spectacularly.
The universe is infinite in its desire to mess with your expectations and assumptions of what correctness is! Oh, you thought that was 100% correct? Well, have a look: A platypus.
I once worked at a place where a bug in the spec was treated as impossible - if people couldn't get the results they wanted because the spec made it impossible, there was no way to get it fixed. Any attempt to log it as a bug was rejected.
> reading these points makes me suspect he/she is an experienced software engineer that has been doing this for many years now.
And yet, a cursory review indicates they are barely out of high school and have never worked on software in a commercial environment. Not casting stones (I agree with some of what they say too, and what I agree with I find insightful. How's that for confirmation bias?), just observing that this is in fact not based on much experience at all.
autonomous driving is a great example here. It will never be "correct" because there will always be accidents. So the author is right about that. But it is very possible that those cars may be 10x safer than the average human driver some day.
What if you are an above average driver? Maybe for most people it would be an improvement in safety, but for you it'd be more dangerous to be in an autonomous car.
If you're an above average driver, a below average driver will ram your car unexpectedly. And there will be nothing you can do, because your reaction times are human.
Also, you're likely not as above average as you think yourself to be.
> If you're an above average driver, a below average driver will ram your car unexpectedly. And there will be nothing you can do, because your reaction times are human.
That doesn't require level 5 self driving cars, only brake assistants.
> Also, you're likely not as above average as you think yourself to be.
That doesn't mean that there aren't above average drivers. I'm not assuming I'm among them, but for them, driving in a self driving car would make matters worse.
It requires more than brake assistants. People are, in the limit, idiots (at least when driving). If you drive long enough, you'll see lots of really hair-raising things - and the benefit for the above-average driver is that the below-average drivers are off the road.
The benefit to self-driving cars is not only that they are driving (hopefully) better than humans - it's also that they minimize the risk of completely erratic behavior. People don't have accidents when they're doing average. They have accidents when they're at their worst.
Last example I witnessed: At a traffic light, a car going ~30mph suddenly and without warning pulls into oncoming traffic because for whatever reason they thought "must left turn" - despite left turns being clearly disallowed.
That wasn't just a "below average driver", because that doesn't happen every day. That was somebody who was very clearly way off their game. And that's the main source of accidents, doing something clearly wrong that you likely wouldn't even think about on a good day. Or being distracted.
Taking these things out of the equation is way more important than "driving better than the best human on their best day".
And this will benefit the above-average drivers as well. (Also, the average will likely move up, so "above average human" might suddenly not mean a lot)
The idiots you encounter are not all the same people all the time. And sometimes, you're someone else's idiot. You might not even notice when that happens.
All it takes is a moment of confusion, or distraction, or frustration. Or a chemical impairment. Or having a stroke. Or drowsiness.
Braking assistants are good, a self-drive function that has the goal of safely removing the vehicle from traffic--for operators that can suddenly no longer function at speed--would be better. Hitting a panic button and waking up on the shoulder with the hazard lights on is preferable to getting intimate with a pole or guardrail while the cruise control is set to 70 mph.
Our informational machine assistants would also be better with access to vehicle state and sensor data. Your navigation assistant could then tell you, if you are approaching an intersection with your left turn signal on, that left turns are not allowed, before you get there and possibly make a bad snap decision. The nav app on your phone doesn't know that you are signaling left. The GPS signal isn't quite reliable enough to pinpoint which lane you're in. It only knows the positions of vehicles whose operators are currently using the same app.
Obviating snap decisions by drivers should be a goal, because less thinking time means more mistakes. Mobile nav apps have gotten better at this, saying which lane you should be in, and whether destinations are on the left or the right, but they could still do better for in-vehicle improvisation, when the destination changes while enroute, or temporary detours are needed.
Driving ability is almost the perfect case study for the Dunning–Kruger effect. Most drivers think they are better than they really are.
So you may be right that really good drivers may be safer when in control of the car, but it won't be the case for people who merely think they are good. And if you leave individual drivers a choice, you can expect to see a lot more of the latter. And, to follow up on groby_b, they are going to ram your car and there will be nothing you can do.
In the end, unless we have a system to reliably select the highest skilled drivers, and making sure they are at the peak of their ability, autonomous cars will be safer for everyone.
> This is why I am afraid of the hype behind self-driving vehicles.
Well, they don't have to be absolutely correct. Just a lot more correct than human drivers across a broad range of driving scenarios to justify replacing human drivers.
I would say he/she is an experienced "person", but many of these points, including some about software, can be understood simply by an honest understanding of human beings.
> Being aligned with teammates on what you're building is more important than building the right thing.
I would write this as:
Building the right thing wrong is more important than building the wrong thing right.
And I firmly believe that agreeing on _what_ you are building is more important than _how_ you are building it. Mainly because it is easier to change the _how_ than it is to change the _what_.
> Building the right thing wrong is more important than building the wrong thing right.
I entirely agree with this, but don't think that's what the original point was getting at.
> And I firmly believe that agreeing on _what_ you are building is more important than _how_ you are building it. Mainly because it is easier to change the _how_ than it is to change the _what_.
This seems like it should be true. In practice, I've found that the _how_ is the part that gets embedded in the wet-ware of people/companies and is the hardest to change.
Given an application written in a language with some framework, some datastore, deployed to some cloud, it's actually exceedingly easy to make another that follows that pattern and does a completely different business function. Much more challenging would be to change those parts of an application and keep it doing the same thing.
"Build consensus before you start building any specific implementation".
... the idea being that it is better for a team to collaborate on a compromise solution than any given person pushing into their pet solution because they "know better".
> Writing non-trivial software that is correct (for any meaningful definition of correct) is beyond the current capabilities of the human species.
Bullshit. Humans have been sending computers into space for decades, and while yes, some have had programming problems, most worked correctly, for a very specific definition of correctly.
Thing is, NASA (and Russian and Chinese and Indian and ...) space engineers have approached their challenges from the point of view of actual engineering, while modern "software engineering" is all about craftsmanship, and just a sprinkle of actual engineering to make ourselves feel good.
And it's obviously not the developers' fault, but the clients/customers'.
You want a 100% bug-free web browser? OK, so let's define precisely and formally what the browser is supposed to do. Then, you'll sign a very expensive contract, and in 6 months to 1 year, I'll deliver the software, with a formal proof it works as expected. Of course, the requirements won't change during the development process, and any update after delivery will be costly and will take time.
Now, who would want to pay thousands of dollars for a web browser that will be delivered in a year and that will be obsolete as soon as some new fancy web tech is used everywhere else? I can't really blame the customer/client either.
But the thing is, "agile" requirements rely on craftsmanship, not on engineering. You can only engineer durable things. Buildings would crash constantly, too, if they had to be updated every other week, ASAP, and while keeping the costs as low as possible.
Your general argument seems reasonable, but it also doesn't necessarily contradict the GP's point.
If we're going to build something non-trivial that is correct, we first need to specify what a "correct" result would be, in some rigorous, comprehensive, unambiguous form. That in itself is already beyond the vast majority of software development projects, though not necessarily outliers like very high reliability systems.
That is partly because the cost of doing so is prohibitive for most projects. This is a common argument against more widespread use of formal methods.
However, it's also partly because for most projects the desired real world outcomes simply don't have some convenient formalisation. The potential for requirements changing along the way is just one reason for that, though of course it's a particularly common and powerful one. There's also the practical reality that a lot of the time when we build software, we're not exactly sure what we want it to do. What should the correct hyphenation algorithm for a word processor be? How smart do we want the aliens in our game to be when our heroes attempt a flanking manoeuvre? If you're a self-driving car on a road with a 30 legal limit but most drivers around you are doing 40, how fast should you go and why? Once we get into questions of subjective preferences and/or ethical choices, there often isn't one right answer (or sometimes any right answer), so how do we decide what constitutes correct behaviour from the software?
What a fun proposal! No one wants it, but it is exactly what we need.
It should do a bunch of really creative things so that further new fancy web tech wouldn't need to exist. Something like: distribute documents signed by their author in a truly robust way. It should have a review and certification system, and it should help monetize publishing in a truly transparent way.
Write it in hardware design language and run it in an emulator. :-)
I've seen the claim before that it was better in the olden days and that "no real engineer" would create software bugs, especially in connection with space flight. Meanwhile, here is a list of space flight software bugs that have cost over a billion dollars and put human lives in jeopardy:
> I've seen the claim before that it was better in the olden days and that "no real engineer" would create software bugs
But who would claim that seriously? That would be obvious hyperbole. The claim is rather that certain teams of engineers have created software/hardware combinations that, for all we know and for all practical purposes, did not have a bug and have not failed due to software error.
The development of high integrity systems is costly, though, so the discussion is a bit moot. Sure it would be possible to develop an Android app that - within the limitations of those devices and the buggy operating systems - would not fail due to a bug in its own program. It's just really expensive, particularly if the software is also formally verified. Generally speaking, it doesn't even make sense to consider "high integrity software" without the accompanying hardware. You can only satisfy real-time constraints for specific hardware anyway, and the certification and evaluation should be for software+hardware.
I honestly think the post I commented on more or less made this claim by calling the original statement "bullshit", writing off billion-dollar crashes and real risk of death as "programming problems" and calling it "actual engineering".
All humans are capable of unknowingly making mistakes which can have dire consequences. We've yet to see an approach to software development that will, without a shadow of a doubt, eradicate the possibility of bugs.
It's kind of curious coming out of university where I learned about doing requirements properly and up front (which I suspect kind of tends towards a waterfall approach) then comparing that approach to my work now.
Project managers constantly change their minds about requirements and then wonder why we're shipping late, though I suspect some of this is down to not considering failures/bugs when planning. Even when it comes to breaking down and estimating work, if the estimates software engineers produce don't align with the high-level schedule, they're automatically wrong, or people start asking about what we can trim down (which somehow is always testing).
Perhaps it's idealism vs reality, but I find it a bit bizarre.
It doesn’t bother me when they change their mind. What bothers me is when they refuse to document it. Every change should be documented. Print it out and get them to sign it. Then when they complain whip out your pile of documents and say, “Here’s where you asked for X, changed your mind to Y, then back to X, then something new to Z. Is that your signature?”
Devs are expected to have a ticket to track everything they do, so should the product managers. There should also be tickets for meetings so I can track how much time was subtracted from development by useless meetings.
Because it's much easier to change software than for example a bridge. I don't see it as a bad thing as long as all sides are aware that changing requirements changes the deadline as well.
> I don't see it as a bad thing as long as all sides are aware that changing requirements changes the deadline as well.
In my experience, this could not be further from the truth. There is simply no awareness that change remains expensive; it's just that now the process allows it without any fanfare, whereas before a change was a big deal.
Not saying that change requests are the way to go but certainly an understanding that change is not free would go a long way
Change in a software project definitely isn’t free but I do think it’s fair to say it’s easier than changing a bridge design halfway through.
Maybe there’s not as big a difference as people think, though?
It occurs to me that it is actually possible to change the design of a bridge late in the day. The extra dampers to fix resonance problems on London’s Millennium Bridge are a nice example.
Bridge tolerances are measured in cm, and I suspect that the contractor does make changes on site from the original design the consulting engineers delivered.
> I don't see it as a bad thing as long as all sides are aware that changing requirements changes the deadline as well
It's extremely dangerous, and software is quite comparable to a bridge. Imagine starting at both sides of the river and the bridge not meeting in the middle.
Changing requirements/features is very similar. And while on paper it sounds like no problem, that's when you need to push back against your PO/whoever is pushing the change.
They are building a bridge over the river near me. Headlines a month ago were that the two ends were within half an inch (1.25 cm) of each other - which was considered the best possible case. Last year the bridge contractor built a dozen different adapter plates to cover all possible mismatches, so that construction wouldn't slow down when the two ends got close enough together that they could tell how far off they were. (The bridge was supposed to open 5 months ago, but that is a different story...)
I'm not sure what the point is, but it somehow fits into yours...
Yes, there are some analogs to bridges, but software is much more flexible than concrete.
We can pretend we're building a bridge, but there could be another company that is more comfortable with changing requirements and may deliver their product later, but better suited to the market.
> Thing is, NASA (and Russian and Chinese and Indian and ...) space engineers have approached their challenges from the point of view of actual engineering, while modern "software engineering" is all about craftsmanship, and just a sprinkle of actual engineering to make ourselves feel good.
You are, of course, absolutely correct. Context is key here: you treat software engineering like "real" engineering when it's appropriate to do so (space flight, aircraft control systems, nuclear power plant monitoring and control, medical devices, and so forth).
However, I think for many organisations - and for much of the time - the engineering aspect is often heavily overplayed, and actually you're trading that off against two key aspects of software that can be used to build a real competitive advantage. Those being (i) speed of implementation, and (ii) malleability. Building software can get you to a solution very quickly. You can also iterate or modify that solution very quickly.
Clearly there are limits to this but for many companies, and most startups, getting something working quickly, and iterating on it quickly is far more important than any engineering merit. Same goes for larger companies introducing a new product: don't overegg the "engineering" because it will slow you down and time may be the only sustainable competitive advantage you have.
There's a time to be an engineer, and there's a time to be a cowboy hacker (and everything in between). The real trick is in knowing where on the spectrum to execute in the context of the customer base you're serving and the software you're building.
I think correctness is achievable for a lot of things, but the industry at large isn't chasing it. People ask for a feature. How long does it take, how much does it cost? Dang, can we get it sooner?
Nobody ever asked me about what it takes to ensure it is correct. Nobody asked me to work with them to produce a detailed specification that exhaustively considers all cases that must be accounted for, nobody asked for a formal proof, nobody asked me to run a model checker, not even do something like design by contract. And when I look at job postings, all this stuff is completely absent. I was excited when I found one company that recognized Ada and Spark, and got me into an interview for mentioning them.
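Even the cheap version of design by contract is rarely asked for. A hypothetical sketch of what I mean (made-up function, with Haskell's assert standing in for real contract support):

```haskell
import Control.Exception (assert)

-- Hypothetical account-withdrawal function with contract-style checks: the
-- pre- and postcondition are executable, so a violation fails loudly at the
-- call site instead of silently corrupting state downstream.
withdraw :: Int -> Int -> Int
withdraw balance amount =
  assert (amount > 0 && amount <= balance) $  -- precondition
    let balance' = balance - amount
    in  assert (balance' >= 0) balance'       -- postcondition
```

(GHC can compile the checks out with -fignore-asserts, so a release build doesn't even pay for them.)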
Just sprinkle some asserts and make a few tests. If the shit sticks to the ceiling, it's cooked. That's how the industry works.
We aren't even trying. It's pretty funny that everyone's making claims about what's possible when nobody is trying.
Yep, I bet there are niches, like in aerospace, where some try, and I'm sure some of them also succeed at correctness. We only hear about the failures.
Exactly. Correctness (and often mere robustness) is almost never a requirement, so it's never even attempted. There are industries where it is important and where it is done routinely, but evidently not in the places most HN commenters work. To say it's impossible probably sounds kind of silly to the engineers who are currently doing it.
I call bullshit on you too. Software is impossible to get correct for the vast majority of cases because there is no specification; nobody even knows or agrees on what the specification should be, or whether there is even such a thing in the first place. Writing the software is as close as we're ever going to get to discovering a fuzzy description of what the system should be, and the vast majority of the time people change their minds about what they really wanted once they get their hands on that iteration of the implementation. Repeat ad nauseam.
The notion that most(*) any non-trivial program can be formally specified is a myth that largely exists in educational institutions or research institutions disconnected from the realities of production software engineering, and has no basis in real life. Yes, there are a handful of exceptions for extremely constrained control problems in avionics, and even those turn out to have errors in the specification.
Given all of the comments on here, I think I might have not stated the central point clearly enough. It isn't about getting things 100% correct 100% of the time. No engineering does that.
What we like to call "software engineering" isn't actually engineering, and is more like a craft than anything. Actual engineering in software is possible.
I take issue with calling our field software engineering. It's a buzzword.
Yet formal correctness is about "getting things 100% correct 100% of the time" to the greatest extent possible (you can mathematically prove the "correctness" of a program using formal methods, but it still has to run on actual hardware and you're assuming there are no bugs in your tooling).
Having started my career in spaceflight software engineering, I can 100% confirm this comment is correct.
Having now left the field, working on most other software systems and teams tends to basically be "code-til-it-kinda-works" (despite the paeans to "Agile"). It's been amazing how just applying a little bit of the rigor I developed from earlier on in my career from working on embedded flight systems pays incredible dividends later on.
That said, one should have no illusions that NASA is this wonderful hub of innovation and a model for organizations to emulate. Not at all. Definitely not. But there is something to learn from a track record of consistent success flying computers deep in space.
It is said that the Apollo 11 lunar lander computer rebooted twice during the final approach. We could probably not say it was correct, but it was still robust. "For any meaningful definition of correct" sounds a bit too strong as a hypothesis. Maybe we could rephrase: for every piece of software S, there exists a definition of correctness for which S is incorrect. Timings in particular are very difficult to ensure.
> most worked correctly, for a very specific definition of correctly.
That doesn't mean incorrectness isn't there. That means the circumstances necessary for any incorrectness to manifest in a measurable way didn't happen.
Perhaps, but that is true for any form of engineering. There is no building on earth that can withstand impacts by Texas-sized meteorites, making them incorrectly constructed in those very specific circumstances. Correctness is whether it performs to the requirements of the spec, and if that spec contains tradeoffs that accept non-functioning in certain extreme circumstances then not functioning in those circumstances is NOT incorrect behavior.
I'm not talking about extraordinary events like a giant meteor strike though. I strongly suspect there are plenty of buildings that would collapse if there was a small flood or if they were hit by a car in the wrong place when they have been designed to remain standing in those circumstances. They're incorrectly designed, or incorrectly built, or incorrectly maintained. There's so many ways for things to fail. The only reason they've not collapsed is because those circumstances have never arisen. They probably never will. That doesn't make them correct.
I am always amazed when I think about how many instructions a computer goes through every time it does a cold-start and the first pixel is shown on a screen. I particularly liked that you wrote "most worked correctly".
Margaret Hamilton's pedantry on the Apollo scheduler saved the mission, and possibly lives, when a low priority process started flapping. From what I understand nobody forced her to write the code that way but her own conscience.
We know that now, but it's an example of a rule that was not written in blood. Many are. Someone had an intuition and nobody died because of it.
The effort needed to create a "hello, world" program is not trivial: it requires creating the tool chain (compiler, linker), the OS on which to run the "hello, world" executable, the text editor used to write the code, the code in your keyboard, and the list goes on. All software.
No software is trivial if the author wants to broaden his argument to the human species.
I always wondered where this multiplicative factor of "several times" comes from. In my experience, writing correct software was only marginally slower than writing sloppy software, as long as most of the thinking was done with pen and paper. Would you mind elaborating a bit more?
I'm guessing: because of cheap labor. Writing correct and/or fast software involves quite a bit of "don't do stupid things" all across the development process. Doing these things right doesn't add much time to development, but one has to first learn how to do these things right. A fresh and inexperienced developer, or the "one year of experience repeated 10 times" person that only worked at "move fast and break things" project isn't going to have this knowledge, but will be cheaper to hire.
Not OP/GP, but I think it's mostly about the definition of correct.
Does it correctly handle every possible sequence of inputs? For the vast majority of software in use today, the answer is "no"; The follow up question is "does it matter?" and (luckily or unluckily) the answer for the vast majority of software in the vast majority of use cases, is "no" as well.
>>> The failure occurred only when a particular nonstandard sequence of keystrokes was entered on the VT-100 terminal which controlled the PDP-11 computer: an "X" to (erroneously) select 25 MeV photon mode followed by "cursor up", "E" to (correctly) select 25 MeV Electron mode, then "Enter", all within eight seconds
The software was not obviously correct or incorrect; in fact, it had been acceptable for an earlier model (which had some hardware protections missing from the newer one). Reaching the incorrect state required the race described above to trigger, which did, in fact, happen in practice a handful of times.
You can work very hard to formally prove your implementation, only to find out that the compiler had a bug and makes your software bad. Or the CPU does; or all the designs are fine, but there's a bit flip due to electromigration or cosmic rays. Many people consider this "force majeure" - "an act of god" one cannot anticipate - but cosmic rays are in fact an expected -- and hard to avoid -- input to many systems.
You are in control of a logical model which you can, with extra work, get (provably) correct. But that IS, from experience, 10x to 100x more expensive, and unless you go for 10x-100x more expensive hardware, it reduces and moves the sloppiness factor around but does not eliminate it.
How far down the stack does it need to be correct? Can a 100% correct Python script be considered correct if there could still be bugs further down the stack? Also what's the context of the software? Much software has pretty safe failure modes, but if it's a medical system for example, knowing level of correctness right down to hardware level may be useful.
I think in that light, most modern software can be built to be robust and easily maintainable rather than correct. Like the difference between building a bridge, where you understand everything in the system, versus building a vehicle engine, where you have multiple dependencies but you still need it to not catastrophically fail should any of them stop being correct (eg: fuel pump fails, engine management detects lower fuel pressure and cuts ignition, saving the engine).
Everyone seems to think that, but I feel this ends up being death by a thousand cuts.
I don't know a single piece of software on my computer right now that doesn't crash, act weird, hang or do something else to annoy me on a daily basis.
Now yes, each infraction might be just a minor niggle, but god it adds up!
By the end of the day, all the bugs that on their own shouldn't be too bad, leave me frustrated as hell and wanting to say fuck everyone and just go live in the woods.
It's especially worse when I know this shit doesn't need to happen.
Compare that to the cost of software that works perfectly, but costs 10x what it currently does in order to achieve that perfection. Think of all the free (gratis) software you use that maybe would have a huge cost associated with it to be as reliable as you would like.
It's all about tradeoffs. Everybody wants cheap software that works perfectly, but reality dictates that can't happen. Most of the time, cheap/free with minor annoyances is better than perfect but high-cost.
Depends on what consequences your software can have in the real world. Most of software that you run on the laptop can glitch and crash all it wants as long as your users can tolerate that, but software for cars for example can have very real consequences on lives.
We should be worried about poor development practices in the automotive industry specifically. Looks like many manufacturers approach the tasks the same way any other hardware manufacturer does, i.e. software is secondary in their minds. Software for a car is kind of the same as software for your next "smart fridge". Among other things, it can be outsourced for example.
Whereas, as Bruce Schneier put it, a modern car is a computer on wheels, not the other way around. A computer on wheels is a lot more dangerous (and there are security implications too, it's what Schneier meant actually).
Software that doesn't have changing requirements is eventually trivial. You can logic your way to a reasonable solution eventually. Building an app that constantly evolves is never trivial.
Software without changing requirements is absolutely not eventually trivial, unless ‘eventually’ is a nice word to mask all of the required project complexity.
The Mars Rover has specific requirements, is this more trivial than a startups new web-app?
Many successful interplanetary probes have launched with significant faults that had to be patched or worked around to complete the mission. Luckily the teams are usually able to do that.
I kind of see where they're coming from with this one, although I don't think it's true. The definitions of trivial and correct can be moved to fit whatever they're trying to say.
Rephrasing it as something like it's incredibly difficult to write a large piece of software with no bugs makes more sense. And I think for most organisations it may as well be impossible.
But will they pay millions of dollars for a WiFi firmware stack in energy IoT devices that the grid is increasingly depending on that isn't vulnerable to memory overflows or other hacking vectors?
Software is becoming more and more depended on for life and death use cases every day.
Life-and-death software is already regulated fairly strictly and generally has decent quality. But of course the companies developing it also try to cut costs and in the end it's more about checking off boxes to avoid liability than producing correct software.
Hmmm, on the sliding scale from harmless to life-and-death software, it seems that as time goes by, many programs and services migrate from being closer to the harmless end to being closer to the life-and-death end.
I feel that migration is often ignored or discounted. For example, Facebook in the early years was considered mostly harmless, but now has migrated to being exploited by state actors to brainwash populations into hating each other, interfering in elections, or at worst performing genocide on a minority group. We need to stop assuming that just because a software application is harmless now, that it will stay that way, and we need to adjust its "correctness" accordingly as it migrates along the harmless <> life-and-death scale.
I agree with you. And it's ok allocating a limited budget, if you keep in mind that you can get at most a limited software.
In my experience, nearly all delays in software shipments have been only in the eyes of idiot managers: developers and smart managers recognize the constraints being constantly added to a project and know in advance that the result cannot be what was promised in a totally different scenario.
If you have to qualify your statement with hedge words like "most" and "for a very specific definition" then it's hardly "bullshit".
Especially with the plethora of stories about fixing bugs in probes mid-mission, crashes due to poorly sanitized inputs, and byzantine failure conditions.
Yeah, I think the author is being a little pessimistic here, but to give them the benefit of the doubt, it may be interesting to understand what "correct" means in that sentence.
I mean, if correct = 100% uptime, 100% effectiveness, etc., then it's not so much of a software problem, but a hardware one.
Exploding batteries, bad wiring, design faults, silicon bugs and erratas, bad connectors, temperature problems... I've seen them all. It's more common than one may usually think, especially if you work in embedded. And somehow I wanted to tie the answer with the concept of the parent about space equipment.
BTW, "is it software or hardware?" is exactly what the boss asks first when a customer has a problem.
I agreed about the bias. When I read "software" I try to think about a broad spectrum of software applications, including the ones that are running in your fridge, router or the ISS.
When the author (or most people, I believe) mentions software, it seems it's just apps (like the food apps mentioned elsewhere) or web apps, which are just a fraction.
I wouldn't be so sure about that. My PC has an overclocked AMD processor with 32GB non-ECC RAM (because ECC ram wasn't even available when I bought it, let alone affordable).
Even if the software was 100% perfect, such a machine is expected to crash once or twice every year or so under full load just from the failure rate of the chips (cosmic radiation / quantum tunnelling effects). Especially RAM seems to have become fairly error-prone nowadays.
What software are you using that once or twice a year doesn't count as "rarely"?
My Viaplay app stops streaming and tells me I'm not connected to the internet several times a day (when I'm obviously still connected to the internet.)
Several times a week my phone's Netflix state doesn't match my TV's state, so I'm either not presented with the controls or when I press them they do nothing.
If the faults were with the hardware, I'd expect the errors to be spread around the software that I use. But Firefox and Gmail always work for me (even if the JavaScript I'm served makes the machine unresponsive), whereas Netflix, Viaplay and HBO constantly misbehave.
Let's not turn this into a discussion of the meaning of "rare". We don't disagree, I believe. Sure, software bugs are more frequent. The point is merely that non-ECC RAM is fairly buggy by design and no amount of software will eradicate the crashes due to RAM faults. If you have 32GB or 64GB of RAM these should appear not too rarely, provided you actually use that RAM. Standard desktop CPUs also have surprisingly high error rates, but it's hard to find good figures. I've tried to find some, but apparently Intel & Co. hold them under wraps nowadays.
All uptime problems I have ever encountered were software related. It was just a matter of time before the software failed, but it would eventually fail in some way. The hardware was just doing what it was told to do.
"Correctness" in programming refers to mathematical correctness. As in formally proving your program to be 100% correct as apposed to proving your program correct for a couple cases using software blackbox testing.
This can be done and has been done but is not well known among the javascript boot campers that populate HN.
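For anyone who hasn't seen it, a toy example of what "formally proving" means here (Lean 4, with a made-up function and spec):

```lean
-- The "program", its specification, and a machine-checked proof that the
-- program meets the spec for every input, not just the inputs a test suite
-- happened to try.
def double (n : Nat) : Nat := n + n

theorem double_meets_spec (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```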
Good developers can write bad software too. Primarily because there are a lot more factors to software quality than just developer quality: management quality, team dynamics, cost/time pressure, market dynamics, etc.
Couldn't agree more but I can say it wasn't just NASA. The industry as a whole used to be more aligned with engineering but this has changed considerably since, say, the rise of "Agile".
> The industry as a whole used to be more aligned with engineering but this has changed considerably since, say, the rise of "Agile".
I'd argue that it changed once smaller businesses could afford (and have room for) their own computer. My best guess would be somewhere around the PDP-8 era (late 1960s-early 1970s).
Once things started to move away from large-scale computer rooms and consoles of blinking lights, towards a more "hands-on" and interactive approach, where the programmer didn't have to wait between "batches" of runs to see what their code did (correct or not), that is when (from a software development point of view) so-called "engineering" went out the window.
>Thing is, NASA (and Russian and Chinese and Indian and ...) space engineers have approached their challenges from the point of view of actual engineering, while modern "software engineering" is all about craftsmanship, and just a sprinkle of actual engineering to make ourselves feel good.
What would be the quintessential differences here? We plan, we test, we revise. I'd argue we're all doing the same thing, but if you're working on a food ordering app you can afford not to be as stringent with the testing and quality, versus software that takes people into outer space.
It's just a matter of stronger concerns. Things matter more in a field where you can kill people if you have a bug, so everyone strives to do a better job, top to bottom.
The difference is measuring and quantification. Engineering means you know, or can estimate, a load and perform calculations to see if your design meets that load.
Most food order apps seem to be more like, eyeball it, slap something together, then it works until it doesn't, at which point you figure out a fix. That works for a while until something else breaks, so you patch a fix into that. Ad infinitum.
Funny you mention food order apps. UberEats web app is my prime example of how a flaming train wreck can make you a ludicrous amount of money, and how there's almost no correlation between engineering quality (not necessarily product quality) and revenue.
Space starts with quality long before testing. They do formal mathematical proofs of their code. They spend a lot more time in code review. Those are just the things I know of without looking it up, I know there are other things they do.
I came to write something simular.. Just because you havent come accross it, doesn't mean its beyond our capabilities, infact, by definition its within our means.
Even the NASA Curiosity rover encountered a software error and now it's stuck. I think OP's argument stands.
And NASA software is not that complex if you think about it; at least it has fixed requirements; once they ship it, it's done. No more changes. This is different from complex software that requires constant changes and adaptation to changing requirements; in that case the code has to be designed to adapt easily, which is difficult to achieve, and most developers and companies fail at it, or, in the case of companies, only barely succeed through extremely expensive solutions.
Big companies like Google don't write efficient software; they use their capital to hire as many expensive engineers as possible to brute-force the requirements until it works. That's why they keep rewriting projects from scratch every few years, over and over; they don't know how to write adaptable software. Their software is somewhat complex, but still simple enough, and they have enough capital, that brute-forcing solutions is still feasible (for a high cost).
The same cannot be said about more complex tech like blockchain or large-scale machine learning systems; you can't brute force your way to a solution and full rewrites will actually set you back.
I work with high-assurance software in the blockchain space and only recently read a paragraph from some old NASA software quality textbook and thought damn.. this is awesome. It's anything but trivial!
>Being aligned with teammates on what you're building is more important than building the right thing.
Goodness this is incredibly naive.
It can only be true if you're working without customers or stakeholders, and who the hell works that way other than hobbyists and startups that are indiscriminately wasting other people's money? The vast majority of software I've written (I'm outside the valley) is written for actual paying customers, and it must be "the right thing," i.e., exactly what they paid you to build.
Beyond that, if you're, for example, writing an emulator and the emulator works differently from the system it's emulating, that's a failure no matter how "aligned" your team is. Nobody will buy it and nobody would even use it if you gave it away.
Except that for a lot of developers, "the right thing" has nothing to do with what customers want, but rather with the language used, the libraries used, the CI/CD setup, monitoring... so many technical aspects that sometimes have nothing to do with what the customer wants. And that's where being aligned makes sense.
> Being aligned with teammates on what you're building is more important than building the right thing.
I don't buy this. I'm not sure these two things should be placed in opposition (or at least tension) in this way. It makes for a nice soundbite but I don't think it withstands scrutiny.
I've seen and worked in teams where alignment was great and we all worked really well together but, at the end of it all, nobody bought the damn product. I.e., we didn't build the right thing. Let's not kid ourselves: aligning and working well together to build the wrong thing is somewhat pointless (granted, you might learn some useful lessons along the way). Now, if you take that team and then assign them to build the right thing you have something really powerful.
If your cross functional team (Product, UX/Design, Eng) are aligned, it doesn't matter if you build the wrong thing initially, because you'll follow a good agile and/or lean process to validate early and thus learn what you should be building earlier.
It's not saying 'being aligned is more important that building valuable things' - but "if you focus on building a team that functions well operationally and strategically, you'll figure out how to build the right thing quicker".
If you are building the right thing but you are not aligned, then you will not end up building the right thing and you will fail. But if you are aligned, it's easier to steer the ship closer to the right thing.
I read "the right thing" there as meaning certain architectures that are well known to be best practice at scale, or at least fashionable enough that consultants can push them. On the other hand, a well-aligned team that meets business objectives while using an out-of-date language, framework, unscalable architecture, etc. delivers a lot more value and is generally easier to be a member of.
In sports and esports this is a well-known paradigm. A team committing to execute a poorly strategised play is much more effective than uncohesive action towards a well strategised play.
If everyone is on the same page, the execution will be great even if the idea/requirements are not perfect. Which is better than a perfect idea executed poorly.
>Now, if you take that team and then assign them to build the right thing you have something really powerful.
Except if not aligned then the team won't actually build the right things but their own personal conflicting views of the right things.
Alignment and having a team that values the right things are not opposites. However, if a team doesn't value the right thing then trying to force them to won't achieve anything.
> Except if not aligned then the team won't actually build the right things but their own personal conflicting views of the right things.
Read the remark in context and you'll see that at this point I was talking about a team that is aligned and is building the right thing, and that is really powerful. Sorry if it wasn't expressed clearly enough.
I agree. Group Think is bad in general. When no one is permitted to speak their mind and is encouraged to just go along with things, that's generally a bad environment.
Based on the title, I was expecting yet another opinionated story written by a successful outlier and littered with biases, but actually every point resonated with me.
Maybe this article illustrates the difference between sharing knowledge and sharing wisdom.
Knowledge can be divisive because it's often communicated through rigid (black or white) statements and founded on outlier experiences but wisdom is generally not divisive; wisdom usually doesn't get people as excited but it's also harder to refute; wisdom is knowledge without the bias.
There are plenty of extremely clever (and extremely biased) developers, but very few wise ones.
I tend to think that upvote/downvote mechanisms (Like on Reddit and HN) are somewhat of a threat to the sharing of wisdom because wise ideas don't create that dopamine rush which clever ideas do (they don't trigger the strong feelings required for upvote/downvote). Wisdom is rarely surprising or controversial.
Even discussing the idea of wisdom seems to be taboo. As if it's some kind of outdated concept; but the irony is that it's more relevant now than ever.
I don't know if it's a "classic", but I recently picked up a copy of Wicked Problems, Righteous Solutions: A Catalogue of Modern Software Engineering Paradigms [0] after reading through another HN thread on good software dev books. I haven't cracked it open yet though.
Pragmatic Programmer and Code Complete are usually mentioned as classic programmer text books. I own both and have probably read 10 pages between the 2 books.
I haven't read Code Complete but I did flip it open to a random page around halfway through the book, and was treated to a page-long explanation about the fact that conditions in loops aren't constantly being evaluated, but only once at the top of each loop.
I think it's reasonable to simply assume the entire book is bullshit after that.
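(For reference, the behaviour being belaboured there fits in a few lines; a minimal, purely illustrative Python sketch:)

    flag = True
    while flag:          # the condition is checked here, once per iteration...
        flag = False     # ...so flipping it mid-body does not stop the loop immediately
        print("the body still runs to the end of this iteration")
    # the loop exits at the next check of the condition, not the instant flag changes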
By some remarkable coincidence, @JanStette's version of this (1) and mine (2) have a lot of overlap. Maybe we should all write a version of this and see if common themes emerge.
Jan and I clearly agree on coding, design and testing, for example.
> Peak productivity for most software engineers happens closer to 2 hours a day of work than 8 hours.
Is it possible to improve this by training? For example if I'm productive 2 hours and force myself to be productive 10-30 more minutes each day for a number of days and when I'm comfortable with 2:30 hours of productivity, force myself to be productive for 30 more minutes.
Would this eventually lead to 8 productive hours? Or is burnout (or another complication) a more likely outcome? Has anyone tried this?
> Would this eventually lead to 8 productive hours?
In the same way that sleeping 10 minutes less each day leads to you not needing sleep at all, or showering at 1 degree hotter each day makes you impervious to boiling water. So, no, absolutely not, and potentially dangerous to your health.
Your example is not quite as obviously wrong, depending on your definition of "running" and "indefinitely". With a bit of hand-waving I would personally count hiking all day; a marathon also feels long enough to me to count as indefinitely. Furthermore, cross-country marathons are a thing; see for example the movie "Forrest Gump".
I tried something like that multiple times: removing all distractions and non-work, or replacing them with educational activities. Each time it led to an overall slowing down. The peak ceased to be as good. Once it led to depression, when I replaced all entertainment and chill in my life with "productive activities".
You can do that in the short term; the real problem came when I tried it for weeks.
Some amount of downtime is necessary. Now I intentionally take breaks and exercise a bit in between coding. The exercise actually helps. With breaks you gotta measure them a bit though, or else you risk taking breaks that are too long, too often.
Do you think that those outcomes would not be so detrimental if you did those tweaks more gradually, say over a year period you would get from 2 hours of focused productivity per day to, for example, 5 without those side-effects?
That's why I stay home when I need a long stretch of productive time on one or two things.
But the office has upside when the task is not well defined or I work on a lot of small stuff. At home, my brain will wander around more easily between context switches.
> I'm working on something that's been de-risked. I know what I need to do and I know how to do it
Thanks for including this one. I think the biggest stresses in my dev career have come from times where I didn't really know what needs to be done or when I was way out of my depth. I believe this reason alone was a huge driver of my procrastination in the past, especially when I was the lone programmer on the project.
That's why I hate doing interviews and always procrastinate when I need to find a new job. Once I talk to an engineer I know whether the job is a good fit for me or not; until then I feel HR/recruiters are just wasting my time.
Productivity is a function of concentration and understanding. If you have sufficient knowledge, what's lacking is concentration. I'm perpetually amazed at how people 1) fail to see this and 2) think their concentration power is fixed for life.
An example of this is pair programming: your pair forces you to stay concentrated (and you force them too; if you are screen-sharing you won't alt-tab to Reddit), so it's an example of how you can keep focused for longer periods than normal, at the expense of being more tired at the end of the day.
I'm guessing this is an average rather than an absolute amount every day, and I also don't quite think it's possible to 'force' yourself to be more productive, as being productive is about how fast you think, how fast you type, and how fast you come up with solutions to a problem.
No matter how much you force yourself, you can't just force yourself not to feel exhausted/restless/depressed or whatever else is affecting your productivity. You sure can alleviate these symptoms, but that's improving your well-being generally rather than forcing yourself to be productive.
So it sort of depends what you mean by force, I'm guessing it would just cause you more stress and make you eventually burn out because you're not in reality becoming more productive by focusing more, you're only stressing yourself out further.
It really depends on the context of the work. Usually I'm getting interrupted, or I worry that I'm going to get interrupted. Then there are meetings, coffee breaks between them. All these things prevent me from being productive.
I don't think working 8 hours of productive work will get you to burnout. It's more a matter of habit and discipline and interest in the project.
In my experience, what leads to burnout is a bad working environment and/or working a lot for long period of time.
How do you define productive so that you could add a time goal to it?
Because I read it as "there are, on average, about two hours of productive in-context work to be done, and the other hours are about building the context for those two hours, and that context doesn't often transfer between days." You can't add 10 minutes to that.
I found that the best way to improve productivity is automating development work as much as possible. 2-4 productive hours a day are enough if you spend them solving important problems. I have always worked fewer hours than my colleagues with roughly the same amount of work done and the same error rate, because they refuse to change their workflow.
Write tests for everything; don't do manual testing. If the tests are green my code is good. Don't spend time manually testing your code.
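For instance, a minimal sketch of the kind of test that replaces a manual re-check (the function and values are made up):

    # test_pricing.py: run with pytest instead of eyeballing the numbers by hand
    def apply_discount(total, percent):
        return round(total * (1 - percent / 100), 2)

    def test_apply_discount():
        assert apply_discount(100.0, 10) == 90.0
        assert apply_discount(59.99, 0) == 59.99
        assert apply_discount(10.0, 100) == 0.0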
Touch typing was a huge improvement. I'm not a fast touch typist, so it didn't really improve my typing speed. It's more that I outsourced my typing from my brain to my hands, so while I type I can stay focused on the problem.
The whole point of this is that it's not a binary state, and it's not "writing code".
So, tl;dr: no, it's not, because you are most likely not the bottleneck. You are not the master of the cognitive overload you operate in, for instance.
> Writing non-trivial software that is correct (for any meaningful definition of correct) is beyond the current capabilities of the human species.
I think it's worse than this: Defining correctness for any nontrivial system to the level of detail required by software is beyond the capabilities of any human or group of humans operating to a deadline. The only way to get there is to pare down the scope such that physically possible things are defined to be out of said scope, such that the system punts and relies on humans to figure it out.
This has implications for job automation.
> Thinking about things is a massively valuable and underutilized skill. Most people are trained to not apply this skill.
Attempting to prove yourself wrong is a massively valuable and underutilized skill. If you get a theory, develop a test which the theory is vulnerable to, such that if the test comes out a certain way the theory is disproven, and then run that test. Some people seem unable, or unwilling, to think like that.
This has implications for software testing. It has implications for all kinds of testing.
> Being aligned with teammates on what you're building is more important than building the right thing.
It depends. If iterations are quick and the current direction is roughly on the path to finding the right thing yes, otherwise no unless you are choosing to redefine software engineering.
> The amount of sleep that you get has a larger impact on your effectiveness than the programming language you use.
For me this is not true. The type system has a bigger impact on me. If I use a language with static types + IDE, I am quite productive even while chronically not getting enough sleep (currently looking after a newborn). Probably much more productive compared to writing in a dynamic language while getting enough sleep.
And of course, that does not mean that sleep is not a huge factor. It most definitely is.
> There are many fundamental discoveries in computer science that are yet to be found.
That's a strange claim. It could be, but there is a lot of evidence that we discovered most of the fundamental things early on, up to the '80s, and that the improvements since have been really incremental; a program from 1970 is not alien today. The technology of computing has progressed a lot more than CS itself: a computer from 1970 is kind of alien, and a toy.
Your description of how things are is exactly how Thomas Kuhn describes the process of "Normal Science" in The Structure of Scientific Revolutions. Cracks in our understanding grow, and patchwork theories are put around those contradictions, but eventually the dam breaks and a scientific revolution reshapes the way we think about a subject.
I don't think we will sit at our computers writing code in the same way a hundred years from now. It just feels very inefficient to me, it's a good abstraction for now but better will come, new paradigms that probably look radically different. I personally would find the opposite claim rather strange :)
> Being aligned with teammates on what you're building is more important than building the right thing.
It's important to define the terms here. Does "important" mean important to you or the company? I could understand the former, but how are you ever going to build the "right thing" if you can't agree as a team what you're building?
> The fact that current testing practices are considered "effective" is an indictment of the incredibly low standards of the software industry.
This is an interesting one. I first took it to mean 'current testing practices are inadequate', which isn't an extreme opinion, and one I bet 99% of HN agrees with. It's 'common wisdom' that teams should be doing more testing, TDD, etc.
But now that I read it again, it's specifically saying that current testing practices are 'ineffective', not 'inadequate', which would indicate we should be doing less or even none of it. 'ineffective' to me means worse than nothing, since testing, like anything, has a cost. (time wasted, more code to maintain, lower morale etc)
I'm not sure which the author meant. But I do think the latter is a hot take, and I get the feeling I'm in the minority in agreeing with it.
I'm reminded of this PG quote (obviously written a while ago):
"Indeed, these statistics about Cobol or Java being the most popular language can be misleading. What we ought to look at, if we want to know what tools are best, is what hackers choose when they can choose freely-- that is, in projects of their own. When you ask that question, you find that open source operating systems already have a dominant market share, and the number one language is probably Perl."
If I think about what I've written unit tests for at home, that'd be a bunch of maths-y stuff that I was having trouble debugging, and a few functions here and there that I consider 'tricky' and want a bit of extra peace of mind for.
When I'm building web apps at home though, like I always do at work, how much of it do I write unit tests for? Zero. I can't quantify why. I just know intrinsically that they're useless and it's a waste of time. I just know that 95% of my code works and I know what the 5% I'm unsure about is and what manual testing or browser testing I need to do to clarify it.
If anyone on my team at work ever said that, everyone would look at them like they just took a shit on the carpet (including me, because I'm happy to smile and nod for the right salary).
Then there's this, from the same essay:
"One difference I've noticed between great hackers and smart people in general is that hackers are more politically incorrect. To the extent there is a secret handshake among good hackers, it's when they know one another well enough to express opinions that would get them stoned to death by the general public. And I can see why political incorrectness would be a useful quality in programming. Programs are very complex and, at least in the hands of good programmers, very fluid. In such situations it's helpful to have a habit of questioning assumptions."
I definitely know a few coders I've worked with who I'd be comfortable raising my views on testing with. We might differ on the details (I quite like browser tests, don't find a lot of value in snapshot testing most of the time, we might unit test a different tiny subset of the app etc), but by and large we'd agree that most common testing is a crock. And coincidentally, they're all the devs that I think write fantastic code, and that I'd happily build a startup with.
Anyway, that's my Thing I Believe About Software Engineering, with a bit more clarification than OP's ones.
High test coverage is mainly useful for code that will be worked on by more than one person, so it makes sense that you wouldn't want to add it to your personal projects. Doesn't mean it's not worthwhile.
High test coverage is also more important for large projects than small, even when working alone. I never write unit tests for hello world - it would be a waste of time. I will write tests when I'm hacking on a large project even if I'm alone because I know eventually I will make a change that breaks something that used to work.
> I just know intrinsically that they're useless and it's a waste of time. I just know that 95% of my code works and I know what the 5% I'm unsure about is and what manual testing or browser testing I need to do to clarify it.
I'm with you 100% here. I would even go one step further:
Unit tests, where Bob is writing code to test that the code that Bob writes does what Bob wants it to (that's a mouthful), are pointless. They don't test Bob's assumptions, they don't test for correctness, and they don't test for robustness beyond the corner cases Bob can come up with (if Bob bothers to test robustness beyond ensuring that the coverage metrics hit 100%).
Even if Bob comes back in to refactor the code later, the unit tests begin to fail solely because the code has changed locality. That changed locality means that the unit tests have to change as well, invalidating any usefulness of the original unit tests.
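A hypothetical illustration of that coupling (all names invented): the test pins the current internal structure, so a pure refactor breaks the test even though the observable behaviour is unchanged.

    # Bob's code today: a private helper that the unit test reaches into directly.
    def _split_name(full_name):
        return full_name.split(" ", 1)

    def greet(full_name):
        first, _ = _split_name(full_name)
        return f"Hello, {first}!"

    # The test is coupled to the helper, not to the behaviour of greet().
    def test_split_name():
        assert _split_name("Ada Lovelace") == ["Ada", "Lovelace"]

If Bob later inlines the split into greet(), greet() still behaves the same, but test_split_name() breaks anyway; the test changed because the code's locality changed, not because behaviour regressed.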
> I just know intrinsically that they're useless and it's a waste of time. I just know that 95% of my code works and I know what the 5% I'm unsure about is and what manual testing or browser testing I need to do to clarify it.
I think this is all based on experience. You know when the tests are going to give you value, because you know when you’ve been burned by a lack of tests in the past. You put tests where you’re worried about bugs appearing.
I write tests when 1) they help me bring up code (you’re gonna do some interactive test/debug cycles in the beginning, might as well do that with tests), or 2) I can’t sleep at night because the code is subtle or complex and I worry about its continued correctness, or 3) there’s a bug and it’s easy to write a regression test for it.
Sometimes writing code without tests is like putting sand into a pile: it works fine for a while, but as soon as you start overloading it, whole faces suddenly shear off. You can shore it up with your hands, but above a certain scale you’re battling to prevent collapse. Tests are like scaffolding you put in place to keep the sand from sliding away.
> When I'm building web apps at home though, like I always do at work, how much of it do I write unit tests for? Zero. I can't quantify why. I just know intrinsically that they're useless and it's a waste of time.
Serious, not-loaded question for you BigJono:
At work, do you (or your team) primarily write unit tests or integration tests?
Until my current job, I spent the first 5 years of my career writing unit tests in OOP systems where I injected mocks of dependencies. I realized in hindsight that these are basically useless. You end up proving very little about the code.
At my current position, we do primarily integration tests. (Have a test DB, essentially run code "end to end" minus the HTML UI.) These tests are _very_ helpful. Have stopped me from creating bugs dozens to hundreds of times. (Over a few years.) Same with other co-workers.
It's worth mentioning the product is a large monolith though.
I will write "unit tests" but they will be at the level of a whole module or assembly, and so often blur the lines with integration tests (particularly because spinning up a test data store is often easier than buggering around with mocks).
"Real" integration tests look at multiple modules or systems.
You catch a lot of bugs this way, have far fewer tests to maintain, and have the freedom to make internal changes without worrying that you're then going to have to fix up 100 tests to get your build working again. And you still get the coverage you need.
It's a better world.
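As a rough sketch of what I mean, assuming the module writes through to a store (an in-memory sqlite standing in for a real test database; all names invented):

    import sqlite3

    # The module under test: real logic plus real storage, no mocks.
    def create_user(conn, email):
        conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
        conn.commit()

    def find_user(conn, email):
        row = conn.execute(
            "SELECT email FROM users WHERE email = ?", (email,)
        ).fetchone()
        return row[0] if row else None

    # A module-level test: spin up a throwaway store instead of mocking it away.
    def test_create_and_find_user():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (email TEXT)")
        create_user(conn, "a@example.com")
        assert find_user(conn, "a@example.com") == "a@example.com"
        assert find_user(conn, "missing@example.com") is None

Internal refactors of create_user/find_user don't touch this test; only a change in observable behaviour does.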
Tbh, I find writing tests at the class or method level only to be absolutely ridiculous, except for cases where I'm dealing with a method that does something particularly tricky or esoteric.
Again, for all this context is key. As developers we should be thinking about what we're doing and the trade-offs inherent in that, not just blindly following a set of rules that say "You should do X; you should unit test; you should write tests for every method of every class."
> how much of it do I write unit tests for? Zero. I can't quantify why. I just know intrinsically that they're useless and it's a waste of time. I just know that 95% of my code works and I know what the 5% I'm unsure about is
Surely if you were, for example, writing an automated trading system, or something that handles money, or anything that actually impacts the running of a business in a bigger way than the website being down you would see the value in writing tests, right? Even in the situation that the website is the product, don't you see the value of having tests to make sure that future changes don't break existing behavior without having to run through an exponentially expanding manual test regime?
For me tests are about catching regressions and verifying behavior, e.g.;
* I can visually see by reading the code that if someone submits an order that breaches a risk limit, it's rejected. Still, I wouldn't feel comfortable letting that system trade until I had verified with both a unit test that handles the generic 'reject a number > risk limit' _and_ an integration test that verifies 'I send N different types of orders into the system, where N/2 are above the risk limit. Which orders pass the risk check?' (A rough sketch of the unit-test half follows after this list.)
* I've done some refactoring to some low level connection logic to handle some rare condition that last happened on Friday at 2300. I wouldn't release the code with the refactoring until I can verify with a test both that the new condition is handled, and that all of the expected unchanged behavior is unchanged. Without a test suite, I would have to spend a day setting up environments and testing each invariant, which is a waste of time for something that I can automate and have more confidence in. Also, without a test suite, how would I know that a change in the future wouldn't result in me having to come online at 2300 on a Friday because of a regression that someone didn't catch in their manual testing?
* Someone has submitted a PR that modifies how often our log files get rolled - without a test suite to verify the failure modes, I would never let that get to production. Why would I want to risk losing log files, at the cost of spending 30 minutes writing a test for the failure modes (at least, the ones I know about, which is still better than none because testing isn't a zero-sum game)?
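To make the first bullet concrete, a minimal sketch of the unit-test half (the limit value and names are invented):

    RISK_LIMIT = 1_000_000  # invented limit, purely for illustration

    def accept_order(quantity, price, risk_limit=RISK_LIMIT):
        # Reject any order whose notional value breaches the risk limit.
        return quantity * price <= risk_limit

    def test_order_within_limit_is_accepted():
        assert accept_order(quantity=100, price=50.0)

    def test_order_breaching_limit_is_rejected():
        assert not accept_order(quantity=100_000, price=50.0)

The integration-test half would then push a mix of passing and breaching orders through the real order path and check which ones come out the other side.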
Having the "it's going to fail anyway, and I already know where" attitude is a terrible way to approach risk management (which is, ultimately, what testing represents) IMO. It might be fine in certain domains, but in others its a quick way to tank a company.
I definitely agree that it's not worth testing everything (e.g. CI scripts - if they work, they work!), but for actual business logic or infrastructure not writing tests is straight up negligent. Even if you're certain that the code does what it says and you can keep all of the failure modes in your head at once as you make changes, I guarantee that someone else on your team (this could be you in six months) doesn't, or can't.
Of course, those are good examples. And I do write tests for stuff like that. But it's a tiny, tiny part of what the average front-end web app dev is doing on a day to day basis. My claim isn't that testing is useless. It's that you can get 90% of the benefit from doing probably 10% of what the average dev (in my field, YMMV if you're not a FE web dev) is doing at the moment, and the business value of testing falls off a cliff once you hit the low hanging fruit.
Lemme go back to the same PG article, again, cause it's that good.
"A couple years ago a venture capitalist friend told me about a new startup he was involved with. It sounded promising. But the next time I talked to him, he said they'd decided to build their software on Windows NT, and had just hired a very experienced NT developer to be their chief technical officer. When I heard this, I thought, these guys are doomed."
If I walk into an interview at a startup, and they have 100% unit test code coverage with a nice suite of integration and end-to-end tests, snapshot testing, nice contract tests, the absolute works, all for their 50 customers generating $5k monthly revenue, my first thought is they're probably fucked. It might be a really nice place to work, but if I want job security I'd be going somewhere bigger, and if I want shares I'd much rather be in the share house down the road with the guys that polished off 3x as many features in a couple of weekends of hacking.
Being perfectly aligned with teammates on everything but ending up with a totally different or straight-up wrong result: how do you justify that? Otherwise I'm with OP.
> Pair programming can easily double that and the low productivity periods are not nearly as low.
Pfft. Prove it.
People are very different and work best in different ways. I don't mind pair programming for specific reasons, but as the "normal" way of working? You can keep it.
Pair programming performed relentlessly selects for certain personality types. I, for example, didn't get into programming because I wanted to sit around talking all day. Far from it.
Programming is a creative endeavour, kind of like songwriting. Some people are great solo songwriters, some people work well together, people often struggle to cross that divide and, even if they collaborate well, may struggle to collaborate well with everyone.
I can work effectively in a team but under most circumstances I cannot program at my best with somebody hovering around or buzzing in my ear all time. And, believe me, I am by no means unusual in that regard.
Mandatory pair programming is one of those things that, I think, seems like a good idea to third rate management who don't want to take the time to understand how the people on their teams can work together most effectively.
All good points and I was of the same opinion before I underwent almost 2 years of mostly pairing. My takeaways:
People are different. The core bunch came from Pivotal, which performed pair-programming interviews, so they were self-selected. That accounted for about half the engineering staff, so the other half didn't undergo this process. I myself have always been a solo developer, so pairing was very unnatural at first and I didn't pair well with everyone. We rotated pairs every week (sometimes two), so that wasn't a big problem.
There are times when solitary deep thought is useful, and breaks for that activity could be taken at will while your pair works on maintenance tasks etc. Most of the time the work isn't of this type; more often it's less complex, requiring light whiteboarding or on-screen prototyping, both of which work well when pairing.
The style of code that comes out of pairing tends to be plainer than when working solo. There is incremental progress in both implementation and design refinements when pairing. When working solo there tend to be grander designs, with a moment where it all comes together, or doesn't. I enjoy the latter but find the former has higher average throughput. Pairing tends to eliminate more of the 'might need soon' or 'will need in the future anyway' implementations.
After pairing with someone a few times, verbal communication becomes very terse and fluid. An unexpected benefit was the ability to immediately resume context after interruptions or breaks.
The biggest advantage of pair-programming that doesn't get mentioned much is the organic spread of good practices and conventions. With approximately 20 devs it's not really worth writing and maintaining standards documentation and conventions for handing new cases that keep appearing. With rotating pairs this discovery of what works well and normalization upon it is natural. Minor tips and tricks aside from the source code can also be invaluable power-ups when learned by others.
Pair programming isn't about maximizing 'your own productivity' it's about maximizing the productivity, quality, and consistency across all dev teams.
Within that period we also tried full-stack dev pairing. This was much more challenging and didn't work nearly as well. We didn't stick with it long enough to know if it could become more beneficial because basically people self-selected into front or back-end development and liked it that way.
These are my findings from being both a solo and pair-programming developer. If you're curious and have an opportunity I highly recommend trying to stick with it for a while and see what your findings are.
Except when it does. It's not fast, but these findings in CS find their way into general software development. See: Rust, Haskell, Typescript, ML, Ray tracing, and so forth.
I clicked this expecting an overly opinionated list on how to do things the right way, but came away agreeing with nearly everything said.
> Writing non-trivial software that is correct (for any meaningful definition of correct) is beyond the current capabilities of the human species.
This is the one point I somewhat disagree with, and it leads to one of the things I believe about SE - it's all about perceived risk. Mission-critical systems work well because the risk is well-defined, and it's usually "people will die". In business, managers are happy to break rules when the perceived risk of it biting them in the ass is low. Small budget? Skip the tests, click around on deploy to see if it works, and ship it. Not enough time? Deliver a minimal product and build the rest later when we have the budget.
There are two examples I often use for this: Panera Bread and Pipdig. The former leaked millions of customer records, ignored the press for a few days, and got off with zero consequences. Pipdig did even worse: they put backdoors and DDoS code into their WP themes/plugins to attack competitors, and when called out they lied, hid the evidence, and then went back to selling themes to unsuspecting bloggers with zero consequences.
Both sides likely knew what they were doing was wrong, but the risk of getting caught was minimal, so why not break the rules? It probably saved them a ton of money in the long run.
> Being aligned with teammates on what you're building is more important than building the right thing.
I've no idea why so many of you are up in arms about this. It isn't about bowing to managers, or being a punching bag for others. It's about making concessions as a group to define what you need to build, and the best way of doing it.
> Peak productivity for most software engineers happens closer to 2 hours a day of work than 8 hours.
I'd qualify this with "on average". Some days I'll get 30 minutes of stuff done, some days I'll fly through work for a solid 8 hours.
> Thinking about things is a massively valuable and underutilized skill. Most people are trained to not apply this skill.
This is so true it hurts. I'm currently working on a project in an agile structure, and it's going like many agile projects I've worked on in the past. Agile is used as a buzzword for "fuck planning, just write user stories and be done with it", all while team mates bitch and moan about spending too long in "planning" meetings. The second we took the time to actually have these meetings and plan out our backlog, we made key decisions and discoveries about how things work, what edge cases we need to think about, what doesn't work from a user perspective, etc.
> How kind your teammates are has a larger impact on your effectiveness than the programming language you use.
Over the years, I've always believed that empathy is the best skill you can have in software, and that is often paired together with kindness. An empathetic team is often a kind team, and when empathy is a core part of a team it highlights areas where certain stakeholders don't share that trait.
To offer an opposing position: I pretty much disagreed with, or at least would heavily qualify, most points. The rest I thought were trivial.
> Writing non-trivial software that is correct (for any meaningful definition of correct) is beyond the current capabilities of the human species.
Perhaps, but this is a worthless distinction, especially if the author does not define correctness. If it is mathematical correctness he is after, I think experience has shown a program does not need to be correct to be very, very useful, and except in extremely critical situations it is not a consideration. The same way a road does not need to be perfect.
> Being aligned with teammates on what you're building is more important than building the right thing.
Hell no! Every leader would prefer to go in the right direction at 50% of maximum speed than to go at 100% in the wrong direction.
> There are many fundamental discoveries in computer science that are yet to be found.
True, but trivial. The same thing can be said about every theoretical branch of science like math and logic, and even for many experimental branches.
> Peak productivity for most software engineers happens closer to 2 hours a day of work than 8 hours.
Perhaps, but if you work the other 4-6 hours at 25% of your peak you would still do more than double during the day. Besides, the 2-hour figure, although reasonable, strikes me as anecdotal at best. If anyone knows of a study I would love to read it.
> Most measures of success are almost entirely uncorrelated with merit.
Without a definition of merit, this is worthless. And for that matter, a definition of success is also needed.
> Thinking about things is a massively valuable and underutilized skill. Most people are trained to not apply this skill.
Really? Thinking is a valuable skill? Who would have thought?
> The fact that current testing practices are considered "effective" is an indictment of the incredibly low standards of the software industry.
Or perhaps reaching that "effective" threshold is beyond human capacity. I think this claim is pretty insulting to the software industry in general too.
> How kind your teammates are has a larger impact on your effectiveness than the programming language you use.
Perhaps? It is not that straightforward. Doing a CRUD app in Ruby in a team full of assholes will have a "larger impact" than doing it in assembler with a bunch of goodie-goodies.
> The amount of sleep that you get has a larger impact on your effectiveness than the programming language you use.
Perhaps? Without further qualification it is worthless.
"<Writing non-trivial software that is correct (for any meaningful definition of correct) is beyond the current capabilities of the human species.>"
You can say this about any tech field. Heck, you can say this about social domains as well. And yet the world marches on, with progress being made.
"<Being aligned with teammates on what you're building is more important than building the right thing.>"
Because yeah, building the wrong thing was always good and had nothing to do with economic failures, right? Tell that also to the NASA teams that built for the moon race, which is littered with stories of teams doing different stuff and not being aligned. My favorite story there is the one about how they made the suit.
"<There are many fundamental discoveries in computer science that are yet to be found.">
There are many fundamental discoveries in all fields that are yet to be found; CS is no more special than psychology, for example, it only pays better these days.
"<Peak productivity for most software engineers happens closer to 2 hours a day of work than 8 hours.">
Peak productivity depends on each individual, and within each individual there are seasons. I can be productive at night or during the day, and when I am productive during certain hours, for the life of me I cannot be productive during other hours. Procrastination is how individuals perceive their non-productive hours.
"<Most measures of success are almost entirely uncorrelated with merit.">
Finally, the single statement in this otherwise useless article that I can agree with.
"<Thinking about things is a massively valuable and underutilized skill. Most people are trained to not apply this skill.">
That depends on your education and how your parents raised you. As for how much value this has in current society, well, just ask Trump (he's still a billionaire). Also, in that regard, see the previous point.
"<The fact that current testing practices are considered "effective" is an indictment of the incredibly low standards of the software industry.">
Actually the standards are quite high; you should've seen them 40 years ago, when Ford preferred to allocate 200M USD to paying victims rather than improve the safety of the chassis, because that would've eaten their profits tenfold.
<"How kind your teammates are has a larger impact on your effectiveness than the programming language you use.">
Become freelancer, be your own boss and you couldn't care less about teammates. One man show where you call all the shots is awesome.
"<The amount of sleep that you get has a larger impact on your effectiveness than the programming language you use.">
This point goes hand in hand with the one above about productivity; it all boils down to each individual.
> The fact that current testing practices are considered "effective" is an indictment of the incredibly low standards of the software industry.
It's not the tests' fault that programmers are lazy and don't write tests for every function, or write unit tests without integration tests, or vice versa.
"Our intellectual powers are rather geared to master static relations and ... our powers to visualize processes evolving in time are relatively poorly developed. For that reason we should do (as wise programmers aware of our limitations) our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible."
As Systems Engineers we're probably better than 99% of the population at visualising the dynamics of the system, and even then we're still pretty bad at it.
Humility in the face of this human limitation helps enormously I find.