Why software projects take longer than you think: a statistical model (2019) (erikbern.com)
185 points by thunderbong on July 14, 2023 | 173 comments



All this is smoke and mirrors, when the reality is: if you gave accurate estimates, you'd get reprimanded, not get the project approved, etc. Same reason that infrastructure projects go over budget. The way anything actually gets done is by initially being overly optimistic, ignoring potential future problems, and getting the project approved, then locking it in via sunk costs. Now, as problems turn up, you can claim they could never have been foreseen, that it's an impossibly hard problem, bla bla. But best to bail from the vicinity of the project once reality starts to manifest itself. By that time you should be on the next project, perhaps at the next company, again promising the stars.


> if you gave accurate estimates, you'd get reprimanded

I like the cynicism, but this implies that you can give an accurate estimate, which I also take exception to. To give an accurate estimate, you'd first have to have a complete specification, which I've never seen.


It's realism, not cynicism. I've absolutely been in situations like that, where the sensible estimate is 5X what management is angling for, and I know it's the case because we've done that exact thing (in this case, new hardware bring-up) many times, and with the inevitable problems it always takes the high number. :/


I worked at a place where everything had to be gold-plated. Some devs had an idea for a JSON-driven form. It was super simple: less than a week of dev time. Management got wind of it and an architect got involved. The project was taken from the original devs. The new team spent 2 months building a DSL that didn't support mobile (so it didn't meet the requirements). In the end it was canned.

The original 3-man dev team (Rails/JS/iOS) did a bit of skunkworks and built the original idea in 2 days. We had a good head of tech at the time, so we deployed it and just kept quiet when we estimated new forms.


I worked at one place where, if the estimate for any task was over 2 days, it would not get approved. Some of those "really just two days" tasks could take months. If we didn't want to do something, we gave a high estimate. If the business really wanted it, they asked different devs until it was two days. Such a strange place to work.


Was the intention to force people to break tasks down into smaller chunks? There's no way you're spending months on something that can't be broken down at all.


Breaking projects down into <2 day chunks isn't a bad idea (although max 2 days is a bit aggressive), but those chunks are likely going to be useless to stakeholders and shouldn't be communicated outside of the team.


Pretty much correct, IMO. Really, the first build (and usually the only one, unfortunately) IS the process of estimating. I try to POC as much as possible to make sure I know where the landmines are. Just this week, I blew by one of my own recent estimates because I assumed I could pretty easily determine the output column names and data types for any arbitrary SQL Server stored procedure using a SQL query. Many implementation hours and attempts later, I realized my folly, and I am not new to SQL.


> Really, the first build IS the process of estimating.

You hit the nail on the head.


Did you look at sp_describe_first_result_set?

https://learn.microsoft.com/en-us/sql/relational-databases/s...


Yes, this.

It's difficult to scope some projects without starting the build.


It is possible to give shockingly correct estimates, but management doesn't like to hear how long it'll actually take, they just want to hear the number they already imagined.


How so? Genuinely curious.


The only consistently accurate estimates are based on previous work for which you have data. As a general rule, if you did X and it took Y days, the next time you do something similar to X, it will take an amount of time very similar to Y.

The problem is that (a) most organizations don't track and record the time to do anything and (b) people don't want to take the time to break down tasks to units similar to something they did before. Estimation is itself a time-consuming task.
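The reference-class idea above can be sketched in a few lines (all numbers hypothetical):

```python
# Minimal sketch of reference-class estimation: given a log of similar
# past tasks (hypothetical data), quote from the historical distribution
# rather than from gut feel.
from statistics import median

# Hypothetical history: actual days taken for tasks judged "similar to X".
history_days = [4, 6, 5, 9, 5, 7]

def estimate(history):
    """Point estimate plus a pessimistic bound from past outcomes."""
    typical = median(history)   # what it usually takes
    worst = max(history)        # the tail you should budget for
    return typical, worst

typical, worst = estimate(history_days)
print(f"quote {typical} days, budget for up to {worst}")
```

The point being: the quote comes from recorded outcomes of comparable work, which is exactly the data most organizations never collect.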


Agreed: splitting up tasks, writing everything out to a T, and then estimating it could take basically all the time or budget available for the project.


Doesn't take anywhere near that long, but if you want accuracy, then you have to put in the work.


The easiest way is through ridiculous padding. If you consider carefully and think it will take a day, say it will take two weeks. If two weeks, say six months and so on.

This is the sort of thing you do when the unthinkable outcome shifts from idle workers to late delivery: Today, it's unthinkable to management for workers to be idle. If instead it were unthinkable for the project to be late, they would have to accept some risk of idleness to create that outcome.

Of course, you can always start doing prep work to de-risk the project, such as spikes, POCs, et cetera, within the planning phase. This might allow you to pad less but retain the certainty of your outcome.


Just an intuition developed over time with experience I suppose, as long as you have a fixed team you've worked with for a while on similar projects, with requirements that don't depend on many external factors.


If you are fully in charge of a project, you can often create complete specs. When working with PMs, designers, and other stakeholders, you often can't have complete specs, because the requirements are constantly shifting. Because you're "lean" and "agile".


I don’t see this working out. Mostly, people want something working right away, and writing out specs would eat up the entire dev budget. You could create complete specs if you were paying with your own money, maybe?


I have found a sure way to get estimates... Do the work, and say how long it took me.


You should probably still add a week to the estimate.


It's undeniably true that estimates are often driven mostly by what number will be acceptable. But that doesn't invalidate the points the article makes. Even when this distorting element of business drivers is removed, estimation is still very hard.

You might get reprimanded if you give accurate estimates - that doesn't change the fact you mostly can't give accurate estimates even if you wanted to.


> you mostly can't give accurate estimates even if you wanted to.

Ah, an optimist.

I long for a world where software development estimates and those who expect them are perceived as the unfunny jokes they are. Why do naked emperors make such wretched despots?


On the other hand, you really do need some kind of estimate. You'd never hire an hourly contractor to redo your roof if he refused to give any kind of estimate, so why wouldn't the same apply to software engineering?

That being said, accurate estimates are usually not needed, but the order of magnitude is. Knowing if the change is days, weeks or years of work is important - and while we're bad at estimates, we're rarely "I estimated 2 weeks but it took 6 months"-bad.


Estimate how long it will take to develop room temperature superconductors.

To assist you, let me paint a picture to put you in the right frame of mind:

You have never worked with superconductors of any kind and you’re not even a physicist. You’re one of those “scientist types” that are indistinguishable in the eyes of account managers.

You’re in a thirty minute Teams meeting with a disinterested project mismanager that wants not just a finish date (on a specific date), but the milestone dates on the way there.

You haven’t even met the team you’ll be working with. You haven’t yet spoken with the customer. Your “requirements” (lol) are literally just three words.

Your “obstinance” at refusing to be professional and “do your job” is being thrown in your face by the PM and is being recorded for your next review meeting.

Replace “room temperature superconductors” with any one of dozens of IT technologies or tasks and you have a nearly verbatim replay of my career and my challenges with estimation.

Here’s the thing: if you’re doing something for the first time, you can’t estimate it. If you’re doing it a second time, you can’t estimate it either because it’ll benefit from reuse in a way not experienced the first time. If you’re doing things three or more times in IT, you had better automate the process… another unpredictable first-time activity.

You’re either a meat robot doing repetitive work best done by scripts or LLMs - or by definition you are doing things for the first time and can’t estimate accurately.

I don’t do the equivalent of putting down roof tiles in my work.

Do you?


This is definitely true in some places. I have worked on Defence contracts that worked in exactly that way - win the initial bid by being cheapest, and then have the MOD pay you to fix all the bugs later.

OTOH, it's not the culture where I work currently. We are expensive, we know we're expensive but the reason we can get away with it is because we deliver. That's on several fronts like technical capability but also because when we say "It'll cost X to get you what you want" it usually ends up costing them something close to X.

Part of doing that is that if we don't have enough information to build the thing, we might do a "Phase zero" for a fixed period of time looking at the biggest question marks and that means we're much better informed to quote the main piece of work.


It depends on the company, or more specifically on how the company manages the budget (people working on projects, infrastructure budgets, etc.). In many companies people would rather do it the other way around: inflate their estimates to make sure that they definitely fit reality later. This also often leads to continuously increasing budgets, because of the corporate logic: "if you asked for a budget but didn't use it, why did you ask for it?" So people will actively look for ways to spend any excess, to make sure that next time they will be trusted with their estimate, which in turn will likely be bigger than the previous one. Crazy, but true.

So, it can work both ways really.


I have worked with both types of managers and many in between. My perception is that the sandbagger tends to be more accurate, and executives tend to respect them more. The problem for the sandbagger comes when the project is large enough to get multiple estimates and the sandbagger gets put on the spot to explain his estimate. Our current project was estimated by our former VP at 4X, while an outside consultant estimated it at X. The board called the VP in to explain why his estimate was so high. We are now well into the project; the former VP's estimate is reality, and the former consultant is gone.


The reason for this perception is that if you claim it will take 4X but finish in 1X, 2X, or 3X, you can just pretend to still be working on BS things, and then voilà, you're finished at 4X. Alternatively, if you announce completion when you actually finish, earlier than estimated, you look like a rockstar for having finished early.


I'd also add that once the project gets going and leadership realizes it's starting to fall behind, they'll ask what they can do to help. I'll reply with very real answers, like: we need wireframes, or language strings, or this endpoint isn't ready and we don't even have a spec, blah blah, but that never seems to go anywhere. Anyone with real power is so far out of the weeds and usually busy with other things, and those of us in the know are in the weeds and have no real power.

While I'm at it, there's another thing that boggles my mind. Leaders will say this is the most important project at the company, that so much is riding on it, blah blah blah, and then give you a max of 10 minutes to explain all the issues you're facing before seemingly moving on to the next thing. I don't doubt its importance. I genuinely believe them when they say it's really important, but then put your money where your mouth is, roll up your sleeves, and actually start driving shit.


Very similar experience in my ~decade of experience in software dev as an entrepreneur.


By this logic, programmers should be able to accurately estimate the schedule for personal projects or small scale open source projects where no one can "reprimand" them.

It's so far from reality that it's almost satire.


> By this logic, programmers should be able to accurately estimate the schedule for personal projects or small scale open source projects where no one can "reprimand" them.

In my experience, many good and experienced programmers are able to do that. The reason you see different results is that in many cases, if you do such a project, the journey is the destination: you do such projects to try out new, interesting technologies, so the central goal is not finishing the project but getting exposure to programming topics you consider interesting. Finishing such small-scale open-source projects to a given maturity is just a desired side effect, not a goal.


If you gave accurate estimates you'd get fired for taking so long to produce those estimates.

Ask your stakeholders how accurate each estimate needs to be and how much they want to spend on estimating. Where I'm at now, a simple complexity score of 1, 3, 5, 8, 13 works fine and gives us ±30% variance and predictive power for new work when translating points into time spent, after about three months of work and collecting the velocity.

We spend about five minutes per story answering "how complex is this?". Getting to 10% variance takes the team half a day for anything non-trivial. You have to go through this exercise at both extremes with the stakeholders, and then you get buy-in for the fast estimating.

Google Mike Griffiths cone of uncertainty for the real take on this.
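For what it's worth, the points-to-time translation I'm describing can be sketched like this (all sprint numbers are made up):

```python
# Hedged sketch: once a few sprints of (points completed, days spent)
# are logged, velocity converts a complexity score into a time range.
# The sprint log and the +/-30% variance figure are hypothetical.

# (points completed, working days) for the last three sprints
sprints = [(21, 10), (18, 10), (24, 10)]

total_points = sum(p for p, _ in sprints)
total_days = sum(d for _, d in sprints)
days_per_point = total_days / total_points  # inverse of velocity

def time_range(points, variance=0.30):
    """Translate a complexity score into a +/-30% range of working days."""
    mid = points * days_per_point
    return (mid * (1 - variance), mid * (1 + variance))

low, high = time_range(8)  # an "8" story
print(f"roughly {low:.1f} to {high:.1f} working days")
```

The five-minute poker estimate stays cheap because the expensive part, the conversion factor, is amortized over months of recorded work.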


A team I was once on would take 2-4 days to estimate how much work we could get done in the next two months. We were usually pretty close! The important thing to understand here is that we had complete buy-in from Management. This was part of our process, and they understood that we needed that time to do a good job of estimation and risk management.

The problem in most organizations seems to be that no one is willing to absorb the cost of real estimation.


While that may be a reason, I strongly suspect the real reason is that engineers simply don't want to spend time estimating. In my experience most people think breaking projects down into small chunks and estimating each chunk is terribly boring, they'd much rather have bigger and less defined chunks so they can start coding faster.

I think thorough planning and estimation usually make for both better and faster outcomes, but I'm unlikely to push for it, simply because a full day of planning (or more) every two weeks is pretty damn boring. Doing some light high-level planning and diving into things is less efficient, but way more fun, even if you end up having to backtrack more.


I agree that it's boring as hell. It usually starts out as kind of fun, but halfway through the first day, your eyes are starting to glaze over.

OTOH, it's part of a quality process and if your organization is driven by the need to produce high-quality output, accurate size & effort estimations are a part of how you get there.


Some questions are asked expecting a particular answer, rather than the truth. "How long is this going to take" is one of those.


If you get to the destination too quickly someone will make your destination farther away and harder to reach.


That's when you respond with "when do you need it done?"

And you can probably work together to see what can realistically be completed by whatever date they had in mind.


> All this smokes and mirrors, when the reality is: if you gave accurate estimates, you'd get reprimanded, not get the project approved, etc.

Depends on the workplace, I guess. I haven't seen this where I've worked; quite the opposite. Some managers I've worked with taught me to multiply estimates by 2X before writing them down.


An overblown estimate is arguably just as damaging as an overly optimistic one. The thing about estimates I’ve found is that the actual amount of time it takes to complete a task will never shrink to match an estimate, but it almost certainly will grow to match an estimate.

The reason planning poker exists is to create a sort of prisoner’s dilemma between developers to stop this getting out of hand.


You do understand that an estimate is not a guarantee and should not be treated as one, correct?

Planning poker is a sham. Between the abilities of various software engineers, their experience levels, and their knowledge of the area that needs to be touched, almost any estimate can be justified. In reality it's just a way to create peer pressure and extract more value from people.


Some peer pressure can be good for the team/people.


> The reason planning poker exists is to create a sort of prisoner’s dilemma between developers to stop this getting out of hand.

Planning poker creating a sort of prisoner's dilemma is an intriguing thought. Mind explaining a little how it leads to something like a prisoner's dilemma? I'd like to grok the connection between the two.


I'll have a go:

If you ask developer A how long it will take, the best outcome for them is that they AND EVERYONE ELSE go high. So they should go high.

BUT if everyone else goes low, but they go high, then they look bad. So they're forced to go lower. But then not too low, or they won't be able to deliver in time.

So perhaps this settles on an estimate that's "as low as possible but no lower"?
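To make the shape concrete, here's a toy payoff table (the utilities are entirely made up) with the structure I'm describing:

```python
# Toy payoff table for the dilemma sketched above (made-up utilities):
# each developer chooses to estimate "high" (padded) or "low".
# Padding is comfortable only if everyone pads; a lone padder looks slow,
# and a lone low-baller looks fast but risks missing the date.
payoff = {
    # (my call, everyone else's call): my payoff
    ("high", "high"): 2,   # comfortable schedule for all
    ("high", "low"):  0,   # I look like the slow one
    ("low",  "high"): 3,   # I look fast next to the padders
    ("low",  "low"):  1,   # tight schedule, shared delivery risk
}

def best_response(others):
    """My best call given what everyone else is doing."""
    return max(["high", "low"], key=lambda me: payoff[(me, others)])

# Whatever the others do, going low is my best response...
assert best_response("high") == "low"
assert best_response("low") == "low"
# ...even though everyone padding (2) beats everyone going low (1):
# the classic prisoner's-dilemma shape.
```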


> the best outcome for them is that they AND EVERYONE ELSE go high

But it's really not. The best outcome for them is that they are 100% accurate.

Now, I understand that that's generally not feasible, but assuming that you need to pad everything by a huge amount is part of the problem.

As a general rule, I've found that the kind of answer that gets the best response is on the order of "about X time to complete development, Y to get through testing, Z if you need any documentation, release notes or user instructions, I think A, B & C are risk items that could delay completion. If Sally and Jared aren't around for us to ask questions, that will also delay things so we need timely responses from them."

Give good answers and you'll find people take you a lot more seriously.


Very well put and so true. You can exercise much more leverage by exuding professionalism and experience.

I've successfully talked stubborn execs out of bad ideas by framing things in terms they understand (risk, costs, reputation). And by doing so earned more respect which I can further leverage in the future.


That cynical take is actually quite accurate in many contexts.

People are definitely trained via bad managerial incentives to do this and end up doing it even without thinking.

It's also funny how people talk agile but then want full project estimates with milestones down the line, because that's how they get to pretend that:

- they can give real assurances to the customer
- they're valuable PMs with a tech contribution (when it's mostly the M&M, customer management and meetings, that they're good at)

There's all flavours of situations and people, of course, but it's definitely a common trend in companies, even those that have high renown and are seen as leaders of an industry.


Indeed. Development should not begin until all requirements have been finalized.

Every project I've ever been on that had wildly wrong estimates was one where management didn't actually plan jack shit, due to inexperience, laziness, malice, or all of the above. They then play the blame game to try to keep their jobs, and man, it's so hilarious to witness the sheer carnage as everyone tears them a new asshole these days with the rise of work from home (group chats/calls, DMs, etc.).

We seriously need to drain the swamp of inept management and nepotism within business ranks.

I've experienced better and it is absolutely possible to be off by less than ~25% of the total time estimated.

i.e. it took 6 days instead of 5, or it took 5 months instead of 4, which is way less depressing.


> Development should not begin until all requirements have been finalized.

I agree, but this runs directly counter to Agile methodologies which are all the rage these days.


"How to end up shipping your project three years after the planned release"-starter-kit.


> The way anything actually gets done is by initially being overly optimistic (...)

I don't think so. At most, planners plan for the happy path and then add slack to help them manage expectations. Sometimes stuff emerges and projects adapt. Sometimes your slack covers it, sometimes it doesn't and you get delays and extra costs. That's a Project Manager's dish of the day.

> then lock it in via sunk costs

You're presuming that clients are oblivious to issues of delays and cost overruns, and the negotiation stages don't include accountability specifications.


My last employer required us to do project-long estimates based on whatever they asked for up front. However, they then felt like changing everything on a daily basis, making the estimate pointless.


> making the estimate pointless.

If it's any consolation, the estimate would have been pointless even if they never changed their minds about anything too.


> making the estimate pointless

Did anyone tell them this?


Not the same, but I once worked in a place that had reasonably accurate production data generated manually on paper, and required this data to be copied into Excel workbooks which generated reports used at weekly executive planning meetings. The various workbooks were to be created by copy-pasting the newest existing workbook of each type, entering the data, and copy-pasting some other data into the workbooks.

None of these had any access controls, not even for which cells were editable. When I asked about the obviously misaligned pastes and no-content graphs rendering the reports useless, a couple of people responded with "Yeah, we know the reports are crap. They want them anyway."

This was at a company that "was continuously growing at ~3% since the founding" (some decades) and "had never laid off any worker, not part-time or temps".

But somebody had given up. Maybe the guy who came before me, who quit with little notice. A week? I don't remember.

I was given three months for creating and documenting a physical filing system (previous guy left no documentation), and physically filing the backlog. I made it take two weeks, including going back through old Excel workbooks and physical files to repair, correct, and apply access control on the workbooks.

For my next two weeks, I spent a tiny bit of each day keeping things updated, and the majority of it helping out in another department.

Then they suddenly laid off all temps, no appeals from supervisors accepted.

Fast-forward a few years, and I heard from a current employee that they very nearly foundered before returning to profit, thanks to one of their largest customers taking on some of their debt (calculating that their reliability and precision were worth more than the cost of getting any other supplier tooled, schooled, and properly rigorous).

I like to think I worked myself out of a job there, by making the estimates not pointless.


> the reality is: if you gave accurate estimates, you'd get reprimanded, not get the project approved, etc.

I have never actually experienced this personally, but I have no doubt that it's a thing in many companies.

But it's really inept management. If management has some sort of hard date in mind, then the way they should pose the problem to dev is: we need this completed by xx/xx/xx -- what can be accomplished in that time?


This feels bleak.

I'm not implying you're wrong. Frankly, I wouldn't know enough for or against. And if I had to choose, I'd be inclined to agree.

It just feels bleak.


Are you trying to say you correctly estimate your own projects?


Shoot for the stars and you might hit the moon.


Another factor here is that in a meeting-heavy agile environment, many engineers can't even give a good forward estimate of the number of "in the zone" hours they will have in the coming week.

Will I be on 0 outage calls or 5? Doesn't matter if I am on pager duty rotation or not..

Will I get pulled into 3 calls this afternoon or 0?

Is my boss gonna come bother me with some urgent non-ticket task tomorrow morning? Etc.

I'm currently in a no-Jira, no-meeting, get-it-done environment, and I find myself getting tremendous amounts of work done for the first time in nearly a decade. I would not have estimated I could get even a quarter of what I've gotten done, done. My last 2 jobs were basically the opposite: everything took 4x as long as it should.


> Another factor here is that in a meeting-heavy agile environment, many engineers can't even give a good forward estimate of the number of "in the zone" hours they will have in the coming week.

As a developer, my highest level of productivity was on a 2-person team with a bi-weekly meeting with our manager. We would have impromptu meetings when needed, but for the most part I was free to work 6-15 hours in a row whenever I got in the zone, and if I slacked off at other times, no one noticed.

Once we got more people and switched to scrum my productivity plummeted. My anxiety went through the roof. Most days the standup left me feeling like I wanted to go back to bed. Most of the time when I was in the zone and feeling engaged I had to waste it on meetings that had no tangible benefits.

Busy work is draining, management knows this, they don't care.


Yes, I self-select for sub-5-member teams now; it's such a better life.


This is a big reason I'm still 100% satisfied being a solopreneur and working with specific clients I've chosen with lots of autonomy. Soooo much more productive and effective in terms of Getting Things Done™


> in a meeting-heavy agile environment

Every time I see that, I cringe. But sadly, you're not wrong.

Agile is anti-meeting, anti-heavy and definitely anti-meeting-heavy.

I really don't know what to call this weird thing that was created.

Just as an example: why are there standups? Because in XP, meetings are frowned upon. So the idea is not to have meetings whenever humanly possible (better to pair up, talk to individuals etc.).

And if there's a meeting you really really can't avoid or eliminate, let's at the very least incentivise everyone to keep it as short and meaningful as possible. By having everyone stand and thus be slightly uncomfortable.

How that fun, focused, no-nonsense, get-things-done approach got turned into ... whatever that "meeting-heavy" thing is ... is in some senses fascinating.

And terrifying.

> I'm currently in a no-jira, no-meeting, get-it-done environment

AKA, an agile environment. ¯\_(ツ)_/¯


>I really don't know what to call this weird thing that was created.

Micromanagement. https://en.m.wikipedia.org/wiki/Micromanagement

It's not a new concept; agile and the project management tools around it decided to break things into small enough components that they essentially streamlined and concretized micromanagement as a normal, accepted daily business process. That's all "agile" is in practice in most cases. If you see "agile", substitute "micromanagement" and everything that happens will make complete sense.


Don't get me started. The agile manifesto says “individuals and interactions over processes and tools”. So, of course, a whole industry popped up of managers (and agile consultants) prescribing Scrum.

In practice, it's not unlike waterfall, because requirements and timetables are prescribed from above. But hey, the team has a sprint kickoff meeting every two weeks, so we call it agile.


It's so much worse than waterfall. Every "agile" shop I've worked at still had deadlines 6 months out; they just didn't give you any time to design and estimate out the work, because it was "agile". They were all twice as process-heavy and had twice as many meetings as the shops that didn't call themselves "agile".


Yeah instead of defining where we need to be in 6 months and the steps needed to get there, we micro-focus on the next 1 week of work.

It's almost like Manhattan-projecting your own team, except there's no one who actually knows the full picture. It amounts to short-term "wins" and constantly missing the actual target.


It’s fun talking about Agile as something different from agile.

If a team practices Agile, they’re not likely to be agile.


On a tangent, a metric I've found strongly inversely correlated to productivity is average-mentions-of-"agile"-per-meeting. I've seen teams with AMAPA scores of 5+, holy crap what a shit storm that was.


Just wait until the Scaled "Agile" Framework[1] experts help to improve your efficiency with even more process overhead.

[1] They weren't even concise with the acronym. Hey, let's throw a random e on the end!


Scaled Agile Framework is possibly the least agile thing anyone has ever done.


I completely agree. I've honestly been amazed at how many people are willing to advocate for it.

TBH, I'd have a lot more respect for it if the framework were sold honestly, as something completely opposite of agile. There's merit to a non-agile approach, but the dishonesty is something that I find very distasteful.


agile is like the no true Scotsman. There's never a true agile :D


Yeah it cracks me up every time. 90% of people are like "the agile industrial complex is killing my job" and the other 10% are like "well you haven't REALLY done agile because..".


Both of these can be true at the same time.

And in fact ARE true.

The Agile Industrial Complex is a horror show, but that doesn't mean it's not possible to do agile well.

There absolutely is a way of doing agile well, I've been involved in a number of teams that did. And it worked extremely well. In fact, we did agile way before the agile manifesto came out, and before I was aware of there being a name for the things we were doing. We were just doing them because they make sense, they make us go fast and they let us have fun while doing it. Which is basically what the XP people said when they started writing things down. They never claimed to have invented a brand new way of creating software. No, they were observing that certain teams were very productive, and looked at what those teams were doing. If there was a pattern. Spoiler: there was.

Anyway, I also introduced agile "practices" in a large company. Well, in a small team in a large company. We didn't do a single one of the practices the Agile Industrial Complex proposes and often mandates. The much more important team next to us did; they did all the AIC practices. We did the technical ones, and interacted closely. Did TDD, worked on trunk, paired when necessary, did stuff alone when not. Did the simplest thing that could possibly work. ("Where's your database?" "We'll put it in when we need it." <later> "Oh, we're done. I guess we didn't need the database ¯\_(ツ)_/¯")

The more important team next to us that was doing Scrum with the standups and whathaveyounot failed. We delivered. Hmm.

And if you can point to the practices being promoted by the AIC as being in direct and obvious conflict with, for example, what's written in the Agile Manifesto, and can in fact point to ways of doing it right that are in harmony with the AM and in conflict with the AIC, then it ain't a No True Scotsman fallacy. It's a simple case of the AIC doing it wrong.


I think we went from one extreme to the other. While being an "island in the zone" is super productive, some communication and teamwork needs to be there to avoid a low bus factor. It's nice when a team member leaves and no one is freaking out, because there is a level of redundancy in the knowledge.

That said, the modern communication culture is out of control. We spend way more time talking about what needs to be done instead of actually doing something.

"Let's break the whole thing into 40 microservices" does not help. Unlike the neat theory of "everyone maintains their own set of microservices", every change and new feature needs to be over-analyzed for side effects. You are stuck in analysis paralysis. Not only do you not know what you should talk to other teams about, you may not even know who you should be talking to.


This doesn't really sound like an Agile problem specifically, more like a broader problem of "your on-call technical support roles shouldn't be shouldered by your core development team."

I've worked in an Agile team on a pre-launch product and after the 10min morning scrum (which served more to cement personal accountability and discover blockers than as the showboating and bikeshedding venue that some people here seem to have encountered) the rest of the day was more or less uninterrupted dev time with a few impromptu soccer games thrown in (for research purposes). It worked pretty well.

I'm currently in a 100% Agile-free environment, experiencing the delightful after-effects of good product-market fit, which means even if I have a clear schedule I never know if I'm going to spend half the day trying to help troubleshoot issues or helping end users integrate our product. I don't mind doing this, I want our customers to have the best experience possible and so far it seems to be working pretty well. It does, however, wreak havoc on my ability to 'get in the zone' because subconsciously I know I'll probably just be interrupted anyway.


> no-jira, no-meeting, get-it-done environment

Best env, but sadly a bit rare?

I used to select for shops like that, no wonder lay-offs are scary if you've been trapped in meetings instead of shipping.


I seem to manage getting into these types of roles 30% of the time, which is pretty nice.

Granted I am working a zillion hours right now, but I'm less stressed than when I was working 9-6 but in 5 hours/day of zoom.

Just feels like I am exercising my mastery vs fighting against inertia.


Something similar happened to me too. I switched from working 4 hours a day to 9 in a new job and my burnout healed.

4 hours of bullshitting, meetings, an unclear vision, constantly being blocked, anxiety over a lack of productivity was swapped with 9 hours of clear aims, clear goals and hard work.

It was still not sustainable but it felt way better.


Yeah, especially if in-office, or if there's any operational aspect to the role.

It's not like having 4 hours of BS job gives you 4-6 hours/day of free time to go to the movies, gym, spend time with family, cook for yourself etc.

Instead you are tied to a PC bored out of your mind.

The worst thing in life is a lack of purpose and feeling lack of control.


It did mean more free time, but it also meant that the free time was infected with worries and so was much lower quality.

It was easier to put work out of my head in the latter job. Anxiety didn't overwhelm my evenings and weekends. Sunday evenings were restful instead of steeped in anxiety. It felt more like they belonged to me.

When I came home each evening I felt tired in a good way.


I don't do "Agile" and never have. But every time I read or hear about Agile it's described like above. Everyone seems to hate it and it all sounds terribly inefficient and frustrating.

I scratch my head...

Why? Why are people still doing it? Who actually _wants_ that?


I think a lot of engineers say they want agile and they mean it in the sense of the ideals in the Agile Manifesto (https://agilemanifesto.org/). There isn't a lot to complain about with those ideals. Working software, people, collaboration and flexibility are all very good things.

When you look at the agile frameworks that are commonly implemented (Scrum, XP, etc.) you see that however well-intentioned they are, they seldom live up to those ideals. Do we care about working software or hitting burn down numbers? Do we care about flexibility or about keeping our iteration plans nice and clean? Do we care about collaboration or about having a paper trail to cover ourselves? Do we care about people or tickets?

As agile has gone mainstream more and more experiences are bad, courtesy of Sturgeon's Law. Then you add in the fact that the bad experiences will always be amplified (after all, when things are working well they're "fine"; it's still work).


We do that, especially in big companies, because management needs metrics, meetings and things like that to keep themselves occupied and relevant.


What do you mean? I find the exact opposite. People are still wanting their waterfall project with an upfront design and "requirements analysis", and shocked when things run over-time with a crappy end-product. This also goes for SAFE and many implementations of scrum.

What don't you like about agile? What do you use that's better?


Management has a solution for that, though - they'll bring in an efficiency coach who will talk to you for an hour insisting that every time you say "I can't", replace that with "I will".


My last place liked to bring in an astronaut or general to talk to us about things like this lol


Same


Management: We've got a great new project that has all the features that our users have been asking for. We need it done as fast as possible, it's the number one priority. How fast can we get it done?

Engineers: For all of it? At least a year, probably more. Even a subset of those features will take six months, minimum.

Management: That's too long. You can have all the resources you want, we need to have it done in 90 days.

Engineers: That cannot be done. No amount of effort or engineers will get it done in 90 days.

Management: It will be done in 90 days.

180 days later

Management: Why are so many of our projects late?


Tell them to rank the features by importance. Go down the list with individual ETA estimates and they can deduce the final ETA. For bonus points provide error bars for each item so they can see how it adds up...
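Summing per-item estimates with error bars can be sketched like this (a toy illustration with made-up feature names and numbers, assuming the tasks are independent so their variances add):

```python
import math

# Hypothetical per-feature estimates: (expected days, error bar in days)
estimates = {
    "login": (5, 1),
    "search": (10, 3),
    "reporting": (20, 8),
}

total_days = sum(mean for mean, _ in estimates.values())
# For independent tasks, variances add, so error bars combine in quadrature,
# not linearly: the total is less uncertain than the naive sum of error bars.
total_sigma = math.sqrt(sum(sigma ** 2 for _, sigma in estimates.values()))

print(f"ETA: {total_days} +/- {total_sigma:.1f} days")
```

Note the quadrature sum only holds if the tasks really are independent; in practice one systemic problem (a bad dependency, a sick teammate) correlates the overruns, and the true error bar is wider.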


The Mythical Man Month


I'm genuinely looking for a "calm" company. Is there such a thing? I have a few anecdotal stories of companies being absolutely chaotic (my current one included). I don't know where to point fingers; I could start by pointing at myself. Customers demanding custom features. Execs and sales people asking for unreasonable estimates. Engineers not feeling safe enough to say "no" but having to make something work, introducing tech debt. Engineers picking technology tools without much research, because there is no time, which increases complexity. People leaving, and then new people can't understand everything holistically. All of these factors combine into one gigantic bowl of mess.


Go find a well established company where you can work on an existing product. I work on a legacy point of sale product that started life in the 80’s and is still powering a good chunk of commerce around the world. I work on a small yet efficient team who customize the product for retailers. It is all about the team you’re on though. My company is also releasing all kinds of new products and services. I doubt anyone would describe working on those teams as calm.


> I'm genuinely looking for a "calm" company.

I'm contracted to the IT branch of a certain pharma company and it's been calm the whole time.

Projects are either about regulatory compliance (boring, but not challenging) or supporting research - the big brains do the thinking, while you just klaka klaka klaka away at the implementation.

A visible chunk of the latter goes nowhere and eventually gets cancelled, but everyone is fine with that.


The more insulated you are from sales / overhead cost-reduction pressure, the calmer things will be. Just make sure you're not so insulated that you're seen as a cost center as opposed to a revenue generator.


Both companies I've worked for rely on government contracts, and in general feel pretty calm. It's hard to feel super rushed when the software isn't delivered for a few years. I don't think I make anywhere near industry salary though.


I had calmness when I was individual contributor working alone on difficult long-term projects, involving both research and development.

Bi-monthly or weekly meeting with the CTO, and that was it.

Delivery was on time simply because I adjusted the work and constraints to match the expected deadline. If something wasn't possible or required compromises, I would just let the CTO know. It was often fine.

The problem for me starts when you have chaotic product people in-between business owners and you.


Depends on what you mean by calm.

I work as a developer for a company that sells somewhat niche B2B software that integrates with a lot of customer systems. We've got several large, well-known companies that rely on our products for their daily operation (and a ton of smaller ones).

We've got a lot of stuff to do, between new customers pouring in and the gov't changing their systems with little warning[1] and such.

However we also don't have most of the other stuff you talk about. Sales ask for estimates, but if our provided estimates don't work for the client, say because the contract with our competitor is due in a month, then they'll work with us to try to find some way of making it work rather than force it through.

There's a lot of freedom with responsibility, so sure, for low-impact stuff a developer might try some new tech to learn. However, for larger shifts it'll have to be discussed in the dev group, especially if it impacts support. We do have a mature codebase so some tech debt is inevitable, but we have it as a goal to try to improve those things if we need to work on a particularly bad area of the code.

As for people leaving, in the years I've been here there's been a very stable group. So stable our customers ask us how their systems work that we integrate with, as they have much higher churn.

I don't think we're particularly unique though. But we're a relatively small company with a name that you can't flash on a CV, and at first glance our niche might sound boring.

[1]: "yea we redesigned our API, we will be doing a hard cut-over in a couple of months"


That might be unrealistic in some cases but one way to deal with this is to take control over the situation. If you have a recipe to fix a particular problem whether it’s about unrealistic expectations or people not feeling safe - propose it, take full responsibility for the implementation and push it through. It takes a long while though.


Yes, that's absolutely possible. My company is making a b2b product. We have challenges and sometimes the scope of the new features can be huge. But I never felt big pressure.


What you're looking for exists in the government and large universities.


This model works by identifying the Inspection Paradox [0]: most time is spent on tasks that take a long time to do. Combine that with high variance, and one task will blow out, and most of a project will be spent working on something that is much harder than people thought and that the estimate process didn't see coming.

My interpretation here is software engineers are better at estimating tasks than I thought. The next step is analysing tasks being performed in serial to show how many sub-tasks a project can have before the estimates are effectively meaningless / likely to blow out an unpredictable amount. I bet it turns out to be about a sprint's worth of work. Real people have a knack for feeling out those sort of tipping points empirically.

[0] https://en.wikipedia.org/wiki/Inspection_paradox#Inspection_...
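The blow-out effect is easy to reproduce with a toy Monte Carlo simulation in the spirit of the article's lognormal model (all parameters here are invented for illustration): each task's actual duration is lognormal with its median equal to the estimate, and the single worst task tends to dominate the total.

```python
import random
import statistics

random.seed(42)

def simulate_project(n_tasks=10, estimate_per_task=1.0, sigma=1.0, trials=2000):
    """Each task's actual time is lognormal with median equal to its estimate."""
    overruns, max_shares = [], []
    for _ in range(trials):
        times = [estimate_per_task * random.lognormvariate(0, sigma)
                 for _ in range(n_tasks)]
        total = sum(times)
        overruns.append(total / (n_tasks * estimate_per_task))
        max_shares.append(max(times) / total)
    return statistics.mean(overruns), statistics.mean(max_shares)

mean_overrun, mean_max_share = simulate_project()
# Every task's *median* matches its estimate, yet the mean project overrun
# is well above 1x: the tail of a few long tasks drags the total out.
print(f"mean overrun: {mean_overrun:.2f}x, worst task's share: {mean_max_share:.0%}")
```

With sigma = 1 the mean overrun lands around 1.6x even though each estimate is individually unbiased in the median sense, and the single largest task typically accounts for roughly a third of the whole project.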


I have two simple parameters I use to apply to all estimates. How many people are involved? And how many times has each person done exactly this thing before?

The more people involved, the more invisible work there is. Not just meetings, but handoffs and general coordination.

And things are only predictable if they are done to the point of being known things. Rehearsed activities are known activities. Even if you shake up a part of it, people that have rehearsed are more capable of dealing with the shake ups and keeping things moving.


What I find ironic is that everyone in the industry knows that estimating is very hard, thus estimations are nearly useless, yet everyone insists on the importance of planning and on basing such planning on void estimates.

Nobody has yet had the balls to state the obvious: we have to learn to work without estimations.


> What I find ironic is that everyone in the industry knows that estimating is very hard, thus estimations are nearly useless, yet everyone insists on the importance of planning and on basing such planning on void estimates.

Obviously, planning is critical. Planning means resource allocation. How is this not a critical aspect of any project?

The mistake you're making is presuming that if estimates are not crisp then they have no value.

> Nobody has yet had the balls to state the obvious: we have to learn to work without estimations.

This belief is detached from reality. Failing to provide estimates means a failure to scope how much work is required, which translates to not even knowing how many people should be working on a simple task.

The problem you're failing to address is limited information, and specifically limited context from developers and emerging requirements. Projects have been adjusting to this for decades. Agile developed concepts such as sprints and spikes for this very reason. It's not only about changes in requirements. It's mainly about gathering info and updating projects based on that.


> Obviously, planning is critical.

Considering it very often fails even for successful projects, obviously, it is not.

> This belief is detached from reality. Failing to provide estimates means a failure to scope how much work is required

Your line of reasoning is detached from reality. The real world is full of project leaders who do everything in their power to know as soon as possible how much work and time are required; still, they never get the right answer until the project is completed.

Development is what makes a project successful; planning is what makes a project late.


> > Obviously, planning is critical.

> Considering it very often fails even for successful projects, obviously, it is not.

That's a strawman (if it fails for some, it works for none?). Working without a plan isn't how humans work. You want to argue about scale, that's fine, but that's far from absolute.

> > This belief is detached from reality. Failing to provide estimates means a failure to scope how much work is required

> Your line of reasoning is detached from reality.

You provided nothing to indicate that the reasoning is detached from reality.

> The real world is full of project leader who do everything in their power to know as soon as possible how much work and time are required, still they never get the right answer until the project is completed.

Any estimation I make for myself is not subject to it. My cofounders are not subject to some project leader who is trying to sabotage us. Simply put, this is another strawman to avoid explaining the pros and cons (due to a lack of opinion or thought?). It is not compelling.


> This is not a relevant strawman (if it fails for some, it works for none?). Working without a plan isn't how humans work. You want to argue about scale, that's fine, but far from absolute.

OP clearly fails to understand that the role of a project manager is to navigate ambiguity and uncertainty and deliver results in spite of them, and project managers are fully aware there is no such thing as crisp estimates and absolute certainties. Therefore OP puts up strawmen about absurd notions of rigour that no one in the world adheres to.

I mean, hasn't OP noticed project managers don't carry around stopwatches?


> The real world is full of project leader who do everything in their power to know as soon as possible how much work and time are required, still they never get the right answer until the project is completed.

They don't know the true answer until the project is complete.

If my boss asks me how much time I need to integrate with some client system and I say I estimate one week, then if it's ready after four weeks that might be just fine for my boss.

My boss knows that there are factors that can affect the delivery of a project. What he almost always wants to know is are we talking days, weeks, months or years?

> Development is what makes a project successful; planning is what makes a project late.

So when you travel to new places, you just head in the compass direction? No looking at maps or similar?


> Considering it very often fails even for successful projects, obviously, it is not.

You are using a very personal and peculiar definition of failure, which is not shared by anyone else.

Back in the real world, companies hire project managers and put them in charge of projects without any concern or regret regarding the methodology. Do you think the whole world is wrong and you're the only one getting it right?

> Your line of reasoning is detached from reality.

Please explain exactly why you think scoping efforts have no relation with estimates.

> The real world is full of project leader who do everything in their power to know as soon as possible how much work and time are required,

Yes, it's their job. What exactly do you struggle to understand?

> still they never get the right answer until the project is completed.

Yes, what about it? Exactly what do you have trouble understanding?


Experienced people develop different things than inexperienced people. Planning could be as simple as involving experienced people to ensure the right thing is being built.


Because no customer wants to write a blank check.

If you don't know if it's 500 or 5000 hours, you're simply not buying the product.


I think the fundamental misunderstanding is that we aren’t building a product. We are doing research and development. We are figuring out how to build something novel, otherwise the customer could just go out and buy it already. Once we’re done with discovery, the computer builds it. So, the customer is like any other who is paying for research and development. It’s not a blank check, but they should go into it without the expectation that something new will be discovered on some accurate schedule.


> We are figuring out how to build something novel

Is it possible to estimate the duration of a process that has never before occurred (e.g. the construction of something novel) with any expectation of accuracy?


This is hubris of the field of software engineering. The no-code market and the existence of tools as old as MS Access prove that there are software projects for which a better analogy is not R&D, but construction.

I do note that construction projects are also not good at estimating, but it's folly to suggest that at some level we are not _manufacturing_ rather than _developing_ a widget.


Construction is a bespoke non-factory manufacturing process. If you extend the metaphor:

Estimating large projects is like big construction projects: those tend to be large buildings, and relatively on schedule. Estimating small projects with small teams is bespoke, and more likely to go off schedule when the plumbers can't coordinate with the electrical services, i.e., you have a dependency on another team's API or something, similar to how software struggles in small-team, open-ended-scope projects with unknowns. But a tightly scoped single-trade software project won't struggle, just like construction.

In the end, I agree with you that construction is a decent metaphor. But construction is very far from manufacturing a widget, as it's a bespoke manufacturing process with ample site-specific constraints.


If we're going to talk about construction as the metaphor, then it's done by the computer (e.g. compilation), and it's so cheap or trivial on many projects that developers will ask the computer to build the software hundreds or thousands of times a day (e.g. via test-driven development). In this metaphor, I think the developer would be more akin to a designer or architect, but where the building in question is at least partially novel. If it's not novel in product, people, process, etc., then the client could just go out and buy or use one that has already been produced. The developer wouldn't be necessary.


All the successful projects I've participated in could have been managed with this simple sentence:

"After some indulging in alcoholic beverages, we have signed a contract for <vaguely specified features> to be shipped on <date out of someone's hat>: please, do your best!"

And the outcome would have been even better if we hadn't wasted so much time preparing detailed estimations and later inventing justifications for why those estimations were not precisely met.


You estimate as a forcing function to know what you are trying to accomplish. The nuances of scope, how to do it and tradeoffs get discussed.

I think estimates are a problem where you have, say, 10 estimates, you add them up to 47 days, divide by 3 people, call it 16 days… ah, about 3 weeks, that'll be done. That is dumb!

But as an order of magnitude it is useful.


Planning and estimating are orthogonal. I don't think inaccurate estimates should prevent you from anticipating the future.

> have to learn to work without estimations.

A closely related but different time management tool is a budget. Consider the same website but made in 3 hours, 3 weeks, or 3 years. The methodology and tradeoffs with which the engineer approaches each of those projects are very different and need to be known up front.

Part of the goal of estimation should be to create a budget for various key areas of the project, not guess how long it will take to fix 5 critical system bugs.


> we have to learn to work without estimations.

That's all well and good in a world that's pure software (and that does happen in those environments). It's not realistic in a world with lead times on physical items that are at least 1 year. For example, if you want to set up a high volume manufacturing line for a new product that takes a lot of planning in advance. You need to spend 10s to 100s of millions of dollars on a contract and schedule with the manufacturer and you need to be able to have SOME indication of where in that process firmware might be released (or more realistically - given the timescale, what scope of software is possible).


I have been told that "as long as the firmware update on the device works, the rest of the firmware doesn't have to completely work until devices start getting shipped and installed." Yes, people are playing very dangerous games out there. :(


>Nobody has yet had the balls to state the obvious: we have to learn to work without estimations.

Age old conundrum: "Who will bell the cat?"


I've been advocating for 3-point estimation for quite a while. [1]

Unfortunately it doesn't happen very often (mainly because tooling support for this has been missing in popular trackers such as Jira).

In situations where I was the decision-maker on the methodology I've had excellent success with it (using a ~90%-95% confidence interval was typically right on the mark).

[1]: https://en.wikipedia.org/wiki/Three-point_estimation
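The classic three-point (PERT) formulas are simple enough to sketch; this is the standard beta-distribution approximation from the linked article, not anything tracker-specific, and the example numbers are made up:

```python
def pert(optimistic, most_likely, pessimistic):
    """Classic PERT point estimate and standard deviation.

    mean  = (a + 4m + b) / 6   -- weighted toward the most likely value
    sigma = (b - a) / 6        -- spread implied by the best/worst range
    """
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    sigma = (pessimistic - optimistic) / 6
    return mean, sigma

# Example task: best case 2 days, most likely 4, worst case 12.
mean, sigma = pert(2, 4, 12)
# A rough ~95% upper bound is mean + 2*sigma.
print(f"estimate: {mean:.1f}d, sigma: {sigma:.2f}d, ~95% bound: {mean + 2 * sigma:.1f}d")
```

Per-task sigmas can then be combined in quadrature across a project to get the confidence interval on the total, which is where the ~90%-95% interval mentioned above comes from.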


I think that's a pretty smart model - best/worst/most likely. One thing I found lacking in kabuki agile theatre was that we were ABSOLUTELY NOT supposed to "get into the weeds" in meetings where we talked estimates.

But how does one uncover the best/worst/most likely without doing so?

You can spend 20min writing the most beautiful Acceptance Criteria but if you don't get a bit into detail on what the user actually wants it's all for naught. It could be way simpler (hey what if I just did X) or way more complex (oh well now that you mention it..), etc.

Typically it was engineering manager & engineers sitting in a room discussing requirements with each other. What could go wrong obviously? lol.


Thing is, you can always put in "never" as worst case.


I'm doing something similar I guess. It's more of a two-point plus uncertainty/confidence estimation. I'll give a range (2-3 days say) and I'll estimate how uncertain (or how confident) I am in that range. For larger tasks I'll break it down into smaller subtasks and estimate individually.

For example if I need to use some new library that I've never used before, my estimate for that will be quite uncertain. Perhaps the library has some weird quirks it takes time to figure out. If I have to do something I've done often, I'm quite confident in my estimate.

Effectively similar, except I don't like to estimate unknowns so I don't give an explicit worst-case time.


This is good, but I think the "most likely estimate" is going to be misleading a lot of the time.

I don't have numbers to back it up, but I have a strong sense that the actual time to completion tends to be either at the quick end or the slow end of the estimation range, and only rarely somewhere in the middle. A bactrian, not a dromedary. If we hit the problems we anticipated we might, then the whole thing can take ages, but if we don't hit them it's pretty quick.

Best and worst case are going to fairly predictable in advance (the latter improving a lot with experience) so I'd stick with just those personally.
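The bimodal point is worth making concrete: with a two-humped distribution, the "most likely" single number sits near one hump, while the expected value can land in the valley between them, where real outcomes almost never fall. A toy mixture (all numbers invented):

```python
# Two-mode outcome: 60% chance the anticipated problems don't appear (quick),
# 40% chance they do (slow).
p_quick, quick_days = 0.6, 3
p_slow, slow_days = 0.4, 15

expected = p_quick * quick_days + p_slow * slow_days
print(f"expected: {expected:.1f} days")

# The expected value sits between the two humps, close to neither mode:
# actual outcomes cluster near 3 days or near 15, so quoting a single
# "most likely" number misleads either way.
```

Reporting the pair (quick case, slow case) with the probability of each carries strictly more information than either the mode or the mean alone.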


I've been writing code for 15 years and I still have no idea how to do estimations. All my estimations are off by 2-200 times; basically, they're useless. I write code until I'm happy with it. Sometimes I need to rewrite code 5 times before I'm satisfied. Sometimes I'm in an extremely bad mood and can't do anything creative at all, so even expecting me to work an hour tomorrow is not reliable. Or maybe I'll be in perfect mood and conditions and will spend the weekend writing perfect code for 30 hours.

I envy people who can work on a schedule. I'm so terrible at that.


> All my estimations are off by 2-200 times. Basically they’re useless.

I know, hyperboles and all, but if your initial estimate for a task is 1 week and you end up requiring 4 years to deliver it, then something went terribly wrong, and asking you for estimates ain't it.

Even story points scope tasks with granularities that range different orders of magnitude for this reason. One day, one week, epic/spike. It's ok if one day means 4 days, or one week means two. If one day turns to 3 years then the developer was completely clueless and unfit to continue working on the project.


In one of the Agile training courses I took we were taught the concept of "horizon of predictability". The base one is two weeks.

Within two weeks you more or less can estimate accurately. Beyond the two weeks, the estimates become pretty much useless. "Three months" is not an estimate. It just can't be one in good faith.


I love those training courses and their narrow view of the world. I can be accurate with 3-month and 6-month projects.

This is because long-term projects give me the space to allocate specific kinds of mental energy and attention efficiently and have steady progress.

Short-term projects, I never know if I have the right kind of energy or attention to move forward. I often end up blocked by my brain, making my two week estimates unreliable.


This is a simple project, it shouldn't take nearly that long. Let's set the deadline a little earlier and leave the rest as buffer.

This project is ahead of schedule, so you're not going to need the buffer, we'll reallocate resources for that time.

The project is on schedule, let's move up the deadline a little just so we have a bit of buffer at the end.

This project is slightly behind schedule, let's schedule a daily hour long meeting to go over everything to get things back on schedule.

The project is further behind schedule, how about you redo a major part of the project to try a different approach that will hopefully be sufficiently faster that you'll not only recover the cost of the reimplementation but also get back onto the original schedule.

The project has fallen even further behind schedule, and other things are starting to pile up, we need to pull some resources away to deal with them.

The project is ridiculously far behind schedule, people are losing faith in your ability to deliver, you need to get back on schedule before they'll approve more resources for this project.

The project has fallen too far behind schedule, we're pulling the plug. I've scheduled a lengthy post-mortem meeting so we have plenty of time to go over how you failed to bring this project to completion so you will learn from your mistakes.


I know a plan is good when my bottom-up estimates (padded with a buffer for unforeseen contingencies) align with my top-down gut instinct of how long a project of a certain complexity "should take".

As an experienced professional, one often compares with past projects and looks at the duration of a similar one - not necessarily regarding content but complexity.

(I also conduct research on methodology, e.g. https://arxiv.org/abs/2201.07725 / https://dl.acm.org/doi/abs/10.1007/978-3-031-08473-7_48 .)


I have written quite a bit about this in my blogpost https://www.fabianzeindl.com/posts/the-codequality-pyramid

Also check out this study which links software quality to deviations in estimation: https://dl.acm.org/doi/10.1145/3524843.3528091


Start from an empty file and write to an imaginary API or pseudocode what you actually want to do and coding is easy and fast!

I think the high level imaginary API/pseudocode of any system is straightforward and fast when you start from scratch. (Because you're not focused on detail and just intent)

But when you start from an existing codebase, existing infrastructure, existing APIs, it slows to molasses. You've got to understand and have a mental model of ALL OF THAT before you can complete your intent.

I think this is why software takes longer than expected. Your mental model has to be accurate, and there is always more to learn.

How do you solve the problem?

I'm an advocate of, and trying to design, what I call "commutative programming". That's another way of saying that the behaviour is the product of every statement about the desired behaviour, not an explicit instruction of what to do next, which is what modern programming is, and which is slow and tedious and doesn't compose. In this dream, if you want to change behaviour, you just add statements. Programming is additive! No editing source code, no trying to find out where to insert your lines of code or tweak existing lines. Let the computer tell you about conflicting requirements or missing criteria.

We need query engines and cost based optimisers for behaviour.

Imagine taking lots of files that are simple "writing to an imaginary API" - that's your intent specification. You just need to merge them together and your system is finished.

I started work on a commutative GUI https://github.com/samsquire/additive-guis but I'm also thinking from time to time about commutative code where we define refinements to desired behaviour and the computer behaviour query engine or rule engine generates the code or configuration that exhibits this behaviour. (A bit like Kubernetes for behaviour.)

In the future: we don't get LLMs to code for us, but we have highly expressive semantic vector spaces that map to architectural relationships and meaning of behaviour, and changing the architecture of a system to support new behaviour is a matter of changing the serialisation of semantic vectors via the prompt.


> way of saying that the behaviour is the product of every statement about the behaviour that is desired, not an explicit instruction of what to do next

I believe you'll enjoy reading "Notes on the Synthesis of Form" by Christopher Alexander [1]. There are many ideas in it, but at the later chapters the author proposes a system for solving design problems as a large graph of interacting requirements, where smaller sub-graphs are (hopefully reusable) components solving a smaller set of requirements.

Sounds similar to this idea of yours, just with a key difference in that the author sees it as a two-pass solution: first you "carve" out your solution by establishing a context boundary, and then you "fill" it with an implementation - such that the solution is the minimal implementation that fits the context, and not anymore.

After reading the book, I have also adopted a similar approach (but with less formalism) to gather requirements for projects in the past.

I have thought about developing the ideas of the book further, if you find this subject interesting we can discuss (I can ping your profile's email). I'm certainly interested in hearing more of your own ideas, as it seems to intersect.

[1] Same author of "A Pattern Language", that inspired the design patterns concept in software


Thanks a lot for your comment!

I once read "The Timeless Way of Building" by Christopher Alexander but not "A Pattern Language"

I actually owned that book "Notes on the Synthesis of Form" but I never read it, I gave all my books away. Maybe I'll have to get it after I finish reading "Database System Concepts" which I am slow to read.

I would like to hear YOUR ideas too! Send me an email, I hope you shall tell me your ideas and thoughts and we can bounce off each other. Please (and anybody else reading) do ping me - I love reading other people's thoughts and encouraging others.

The idea of using graphs to solve problems and breaking them into re-useable patterns resonates with me, I didn't realise that particular book had that in it. One of my programming projects was using the A* algorithm to do code generation in Python ( https://replit.com/@Chronological/SlidingPuzzle3 ) based on generating graphs of potential instructions.

> Sounds similar to this idea of yours, just with a key difference in that the author sees it as a two-pass solution: first you "carve" out your solution by establishing a context boundary, and then do you "fill" it with an implementation - such that the solution is the minimal implementation that fits the context, and not anymore.

Thanks for sharing that. It reminds me of Java interfaces but more powerful.


Interesting perspective. It reminds me of Eve [1], which was all the rage over here a few years ago.

[1] https://witheve.com/


We (or at least I) may base our estimates on the happy-path.

That is, I can see how to solve a problem in the common case. But as I code, I encounter the myriad of error cases, exceptions to the common problems, and especially the interactions of all these. Then add in performance considerations and the complexity can multiply quickly. The long-tail of those errors and exceptions can be very time-consuming to solve, so the mean is likely to be way off.

It's obviously much easier to estimate when you're experienced with a problem space. This should mean that you know how to solve a particular problem as well as the likely exceptions (non happy-path) you'll encounter.

If you're experienced with the domain, you can fit the relevant partial solutions in your head and more accurately judge what could go wrong with your code. But more generally I think this is similar to solving problems in other domains as well, such as architecture, building a car, or anything with many moving parts. These domains do not usually have loosely coupled sub systems.


Cars and buildings are probably more loosely coupled than most HN readers imagine. Up through the engine in the 1995 Mustang (I think that was the cutoff), the Ford small block engine line would all bolt into Mustangs all the way back to 1965. I've got a 1989 block, a modern 5-speed transmission with overdrive, hydraulic clutch, disk brakes, dual circuit braking, limited slip differential, and a semi-modern radio in my 1965. All of those were either direct bolt-on or straightforward upgrades with minimal field engineering needed. Many of the VW and Audi engines also have common engine/trans mounts and interchange across years and brands. You can often bolt heavier duty suspension or brake packages onto base model cars/trucks. "Parts bin engineering" is a phrase to search on to learn more. It's done aftermarket, but it's also done by the manufacturers.

There's a similar story in the mechanical engineering for buildings. Sure, some architectural choices will make it harder or easier, but most buildings can be retrofitted system-by-system over the decades a building is in service.

Part of that is to allow configuration pre-sale, but a lot of it is just "it's too damn complex if everything affects everything else", which is not that different from software.


This seems pretty reasonable to me. Maybe some of the details are wrong, I don't know.

But my big question from this article is... why isn't this a well known aspect of project planning, scheduling, and software engineering? It's not even specific to software really - I mean it's possible that software tasks have higher variance than others but I'd be surprised if task variance is irrelevant for scheduling non-software projects.

Is it really the case that no one has applied mathematical modelling to the problem of project planning, scheduling and estimation before? Do the ideas in this blog post represent the core of a new avenue of research? Seems unlikely. So how come this, or something like it, wasn't taught as a basic and critical part of project planning, in the same class that taught me the term "Gantt chart"?


Related project of mine is https://uncertain.io/ -- I have no statistics background but this is a caveman chart drawing tool that I thought made sense


Has anyone tried the Shape Up method by Basecamp? Their concept of “fixed time, variable scope” seems like it might cut the Gordian Knot of software estimation challenges.

Their book is available for free online; here's a link to the chapter "Estimates don’t show uncertainty": https://basecamp.com/shapeup/3.4-chapter-13#estimates-dont-s...


> "A reasonable model for the “blowup factor” (actual time divided by estimated time) would be something like a log-normal distribution.

Research has shown a power-law distribution to be a better fit.

See "The Empirical Reality of IT Project Cost Overruns: Discovering a Power-Law Distribution" https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4204819


Software projects take longer than expected because every single time it turns out what actually gets built is nothing like what was originally planned.

It’s as simple as that.


I wonder why people are so focused on probability distributions that look like one peak sided by two tails.

There's another way to look at it.

For project estimations I've made people give up on the whole fiction of estimates by simply confronting them with the fact that maybe it's going to be very easy (eg, there's an API for it already, or there's a nice library), or maybe it'll be quite hard (eg, it may require writing a sqlite extension, or using multipath TCP, or something else you haven't done before).

That's a probability cost landscape with two peaks! And the number you get by interpolating between the peaks will be the most wrong.

Makes some people's heads explode, because they want to slot a single number into a field, and you're just not giving it.

Some others will see the light and leave you to your explorations, after which you'll sync with them on your findings and where to go from there.


It's my observation that the primary obstacle to software completion is specification and feature drift, not estimation.

The reason VB6 and Delphi for Windows were such great tools isn't because it eliminated the need for programmers, nor made programming that much faster... it's because those systems were comprehensible to domain experts, who could rig together something that actually worked, albeit in an unpolished and buggy state, skipping a cycle or two of prototype, requirements mismatch, adjustment, etc.

This gave the programmers an executable specification they could refactor and bug fix and polish up a bit. (And document.... you should always write good documentation)

Backstory:

My first major software project was writing an inspection system that utilized HP handheld computers, and I was able to crank out a prototype in about 2 months. The client didn't like the hardware (an HP box running basic with lots of "smart buttons"), and chose a different vendor to try again (after he did his own hardware testing and reviews)

My second major software project was the same system via the different vendor... and my estimate was about 2 months... it took 3 months (one week was away at Norand, the vendor of the new rugged handhelds, learning PL/N). There was a final month tweaking UI design, etc. and user testing. (Back in the days of MS-DOS, code in Turbo Pascal on the PC, and PL/N on the handheld) This included building a set of screen based editing libraries to allow forms based CRUD, customizing questions directly by the customer, the ability to flow through questions, etc. It was a greenfield project.

The client loved it, we met all the specs that were agreed upon, but it was totally unusable in practice. We made a deal to sell the system to 10 more locations, but would spend all the time required for me and the customer to work together and ruggedize/customize it for a better fit. That took a year. Everyone was happy, and it got integrated into some other products over time.

Then Windows and networks showed up, and the cost of handhelds fell like a stone. I moved into IT administration.


> I have two methods for estimating project size:

> (a) break things down into subprojects, estimate them, add it up

> (b) gut feeling estimate based on how nervous i feel about unexpected risks

> So far (b) is vastly more accurate for any project more than a few weeks

Here lies one of the main problems with “estimates”. They don’t capture a confidence interval. You can only give one number (and it’s usually story points, not real time). “That’s a five point story”. How confident are you in that assessment? How many unknowns are buried in there? So, a five point estimate that’s arrived at after careful consideration has the same weight as one given off the cuff with no investigation at all. Of course estimates go wildly wrong.
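
One lightweight way to attach the missing confidence information is a three-point (PERT-style) estimate instead of a single number; everything here is illustrative:

```python
# Sketch: report a range instead of a single number, using the classic
# three-point (PERT) formula. All story-point numbers are made up.

def pert(optimistic, likely, pessimistic):
    mean = (optimistic + 4 * likely + pessimistic) / 6
    stdev = (pessimistic - optimistic) / 6
    return mean, stdev

# Two "five point" stories that look identical as a single number:
careful = pert(4, 5, 8)    # investigated: few unknowns, narrow range
offhand = pert(1, 5, 12)   # off the cuff: many unknowns, wide range

print("careful:", careful)   # similar point estimate...
print("offhand:", offhand)   # ...but a much larger spread
```

Both come out near "five points", but the spread is several times wider for the off-the-cuff estimate - which is the information a single number throws away.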


Estimates != 100% project time spent.

I'd say the main issue is a misunderstanding, between business and tech people, of the concept we are planning. Those groups think so differently. We use a Planning Poker tool - https://scrumhub.it/ - for each user story, but still there are projects with timelines strongly extended compared to initial estimates. But on the other hand, mostly it happens when the business adds new features or changes requirements, and the wheel comes full circle; we do a retro and it is what it is :)


It's not only software. Ask masons how long it will take to perform some significant multiday or multiweek task. They will complete it with as much delay and variance as a comparable software project.


Keeping software engineering down to at most 60 lines per function goes a very long way toward knocking down the fluctuations in project management.

The largest I have done is a 120,000-line project that came within two months of its original deadline; of course, you need some block time just to do the planning and architecture, with existing manpower.

Most fluctuations come from coding the unit and integration tests and making it "work".

This is what I have been doing to nail project planning to a promotional-level degree.


I had one project where I estimated and gave a "project price" based on that. Midway through (right after I wired up the payment provider) the client found out they were not approved to use that provider. So now I had to delete a week's worth of code and start over.

After the change I sent a revised bill, and the client ghosted me.

Specs change, but people don't understand. If you have a wood-framed house and midway you decide it should be metal framing, it changes the price.


This is great, and reminds me of estimating random walks and stochastic processes.

Basically, the time estimates for uncorrelated steps can be added together, but if there's correlation then the times should be multiplied. Because a change in one step can require a rewrite of one or more of the other steps.

Which means that a 3 step process with average step time t can take t * 3! (factorial) units of time instead of t * 3.

Also it might have been exponential instead of factorial, I wish I could remember.
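
A toy model of that coupling effect (my assumption, not the commenter's exact math): if finishing step k forces a revisit of all k-1 earlier steps, total work grows quadratically rather than linearly:

```python
# Toy model: n steps of unit cost t each.
# Uncoupled: total = n * t.
# Fully coupled: finishing step k forces a revisit of all k-1 earlier
# steps, so total = t * (1 + 2 + ... + n) = t * n*(n+1)/2.

def uncoupled_cost(n, t=1.0):
    return n * t

def coupled_cost(n, t=1.0):
    return t * sum(range(1, n + 1))

for n in (3, 5, 10):
    print(n, uncoupled_cost(n), coupled_cost(n))
```

Amusingly, for n=3 the coupled cost equals t * 3! exactly, but in this particular toy model the general growth is n(n+1)/2, not factorial or exponential - those would need each revisit to itself trigger further revisits.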


    Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.[1]
The above universal constant was not taken into account by the article.

(I mean, edge cases that are impossible to complete exist and are annoying, but mathematically they are a kludge. Hofstadter already explained why it takes so long. ;)

[1]: https://en.wikipedia.org/wiki/Hofstadter%27s_law


One of my favorite techniques is to estimate the average and, critically, "the worst case if everything that can go wrong does go wrong," and plan on the range. It's an honest approach, albeit more difficult to plan around.


Can we train a DL model to predict the duration/outcome of software projects?


I think the truth is much more simple:

  - Did you ever see any project finished ahead of time?
  - How many projects did you see where scope was shrinking during execution vs. expanding?


imposing uncertainty on task estimates is a natural way to push back on management pressure to low-ball estimates

if everything has a potential 2x blowup, you monte carlo all the tasks to produce a distribution of finish dates, not a single 100% delivery date

institutions naturally push risk away from political power, so mostly this uncertainty is applied to line workers by managers. but senior managers have an incentive to push uncertainty to middle managers, not to line workers. uncertainty is a management trick to do this.
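
The Monte Carlo step described above can be sketched like this; the task estimates and the log-normal blowup parameters are assumptions for illustration, not from the post:

```python
import random

random.seed(0)
estimates = [3, 5, 2, 8, 1]  # days per task (illustrative)

def simulate_project():
    # each task's actual time = estimate * log-normal blowup;
    # sigma=0.6 gives a median blowup of 1x with occasional 2-3x tasks
    return sum(e * random.lognormvariate(0, 0.6) for e in estimates)

totals = sorted(simulate_project() for _ in range(10_000))
p50 = totals[len(totals) // 2]
p90 = totals[int(len(totals) * 0.9)]

print(f"naive sum of estimates: {sum(estimates)} days")
print(f"median finish: {p50:.1f} days, 90th percentile: {p90:.1f} days")
```

Reporting the p50 and p90 instead of a single date is exactly the "distribution of finish dates, not a single 100% delivery date" being described.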


Because managers already "know" how long it will take, based on how little budget they have, and they get very upset on anybody who contradicts them.


> Decent fit, in my opinion!

Why wouldn't running an Anderson–Darling test answer that question?


My favorite paper on this matter is: https://www.researchgate.net/publication/247925262_Large_Lim...

A crude tl;dr: writing software is solving a math problem, and estimating how long it will take to solve a math problem has an infinite error bar.


this blog made me realize that i don't think i'll ever pursue deep knowledge in stats. i find it boring, personally. thankful for the folks who find it interesting.


Are there simple tools around to help reaching better estimates?


Because most leaders/architects are bad.


Conversely: “chicken nugget” estimation.

Traditional agile suggests “sizing each task” (work estimated to be less than one sprint).

I’ve found that simply “counting each task” is more effective... instead of trying to estimate the difference in sizes, it's easier and more accurate to try to divide the work into similarly sized chunks. There may be "the usual" variation around delivery pace, but at the end it all averages out. Usually 5-10 chicken-nuggets-per-sprint (or 10-20 or whatever your pace is) gives you the "fine grained" velocity or pace that is useful for getting rough estimates in the medium-term.

Extraordinarily large "chicken nuggets" become clearly visible because they "take too long" and that's a conversation with the task owner about trying to keep sizing down, or a clue to break them down or add more people (and consequently- more tasks/stories).

For overall project size estimation, you can/should have more S/M/L/XL concepts, with effectively back-testing of estimated size to task breakdown count, or be able to state "in the last quarter, this team delivered 2XL, 4L, 3M, 8S".

Who knows the true size differential between the 17 delivered (well, maybe if you do further breakdown, and simple task counts you can...), but it's a starting point.

Team size may change, focus may vary, vacation varies, amount of non-productive (maintenance, planning, migration, compliance) work may change, but at least it starts giving you a starting point.

Finally, recognize that variance should narrow over time. Driving from SF to NY may have an initial estimate of "1234", but if you get to Iowa and you're at 800, then you have a much better estimate that the remainder will be another ~800.

Your variance should be smaller because you're estimating a smaller sized item (only half the work), and you're estimating (nay- duplicating!) actual observed delivery pace.
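
The SF-to-NY re-estimate can be written down directly; the proportional extrapolation rule here is my reading of what's meant:

```python
# The SF-to-NY numbers from above; the proportional extrapolation
# rule below is an assumption about the intended re-estimate.
initial_estimate = 1234
fraction_done = 0.5    # reached Iowa: roughly halfway
spent_so_far = 800

revised_total = spent_so_far / fraction_done
remaining = revised_total - spent_so_far

print(f"initial: {initial_estimate}, "
      f"revised total: {revised_total:.0f}, remaining: {remaining:.0f}")
```

The revised estimate is built from observed delivery pace rather than the original guess, which is why its variance is smaller.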

TL-DR: don't size stories, do size "epics/features", consistently track delivery, backtest based on previous data, refine estimates as you get closer, be able to estimate when you're "past" the 50% mark


This kind of cheerleading is a big hit with the people who consume estimates... not so much with the slobs stuck producing them.


The slobs stuck producing them (aka: myself) then break down a project into:

Project => Area(epic/feature) @ T-Shirt Size => ...you're done...

If you can backtest/backport simply the COUNT of stories associated with enough epic/feature sizes, then you can forward-predict an arbitrary new "@ SIZE" to determine an approximate story-count.

Inputs:

    XL @ [ 25 stories, 33 stories, 52 stories, ...etc... ] => Mean/Median/StdDev
    L @ [ 20 stories, 15 stories, 18 stories, ...etc... ] => Mean/Median/StdDev
    ...etc...
...basically "the slobs producing estimates" are on the hook _MOSTLY_ and _ONLY_ for accurately accounting/tracking/submitting _CURRENT_ work (ie: put your stories in the sprint/board/backlog). The only other responsibility is being _CONSISTENT_ on submitting and tracking mostly "similar sized items" instead of "wildly varying sized items of indeterminate complexity".

If there are "large and wild" areas, they should be bumped up and "promoted" to an epic/feature rather than tracked as a sub-task on an existing epic/feature (talk to your manager/product-owner ;-).

It's relatively straightforward to maintain that "story hierarchy" of "Project => Feature/Epic => Story/Task". It's relatively straightforward to estimate Feature/Epic sizes (S/M/L/XL, with extra attention or avoidance of L/XL, strictly because there tends to be more variation on those sizes).

With relatively straightforward math tracking backwards four quarters (how many stories in total were closed by this team?), breaking down a "new project" or "new work" becomes more an exercise in "is this big or small?" rather than "what precise date or size is this particular piece of work."

For a high confidence interval, commit to _FEWER_ Projects/Features than you've done before (and you have StdDev to guide you on the probable upper and lower bounds); other than that, just prioritize at the Project/Feature level to get the most important or most valuable work scheduled ASAP.
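
A minimal sketch of the backtesting math described above, using the example story counts from this thread (anything beyond those counts is assumed):

```python
import statistics

# Historical story counts per closed epic, keyed by t-shirt size.
# XL/L counts are the example numbers from the comment above.
history = {
    "XL": [25, 33, 52],
    "L": [20, 15, 18],
}

for size, counts in history.items():
    mean = statistics.fmean(counts)
    sd = statistics.stdev(counts)
    print(f"{size}: mean={mean:.1f} stories, stdev={sd:.1f}")

# Forecast a new epic sized "L" as mean +/- one stdev stories:
l_mean = statistics.fmean(history["L"])
l_sd = statistics.stdev(history["L"])
print(f"new L epic: expect ~{l_mean:.0f} stories "
      f"({l_mean - l_sd:.0f}-{l_mean + l_sd:.0f})")
```

Note how the XL bucket's stdev dwarfs the L bucket's, which is the quantitative version of "pay extra attention to (or avoid) L/XL".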



