What do you mean ‘we need more time’? Project schedule estimation in software (dropbox.com)
238 points by hepha1979 on Feb 12, 2016 | hide | past | favorite | 147 comments


Almost at the end: "Yes, all this sounds like a lot of work". Exactly. And it will most probably still turn out to be wrong, because you will forget to ask some questions, and there will be things that you simply cannot foresee that will influence the time.

Turns out it's pointless to even try to do this. So why waste valuable time on all this, if you know it's not the truth anyway? The answer is to accept that you will learn things as you go along, and that things take as long as they take, and to deal with that instead. Marketing wants a product by Christmas. Fine, we can do that. They just cannot say what needs to be in that product by Christmas. They can have whatever the last production-quality release is at that stage. Instead of taking many weeks or months to try to answer all these estimation questions, we start by tackling the high-risk items first. We learn from them, readjust, and as we get closer to the deadline, we keep communicating to all the other stakeholders what we think will be delivered on that date. It's a narrowing cone of uncertainty. One sprint before the deadline we'll be pretty accurate in predicting what it will be. Two or three sprints before, accurate enough to be able to do marketing. At the start, the best we can predict is simply whether we feel we can have something useful at the deadline.


"So why waste valuable time to do all this, if you know it's not the truth anyway ? The answer is to accept that you will learn things as you go along, and that things take as long as they take, and to rather deal with that."

This may be applicable to a company developing new products internally, but for projects delivered to third parties, delivery dates are part of contractual obligations.

Estimates are hard, but making them is not like predicting the future with a crystal ball. Good developers generally get better as they gain experience in the domain, and in estimating itself.

Good project managers generally get better at spotting murky requirements that might cause problems down the line, and try to clarify them early with the customer.

It's not a perfect science either, but it's a necessity in many real-world scenarios.


Contractually obligating pi to be 3, the Earth to be flat, or software features and ship dates to be specified, doesn't make them so.

All you've done is restate the problem as an error on the part of the negotiating process. Not a problem to be dumped on developers.


When you buy something from Amazon Prime and it gets there three weeks later, I hope you are as amused by the "negotiating process."

(Yes, I know, the product you build is different than manufacturing, different than construction, different than paperwork, and every other deliverable.)


Commoditised manufacture, inventory management, and shipping are pretty much precisely the opposite of software development.

Or: your entirely counterfactual counterexample rather neatly proves my point.


Software development is commoditised. Or at least a lot of it is. Static web sites, simple mobile apps, etc. Some individuals or organizations build them over and over with regularity.


Don't put hard commitments into the contracts then. Why slave yourself to something that you aren't sure you can deliver?

Good developers should ideally only ever do the same thing once, which means that there is very limited utility in trying to improve estimates for a task that you should never perform again.


>Don't put hard commitments into the contracts then. Why slave yourself to something that you aren't sure you can deliver?

Because you know full well that the contractual time limits will not, in fact, ever be relevant, as you've added some neat tricks so that as soon as the spec changes you get to adjust them (and the spec always changes). One assumes, therefore, that the commitments are simply part of the bidding process as you try to craft a somewhat realistic but very attractive-looking bid.


The secret of a consultant's work is not to craft software, it is to craft contracts.

Evidence: CGI never got burned by its fiasco delivering Healthcare.gov.


In general, this is a terrible practice.

"How much is it going to cost to build my house?"

"Er, I'd rather not commit to a price. We'll figure it out later."


Software development is very different from building physical structures.

"How much is it going to cost to build my house?"

"Give me the blueprints and tell me your finishes."

[Waits on client to provide specs...]


What's the difference?

"How much is it going to cost to build this application?"

"Give me your specifications for the app."

[Waits on client to provide specs...]


I'd guess the usage of [Waits on client to provide specs...] in the software scenario is a bit facetious.

Often clients can't produce specs sufficient to build an application. Some clients may produce 'specs' which are incomprehensible, making them impossible to estimate.

The crude part is that such projects may still go on, and worse, the client may believe that the application is actually being built to the specs.


Often clients can't produce specs sufficient to build a house/building/pipeline. That's why there is an entire category of firms that exist that are hired by clients to produce these documents.

Guy wants an apartment building, he hires an AE firm and tells them "I want an apartment building" and they say "well what kind of apartment building?" and then goes through the exact same process of defining scope, budget, and schedule that would be done in any other project management field. Lawyers get hired when someone wants to sue someone - "well what do you want to sue them for?" Sourcing managers get hired when someone wants to make something - "well what do we need to buy?"

The software engineering industry's refusal to acknowledge that it is not special is a constant source of confusion for anybody who has been doing this in another industry. Why is the software industry special? What makes software so nebulous that scopes cannot be defined and estimates made? The argument that "people don't understand the impact of requests" does not hold water - people don't understand that simply wanting more room in a particular part of their house can cause an entire redesign. That's why you, as the expert, have to explain the impact of their requests clearly rather than just agree to them.

The problem with estimates and scheduling in software engineering isn't a result of the work; it is a result of failure by project management.


You can ask an architect or a builder how much it is going to cost to build your house. They will ask for blueprints and finishes. If the person asking doesn't have specs, then the builder or architect will often offer to help them create them... for a price. Most people who want software developed want to know how much it will cost without having to invest time or money in first developing the specs. The issue is that people who want software developed either don't know to get blueprints or don't want to pay for them; then they complain when things go off the rails, whether on cost, delivery date, or quality... They fail at managing risks themselves (by not getting specs first, at a minimum).


"The problem with estimates and scheduling in software engineering isn't a result of work, it is a result of failure by the project management."

The problem with estimates in software engineering is that few want to pay for them. (Which hinders the formation of, and demand for, the category of firms that produce specs.)


That is not the only alternative.

"$300/hr + expenses; hours and expenses determined by necessity to complete work; work as described represents at least 100 hours effort and $x in expenses"


> as they gain experience in the domain

Because "gain experience" means that there are no more "unknown unknowns" for them, anymore.

But, unfortunately that is a luxury you can't afford for too long. Reality has the nasty habit of changing, shit happens and good developers need to get replaced as they burn out and run away from development into more sane activities.


But there's a difference between "production ready" and "customer ready" which Agile practitioners sometimes seem to forget.

In many businesses, software is not the core business, you're not building software for yourself, you're building it for other stakeholders. It's not acceptable to say "Well it only does 20% of what you wanted".

What happens when things start to slip, so that in fact you won't have something the business thinks it can present at Christmas? "Don't worry, we'll just keep working on it until it's ready" after having spent six months on it isn't good enough.

It's not a bad thing to do a lot of estimating up front so the business can make an informed decision about whether it really wants to commit to the effort, rather than finding out during all the sprints about the growing effort required to get it to where the customer or internal stakeholder is happy.


> It's not a bad thing to do a lot of estimating up front so the business can make an informed decision about whether it really wants to commit to the effort...

You're assuming the estimate is worth much. Which is what much of the industry is doubtful about. To underscore the point, most estimates are single values (eight months), not confidence intervals.

A better answer would be, "There is a 50% chance that we get it done between six and seven months from now, assuming requirements don't change."

...but even that assumes that all along the way, the project stakeholders are keeping in mind how many (how do you quantify?) requirements were added and discovered.
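
To make the interval idea concrete, here is a minimal sketch (not from the thread; the task numbers are invented) of turning per-task three-point guesses into a confidence interval with a Monte Carlo run, in Python:

    import random

    # Hypothetical tasks: (optimistic, most likely, pessimistic) days each.
    tasks = [(2, 4, 10), (1, 2, 5), (5, 8, 20), (3, 5, 12)]

    def one_run():
        # Sample each task from a triangular distribution and sum the project.
        return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

    totals = sorted(one_run() for _ in range(10000))
    p25, p75 = totals[len(totals) // 4], totals[3 * len(totals) // 4]
    print(f"50% chance of landing between {p25:.0f} and {p75:.0f} days")

The useful output is the width of that interval, which is exactly the information a single-number estimate throws away.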


How it's said: "There is a 50% chance that we get it done between six and seven months from now."

How it's heard: "Wa wa wah, wa wa, we get it done between six and seven months from now."


You forgot the "assuming requirements don't change" part.

What's actually heard would be "Wa wa wah (up to) six months from now (guaranteed) wa wa."


Yeah, and they'll push for four months total at a meeting a month into the project, while simultaneously changing their mind on a half dozen requirements.


Ugh. I was on a project like this. I figured that it was mostly just that specific customer...


...Unfortunately, there's a lot of "specific customers" like this.


Except some of us make a living selling development of projects and solutions to customers that require pretty accurate (cost) estimates prior to even getting the contract.

Crazy world, I know...


Absolutely this; clients need to know their costs upfront.

Meanwhile, I've been on the other side, working with a shop that insisted they were agile, and so refused to give an upfront cost.

They ended up delivering a substandard product where key parts did not work well. Their answer was to tell us they would, of course, be happy to be paid for another 2-week sprint to fix those bugs...

As a customer it wasn't very satisfying.


I'm curious, did your contractors host regular demos, sprint retrospectives, etc.? If so, was it clear early on that the results were going to be substandard, or was it a surprise fairly late into the process?

I'm not going to debate whether they were doing "true Agile", but it seems like even without settling upfront costs it would still be possible to control risk by tracking value added vs. cost on a sprint-to-sprint basis. Hopefully it should be evident early on if the project isn't worth it[1].

Of course, this requires that the contractors be able to deliver value incrementally, which might not be possible in all projects. In which case, upfront cost estimates (and more thorough planning) will probably be necessary.

[1] In fact, this is exactly why Zed Shaw says he uses Scrum on risky projects (http://zedshaw.com/archive/the-c2i2-hypothesis/)


The problem isn't with the requirement for fixed-time, fixed-bid contracts. The problem is that agile methods and tools, and all the advantages they carry, are incompatible with those requirements: You can't start fast with minimal specs. You can't apply what's been learned along the way unless, miraculously, it costs less and takes less time. You can't make changes to suit changing business goals without a major negotiation on change orders. You can't access most of the benefits of agile inside that box.

You CAN, however, find contract developers who have mastered the art of faking agile methods and being buzzword compliant while delivering no feedback that prevents you from having bad ideas implemented in bad ways.

You will also find contractors that will lead you down the garden path. You could call them "half agile." They won't warn you that you are asking for something dumb, or unworkable. They will treat your mistakes as revenue enhancement opportunities and lay on the agile talk pretty thick.

This is why the "no true agile Scotsman" argument is so easy to make: That's not agile. You have the wrong people. You fail at agile. That's an easy case to prove but it isn't very helpful.

Real agile is hard to do, and doing it with compromised ingredients (low-bid contractors that want to minimize their effort and/or enhance their revenue, stuffed into a fixed-bid box) is agile poison.


I don't think a fixed-price quote would have given you a different outcome, though. The team produced a bad-quality product; fixed cost or agile won't change that.

Selecting the team to build the product on price is often a bad idea, and that's all fixed cost gives you above agile (the ability to select on price).


A fixed cost makes it easier to turn around and not pay (in whole or in part) if the product is really not up to the standard specified. It's harder to do that if the cost is not fixed but overrunning and the response is "well, we just need more time".

Also fwiw, the team wasn't selected on price.


In my experience it's easier and less risky doing it the agile way. You give them a 2-week sprint target. If they fail to hit that, then:

a. You don't pay them.

b. They're probably going to miss every other target.

If you paid the agile shop for everything they did no matter how crap, and withheld payment from the waterfall shop because of quality control, then that's got nothing to do with agile vs waterfall; it's got to do with your QC.


Not pay? Heh. For work performed? OK...

That is why my contracts require payment up front.


Sounds like they should have negotiated for a base rate with a bonus paid out on meeting QC targets. For heaven's sake, people, align your incentives!


Everyone, on both sides of this river, needs to grow up.


To be fair, they're talking about subpar, unacceptable work.


Or it could be solid, perfect work that meets the best possible interpretation of the contract, but the client failed to communicate what they actually wanted and perceive it as subpar and unacceptable.


The question is how this shop ended up getting the gig in the first place ;)


> ...customers that require pretty accurate (cost) estimates prior to even getting the contract.

If you require that accurate of an estimate, your business model cannot handle the risk of software development. Really, really wanting an accurate estimate does not make risk go away.


That's not exactly true, but it means holding the customer to exactly what they asked for, and having a formal "change request" process for any deviations. The track record of that model of development is not one of successful on-time delivery, but it does seem to make some types of customers more comfortable, even if it shouldn't.


Unfortunately, that doesn't change the fact that customers want estimates, and the more complex the project, the more they need them for budgeting and scheduling purposes.

In the end, you have a few options for estimating:

1. Estimate the most optimistic scenario, knowing you'll end up using change requests and extensions to deliver (one of the worst options, but if you're responding to a proposal that's what the opposition is assuredly going to be doing);

2. Propose an initial phase to nail down requirements and deliver a schedule using traditional estimation tools (an easier sell if you're an internal team, tough if you're a vendor);

3. Use experience with similar projects to estimate, and then refine your assumptions with the client, using change orders to account for unexpected problems (roughly accurate to the degree of imported ambiguity, but now you have to risk going to war to get your change orders signed off on);

4. Use a top-down estimation model such as function-point-based estimation to get in the ballpark, and then add buffer (and pray you aren't going to either lose money or blow the top off the client's budget levels -- or both);

5. Delphi the problem and get a bunch of developers and PMs to estimate the effort, then refine from there (useful for sanity checks, not useful as a real estimation process, yet one of the more common ways small shops build their numbers).

As you say, you can't make risk go away, you can only try to segregate it within your estimation. In my experience, clients really need stability in proposal numbers -- they're often willing to accept a significant embedded risk buffer in a proposal if it means they can limit unexpected costs. Conversely, you should also have a rough idea going into a project of what functionality can be cut and when so you can go/nogo those pieces when necessary.
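
For options 4 and 5, the underlying arithmetic is simple enough to sketch. Here is a hedged example of a classic three-point (PERT) roll-up in Python; the features and numbers are made up:

    # Hypothetical per-feature (optimistic, most likely, pessimistic) person-days,
    # e.g. collected from a Delphi round of developers and PMs.
    features = {
        "auth": (3, 5, 12),
        "reporting": (8, 13, 30),
        "admin_ui": (5, 8, 15),
    }

    def pert(o, m, p):
        # PERT expected value and standard deviation for one item.
        return (o + 4 * m + p) / 6, (p - o) / 6

    expected = sum(pert(*t)[0] for t in features.values())
    # Summing the sigmas is deliberately pessimistic: it assumes the risks correlate.
    spread = sum(pert(*t)[1] for t in features.values())
    print(f"expected ~{expected:.0f} person-days; quote ~{expected + 2 * spread:.0f} with risk buffer")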


> ...they're often willing to accept a significant embedded risk buffer in a proposal if it means they can limit unexpected costs.

So if they want 95% confidence in estimates, you need to pad the estimate a lot. It doesn't make risk go away. It just offloads risk onto the contractor.

That just means the contractor is assuming the risk of missing the 95% estimate. It also means the head of the contracting operation needs to be diligent about backing up the estimates of his employees. Or mitigating risk some other way (making sure each contract is a small fraction of the business, having enough cash to absorb risk, owning lots of other businesses, etc.).

Now, realistically, the risk is also absorbed by the developers in the form of unpaid overtime, stress, lost bonuses, and in other ways. That's just all the more reason for individual contributors to insist on honest estimates. Or find bosses that understand business models better and don't offload risk onto employees.

[] I understand that this is complicated and bosses don't typically mean to make employees miss bonuses and have terrible work/life balance. But to some degree it doesn't matter. Intentions don't make down payments on houses, make sure kids do their homework, or get us a full night of sleep.


Estimates can be fairly accurate if they're made for software that's a composition of previously iterated work.

If someone asks for a login system with X, Y, Z and you've done that 1000 times, you can be fairly accurate in stating what it will take to do it the 1001st time.

The issue is when a piece of software is custom; if there's no domain expertise available for the custom bit, then you cannot accurately estimate how long it will take, because the developers will be learning while doing.


And some developers bring this on themselves. They could use the boring conservative technology that they know inside and out from the last project, or they could use something that's new and hot this month, which they have never done or have much less experience with (but is exciting).

Double whammy if it's an unfamiliar domain and a cutting-edge tech stack.


And it takes longer, so more billable hours.


Customers don't need to know the cost, they need to know the price. The most important factor in determining that should be the value of the software to the customer, not the time it will take to produce it.


That gets into some interesting points:

1. Cost-plus pricing (or contracts). Here, the vendor (that's you) gets costs plus some value. Which means that the customer probably wants to know what goes into determining those costs. This in turn requires a close familiarity between both parties. This was more common, say, in the 1970s and earlier, and especially in government major-projects negotiation. But it's why negotiating contracts is itself a substantial activity.

If you're creating a one-off product, there isn't a market price. There's what you and a buyer agree to. Depending on circumstances, neither of you may be particularly free to walk away and go elsewhere.

2. Moen's Law of Bicycles: bad customers make for bad products. This is almost certainly a variant of Gresham's law, itself a generalisation of the point that information sufficient to distinguish product quality is itself difficult to come by. This manifests in multiple ways: crap quality cheap products flood markets, or flashy, easily-distinguished, but not materially significant features are added to products.

True quality, measurable in the use value of products, is often quite difficult to assess. This is tightly coupled to the Dunning-Kruger effect.

3. Map-terrain relation. A plan (or contract) is a map; it isn't the terrain itself. In many planning (and legal-dispute) instances, there's a confusion of the two. A contract which fails to encompass materially relevant information is at risk of being voided.

The upshot, again, is that contracts don't define software development schedules. Software development does.

It also means that you probably want to design the contract scope for developing complex products much as you do those products: iteratively, over time, with conscious trade-offs of features and costs.

Everything else is setting you up for a mismatch of model and frame to reality. Reality wins.


True, but difficult to sell, especially to repeat customers who can calculate the price/hr on previous work.


Let's say they figure out you only spent 100 hours to deliver them $50,000 of value. It's your job to persuade them that the guy on the internet who offers to sell them 100 hours of dev time for just $5,000 will only give them $5,000 of value (if that). If I have a choice between getting someone who delivers value at a rate of $500/hour versus someone who delivers value at $50/hour, and I need $50,000 worth of software, I should choose the more efficient developer, not the cheapest one.


For the kinds of projects my company tends to do, when we're getting into repeat business we're generally not competing with other contractors. Our clients know that the other contractors would have a ramp-up cost that we don't, and an unknown working relationship vs our good working relationship. Instead, they look for per-hour discounts and ways to 'cut costs' while delivering the same value. They also typically have their own development team who don't get paid nearly as much per-hour as my company charges, so they look to share more of the work with their internal team. Since we do product, services, and support, including training, it's hard to argue that the internal team can't or shouldn't do the work.

It gets complicated, but in the end we mostly come out alright. Our clients are well aware that taking on additional work reduces our likely development cost but increases overhead costs and raises project/schedule risks. It's a tradeoff they accept, and we work with them to ensure the project is successful.

My point is, in the real world you can't just charge by the value you provide rather than by the cost of providing that value. That's a simplistic point of view.


I do appreciate that the reality is that development contracts work that way. I've just spent a lot of time being frustrated at how often the business conversation around software assumes a basic hourly contract cost-plus basis.


I have no problem telling a customer that something along the same lines as a previous project will be more expensive today, because it was initially underestimated and I ate the additional cost. They realize they got a bargain the first time, and I don't have to eat the same costs to keep the customer. Of course, starting from the first project, I work very hard on building a trustful relationship to support such a claim.


I used to as well, until I started to refuse to do it. It truly is in the best interest of the customer to work this way, and I explain it to them. The ones that understand it and buy into this way of working are the ones I work with.


Sounds awesome. Unfortunately I don't work in my own company. Fortunately, I don't work with sales (much).


I'd love to know more.


It's better than this, actually. If they want a release by Christmas I can, with high certainty, tell them:

    Features will be in: A, B, C
    Features likely to make it in: D, E
    Features unlikely to make it in: F, G
    Nope: H, I, J
    Hell No: K, L, ..., ZZZ 
The problem I've always found is that, no matter what, you are going to be arguing with sales and management about E, F and L instead of them just working with A, B, and C.


Ask them to give you a guaranteed schedule of sales that you will hold them to. They'll say it's absurd, you can't predict that. Turns out it's the same with software dev.

If you find yourself arguing about this with management or sales, they're not trusting you to be the expert at what you do. Tell them that.


Actually it's remarkably similar. Sales has a pipeline of deals, and someone has an estimate of how much will be booked over the next quarter. This estimate is a key part of managing the sales team and is directly tied to their compensation.

Can you imagine if developers explicitly had a code quota to work to? We usually don't, for very good reasons, but salespeople certainly are held to delivery standards on work that isn't fully predictable, so you can kinda see how they might expect the same from software teams.

And the analogy continues to work. There is a pipeline of feature requests, some at the vague-idea stage and some nearly shipped. You can provide more accurate ship-date estimates for the nearly shipped ones. And a person with good judgment who knows the stakeholders can provide a better estimate than someone uninformed or unthoughtful whether we're talking about sales or software builds.

Further: sales has a bookings target, but also has the freedom to meet that target with various deals. Similarly, product teams will have launch targets, but need some freedom on what exact features are included because per-feature costs aren't predictable.


Have you ever done sales?

Sales quotas are extremely common.


I do the same, except they give me an updated priority list every month, and once the priorities for the month are set they cannot be changed. Misc. fixes are done by eating into the time for the low-priority features.

It removes me and my team from the losing proposition of agreeing to get something done before any other thing, as nothing of value can come from that discussion. And anyone who tries gets immediately redirected to the wiki page for the next month's feature list, which opens one week before the iteration starts, by which time we know which features will be complete and which will go back in the queue.


Because it's treated as creative work, not engineering. In reality, feature A can expand into A1, A2, A3. The biggest issue would be that the client paid for feature A, but not for A1, A2, A3.


"Hi. I need you to build me a house. It needs to have three bedrooms, two garages etc. How long will it take?"

"Well, we can definitely get you something by Christmas. It might be a bedroom, we may have just done the garages. Who knows? Can you write the cheque now please?"


I am not so sure there is a fundamental difference between what the article proposes and what you describe.

You start by saying this will not work, then in your description of what you do, there are many places where you estimate - or should I say guess? - what is feasible, and when.

The methods described in the article are analysis and extrapolation from experience, with an emphasis on identifying those areas that are most problematic due to some combination of uncertainty, importance and dependency, and giving them priority. You don't seem to be doing anything very different. Nowhere does the article say you have to schedule weeks or months on this.


Analogies like "painting a room" are pretty useless because they are very misleading. Especially when it is an innovative product, a better analogy would be: "painting a room on Mars".

With software development, there are a lot more "unknown attributes" that determine the main part of the effort estimation unless it is a repetitive task.

So, a critical/skeptical engineer would always drive the customer nuts by asking questions nobody can know and the conclusion will end somewhere like it is not possible to give a upfront estimation and stop the project. Or the effort to give the estimation approaches the effort to develop.


There are several different categories of software projects, as you say, the innovative ones might fit your definitions, but I'd argue that most don't.

The most common kind of software projects I've encountered are very straightforward, mostly just CRUD with an interface + business logic, and reporting, or relatively straightforward websites. Those are the "paint the room" type of projects.

There are challenges with those kinds of projects, but they should be reasonably easy to estimate, as long as the team is somewhat familiar with their development tools.

Other kinds of projects ARE more difficult because they have unique or innovative elements (probably the case for a lot of startups). Those might be the "paint a room on Mars" kind of projects.

As an employee, I've encountered the first kind 90% of the time. As a startup founder, I have encountered a lot more unknowns, but I've still found my project reasonable to estimate.


Before NASA sent their first drone to Mars they made a lot of predictions, ran a lot of tests, guessed at a lot of things that might go wrong, etc. Do some of their missions still fail? Sure. But the amount of success is much, much larger because they didn't just send a drone there and watch what happened.

So even if all you say is correct, it's still worth improving one's estimation skills.


Since Apollo, NASA has been suffering from too-large projects. It's a spiral: once a project grows beyond a certain size, failing would be politically catastrophic. Hence the project must have even more tests and simulations, the timetable and cost move to the right, and it gets even more risk-averse. The vicious cycle.

Instead, if there were small enough missions and failure were tolerated as an unfortunate part of the process, progress would be a lot faster, and new technologies could be attempted, etc.

This was the whole Faster Better Cheaper approach. Small teams of highly competent people, quick iteration, little bureaucracy...

Pathfinder/Sojourner was an FBC mission. It had a freaking webcam there. :) Opportunity/Spirit were somewhat bloated, and the latest Curiosity rover was already a relatively traditional megaproject. Naturally there has been science-instrument maturation etc. as well; it's not a one-sided story, but it is important to understand.

Look how many launchers SpaceX has developed and how much it has iterated, while NASA has worked on SLS. Sure, SpaceX has crashed and burned plenty of rockets and test vehicles on the way. It has changed directions dramatically. But looking as a whole, it has made large progress.

There is some kind of spectrum here. Of course if you launch deep space probes for 20 year missions, you test differently than for a launch test vehicle. But if you had many of those space probes as well, it wouldn't be so catastrophic if some of those failed.


> Before NASA sent their first drone to Mars they made a lot of predictions, ran a lot of tests, guessed at a lot of things that might go wrong, etc. Do some of their missions still fail? Sure. But the amount of success is much, much larger because they didn't just send a drone there and watch what happened.

How much effort went into estimating Opportunity's lifetime at 90 days? How useful has that estimate proven to be?


That is cherry picking. Did you try to comprehend the post you are replying to? Take into account all of the estimations, not just one failed estimation.


Sure, take into account all the estimations. How much value have they delivered? Show me how NASA has got more science done than if they had "just sent a drone there and watched what happened". Because I don't believe they have.


The funny thing is despite having the opposite opinion I understand so well how you see it. I wish I could show you my point of view better.


That was exactly how I felt. It is a matter of pay grade, I think.


Not a useless analogy; it gets the point across that good estimation requires breaking a task down into little steps, questioning the requirements, discovering constraints, and iterating.

The estimation process doesn't change much whether you're painting a room on Earth or on Mars. If there's more uncertainty when estimating steps, you add more buffer time, for instance. The goal of the exercise is to improve your estimation. Where the room is located should simply be factored into your estimation process.
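
As a toy illustration of that last point (the steps and multipliers are invented), the room's location just becomes a larger uncertainty factor on the affected steps, not a different process:

    # Hypothetical breakdown: (step, base hours, uncertainty multiplier).
    # Unfamiliar territory ("Mars") gets a bigger multiplier.
    steps = [
        ("mask and prep", 2.0, 1.1),
        ("test patch in the alcove", 1.0, 1.5),
        ("paint the walls", 6.0, 1.2),
        ("unknown surface chemistry", 3.0, 2.5),
    ]

    buffered = sum(hours * factor for _, hours, factor in steps)
    print(f"buffered estimate: {buffered:.1f} hours")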


http://qr.ae/ROlwZp - I've found this answer on Quora to be one of the best explanations of why software estimates are so off.


Digging down into the details is one of the best responses to absurdly unrealistic schedules.


Estimates are a tricky thing. Ultimately, I don't think they work in software. We should be looking for a different solution.

Think of it this way - a physician can prescribe a medication and tell you how long you'll be on it, but can she estimate how long it will take before your blood pressure falls within normal parameters? She can tell you what has happened in the past, but suppose you take the medication and your blood pressure remains stubbornly high. Has she failed to meet her targets?

A lawyer may be able to tell you, roughly, how long litigation tends to take, but can your lawyer estimate how long a case will take, what it will cost, what the outcome will be, and the date it will all be wrapped up? If that doesn't happen, did the lawyer fail to meet targets?

Well, in both cases, sort of, yeah. You didn't get the outcome you wanted. But these professions, run by the people who staff them, understand what can be predicted and what can't.

This blog entry uses the task of estimating painting a room as an example, and focuses on breaking the task into subsections. Ultimately, I think the entire approach is deeply flawed, and it all starts by thinking that software development is similar to something like painting a room, just a more complicated version of it.

In my opinion, it's very different. Of course, it's very different from prescribing medication or litigating as well - in fact, I'd say that it's generally considerably less predictable than even these things. I'd say it's closer to writing a novel, creating a film, or recording an album, except that software needs to work, and can be much more mundane at times as well. Ultimately, I don't think the analogies are useful, we need to realize that software is its own thing, and steer clear of too much reasoning by analogy or example.

Unfortunately, the anxiety caused by making estimates under conditions of uncertainty (and possibly duress) is a source of stress and unhappiness for developers, and I do think it's one of the things that drives people out of the field. I actually think this may also be one of the reasons people love creating apps as a hobby but hate doing it for money.


The thing that annoys me most about project estimation is when the micro-managing types don't even let you look at the problem space first; an instant answer is expected. Of course, the figure plucked out of the air could be 'x days', with 'x' being greater than the time one expects internally so that one's backside is covered, yet that is not really a true answer.

At the moment I am putting together a 'simple' feature on a website, where I have been able to explore the problem space first. For the $content there actually needs to be half a dozen date fields that were not in the original brief, I doubt I would have discovered this had I not rolled a little bit of code first and explored 'prior art' in the area.

I need to put together a semi-working prototype to discover such things - particularly when UX is involved. Yet, far too often, the opportunity to do actual preliminary dev work is denied by those that 'know best', who prefer a spec in a Word document (or something else equally 1990s) to a rough working prototype in code (with a back-of-envelope sketch).

My current feature-being-added will need to be refactored at some stage, once users have started using it and the problems/features overlooked in the original brief are catered for. It is not as if the users of my new feature will know that 'actually they wanted it to work this way' until they have something to see. So why we have these silly cast-in-stone processes I do not know.

I also must say that I find the metaphor of painting a room quite a poor one; I would not need to paint 'an alcove first' to get a handle on how long the whole room would take.


> don't let you even look at the problem space first

this is the core of the issue. when one has the time to break down all unknowns and discover them, estimates are quite spot on. but who wants to pay for the estimate?

so we are left trying to guess how much time a particular unknown is going to take to solve.

a good metaphor here is being asked how much time it will take to solve a puzzle magazine, cover to cover, just by looking at the title.


The point you're making is exemplified in this article which makes an analogy to a hike from San Francisco to Los Angeles:

http://www.michaelrwolfe.com/2013/10/19/50/

from

https://www.quora.com/Why-are-software-development-task-esti...


Actually, the metaphor is pretty good, except they missed a key change: in real life, if the "test painting" shows you need to "prime first", you now have no idea how long it will take without starting the estimation process all over again.

So, in real life, "paint the room" is "2 days or, if the test painting goes badly on day one, I'm not sure and won't know until the end of day 1". And people don't like those kinds of estimates.


If that's the case, you assume priming will be needed in your original estimate, then go do the test anyway, and at the end of day one you will look like a hero when you say "hey guys, I think we can get away with cutting priming from the plan; we just saved $abc from the budget and xyz days".


Something I took from several articles is that most managers don't really need an estimate - what they need is a commitment and/or a budget (and the estimate is an input to generate said commitment or budget).

https://www.mountaingoatsoftware.com/blog/separate-estimatin...

https://hbr.org/2014/12/your-agile-project-needs-a-budget-no...


As a new grad who has worked a couple different jobs now, some things I don't get about this:

- What do you do when you don't have a spec and when you ask the senior engineer you are working with questions, they don't respond or push back against having a spec or a plan longer than the very next feature?

- The author says that estimation is easier to do if you are doing something you have done before. What do you do if you are always working with a new framework or language?

- How do you break down the steps if you don't know what a project entails until you are finished?

- What do you do if asked to estimate how long something will take to debug? What if you just started on a project and the error message you are trying to debug doesn't make sense? My approach there is to try to understand the system I am working with. If the system doesn't have documentation and the existing people on the project don't have time to give a walk through of things, I figure that means source diving. However, I've gotten the feedback that I should avoid trying to "understand the universe" when debugging. What can I do to produce accurate estimates of how long it takes to find a bug in a system without spending the time to gain a mental model of the system?

- This exercise takes a lot of time. What do you do when asked for an estimate in a meeting rather than over email?

- What do you do if the thing you are building relies on getting an external API to work and that API is either undocumented or is documented in a foreign language? Assume that the google translation is not making sense and this is your first project and you don't have a budget to hire a professional translator? This sounds like something you would need to get working before you could actually give an estimate.

Why do experienced engineers ask for estimates anyway? Every time I give one I feel like I am lying and I try to warn people "I really don't know how to come up with estimates but I am guessing 3 hours". How do I deal with it when they are upset about it taking a week and a half?


These are all excellent questions.

> What do you do when you don't have a spec and when you ask the senior engineer you are working with questions, they don't respond or push back against having a spec or a plan longer than the very next feature?

Oh man, it drives me nuts to hear this. I'm a senior engineer, and you know what my biggest complaint is about many junior engineers? They don't ask enough questions! So to hear that you're asking questions and not getting a helpful response is beyond frustrating.

And in my view, a senior engineer who won't plan beyond the next feature is incompetent. I mean, I sometimes am forced to admit that I haven't figured something out yet, or in some cases haven't even thought about it yet, but if it's important, I make a point to figure it out. Maintaining architectural coherence of the product in the long term is, in my view, one of my most important jobs.

> The author says that estimation is easier to do if you are doing something you have done before. What do you do if you are always working with a new framework or language?

> How do you break down the steps if you don't know what a project entails until you are finished?

I hope you're not really _always_ working with a new framework or language, but yes, in software we're usually doing something new for every project, because it would be pointless to do the same thing over again.

You'll have to take anything I say about estimation with a grain of salt, because I've been in the business 35 years and I still suck at estimation. But your second question hints at the best answer I know, which is that you have to get well started on a project -- at least 1/4 of the way in -- before you can begin to give confident estimates of how long it will take.

What the OP is suggesting is to try to go at it breadth-first rather than depth-first: figure out what the top-level tasks are, then their immediate subtasks, then the level below that, etc., rather than just diving into the first thing until you get it done. I'm sure they have a point and I don't do enough of this. But the counterpoint is that doing this is a task in itself, and takes time -- to do it well takes a significant fraction of the time of the project. If the business needs are such that that makes sense, then do it.

> What do you do if asked to estimate how long something will take to debug?

You say "I don't know." If you've tracked down other bugs in the same code before, you can offer that in the past, bugs in that code have taken so many hours or days, but you can't promise this one won't be harder. If you have no experience with that code, though, stick to saying you don't know.

> I've gotten the feedback that I should avoid trying to "understand the universe" when debugging.

Ay ay ay. Find a new job.

The last thing I want is my team members making fixes without at least trying to fully understand the consequences. Sometimes we think we understand but it turns out that we didn't, and that's life, but not to even make the attempt is a sure path to code rot... making the code even harder to understand for the next person. I guess some people get inured to the idea that they'll never fully understand what's going on and they have to try to fix it anyway, but in my opinion, the best engineers always try hard to understand first.

> Why do experienced engineers ask for estimates anyway?

Well, in fairness, there are cases where we have multiple things to fix, and their relative priority depends partly on how much time they're likely to take. So getting some kind of order-of-magnitude estimate, if possible, can be helpful. But how experienced engineers can forget that there is always an element of guesswork in these is beyond me.

> How do I deal with it when they are upset about it taking a week and a half?

If they won't give you walkthroughs, and they won't answer questions, and they still get upset when things take time... yeah, get a new job.


For a lot of these, rather than trying to estimate the thing you're asked to estimate, you need to either estimate how long it'll take you to do the analysis you need in order to come up with an estimate, or you need to provide a time-limit on the task after which you'll provide a status update and have a discussion about the next step.

Managers don't really care about time, they care about budgets and risks. In your debugging example, the existence of the bug is a big unknown risk, and your manager needs to manage that. You obviously can't tell how long it'll take to fix the bug, but you can say "Give me an hour to debug it, and after that I should have a better idea of what the cause is and how long it'll take to fix. I might even have it fixed already by that point." That gives your manager a budget (one hour) and risk control (more information in a fixed time), and the option after a small investment to bring someone else in to assist.


It sounds like you are saying that when I take an hour to work on debugging something, I should spend 55 minutes taking notes on the system and where the problem might be and 5 minutes turning those notes into a coherent email. Is that a reasonable approach?

I certainly prefer to think of debugging as a process of learning about a system rather than trying things at random without making a mental map of where I am.


I'm saying that when asked for a debugging estimate, you should commit to spending no more than an hour (or other amount, depending on the problem) before stopping to report on your progress. How you spend that hour depends on the circumstances; if you could predict that then you probably already know enough to give a somewhat accurate estimate.

When you stop to report on your progress, you don't need to deliver detailed notes. It could be as simple as "yeah, fixed it" or "figured out the problem, I need three hours to refactor the code to resolve it" or "based on what I'm seeing, Bob would be better at fixing this; I can spend a few minutes to show him how to reproduce it."

"try things at random" is not debugging, so you're right about the process.


Usually when I debug things my method is sort of a mix of linear and binary search, where I start at one place, see if a value is what I expect it to be, then move along to a different part of the code to see if something is wrong there, reducing the search space and repeating as necessary.

Sometimes my method is instead to look at data output at a certain level of abstraction, like system calls (which is usually not helpful except for detecting a network hang or a path config whose directory doesn't exist) or database queries or something. That is usually more helpful if I already suspect I know where the problem is and need to know what model name or file name to `ag` for.

If (Hephaestus be praised) there is actually an automated test framework in place, I'll start by writing a failing test or copy-pasting an existing test and modifying it to fail, then (if it isn't JavaScript), inserting a debugger into a function the test calls.

All of this tends to be pretty successful.
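
As a minimal sketch of that failing-test-first step (the module and function names here are hypothetical; assumes pytest):

    # test_discount.py -- reproduce the bug as a failing test before fixing it.
    from mystore.pricing import apply_discount  # hypothetical module under test

    def test_discount_not_applied_twice():
        # Bug report: a 10% discount ends up applied twice after a re-save.
        price = apply_discount(100.0, percent=10)
        price = apply_discount(price, percent=0)  # a re-save should be a no-op
        assert price == 90.0

    # Run with `pytest test_discount.py --pdb` to drop into the debugger at the
    # failure, or put `breakpoint()` inside apply_discount.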

But I've occasionally found myself on a project where I just have a sense that I don't know what is going on or how pieces fit together and I feel a strong temptation to stop and fix that sense of bewilderment. When I resist this urge is when it feels like I am trying things at random.


It comes down to experience, really. The more you are exposed to, if you are paying attention, the easier it becomes to relate things to previous events, which can help inform estimates.

If you don't have a spec, get a spec. A senior engineer may be more reluctant to help you with implementation direction without knowing the requirements.

Sometimes you need to do research to give an estimate. If someone asks you in person, or over e-mail, and you honestly don't know, ask them if you can get back to them. Do not give them a number. Do the research, then reply.

If you're relying on external APIs or frameworks, you need to factor in risk assessments covering what happens if the fit is wrong, the interface changes mid-project, etc.

When it comes to debugging a lot comes into play, system design, documentation, experience with the code. "I don't know" is an acceptable answer until you understand what is going on. Once you understand what is going on, you then estimate the fix based on the code. Giving a number before you understand what you are fixing in detail is a bad plan.

I am exceptionally experienced with a certain code base and system. You may be experienced with a different code base. You're more qualified to handle estimation on what you know. That said, if you honestly don't know, say that. Don't low ball the amount of work if you actually have no idea how long it will take.

There are a few good books on estimation and debugging; look around, maybe they'll give you a few ideas.


> comes down to experience.

Yea. I really should get into the habit of keeping better records of things so that I can look back on past projects. I write in a paper notebook as I program, but that is usually just thinking out loud or trying to artificially increase my working memory.

> If you don't have a spec, get a spec.

Yea, I think I've concluded that I need to always write a spec for myself before I start a project. I know Joel says that PMs should write specs, but I should be prepared for the reality that projects rarely have separate PMs.

The hard part is how to write out the technical details ahead of time, before I've gotten my mind into the codebase I'm adding onto or the new framework I'm using.

> Risk assessments

Any tips on what to read to do these effectively?

> Once you understand what is going on

So I should feel safe in disregarding the advice to "avoid trying to understand the universe"? Or at least attribute it more to how I communicate about being stuck on a problem? That is kinda a relief. I really didn't understand how to put that advice into practice.

> A few good books on estimation and debugging

I've read Debugging: 9 Indispensable Rules, The Mythical Man-Month, Critical Chain, and a few chapters from Software Estimation: Demystifying the Black Art. Do you have any other recommendations? The latter three books seem to be talking at the level of projects that are anticipated to take at least 6 months. I'm looking to start out with being able to reliably estimate how much I can get done in 90 minutes.


> Do you have any other recommendations?

You might take a look at www.amazon.com/Rapid-Development-Taming-Software-Schedules/dp/1556159005/. Although it's 20 years old, it has some good ideas in it. It's also more about 6-month estimations than 90-minute estimations, though.


The Pragmatic Programmer: From Journeyman to Master is a good book I always recommend to developers I'm mentoring.


I read that as a freshman in college. I should probably re-read it though. It seemed kinda like How to Win Friends and Influence People.


> What do you do when you don't have a spec and when you ask the senior engineer you are working with questions, they don't respond or push back against having a spec or a plan longer than the very next feature?

What I would suggest is that you should learn the practices of wherever you are working, and not spend much effort trying to change them. Your first couple of jobs, you'll hopefully be exposed to several different methodologies, and only then will you have the experience to advocate for best practices, as well as being able then to speak to evidence of it working at a past job.

> What do you do if asked to estimate how long something will take to debug? What if you just started on a project and the error message you are trying to debug doesn't make sense?

If you can't accurately estimate debugging, instead explain what you understand, what you don't understand, what you've learned so far, and what you're planning on trying next.


Well I'm currently on the job hunt due to failure to deliver on projects at my most recent job, so I'm trying to figure out what to both change about my practices and what to look for in the practices at the place where I work next.


As a general rule, if management expects results without giving you the tools for the job, ask yourself whether you really want to be in that job, and start looking for alternatives. That said, there are several ways you can make things better:

> What do you do when you don't have a spec and when you ask the senior engineer you are working with questions, they don't respond or push back against having a spec or a plan longer than the very next feature?

Illustrate why you need the answer. "I'm trying to decide between approach A or approach B. If we're doing X then approach A is easier to implement, but that'll suck if we're doing Y, in which case the extra work involved in B is worthwhile".

> The author says that estimation is easier to do if you are doing something you have done before. What do you do if you are always working with a new framework or language?

If you have experience with Express in Node.js, Ruby + Sinatra or Python + Flask will seem familiar to you. Rails or Django, not so much. C++ would be more alien to you than Python or Ruby. Learn to identify such similarities and differences. I tend to find that domain knowledge is harder to come by than language/library knowledge -- the exact details of the work might be different, but if you're used to writing web apps or compilers or games or whatever, then you should be able to ballpark-estimate such a project in a different language. Larger projects are easier, too -- the cost of picking up the tools is roughly fixed and gets amortised over the whole lifetime of the project.

> - How do you break down the steps if you don't know what a project entails until you are finished?

First off: If we're talking about more than a couple of days' worth of work, estimating a project isn't a 5-minute job. Think a few hours for a 2-3 week project, at least, if you want a precise estimate. And there will be things you miss, but that's why you always add some padding.

At a minimum, you need to ask yourself the following questions: You're writing a service that does X. How does it receive input from the users? How does it pass output to the users? How does it obtain the data necessary to do X? Once you have the user input and the data, what work is involved in performing the task X?

Note how all these questions apply irrespective of whether you're writing a web service, a compiler, or a game (with the details of what they _mean_ being quite different). You spend some time researching potential answers and come up with a sketch of what an implementation looks like. Then you iterate: for each piece of the puzzle, what's involved? One of the easiest rookie mistakes to make here is to account for the time necessary to _implement_ all the pieces (and often that will be accurate), but to allow remarkably little to no time for integrating the pieces together, for testing, for documentation, and for all other such things.
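
A toy worksheet that makes those forgotten line items explicit (the tasks and the ratio are invented for illustration):

    # Hypothetical breakdown for "a service that does X", in days.
    implementation = {
        "receive user input": 3,
        "core task X": 5,
        "produce output": 2,
        "obtain/store data": 3,
    }
    impl_days = sum(implementation.values())

    # The part rookies forget: integration, testing, documentation.
    glue_days = round(impl_days * 0.5)  # made-up ratio; calibrate from past projects

    print(f"implementation: {impl_days}d, integration/test/docs: {glue_days}d, "
          f"total: {impl_days + glue_days}d")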

> What do you do if asked to estimate how long something will take to debug?

Tell them to f* off, in the most abrasive tone appropriate for your relationship with the other party. Seriously though, estimating how long it takes to sort out a bug is much, much harder than estimating new development. Time-to-fix-a-bug is also a long-tail sort of phenomenon, so be prepared for that.

> This exercise takes a lot of time. What do you do when asked for an estimate in a meeting rather than over email?

After a few projects you'll start being able to come up with the handful of good questions that need answering before you can give a good estimate. Bring them up, say you need to think about it, and be firm.

> - What do you do if the thing you are building relies on getting an external API to work and that API is either undocumented or is documented in a foreign language? Assume that the google translation is not making sense and this is your first project and you don't have a budget to hire a professional translator? This sounds like something you would need to get working before you could actually give an estimate.

Estimate the estimation. Say you'll need some time to figure out the requirements before you can safely say what work is involved.

> Why do experienced engineers ask for estimates anyway? Every time I give one I feel like I am lying and I try to warn people "I really don't know how to come up with estimates but I am guessing 3 hours". How do I deal with it when they are upset about it taking a week and a half?

This is probably the single most important part, and most places I've seen fuck it up tremendously. It's crucially important that people get to talk about this without it turning into a blame game. Why did you think it would take three hours, and why did it take a week and a half instead? Clearly either the estimate was wrong, or something _really_ bizarre happened that invalidated it. Estimation is a skill, and you'll need to practise it before you're any good at it. My experience is that you _can_ estimate things correctly (down to the half-day on projects lasting multiple weeks), once you have the context to do it in.


> As a general rule, if management expects results without giving you the tools for the job, ask yourself whether you really want to be in that job, and start looking for alternatives.

Well, the hard part here is determining whether I lacked the tools due to management disorganization or due to my own lack of skill or intelligence. I have already left the organization.

> you _can_ estimate things correctly once you have the context

Maybe my problem comes down to not recognising when I lack a piece of context, or not being stubborn enough to go and get it.


You've been given a lot of good advice on here - I concur with all of it. After almost 20 years of writing code I still couldn't tell you how long it would take to debug something in my own codebase. If I can visualise it, sure, I can take a guess, but if not, who knows what I'm going to find.

Given the questions you've asked on here, it really sounds like you were just in a bad environment. You said 3 hours, it took a week - you're a junior, why the hell didn't someone jump in to help out?

Everything you've asked comes down to experience and you need people around to support you in gaining that experience.

Good luck finding a nicer workplace - honestly, sounds like you have the makings of a good inquisitive developer mind. There are plenty of places out there that will help you develop your full potential.


A pet peeve of mine:

Scenario: Finished requirements, analysis, design; completed development and testing; in UAT; Production deployment: 1 week away.

Customer: Hey this field name does not exactly represent what we want to convey. Can we change its name quickly before we go live?

Eager Novice (who is unfortunately in front of the customer at that moment): Oh yeah, that's just a small change in the Customer model. I think we can get this ready and unit tested in 2 hours.

Customer: Great. Lets git er done!

Project Manager (wary of customer and afraid to rock the boat on an already shaky project): Sure we can do that.

Grizzled Veteran/Grey Beard (who for some reason wasn't consulted on the change request): Hmmm....have we done an impact analysis of the change? What are the downstream impacts on code that references this field? What do we need to do to propagate the change to the db? What about the seven or so Web Services that are impacted by this field change? How long does it take to do a regression test? etc...

Net effect: 2-hour initial estimate given to the customer; 4 days' worth of work in actuality; production deployment pushed back a week.


Agree 100%. Golden rule: Never give off the cuff estimates (even if you're sure).


Regardless of project/planning methodology, keeping an ever-growing checklist of less-obvious estimation items is a simple way to avoid forgetting to consider them. The book "Software Estimation: Demystifying the Black Art" [1] by Steve McConnell contains a number of these.

Some examples:

  - Team or company meetings
  - Team member sick days, holidays, vacations
  - Bugs in third party software
  - Maintenance of previous systems
  - Maintenance of build automation
  - Demos
  - Interviews
  - ...
... or your own list of whatever other items you have historically forgotten, or tend to forget when estimating.
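
As a toy illustration of how items like these turn "ideal" dev-days into calendar days (every percentage below is invented; a real list would be fed from your own history):

  # Converting ideal dev-days into calendar days by discounting for
  # recurring overhead. All percentages are invented for illustration.
  ideal_days = 20.0
  overhead = {
      "meetings": 0.10,
      "sick days / holidays / vacations": 0.05,
      "maintenance of previous systems": 0.10,
      "build automation upkeep": 0.05,
      "demos and interviews": 0.05,
  }
  available = 1.0 - sum(overhead.values())
  print(f"{ideal_days:.0f} ideal days -> {ideal_days / available:.1f} calendar days")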

In the painting example, the moving of the furniture was something I overlooked, despite having made the effort to carefully think through all the tasks. An estimation checklist built up from previous painting projects could have prompted me to consider this task.

[1] http://www.amazon.com/Software-Estimation-Demystifying-Devel...


I'm about to begin a rant here, because this is a larger problem than anybody likes to admit.

Developers are good at development. By and large, they are not good at task management, project management, and certainly not estimation. One of the primary reasons for this is that their plates are always overflowing and they work more efficiently than many people in other professions. They shift priorities on the fly when work in one area cannot be progressed. Development time is social currency. You can make your boss happy by progressing his priority, or you can make someone else happy by progressing their priorities. Not many developers truly answer to a singular master. And few allocate time for learning although we all spend a significant amount of time doing it - how else do we maintain relevant skill sets?

Can a developer really estimate a project and tell his manager that 3 weeks in the timeline are for 2 unknown and unforeseen problems that require interaction with vendor support? No, because the manager will fight back and say that those 3 weeks are really just zipping up logs and making three phone calls and "if it takes anything more than that, hand it over to me!" Most can't tell their manager that it will take 2 weeks to parse a CSV because the source data is shit and they have too many other things they are working on. So they stay an extra 6 hours in the office and get called a rock-star until the day that they opt to go to their child's school play instead.

The solution to this problem is simple. Take task management and project estimation out of the hands of developers. It's time for the return of the secretary to the workplace. A single $50-80k administrative assistant could easily support, manage the priorities of, and improve the outward facing communication for five developers. With experience doing so, that admin could provide better time estimates for all of them and would be better at negotiating realistic timelines than any of them individually.

People have been coming up with methodologies and software to solve this problem for decades. Developers don't have brains built for project management. Yes, they can do it, but they don't get paid to work in MS Project. They get paid because they know the difference between a bubble sort and a quick sort and when to use either. Conversely, I don't know a soccer mom who doesn't successfully manage a more challenging schedule and competing set of priorities better than every nose-down developer I've ever met. Give appropriate work to appropriate people. Developers have their talents. Scheduling, estimating, and task management are not among them.


Going to join you in this rant if you don't mind.

> Most can't tell their manager that it will take 2 weeks to parse a CSV because the source data is shit and they have too many other things they are working on. So they stay an extra 6 hours in the office and get called a rock-star until the day that they opt to go to their child's school play instead.

Ugh! For real! I had this happen to me recently. I have trouble telling people "No", and pushy PMs who also happen to be my friends mean that I often bite off more than I can chew, and have to either "be a rockstar" (which you start to realize isn't such a compliment) or deliver late / fail to deliver.

So I just started being a dick about it, and it works for me. If I say no too politely, it's really easy to convince me to change it to a yes. My turning point came while discussing a project that was much more complex than management assumed (I'm compelled to link the relevant xkcd [0]), at a time when I already felt overwhelmed by my current projects.

The discussion was something like "Ok, how about by this date? What about by this date? What if we dropped these features?", I thought for a moment and responded "Ok, look. I can promise you any date you want, and we can shake hands and leave this meeting. But I'm telling you the reality is I can't do it, and it won't be done. If you'd like to take some of my other projects and move them into the will not be done category, we can talk. I'm only saying this because I don't want to agree to a deadline and then be constantly thinking 'Oh well, that's the date I'm getting fired.'"

I'm very lucky that everyone here is really nice, and realized how stressed I must have been to react like that. So we've reprioritized some things and are beginning to outsource some of my more menial tasks that took up a lot of time.

Didn't really intend for that story to have a point, just venting something my non-dev friends don't relate to. I guess the point could be that the problem isn't just caused by bad managers, or bad dev estimates, or bad process, it's a little bit of everything, and the ratios are different for every company or person.

[0] http://xkcd.com/1425/


Personal experience here - everything got better when I really started being a dick. I flat out tell them that they can't change reality and that things take time, and I will not work extra hours or during the weekend to keep promises I didn't make. I had the same feeling of "when are they going to fire me", but it actually looks like I am getting much more respect.


Exactly the same here. I think it's not really "being a dick", but just a dissonance between how different types of people operate. I'm usually so passive that any sort of "No, I want my free time after work" feels rude, and I'm probably not very good at being assertive and nice at the same time yet, haha.

The other developers always seemed to get the message that when I said "Maybe... I'm not sure if I have the time for it... I can see...", I really meant "No, but please don't make me say no", while anyone else heard it as "Cool, he said he would see if he has the time for it" and then followed up with me a few days later: "Did you make time for that project?"

I'm making an effort to flat out say "No" more often, but it's nice that other people are also beginning to realize that I only mean "Yes" when I actually use the word "yes".


That is the worst conversation. You have to admit you can't do it, and you have to force them to evaluate what is worth what.

Of course, part of that evaluation is you. Almost unavoidable.

They have to admit you are not an infinite resource and that they are asking for more than is realistic.

The very worst is that by the time this all happens, they have most likely already pre-agreed to just get it done, or they themselves were told to just do it.


I feel your pain.


Ideally a product manager solves or assists with many of those problems:

1. Reduce the ambiguity of what features to work on and what the priorities are. This involves being the barrier against other managers and team members trying to pull a fast one and change what should be worked on.

2. Stay on top of what the latest estimates are in an honest and transparent way. Product managers acting as project managers should be able to translate what a particular developer says into what the estimate might be in reality. They can acquire or release resources, or even change scope accordingly, or at least make the case.

3. Provide developers a constant opportunity to break down tasks into components that can be tracked easily in Asana or another task management tool. This takes discipline as a product manager, to enforce a system onto developers in ways that make sense.

Disclaimer: Product manager here.


Theoretically yes, and if you're the unicorn, then I'm sure you are very appreciated (and probably under-compensated). Seriously. I'm not mocking you.

With a person whose job is to help you, and who is not above you in the food chain, you are more likely to have an honest self-assessment. Also, with a third party in place, fairness becomes part of the equation. When a developer has to defend his timeline, it's personal and can be very stressful. When a third party has to defend the same timeline, it's just a matter of business. Conversely, when a third party is in place and is told that a developer needs to produce more, their instinct is to reprioritize accordingly rather than simply add more hours to the work day (which is a common instinct among developers).


Which is why there are Solution Architects who are (hopefully) familiar enough with the technical and business side of things to provide a high-level solution overview, and maybe a POC that will feed into the scoping of the SOW. It also helps if the same people who estimate (architects/PMs) end up being in charge of the project delivery. This keeps them honest and prevents severe under-estimation, while an internal review process for the estimates keeps them from over-estimating.


I also think one of the biggest mistakes is over-estimating, or playing it too safe. Especially when working with big teams and multiple projects (with a lot of dependencies).


Underpromise and overdeliver — that's the secret to professional success.


and a rookie mistake in sales


Sales pressure is sometimes the primary reason our estimates come in under.


I'd argue that promising what you can't deliver is a mistake in sales, too.


Why - has that led to a spate of projects completed early?


It leads to what I call "do it right syndrome".

Since I have time, I'll do it right: with a factory class, and an extensible configuration package (that I'll write myself so it will be just perfect), and, for optimal speed, I'll use Red-Black trees that I'll have to write my own implementation of, since there isn't one in the standard library ...


What's wrong with separation of concerns, extensibility, optimal speed, etc.?

More often than not, I've observed that doing it right the first time is actually a huge time saver compared to hacking together some stupid prototype nobody can understand later.


Agreed, except in cases where the code will never be maintained again (e.g. game releases on N64). In my experience an "over-estimate" that leads to the so called "do it right" syndrome happens if someone else estimates for a developer. When the developer makes the estimate him/herself and doubles it, you can be damn sure they are going to need all that time and then some to even get it in the ballpark of long-term acceptable.

(That being said, the company needs to have standards for code acceptability in the first place, which doesn't always happen)


You are saying I'm wrong because good engineering is better than writing crap. Except that's not what I said; I was saying over-engineering is bad and we should stick to engineering.


no... effort required seems to expand to fill the available time allocated to it... and then spill over some.

Which is why it's also a problem.


How do you get the right estimates then? Mind sharing your techniques?


I was on the developer side, where estimates are usually doubled by PMs (or, going by the story's theme, they always assumed primer might be needed on every paint job). Then we had a lot of scheduling problems and low efficiency (you finish painting the room, but the people who are going to move the furniture back are still not available).


The tendency to estimate poorly - particularly to underestimate - is one of the key reasons to use a very tight, short-iteration agile process.

If I estimate something will take two days, and I'm off by 50%, it'll take three days. If I estimate it'll take two years, and I'm off by 50%, it'll take three years. And because I can get a lot more practice at two day schedules than two year schedules, I'll improve my accuracy over time, through steady practice.
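
Beyond the practice effect, there's also a purely statistical reason to prefer many small estimates over one big one, sketched below (assuming independent errors, which is optimistic but makes the point):

  import random
  # One 500-day estimate vs. 250 two-day estimates covering the same
  # amount of work. Independent errors on small tasks partially cancel.
  random.seed(1)
  def actual(estimated_days):
      return estimated_days * random.uniform(0.5, 1.5)  # +/-50% error
  one_big = actual(500)
  many_small = sum(actual(2) for _ in range(250))
  print(f"one 500-day estimate came in at {one_big:.0f} days")
  print(f"250 two-day estimates came in at {many_small:.0f} days total")

In practice estimation errors correlate (the same misunderstanding bites every task), so the cancellation is weaker than this toy suggests -- which is exactly where the practice and feedback come in.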

This takes us back to the triangle of constraints that Kent Beck talked about in the first Extreme Programming book. There are three constraints on projects - schedule, scope, and resources. You can control two of them. Generally, resources are inflexible. So in practice, projects are either schedule-bound (we must hit this date) or scope-bound (we must have these features). A project that tries to do both, without a significant resource surplus, is probably doomed.

Since estimation improves over time with short iterations, it seems better to me to be scope-constrained than schedule-constrained. With steady progress, you can achieve the goals eventually. Good agile processes are much better at saying where you are and where you're going than when you'll get there.


I've never experienced estimation improving over time in an Agile process. In fact, since short estimation cycles are just as susceptible to political games as longer cycles, and this dominates the whole process in any organization, the length of the estimation cycle really has nothing to do with the estimation issue.

But what shorter estimation cycles are responsible for is a decoupling of the overall big-picture progress from the immediate tasks being prioritized. This often leads to sprints that are well scoped, where everything in the sprint gets completed close to on time (at least as close as in longer work cycles, but not usually any closer), but where, crucially, all of the work has to be scrapped because the whole sprint, conceptually, was not right. In my experience, this happens maybe 1/5 of the time in Agile, in exactly the ways that would be prevented with longer-term planning.

It's very similar to the classic fractal effect of measuring the coast of Britain. By refining your ruler, you merely think you're being more accurate, and a lot of people end up doing a lot of performative managerial busywork for the sake of the Scrum ritual. It feels like you are estimating better, but because the whole meta-level concept you are working on isn't guided by longer-term thinking, you end up with so much round-off error, added up over more and more cycles, that the overall waste is colossally greater than in systems that involve more significant longer-term pre-planning and that work on variable cycle lengths instead of a cookie-cutter, one-size-fits-all approach (which is the core philosophy of Agile, though people try to obfuscate this fact by No True Scotsman-ing it with the Agile Principles).

I also can't miss the opportunity to point out this great article on the political games we play during estimation: < http://research.cs.queensu.ca/home/ahmed/home/teaching/CISC3... >


All excellent points. But having done big waterfall, bad corporate agile, and good agile, the problem of work failing conceptually is common to all of them. The question is, how do you recover from it? I think good agile recovers more gracefully from conceptual failures than either waterfall (which tends to throw good money after bad, because admitting failure is not an option), or corporate agile (because corporate agile is usually just buzzword-compliant waterfall).


I'm not saying waterfall is a good alternative. Waterfall is just as shitty as Agile, though one benefit is that it wastes less of everyone's time with meetings and busywork. But waterfall is equally guilty of a one-size-fits-all mentality, from the opposite direction, and this cookie-cutter property is what makes both methods attractive to bureaucracies.

I'm a fan of common sense. If it's clear in a given team and project context that two week sprints, story points, and frequent meetings are really going to help, then just do it. And when that stops working, just switch and do something else. When architects or researchers are telling you a given project direction needs to slow down for some more intensive piloting research, do it and don't hesitate to violate sacred Scrum principles.

This stuff should be figured out organically, based on the given project and given personnel. Never with a fixed mandate to a single prescribed method or time frame.


Continuous improvement of the process itself is part of any good agile process. Corporate buzzword compliance agile is often guilty of not doing that - the "cookie-cutter property".

But importantly, I've found that a refined agile process saves more work than it costs, by figuring out what doesn't need to be done before doing it, and by prioritizing immediate value over future value. It's very important to be able to say "This is good enough for now, but we know it's not good enough for the future". It keeps the perfect from being the enemy of the good. Likewise, it's important to be able to say "Well, I guess that didn't work", and have that be an integral part of the process. Without a process, you have no record of what wrong turns you took and what those wrong turns cost.


I dunno, what you say just reads like a bunch of buzzwords that Agile, even supposedly "refined" or "good" versions of Agile, never prioritizes or delivers. More often, when quality work is produced it's produced in spite of Agile, not because of it, which is a shame because then people falsely attribute that success back to Agile.

I admit though that it's hard to know exactly what you mean just exchanging textual comments like this, so I'm happy to leave it at that.


One of the fundamental issues is that upper management sees estimates as bell curves, as though 50% of the time a project may finish early. In reality, very few projects finish on time, let alone early. No one wants to hear about reducing scope, but at the same time an MVP that solves the customer's problem without feature creep should be evangelized within organizations. Schedules follow more of an exponential curve, where slippage goes from bad to worse very quickly as resources, scope, and schedule snowball. Things get scrapped before we see this play out in practice.
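
A toy simulation of why the distribution around an estimate isn't a bell curve (the log-normal task durations and every parameter here are assumptions for illustration):

  import math, random
  # 20 tasks, each estimated at its median duration (3 days). Actuals
  # are right-skewed, so the tail only ever pushes the project later.
  random.seed(2)
  def project_days():
      return sum(random.lognormvariate(math.log(3), 0.6) for _ in range(20))
  runs = [project_days() for _ in range(10_000)]
  estimate = 20 * 3
  early = sum(r <= estimate for r in runs) / len(runs)
  print(f"estimate: {estimate} days; on/under estimate in {early:.0%} of runs")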


I may be cynical at this point but overeager project scheduling/estimation processes and top-down software processes in general are really where software development as a creative and experimental occupation dies.


If you can do "creative software development" and get paid for it, count your blessings. And better yet, give a very big "Thank you!" to your manager.

The reality of this trade, regardless of you doing in-house work or a product for the market, is that everything is required for yesterday. There is an "economy of pressure" where every stakeholder pushes everyone else as much as they can in the expectation that they will be pushed as well. If you do not get to experience this, it means someone in your chain of command is very good at pushing others, and that some other part of the overall system is shouldering most of that pressure all by themselves.


So if I understand you correctly you're saying that "creative software development" is inherently incompatible with the hierarchical realities of the corporation?

If so, I think you're probably right.

"The reality of this trade [..] is that everything is required for yesterday"

But it just isn't. This is an artificially created pressure for selfish reasons, and should not just be accepted in passing.


The game is rigged. Only way to win is not to play.


Or according to the parents of this reply, to play under the wing of an expert player.


The article assumes specs are known. They might be for some kind of brochureware, but for any serious work they never are, and in business, reqs change day to day. 60% of software features are never used -- so that's 60% overspend right there. There's no reason not to do fixed-cost agile (personally I wouldn't do it any other way). Trying to fix scope, OTOH, is just bad business from the get-go: a) you might have target outcomes, but you don't know exactly what you're going to need to get there; b) useful software tends to be ongoing in terms of reqs.


You could just use data, machine learning, and optimal control to make better estimates. It has already worked for fighter jets, self-driving cars, and automated factories. With enough data it will work for anything.


What kind of features would we use? Which prior projects would help predict how long it takes a team of 3 developers (one of them senior with a history of underestimating, two juniors with little track record) to produce a novel photo-sharing app where the requirements might or might not change every day?

The problem I see is that there isn't enough data, and the problem space (the number of possible projects and teams) is too large to draw meaningfully on past data.
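
For the sake of argument, here's roughly what such a model would look like (the features and training rows are entirely invented, which is exactly the problem -- real labeled rows like these barely exist):

  # Hypothetical duration-regression model. Every feature and row here
  # is invented; gathering enough real ones is the unsolved part.
  from sklearn.ensemble import GradientBoostingRegressor

  # [team_size, senior_devs, novel_domain (0/1), spec_churn (0..1)]
  X = [[3, 1, 1, 0.8],
       [5, 2, 0, 0.2],
       [2, 0, 1, 0.5]]
  y = [180, 45, 90]  # actual project durations in days, invented

  model = GradientBoostingRegressor().fit(X, y)
  print(model.predict([[3, 1, 1, 0.9]]))  # "estimate" for a new project

Three rows is a parody, of course, but even thousands of honest rows are hard to come by, and the feature space above barely scratches the surface.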

Unless we're looking to predict specific CRUD apps where the requirements are known up front and a team cranks these out consistently.

Humans have already been flying fighter jets, driving cars, and doing work in factories, so we know it's a learnable skill. We have not yet learned how to estimate arbitrary software production.


As long as sales, or its local equivalent, commits to as yet non-existing products and features, it will ever be thus.


> The fact that I technically only asked how long the fix would take is something only an engineer would bother pointing out. -_-

Well, excuse me for not fitting your prejudiced profile of a neurotypical extroverted 20-something white male. Yes, I might have needed the clarification between "how long will it take you to paint the room" and "how long until the room is back the way it was with the walls in a different colour", but you help no one when you talk down to the painter like that. You asked a question and I misunderstood, because we use slightly different language and I have a ton of other things to think about all the time.

Can we stop this "oh let's treat engineers like they're children with developmental issues" already? I'm not some god in an ivory tower, I don't want you to kneel before me; I just want the usual everyday respect you afford to your peers, and for you to engage with me as one professional to another.

This kind of treatment is not ok.


Agreed. As an engineer, I can also attest that if I say "we can deliver that March 15th", then I get an argument about why it can't go out with the Feb 28th release, and why it's going to take 30 days to do something simple, and why... all because marketing people don't understand delivery cycles.

So, let's just not have that type of conversation at all.


Ever talk to a contractor about a home renovation, or even a painter as in this story? If you start questioning why something will take so long, in my experience the usual answer is something like "if you think you know better than me, do it yourself." Especially if they are good at what they do. Most good tradespeople have more work than they can do, and will just walk if they think a customer is going to be a pain in the ass. How often will developers take that approach?


Well tradespeople usually work for multiple clients on smallish projects. For good tradespeople, a significant number of those projects come from repeat customers, which means their optimal strategy is not to serve all customers equally, but to prioritize those that are more likely to bring repeat business down the road. A difficult customer is usually not someone you want to do repeat business with, so they get "managed out" pretty reliably.

Most software engineers, on the other hand, work for one big customer at a time (your employer). And though I agree that a bad employer is, at the end of the day, detrimental to your career, you cannot afford to switch jobs over trivial matters, at the risk of getting labeled a grasshopper. That's why you do not see developers "walking".


If I am talking to my PM I will say, "It will take me an hour to fix that. But I'm working on project XYZ right now so it will be sometime next week. If you need me to do this first though, just let me know."

Because to me part of being a PM is deciding project priority.

In the room-painting analogy: "Painting the bedroom will take about 2 days of work. But I am working on the kitchen cabinets, which will take at least 2 more weeks. Do you want me to stop that to work on the bedroom, so you can move in your bedroom furniture first?"


> These are the skills that elevate someone from a good programmer to a great software engineer

I stopped reading at engineer


You really shouldn't.

While the term "engineer" is loaded with licensing and professional implications, and understandably so, people working with software very often do have to perform the task of engineering solutions.

Dismissing a valid dialog on this basis does no one any real good.

Despite the issue you raise, the query and discussion remain valid and high value.


> In tech, we spend little time talking about the softer skills like communication, project management, and prioritization.

Well, that's not a good start. Maybe my career has been abnormal, but I've spent a lot of time thinking and talking about project management skills, particularly estimation. It's something my employer expects all non-junior employees to do -- more than half my team are certified in Scrum or have taken courses in other PM methodologies.

For most of what we do, the hard part is project management. Sure, the techie code stuff is usually more fun, but that's not even half of my daily workload (and hasn't been since well before my title changed from "product architect" to "project manager").



