Planning and estimating large-scale software projects (tomrussell.co.uk)
328 points by kalleth on July 21, 2021 | hide | past | favorite | 136 comments


> Estimates are one of the hardest parts of software development.

And also fundamental. When I was directly estimating big software projects, the key, for me, was to trust developers' recommendations but apply a different multiplier for each developer. Multipliers ranged from x1 to x3. Those rare devs with x1 were, of course, a blessing. And those with x3 were not necessarily bad; they were often the ones working on the really hard problems. Of course, it meant getting to know those developers, and having a prior (which we set at x2 for new starters).

Individuals were remarkably consistent in terms of their actual performance; so a x1 developer would almost always be a x1; a x1.5 would almost always be x1.5.
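A toy sketch of that per-developer multiplier idea; the names and x-values here are made up for illustration:

```python
# Per-developer multipliers learned from past projects (illustrative values).
multipliers = {"alice": 1.0, "bob": 1.5, "carol": 3.0}
PRIOR = 2.0  # default multiplier for new starters, as described above

def adjusted_estimate(developer: str, raw_days: float) -> float:
    """Scale a developer's own estimate by their historical multiplier."""
    return raw_days * multipliers.get(developer, PRIOR)

print(adjusted_estimate("bob", 10))   # 15.0
print(adjusted_estimate("dave", 10))  # 20.0 (unknown dev, prior of x2)
```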


One thing I’ve found to be helpful is to make it clear that I won’t be mad about a long estimate.

If that’s how long it’s going to take then that’s how long it’s going to take. I think there’s a group of people who are used to getting lots of pushback on their estimates and so they estimate low to avoid that conflict. The future conflict of things being late is not something they have to deal with right now and maybe they actually will get it done.

The key is backing it up continually and really not getting mad about estimates that are longer than I’d like. And, for people who I do think are sandbagging, asking more specific questions about the details of the estimate in a non-combative way.

And of course some people really are bad at estimating. But I find if they don’t improve with coaching then it’s a symptom of a bigger problem (not thinking things through all the way) which manifests in other ways beyond estimation (eg poor design)


> I won’t be mad about a long estimate

So... my experience with this mindset is that you won't be mad, but you'll just say "that's too long/expensive" and cancel the project entirely. Then the same project will come up again in two months with different wording, again and again, until I give you the estimate you think you can afford. And then the thing will end up taking longer than the original "too expensive" estimate (in part because too many things were rushed in the beginning to try to meet "the date"), but nobody will ever compare the final outcome against the original estimate anyway... because estimates are never meaningful.


My rule of thumb is that if you give me an honest estimate, then it’s my job to make it work. Negotiate scope, change implementation, move resources, increase cost, whatever. But I need estimates that at least reasonably correspond to reality in order to make my case successfully.

The biggest issue I have is developers making business decisions without realizing it or without saying it — saying no to something because it would take too long or be too expensive, without first asking how much time/cost can be consumed.

The same occurs with estimating — they’ll give a risky estimate instead of a safe one, because they’ll unilaterally decide the safe one is unacceptable; and they won’t mention this with their estimate, or know what the rest of the timelines look like.

The other half of the problem is that people tend to only want to give one number (and people typically ask for a single number), but what you really want for project planning is the median timeline plus or minus a margin of error. The 1x/2x/3x rule is just a hack to work around that unwillingness.


Yeah, from the product management side this is the way to approach it. Make priorities clear and make it clear whether the important part is the date or some set of features.

If the date is what's important then start the conversation with "here's the date, can we get this all done by then? If not then what can we get done? If we can't do it all, here's the stuff that I think is important." And make sure to allow some time for people to think about things and give an honest estimate.

If the features are what's important then don't even put a real estimate on it because then that just turns into features + date which never works well. SWAG it by weeks or months and refine as you get further along.

It's basically a law that we always want to do more than we have time for, so it's important for the decision maker to be clear about which parts really matter.


> the decision maker to be clear about which parts really matter

which is good and all, but the reason software projects have such a high failure rate is _because_ these decision makers aren't clear about which parts really matter! And not to blame them solely, because it's a hard problem to know.


100% agree. They aren’t clear because they don’t know. They might not be able to think in abstracts, or simply don’t have their priorities straight.

The same would happen with ANY other type of project.. marketing, home renovation, building an aircraft, or even baking a cake


> And make sure to allow some time for people to think about things and give an honest estimate.

I like to tell people to give an estimate for when they can provide an estimate.. and an estimate for that, if needed :)

Eventually someone knows something


The biggest issue I've seen is developers making business decisions knowingly, but without saying it to the business people. I was on a team that literally sat around saying "if we tell them it will take 18 months to do this then they'll just cancel it, but we want to do it, so let's say 6 months and then we'll just be late but they never cancel stuff in progress". I quit that team for many reasons, but they got started under that 6 month estimate and four years later the project is still going.


My favorite PM to work with never wanted any estimates to be optimistic. Things had to be within reason, but he preferred very conservative to very lean. Once the numbers for a given proposal had been collected, it was his job (and anybody else from the team who went into the pricing/costing delegation) to go back and forth with segment management:

"We can't win with these numbers. Are you trying to lose this business for us?"

"Nope. I'm letting you know what we believe is necessary to finish the work on schedule. We can be more aggressive in some areas and, if so, we'll add items to our risk register."

"You need to hit XYZ target or we're not even going to send a bid."

"Roger."

<Team decides whether it is possible>

The internal cost debate usually ends up with a good budget and the PM/team have an "I told you so" card in their back pocket in case the team can't meet the delegated EBIT. It isn't perfect but this friction (PM sticking up for his team and management pushing for competitive numbers) is better than nothing. I imagine it is similar at most engineering shops.

edit: The fun part is that everybody knows it's a game. PM wants as much cushion as possible, management wants great profit at a low price, and the team wants to keep getting work so as to stay employed. Sometimes the costing/pricing stuff goes a little over the top and the infamous "death spiral" gets tossed out by management. Always a hoot.


I've seen this, but I don't think it was a problem or a bad thing.

Let's say the project was first proposed in January 2021. Got estimated as taking a year. Didn't get scheduled, since that was seen as too long, and there was other stuff people wanted done in 2021.

It's pitched again in June 2021. Same story.

Come February 2022, it's pitched again. Now it's estimated as 15 months - there's more to do since more code has been written in the meantime! - and gets started, and ends up taking 19 months (estimates are never perfect!).

You might say "this was stupid, we should've just done it in January 2021" but in the cases I've seen, pushing it off a few times made perfect sense. The payoff wasn't seen as the most valuable thing that could be done, and the effort was high. The effort became higher after it was postponed, but by that point so was the relative value compared to other proposed projects.

On the other hand, if you hadn't come up with that first estimate, maybe the assumption is "ok this will take several months but we'll still be able to do this other stuff in 2021", but instead you work on it throughout 2021 and ship it in March 2022 or so (so ~15 months, still faster than doing it later), but that causes you to not do 80% of those other things you thought you could do.

Postponement or cancellation of a project because it's just too expensive to be worth it right now is a perfectly valid use of an estimate!


Hum... The way the GP is written, it looks more like the scenario where the dev manager/PM/whatever comes to the team with the problem, gets the 1-year estimate, says "it's too expensive", and closes the project. A few weeks later, somebody states the problem again and pushes the manager to get another estimate, which is again too large, so the project is closed again.

Repeat that until the problem gets a singularly misleading wording, or some key person is away, and the project gets a 6-week estimate. It will take 19 months anyway, because nothing changed, but for 17 of those months you will be late.

(Anyway, I have never seen a problem statement to be well defined enough for this to be the problem.)


> Repeat that until the problem gets a singularly misleading wording, or some key person is away, and the project gets a 6-week estimate.

At that point we're definitely at a point of "severe company process problems" and the method outlined in the original article probably isn't feasible - I can't really imagine that manager accepting the time required to create a GOOD estimate - but I've definitely seen it work properly at some of my jobs.


I've found that it's important to show your work - don't just cough up a number; show them the disciplined process by which you arrived at that number. (I use PERT as a process, and it's rarely steered me wrong.)

As an engineer, it's my job to give an estimate. It's my boss' job to figure out whether that can work within his budget or if the task can be re-scoped. But I have to give him a number he knows is legitimate, and not some half assed BS I pulled out of my ass.

At a recent job, I witnessed an engineering team use some kind of agile "planning poker" game to arrive at estimates. Everyone on the team thought this was a great method. Yet they consistently failed to hit their targets, and it had a very profound impact on the performance of the ENTIRE company, (where various teams had serious backlog-coupling issues).

Because I'm a devops engineer (despite previously having been a development lead at a different company - which was a Fortune 500 by the way) and therefore, not considered a "real programmer" - of course I wasn't taken seriously when I tried to advocate a more data-driven, disciplined approach. But what do I know.


> my experience with this mindset is that you won't be mad, but you'll just say "that's too long/expensive" and cancel the project entirely

Well, as far as this stuff goes, I kind of have it on easymode because I'm a VP Eng who also does a bunch of product management work. When I'm doing product design, I have the benefit of knowing approximate engineering LoE/order of magnitude, so I can make sure that I design products where the engineering effort fits into the product development cycle.

But it's not unusual for something to be harder than I thought because of some detail that I'm just too far from to realize, and in those cases it's clear to the team that they should be honest about challenges and we'll work around them together, vs. some bullshit because they're afraid I won't like their estimate. That last part is built on trust, though, and on basically never "shooting the messenger" when someone tells me that there's a challenge.


I think this attitude is too fatalistic. A good approach would be to see if there’s some way to break up the feature in smaller pieces, into smaller milestones, etc.

If you have a manager that is unable to deal with this kind of stuff, then your problem is the manager, not the estimates. Estimates are extremely useful, and you’re doing yourself a disservice if you think that they are never meaningful.


If I was surrounded by people who could put together accurate software estimates that made upper management happy, and I was the only one who always got it wrong, I'd hang up my hat and see if I could get a job selling life insurance. Hell, if 10% of my peers could put together accurate software estimates that made upper management happy, I'd lobby to have them elevated to senior positions and spend all my time begging them to explain their secrets.

But they don't. Nobody does. They don't just get the timelines wrong, they get the tasks and milestones wrong - which makes sense, because the people asking for the estimates don't actually know what the goals are, usually even after the software is delivered.

The only thing an intelligent software developer can do is play along, learn to read subtle cues as to what they want you to say, say that, and get on with the actually messy business of delivering working software.


> because the people asking for the estimates don't actually know what the goals are

And this is the real problem. People who know how to build & design products should be able to explain the goals of what they're asking for. If they don't then either you need to move or you need to get them moved.

One of the first things I tell new engineering managers is that an easy hack is to always be asking yourself "what am I trying to accomplish?" Whether it's an emotional conversation or writing a document, if you can't answer that question then you probably shouldn't be moving forward with whatever you're about to do. And if you're writing a document, make the very first section "The goal of this document is..." because as you write you can constantly ask yourself "is what I'm writing accomplishing my stated goal?"

Communicating your goal also helps your team - if they know the goal then they don't act as automatons following directions, they can make actual decisions on their own.

If you know the business goal of what you're building then you have a better shot at getting the requirements and tasks correct, which gives you a better shot at giving a good estimate. I've worked with a number of product managers who had no real goal behind what they were asking for. I refuse to have the team start building things until someone can explain why we're doing it and what we're trying to achieve.


> But they don't. Nobody does. They don't just get the timelines wrong, they get the tasks and milestones wrong - which makes sense, because the people asking for the estimates don't actually know what the goals are, usually even after the software is delivered.

I definitely agree that the people asking don't actually know what the goals are. They usually have a very fuzzy picture there.

But I've always found the estimation and planning process to be hugely valuable for revealing the questions that reveal to THEM that they don't know all those requirements yet. And then we have a discussion about them, and we get better specs as a result, and then go on from there... which is way better than when we don't discover those gaps until we are writing the code for it!


You’ll spend more than half the total time on meetings and discussing. Client will not be happy, but at least they can’t deny it was needed.

Still it’s the only way that works.

They pay for someone who types code, but need someone who understands their business.. oh wait, isn’t that a consultant?

Consultants get a bad name because they usually don’t understand either the business or the tech. But they have good verbal skills


Just get more experienced developers, who understand that a lot more needs to be done than writing some code. Pff wait, you’ll have to pay more, and you’ll have to trust their experience. Money and status…

It’s probably easier to get a younger dev who’ll go into crunch mode because they made estimates based on incomplete specs and a lack of experience, but who feels the responsibility to deliver. They’re more easily pressured into working more, when the PM should step in and get more resources or do damage control for the mess they created.


I'm reminded of the classic saying: "The first 95% of the work takes 95% of the time, and the last 5% of the work takes the other 95% of the time."


Arggh.

I hate multipliers. It's not poor estimates that need to be doubled. It's the lack of experience in identifying the complexity or likely risks in a project.

So rather than multiply, ask for risks. You'll find that most engineers know the risks (or at least they may know it is without risk). As you explore risks, you'll find the estimates will lengthen automatically.

So rather than do the multiplier, explore the risks. They are what makes projects late, not intrinsic engineer productivity. You'll end up with better engineers who know themselves and their problem spaces better. You'll also get a clearer view of what could screw up your project.


I don't think OP is saying one engineer is 3x "more productive"; rather they're saying that engineers tend to be consistent in how much they over- or under-estimate tasks. It sounds a little like "story points", where each developer may end up having their own estimation scale.


I agree that project risks should be explored and multipliers are not substitutes for them. However multipliers are useful for ongoing dynamic distractions outside of the project that reduce your productivity, for example company meetings, hiring/debriefs, getting pulled into critical bugs, helping to support colleagues, etc.


I don't trust multipliers. I often find those rules of thumb to be very poor predictors. After all, if you don't know, how can you predict some fixed fraction of the work?

I led a small team to deliver software over a multi-year project. The software was used to control a machine. The machine was contracted as part of a multi-million dollar deal. We delivered software on time and with good quality. It was very far from trivial. The key to this is that the team had experience in doing similar projects, so we generally knew how long things take. We knew what needed to get done. We knew we would be debugging the machines. We knew we would need to work around issues. We knew we'd be limited until the first prototypes are assembled. We knew what existing tools and libraries we could reuse. And we also knew what's the minimal viable product and what are nice to haves. For me personally this was the third consecutive project of building very similar machine control software, for very similar applications, using very similar technologies. This wasn't exactly the same but I went through this process and I knew how it works. The first of those actually failed because the software wasn't ready on time (and other reasons, but that was the main one).

A team with no experience, building something new, has zero chance of correctly estimating anything. Not only will they likely not deliver on time, they might not deliver at all, ever. Or they'll deliver something that doesn't work. Someone external who has experience with these teams might be able to provide the right estimate (e.g. they'll say it'll take them 3 years and they'll deliver nothing). In my current job I asked a junior engineer for an estimate, he broke down stuff to tasks, and came back with something like "a month". Ten months later it wasn't done. Much simpler technically than the other project but the engineer didn't really see all the details, didn't have experience with similar projects, so he basically just made stuff up.

You always have to plan for surprises and you need to have contingencies. An estimate is always some sort of distribution but for a large enough project these distributions do form some sort of coherent picture. Sure, you might have a surprise, but you do the risky things first to reduce those risks.


Yay #noestimatemultipliers


Good management advice. I was a first-line manager for a Japanese company (hard estimates, with lots of international frowny-faces, if the estimate was missed). That was pretty much what I did.

We usually came relatively close to our estimates, but would sometimes encounter anomalous situations that would force us to re-evaluate. I found that my managers were pretty good at accepting these, as long as they were not too frequent, and as long as I didn't do the "Lucy in the Chocolate Factory" thing. They set hard plans, and we were not to deviate from them.

But our customers often did not like the software that we delivered on-time, and to-spec. Since we were a hardware company, it was considered OK, but we hated it.

So we delivered yesterday's technology; tomorrow. :(

But I also ran a small team of really high-functioning C++ engineers. They stayed with me for decades, and I got to know them quite well.

I can't even imagine running one of these shops with half a million inexperienced engineers that flow in and out like guppies in a pond. My management experience would be worthless for that environment.

Since leaving, I have taken a very different tack with the software that I write.

I assume that my estimates are basically fiction. I am going to make big changes, to suit changes in the deployment environment, customer expectations, competitive landscape, platform tech, etc.

I tend to write very flexible software, in layers and modules. It allows me to react to these changes fairly quickly, and to maintain my very high quality standards, throughout.

I often toss out huge gobs of code, when I hit one of these challenges. That actually makes me happy. The code I don't write is the best code of all.

Flexible software is a very double-edged sword. Most quality processes advise against it; for good reason.

But I tend to know what I'm doing. I've been at this a long time, and have been very humbled by many, many mistakes. I would probably have a difficult time trusting developers with lesser experience to take the kinds of chances that I take.

So my personal process depends heavily on me being me. I know myself, fairly well; warts and all. This means that I can trust the Principal engineer on my projects.

I manage scope. My projects are small-batch, artisanal, projects. I don't bite off more than I can chew. That said, my "small batches" are bigger than a lot of folks' personal projects. I often spend months, writing my code, so change comes almost invariably, during my projects.

Works for me. YMMV.


Honest question: how do you function in a team?

I know a bunch of very self-aware devs, but they almost all have a very double-edged relationship with team work: they love to learn and share, yet see other people as impediments or risks on their way to delivery.

How to improve on these situations?


Very well. You don’t last long in a Japanese company, if you don’t “team” well.

I lasted for almost 27 years.

I’m working on a team, now. I am doing the native app development, and I was the original author of a couple of the servers, but there’s a relatively young chap, working on another server (we’re up to 3 servers). He has some experience, but nowhere near mine. I’ve learned to work with a light touch, in these circumstances. I demand a lot, from myself, and expect his work to meet my bar, but only at the API level. I stay out of his kitchen. I make sure that he has no problem asking me any questions, or for help.

When he asks for help, I immediately provide it, with no judgment. In turn, he has introduced me to a couple of new tools and techniques that I have adopted.

Here's an example:

Before this project, I'd never used Postman. I used much simpler REST explorers. You'd probably laugh at the primitive tools I used, when developing my servers.

I suspect that you wouldn't laugh at the results, though.

He asked if I could use Postman to give him examples of using the API for one of the servers that I wrote.

I could have been a dick, and said "RTFM" (It's a very well-documented API). Instead, I learned Postman. Didn't take long. He's thrilled at the results. Our exchanges seldom take more than a couple of Slack messages and a Postman query example.

I now have experience using Postman, and will have a new tool at my disposal, for working with others. It probably won't be a regular part of my solo work (Insisting on using team tools for solo work can be a bit problematic), but it's nice for teams.

I am encouraging him to learn Charles Proxy. He's not really following up on that. It's not the end of the world, but he's missing out on some really awesome inspection capabilities by ignoring the tool (and it means that I have to be a bit more creative with my Postman examples). It may cause problems down the road, and I'll deal with them, if they crop up. I will do so in a non-judgmental way. Our relationship is valuable, and needs to be treated with respect. I treat him with respect, and he does the same for me.

In my world, that’s how we team.


Do you mind explaining why you say estimating is fundamental?

I don’t disagree that it is important, I probably wouldn’t label it as fundamental, at least in a context of a company developing its own software internally.


Even when developing your own stuff internally, you have a limited resource: your dev team's capacity to do work.

The most important question for your internal software team is still "what should we work on" and estimates are a fundamental part of answering that.

It's someone's job to figure out what things that team could do to provide the highest value. Usually some combo of product/sales/marketing/tech.

But that's only half the equation! You also want to know how much it's going to cost, in terms of those scarce dev resources. If one project would add X value but take a year, and another set of projects would add Y, Z, and A value, but each only take four months, you want to compare Y+Z+A vs X to see what gets you the most value over the next year. So the better the estimates, the more rational you can be about what work gets done. If you have no estimates, you might pick a project that will go on for way too long, and not get the value out of it that you expected.
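Illustrative numbers for that tradeoff; all the values and durations below are made up:

```python
# One big project vs. three smaller ones over the same year of team capacity.
X = {"value": 100, "months": 12}      # the big project
YZA = [                               # three smaller projects
    {"value": 45, "months": 4},
    {"value": 40, "months": 4},
    {"value": 35, "months": 4},
]

value_big = X["value"]
value_small = sum(p["value"] for p in YZA)
print(f"Big project: {value_big}; three small projects: {value_small}")
# Here the small projects win, but only if the 4-month estimates hold:
# if each slips to 6 months, only two of them fit in the year.
```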

(However, I doubt anywhere truly uses NO estimates. I've never seen a place that didn't at least have an informal level like "oh that'll be really hard" or "that'll take us a long time.")


There's a qualitative difference between the sort of informal estimates you mentioned ("it'll be hard") vs the sort talked about here—people don't treat the informal estimates as commitments!

As soon as an estimate is really a commitment it totally changes the dynamics around it and actively impacts autonomy, so my impression is that what people mean by "no estimates" is more like "no timeline commitments".


It's one of the top two or three most important things.

All teams have far more work than they can possibly do. A handful of things win; most things lose. Reasonable cost tradeoffs to pick which two or three things per quarter get worked on require estimates. And even in small startups, millions of dollars are spent on salaries per those decisions.


Fair enough. It was the word that came to mind and I might have used a better one. Important doesn't quite capture it. Time is invariably the most precious resource, so perhaps "crucial".


Joel Spolsky's team actually built that approach into a feature: https://www.joelonsoftware.com/2007/10/26/evidence-based-sch...


Fogbugz really was (is!) an underappreciated product. Jira’s become a necessary evil, but I really wish FB had managed to make a bigger dent in the market.


Something about the letter 'z' always screams out to me as a forgotten marker of 90s-00s culture. Really, any substitution for the letter 's' these days feels almost _antiquated_.


Did you ever come across a person who was < x1, like a x0.5 (they overestimated)? I know developers generally undershoot, which is why we joke about making your best estimate and multiplying it by 2-3. But does no one really ever reliably overestimate?

If the answer is never or rarely then that lends a reasonable amount of credence to the idea of just multiplying your estimates. Because you most likely should and the chance of overestimating is low.


Not in my experience


Estimation depends on the unknowns, which you can typically put down to overall task time and complexity of the task. If it is something they have done before and a short task (hours), x1 is fine. If it is something new and long (weeks), x5 may be underselling it.

Breaking down a long task to be as granular as possible then rating each component against a scale of how much experience/confidence you have is the best approach to managing risk. As a manager, if you aren't asking people to explain this, you should be.

I've never seen anyone do it (I also haven't worked anywhere massive), but Monte Carlo analysis with a tornado chart etc. would be the best way of managing this kind of risk at a complex-systems scale. It is a system used by the top real-estate developers.


I find that if you take the estimates and roll them up statistically, you get a pretty good result.

The problem is that everybody schedules to the estimate assuming it's the 90% point (project done) when it's actually a pretty accurate mean (50% point). In reality, the number is probably closer to the 33% probability because software can be an unbounded amount of time late but only a bounded amount of time early.

The primary problem is that when you feed people the 90% number up front they freak out. The secondary problem is that nobody goes back after the project was done and checks whether the 90% estimate was accurate (it generally is pretty close).
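A sketch of that statistical rollup with made-up numbers; the lognormal task distributions and their spread are my assumptions, not from the comment:

```python
import random

random.seed(42)

# Hypothetical per-task estimates in days, treated as medians of a
# right-skewed (lognormal) distribution: a task can run very long
# but only somewhat short.
task_estimates = [5, 8, 3, 13, 21]

def simulate_project() -> float:
    """One Monte Carlo draw of the total project duration."""
    return sum(e * random.lognormvariate(0.0, 0.5) for e in task_estimates)

totals = sorted(simulate_project() for _ in range(10_000))
p50 = totals[len(totals) // 2]
p90 = totals[int(len(totals) * 0.9)]
print(f"50% point: {p50:.0f} days, 90% point: {p90:.0f} days")
```

Scheduling to the 50% point means being late about half the time; the 90% point is the number that makes people freak out.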


The multiplier was the same way I dealt with estimating time budget for homework in university. I would estimate the amount of time I thought it should take and multiply by 3 and it would usually take between 2-3x at the end.

It's a little harder when the work is more nebulous and I don't have to do it for others but it's all based on personal estimations of productivity as well as understanding of the problem. It's not easy to do.


A Jira plug-in to use some sort of Bayesian system to adjust developer priors at the task and sprint level. That would be amazing.


Yeah, I've always felt like the missing part of the loop (in the context of pointing stories) is looking at point estimates and then tracking how accurate they were, _by developer_. E.g., if I throw 2 points but it turns out the story takes something along the lines of 13 points, then my estimate was pretty low and my personal "estimation multiplier" is increased. Subsequently, actual point estimates could take the estimation multiplier into account.

Of course, this is mostly a toy idea that's fun to think about, but seems pretty untenable given that it requires keeping track of time, which most developers hate. That, and there'd no doubt be incentive to game the system by padding estimates or other shenanigans.
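A toy version of that feedback loop; the smoothing factor and the numbers are arbitrary choices for this sketch:

```python
ALPHA = 0.3  # how quickly the multiplier adapts to new evidence

def update_multiplier(current: float, estimated: float, actual: float) -> float:
    """Nudge a developer's multiplier toward the latest actual/estimate ratio."""
    return (1 - ALPHA) * current + ALPHA * (actual / estimated)

m = 1.0  # start by trusting raw estimates
m = update_multiplier(m, estimated=2, actual=13)  # badly underestimated story
m = update_multiplier(m, estimated=5, actual=8)
print(f"updated multiplier: {m:.2f}")
```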


Just look at completion rates within sprint. How many points did they commit to, how many did they finish. 2 weeks is a fine unit of time.


FogBugz used to do something of that nature. I never used it, so I'm not sure how well it worked out ultimately.


Individual factors are useful, but I've also found it very effective to ask for more than one confidence level, e.g. tell me how long you think it will take with 95% confidence, and with 50%. The spread on these is informative.
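One way to use those two numbers, assuming (my assumption, not the comment's) a lognormal completion-time distribution: the 50% answer pins the median and the 95% answer pins the spread.

```python
import math

Z95 = 1.645  # approximate 95th-percentile z-score of the standard normal

def lognormal_from_quantiles(p50: float, p95: float):
    """Back out lognormal parameters from the 50% and 95% estimates (in days)."""
    mu = math.log(p50)
    sigma = math.log(p95 / p50) / Z95
    mean = math.exp(mu + sigma**2 / 2)  # expected duration exceeds the median
    return sigma, mean

sigma, mean = lognormal_from_quantiles(p50=10, p95=30)
print(f"sigma = {sigma:.2f}, expected duration = {mean:.1f} days")
```

A wide spread pushes the expected duration well above the median, which is exactly the information a single-number estimate hides.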


How much learning time did you typically need to arrive at that multiplier?


At the end of two completed projects from each new starter


Author here. I've been lucky enough in my career to hold some senior positions, and I thought I'd give a step-by-step on an approach I took with an "enterprise-scale" software project, and how I stole some techniques from university project management courses to meet with some success. Happy to answer any questions :)


I'm curious that you mention project management courses from university. How much benefit have you found them to provide in practice?

Asking b/c I've taken two courses in dev process or project management in my academic career, and neither provided substantial value or benefit to how I've led projects professionally.


In the early stages of my career, or in startup life? Absolutely not. Very little relevance; XP/SCRUM were both covered together in a single 50 minute lecture, the rest of the PM aspects were tackling paperwork-generation methodologies like the "Rational Unified Process" and "Dynamic Systems Development Model", both of which I feel like would be _hell_ if I actually had to work within.

However, there were techniques (like critical path analysis) that, as I've become more senior, started working at larger companies, and started down the senior leadership path, I've begun to see some applicability for. Not direct applications - they still need taking with a massive pinch of salt, and modifying for modern learnings in industry - but they do start to provide some value, even if it's just learning what the grey-hairs in the exec are used to seeing :)


Fully agree. There are huge nuggets of wisdom in old-school engineering. For example, PERT is something most people don't know about, but it's simple to apply in the real world. Get three estimates - optimistic, pessimistic and realistic. Weight the realistic by four, add the optimistic and pessimistic, and divide by six. It's basically a center-weighted estimate - but it forces you to think about the corner-cases and what could go wrong.

Lots of other old school stuff that we "just turning grey" people need to translate for the new kids.
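The PERT three-point calculation fits in a couple of lines (the sample figures below are made up):

```python
def pert(optimistic, realistic, pessimistic):
    """Classic PERT three-point estimate: weight the realistic case by 4."""
    return (optimistic + 4 * realistic + pessimistic) / 6

# Best case 5 days, most likely 10, worst case 30: the long pessimistic
# tail pulls the weighted answer above the most-likely figure.
estimate = pert(5, 10, 30)
```

Note how the asymmetry matters: a worst case far above the realistic one lifts the estimate, which is precisely the "think about what could go wrong" discipline the formula encodes.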


How are deadline dates assigned? Is the deadline exactly the same as the estimated completion date?

Realistic estimates aren't padded, but they still have significant probability of being inaccurate. After all, they're estimates, not information from the future transmitted to the past.


This is where the ideal meets the annoying reality of The Enterprise (tm).

I can't talk in too much detail, but in general, the deadline date was fixed through commercial contracts signed at a high enough level that engineering didn't have sight of them. The concept and commercial case was sound, but the implementation hadn't been worked out yet, when a date was set.

My strong preference would be for estimation to come first, of course, before a deadline is picked (and even then, only picked if it is really a necessary deadline), which is then based on reality, and also include some slack for unintended discoveries.


I had this "Enterprise sets the deadlines" experience just yesterday.

I delivered a spreadsheet with detailed information of how long it would take to develop each feature of the new module they wanted. It totalled around 3 months of development.

Fancy suit folks told me 3 months wouldn't do because sales promised it would be ready in 2 months. Then I was asked where could shortcuts be taken.

In the end we had to cut features that will certainly upset our client given their initial expectation.


Thanks for the article. Curious how you ensure dev outputs match up with other (external) deadlines, for example, the sales/marketing teams firing up a promotion of your new functionality.

How do you incentivise devs to hit those timeframes?


Our paths crossed on the engineering team of a certain letterbox flower company, for a short time at least. Good to see you doing well Tom, nice article.


Thanks! And I couldn't possibly comment on which company that could be...

Hope you're doing well too!


Do you think there is a difference in how US and UK tech companies approach project planning?


I've never worked for a US tech company, so I wouldn't know, sorry!


What tools and methods do you use for creating, sharing, and modifying project charts?


Depends how much of a perfectionist I'm feeling. For the initial development, a sharpie, index cards, and a whiteboard wall - or Lucidchart, because it's basically an in-browser whiteboard/drawing tool.

Once the project is ongoing and you need to account for changes, I've either done it manually (which takes an age) or handed it off to PMs to oversee using MS Project, Airtable with a custom-authored set of actions/etc, or Primavera.


> you should expect to be held accountable if your estimates are too far off the mark because you failed to do your due diligence when coming up with them.

In a perfect world, I agree.

In the real world, which has a remarkable knack for failing to live up to expectations, what I find is that companies are rarely willing to allow the development team adequate time to do their due diligence. Answers, in and of themselves, are cheap. I can give you those all day. If you want to be able to hold me accountable for their accuracy, though, then you need to be looking at my correct answer rate sheet.

For me, the magic of #noestimates is the magic of open, honest cynicism. If my realistic options are silence and blowing smoke up my boss's ass, I'd really prefer it if they would allow me to choose silence. That way we can both keep our dignity.


This brings back memories from the days of my early career (an ex-PM survivor here). I would be curious to see some data, even anecdotal, on the success of this approach. Here are some interesting statistics from the industry (not specific to software, but you can extrapolate): http://apepm.co.uk/project-management-statistics/

In my view, traditional software project management is ineffective. I would put it somewhere between the Myers-Briggs personality test and modern day astrology.


Insightful, thank you! The entire project management industry doesn't have that great a "hit rate" -- consider the budget overruns for the last few Olympics, or for Crossrail.

I'm just not sure why software projects are "special" -- if you can avoid it being a project and instead make it ongoing OpEx like, for example, GDS managed for the UK in 2016, then great, you've sidestepped that, but until the entire PM industry discovers how to improve overall project management techniques, I don't see why we'd consider our industry "above" them.


> instead make it ongoing OpEx

Every non-tech startup founder who's approached me with "how long will it take to build an MVP for my startup idea?" has got this answer. Development is an ongoing cost, a process, not a one-off capex cost.

I recommend they go the other way. Start with "how much can you afford to pay a dev team sustainably?", then work out how many devs that works out to, then work out how long your MVP will take to build based on their estimates (and estimates are not deadlines).
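As a back-of-envelope sketch of that budget-first ordering (all figures are made-up inputs, not recommendations):

```python
monthly_budget = 40_000    # what the founder can sustainably spend
cost_per_dev = 10_000      # assumed fully loaded monthly cost per dev
scope_dev_months = 16      # the team's rough MVP estimate

team_size = monthly_budget // cost_per_dev     # affordable headcount first
months_to_mvp = scope_dev_months / team_size   # duration falls out last
```

The key inversion: headcount is derived from budget, and the timeline is derived from headcount, rather than a promised date driving everything else.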

Not quite the same as #no_estimates (which I also try to argue for whenever possible), but close.


It's not special, except that for whatever reason people insist on treating random data as gospel.

How long until we cure retinoblastoma? Everyone understands there is no way to produce a meaningful timeline. There are too many inter-related unknowns - various causes, various treatment modalities, varying funding on various fundamental and applied research, no real idea if the final answer is gene therapy, nano-something, chemo, etc.

I used to develop software for a defense contractor, and it was pretty waterfall-y. But we built risks in, to an extent. Not by multiplying Sally by 1.3x, Joe by 2.7x or whatever, but you'd chart it all out, showing interconnections (x depends on Y, which depends on Z and Q, which...). And then roughly figure out risks of each of those sub-tasks going long.

The idea NOT being that you then just multiply by a weight, and ta-da, you have an accurate schedule. The idea is that you have now identified particularly risky chains of events, and now you at least have a chance of managing risk. Every day/week you'd have a risk assessment meeting. Where are we on X, Y, Z? What can we do to get X back on track? Can't, okay, can we re-jigger the dependencies, or is this a hard slip? And so on. I've never seen this done on the commercial side, and it just seems like people are flying blind as a result.
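The charting described above boils down to a longest-path computation over the dependency graph. A minimal sketch with invented tasks and durations:

```python
from functools import lru_cache

durations = {"bus": 3, "avionics": 5, "wings": 8, "engines": 4}
depends_on = {"avionics": ["bus"], "engines": ["wings"]}

@lru_cache(maxsize=None)
def finish(task):
    """Earliest finish: own duration plus the latest prerequisite finish."""
    prereqs = depends_on.get(task, [])
    return durations[task] + max((finish(p) for p in prereqs), default=0)

# The project length is the latest finish across all tasks; any slip
# on the longest chain (here wings -> engines) slips the whole project.
project_length = max(finish(t) for t in durations)
```

Which is the real payoff: the computation doesn't give you an accurate date, it tells you which tasks you need to watch in those risk meetings.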

"Waterfall is terrible" you reply. Sure. But when you are building an airplane, ya kinda need the 1553B/ARINC bus installed before you install and test the avionics. You can't attach the engines if the wings haven't shown up. You can't redesign the fuselage after the wing builder started building the wings (in general). These are hard, unavoidable dependencies, and changes are often extremely to destructively expensive (hence the endless mind numbing meetings arguing about change control).

It is just (IMO) not an unsolved problem, but unsolvable. Too many unknowns results in unpredictability. Your only bet is to manage the risks, adjust as necessary, and accept some things are just unknowable. Agile does that in one way, sophisticated waterfall in another.


I'd say that software is "special" because it's ephemeral, meaning there's incredibly few limitations on the possibilities (or changes to requirements mid-project) when compared to projects involving physical items. It /can/ be managed like a physical engineering project, but the cost and time ramp up so severely that it's not practical for most situations.


A project plan is only a point in time prediction of a project end date. It should be continuously updated as new facts are known (scope changes, additional complexity, unknown risks happen).

Obviously, as you progress through your project, your predicted end date should become more and more accurate.

A project plan, though, is a prediction of the future. I've not seen anyone that can predict the future perfectly. That's not to say that you shouldn't do it, but defective project management creates a project plan on Day 0 and then tries to bend reality to meet the plan.

*Obviously all the above is caveated with reasonableness - you do try and bend reality a certain amount to meet your plan, and you try and keep to your plan as much as possible.


You can't really do that with the Olympics though as there is a hard deadline. Once you have a fixed date then either quality or cost are going to have to give.


- Projects only become official and tracked once someone has hacked together enough of a prototype to prove it works.

- at this point all project management is pretending it takes 100 managers to land something one girl / guy got flying.

- stop project managing, stop estimating, and just start treating companies as VC firms. Hire good devs, make them care about your mission, invest in those that take off. Don't take the control away from the original devs


This is a great breakdown of your process. I've seen many quite like it and have also been asked to make these kinds of estimates myself on projects for many of the same reasons: someone made a promise to another person, signed a contract, planned a release date for something they just made up, etc.

What I find frustrating about the whole situation is that no matter what process you use for making these estimates, you have a roughly 30-some-odd percent chance of being right. It almost never has anything to do with the process you used when it does go well. If it did, estimating software projects would be trivial, wouldn't it? Everyone would use this process and we wouldn't have 60-some-odd percent of large enterprise software projects going over time and budget.

In reality people have used this very process, I'm sure, and have been in the 60-some-odd-percent. People have been studying this phenomenon since before I was a nerdy kid hacking on my Amiga.

Having a roadmap or a plan to get from A to B is good. It will need to be readjusted as you explore the problem space and navigate the waters so to speak. But the only real guarantee we can make as engineers is that we'll make progress in my experience. I'm only giving really rough estimates in the beginning and those estimates improve as we get closer to our end goal. I only start talking about actual release dates when we get close to being finished and are mostly polishing out the rough corners and have already done a few iterations internally.

If someone makes a promise they can't keep or have no business making -- in my books -- that's their mistake and they've made it a problem for everyone else.


Yeah. I'm a big fan of making the person who made the promise fix the mess. You dreamed up something and promised it to the customer? You get to go back to the customer and eat the crow. Maybe that will teach you not to do it next time.

Anything else is just enabling (or even rewarding) bad behavior. If you do, expect to get more of it.

Note well: I have never been a manager, especially not an upper-level manager over both sales and engineering. I don't know how well my recommendation will fly in the real world. (Hey, I guess that makes me the guy who just sold something without knowing if it can work...)


Estimate is but one part. The other is planning for obstacles and changes, i.e. have escalation paths and responsibilities in place:

Clear procedures to remove impediments, decision makers in the loop to ok scope/feature changes, resourcing agreed upfront, user engagement for testing locked-down, senior leadership aligned and kept informed regularly. Someone running the administrative side of the project, keeping people on target, etc. (could be a double hat, of course).

Sounds all rather "menial", but high degrees of organization really make a difference in delivery on larger projects.


Great post. A key point you don't bring up is the aftermath, even if you do deliver. Especially in non-tech companies there still remains the tendency to view these projects as "done" after the end of the project/MVP etc., with no understanding that sites need ongoing maintenance and improvements. And that this work is still considerable.


Agreed, especially for MVPs and "phase 1" projects.

At my current company, it's reached a point where we flat out reject product proposals for features or changes that would need to be hacked together for an MVP without a time commitment from all necessary stakeholders on how it will properly be implemented for phase two (iff phase one is a success). It's amazing how quickly "critical" features become irrelevant to product when they understand even half the amount of work required to properly implement them.


Knowing when to say ‘No’ is an important (and probably underrated) skill.


That reminds me of that scene in the series Chernobyl where the main scientist briefs everyone on the cleanup effort and ends with something like "The first battle is won and now begins the long war" with everyone being suitably cold in their response.

I get the same response sometimes when I talk about the long tail of maintenance at work.


Haha, love that show. There's definitely a parody in there somewhere

> Now that I know what software estimation is, I no longer need you.


> It also tells us how many team-weeks this fictional, idealised project would require [...] by adding all the estimates together.

I would be wary with just "adding all the estimates together". That's because we tend to estimate the median or the mode of the task duration, and not the average. Means can be added together, but not medians.
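A quick simulation illustrates the point, assuming log-normal task times (an assumption discussed further down-thread; the parameters are invented):

```python
import math
import random

random.seed(1)
N_TASKS, N_RUNS = 10, 100_000
MU, SIGMA = 0.0, 1.0  # each task ~ LogNormal(0, 1)

sum_of_medians = N_TASKS * math.exp(MU)               # adding medians: 10.0
sum_of_means = N_TASKS * math.exp(MU + SIGMA**2 / 2)  # adding means: ~16.5

totals = sorted(
    sum(random.lognormvariate(MU, SIGMA) for _ in range(N_TASKS))
    for _ in range(N_RUNS)
)
median_total = totals[N_RUNS // 2]
# The simulated median total lands well above the sum of medians and
# close to the sum of means -- adding median estimates undershoots.
```

In other words, if every estimate is a "most likely" figure, the naive total is systematically optimistic, because the skewed tails only ever add delay.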


> Means can be added together

Is the error distribution of task size estimations normally distributed? Because I do really expect it to have a fat tail, and if it does, you can't add means either.


There’s a variety of analyses out there and they very consistently show a log-normal distribution for release predictions. I’ve analyzed Star Citizen’s publicly available data and found the same for their task estimates. It’s very reliable.

You do see truncated log-normals, though, when the estimates are padded.


I think most of us in software engineering assume the probability distribution has a fat tail. I've seen some authors name this the "blowup factor". For instance, your most likely estimation is 10 days, best case is 5 days, and worst case is 30 days. I think adding means is still meaningful (see central limit theorem and law of large numbers).


It's exactly the central limit theorem that breaks for fat-tailed distributions.

Some fat tail distributions also break the law of large numbers, but I don't think task size estimation is this flawed.


How to make developers want to hit their deadlines with quality? Startup land.

There was no feedback loop that rewarded developers to meet the estimates. Stock options weren't an option, and I didn't want them to do a sloppy job just to hit the 'deadline'.


Honestly it's not developers' job to hit their estimates.

If we're talking about a long-term estimate, as in "this project will be finished in 6 months", it's your job in management to find a way to do this. You've got to break the goal down into achievable sub-goals, and monitor progress along the way.

Long term estimates will be wrong. Software projects generally take longer than expected, so it's up to you as a manager to anticipate this and communicate to stakeholders with the correct degree of uncertainty.

If it's an external deadline which must be met, firstly you should engineer enough extra time into the timeline to handle inevitable delays. And if at any point you feel like the timeline is unachievable, it's up to you to renegotiate with stakeholders, or adjust the scope to make it achievable.

And if you have the feeling your team is slacking off and not getting work done, honestly this sounds like a lack of leadership skills. It's up to you to have the kind of relationship with your developers so that they are motivated to meet the team's collective goals and take responsibility. That's basically all that being a manager is.


Thanks for that. I'm talking about the difference between hitting the deadline on Monday versus Friday. How to incentivise that? ie, do half an hour more work for a week, or skip the table-tennis when someone asks, etc.

As a developer, I was into making sure I hit my goals, and into being at work to work. As a manager I do struggle with how to emphasise that ownership of product, quality, and time. Why should developers care about hitting Monday with effort, instead of coasting to Friday?


You can’t and you shouldn’t. Pressing isn’t how you get more or better software engineering work done.

What can change the game is spending as lavishly as possible on the factors which make a project take extra days. Bad interfaces. Missing documentation. Sharp edges. De-prioritized bugs. Deferred refactors. Commit the greatest number of the most expensive people to work which is not connected to customers, deadlines, or business metrics, but to mitigating their own frustrations and sensibilities. Of course that is anathema to business culture so it’s rarely done.


I think you might want to ask yourself: why do you want this? Do you think it will benefit them somehow? How? Are you orienting around results you care about, or around results they care about?


This sounds like you need to be asking very different questions.


Bonus and/or equity grants via review feedback are a blunt but effective tool. You could probably reduce this to "My team is not highly engaged/motivated, and I think it would better meet my company's needs if they were. How can I improve that?"


The more projects like these I plan and execute, the more I find the need to account for external parties affecting timelines.

In the case of the author's payment system, say they go with a vendor. At that scale, there may be weeks of contract negotiations for fees, rates, minimums, etc. What if some piece of documentation is flawed and you need to have a back-and-forth with their support? What if they need time to onboard you into their systems?

It makes me appreciate Apple's supply chain mastery that allows them to deliver exactly on time because they own their whole process inside and out and demand similar rigor from any vendors that supply them. If we could imitate that in software, we could eliminate a huge source of uncertainty in many projects.


Apple also cuts scope and delays committing to a release until they know they can deliver...


There are arguably two major phases here, and it's only the second one that most folks here find controversial.

* Steps 0-2: determining "what" the project "is" (design, architecture, ontology)

* Steps 3-6: procuring and allocating resources to complete it (economics, management, politics)

It is tempting to decouple the two phases, and as a technologist focusing solely on architecture while leaving the economics up to leadership. However, social factors (the real people involved with the project) are an integral part of actually getting anything done, so I agree with the author's premise that the whole process should be viewed holistically (and ideally run by one technologist).


Well phrased, thank you! I didn't think of it in this way, but yes, that kind of phasing makes sense.


Seems like there's a structural issue at play here: the team is accountable for building a specific set of features at a specific time. High-level functionality is fixed and the timeline is fixed. If the estimate is off or something unexpected comes up, what's going to give?

Answer: everything else. Anything that the planners and executives didn't consider ahead of time. Other aspects of the work: code quality, product design, accessibility, performance, robustness, edge-case-handling... Team learning, culture and satisfaction. At the limit, the team will compromise on anything that you don't need to claim a feature is "done" with a reasonably straight face.

It's simply not a good system even when the estimates work out reasonably well—and, empirically, they usually don't.

To be fair, this isn't entirely the fault of estimates in general or this estimation approach in particular; I believe those contribute, but it's primarily a reflection of how the company and culture are structured. If you're already in a system like that and you can't change it, trying to do estimates well might be the best option forwards, but only because you're already in a corner.


I found it really fluctuates based on the team members. We definitely had 3x-4x differences in productivity between the worst and best on the team. Our estimates had to be good, as they determined the size of the sales deal. Most everyone on the team was there for years, so we knew each other well. We could give a solid estimate, but with a new team member it's more challenging.


Call me crazy, but if we're going to call these goals, "estimates," they should actually be based on real data. Otherwise we're just making up numbers.

If the relevant data aren't tracked or stored anywhere, that kind of tells you how serious the org is about making accurate estimates.


https://rightingsoftware.org/

"Based on first principles in software engineering and a comprehensive set of matching tools and techniques, Löwy’s methodology integrates system design and project design. First, he describes the primary area where many software architects fail and shows how to decompose a system into smaller building blocks or services, based on volatility. Next, he shows how to flow an effective project design from the system design; how to accurately calculate the project duration, cost, and risk; and how to devise multiple execution options."


Project management is hard. I wrote and won an SBIR award this year. A much different scale than the article, but I spent a lot of time writing the plan and budget and sourcing components and estimating software tasks. Two months in and a big chunk of that is out the window… haha. Finding connectors and other components has been a big source of pain, especially if purchasing in small quantities. Another example: I bought a consumable product which then immediately became unavailable… so do I try to make do with what I have on hand or make the decision to switch to a replacement?

Stressful, but I try to have fun. And extremely satisfying when things work out!


Doesn't this assume you have the "spec" before you start implementing it? But then how do you estimate how long it will take to come up with that spec?

If you have a very good detailed spec you have already done much of the work to make the implementation easy.

If the spec is high-level and "fuzzy" it leaves the work of "resolving the spec" to the programmer.

So trying to estimate the time it takes to code a system depends on the quality of the spec, and therefore is difficult if there is no standard on how detailed the spec is to be.


If you don't have the spec, then you likely don't have a deadline, or a "project", really -- and something like this would be the wrong choice for an approach to follow.

I'd say leaving the overall roadmap (which is all this produces, at the end of the day, if you ignore the estimation piece) fuzzy and allowing the team to work that out with users/subject matter experts is the right approach, imo.


I've started estimations by their true name: Assumptions.


For those who are new to software development, the following terminology may prove helpful:

Software Project Estimumption (or Assumptimation, the professional community is split)

Software Requirements GatherWhims

Software Requirements Analysthetics

Unit Test Coveroverage


> Engineering does not work in a vacuum, and there are commercial contracts that will be signed, at a high level, without your involvement. Deals that will have been agreed before you joined the organization.

Indeed. And they’ll have been built on lies, damned lies and wishes.


I like how he says little-a agile - what most teams use. big-a Agile I reserve for specific Agile methodologies. Most companies aren't willing to invest in the rigor and effort for big-A Agile, and so you end up with this weird-ass hybrid.


From what I've seen the biggest problem is that the estimates are done as part of a sales cycle or to get approval to spend money of some sort.

Which is a different motivator to actually getting it done.


I would divide the projected total task into 5 equal parts, and each part again into 5 equal parts, and just worry about the first part of the first; after completing each part we can revise the target.


No seriously, no.

Dive straight into debunking #noestimates but don't get to the fact that no one ever really knows what they want.


Bullshit. You can make guesses about the future, but since that is going to inevitably change, it's a guess and can never be anything but a guess. At most companies, the scope is going to be radically expanded and looking too far in the future is a complete waste of time.

Other people in other fields are held accountable for deadlines because their work does not completely change and is not severely under-specified. If it is, then they are also just guessing.


I think the author addressed that explicitly as a valid and true software-engineer perspective, right at the beginning of the article; and explained that such software engineers, and the products they build, usually slot into a larger company's ecosystem (as, by strict math, most IT team members will work at a large company as opposed to a startup), full of people and leaders and departments and teams and projects that have plans and deadlines and dependencies, which are also valid and true.

I've had either the luck or the misfortune to "flip a switch" from two decades of being a techie/architect to being a mid-manager, on basically a specific date as opposed to over the years, due to the project's needs; and it's like that B&W picture of two faces and one vase in the middle. Both perspectives are true, even if opposing and contradictory.

Good business lead will understand software engineer's perspective - even if it's not their primary view of the world, they can squint and catch a glimpse as needed. Likewise for good software leads.

Obstinate unwillingness to see or ascribe merit to other perspectives puts a ceiling on everybody's progress.


> it's a guess and can never be anything but a guess

Of course it's a guess. The question is how to make more accurate guesses.


> The question is how to make more accurate guesses.

I think the question should be about how to maximize the net return on time dedicated to development, which may mean spending less time on (and investing less reliance on) estimates rather than expending unbounded effort improving the quality of estimates.


What's the value to Sales/Marketing/etc of knowing what's coming when? How do you estimate that? It's estimates all the way down!


I have to admit I popped a monocle when I saw the estimate for what is described as a rudimentary ecommerce website without even basic features like authentication and payment come down to 43 team-weeks, or in other words 12,040 team-hours. Even at one person per team being billed as $20/hr this works out to almost a quarter million dollars.

I understand the numbers are given as an example but I think the problem is that the scale of the task hardly justifies the complexity of the planning and using "teams" and "weeks" as sizes instead of "developers" and "hours/days".
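For anyone checking the arithmetic, the parent's figures imply a seven-person team (the team size is implied by the numbers, not stated):

```python
team_weeks = 43
devs_per_team, hours_per_week, rate = 7, 40, 20  # 7 devs is implied; $20/hr

team_hours = team_weeks * devs_per_team * hours_per_week  # 12,040 hours
cost = team_hours * rate                                  # $240,800
```

Which is where "almost a quarter million dollars" comes from, even at an unusually low $20/hr.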


The thing with software is that it takes two weeks to get to 90% finished, then two years to get from 90% to something 100% production-ready. Having worked over 5 years with ecommerce software, I could write a fully working e-commerce site in two weeks, but then the marketing team want to have it all integrated with 50 different tracking systems, and support 10 different payment systems in 30 countries, and have the correct VAT handling as well as comply with local laws, integrated shipping systems, and of course 30 different versions of the site in all languages. And integrated with business software. And every month the marketing team want to start a new campaign: buy 3 and get 20% off, buy only milk and get 30% off, buy a red shirt and get 25% off, discount codes, discount codes that include shipping, etc. Did you know that different products have different VAT in different countries, and within the same country as well? And shipping should use the average VAT from the products ordered - depending on country...


Sure, but the article was about a linear process with the final product understood to be an MVP, so the remaining 10% (which really feels more like Zeno's arrow, i.e. the closer you get to 100%, the more the time necessary to complete the remaining work approaches infinity) won't be part of the calculation, and the 90% solution is the first "definition of done".

The article clearly points out discount codes as an example for a feature that could be moved out of the initial spec. And the spec itself is clearly missing various features necessary for the app to be even remotely usable.


This has been the case for almost every project I've worked on. It starts out as super simple, and then the edge cases roll in, and then the feature creep, and then the bugs start piling up, and now it's been years on a "just a couple months" estimation.


> I popped a monocle

This is why I don't bother with estimates any more - not because I think it's impossible, or even necessarily too hard, but because I've observed (consistently over a 30 year career) that it's pointless. Even if you could estimate with perfect precision exactly how long a software task was going to take, they would just push back, say, "that's too long" and argue with you until you told them what they wanted to hear.


Early in my career I had this. I was given a task and a deadline, and told to prepare a plan. I estimated the plan, and it came out to longer than the deadline. I presented this, only to be told "but that's longer than the deadline! go fix it!". So I shortened all the estimates and it fit the deadline. When I presented this I was shouted at for shortening all the estimates, and told to go back and do it properly.

Around this point the penny finally dropped that software estimation is a political process, not a technical process.


That's really tough, and I'm sorry you had to deal with it.

I was lucky in that I was dealing (in both cases where I've run similar flows to this) with above-board exec teams who wanted the best quality information I could give them - even assuming that estimates are just assumptions - even if it meant having some tough conversations about scope or headcount.


Like I said, it was early in my career. I now know how to handle this - ask more questions and work out what the real objective is. Also refuse to let people shout at me ;)


Unfortunate; most of us have experienced such situations separately, and it sucks when they come together in a "rock and a hard place" situation.

Good managers / business leads / execs / champions CAN be reasoned with, as long as you find common language, understand their priorities, and provide alternatives that meet their underlying goals (all of which frequently falls on the presenter). E.g. in your situation, it may be that the unspoken expectation was to cut scope, add resources, or find some other way to meet the deadline rather than just changing the estimates; or something completely different.

Occasionally, though, you're, as you say, stuck between other people's indecipherable politics. I find in such situations I'm most comfortable telling the most honest truth I can and working hard, openly and explicitly, to understand, ask about, and bring to the surface everybody's actual critical goals.


> CAN be reasoned with

Sort of. What you really end up doing is making enemies and burning bridges to defend a software estimate that ends up being completely unrealistic anyway. Sure you can "win" an argument with a "stakeholder" if you fight hard enough, but you'll pay for it later. They want to hear what they want to hear.


> understand/ask/bring to surface everybody's actual critical goals

This, exactly. Now I'd be working out what the actual objective is and how I can achieve that.


I've seen some managers who like to turn your estimates into a stick they can beat you with. They can never be satisfied.


Instead, we as an industry prefer to adopt magical thinking where we pretend that software projects aren't hard and lengthy and risky (even when the domain and tech are well understood), then we act surprised when the project "overruns".


This is extremely difficult to convey to non-developers, though, because they often look at software as a house and think "there are tons of houses (ecommerce sites) built every day and it's down to a science - how hard can a house (ecommerce site) really be?", but miss the fact that all houses have pretty much the same structure - foundation, framing, wiring and plumbing, and a covering of paint and a roof - which is largely kept consistent and well implemented by building codes.

I can only speak from my own experience, but every single web application I've worked on has had a wildly different structure than the others and the only consistent thing between them has been endpoint routing mechanisms.


Then this is, IMO, a problem with the software developers (in the broadest sense, not just programmers). Most e-commerce sites, in fact, DO share the same framing, wiring and plumbing as all the others. This is why you can buy an e-commerce package off the shelf and customize it.

If the business is a relatively generic e-commerce store, it should usually not be building bespoke e-commerce software. Unless, of course, there is some technology feature that will be your competitive advantage/differentiator. But let's be honest, that's pretty damn rare in the space of e-commerce stores.


I'm not saying that this isn't a good estimate for "doing it properly". I'm just saying that no client will let me bill half a million dollars and spend the better part of a year to build them a bare bones ecommerce website from scratch that doesn't even have invoicing or an admin interface.

I'm mostly just saying if the author is able to sell what amounts to half a broken Woocommerce installation for half a million dollars (assuming it's at least a team of two billed at $20/hr), I must be in the wrong market.


I think I would have popped a monocle too, had that been the real example and estimate from a team!

I tried to think up an accessible example that didn't require too much context on the part of the reader, so obviously, as you correctly point out, all the numbers are made up, and I'm trying to use it solely to demonstrate the workflow :)


> all the numbers are made up

Well... have you actually applied this process successfully? If so, wouldn't you have some actual numbers to point to from a past project? Names and details changed a bit to protect the innocent, of course.


I have, yes - or I'd feel a bit like a charlatan writing about it!

The problem with the real world examples is the business domain, which was hyper complex and the specific "pieces" of work I described wouldn't have been easy to grasp for most not familiar with the esoteric side of fintech that the project took place in.

So I went with a simpler, albeit contrived and more accessible example.


Were they monocle-popping results?



