It’s Time for ‘Maximum Viable Product’ (debugger.medium.com)
188 points by herbertl on Dec 4, 2022 | 134 comments



> And you can understand why it happens! Developers love to develop; marketers want new stuff to market. Adding new features, shipping new code, is fun and exciting. Merely maintaining a successful app? Bleah. So dull. That drives a lot of feature creep, right there.

> The thing is, we can usefully flip this concept on its head! What if more developers developed a sense for the “maximum” number of things a product should do — and stopped there?

I'm sorry, but the author entirely misunderstands the reason for feature creep. It generally has nothing whatsoever to do with "fun" or developers "wanting to develop" or marketers "wanting new stuff to market".

Rather it has to do with segments of customers genuinely needing a given feature, and they won't buy the product unless it has the feature.

There's the old quote about how 80% of the people only use 20% of the features -- but then the punchline is that they're all using a different 20%.

That screenshot of all the Word toolbars? Guess what. There are different user segments that actually use each one of those toolbars. Yup.

So the idea that there is a "maximum" number of things a product should do, that can be known in advance, is sheer rubbish. If you're a business and you want to make money and you'll profitably sell to more customers if you can build in a new feature, then you do it.

Why would any company restrict itself to an arbitrary "maximum" and leave money on the table...?


As an example, I offer up a review of an open-source Word alternative from the early 2000s.

The reviewer liked the product well enough, but slated the fact that it didn't have a word counter. Now I, for one, have never cared about the number of words in a document. But, it turns out, journalists tasked with writing a 2,000-word review _really_ care.

With great popularity comes great variability, and more "corner cases" matter.

Incidentally, the solution is not fewer features, but better UI. Notice that's a very old screenshot of Word. Word today looks very different, has the same (or greater) feature set, and a much more streamlined interface. The much-hated-on ribbon is actually a better interface than endlessly deep menus and a million toolbars.


How can you claim a user interface is 'better' while in the same sentence referring to the fact that most people hate it? People don't know what's good for them?

I'll have my Word with endlessly deep menus, combined with discoverable keyboard shortcuts and Ctrl-Shift-P, please!


I suspect people hated the ribbon when it was introduced because it was an unexpected change, forcing users to learn something new when they just wanted to be doing actual work.

I don't think most people hate it any more.


The problem with the ribbon is that discoverability is extremely slow. If I'm looking for a piece of functionality, it's much faster to read through text menus looking for related words than to inspect a similar number of unfamiliar icons to figure out if they are related. Its keyboard usability is also not as consistent (e.g. Alt+F, O to open a file from almost anywhere in the app).

Once you know where everything is and what the icons mean, it's probably better than nested menus for use with the mouse. Since that describes most of the person-hours in the software, this is an improvement on average. New users and those who lean heavily on the keyboard still aren't going to like it, though.


I still hate it - it takes attention, and I want to give the app as little attention as possible, so I can concentrate on what I'm writing, not the process of typing.


double click it


That is twice the work


Back before the ribbon, what was a good way to temporarily hide all the toolbars at the top until I toggled them back on?


I'm surprised the passion with which I hate it hasn't ceased a bit, though it's only when I use Word, and I forget about it when I close it. Some things still don't make sense to me; it feels dumbed down and more inefficient.


Since this is turning into an anecdote collection: I was sceptical of the ribbon UI initially, to say the least, but it works pretty well for me.


I still hate it, because it's a unicorn UX that doesn't follow any pattern other software uses, and IMO it is not an improvement in any way over the other standard options (normal menus). There's no special problem that Word faces that other word processors do not, so there's no reason for a screwy UI that's impossible to decipher unless you've already memorized it.


So I hated the ribbon when it was introduced mostly for Excel. Because I did maybe 6-8 spreadsheets a year, but I had been doing that for at least a decade. It took a long time to build new muscle memory for the location of the small bits I use. Now? It doesn’t bother me, but at the rate I use it, it took years to get there.


I still haven’t gotten there. Drives me crazy any time I have to use an Office app. It’s rare enough that I cannot ever get work done.


> I don't think most people hate it any more.

Still hate it. Most apps did not follow Microsoft’s lead and add a ribbon to their app.


I think most people were just used to the old UI and resistant to change, so they complained about it. Yet, if you asked someone completely new to computing which UI they'd prefer, they would choose the ribbon.

If anything, placing the Styles menu front and center in the Home tab with live preview has made document formatting very discoverable. I used to receive so many Word docs with manually adjusted fonts and line spacing. Not so much after the ribbon was introduced.


> Yet, if you asked someone completely new to computing which UI they'd prefer, they would choose the ribbon.

How do you know that? It seems like a made-up thing; I've seen it stated by multiple people, and it's probably not true. If you took someone new to computing and showed them the old UI idioms, they'd likely prefer those, since they make way more sense.


Yes, to be fair to Word, those screenshots were taken with all the optional toolbars turned on.


Not all users know what they want, unfortunately.


It's not only a very old screenshot, it's also a screenshot you'd never get by default; you'd have to manually activate all those toolbars.


>> Incidentally the solution is not fewer features, but better UI.

I completely agree in principle. The problem is IMHO that the best UX is almost no UI.

Microsoft UI is, IMHO, essentially industrial automation: select your data, then push a button (or menu item, or ribbon command) to cause something to happen to your data. Apple attempts to encourage more direct interaction with data. The best example is pinch-to-zoom. Remember when maps used navigation buttons to pan and zoom? Direct touch manipulation is better. This is very hard to do with a large feature set, though.

One solution is hot-keys, but they do require learning and are limited in number. Context dependent hot keys can also be confusing.

In SolveSpace we have almost no UI, and hotkeys are critical to productivity. We avoid context dependence partly through consistency: the constraint solver is always present, from 2D sketching to building an assembly, so those function keys are universal.

UX is hard, but also distinct from UI.


Also wouldn't most devs much rather fix bugs in an existing app than add features?

Piling more chairs on top of each other to get features out the door can be demoralizing. You know it's going to collapse under its weight eventually, and you're just speeding the company along to The Big Expensive Rewrite.

But fixing bugs? An actual chance to make things better? That is satisfying.


> Also wouldn't most devs much rather fix bugs in an existing app than add features?

No, for a few reasons: 1) it's often tedious work 2) often the bug has been around a long time and nobody's really noticed or cared, so how important could it be? 3) organizations rarely reward this work.


I quite like fixing bugs. It's a different mental challenge to truly understand what is going wrong, often something that's really subtle.

While individual fixes are often unnoticed, the overall effect is profound. Users are happier, support staff are more confident, and programmers are able to build the next layer on a firm foundation.

I agree that bug-fixing my own code is more enjoyable than bug-fixing other people's code, and I agree that most programmers resist the former and hate the latter.

But for me, bug-fixing foreign code, is one of the truest tests of skill.


I love bug-fixing other people's code for some reason. I often get nerd-sniped by another team in our slack talking about some bug they've encountered in their project(s). I kind of wish that could be my full-time job. I suppose that's what some fields of consulting are.


It's only tedious work if what you enjoy is cranking out another CRUD feature.

Personally I prefer the thinking, reasoning, tracking down, supposed 'tedium' that comes with bug fixing.


At essentially every company I've worked at the devs were asking for more time to fix bugs, improve performance, etc.

Sure, bugs are boring, but most of the time the new features aren't terribly exciting either.


Definitely not true. From a career-optimization standpoint, every single major tech company's leveling guideline that I've seen (including my own company's) has criteria of "scope" and "impact". No one ever got promoted for fixing bugs.

Even when it's time for your next job, you can't very well put "I fixed bugs for a year" on your resume and expect to stand out.

Then there is answering behavioral questions at the interview where you have to talk about accomplishments.

On the other hand, I can’t pretend to know what most developers on the enterprise dev side go through at interviews. Even though that’s where I worked most of my career until 2020, I targeted small companies who were looking for “adult supervision”.


Developers have managers that tell them what to do. Developers can at most decide whether to accept a job in a maintenance or in a development team. Sometimes they just accept the first decent job, especially the first one out of school.

And managers care about their career and the size of their team is a proxy for their degree of success. If a company has 100 developers to create new software and 10 to bug fix, they want to be in the development team. If it's 100 to bug fix and 10 to build, it's the other way around.


The fallacy here is that it’s somehow necessary to divide into feature and bug fixing teams.


It's not necessary but I saw big companies with different budgets for development, (manual) testing, maintenance, operations. The team for testing is definitely separated from the development one, different managers, goals, etc. Operations more like so. Maintenance and bug fixing could be done by the same people but there is always tension at management level about whether to prioritize new features or fix problems. There are different internal customers behind those requests and turf wars are fought every now and then. Different teams and different budgets solve part of that problem but of course the fixes of some bugs can become new features on their own and more turf wars follow.


Haven’t met a single developer who doesn’t prefer feature development to bug fixing.


(Assuming you do this professionally) that doesn't sound great for the makeup of your team.

I think I like bug fixing, performance fixing, and 'design' (as in db, infrastructure, code structure, refactoring - not UI) most. I don't mind new feature work, mainly because it typically involves some of the latter, I suppose. I really don't like total greenfield work, though, where you have to faff about with all the initial config (which, assuming you don't start a new project every day, you haven't done for ages) - especially anything to do with JS, where you also have to work out how to do it without their 'helpful' CLI tool, or how to combine a slightly different 19 tools and frameworks than the example you found.


Nice to meet you. I enjoy nothing more than figuring out a really tricky bug. I like the puzzle.


I also enjoy the detective work, especially for performance issues. I find I don’t like actually addressing the problems once I solve the mystery, however…


Does this count as meeting? :)


>> But fixing bugs? An actual chance to make things better? That is satisfying.

I don't think fixing bugs is the only satisfying thing. Sometimes, the need for new features arises only after users use your product for some time.

For example, PriceTable lets users generate professional-looking estimates quickly. After about a year of using our product, some customers said, "we use it every day and have sent thousands of estimates to our clients. Is there a way to run reports to gain sales analytics insights?" So we added that feature too. And I found it satisfying. :)


It depends. A lot of devs, from my observations, take no satisfaction in the "last 10%". They stay for a project during the fun, risk-free green field development phase and then peace out.

These engineers never really learn from their own mistakes or the mistakes of others. They just jump from one startup to another. "What we need here is another microservice!"


> So the idea that there is a "maximum" number of things a product should do, that can be known in advance, is sheer rubbish

I disagree; I think the Unix philosophy solves the problem. You build simple tools that do one thing, and do that one thing well, but you make it easy for the tools to work together, so you can have pieces that each do one simple thing while still serving complex needs.

When new needs come up that aren't addressed by current tools you simply put together another simple tool that does that.
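The composition idea above can be sketched (a hedged illustration in Python rather than shell; the function names are hypothetical stand-ins for small single-purpose tools):

```python
from collections import Counter

# Each "tool" does one thing, like a Unix filter.
def words(text):
    """Split text into tokens (roughly what `tr -s ' ' '\\n'` does)."""
    return text.split()

def lowercase(tokens):
    """Normalize case (roughly `tr A-Z a-z`)."""
    return [t.lower() for t in tokens]

def count(tokens):
    """Tally occurrences (roughly `sort | uniq -c`)."""
    return Counter(tokens)

# New needs are met by composing the pieces, not by growing any one tool.
text = "The cat sat on the mat"
print(count(lowercase(words(text)))["the"])  # 2
```

The point is that none of the three pieces knows about the others; a new need is met by adding or rearranging small parts.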


That's also how Word solved the problem with the toolbars. It hid the functionality using the ribbon and added plugins. Unix is a monster in terms of features if you look at it as a whole. It's just modular so it can solve all problems.


I don't think this adequately addresses the counterexample of the Word toolbars. Sure, there's Excel, but what if I want the tabular data to live in my Word document? Now the Microsoft engineers are building a whole secondary table editing system into a word processor...


This is one of the things that motivated the creation of OLE and COM, so that you could embed an entire Excel spreadsheet into a Word document.


If everything is modular, you can copy any feature between the various parts of your office applications. You've got a plugin doing one thing, and a host whose one thing is bundling all of those one things together.


This could just call Excel with COM


The Unix philosophy might make sense for command-line apps (though even there, if you look at any manpage, you find that they usually do more than one thing, including things you could easily do by piping the output to another command), but what does it mean in the context of a word processor? You want another app to count the number of words in a document, for example?


Have you looked at the help of some of these so-called simple tools? Pages and pages of options that you can configure. Not so simple.


Could this explain why the average person doesn't directly use Linux?


It would be a start.


That just moves the complexity somewhere else (writing shell scripts, for instance). Not a real solution.


Not true.

Let's say you take a mega-program and break it into 100 programs.

Each now becomes more stable, due to simplicity. Yes, it really does.

But my point is, when you use those individual things, you use a random allotment of 3 or 4 of them. Not all 100.

Thus, the stringing-together bit is far, far less complex.


> Thus, the stringing-together bit is far, far less complex.

This is the statement I disagree with the most.

The problem is that combining different programs, that aren't strictly written to interoperate in a specific way, gives you an order of magnitude increase in the complexity. Sure, running find and grep together is common enough, but how about running {obscure_command_rarely_used} through {other_command} through {other_other_command}. There are always corner cases that can crop up, and even before we get to corner cases, just understanding how to fit the pieces together correctly is not so simple.

One cohesive program designed to all work together sometimes just makes more sense, especially when we're talking about complicated, high level actions, as opposed to the more common low-level actions of unix commands. (Or well, maybe high level and low level is the wrong way to put it - maybe the lower abstraction stuff in Word vs. the higher abstraction stuff that Unix tools deal with.)


> One cohesive program designed to all work together sometimes just makes more sense

Yes, your example of find|grep is perfect for this. Sometimes (not often) I have a situation where a simple pipe combination doesn't work and I have to fall back to awk, which would be the single cohesive (and more complicated) program.


This trivially becomes very fragile. Each of those 100 programs now doesn't just do stuff, but is held to a contract. A contract that it's well possible nobody intended to provide, but yet organically happened anyway.

And so you find that a small change somewhere like deprecating an option, fixing a typo, or adding extra data to the output makes the integration explode horribly in a confusing fashion.

There's also the fact that plain text is a horrible serialization mechanism. Every program has to parse text that's mostly made for human consumption, and deal with things like imprecise boundaries and arbitrary limitations per tool and per system.

Eg, /etc/passwd is a nice simple format, until you consider that somebody might want to use a ":" in a field, or to add more fields, or to store multiple lines of data. And all those quirks are specific for that particular file and may be handled subtly differently by other parts of the system.
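A toy illustration of that fragility (a hypothetical parser, not real system code): a naive colon-split works right up until a field contains the delimiter.

```python
# Naive /etc/passwd-style parsing: fine until a field contains ":".
def parse_passwd_line(line):
    user, pw, uid, gid, gecos, home, shell = line.split(":")
    return {"user": user, "uid": int(uid), "gecos": gecos, "shell": shell}

ok = parse_passwd_line("alice:x:1000:1000:Alice:/home/alice:/bin/bash")
print(ok["user"])  # alice

# A ":" inside the GECOS field yields 8 pieces; the unpack blows up.
try:
    parse_passwd_line("bob:x:1001:1001:Bob: admin:/home/bob:/bin/bash")
    print("parsed")
except ValueError:
    print("parse failed")  # this is what actually happens
```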

And internationalization is an extra bit of fun, which ensures random bizarre bugs that use ',' instead of '.' as the decimal separator, or just emit text in some language that isn't English.


The object model of PowerShell is in my opinion really great for, at least, mitigating the text munging issue. The others are still there though.


Indeed, I think PowerShell deserves a lot of credit for taking a new look at this idea and doing it better.


If this were true, non-technical people would be stringing these together instead of using products created with the end user in mind.


Figuring out which of the 100 I use and how to get them to work together is much more complicated than finding the feature in an omnibus program that already does exactly what I want.


Customers don’t want to glue things together they want whatever solves their problem efficiently.

Integration and ease of use are features in themselves.

The best of both worlds would be building things that are modular from the developer's perspective but integrated from the perspective of the user. Few will pull that off, though, and when they do, the modular systems become complex and expensive bespoke setups (typical of e.g. SAP, Oracle).


> You build simple tools that do one thing, and do that one thing well

You're right, systemd does fail in this.


Does it? If you mean 'all things under the systemd umbrella' then sure, but if you think in terms of systemd (core), systemd-networkd, systemd-homed, etc. isn't it pretty much as 'one thing'(s) as it could reasonably be?

And the umbrella just seems arbitrary - GNU, Mozilla, X11, anything that's a package group, etc. are also failing.


It's not the only cause but empire building[1] is a very real phenomenon. Maximum viable features is a very interesting concept but it's just that, a concept, nobody says you have to stick to it, it can just be a tool in your toolbox to limit scope creep. One thing I don't have too many of is tools to limit scope.

> the punchline is that they're all using a different 20%.

You could try creating different products.

1. https://www.investopedia.com/terms/e/empirebuilding.asp


The problem is that each additional feature imposes costs on the users who don't need it. An example would be JIRA. JIRA is designed to have every feature possible. But as a cost, it is hellishly slow and hard to find anything in.

Each additional 20% of the features you support cuts the UI and technical effort that can be put into the other 20%s of features.

The end result is the standard big corporate app that feels heavyweight despite doing what feels like a pretty simple thing.

It makes sense for companies to do it, because the cost is borne by the consumers.


>"But as a cost it is hellishly slow and is hard to find anything in"

There are examples to the contrary. CAD products, I think, are way more complex than this JIRA abomination, but are not slow at all. And for what it's worth, I went from knowing nothing to producing the complete design of a relatively complex piece of equipment (main assembly, all individual parts, etc., in a form accepted by the factory) in less than a month using SolidWorks. That says something about the quality of this piece of software. For fun, when I had nothing to do, I tried to repeat the design in FreeCAD: never fucking again.

So I think the problem is not the amount of features (the more the better). It is just very hard to design proper ergonomic GUI and workflow.


This definitely applies to AutoCAD, though! A lot of the better products are perhaps a reaction to the bloat of older ones.


> JIRA is designed to have every feature possible.

If only that were true. There are several long-requested features that they refuse to implement on "agile purity" grounds (such as the ability to split a story while also distributing points, so you can mark it as partially complete in a sprint, or at least leave behind completed subtasks in the old sprint).


A counterpoint is IntelliJ: a pleasure to use, with more features than Eclipse.


> But as a cost it is hellishly slow and is hard to find anything in.

Fun fact: a single Jira issue page could download up to 100 MB of data. Not joking. Hope they've fixed it by now.


Yep, came here to say this. The article is incorrect in its diagnosis, and thus in its prescription.


I liked some of the takes on building a Minimum Lovable Product (MLP) instead. It retains the focus on why people would want to use the product, with your vision for it kept central.

https://news.ycombinator.com/item?id=30799083


Scope and feature creep has nothing to do with customers. It happens in all creative endeavors, most of which do not have anything like customers. It happens in art, music, hobby programming, event planning, etc etc. Everything.

If you sit down to try to write a program that no one will ever see but yourself, you will have to fight against feature creep. It comes from the creators themselves. It's basic human nature to find it easier to desire things than to restrain oneself.


> Why would any company restrict itself to an arbitrary "maximum" and leave money on the table...?

Because when there are too many options and features, users get overwhelmed.

Also, feature creep has nothing to do with developers wanting or not wanting; those are product decisions.

I saw this in the past with an e-commerce site: we had a good portal, then corporate stakeholders who needed to justify their salaries started shoving in more and more stuff. Now what was a simple search-find-pay flow has become interrupted and overwhelmed with features and information.


>Rather it has to do with segments of customers genuinely needing a given feature, and they won't buy the product unless it has the feature.

There is another reason feature creep happens for certain categories of products: regulation. If a law is passed mandating that industry X must support Y, that feature will be added to any product wanting to sell into the market that mandated it.

Any analysis that misses these kinds of real-world scenarios is not that interesting.


I agree with you on your points but the quote even contradicts itself here.

Adding new features is fun and exciting, but feature creep is bad?

What is feature creep if not new features? (Well, new features plus the idea that we're adding features faster than we're thinking through their implementation, and/or before the app does what it set out to do in the first place.)


I've been exploring this concept a bit at my startup for our B2B offerings. Where I've landed is a robust set of feature toggles that Sales and Account Management configure. Within the confines of your Microsoft Word toolbars analogy: since the software we offer is SaaS-like and delivered via a web browser, this effectively means we can "hide" the toolbars that the client isn't interested in.

This is all pretty early days, but my hope is that we can iterate on the core "have feature toggles" concept to the point where we can turn functionality and feature accessibility on across many verticals. I hope that one day we can internally configure things on a per-customer and per-user-persona basis. I also hope that one day we can expose some of those switches to admins and/or everyday users, possibly with paywalls or other strictures.
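A minimal sketch of that per-customer toggle idea (the `FeatureFlags` class and flag names are hypothetical illustrations, not our actual code):

```python
# Hypothetical per-customer feature toggles; defaults hide optional "toolbars".
DEFAULT_FLAGS = {"reports": False, "bulk_export": False, "editor": True}

class FeatureFlags:
    def __init__(self, per_customer_overrides):
        # Overrides would be set by Sales/Account Management per client.
        self.overrides = per_customer_overrides

    def is_enabled(self, customer_id, feature):
        customer = self.overrides.get(customer_id, {})
        return customer.get(feature, DEFAULT_FLAGS.get(feature, False))

flags = FeatureFlags({"acme": {"reports": True}})
print(flags.is_enabled("acme", "reports"))    # True: enabled for this client
print(flags.is_enabled("globex", "reports"))  # False: hidden by default
```

Per-user-persona configuration would just add one more lookup layer on top of the same pattern.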

We'll see where it all lands :D


We've been doing this for 25 years in our desktop software. Every new feature is behind a checkbox on a "features" screen. This screen is in the program, so users can turn things on and off, but typically it's set by one of our staff during the installation phase [1].

The upside is a clean interface which helps during training, support, and day to day use.

The primary downside is discoverability. Occasionally customer needs change and some assume "the software doesn't have that feature" rather than call us, or check online.

That's a problem best solved with good communication, but that can never be perfect.

[1] this is in the B2B space, and has a proper sales, install, support cycle.


It's not just about the users. As the software becomes more complex it's hard to change/iterate on, hard to find bugs/fix things, hard to reason about etc. And the difficulty seems to rise non-linearly.


How does that scale for enablement, training materials, and documentation? It will be very confusing for users if the material shows things that are not in their UI.


tl;dr: giving users the ability to show / hide some features.


How about maximum viable product definitions for go-to-market? I've worked with PMs that have engineering build, build, and rebuild, delaying release until they feel it is "good enough", or, what I think is really going on, "perfect". Then it gets into the hands of users who don't care about most of what the PM felt was important or necessary. In those cases (what I call "Steve Jobs syndrome") it would've been nice to say the product was limited to doing x, y, and z for the release.


I forgot to mention, the point is to limit the cost of the product before getting valuable feedback that ought to direct the future of the software. It's also not fun toiling away on a product the end users never use because it only made sense in the PM's mind.


(unsarcastically) I'd be pissed if a computer manufacturer placed an arbitrary limit on what I should be able to do with my computer.


Because they might get outcompeted by 5 different products that aren't 80% bloat?


"Bloat" means what exactly: Download size? Slowness to start? I suspect you really mean complexity of the interface, but with careful design its possible to hide the complexity and provide various easy entry modes.

Anyway no one uses an alternative to MS Word / Office 365 because the alternative has fewer features.


One of the reasons that we have feature creep is that many (most?) software teams either don’t know to, or don’t know how to, dissect feature requests from customers.

Imagine one of your customers requests a feature, e.g. "add a button on form X that transmogrifies a foo into a bar and then copies it to form baz".

The request is communicating multiple things and blurring them together. First, your customer is telling you that they have a business problem that needs to be solved, eg converting foos into bars and making them available to other business processes. Second, they are telling you that in their current process in their business, which likely is centered on particular people with particular personalities, the best solution for them (“best” here means “least amount of stuff I have to change and grief I have to deal with”) is a button that does the conversion and then takes the next step in their workflow.

If you build the feature exactly as requested, it will likely only delight exactly one customer.

On the other hand, if you decompose the request into its different constituent parts, and then build the unique features in such a way it can be composed into arbitrary workflows, you are likely to delight many customers.

In my example, maybe you build some kind of library function that converts foos into bars, and then have a general way to compose it. If I were writing Microsoft Excel, I might implement a ConvertFooToBar() function and make it available in both formulas and in script.

In other words, dissect customer requests carefully, implement the uniqueness and throw away customer specific workflows in favor of composability. In fact it’s a lot like the Unix philosophy.
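As a hedged sketch of that decomposition (`foo_to_bar` is a hypothetical stand-in for the genuinely unique conversion logic):

```python
# The unique piece, exposed as a composable library function,
# analogous to a ConvertFooToBar() available in formulas and script.
def foo_to_bar(foo):
    return foo.upper()  # stand-in for the real conversion

# Customer A's requested workflow: a "button" that converts and copies to baz.
def convert_and_copy_to_baz(foo, form_baz):
    form_baz.append(foo_to_bar(foo))

# Customer B composes the same primitive into a different workflow.
def convert_batch(foos):
    return [foo_to_bar(f) for f in foos]

baz = []
convert_and_copy_to_baz("foo", baz)
print(baz)                        # ['FOO']
print(convert_batch(["a", "b"]))  # ['A', 'B']
```

Only the thin workflow wrappers are customer-specific; the conversion itself serves everyone.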

There are other kinds of feature requests that may not initially seem to fit this model, like security features. One I have gotten a lot (and made myself) is "federated login". You still need to dissect these to separate the business problem from the workflow; e.g. the business problem might be "I don't want you having a list of all my users and their passwords". Again, you need to aim for composability while solving the business problem.

Separately, at a previous job, we stopped using the term “minimum viable product” because such MVP products are almost invariably shitty. Instead, we adopted the term “minimum lovable product”, implying that the product must not only be useful and fit for purpose but must also not be onerous to use; you don't want customers to discard it as soon as anything else comes along that isn't as painful.


That's the difference between having features people use and having features people think they will use: communication.

Oftentimes customers come with feature requests, but dev/biz fail to translate those into problem resolutions.

"I want it to export all my data for this time period" should be followed by an in-depth discussion of why that's needed, who's going to perform the task, how often, and under which circumstances.

A bad translation more often than not leads to bad UIs, half-features, and hours upon hours of wasted dev time and customer-support time.


Simplicity is really difficult.

The more powerful machines have become, and the larger the screens we work with (I am writing this on an LG 49" ultrawide), the easier it becomes to jam a bunch of stuff in there.

A good exercise for Keeping It Simple, Stupid (KISS), is to write firmware for fixed displays, or mobile device software (although mobile devices are starting to look more and more like full-sized computers, these days).

I learned religion from Scott Jenson's excellent The Simplicity Shift[0]. It was written before the iPhone.

Most of the tips in there, could be applied to desktop/laptop computer software. I used to use his "triage" methodology, in the days that we wrote host software, at my old job.

[0] https://jenson.org/The-Simplicity-Shift.pdf


Thanks for your comment!

This book is over 100 pages long -- any chance you could summarize some of the most important ideas, like the "triage" methodology, so we can figure out if we want to read it?


Well, it's hardly more than a pamphlet. The book is, physically, quite small, and each page is done in a fairly sparse font. Took me about a day and a half to read. It's not a technical book. It's aimed at Marketing and Product Management folks. Jenson isn't a tech. He's a product designer. One of the best in the world.

But the main gist of the book is about prioritizing features. Remember that it was written, when the peak of phone technology was the Motorola RAZR, with its tiny screen, and people were writing “m.” mobile sites in WML.

It recommends a pretty intense selection process, with a lot of bathwater being thrown out, and care taken not to include the babies in that.

The methodology that I developed, based on this philosophy, was what I call "Front of the Box/Back of the Box."

Say we have a boxed product on a store shelf. The front of the box is facing out.

You only want to list no more than three main features, in big font, on the front of the box, to catch the buyer's attention.

The buyer then takes down the box, and turns it around, so you can list, maybe four features, on the back, in smaller font.

The rest of the features are listed on the sides of the box, in even smaller font.

When we look at our feature set this way, it requires a brutal triage. We need to do away with the thought that "Every Feature is Precious," and select the three (maximum) "Front of the Box" features, etc.

This also affects the project design and implementation, as well. We need to make sure that the "Front of the Box" features are done first, even if they are not the most problematic or challenging features. They are the ones that will be important to the end-user.

That way, when the inevitable "crunch time" comes, and we are tossing out features, the ones we toss out, are the "Side of the Box" features.
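As a rough illustration (not from the book - the feature names, scores, and tier sizes here are entirely hypothetical), the "Front of the Box/Back of the Box" triage could be sketched as:

```python
# Hypothetical sketch of "Front of the Box/Back of the Box" triage:
# rank features by importance to the end-user, then bucket them into
# tiers. "Side of the Box" features are the first to cut at crunch time.

def triage(features, front_max=3, back_max=4):
    """Rank features by user importance, then bucket into box tiers."""
    ranked = sorted(features, key=lambda f: f["importance"], reverse=True)
    return {
        "front": ranked[:front_max],                    # build these FIRST
        "back": ranked[front_max:front_max + back_max],
        "side": ranked[front_max + back_max:],          # first to be cut
    }

features = [
    {"name": "Edit photos", "importance": 9},
    {"name": "One-tap share", "importance": 8},
    {"name": "Offline mode", "importance": 7},
    {"name": "Batch rename", "importance": 4},
    {"name": "Custom themes", "importance": 2},
]

tiers = triage(features)
print([f["name"] for f in tiers["front"]])
# -> ['Edit photos', 'One-tap share', 'Offline mode']
```

The important part isn't the scoring, it's the hard cap: no more than three features ever make the "front", no matter how precious the rest feel.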


Thanks!


A good product is like art. It requires a vision and the iron will to actually achieve it. It requires ruthlessly deleting features and annoying a subset of users for the greater whole.

In terms of teams, I don't think "developers" are to blame here. Where are the product managers? And I don't mean the project managers pretending to be product managers. Great products require the vision of a designer, the organizational skill of a product manager, and developers able to implement it and/or tell everybody whether what's being asked for is feasible with current technology.

It's not easy, which is why so many products suck.


>It requires ruthlessly deleting features and annoying a subset of users for the greater whole.

Which generates bad PR, and companies like to avoid that. I've seen quite a few highly voted Hacker News threads that get angry at:

- A feature being removed from something.

- An app adding metrics and analytics to track feature usage.


> What if more developers developed a sense for the “maximum” number of things a product should do — and stopped there?

Then what? Follow this line of reasoning to its terminal (or shall we say, maximal) conclusion: developers would be laid off or fired (or at best shifted to another product, which might not be what makes money for the business, so back to an increased risk of being laid off or fired). Tell your boss that the product is done, no more updates needed, and see how well it goes for you.

We already know why feature creep happens, people need to work to survive. One of the few ways to prevent this occurs in open source, where people's livelihood is (generally speaking) not tied to their output. If we want feature creep to stop, we would have to live in a society where people wouldn't need to work to survive. I'm sure that will come with AI automation soon enough, if ChatGPT and Stable Diffusion and other adjacent programs are anything to go by.


You are correct to frame this in economic terms. It is very much an economic issue as much as a software issue.

Assume for the moment you had a sales-driven model, like, say, Word did in the era of those screenshots.

Your whole business depends on more sales. Developers, support, admin - stop selling for a month and the business goes under.

More customers means more support, which means more overhead, which means sales has to increase apace. This clearly isn't sustainable (so, no one gets support.)

Part of those sales goes to existing customers. (upgrades). Which means new features.

It's a treadmill to dream up new things just so there are things to sell as part of the upgrade.

Recognizing this route could only end in a crash, we switched to a SaaS model. Now support is a priority (and well funded); programming and random new features less so. Customers get better support, programmers get rewarded for fixing bugs, and new features can be considered, well planned, and done properly on a decent time scale.

And if the business never sells another copy, it's OK. Support remains funded.

It's a much better model for customers, for support and programmers, and ultimately yes, better for business owners as well.


> "The saddest thing for me about modern tech’s long spiral into user manipulation and surveillance is how it has just slowly killed off the joy that people like me used to feel about new tech. Every product Meta or Amazon announces makes the future seem bleaker and grayer."

I was reminded of this quote (from Shannon Vallor, whom I don't know, via a Cory Doctorow post: https://pluralistic.net/2022/10/20/benevolent-dictators/#fel...). It's not even just about having too many features anymore; it's about having actively user-hostile features that want to manipulate you into doing things or steal your data.

The article mentions Word / MS Office, which is the prototype for this. They long ago crossed the Rubicon into the "too many features" domain, but recently they have moved into "connected experiences" - they want to upload everything you write and will dark-pattern you into some technicality of consent. Plus nonsense like the accessibility checker that complains about things I do in internal company presentations while (outside of connected experiences, through some other scam to steal my data) uploading all my charts to MS servers and captioning them for me so that blind users will know they are "a picture of a chart".

Anyway, that turned into a rant. Tldr, 2005 was about too many features, 2022 is about ultra manipulative features that try to steal your data and impose tech designers' world view on users


There’s some truth to that sentiment about a bleaker, grayer horizon. Now I shop in tech while looking over my shoulder, waiting to get mugged by some shady dark patterns and outright user hostility. It’s not friendly solve-my-problems territory but a menacing gauntlet of wolves waiting to feast on the next unsuspecting prey and leak all their user data back to some awful place. It’s a bit hyperbolic, but that’s the color of the feelings when wandering around some new tech product. I just experienced this tonight shopping for a barcode scanner, having no clue how they operate, wondering what beastly proprietary software would have to be installed, and worrying about all the security junk surrounding that. Turns out they’re not too bad; many can basically go into a keyboard mode.


This makes me think of something Antoine de Saint-Exupery said, "Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away."


We'll hopefully see more of this as unlimited money dries up and people will have to think harder about allocating resources.


I’d rather see maximally extensible products where users can build customizations as extensions around the existing product. If certain features aren’t wanted, maybe offer some mechanism to prune them, as if they were extensions themselves that could be uninstalled.


I think this is the solution - a maximum viable product that can be extended to meet niche use cases.


I have a few problems with TFA, not least of all the fact that properly managed developers don’t generally add features for fun. Neither do marketeers, mostly those guys are just trying to interpret signals from their market and perhaps trowel some bait out there. However…

It is a brave product manager that declares his product more or less feature-complete. Because it is a far more frightening proposition for a company to begin a new product, perhaps only their 2nd ever, rather than glom-on extras to their first-born.

It’s one thing to manage a backlog of features into a product (or to discard them), but quite another to have the clarity of vision for your product that allows you to repeatedly say “no” even in the face of clear customer demand for functionality. Functionality that really belongs in your 2nd app, not your 1st. Even if you as a product manager have that clarity, you’ve then got to get the rest of the company behind you. IMO the only thing harder is getting a company to kill a product that needs killing.


We can’t avoid feature creep as viable company leaders because we would miss out on new customers. Maintaining a good UI amid constantly added features is really a product design issue. The real problem is rogue teams/line level product managers working in silos and missing opportunities to solve for feature demands with better planning and product design.

There are plenty of products that have iterated over decades and maintained the same simplicity; think Ableton Live. I would even give Word some credit in that it has recovered from the earlier mess the author points out by delivering a toolbar architecture through its ribbon menus and advanced settings that hit those less common use cases while maintaining a level of approachability. Sometimes all it takes is a step back and a bit of thought on design.


> we would miss out on new customers

You're right about the thinking. But that thinking is flawed. Must product X be used by everyone? No, there are (literally) millions of customers out there - pick a segment and serve it well.


That stems from the wrong idea that software has to grow endlessly in its user base and revenue.

You start offering A, then you offer everything from A to F, putting complex strains on design and overwhelming your users over and over. Then you start losing the customers that liked A but find the software too bloated.


If your product is too simple it is easily copied by a competitor. Competition for limited customers is where most feature creep comes from. The fear that if you don’t do X then a customer will go elsewhere. When you are catering to the whims and sales cycle of 20 or 30 customers all hell breaks loose.


That depends.

If the simplicity is artful, then the copy just “doesn’t feel right.”

Anyone remember the Movado watch? When it came out, there was nothing like it. A lot of competitors tried copying it, and they just looked cheap.

It eventually fell out of fashion, but it commanded top dollar, the whole time, despite the copycats.

I never liked it, and would not have bought one, but I have to respect the design.

The same could be said for Apple’s stuff. Not all of it is something worth copying, but they are a multi-trillion dollar company, regardless of the hate heaped on them.

Nowadays, it’s actually fairly easy to copy complex stuff; especially if a lot of that UI is the result of leveraging standard chrome in a framework.


This. I grokked this when I tried to contribute to Unigram (an alternative Telegram client) in 2021. Back in school and even university circa 2010, what I expected from chat apps is passing around text messages salted with a few UI features like parsing and displaying emoji, highlighting links and maybe sending files. These days there are hundreds of types of messages alone, plus you also have all kinds of metadata exchange, social graph stuff, group management etc. When writing this comment, I feel like the whole IM industry would win from following Unix philosophy a bit more. But it is never going to happen because companies want to lock users in.


Ultimately, it's not the features per se, but the UI / UX that reflects those features. The point being, a UI / UX paradigm that supports 10 features might not be the ideal paradigm for, say, 20 features. Just look at the Word screenshot in the article.

You can't keep adding to a broken paradigm and get ideal results. True, changing it might disrupt current users, but done right they'll eventually appreciate it if you've made the tool less friction-y. An alternative is to give users the option to see what they want and ignore what they don't. Maybe allow them to change background colors so their most-used features are easier to see?

Put another way, it's not how much you say, it's how you say it.


> [image of slider going from red "min" to green "max"]

> This is what you get when you search for “Maximum” on Pixabay, not really sure what’s going on here.

This image succinctly summarizes the problem that the article is talking about. (It's an actual case of "an image is worth a thousand words".) The correct color scale would have green in the middle and red both on the "minimum" and the "maximum" side. Instead, "maximum" somehow has a generalized positive connotation, even though it is just as far away from "optimal" as "minimum".


I've written this on HN before. The most important thing to remember about the Minimum Viable Product is that, no matter what you may call it or try to think about it, it's still version 1.0. And that means when you start working on the follow on version, you will inevitably run into the second system effect.

Because you've been referring to v1 as the MVP, everyone thinks launching the "real first version" will be a straightforward process. Only it won't be, because it's v2.0, and the second system effect blindsides everyone and the result is even worse.

Don't be fooled by the siren call of the MVP.


Honestly, I just kind of marvel that all us monkeys can hit keys on a typewriter and something usable comes out. Seriously, humans are bad at writing working code, maintaining working code, and shipping working code. That things usually work despite this is amazing. Yes, much software (and probably more than 50% of SaaS) is actively user hostile. Yes, centralization is bad and seems to be getting worse. For all that, there are tens of thousands of usable desktop software packages that usually work. Amazing times.


Yeah, I work in software development for an app with a large user base built up over 20 years. This would never work, mostly because the demands of the users not only keep changing, but we constantly find ways to improve the core functionality as new technology/platforms get developed over time. Besides, our competitors would never agree to some kind of "maximum" either, so we would lose those users to them.


I think the real point/value here is having the vision/will/ability to know when to say "no", and to say it. It's exceedingly rare for companies and products and etc to be and remain clear on their purpose/point/intent, so they lack clear guidance on when "no" is the right answer. "Somebody wants to pay us a lot for this" is basically never a good reason for "yes".


If the industry followed something like the Unix philosophy, the definitions of Minimum and Maximum Viable Product would eventually converge. But the trend is still to build morbidly obese products that attempt to solve dozens of problems, instead of building one good solution at a time. There is no straightforward way to fix it, since it's what most businesses think they need.


All I know is that if it's smaller than the minimum viable product or bigger than the maximum viable product, then it's not viable.


I've heard a lot of carping about products being "bloated" with "too many features" but I have never seen someone (including myself) stop using a tool because it was too powerful. Perhaps this is a personality trait; either you're a vi person or an emacs person (even if you don't actually use one of those most of the time).


> I've heard a lot of carping about products being "bloated" with "too many features" but I have never seen someone (including myself) stop using a tool because it was too powerful

Not because it's "too powerful", no; you'd stop using it because of the fallout. A newer, sleeker alternative comes out with 20% of the features, resulting in a cleaner, faster UX that eats market share (Trello/Jira). Bugs accumulate until people give up. New users open the program, take one look at 3 stacked toolbars of unintelligible options, and go devise workflows that let them avoid opening it again. Structural problems ironically make it hard to add new features so it starts getting rejected. The features are fine, but they're not free.


> A newer, sleeker alternative comes out with 20% of the features

Yes, and everyone tries it and sees it's missing their one rare feature that they use all the time and that's the end of that dalliance. In the words of jwz, "Mozilla is big because your needs are big."


> Bugs accumulate until people give up

Yes - all those additional (seldom used) features have costs ... more bugs, harder to use, a slower app, more storage & memory.


I submit that these costs are mostly insignificant stacked up next to "the alternative lacks that one niche feature you personally absolutely cannot live without."


They might be insignificant costs to the user who wants that niche feature, but they affect all users. So millions of people are paying costs associated with something used by a tiny niche.


The problem is that everyone is part of one niche or another. Each group naturally sees their own needs as the core feature set the software should have and thinks the other stuff is irrelevant junk that can be removed. But the reason this doesn't happen is a lack of broad consensus on which features belong in which categories.


WordPerfect Syndrome. People who have been scarred by the experience of learning a nightmare product are reluctant to move on to more capable tools that are much easier to learn.


I think it's more that the alternatives are missing one or two things they can't live without.


Minimum Lovable Product.

A PO once said that. And it stuck as something I should strive for.

I also practice Minimum Lovable Code.


I get what the title is going for, but I feel obligated to mention that the word 'viable' covers the complaint. If you have only the set of features you need, that's an MVP.


I hate bloat and feature-soup as much as the next person. But market forces will always push companies to keep adding more and more to otherwise simple products.


Idk I like to think MVP stands for Minimum Valuable Product


At my last (non tech, extremely large) employer we joked that every product (supporting our business, not standalone) was Maximum Viable Product, as every user story was required for launch, including endless change requests. Sometimes we even shipped something when they ran out of ideas...


I love the idea of considering the maximum viability of a product. We could even combine the two to create a viable product range for shipping a new product.


"maximum viable" or "Maximally viable" is an effectively unattainable target. At what point do the developers ship?


ninja'd by jxramos

Another interesting model that avoids user-interface bloat is installable extensions. For example, Visual Studio and Visual Studio Code. Instead of making EVERY feature accessible via menu, right-click-menu, and ribbon or button bars all the time, make only those features that have been installed available.
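The idea can be sketched minimally (this is just an illustration of the pattern; the class and command names are hypothetical, not any real editor's API): the core app builds its UI only from whatever plugins have actually been installed.

```python
# Minimal sketch of the installable-extensions model: the core app
# exposes no menu entry for a feature unless a plugin providing it
# has been installed. Names here are hypothetical.

class App:
    def __init__(self):
        self.commands = {}  # only installed features ever appear here

    def install(self, plugin):
        plugin.register(self)

    def menu(self):
        # The UI is built from whatever is installed - nothing more.
        return sorted(self.commands)

class SpellCheck:
    """A hypothetical plugin contributing one command."""
    def register(self, app):
        app.commands["Check Spelling"] = lambda text: text  # stub action

app = App()
print(app.menu())         # [] - a bare core, no UI bloat
app.install(SpellCheck())
print(app.menu())         # ['Check Spelling']
```

The bloat cost is then paid only by the users who opted into each feature, instead of by everyone.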


What about the concept of plug-ins? Develop plug-ins as optional features and let the users decide.


Isn't this more or less solved by a well thought out plugin ecosystem?


Lol, and investors should accept Maximum Viable Valuation.



