Startup Priorities (geoffralston.com)
149 points by katm on Feb 11, 2015 | 35 comments



It's often impossible to get metrics on what new features you should build since your users have not yet experienced them, and those that give you feedback are not necessarily the ones you should listen to.

My experience is that you push out features and test them. If there is usage, you keep them; if not, you kill them. Testing works a lot better than trying to predict future usage.


I think you bring up a good point. These metrics, formulae, etc. can only be applied when thinking about new features. When building the core product we must rely on our knowledge of the market/users to decide what features end up in v1.0.

It is also prudent to bake in anonymized usage tracking so you know how users are using your v1.0. This will provide a guideline on which areas of the product to concentrate on when thinking about new features. That way we don't end up spending time pushing out features that few will use when our data showed we should be concentrating on a different area of the product.


Any reason you suggest anonymized tracking? I like to pass user-id / segment information to my tracking tools so we can get a picture of WHO is using feature X in addition to general adoption rates.


We track user-specific actions too, but that is more for providing a personalized experience for the user (this is specific to our use case). We notify users of why we are collecting that data. To get an idea of what feature is popular, we don't need to know WHO is using it, but rather HOW MANY are using it and for HOW long. Another reason to do anonymized tracking is to address any privacy concerns users might have: if someone raises such a concern, we have data we can show that says we did not do any user-specific tracking.

This is specific to our use case and in other cases it might be important to track who is using what feature.


Fair enough, thanks for the response


"d" is not usually a known quantity beforehand and therein lies the problem OP's approach. The "average user" is also not necessarily the most important one to be designing for.

I've seen this kind of analysis, "(b*d) / c", lead to a lot of useless SEO tweaks. E.g., imagine you've got some section of your site that is driving a lot of organic SEO traffic. If you were able to increase conversion on that part of the site by a small percentage, it would have a huge impact on the business, and users would experience more meaningful parts of your product.

Of course, it is very difficult to increase conversion on those parts of your site and most of those initiatives fail.

Better to spend more time focusing on making the experience stellar for a smaller, but strategically important section of users.


> Better to spend more time focusing on making the experience stellar for a smaller, but strategically important section of users.

OP agrees for the 'early days' of your product:

> There is a bit more to say about the breadth and depth of your product. In a company’s earliest days, it is important to place more priority on d in order to ensure that you have product/market fit. It is useless to try to grow or scale before finding 100 users who really love your product. Only then should you refocus on b and grow your user base...

But at some point, you do need to grow your userbase, right?


> My experience is that you push out features and test them. If there is usage, you keep them

There are virtually an infinite number of features you could be developing. But you have finite resources to develop them.

And some features will take a lot more of your finite resources to develop than others.

You need some way to prioritize what to actually develop.

If your product doesn't actually do too much but is essentially just clickbait, then, sure, your features are very cheap to implement, and you just throw everything you can think of out there and A/B test.


I don't mind this strategy, but it requires that you already have a product good enough to get customers—assuming you are actually going to measure things to plug into the formula.

So there's a bit of a chicken-and-egg problem here. The way to solve it is to do great customer development (see Steve Blank et al.) to make sure you are addressing a real problem, and then make the best damn product addressing that problem that you can within a reasonable cost. Maybe your gut/luck is good enough that you don't even need the customer development step. But still, an "MVP" (and I admittedly don't like the term) really can't be 'minimal' if that formula is to be usable in most cases; you've got to make something good enough to attract and retain users right out of the gate, and that requires taking on some risk.


> how should I be spending my time?

Good question

> It is best to pick a metric you are pretty sure you know how to increase.

Good answer, but the game is a bit more complex. KPI-driven behavior is good, but it also depends on the stage, and thus KPIs can change quickly. Sometimes you should find the best hires, sometimes the best angels and VCs, and sometimes customers or reach.

> (b * d) / c

Not really sure about this. I like the thought behind it (before implementing an idea, check that it's wanted and that build time is proportionate to the potential market size), but I don't know if the formula is a good way to communicate this genuinely good idea; it's a bit too abstract/cumbersome.


Dr. Eliyahu Moshe Goldratt explains how to manage complex systems: trying to manage everything at the same time is not only unrealistic, it will also not bring success.

Explanation in his own words is for example at https://www.youtube.com/watch?v=tWvMODJ9cVc but also in many other lectures and presentations of his available online.

"... managing a bunch of wild cats" was said somewhere in there.


My first reaction was to think that (b*d)/c was a cool way of expressing how to go after the low hanging fruit, or get the biggest bang for the buck. After thinking about it for a bit, it seems like a good starting point and better than nothing to act with some intent.

But it might be better to focus more on b/c OR d/c, or to weight b and d, if you're trying to come up with a general framework. In practice, that comes out to: generally, we want to go after the broad OR deep features we can build most easily as we groom the backlog. Of course, you would want to maximize both b AND d, but often a single feature doesn't do both.

The whole discussion might be too nuanced. Overall, it's valuable to give some quick thought to why you're picking the features you pick, as long as it doesn't make decisions take too long.
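The weighting idea above can be sketched as a tiny scoring function. This is a hypothetical illustration only: the feature names, numbers, and weight parameters (wb, wd) are invented here, not taken from the article.

```python
# Hypothetical backlog scoring: rank features by (b**wb * d**wd) / c,
# where b = breadth (users affected), d = depth (impact per user),
# c = cost to build. The exponents wb/wd let you deliberately favor
# broad or deep features; wb = wd = 1 recovers plain (b*d)/c.

def score(feature, wb=1.0, wd=1.0):
    b, d, c = feature["b"], feature["d"], feature["c"]
    return (b ** wb) * (d ** wd) / c

# Invented example backlog.
backlog = [
    {"name": "onboarding fix", "b": 1000, "d": 2, "c": 5},
    {"name": "power-user export", "b": 50, "d": 9, "c": 3},
    {"name": "dark mode", "b": 400, "d": 1, "c": 8},
]

# Highest score first; with default weights this is plain (b*d)/c.
for f in sorted(backlog, key=score, reverse=True):
    print(f["name"], round(score(f), 1))
```

With the default weights, "onboarding fix" (1000*2/5 = 400) outranks "power-user export" (50*9/3 = 150); raising wd relative to wb would shift the ranking toward deep features.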


Haha, I think that might be too nuanced :) I definitely agree though that (b*d)/c gives you a nice framework without being tied to a company's specific situation.


This feels pretty high level. Knowing that we want to build the most profound feature for the most number of users is not very controversial. But there are many ambiguities about how you measure profoundness. There are many things I could fill my time with each day, all in the name of building profound things: e.g., do I build this feature, do I refactor, do I do customer interviews, etc.

In my experience, teams always struggle with that level of decision making, and the problems stem from optimizing for the wrong tasks, even if everyone broadly agrees on what the "most important" metric is.


> Knowing that we want to build the most profound feature for the most number of users is not very controversial.

That's not the profound part.

> The application of this simple formula tends to arrive at a blindingly obvious conclusion: first build the features that affect lots of users as profoundly as possible, and which you can build quickly and cheaply.

The profound part is the ending of that sentence -- "and which you can build quickly and cheaply."

I know a huge number of startups (probably the overwhelming majority of them, actually) that think "we're going to do things right" instead of thinking "what's the next feature that will get us the biggest bang for the least buck". Most new founders skip past the denominator in Geoff's equation, which later turns out to be the Achilles' heel that kills most startups.

Of course the fine grained level of decision making is very hard too, but you have to learn to walk before you learn to run.


That's fair. I was perhaps speculating more about my own personal experiences that were more skewed to the fine-grained layer.


"Task switching overhead is an intrinsically hard problem to solve. As a startup CEO, at some point you need to shut everything out, go with your gut and move forward."


I think the (b * d) / c formula is not just useful for startups, but for just about any software development.

It's what I try to use for 'in-house' development, although the (b)readth in that case isn't about expanding customers, but about how many of our userbase will be (positively) affected by the change.


If there's such a simple formula for success, why isn't everyone successful?


> If there's such a simple formula for success

It's not a formula for success; it's a formula for prioritizing what to build (success is more than prioritizing features), and it relies on at least one value (the d in the formula) which is not easily quantifiable.

It's essentially a variant of the standard formula for prioritizing work effort in any software project, which I've seen in many works on Agile and Lean methods: basically v/c, where v is the business value expected to be produced by the change (and usually the hard part to reasonably estimate), and c is the cost of delivering the change. (The b * d that replaces v in this article's version is an interpretation of what produces business value in a product with multiple customers, but, given the fuzziness in d, it retains the base version's difficulty in assessing expected value.)
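The relationship between the two framings can be sketched in a few lines. This is a hypothetical illustration; the function names and numbers are invented here, not taken from the comment or the article.

```python
# Generic Lean/Agile prioritization: rank a work item by expected
# business value over cost. The article's (b * d) / c is the special
# case where value is estimated as breadth * depth.

def value_over_cost(v, c):
    return v / c

def bd_over_c(b, d, c):
    # Estimate business value v as b * d, then apply the generic v / c.
    return value_over_cost(b * d, c)

# Invented example: a feature touching 100 users (b) with per-user
# impact 3 (d) at cost 6 (c) scores the same under either framing.
assert bd_over_c(100, 3, 6) == value_over_cost(300, 6)  # both 50.0
```

The hard part in both framings is the numerator: b and d are estimates, so the score is only as good as your guess at the value a change will produce.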

I'd also suggest that the formula considers only the value to existing users, and not the value of a change in growing the user base, which may be important, particularly in the context of a startup.


Because not everyone is using it, or because it doesn't work.


So Peter Thiel talks about building monopolies which are a union of different markets. How do you identify the right metrics in such cases?


Note that Geoff Ralston is a Y Combinator partner, which is context that otherwise makes the article confusing.

EDIT: relevant info was added to the user profile after this comment was made.


For the life of me I can't understand why so many people take the time to write some helpful advice but fail to indicate who they are so you can put the advice into context. There shouldn't be a need to google, guess or make assumptions. To me this type of thing is sloppy, period.


Sloppy? It's written in the left column (emphasis mine):

Geoff Ralston
Startups, technology, and education.
==> Partner at Y Combinator <==
Founder and Partner at Imagine K12


That was apparently added after I made my comment.


Thanks, glad you said that; I was kicking myself thinking I had made my comment and somehow missed it.


My bad, it's sometimes easy to overlook the sidebar.


Here's a braindead simple idea for how to get a startup going: 1. Don't hire anyone else until you're making money.

I mean, how easy is that? It's not flipping rocket science.


That only works if your product requires essentially no capital to develop.


You start small and scale up, bootstrap from smaller related projects.

For example, if I wanted to be the "next Google", I'd work as a consultant on Apache Solr and Apache Nutch to build up capital, then find a search niche that is not well served but potentially lucrative (fields of science and medicine spring to mind), build a usable proof of concept, and sell subscriptions at a cut-price rate (whilst still in beta). Then I'd look to hire and scale, and expand into new markets once the core product is stable.

So you see, even with the largest of corporate ambitions, you're still able to bootstrap on next to nothing.


Actually, I think "don't hire anyone else" only works if you have very limited labor needs in order to achieve profitability; it's almost orthogonal to capital requirements.


Capital goes on:

- People
- Machinery and infrastructure (which may or may not include office space)
- Sales and marketing

It would be very interesting to see if there's a difference in the way successful and failed startups prioritise these different areas.

I suspect for tech people dev spend is overrated and marketing spend is underrated - but that's just a hunch with no numbers to support it.


How would you hire someone without capital?


See my reply to hueving.



