>We didn’t have a product roadmap outside of these two weeks and every product meeting we would start from scratch with the new goal, new analytics data from our last two weeks, and also often new insights from in-person user testing, which we tried to do once a month.
Would it help to have some milestones a few months out? Do people really operate with zero goals/milestones other than the measurables of the current 1-2 week sprint? Can a team really operate like this for several years? How do you plan for larger-scale product changes, new products, or tech architecture implementations/changes?
I think there are different approaches to dealing with that, depending on your target market.
In the enterprise space you absolutely have to have a six to twelve month roadmap of where you want to go. The trick then is to take the big features and break them down into manageable chunks that can be delivered over time. You also need to have some wiggle room so when something takes four weeks longer than expected you can adapt without just blindly pushing everything back a month.
The piece talks about letting teammates brainstorm ideas in addition to founders. What about getting ideas from the people who actually use your product?
I think in general founders and product people overrate their own voices compared to the hundreds, thousands, or millions of people who are using their product. In aggregate, these people have an enormously important voice.
OP says they would conduct user interviews. Our process looks a lot like this and user feedback is incorporated in these idea sessions because we're constantly getting feedback via email and phone conversations with customers -- the stuff a lot of users ask about gets remembered, considered, and discussed in these planning sessions. In general I think the job of founders and product people is to build/delegate building, talk to customers, and repeat based on what customers are saying. Is there a better way?
I liked several recommendations from this article, but it seems to be missing some important practices that influence success/failure in many product dev efforts as well. What I especially liked from the article:
* The Product Lead seems like a good recommendation, and is similar to a Product Owner in other parlance.
* Engendering buy-in by letting everyone suggest ideas and feel heard is definitely a great technique for org management in product dev. This works doubly well if everyone has confidence that product decisions are made in a sensible, clear, fair/unbiased way after everyone's ideas are out there.
* Clear measurements of success are hugely helpful as well.
Things that seem to be missing:
* What is the purpose of the product? What is the true north / guiding light problem you're solving? This sounds squishy, and it's easy to say something ambiguous and high-level like "we're gonna create a social video app!" or "be Instagram for video". But this should sound more like a problem to be solved. A "why" more than a "what". E.g. "We haven't found a social video product we love yet, and we also think it's a problem that social media is always persistent and not private or safe enough." Or "We love our phone cameras and we want to make and share goofy videos. But we don't wanna post such random stuff to FB, Twitter, or Instagram where it'd clog the feed and live forever." Or "we wanna be able to make and share goofy videos without having them haunt us forever." Or even "there is no perfect dick pic app yet, and just texting lots of dick pics really sucks." These are shitty, off-the-cuff examples, but going through the process of clearly articulating this can really help you crystallize focus and serve as a guiding light as you develop. You can expect to change this guiding light over time if you learn that, in actuality, not many users see the same problem or feel the same pain you do... but that is also very good to know clearly as early as possible! Stating your problem / what you're chasing down in clear terms helps you figure out if there is anything worth solving sooner rather than later, and that is vital in the early days of product dev. I'm surprised the beginning of this article talks about tactics like product dev cycle length rather than this higher-level purpose.
* Where does analysis of the market/strategic landscape fit in? This is another crucial element and it can help inform your "plan of attack" in terms of what to prioritize day to day or week to week. I think that SocialCam may have done better if they'd taken this more strategic approach to the landscape on mobile especially. For example, they may not have chosen to rely so heavily on Facebook early on. Or, they may have decided to explicitly target younger users, realizing that FB, Twitter and even Instagram left a lot of room there.
* Clear measures of success are discussed, but how do those relate to core product KPIs? In particular, it is vital to measure retention cleanly and effectively, to measure engagement, and to measure viral/k-factor/word-of-mouth installs as best as possible. Zealously improving these metrics every week is critical in the early days, and improving them should be a primary activity in feature experimentation (see below). Moreover, you should be looking for step-change improvements early on, not little incremental gains, and you should keep hunting until you find step changes. You're waiting for some feature or use flow in your product to catch wind and drive a cycle of engagement, more frequent revisiting, and word-of-mouth/viral recommendation. Gotta measure these, and early on these are really the only "measures of success".
* Where are structured feature experiments? Especially when you're hunting product/market fit (as SocialCam was early on), it's essential that you have theses on what will "catch fire" with users and prepare the best experiments you can to test them. Here again, SocialCam may have iterated more quickly toward something like Snapchat if they'd had theses or testable ideas. In early product dev cycles, while hunting true product/market fit and strong engagement+retention+virality, roughly 80% of product dev resources can be allocated to feature experiments... and pretty much all features should be treated as experimental until fit is found. For a product like this (a game or a social product) the constant, daily refrain should be "is it fun yet?" "Is it really, truly fun yet?" "Do you just enjoy screwing around with it?" "Do you feel compelled to use it when you go too long without it?" For a product like this, keep testing out functionality until it's fun. Make that the singular, maniacal focus early on until it is fun and you've caught fire with at least some demo/psychographic.
* Where is a frequent customer feedback process? The article mentions "trying" to do monthly in-person user feedback sessions... but for free consumer-facing apps, especially in their very early/conceptual phases, it's much better to pull in users every few days, if not every single day. That user feedback is your lifeblood early on, and a feature isn't worth fully testing and polishing if it doesn't seem like it's gonna catch fire or move you closer to fun.
* For an early product dev setup, I think continuous integration and daily or semi-daily functional builds you can test with customers/users are important (partly so that you can get rapid, regular feedback from users). The article says they iterated "extremely quickly", which I'm sure is true relative to their previous process... but a fixed 2-week cycle in the early days of a product's dev and exploration of p/m fit (especially for a consumer-facing product vs. a B2B product) is very, very slow. That's only 26 turns at bat per year, which is too slow when it's early on. More importantly/starkly, it's only 2 times at bat per month early on... that really makes it hard to truly rapidly iterate.
* Curious whether the author ever tried dual-track development. There are the rapid-fire, ideally daily builds and experiments going on on one track (the discovery track, in this case of early consumer-facing product dev), and there can be a longer-cycle track for things like underlying infrastructure improvement, UX improvements, bug fixes, and other incremental improvements. (This is different from the way dual-track product dev would be applied in other contexts... this is a way to dual-track in the nascent stage of a consumer-facing social app.) Splitting effort in this way can be extremely helpful, especially after you have some initial traction.
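To make the KPI point above concrete: here's a minimal sketch of computing week-1 retention and k-factor from raw events. The data shapes and function names are illustrative assumptions, not anything from the article.

```python
from datetime import date, timedelta

def week1_retention(installs, activity):
    """Fraction of users active in days 7-13 after their install.

    installs: {user_id: install_date}
    activity: {user_id: set of dates the user was active}
    """
    retained = 0
    for user, d0 in installs.items():
        # The "week 1" window: days 7 through 13 after install.
        window = {d0 + timedelta(days=n) for n in range(7, 14)}
        if activity.get(user, set()) & window:
            retained += 1
    return retained / len(installs)

def k_factor(invites_sent, active_users, installs_from_invites):
    """k = (invites per user) * (invite-to-install conversion rate).

    k > 1 means each user brings in more than one new user, i.e.
    self-sustaining viral growth.
    """
    invites_per_user = invites_sent / active_users
    conversion = installs_from_invites / invites_sent
    return invites_per_user * conversion
```

For example, 100 active users sending 500 invites that convert to 150 installs gives k = 5 × 0.3 = 1.5, comfortably above the self-sustaining threshold.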
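On the structured feature experiments point above: one lightweight way to run such experiments is deterministic hash bucketing, so a user always lands in the same arm with no assignment storage. A sketch, with purely hypothetical names:

```python
import hashlib

def assign_arm(user_id, experiment, arms=("control", "variant")):
    """Deterministically assign a user to an experiment arm.

    Hashing (experiment, user_id) gives a stable, roughly uniform
    split; using the experiment name as salt means each experiment
    reshuffles users independently of the others.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]
```

The same user/experiment pair always maps to the same arm, so you can compare retention or engagement between arms without tracking assignments server-side.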
In any case, this is a useful article and it contains some good advice. I think it could be supplemented with some additional practices that help a lot in this sort of context as well.
That was just when they were brainstorming new ideas. The emphasis was also on the team lead enforcing this policy, which is definitely achievable if you are a good manager.
All in all, I thought this sounded extremely effective, especially compared to how they started, i.e. "meandering product meetings where we didn’t write down our decisions".