> As a side note, I would highly suggest NOT
> building a custom analytics platform as a
> startup unless your model simply won't work
> without it.
Seconded. I work at a place that has gone down this path, and there are a lot of pitfalls.
If you go with something like Mixpanel, then that software does what it does. If somebody wants to measure something that Mixpanel can't, they either have to go without or make a really good case for getting their desired figures some other way at huge cost.
Not so with in-house solutions. Because it's always possible to measure more things by investing programmer time, that's exactly what happens. Be it product, marketing, or finance, somebody always has another bright idea or another "I absolutely cannot do my job any more without this" measurement need.
Because this pressure to add features comes from within, rather than from outsiders, it's hard to resist. People will push you to ship "just this one thing" outside of the normal release cycle because it's always "so urgent". This happens to me several times every sprint because of our internal analytics software, and it's an enormous time sink when every "just this one quick thing" needs to be pushed through the QA and release process one by one.
Also, people are happier to put their faith in external products. If RJMetrics reports a higher-than-expected number of occurrences of event P, you assume RJMetrics is working correctly, and the first question is "What bug in our system is causing too many of these events to happen?". If the internal analytics system says the same thing, it's an analytics bug until proven otherwise. As the maintainer of the analytics system, this gives me de facto responsibility for identifying and triaging the whole team's bugs.