Sign-ups were directly correlated with revenue, at least in my case, and likely in most others, as in @joegahona's comment about publishing. That is to say, any satisfaction decrease caused by the popups did not decrease revenue, and that's assuming there was a satisfaction decrease at all. Again, technical users vastly overestimate how many people actually care about popups: most non-technical people simply click the X to make it go away, and a percentage of all visitors will sign up.
This may be true in the short term, but eventually these decisions accumulate and cause more dissatisfaction. Each one on its own has barely any impact on the bottom line; it's just one more cut among a hundred cuts. In the end you have a product that is, overall, annoying to use. And the whole time you could be going along, measuring each decision and thinking you're doing fine and no mistakes are being made. But comparing today against yesterday is not the same as comparing today against three weeks ago, with the last 13 decisions reversed, or 5 reversed, or even 1 reversed.
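To make that concrete, here's a rough sketch (plain Python, all numbers made up) of why chaining day-over-day comparisons can hide exactly the decline that a long-term holdout would catch:

```python
# Made-up illustration: each change looks harmless when compared only
# against the previous day's baseline, but the cumulative effect against
# a long-term holdout (the "3 weeks ago" version) is much larger.

baseline_satisfaction = 100.0   # index at the start, three weeks ago
per_decision_drop = 0.995       # each "small cut" keeps 99.5% of satisfaction
num_decisions = 13

current = baseline_satisfaction
for i in range(1, num_decisions + 1):
    previous = current
    current *= per_decision_drop
    day_over_day = (current - previous) / previous * 100
    print(f"decision {i:2d}: vs. yesterday {day_over_day:+.2f}%")

cumulative = (current - baseline_satisfaction) / baseline_satisfaction * 100
print(f"vs. the long-term holdout (3 weeks ago): {cumulative:+.2f}%")
# Each step reads as a "negligible" -0.50%, but the total is roughly -6.3%.
```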
Agreed, just following the data can be a dangerous path. You have to build things because they're the right thing to build. You need a cohesive vision, not scattershot A/B tests. Unless you only care about short-term profit, of course.
Play stupid games, win stupid prizes. That’s why the Facebooks die and the Apples and Googles last.
So what kind of experiment do you run to capture that dissatisfaction? It doesn't have to be A/B; anything, even qualitative, that doesn't boil down to making decisions purely based on feelings.