> “you could serve ads on Yahoo, or Reddit, or the whole Google Display Network, etc., and see how much vastly worse those clicks convert for you.”
But that experiment could only tell you whether Facebook ads, in general, are more effective than ads on other platforms. And any difference could be driven by all sorts of confounders, such as the demographics or usage patterns of those other platforms differing from Facebook’s, none of which has anything to do with whether anything Facebook does actually contributes to ad effectiveness.
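To make the confounding point concrete, here is a toy simulation (every number invented, not drawn from any real platform): two platforms whose ads have an identical causal lift, yet whose naive conversion rates differ by nearly 2x purely because their audience mixes differ.

```python
# Toy sketch with made-up numbers: both platforms' ads have the *same* causal
# lift, but Platform A's audience skews toward high-intent users, so a naive
# cross-platform comparison of conversion rates makes A look far "better".
import random

random.seed(0)

def simulate(platform, n=100_000):
    conversions = 0
    for _ in range(n):
        # Assumed demographic mix: Platform A skews toward high-intent users.
        high_intent = random.random() < (0.6 if platform == "A" else 0.2)
        base_rate = 0.05 if high_intent else 0.01  # baseline intent, not ad effect
        ad_lift = 0.002                            # identical causal lift on both
        conversions += random.random() < (base_rate + ad_lift)
    return conversions / n

print("Platform A conversion rate:", simulate("A"))  # roughly 0.036
print("Platform B conversion rate:", simulate("B"))  # roughly 0.020
```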
In particular, comparing against other ad platforms can’t tell you whether Facebook’s invisible-audience ad algorithms are any better or worse than Facebook’s algorithmically targeted audience ads (especially since you still wouldn’t know whether Facebook privileges the algorithmic ads in some way, simply because it’s good for their business if their specialized product appears to beat the non-specialized one).
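For contrast, here is roughly the shape of the within-platform experiment that question would require, sketched with wholly hypothetical arm names and numbers: randomize eligible users into arms before any targeting happens, keep a true no-ads holdout, and compare incremental lift rather than raw conversion rates (and even then you are trusting the platform not to privilege delivery for its own product).

```python
# Hypothetical within-platform lift test: users randomized into arms up front,
# so the audiences are exchangeable by construction, and each targeting product
# is judged by its incremental lift over a genuine no-ads holdout.
from math import sqrt

# Invented arm sizes and conversion counts, purely for illustration.
arms = {
    "holdout (no ads)":       (200_000, 2_000),
    "self-targeted audience": (200_000, 2_150),
    "algorithmic audience":   (200_000, 2_300),
}

def rate_and_se(n, x):
    p = x / n
    return p, sqrt(p * (1 - p) / n)

base_p, base_se = rate_and_se(*arms["holdout (no ads)"])
for name, (n, x) in arms.items():
    p, se = rate_and_se(n, x)
    lift = p - base_p
    se_diff = sqrt(se**2 + base_se**2)
    print(f"{name:>24}: rate={p:.4f}  incremental lift={lift:+.4f} "
          f"(±{1.96 * se_diff:.4f} at ~95%)")
```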
> “This isn't rocket science.”
It’s funny to me that so many marketing, product, and A/B-testing people take this attitude toward understanding what succeeds in marketing and product problems.
In reality, those questions demand a degree of statistical rigor that really is something like rocket science.
It takes a great deal of advanced econometric theory or formal statistics to answer ad-spend attribution questions in a way that isn’t completely defeated by methodological flaws, poor experimental design, causal indeterminacy, or various statistical fallacies.
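A tiny simulation (all numbers invented) of the kind of failure that rigor has to guard against: if the platform’s targeting chases people who were going to convert anyway, a naive exposed-versus-unexposed attribution overstates the causal lift enormously.

```python
# Toy selection-bias example with made-up numbers: the ad is preferentially
# shown to high-intent users, so the observational "exposed minus unexposed"
# comparison reports ~0.035 lift when the true causal lift is only 0.002.
import random

random.seed(1)
N = 500_000
TRUE_LIFT = 0.002            # actual causal effect of seeing the ad

exposed_conv = unexposed_conv = exposed_n = unexposed_n = 0
for _ in range(N):
    high_intent = random.random() < 0.1                          # 10% high-intent users
    base = 0.08 if high_intent else 0.005                        # baseline conversion rate
    shown_ad = random.random() < (0.8 if high_intent else 0.1)   # targeting chases intent
    converted = random.random() < base + (TRUE_LIFT if shown_ad else 0.0)
    if shown_ad:
        exposed_n += 1
        exposed_conv += converted
    else:
        unexposed_n += 1
        unexposed_conv += converted

naive_lift = exposed_conv / exposed_n - unexposed_conv / unexposed_n
print(f"naive exposed-vs-unexposed lift: {naive_lift:.4f}")   # roughly 17-18x the true lift
print(f"true causal lift in the simulation: {TRUE_LIFT:.4f}")
```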
Perhaps that’s one reason the claim that digital ads work at all is still so hotly debated, with many arguing that, quantitatively, digital ads (including Facebook’s) simply don’t work.