This is something I worked on for a couple years, and I fundamentally agree. I started a company around marketing analytics, eventually selling it and integrating it with another marketing analytics company.
Our product was B2B revenue attribution: for each dollar spent on marketing, how much revenue came of it? This is hard in B2B, because marketing happens to people, but purchases happen months or years later, by companies.
We found that we could do this for several data streams (conferences, online ads, gated or cookied content marketing). What we then found is that _CMOs don't care_. We were asked multiple times to widen the definition, to push up the numbers and justify more ad spend, a bigger user conference, or more headcount.
And you know what? I think that's OK. Marketing works, and it's a critical component of any company. A lot of good marketing isn't trackable, because it plants a seed inside a person's head far before that person buys anything. It tells a compelling story. The world is full of data-addicted PMs and sales VPs and ad purchasers and CEOs, but good marketing is more than that.
If the marketers have to appear (or actually be) addicted to data to communicate well with the rest of their company, then so be it. Just like Jacques says: I don't know what the way out of this mess is, or what the path to success looks like. Good marketing can still happen with bad data.
I've always found that people have a hard time accepting that with marketing, data really only tells you what happened. It doesn't tell WHY something happened.
I also find (as you allude to) that as much as people want to be 'data driven,' they will conveniently ignore all available data if it suits their purpose.
> If the marketers have to appear (or actually be) addicted to data to communicate well with the rest of their company, then so be it.
I came to the same conclusion. Fortunately, collecting and storing the data is pretty easy, so I do so once a quarter and move on with my life.
> And you know what? I think that's OK. Marketing works, and it's a critical component of any company. A lot of good marketing isn't trackable, because it plants a seed inside a person's head far before that person buys anything. It tells a compelling story. The world is full of data-addicted PMs and sales VPs and ad purchasers and CEOs, but good marketing is more than that.
But how do you know it works if you can't track it?
I get your point about it being more than hyper-focused short term metrics, but how do you actually know it's working if you can't quantify it? Imagine if Tesla spent a ton of money on advertising (compared to the $0 they currently spend), would people not just assume the strong brand is tied to whatever marketing campaign might exist?
I am not in marketing (although I have had to do limited marketing in the past), but it feels like most of it is just carried along by cargo-culting and inertia. Similar to how everyone wants to be like Google so they copy their interview process, their monorepo, and other things without actually understanding why Google does that. Obviously some people are great at marketing but I'm not convinced most marketing money is actually seeing a positive ROI. That said, I'm genuinely happy to be proven wrong.
One is table stakes. What is the ROI of a website? Of showing up in Google organically? Of a well designed logo? On some level, customers expect to be able to find you. You need to show up, and show up well, at the point that customers are simply exploring. That’s fairly untraceable.
The other is distribution, holistically. Tesla might not have “marketing,” but they have showrooms, launch events, and referral programs. Every successful business started with precisely one highly effective way to acquire customers. Marketing, on some level, is about making sure that acquisition channel is working, start to end.
As companies grow, everything becomes muddy. Sure, Tesla or Walmart or whoever is probably unable to untangle exactly how their marketing is effective, but they also can’t attribute success to every engineering project or accounting effort. That’s normal and true everywhere.
"What is the ROI of...a well designed logo?" For me (a consumer), absolutely zero. What do I care what some company's logo is? I don't, and I also doubt that I recognize most of them.
Perhaps you've seen the images of fake pennies, like this: https://technicallyeclectic.com/video-best-practices-details.... If so, you know what I mean; you could show me a hundred logos, and I couldn't tell you which ones are real, much less which of those real ones belong to which company.
And yet many companies spend an inordinate amount of time trying to decide what the best logo is, and then a few years later they decide to "refresh" it, or "clean it up", or "give it a facelift." To which I say, waste of time.
A sample size of one, but, I was able to pick out the correct penny immediately, and coincidentally, I work in marketing (but more technical marketing than anything brand-related).
In defense of the brand folks I know, I don't think any of them would say that the ROI from a well-designed logo is your ability to pick it out against a fake one. Sure, a poorly-designed logo would be one you would not be able to recall, but maximizing ROI from a logo is not maximizing your recall of it.
When my company rolled out a new logo a few years ago, some of the biggest selling points were making it consistent and easy to use, particularly in conjunction with our product names, which reduced time spent by marketers working around a hard-to-design-around logo. It also focused on making our wordmark clearer, which was a real issue because even a large number of our own employees mistyped our company name as CamelCase instead of two words, which has real implications for trademark defense.
Yeah, sometimes logo refreshes are unnecessary. But not always. More often than not _you_ are not the end user benefiting from the changes.
> But how do you know it works if you can't track it?
We can track what zero marketing does vs. some: companies with marketing sell more product. Most bad marketing outperforms zero marketing by enough to be worth it. There are enough metrics to track this.
The important question is how much demand does each dollar of marketing create. It would be poor decision-making to just assume marketing is a bad idea because they are already able to sell through everything at current prices. Presumably there’s somebody doing that math at the company.
> I get your point about it being more than hyper-focused short term metrics, but how do you actually know it's working if you can't quantify it?
There are always going to be ways to measure things, they're just going to be more or less fuzzy than what we're used to (even if what we're used to is wrong).
Only Nike and Red Bull are really pushing marketing forward by innovating on culture. Nike is creating trillion-dollar economies by introducing the hijab for women, which leads to more equality and stronger women with more money to spend on Nike gear.
Good marketing doesn't have to look like trash, though. I remember there were times when ads were sometimes great stuff aesthetically and artistically. I wish they didn't just give that money to Google and we didn't have to tolerate uninspired ads.
I’d frame it slightly differently; it’s not exactly “the right guy.” B2B marketing is frequently about getting a critical mass of people at a company to come to consensus. The best strategy here is frequently getting 1 person very excited and 10 people aware enough to not pitch a fit when they hear a deal is happening.
The best B2-large-B marketing strategy I ever heard was fundamentally account-based. This guy sold a $1m+ product, and he had a list of the 1,000 companies that could possibly afford it. His metrics revolved around a hierarchy of countries > sales territories > accounts. He’d literally evaluate programs based on “have we scheduled enough demos with IBM this quarter?” If the answer was no, they’d run a smaller campaign focused on just the accounts they were behind on. Sales loved him. (A toy sketch of this kind of coverage check follows below.)
The hard part about attribution is so much of the internal discussion is opaque. So especially in large accounts, you have to develop some feel for what is “enough” activity in an account that sales won’t have a hard time of it.
(And fwiw, you don’t have to do this through pure marketing. Bottoms-up products aka freemium is a great signal that sales should reach out — Datadog, New Relic, Github, and more do this.)
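As a toy illustration of that account-coverage check, here is a sketch under assumptions: the account names, quotas, and CRM export format are all invented, not from the comment.

```python
# Toy account-coverage check in the spirit described above.
# All account names and quotas are invented for illustration.
quota = {"IBM": 4, "Globex": 2, "Initech": 3}   # demos targeted per quarter
scheduled = {"IBM": 1, "Globex": 3}             # demos booked so far (CRM export)

# Accounts that are behind quota get a focused follow-up campaign.
behind = {acct: q - scheduled.get(acct, 0)
          for acct, q in quota.items()
          if scheduled.get(acct, 0) < q}

for acct, gap in sorted(behind.items()):
    print(f"{acct}: {gap} more demo(s) needed this quarter -> run a focused campaign")
```

The same dictionary-diff generalizes up the hierarchy (accounts roll up to territories, territories to countries) by summing quotas and bookings at each level.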
My working hypothesis is that major media companies are basically selling dashboards to marketers. The more complicated their work, the more steps they have to do, the more it looks like they are busy, the happier they are, and their bosses as well. The ROI from advertising seems to follow a sigmoid pattern, but nobody knows how wide or tall it is.
I don't want to sound too dismissive, but middle management does love their dashboards. I mean, I can definitely see the allure (I am currently learning how to prepare them in R -- fun times). So as long as they can blame... sorry... as long as they can point to something that informed their decision, it counts as a win. Whether it is some variation of the current buzzwords ('natural impression timeline increased holistically') or a shiny dashboard that shows they are doing something, it seems to do the job.
And good god do they need it; I can tell my boss's boss hasn't been the same since Covid, since he can't walk around and badger us about the recent sports team event.
Right, and it doesn't matter whether that data is accurate or anything. They just need the number to point to if anyone questions them about what they're doing or why they're doing it.
Advertising in a competitive market has all participants locked in a prisoner's dilemma. Coke's ROI is probably pretty bad; if they cut ad spend then Pepsi's ROI would go up.
Everyone says that but does anyone actually try it? At 30 years old I already have my beverage preferences, if I never saw a Pepsi ad again Coke would still be too sweet for me. Who are these people who are swayed by this? Who hasn't tried both and determined a preference already?
Yeah, Pepsi tried it. They spent their Super Bowl ad budget one year on charitable causes instead, and maybe reduced their ad budget overall, and they had significantly reduced sales for the year. Restoring their Super Bowl ads restored their sales.
On this level of brand awareness advertising, the core message is basically "yes, the lights are still on, we still exist". People get used to the ambient level of ads supporting a brand and if it suddenly drops it will negatively affect the product. "They used to be everywhere, have they become sick or what?". A decline in brand awareness advertising will be worse than having never started. The nominal content of the ads is almost irrelevant, to do this job it can be anything from simple "our product is good!" to something benettonesquely edgy.
I think my "theory" is false for other reasons, but that they advertise where allowed doesn't contradict any of the theory. They just haven't successfully colluded to get the prisoner's dilemma solved by the local government in those regions and so all still have to defect.
I wanted to answer with an angry reply but I decided against it, I would however love a little more context to your comment so I can understand.
What does it mean? Do you think this person is lying about the taste for some nefarious reason? Do you think they have some condition where they feel sweeter drinks as more bitter? Where and when did you last check? What are the differences between the last check and the previous one?
From the Coke and Pepsi websites a 12oz can of Pepsi has 41g of sugars while a 12oz can of Coke has 39g of sugars, so the Pepsi is, objectively, sweeter.
Interestingly Pepsi includes both HFCS and sugar, so what the OP describes as "too sweet" might actually be a flavor palate that lacks that bit of sugar in addition to HFCS. (The exact mix is not clear, though Pepsi contains more "CARAMEL COLOR" than sugar so it seems suspiciously like the Pepsi is nutritionally the same as Coke with an extra 2g of sugar in addition to 39g of HFCS.)
(I know it's a late reply.) I guess I meant it as a tongue-in-cheek response to someone not only confident in their own preferences but confident in the reasons behind them, which happen to be false (somewhat objectively). Which is ironic, because this could make them very receptive to an ad for Coke about being less sweet.
I too was surprised to see the post that Coke is too sweet compared to Pepsi as I've never heard anyone say or write that before. I've always heard people say Pepsi is sweeter than Coke. Apparently that was one of the main drivers of "New" Coke back in the 80s, to make a sweeter version of Coke to compete with Pepsi, especially the Pepsi Challenge, which rewarded sweetness due to the small sipping size.
The person you're responding to may have had a similar lifetime experience and that's why they commented on it. Likely not nefarious, simply an observation on something unexpected.
Coke doesn’t tell you that Coke is better. It seeks to link the sight, sound, smell and taste of Coke with things that make you happy. The only thing they say about flavor is that it’s classic - your granddad drank the same stuff.
Many people associate 8oz glass bottles of Coke with Christmas. Fizz commercials are part of the movie theater experience. A happy meal is paired with a Coke.
Pepsi usually has a different message; they used to peddle taste, but usually try to assert that cool people drink Pepsi.
Hardly anyone drinks Diet Dr. Pepper, because most people think even regular Dr. Pepper is gross, but it is nationally recognized as a household name in the US because of the extensive advertising they do for it: "Diet Dr. Pepper tastes like regular Dr. Pepper." Hardly anyone drinks it, but pretty much everyone has heard of it.
Same as Mello Yello a couple decades ago. They advertised it extensively, and when they used to give away free drinks under the cap it was always free Mello Yello. But relatively few people actually liked it. That didn't mean we hadn't heard of it.
Everything wrong with the world (capitalism, climate change, etc.) can be framed as a game-theory coordination dilemma. How do you get people to cooperate at scale in an anti-fragile way (not exploitable)? Solve that and we are post-scarcity.
There are two ways to do it, that I think must work hand in hand.
Firstly, we must reduce the stakes of anyone in any given domain. This can be done by (partially) tying wealth to employment and reducing inequality. Indeed, in a hypothetical society where most companies are mostly worker-owned, the owners of a company do not have much of an incentive to hide climate change, because ultimately if the company folded and they changed domains they would not lose that much, and the cost of climate change is bigger relative to their stake, since everyone's stake is lower.
Another part of the answer will be government, unavoidably. Hopefully, a less unequal economy that has a lot more worker ownership will allow for much more democratic government, as it will be much harder for any individual to have the resources to exploit government directly or indirectly.
That doesn't seem to track with what I have seen in real societies - people will do all sorts of horrible things for petty advantages, or no advantage at all. Look at the sheer damage corruption does, as officials sell government resources for pennies on the dollar because it isn't their money.
Also, even if you somehow made the worker economy happen (which seems to have become less likely since the end of literal cottage industries), given productivity's linkage to capital investment, that would still result in resources to exploit a government. It is called a voting bloc. Coal miners don't want to give up mining coal despite the negative health effects and other mining jobs being available. The uncomfortable truth is that 'a more democratic government' and 'a government that does everything you specifically want' are not compatible.
Coal miners will absolutely never, as a voting bloc, have anywhere near the power of the Energy Industry. It's also much easier to change their minds, by giving them another job at a net zero cost to society, than it is for a billionaire oil tycoon.
I agree that a democratic government won't do everything that should be done, but it's the least worst option we have. The best way to reduce corruption is to attack the class system.
Even when the underlying product works as advertised, dashboards and consoles absolutely sell the product, especially when that's what a lot of decision makers will look at.
Tim Hwang’s new book “Subprime Attention Crisis” talks a lot about this. The whole thing is a house of cards, based on the belief that better ad targeting will yield better results, and they will squint at the numbers in any way possible to keep the illusion alive, because there’s so much money to be had in it.
But so many of the models are bad. Most of the data is awful, and the results, when looked at objectively, aren’t really amazing.
It's a sensationalist headline, with some questionable assertions.
> 36% percent of people in the UK use an adblocker, which means your javascript based website tracking is meaningless
"meaningless" is a strong word. If I run an ad campaign and see an uptick of 20% visitors, that's useful. The 36% is consistent on both side, so deltas are still very meaningful.
If I do need absolute metric -- e.g. distinct people -- I have to decide how to handle adblocking. I can model it, or I can accept the undercount. Honestly, this is largely going to be based on what the advertisers are welling to accept.
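To make the deltas point concrete, here is a minimal sketch with toy numbers of my own; it assumes the blocked share stays roughly constant across periods:

```python
# If ad blockers hide a roughly constant fraction of visitors,
# period-over-period deltas survive the undercount. Toy numbers.
BLOCK_RATE = 0.36  # assumed constant share of visitors invisible to JS tracking

def observed(true_visitors: int, block_rate: float = BLOCK_RATE) -> int:
    """Visitors the JavaScript tracker actually records."""
    return round(true_visitors * (1 - block_rate))

before, after = 10_000, 12_000             # true traffic, +20% after a campaign
obs_before, obs_after = observed(before), observed(after)

true_delta = after / before - 1            # 0.20
obs_delta = obs_after / obs_before - 1     # also ~0.20

print(f"true uplift: {true_delta:.0%}, observed uplift: {obs_delta:.0%}")
```

The absolute counts are off by 36%, but the relative uplift comes out the same, which is the point about deltas.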
> The black boxes inside Facebook and other ad exchanges give you flat out wrong data about how your ads are performing
When you can tie it back to sales, you have pretty hard data. Also, there's third parties out there if you don't trust companies grading their own homework.
> The audiences you're targeting on Google, Bing, etc are fraudulent and don't even exist
Again, fraudulent is a bit of a stretch. Audiences do have waste. That's true in TV, magazines, and digital. It's not a question of perfect or fraudulent.
You’re right that deltas are still meaningful in aggregate, but in my experience the actions the users take and their retention rates are different. The ad-blocking users are more valuable in my opinion because they are more likely to be power users and advocates for your software. So looking at a graph of feature usage will not accurately portray the weighted value of each feature, based on changes to retention and virality – the devil is in the details.
I think marketing works but there's a tendency in most jobs for the work to move from doing the job to justifying the job.
You hire someone to market your product, at first, they market your product.
After ten years, they're spending half their time marketing the product and half the time justifying to you that the marketing works. It doesn't matter if they are good or bad at marketing--it only matters how well they can justify the rates that they charge their clients. If other marketing firms have bad data, you have to get data that's just as bad to compete.
I personally think the only way out of it is to bring it in-house. You can try to formulate what you want from marketing in terms of impressions, CTR, etc., but these are all metrics and if you rely on metrics there are perverse incentives to game the metrics.
This happens in every job, it's just that your marketing department has to spend a lot of money, so they are going to spend a proportional amount of time justifying why they spend that money, and it means that the marketing department wants to work with firms that provide them with the metrics that the department needs to justify marketing spend to the rest of the company.
I work in performance marketing, and I think the way to think about this is not “there’s fraud! We can’t trust anything!”
Rather, it’s how much is fraudulent.
For example, if I spend $1m and $10k was fraud clicks, I might be ok with that. Sure, not ideal, but if I still got great results with the other $990k, the overall campaign still did well.
How do you know how much is fraudulent? Just look at your internal data. If Facebook says it delivered 100 clicks, but you only see 10 show up that is something to investigate. But if you see 90ish, move on with your life.
Part of being in tech is not worrying about edge cases too much. (Unless you are in security or reliability or something where that’s crucial). Otherwise you’d never get anything done.
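A rough sketch of that reported-vs-observed sanity check (the alert threshold and campaign numbers are my own assumptions, not anyone's real tolerance):

```python
# Compare clicks a platform reports against clicks our own logs saw,
# and flag campaigns where the gap looks too large to shrug off.
def click_discrepancy(reported: int, observed: int) -> float:
    """Fraction of reported clicks that never showed up in our logs."""
    if reported == 0:
        return 0.0
    return max(0.0, (reported - observed) / reported)

ALERT_THRESHOLD = 0.25  # assumed tolerance; tune to your own traffic

for campaign, reported, observed_clicks in [("A", 100, 92), ("B", 100, 10)]:
    gap = click_discrepancy(reported, observed_clicks)
    status = "investigate" if gap > ALERT_THRESHOLD else "move on"
    print(f"campaign {campaign}: {gap:.0%} of reported clicks missing -> {status}")
```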
> How do you know how much is fraudulent? Just look at your internal data. If Facebook says it delivered 100 clicks, but you only see 10 show up that is something to investigate. But if you see 90ish, move on with your life.
That isn't good enough IMHO. You will (should) see a high number of clicks; it's just that most/all of them might be from bots...
Really, the fraud itself can be ignored if you just focus on the apparent rate of return. If you have Option A with a high rate of fraud but the highest returns, and Option B with no fraud but worse returns, Option A is the rational choice. Granted, the complication is that it is hard to get the actual rate of return reliably.
> If Facebook says it delivered 100 clicks, but you only see 10 show up that is something to investigate. But if you see 90ish, move on with your life.
Cool, so just give Facebook a 10% bonus for providing a fraudulent service?
The article is about establishing "enormous brands like McDonald's, Coke, Pepsi, Nike, Apple, Ford, Chevy, AT&T, Tide, Crest, Bank of America, Visa, MasterCard, Toyota, Tylenol, Kleenex, Budweiser..."
But internet advertising is dominated by niche marketing from smaller players.
Niche marketing requires targeting. You can argue about whether certain data really delivers effective targeting or not, but if a product is niche, it requires targeting.
If a product is mass, that's a different proposition.
I mean it's not. I've written this from the angle of having only ever worked for smaller brands. Marketers (myself included) like to think we're making decisions based on accurate data and thorough analysis but we might as well just be blindly throwing darts at a dartboard. We'd have the same amount of success.
I’m sad the author has never worked with good marketers. Yes, data collection is imperfect, advertising dashboards lie about attribution, and any metrics that aren’t connected to valuable, verifiable transaction data are terrible if your marketing team follows them in the wrong direction. Marketing (finding the people that could use your products and communicating value) is a real skill and a soft science. Great marketing teams understand how to show value to the human beings at the other end of these digital processes, how to leverage tech to communicate most effectively, AND how to create directionally useful predictive models that connect noisy, messy data to valuable outcomes. There are plenty of bad marketers, but don’t confuse the lack of skill in the teams you’ve worked with for the upper bound of what’s possible in the discipline.
> Until leaders can lead on the strength of their conviction and experience instead of second guessing themselves and their staff based on the inadequacy of data.
I agree with the article in general, but “strength of their conviction and experience“ isn’t better than bad data is it? At least with a data driven approach you can try to figure out what the data you’re actually looking for is instead of just hoping your personal views on the situation map to reality.
When you ask for something to be one way vs. another way in a code review, is that a data driven decision? Are you citing a study with a statistically significant result that projects written one way actually performed better on some metric of maintainability?
Tech’s contempt for human expertise is bizarre, given that it’s what we do all day.
It would be nice if we had more studies like that. Tech's insistence that productivity is unquantifiable has allowed a large amount of superstitions - sorry, "best practices" - to flourish.
Code review centered around maintainability isn’t comparable to the effect marketing has on customers. It’s more comparable to how marketing manages its process of ad creation, connections in the industry, etc. Indeed human expertise is useful for having a starting point and determining what directions one should go in to fix things. But without data, how are you supposed to tell that your changes have actually had an intended effect? Gathering data around marketing would be more comparable to having monitoring on your software systems. Much like not having any way to know if the code changes you just made have actually fixed a performance issue you were seeing in the system, not having any data to judge whether a change you made to your marketing had the intended effect makes making such changes pointless.
Experts are wrong all the time regardless. The best of them tend to like being able to check their work against reality.
The data-driven decision is 'passing the smell test'. If you have it one way or another, do you find it easier or harder to work with (a first-order approximation)? Do your coworkers find it easier or harder (a second-order approximation, to check that within the relevant group you aren't an outlier who happens to know pointer math really well while everyone else finds it far harder)?
For the code review task they don't need precise statistical data; they just need to know which direction the arrow points for a decision in the main case.
It's not just bad data collection, but a lot of the data is misleading.
The best click through rates are on pornography or gore. Failing that, it's the closest thing to either - trypophobia triggering things, suggestive and sexy ads, miracle cures. Clickbait often has things that people would click without thinking, but upon entering the page, realize they've been scammed.
Game ads seem to be hit worst by this. They show gameplay that does not reflect the game, or some cringey bad playing, implying that 80% of players will fail to reach the next level.
There are also things like user tolerance for login walls or notifications, which only detect the fail/uninstall point or the amount of income from those alerts, not the fatigue from getting 5 unnecessary notifications a day, which leads to someone uninstalling the app a month later.
Time for my I once worked at a marketing analytics company anecdote to make its biannual appearance:
A lot of marketing data is woefully inaccurate. I was once responsible for distributing a data set to company clients which covered interests and personal info for the entire UK population.
Not only was most of the data about my dad wrong, the things that were accurate were years out of date.
__
This was akin to the "Google audience breakdowns that don't exist" part of the article.
My information didn't exist. A colleague's email was completely wrong, and his profile indicated he liked going on holiday even though he had never been outside of the UK.
Doing marketing analytics right isn't easy, and the mathematical and statistical chops required are not often a skillset that coincides with great marketing skill. Larger organizations would be well served to split out a team for analytics separate from the team doing the marketing itself. Smaller teams, I think, should try to rely on finance for the same job.
>Modern marketing is all about data and however hard you might try, you can't spend any time around marketers online without being subjected to endless think pieces, how-to guides, ebooks or other dreck about how we need to track and measure and count every little thing.
I understand that this might be the perceived image of a marketer, because the marketers that shove this in our faces are the loudest ones. But it doesn't mean they represent marketers.
Marketing is way more than advertising and media buying, which is what most of this article is about.
Either way, data should be taken for what it's worth - different data has different value, and people that work with this know it. It helps you make some judgement, it can give you insights, and can help you understand if you're doing some things right.
Data always served this purpose and decisions were still made despite the data granularity/quality - because once again, it was taken for what's worth. You still have major brands leaning on share of voice, GRPs, Reach, which are children of TV/Radio.
If you ask me, is it worth doing A/B testing? The answer is I have no idea, because it depends on the brand/product/budget/audience and the message itself.
I'm a strong believer that a solid communication strategy stomps a data-heavy approach. Some people might say: "What if it doesn't work?!" Well, then it doesn't work; or better yet, it will work to some extent but might not generate all the expected outcomes.
But that's part of it: you have to take risks in these decisions, and there's no data that will make the decision for you. That's why I argue that AI won't replace marketing anytime soon. I dare say AI might replace the bulk of programmers before it replaces marketing.
Unless you think marketing is eating data and spitting out insights. Then you have already been getting replaced for the past 5 years.
I'm working on an (open-source) alternative to Google Analytics [0] and it's true, you cannot track users with perfect precision. It's technically not possible to achieve, but you can fix some of the issues, like tracking on the backend to get around ad blockers, or by setting up a CNAME record and serving the script from your own domain.
Anyway, even if tracking (or analytics, which is not the same!) isn't perfect, it's at least an indicator of how well you're doing. Which is better than having no data at all. Paying for ads that were "clicked" by bots or simply don't reach the target audience is a whole different story and the author is right, you will probably waste money.
Depending on the script (if it's configurable), you can send the request to your own server instead of some analytics solution and redirect that request using a CNAME record. Combined with the script being served from your own domain, blockers will think it's a regular request and won't block it. They usually use a blacklist to filter unwanted traffic (like requests to GA). As we are still working on Pirsch, we don't have documentation to explain the setup in more detail yet, but you should be able to find instructions somewhere.
[Edit] Okay, so it seems like blockers will catch that; see iamacyborg's comment. But we will provide a nice and simple backend integration which basically does the same as the script, but is unblockable.
[Edit #2] Looks like it will only block cookies using this technique, so it should work as described.
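For illustration, a minimal server-side hit counter might look like the sketch below. To be clear, this is not Pirsch's actual integration (their docs weren't written yet); it's a generic Flask example showing why backend tracking can't be blocked client-side: the logging happens while serving the request, so there is no separate tracking call for a blocker to intercept.

```python
# Generic server-side page view logging sketch (not Pirsch's API).
# Because the hit is recorded on the server for every request, ad
# blockers running in the browser never see anything to block.
import sqlite3
import time

from flask import Flask, request

app = Flask(__name__)
db = sqlite3.connect("hits.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS hits (ts REAL, path TEXT, ua TEXT)")

@app.before_request
def log_hit():
    # Record timestamp, path, and user agent for every incoming request.
    db.execute("INSERT INTO hits VALUES (?, ?, ?)",
               (time.time(), request.path, request.headers.get("User-Agent", "")))
    db.commit()

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    app.run()
```

(You'd still want to filter bots and de-duplicate visitors server-side, which is where the real work in such a product lies.)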
I don't doubt that marketing works but always had a feeling that a lot of it is busy work (e.g. reorganizing AdWords campaign structures every couple of weeks for no apparent reason) and a lot of data-driven success stories probably boil down to right time right place.
Clean randomized experiments naturally provide good answers and are especially feasible in this space but performing those properly, especially with regard to attribution and user identification, presents a whole zoo of issues that are inherently different from what a lot of marketers seem to work on.
It's extremely hard to make use of quantitative data without context.
For example, 36% of UK users may use an ad-blocker, and 90% of them may be a tech-oriented audience, which makes a difference if your site is targeting moms.
I've found the most valuable marketing data to be qualitative data from people who've done something first-hand (I'm collecting such data on acquisition channels [1]). Qualitative data puts number-based data in context and is something you can take action on.
If you run marketing to drive superficial vanity metrics like ad clicks, email opens, unique website visitors, then yes you're 'addicted to bad data' but more so just lack attention to detail and aren't good at your job.
If you run marketing to capture high-intent traffic, measure positive engagement with your content, and correlate those indicator metrics with product activation and monetization, then you're simply using data – even if it's partially flawed – to drive meaningful growth for your business.
This piece comes off as weakly constructed and from someone who doesn’t understand the world that marketers and salespeople live in.
If I’m a marketer, I’m not looking for perfection in my open rates for instance, I’m looking for broad, directional signals about what works. I know that’s the best I can likely get, and it’s miles better than having nothing.
Nothing new here. It's been widely acknowledged for decades that the television and radio ratings (based mostly on voluntary self-reporting by a not-necessarily-representative sample of the population) are not accurate, and yet they drive the rates charged for TV and radio advertising.
All true. But what are the alternatives? Ignore the data? All of it? That doesn't feel right either.
I would think the best approach is to not over-trust your data. To be aware not only of what it might tell you, but of what it might not. If you must guess or extrapolate, then do so with eyes wide open, not blind and reckless.
This article is massively misleading. Its key points:
> 36% percent of people in the UK use an adblocker, which means your javascript based website tracking is meaningless
It doesn't mean it's meaningless, it means it's only based on part of your traffic. But 2/3 of your traffic is still a lot of traffic, and ad blocker users aren't that different from the rest of your users. Additionally, ad blockers generally do not block first-party JavaScript, or most first party tracking, so your sample may be much better than that.
> Email open rates don't actually indicate that an email was opened, merely that a request was made to a server
Emails are read by a mixture of three kinds of clients: ones that never download images, ones that always download images, and ones that only download images when the email is opened. This means that absolute "open" rates are not that meaningful, since you don't know how many users are in each category. On the other hand, relative "open" rates are still very meaningful, and an email that has an unusually high "open" rate will in practice have been opened an unusually large number of times. If you want, you can then calibrate with click rates.
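As a toy illustration of the relative-rates point (numbers invented; it assumes the mix of image-loading clients is roughly the same for both sends to the same list):

```python
# Absolute "open" rates are distorted by clients that always or never
# fetch tracking pixels, but comparing two sends to the same list still
# tells you which one did better. Toy numbers.
def open_rate(opens_tracked: int, delivered: int) -> float:
    return opens_tracked / delivered

email_a = open_rate(2_200, 10_000)   # 22% tracked "opens"
email_b = open_rate(1_100, 10_000)   # 11% tracked "opens"

# The unknown client mix applies to both sends, so the ratio remains a
# usable signal that A outperformed B (calibrate with clicks if you can).
print(f"A vs B relative open rate: {email_a / email_b:.1f}x")
```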
> The black boxes inside Facebook and other ad exchanges give you flat out wrong data about how your ads are performing
This links to https://www.etcentric.org/facebook-agrees-to-40-million-fine... where Facebook was fined for misrepresenting one of its video ad metrics. The post presents a single (newsworthy!) issue as if it applies to every metric on every exchange. Additionally, advertisers are generally able to verify the claims of the exchanges through JavaScript that runs inside the creative, which dramatically limits the ability of any unscrupulous exchanges to cheat.
> The audiences you're targeting on Google, Bing, etc are fraudulent and don't even exist
This links to https://www.forbes.com/sites/augustinefou/2020/11/02/got-lar... which references (but does not link to) "A new report from cybersecurity company CHEQ" and claims "Their data also estimates that 1 in 5 clicks are not from humans, and greater than 10% of the spending is likely siphoned off by ad fraud." Setting aside the question of whether the statistics are correct, there's a world of difference between 10-20% fraudulent traffic and "don't even exist"
> The exchanges you're purchasing media space from are cheating you
This links to https://www.adexchanger.com/mobile/is-ubers-new-ad-fraud-law... which describes a suit by Uber against "Hydrane SAS, BidMotion, Taptica, YouAppi and AdAction Interactive". We don't know how the suit is going to turn out, but these five companies are tiny players in the advertising world. Again, highly misleading.
(Disclosure: I work on ads at Google, speaking only for myself)
I'm not familiar with it and your link is behind a paywall. Would you be up for summarizing or linking to something publicly available where I can read about it?
Okay, I've now read your links, though I don't have any Google internal information here.
It sounds to me like people who hired Google to bid for them on various exchanges ended up with some invalid traffic. The lawsuit is over, essentially, who is responsible for the risk when bids are wasted on fake traffic. The OP is essentially arguing that the entire advertising industry is fraudulent, and that there is no legitimate traffic, which is worlds away from a dispute over how to handle the risk of fraudulent traffic when it does occur.
(Again: I work for Google but I don't know anything about this case and I'm speaking only for myself)
I wrote the article, so apologies if it's not clear. I don't mean to suggest that the entire online advertising ecosystem is fraudulent. However, there are billions of dollars worth of digital ad fraud occurring every year. Even the IAB agrees on that one.
Note: This is a really interesting post - please don't get my commentary wrong, in general I think he's spot on.
> Modern marketing is all about data and however hard you might try, you can't spend any time around marketers online without being subjected to endless think pieces, how-to guides, ebooks or other dreck about how we need to track and measure and count every little thing.
Yeah, there's a ton of chaff out there from marketers selling to, well, marketers. It's a fair point about the state of things that there is way too much noise and not much signal.
> We've got click rates, impressions, conversion rates, open rates, ROAS, pageviews, bounce rates, ROI, CPM, CPC, impression share, average position, sessions, channels, landing pages, KPI after never ending KPI.
Yep, too many KPIs! The main problem facing marketing is often a plethora of information that breeds useless analyses, diverting focus away from just buckling down to gauging incrementality.
> That'd be fine if all this shit meant something and we knew how to interpret it. But it doesn't and we don't.
I disagree with this. When we get way too far into the weeds we can lose perspective, but plenty of these metrics cited above are not just meaningful but _critical_.
> The reality is much simpler, and therefore much more complex. Most of us don't understand how data is collected, how these mechanisms work and most importantly where and how they don't work.
Truth.
> 36% percent of people in the UK use an adblocker
This is probably skewed, as the data is based on a survey where respondents using an adblocker are probably more likely to respond. The best way to get information like this broadly across the industry is comparing server and JS-based analytics logs to get real ratios, and it was closer to 8% when I was last part of a large-scale test (2.2bn sessions across 84 countries). Even that number varies highly by country, device, demographics and what the user is viewing.
You could probably solidly bet that ~40% of HN users have ad blockers, but <1% of CNN viewers do. That's how big of a swing it is.
> which means your javascript based website tracking is meaningless
Not entirely true. JS-based analytics is useful, to a point. It should always be considered one signal of many; use it for insights, but not necessarily as a source of truth. _Anyone relying on web analytics at scale for insights should also be using server logs as their source of truth_.
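A crude version of that server-log cross-check might look like this (numbers invented; bot traffic inflates server logs, so treat the result as a rough bound rather than a precise ad-blocker rate):

```python
# Estimate the share of pageviews the JS tracker never saw by comparing
# server logs against JS-based analytics for the same period. Toy numbers.
def untracked_share(server_pageviews: int, js_pageviews: int) -> float:
    """Fraction of server-logged pageviews missing from JS analytics."""
    return 1 - js_pageviews / server_pageviews

print(f"{untracked_share(1_000_000, 920_000):.0%} of pageviews untracked by JS")
```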
> Email open rates don't actually indicate that an email was opened, merely that a request was made to a server
Amen! They are generally BS and should only be considered an indicator, and your email CTR should always be based on send->click. That's it.
> The black boxes inside Facebook and other ad exchanges give you flat out wrong data about how your ads are performing
Meh. The video thing is old news and frankly only mattered about completion rates. If we're talking about FB, what matters is reach and impressions that drive overall lift in your business. FB has _by far_ the best holdout testing in the industry that gives a solid gauge of your total return on impressions. Clicks on FB, YT, display, etc. are just a signal - the real goal is who saw your ad and then takes action afterward.
> The audiences you're targeting on Google, Bing, etc are fraudulent and don't even exist
This is a tricky one. If you're talking self-reported content targeting on display, yeah - it's crap. If you're talking about FB/Google/Bing/Oath audience data based on behavior? Solid gold, in descending order on that list. Don't forget that lookalike audiences are also an amazing tool, and they're definitely not fraud if they perform as well as they do.
> The exchanges you're purchasing media space from are cheating you
No comment on the linked article. And yes, there are massive amounts of fraud in the mobile ad ecosystem.
> And even if we know how the data is collected, what it means and what it's actually tracking, most of us don't have the technical chops to analyse the data we've collected. I don't mean to rag on anyone by saying this, but we do need a reality check.
This is getting better imho. Over time Marketing Analytics has become a solid discipline and I've been incredibly fortunate to work with many people that have produced amazingly solid insights with massive impact. It is getting better, but like I said before we often get way too far in the weeds and lose perspective. Simple is often the solution.
> And look. I get it. Having tangible data allows us to demonstrate that we're doing our job and we're trying to measure and improve what we're doing. But as Bob Hoffman rightly points out - that's not how brands are built.
> The numbers are often all we have to prove our case, to get more budget and in extreme cases, to continue to stay employed. We'll remain in this mess until we can separate marketing from short sighted and poorly informed decision making. Until leaders can lead on the strength of their conviction and experience instead of second guessing themselves and their staff based on the inadequacy of data.
> I don't know what the way out of this mess is, or what the path to success looks like. All I know is this.
> We're addicted to bad data.
I _absolutely_ agree about separating marketing from short-sighted and poorly informed decision making. This is the single biggest issue facing us right now and it continues to hold growth back in so many companies.
Yes, marketers are gullible and we all know it, whether we are creating or selling the data or an employee at a company based on this stuff. Valuing companies based on user base is a Ponzi scheme and there is no reason to care. The only result of the big settlements next decade will be to add an extra disclaimer on page 86 of the prospectus so that "investors are protected". Way to go, future regulators.