This is exactly why social networks are far less valuable than properties like Google, which started with algorithmic foundations rather than rebranding the next GeoCities. I'll never trust Facebook's numbers on fake profiles, especially when Yahoo kills close to a million fake profiles every month and Ashley Madison was rife with them. Reddit started this way too, with tons of bots, fake accounts, etc. We just don't know how many fake or astroturfed (https://en.wikipedia.org/wiki/Astroturfing) accounts are really out there. I have a close friend who ran a consumer-oriented search site with 50 million monthly active users, and he showed me that if certain sites (not his) did not make it easy for advertising groups and spammers to auto-generate fake accounts, they just would not get the bloated monthly active user numbers they wanted.
Ironically, Google the term "buy facebook accounts". This is the elephant in the room nobody wants to talk about in the social networking space. 50% or more of social networking profiles could be fake, given the bots, systems, and marketplaces that auto-generate this stuff en masse to spam and siphon ad dollars.
Another way to look at this is how and why virus makers target PCs, along with how many PCs are currently infected.
Not many fake 'searches' are happening compared to fake bots and social networking profiles. Yet another reason why Google makes $70 billion per year.
Apropos: Palantir crows about being able to use multiple sock puppet accounts to spam social networks - "persona management" [1].
It's a big market - and there are vendors who specialize in fake accounts - that means there's a lot of interest/money in being able to fake popularity.
The future ground of all social networks is Astroturf.
Some outfits don't just do fake searches but also fake downloads. E.g., visit https://bintray.com/groovy/maven/groovy/view/statistics#stat... then click on the "1 year" date range and wonder why the daily download numbers suddenly multiplied by 10 in late May. You'll have trouble believing the 2.5 million download figure for Groovy from Bintray, and wonder how long they've been running the same ploy with Maven.
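A sudden 10x step like that is exactly the kind of thing a trivial baseline check catches. A minimal sketch of such a spike detector follows; the window size and multiplier are arbitrary illustrative assumptions, not anything Bintray publishes:

```python
# Flag days whose download count jumps well above the trailing median.
# Window size and multiplier are illustrative assumptions.
from statistics import median

def flag_spikes(daily_counts, window=14, multiplier=5.0):
    """Return indices of days whose count exceeds `multiplier` times
    the median of the previous `window` days."""
    suspicious = []
    for i in range(window, len(daily_counts)):
        baseline = median(daily_counts[i - window:i])
        if baseline > 0 and daily_counts[i] > multiplier * baseline:
            suspicious.append(i)
    return suspicious

# A 10x jump like the one described above trips the detector:
counts = [100] * 20 + [1000] * 5
print(flag_spikes(counts))  # [20, 21, 22, 23, 24]
```

The point is that a publisher-side stats page could run the same check; when it visibly doesn't, the charitable explanations get thin.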
Very interesting. I assume that he is doing that in order to collect up-to-date search result data. Or is he actually attempting to manipulate search results (or AdWords ratings) in some way?
>This is the elephant in the room nobody wants to talk about in the social networking space.
Indeed. I was working for a Facebook gaming company around 2010. Through the "recruit a friend" bonuses and "have X friends click this" mechanics, there were incentives to simply make fake accounts to obtain the goals. We looked at our users and figured about half were fake Facebook accounts. I have to imagine the numbers for other companies, like Zynga, were similar, as they used the same mechanics.
From an advertiser's point of view, those fake profiles are still real humans, right? So your ad is still being seen, and those Facebook profiles probably share enough of the social graph with a real Facebook profile that you can still do demographic targeting, no?
From an ad network's point of view, maybe, but from a publisher's point of view, no. This is just another variation on click fraud, because fake people don't buy real goods.
If you actually read the article, the Heineken advertiser's response is:
> “It was like we’d been throwing our money to the mob,” Amram says. “As an advertiser we were paying for eyeballs and thought that we were buying views. But in the digital world, you’re just paying for the ad to be served, and there’s no guarantee who will see it, or whether a human will see it at all.”
That's a response to bots viewing ads, not people who register a second Facebook account so they can send themselves Farmville gifts. They aren't fake people, they're real people wearing fake mustaches. I am sure they are less valuable to advertisers than Facebook profiles with real demographic data but it's still better than an ad never seen by a human.
But the GP is talking about a case where the ads are being viewed (so there is a guarantee that a human sees it); just not by as many unique people as the number of accounts would suggest.
Not necessarily. I'm not in the SEO or advertising game, but intuition tells me that having the same person see an ad twice is less valuable than two people each seeing it once. Additionally, fake accounts are likely to contain less information to target advertising with, so advertisers are only hitting those fake accounts with overly broad appeal and not really creating content that might interest the user behind the fake profile.
From the post that I was replying to, posted by the person you're replying to:
> Indeed. I was working for a Facebook gaming company around 2010. Through the "recruit a friend" bonuses and "have X friends click this" mechanics, there were incentives to simply make fake accounts to obtain the goals. We looked at our users and figured about half were fake Facebook accounts. I have to imagine the numbers for other companies, like Zynga, were similar, as they used the same mechanics.
If I have 100 fake accounts, how many of them have access to my wallet to buy what you're selling in the ad? If 100 of them click your ad, how should you count it: 100 potential customers, or 1?
Most of the fakes seem to represent a 0% chance of making a sale.
Yes, lots of scepticism here, but everyone (at least everyone who didn't get down-voted) seemed to buy into FB's claim of 1 billion users in a single day.
Boris buys "links", not "likes". Some of the companies fulfilling Boris get those links by simply buying iframes on doubleclick, so Google makes money, Boris makes money, and the advertiser gets a comScore cookie to "prove" it's in-demo.
That sort of thing has nothing to do with the type of fraud in the article.
Boris sells the contents of the iframe he bought on DoubleClick to Myspace (as "paid traffic"), which in turn lets Myspace sell it on video exchanges.
Google doesn't care even a little bit about this, since they still make a fortune off the Borises of the world.
The reporter originally claimed there were something like 5,000,000 women bots and 12,000 real women. Turns out the numbers were more like the other way around.
I'm not linking to her article admitting her mistake as it's pathetic how she tries to cover up her mistake with more allegations based on incomplete data.
This is a perfect example of why blogs shouldn't be relied on as news sources, and you're the one asking me for citations? Where are your citations?
While quite a lot more women used the service than the analyst initially thought, it is still true that AM did a lot to encourage men to pay based on contact with fake accounts, and it apparently worked: http://www.gizmodo.co.uk/2015/09/one-chart-that-shows-how-mu...
Indeed. "What I have learned from examining the site’s source code is that Ashley Madison’s army of fembots appears to have been a sophisticated, deliberate, and lucrative fraud. The code tells the story of a company trying to weave the illusion that women on the site were plentiful and eager. Whatever the total number of real, active female Ashley Madison users is, the company was clearly on a desperate quest to design legions of fake women to interact with the men on the site."
Ashley Madison was still a sad, exploitative scam.
I find the whole soft white underbelly of internet advertising interesting in that it pollutes the entire advertising pool. Even ads that are "legitimate" are lost in the sea of clickbait, click fraud, and morass covered in the article.
It makes me wonder about companies that get a large part of their income from advertising (Google...). Once the ad market descends into a cesspool that no legitimate company will dip a toe into, can companies that depend on advertising revenue survive on a self-serving, artificial, and mostly automated ad market?
I quite like the idea of a self-serving, artificial and mostly automated ad market. Maybe making it wholly automated would be an improvement.
Like: a browser plugin that would randomly click ads in a different browser profile, maybe through an IP-masking proxy, without ever displaying them or the pages they load to the user (hey, maybe without ever traversing the last mile from the proxy to the user's computer.)
In this way, and assuming the clicks can be made indistinguishable from actual for-real user clicks (maybe a tall order if proxies are involved?), the whole ad market automation circle could complete while still retaining a usable web. The online ad economy spins off to become a self-sustaining, fully automated exchange bubble, visible to the external world only at its interface to participants' billing systems and analytics.
Publishers and ad networks win because clicks, network infrastructure providers win because traffic is lucrative and content providers can flourish, users win big time because they have all the benefits of an ad-supported free-beer web with none of the down side (actually seeing and being tracked by ads, that is.)
So long as the advertisers (or, more specifically, the campaign managers' bosses) don't catch on, what could go wrong?
> In this way, and assuming the clicks can be made indistinguishable from actual for-real user clicks (maybe a tall order if proxies are involved?)
Unless your automated system actually spends money then it will always be distinguishable from the real thing. Advertisers, no matter how wealthy they might be, cannot afford to pay for a "fully automated exchange bubble" that gives them zero returns.
I never realized how bad fraud could be until I swapped out my old Google AdSense code for new. For a while, AdSense reported $200 a month in estimated earnings, which was more than the $20 or so I usually get. Then, after Google finally corrected for fraudulent clicks, I actually got paid my normal $10-$20. Eventually the estimated earnings started to account for fraud, so I no longer see $200 projections. That means the majority of the actual clicks were determined to be fraud. Either that or Google is shaving clicks. I lean more toward the first scenario.
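Taking those numbers at face value, the implied fraud rate is striking. A quick back-of-the-envelope calculation, assuming the estimate and the payout cover the same clicks at the same rate per click:

```python
# Back-of-the-envelope fraud rate implied by the numbers above,
# assuming estimated and paid earnings reflect the same per-click rate.
estimated = 200.0   # monthly earnings before fraud correction ($)
paid = 20.0         # actual payout after fraud removal ($)

fraud_rate = 1 - paid / estimated
print(f"{fraud_rate:.0%} of clicks discarded as fraudulent")  # 90%
```

Roughly nine out of ten clicks written off, on one small publisher's numbers.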
So Google already detects and refunds advertisers for fraud.
I think cost per click will simply keep declining as the erosion of value caused by that cesspoolishness is priced in. It will never go to zero though because somewhere in there is a core of real influence on real buyers of goods and services.
I'm not worried about Google. I'm worried about user experience on the web as advertisers and publishers try to compensate ever more aggressively for the growing ineffectiveness of their business model (or is it greed?)
Average user experience on media sites is in rapid decline right now. If there isn't a whole page ad you need to click away, you can be sure there will be a popup for some customer experience survey or something else that acts like what we used to call a modal dialog box.
If I were in the business of defacing websites, I would like to replace all of those by an old Windows 3.1 modal dialog saying:
These things mimic biological systems. Fraudsters are disease and as you have noted threaten to kill the host they rely on. But the whole reason Google has survived is because they have built an immune system (ad fraud team) and antibodies (fraud detection systems) that keep it moving along. Over the years there have been plenty of ad exchanges that succumbed to the disease and died.
That's not to say Google will survive in perpetuity. Disease has a way of iterating and evolving. But big G won't go down without a fight.
Nonsense. Google has "survived" because they benefit massively from ad fraud just like the fraud in the article. Always have.
I actually flew out to New York and showed Douglas de Jager this specific trick back in June, and he said that it's not their problem, and until media buyers agree, he's probably right.
Any clever advertiser doesn't judge the success of a campaign by its number of clicks, but by its number of conversions (and ultimately, revenue). What's become a new pain in the advertising space is spam/ghost referrals, which mess up your analytics for the sole purpose of making you visit their websites, while also adding to the stack of fake audience on your site (which the article fails to mention).
The really crazy thing is that tracking from click to conversion is still not a truly solved problem!
e.g. When a person clicks, and then closes the browser and/or views other web sites, only to come back as direct traffic later and convert. The funnel is very hard to track and very easy to lose.
Not that I do this full time (I don't), but I've yet to really see a solution that lets me truly match up a person who clicks an ad and then converts.
At best, I can roughly correlate ads to a bump in revenue.
Your example is actually pretty easy to track. That's why sites drop cookies and keep lookback logs.
Harder to track is when someone clicks on a link on mobile, and then switches to desktop to complete the transaction. Unless everyone is logged in, you have no chance of tracking the funnel.
I know nothing about advertising, but isn't this method thwarted by users (likely small in number) that either do not allow cookies or dump cookies after a session? The latter type would only affect tracking if the transaction was completed in another session. I would think that IP tracking is somewhat useless due to shared public address among users.
Really, what I'm asking is: is it trivially possible to track a user who does not store cookies?
Yes, dumping cookies makes it much harder to track lookback conversions. Some networks will use fingerprinting (IP, user-agent-- panopticlick style stuff) to supplement their data, but it's much less reliable and a determined user, like someone blocking or dumping cookies, is going to defeat it.
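For illustration, a supplemental fingerprint of the kind described might just hash a few request attributes together. This is a toy sketch; the attribute choice is an assumption, and real Panopticlick-style fingerprinting uses many more signals (fonts, canvas, plugins, etc.):

```python
# Toy browser fingerprint: hash a few request attributes together.
# Real fingerprinting uses far more signals and is still defeated
# by determined users (VPNs, spoofed user agents, and so on).
import hashlib

def fingerprint(ip, user_agent, accept_language):
    raw = "|".join([ip, user_agent, accept_language])
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

a = fingerprint("203.0.113.7", "Mozilla/5.0 (X11; Linux x86_64)", "en-US")
b = fingerprint("203.0.113.7", "Mozilla/5.0 (X11; Linux x86_64)", "en-US")
print(a == b)  # same attributes -> same fingerprint, no cookie needed
```

The weakness is obvious from the code: change any one attribute (new IP from a proxy, updated browser) and the fingerprint breaks, which is why it's only a supplement to cookies.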
In some cases it's not even the same person doing the purchase, which adds another layer of complexity. Especially common with services that might be purchased by a business. Lately I'm getting a bunch of seemingly targeted PaaS and IaaS ads. I click on them occasionally. If one turns up a service I find interesting, there's a good chance the sale it leads to will be initiated on a different computer, by a different person.
This even happens pretty regularly for consumer stuff. I've suggested travel-related things to my parents, for example, which have led to direct sales: I saw an ad for a sale at $hotel_chain, remembered that my parents are taking a trip in October, tell them about it, and they book. It's pretty hard to track back their purchase to the ad that was shown to me. (Not entirely impossible, though, e.g. giving out different coupon codes is one method that print advertising uses.)
CPM can drive your costs down on a highly targeted niche. Let's say you want to target female tech journalists living in SF and using Slack: you'd rather use CPM than CPC, as you're pretty sure only 20 people match the criteria, and even though you'd pay $50 per 1,000 impressions, you'd only pay $5 to ensure each person in this particular niche sees your ad 5 times!
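The arithmetic behind that claim, spelled out (the figures are just the ones from the example above):

```python
# CPM cost for a tiny, fully enumerated audience (numbers from above).
cpm = 50.0            # dollars per 1,000 impressions
audience = 20         # people matching the niche criteria
views_per_person = 5

impressions = audience * views_per_person   # 100 impressions total
cost = impressions / 1000 * cpm             # $5.00
print(f"{impressions} impressions cost ${cost:.2f}")
```

Because you pay per impression rather than per click, a vanishingly small audience makes the total spend vanishingly small too, no matter how high the CPM looks.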
I can count on one hand the number of people in this industry that like hard work.
It's not just the wrong metric, it's a fundamental misunderstanding of technology: for example the "viewable impression" as being anything other than software that is telling other software that it's on the screen...
The (Myspace) vehicle in the article is being sold as CPM pre-roll video, not per click, since the expectation is that people don't click video ads they're watching before the YouTube clip they came to see.
Modern web advertising networks are an example of data telling lies we (or advertisers) want to hear.
The dream has always been to connect each advertising dollar spent to revenue generated. The reality is that data only exists in people's heads, is spread out over time, or is in their social interactions. My hunch is at least half, if not more, of all positive impact of ads (for the ad buyer) is generated this way. The data isn't low quality... it is literally impossible to collect.
For example: You research items on your desktop, then actually make the purchase on your phone while taking the train home. Maybe they can connect you to that purchase with enough tracking; So what if your neighbor or coworker asks about the product? Can't track that one.
Another example: You hear ads on a podcast for Igloo and when work starts asking about preferences for Confluence vs Jira you mention looking into Igloo. Your company ends up adopting it. Later you grow from 20 people to 3000. There's absolutely no way for Igloo to connect the dots leading to a 3000 user account. Let's say after discounts that's $300k/year. If Igloo paid $200/episode * 52 episodes/year * 20 podcasts = $208k. That's an absolute steal just to acquire that single customer.
Yet another example: You may be 24 with no kids living in an apartment but will you always be that way? Smart car makers understand that if you have a good experience with your first car in their brand you're much more likely to buy a larger car from them when you have kids, or a more luxurious car when your career advances. How can you figure out if the dollars spent advertising to the college kid with no money is wasted in that scenario?
So that's the great lie: that somehow all the tracking cookies, comScore profiles, etc. will make advertising more effective, or have any measurable benefit at all, regardless of fraud or click bots.
Advertising on the web right now is a fool's errand in many cases. I'm glad because it gives the small guys like me a chance to exploit the system to buy ads on lower-volume sites directly from the content creators and reach a good sized audience.
This is such a re-hashed story. While not good, it's not the big issue people seem to think they keep discovering.
For non-brand advertising this is easily solved by down-funnel tracking. Any marketer worth their salt tracks to the 'ultimate goal' and will avoid shadier networks through results-based tracking. At the end of the day, marketing value comes down to a ROMI metric. It doesn't matter if 50% of the impressions are fake as long as the total investment pays off. For brand marketing I simply follow the networks that show DM performance as an indicator of quality display.
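To illustrate why fake impressions wash out of a results-based metric, here's a toy ROMI (return on marketing investment) calculation; all the figures are invented:

```python
# Toy ROMI calculation: impression counts, real or fake, don't
# appear anywhere in it. All figures are invented for illustration.

def romi(revenue_attributed, marketing_cost):
    """Return on marketing investment as a ratio."""
    return (revenue_attributed - marketing_cost) / marketing_cost

# Suppose half the million impressions you bought were fake. If the
# tracked conversions still bring in $15k on a $10k spend, the campaign
# pays off regardless of the fraud rate.
print(romi(15_000, 10_000))  # 0.5, i.e. a 50% return
```

The fraud still matters in aggregate, of course; it just shows up as a lower ROMI on shady networks, which is exactly the signal the parent is using to avoid them.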
Generally the worst spot for junk clicks is mobile display, particularly in-app. I'd never advise RON campaigns here without tight monitoring. I like to build whitelists rather than blacklists for companies whose mobile display marketing I handle.
Interestingly, I've been noticing increasing bot clicks on Google search ads. If I can detect this, it makes me really suspicious as to why Google, with all their data and prowess, is not.
> The most startling finding: Only 20 percent of the campaign’s “ad impressions”—ads that appear on a computer or smartphone screen—were even seen by actual people.
> “The room basically stopped,” Amram recalls. The team was concerned about their jobs; someone asked, “Can they do that? Is it legal?” But mostly it was disbelief and outrage. “It was like we’d been throwing our money to the mob,” Amram says.
Every time I see an account of an advertiser seeing a viewability metric, it reminds me how scare tactics always work to drive sales. Viewability tech is very nascent and is often just guessing because there are so many variables out there that can give it both false positives and false negatives.
However, giving such numbers to advertisers drives sales of your tech big time, even if the number cannot be verified because it's such a scary number.
Amram is trying to become his own brand, so I skip articles that he's got a quote in simply because he's not out to solve any problems, just get an agency job in a few years. I don't think he actually understands ad fraud at all.
Here's the thing: The (Boris) sites that Myspace is selling are obviously crap. Absolutely worthless. The traffic he's buying is a bunch of fraud redirects, and he thinks it's the media buyer's responsibility to be choosier.
However, Christopher Barnet of Myspace is selling these sites as genuine video preroll that retails at around $6-8 per thousand views. He's selling them on ad exchanges, and they're buying like crazy because the traffic reports as highly "viewable", it's "in demo" (meaning a lot of these users have a comScore cookie), and of course, because nobody believes Myspace is defrauding them.
So here we are: At least 80% of Myspace's ad inventory is fraud, and nobody's going to ask for their money back because "hey, we got fooled too".
This is absolutely hilarious. And another powerful argument in favour of powerful ad blockers. Let the ad malware talk to the bot viewers and leave us out of it.
(That said, I'm still trying to work out how to unblock Project Wonderful, a non-arsehole ad network, in uBO. Seems to require picking the precise JS they serve.)
Would be an interesting tactic for e.g. Microsoft to secretly set up a click-spamming botnet so as to pollute Google's click-through measurement enough that advertisers no longer trust it. Would that even be illegal?
Aside from bot traffic, a significant percentage of "legitimate" traffic seems, anecdotally, to be engineered accidental clicks: the mobile site that is constantly pushing content around in the hope that one of your screen interactions accidentally yields an ad click. As one of endless examples, a well-respected, major recipe site has a mechanism to change the servings: first you click a "servings" button, then the actual serving count. Several hundred milliseconds after you click "servings", an ad appears exactly where the count input was. Clearly, considerable engineering effort went into designing this, and many other, accidental interactions.
For what? I can only speak for myself but my immediate reaction is to click back and feel annoyed, and consider ad blocker options. It has never led to engagement or a purchase. Ever. The end result is that the performance of ads simply collapses, and sites have to get even trickier to entice accidental clicks. Rinse and repeat.
If you work in the "trick click" space, you are just dooming yourself. It is a race to the bottom.
Firefox for Android supports uBlock Origin and it works wonderfully. Highly recommended to improve mobile browsing, including faster page load times and less bandwidth usage.
You mean the Slashdot model. Four huge buttons that take up the entire screen while scrolling. No room on either side to avoid them or get past them. Slashdot has become the poster child for this crappy model.
I used to spend a lot of time there. That went down to barely any in recent history as HN and other sources "took over". When SourceForge started adding rubbish to downloads and Slashdot reportedly censored discussion of the topic (they are owned by the same parent company), I realised how little I'd visited in recent months and decided that I never needed to go there again.
Silly tricks like the one you describe when seen on previously respectable sites seem to be a symptom of the site slowly dying and desperately grasping for what it can on the way down.
> the mobile site that is constantly pushing content around in the hopes that one of your screen interactions accidentally yields an ad click
I think a lot of the time what you're seeing is shoddy web design (i.e. incompetence and not malice). Not saying it never happens, just that I don't think it's common.
Also, at the risk of stating the obvious, tricking people into clicking on ads is not a good long-term strategy for making money online. Savvy marketers judge ads by conversions, not clicks. Sending a bunch of clicks that don't convert is just going to drive down how much you get paid per click.
The primary revenue source of many of these sites is ad clicks (impressions often don't matter). If the function paying the bills is ad clicks, and site stickiness really isn't a thing anymore (nowadays we're all directed by social news, Facebook, etc.; few of us visit specific sites), why not abuse the users who do come by?
A principal web design facet in the past was the notion that you pre-size all of the elements so that the content layout is static. This basic practice has largely disappeared, despite being monumentally simple, and the only rational explanation is that moving content is profitable.
And for sure it is a terrible long-term strategy. But the problem is that it's a tragedy of the commons: your ad payout on most networks is based on group norms, not your own site's norms. So if everyone else is click-tricking users, your own per-click payment drops, and the only viable solution is to join the race to the bottom.
It's interesting to think about whether it is engineered to be like this, or if it is just that poor webdesign did better in the metrics important to managers (clicks) than metrics important to UX.
Actually, the Myspace video ads are being sold as video ($6-8 CPM on exchanges; I've seen it as high as $18 CPM to advertisers), so there's no expectation that users click.
Of the traffic I observed, I was able to (in a few hours) classify 80% of the traffic as fraud. I don't think there were any legitimate users on these sites except ad network operators verifying whitelists and agency teams showing off how much myspace traffic they were buying.
> He dismisses the idea that it’s hard to tell genuine traffic from fake. “The whole thing about throwing your hands in the air and saying, ‘I don’t know, maybe it’s real, maybe it’s not real’.