One thing I don't understand is why we are calling these patterns _dark_ rather than _manipulative_, or even just calling them straight-up manipulation?
I feel like manipulation has a clear meaning for a lot of people, which I would define as attempts at making you do things against your best interest. This definitely applies to Booking.com's multiple practices of creating artificial urgency, various websites' "opt-in" to their tracker cookie policy, etc. I don't believe it "weakens" the meaning of manipulation as a term. The appropriate red flags should be raised when hearing about it.
To me, the term "dark pattern" itself seems like a dark pattern, intended to obfuscate what is really meant to those unfamiliar with internet lingo. But, at least the dot org -is- was still available.
I'm the person who coined the term. I was interested in design patterns and pattern libraries. I noticed that we had design patterns (good things to learn from), anti-patterns (things worth avoiding) but there wasn't an equivalent term for deceptive design practices. "Dark Patterns" was the title of a talk I gave in 2008.
I'm not sure why everyone chose to use the term. The idea of it being intentionally obscure is quite wonderfully Hacker News-esque.
It’s a beautiful term, like doublespeak. For some people, it’s hard to define manipulative, because the definition must carefully exclude all the things that we ourselves do that are manipulative.
Like imagine all those people working for the S&P 500 companies. Amazon, Apple, Disney, Home Depot, AT&T, whatever. Things they do? Neither manipulative nor a dark pattern.
Automatically transitioning from a free trial to a subscription in the App Store? Bright pattern! Essentially unmoderated reviews on products and mysterious badges? Phone locking? Even Disney Channel runs ads, they’re just called Movie Surfers and it’s only cross promotion in the world’s largest media business.
Dark patterns, that’s for things that make money but are not S&P 500. People will debate if something Google does is dark or bright but booking.com? I mean fuck them right? Nobody is defending some business that isn’t growing their pension or hiring their kid, but we’re talking about it not because it offends our morals but because it makes a bunch of money.
Then there’s just day to day stuff that we do. For some people, it can’t be manipulative or a dark pattern if we personally do it. For example, generalizing about people of a different ethnicity than us. Or considering an argument won so long as the last word spoken agrees with us. “That’s not manipulative,” you’ll say. Fucking exactly! We can’t see things that we ourselves sometimes do as manipulative, and litigating that on an Internet forum is fucking stupid so fuck me.
Many things are manipulative. Commerce and relationships could fall apart if we attached that moral weight to every act of taking advantage of people via a difference or misrepresentation of knowledge, and it’s easy to say that’s okay so long as it’s not you who’s the victim. It’s a bipartisan evil, really, and it can be found everywhere. Maybe that means manipulative is meaningless. So here we are: dark pattern, not manipulative.
Arguably you can be manipulative without being dark, e.g. manipulate people into returning to your website by making it attractive, fast loading, with lots of relevant, accurate, high signal to noise content. We tend to use the word to refer to dark manipulations, as in
> To tamper with or falsify for personal gain.
But it can be a positive thing as in about half of this definition:
I run a development agency, and on a few occasions, we've had clients ask us to build features similar to what you've outlined above. One of those clients had a pretty thorough specification document that outlined how this behavior should work.
> When you open the listing, it should wait a few seconds and then show a number between 8 and 20 as the count of people actively looking at this listing. If it's night-time, it shouldn't really be that much, so let's put in a number between 2 and 8.
This was of course all fake, since the platform hadn't launched yet. There were also a lot of other "building fake anxiety" tactics, but this one and all the others got buried deep in the backlog and, thankfully, were never actually implemented.
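The whole spec boils down to a few lines. A sketch of what the client was asking for (the ranges are from the spec; the function name and the night-time cutoffs are my own assumptions, since the spec didn't define "night-time"):

```python
import random
from datetime import datetime

def fake_viewer_count(now=None):
    """Fabricate a 'people are viewing this listing' number, per the
    client's spec above: 8-20 during the day, 2-8 at night."""
    now = now or datetime.now()
    # "Night-time" cutoffs are my assumption; the spec left them undefined.
    is_night = now.hour >= 22 or now.hour < 7
    lo, hi = (2, 8) if is_night else (8, 20)
    return random.randint(lo, hi)
```

Note there is no query against any real data anywhere: the number is pure theater.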
I'm not saying that Booking.com is doing this, but I definitely hate this pattern.
I never worked in a brick and mortar store but I wonder if they purposely only place 1 or 2 items onto a shelf to create a false sense of scarcity. Meanwhile they have a whole pallet of them in the back. As a customer you would never know but if it came down to taking legal action, it would probably be pretty hard to prove without a doubt this was done on purpose.
Scarcity or feeling like you're going to miss out on something (especially a deal) is a really powerful trigger. I generally ignore most advertising but man when you see that there's only 2 left of something you want on an online site, I would be lying if I said it didn't affect me. Luckily this tactic is less potent now for a number of things because so many online stores sell the same items.
Many years ago, I reviewed the code at Booking.com that implemented this (in the version that existed at the time; there are many more variants of it today). It had a clear enough definition and was roughly what you'd expect if you tried to design/implement it faithfully but without crazy out-there amounts of work: "number of detected-as-not-a-bot page views for this hotel (page) within the last X minutes", where X was, if I recall correctly, "a few", as in single digits. It was done by consuming a near-real-time stream of logs, so that the X minutes above wouldn't be massively biased by the processing delay. I'm certain this has been reimplemented at least once since, because that wouldn't scale to their volume today.
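The definition described above, a count of non-bot page views per hotel within a sliding window fed by a log stream, can be sketched as follows. This is my own minimal reconstruction under those stated assumptions, not Booking.com's actual code; all names are invented:

```python
import time
from collections import deque

class RecentViewCounter:
    """Counts detected-as-not-a-bot page views per hotel over the last
    X minutes, in the spirit of the scheme described above."""

    def __init__(self, window_minutes=5):
        self.window = window_minutes * 60          # window size in seconds
        self.views = {}                            # hotel_id -> deque of timestamps

    def record_view(self, hotel_id, is_bot, ts=None):
        if is_bot:
            return                                 # bot-detected views are excluded
        ts = ts if ts is not None else time.time()
        self.views.setdefault(hotel_id, deque()).append(ts)

    def count(self, hotel_id, now=None):
        now = now if now is not None else time.time()
        q = self.views.get(hotel_id)
        if not q:
            return 0
        while q and q[0] < now - self.window:
            q.popleft()                            # evict views older than the window
        return len(q)
```

In production this would be a consumer of a near-real-time log stream rather than in-process calls, but the counting logic is the same.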
Also, this functionality has received a significant amount of scrutiny from some regulators, so last I looked, there was a mouse-over that gave the actual definition.
That being said, in no way am I trying to convince anyone that it's not a "dark pattern", or that you have to think it's a good idea in any way. I'm just looking to preempt comments claiming that Booking.com is actively lying about this. They weren't ~5-6 years ago when I read the implementation, and I don't think they could've started doing that systematically since then, given the regulatory attention.
I believe that the messages are accurate but they’re also incredibly misleading.
I just did a search for nearby rooms for the end of September 2020. It shows me a bunch of listings with a bright red “Only 1 left!” As far as I can tell, these are Airbnb-style listings where the property really is just a single apartment. The message is technically correct (there is only one left) but incredibly misleading (there’s only one available at all; it’s not because of high demand).
Every listing I checked also shows me “Lowest price today!” Obviously that’s because the price has remained constant. In fact, I’m likely the first person today to check these particular dates at all. The claim is technically correct, but easily creates the impression that you’ve caught a price decrease and that it’s likely to go back up soon.
I didn’t see it in this search, but I’ve seen many times before where sites will say something like “5 rooms left for this date” and “12 people booked this today.” They mean 12 people booked this hotel in general today, but it’s easily interpreted as 12 people booked the dates you’re looking at where there’s only 5 left, so you’d better grab it now.
It’s misleading and they know it’s misleading. That is lying even if the statements are technically correct. After all, lying is about intent; making a false statement sincerely is not a lie, and making a true statement with the intent to deceive is.
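The “12 people booked this today” ambiguity can be made concrete: the site counts bookings for the hotel made today for *any* dates, while the wording invites reading it as bookings made today for *your* dates. A hypothetical illustration (data and names are mine, not any real site's code):

```python
# Two bookings made today for the same hotel, but for different stay dates.
bookings = [
    {"hotel": "h1", "booked_on": "2020-06-01", "stay": ("2020-09-28", "2020-09-30")},
    {"hotel": "h1", "booked_on": "2020-06-01", "stay": ("2020-07-04", "2020-07-05")},
]

def booked_today(hotel, today):
    # What the banner actually counts: any booking for this hotel made today.
    return sum(1 for b in bookings
               if b["hotel"] == hotel and b["booked_on"] == today)

def booked_today_for_dates(hotel, today, stay):
    # What the wording implies: bookings made today for the dates you searched.
    return sum(1 for b in bookings
               if b["hotel"] == hotel and b["booked_on"] == today
               and b["stay"] == stay)
```

Both counts are "true"; the banner simply shows the larger, scarier one next to your dates.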
I don't know; still, most people's "psychological setup" isn't wired to handle this kind of manipulation on a frequent basis, even if it was "designed/implemented faithfully".
It would just make the web a slightly more serene place if said websites did away with these kinds of pernicious practices, which only induce stress and agony.
I'm with you. Weirdly, even though I certainly have an above average understanding of these tactics, they almost certainly still have a minor effect on me anyway.
What I was trying to do was to preempt the allegations that Booking is lying to its users. None of the allegations I'd seen while I was there actually checked out on that front. I, like most others, severely disliked the urgency messaging, but as is with ethics discussions, it was never clear cut. I did not perceive that there was a widespread culture of trying to trick users into purchasing something they didn't want. For most of my tenure, I managed the infrastructure department, not product, so was a bit removed. Experimentation infrastructure fell into my scope, though. We invested significant effort into both education and countermeasures for things like p-hacking.
Yeah, that always makes me think: "Hmm, a high-quality product would create some demand, so why not keep the stock well filled?" It also conjures up images of one last item lying at the bottom of a big box, squeezed and left there longer than the rest of its batch. No good!
So, why? In me it creates a sense of "no thanks, I'll wait".
I suspect that some edible products actually are leftovers approaching their shelf-life limits. At least the one time I ordered some protein bars, they were just before their displayed expiry date (and those things last for over a year).
At least that one can be ignored. I hate even more how airlines increase the price for the flight you've pre-selected if you are actively comparing options for several flights with the same origin/destination on the same dates (only to restore the original price several hours later if you don't buy it right then).
My team actually implemented this feature in a large booking.com competitor and the numbers were in fact correct - we didn’t just pull them out of thin air.
The video is a good example of why LinkedIn's popularity is completely baffling to me: if a company has intentionally and malignantly mishandled your information like this a number of times, why would anyone want to hand over even more information? People you connect to, who you know, etc. It just blows my mind. Luckily I've had no problems boycotting them, but I guess it might be hard in some industries?
I see it like this: yes, it's a poor website which comes down to semi-legitimised spam.
However, recruiters already were semi-legitimised spam, and they like LinkedIn because it's easier/more signal vs noise for them. Employees like it enough not to quit if there's a potential new job. Signal vs noise is not much different from dealing with job hunting/recruiters outside of LinkedIn. Network effects take care of the rest.
Because they're "offering" an inelastic good (a potential job). People who need it will be willing to tolerate a lot of bullshit for a chance to get one.
I've been trying to cancel my New York Times crosswords subscription for days. You cannot do it on the account management page, instead they force you to talk with their support.
Their recommendation is to use the chat support but whenever I try it says: "All of our advocates are currently occupied. Please try again soon or call us now."
I also tried to do it via email; of course, no response for 2 days now.
I'm not from the US so I don't want to call them on the phone...
Unbelievable that such a big name as the NYT does this to their customers. This is disgusting, and I can't believe it's not against the law.
I still remember one of the first businesses I did a contract for in the early 2000s wanted a hit counter on the bottom of their page.
So I put one in, but they wanted it to start at 10,000 instead of 0, to give a false sense that the site was popular. This stuff has been going on for decades.
There's one more: Suggesting that you lose benefits in a subscription business right away, when in reality, they run until the "contract" ends. Culprit: Amazon Prime. I just checked and they're still doing it (on Amazon.de).
If you try to cancel your Prime Membership, they make it sound as if you lose all benefits right away, even though I pre-paid for the entire year and my membership will end with the billing period and not before.
>If you try to cancel your Prime Membership, they make it sound as if you lose all benefits right away
This method of obfuscation is also prevalent on most music services I have subscribed to, e.g. Spotify, Deezer, Tidal, et al., which also make it harder to access the option to cancel in the first place. If you use PayPal for the aforementioned services, an entry for recurring charges also gets placed on the account, and once you cancel, it sits there dormant. It can be marked as inactive manually, but the process is not intuitive and is hidden away in settings, where direct debits are labelled as pre-approved payments.
If you want to get really evil, combine this with "you need to tell us you want to cancel in advance or we'll charge you a termination fee". People will put it off thinking they will lose the benefits, then find themselves on the line for a fee because they didn't do it earlier (often giving up and continuing instead).
My old ISP had me mail them a copy of my new lease to prove that they could no longer service my area (let alone my state) to avoid a termination fee. Shit is byzantine.
What about ecommerce sites where they pop up a little notification where it's like "Cathy from Ohio just bought ..." and then it links you to the item.
Or even worse, you have XYZ in your cart and then suddenly as you browse you get all of these notifications that the world is buying that same item so you better checkout soon before you miss out.
Some services, like Amazon, make it outrageously difficult to leave. You have to contact support, the link is buried really deep in their site, and you have to jump through serious hoops to get out.
There's a great site called Just Delete Me[0] that lists all the services where leaving / deleting your account is hard and the user is subjected to dark patterns.
I'd add unnecessarily asking for a mobile phone number for the stated purpose of two-factor authentication and then using it for privacy mining, à la Facebook[1].
GMail seems to have opened a vector to amplify dark patterns by placing action buttons on messages. My least favorite is the LinkedIn accept invitation button, which I've clicked now several times by accident because I've spent years using GMail without it taking actions like opening GitHub PRs and accepting LinkedIn invites.
I can't find a way to disable this feature. Does anybody know how?
It's built into the inbox view, so GMail is extracting the action from the message content and placing a button on the row element. Sorry if that wasn't clear from my initial post.
A dark pattern I submitted, but which was never accepted: clicking unsubscribe in an email and getting redirected to a page that asks you to check the boxes for the lists you would like to remain subscribed to, when I expect simply to be unsubscribed from the offending mailing list.
Well, a Roach Motel [0] is also something that is easy to get into but hard to get out of, but the name does imply that you are a roach for getting into it...
https://news.ycombinator.com/item?id=18722952
Also:
https://news.ycombinator.com/item?id=13116703 (3 years ago)
https://news.ycombinator.com/item?id=6301378 (6 years ago)
https://news.ycombinator.com/item?id=5347543 (6 years ago)
https://news.ycombinator.com/item?id=4002625 (7 years ago)