On one end of the spectrum you have the people who don't give privacy any thought - they've downloaded and logged into every app under the sun and generally don't care that they're being surveilled. They mindlessly consume hours of curated TikTok videos and go about their day getting a barrage of targeted ads.
And on the other end, you have… this. Buying burner phones and wearing facepaint to avoid facial recognition at a theme park. Imposing that on your _family_.
As an academic exercise I guess it's interesting, but living this way must be truly exhausting. Me? I'll continue to be somewhere in the middle of these two polar opposites. I participate in society, but I don't willingly or knowingly give away more information than is required. I obscure or block what I can, but still sign up for and use accounts that can be tied back to me. I count it as a success if my data is difficult to tie together - either I don't fit the majority of pipelines and need extra attention, or I break ham-fisted techniques altogether.
(Original author here.) Yes, living this way is sometimes exhausting. I don't expect others to do what I do. I often go to extremes to reveal how complicated opting out actually is, in an effort to point the way to an alternative path that doesn't require all this surveillance. There's more about that here: https://www.optoutproject.net/about-the-opt-out-project/
What do you think will be a more effective push towards regulation: demonstrating & publicizing the pernicious privacy impacts like this, or passively sitting around and waiting for it to just happen?
Quick note: In your post you say the wristbands are "infrared"
> not on a wristband which is subject to infrared detection throughout the parks in Florida.
They use NFC for close range, RFID for longer range, and, as of the new ones, Bluetooth as well. Though, as far as I know, most of the positional data from this is garbage and was thrown out in the past few years.
They are also most likely (I think they are but can't confirm) using phone WiFi signals for triangulation throughout the park to detect guest flow patterns.
They've used magic bands for overall guest flow in the past with some success but you're probably right that they use phones now. With the new fast pass system (Genie+) and the virtual queues they introduced for the most popular rides, it's pretty essential to be using the Disney app while at the parks now.
I did notice that they've gotten better and better at automatically assigning ride photos to your account and things of that nature over the last 5 years. I assume it is a shift to the phone tracking because the magic bands definitely didn't have the best resolution for that kind of thing.
They still market magic bands somewhat heavily though, and they're still the primary way people tap into the fast pass lane, some restaurant reservations, etc. Plus they can still be used as room keys and to charge purchases to your room if you are staying on property.
It might be more of a psychological thing than a tech thing at this point, but I feel like magic bands will be around in some form for quite a while still.
> I did notice that they've gotten better and better at automatically assigning ride photos to your account and things of that nature over the last 5 years. I assume it is a shift to the phone tracking because the magic bands definitely didn't have the best resolution for that kind of thing.
Yep, if you use the app on your phone they use BT for that now, and MB as a backup. [0]
> I feel like magic bands will be around in some form for quite a while still.
Oh absolutely, they just came out with "MagicBand+" which has RGB LEDs on it, Haptic feedback (iirc?) and BT for in-park experiences such as a statue talking to you. [1]
> They are also most likely (I think they are but can't confirm) using phone WiFi signals for triangulation throughout the park to detect guest flow patterns.
Given that this isn't rare in retail stores, I think that's a pretty safe assumption.
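For anyone wondering what that looks like in practice: retail "presence analytics" typically just counts the probe requests phones broadcast while scanning for Wi-Fi. Here's a minimal sketch of the generic technique - purely illustrative, nothing Disney has documented - assuming a monitor-mode capture saved as the hypothetical lobby.pcap, using Python and scapy:

    # Count distinct probing devices per time window from a Wi-Fi capture.
    # Illustrative only; real systems also try to de-duplicate randomized MACs.
    from collections import defaultdict
    from scapy.all import rdpcap
    from scapy.layers.dot11 import Dot11, Dot11ProbeReq

    def devices_per_window(pcap_path, window_secs=300):
        windows = defaultdict(set)
        for pkt in rdpcap(pcap_path):
            if pkt.haslayer(Dot11ProbeReq):
                bucket = int(pkt.time) // window_secs
                windows[bucket].add(pkt[Dot11].addr2)  # transmitter MAC
        return {b: len(macs) for b, macs in sorted(windows.items())}

    if __name__ == "__main__":
        for bucket, count in devices_per_window("lobby.pcap").items():
            print(f"window {bucket}: ~{count} probing devices")

That's all it takes to get rough crowd-flow numbers without anyone installing an app.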
Indeed. I care a lot about this stuff, but my solution isn't to go to extremes to evade it. I just avoid it. For instance, there's zero chance I'd set foot on a Disney property, specifically because of these issues.
I am generally in favour of limiting, at the legislative level, the tracking we allow companies to perform, because I'm pretty sure they are going to end up doing something nefarious to some group at some point while going after a quick buck. I don't think the government should hold more than necessary either, because I fear what they can find in the aggregate.
But on a personal level? What do you gain from going to extremes to avoid the surveillance you think you're under? It seems to be a hassle with no upside.
I don't really understand your distinction, unless you're saying that you're OK with the government (the same government you point out as a risk in the next sentence?) limiting data collection on your behalf, but you're not prepared to take personal action? How are these different outcomes?
I think supermarkets taking pictures of my face when at a checkout is outrageous and should be outlawed, but I personally gain nothing by forgoing shopping for groceries where it's most convenient for me and my family.
> but I personally gain nothing by forgoing shopping for groceries where it's most convenient for me and my family.
Taking pics of your face at checkout is bad enough, but if you don't push back against the slow creep of data collection somehow, it will not end there. What they've really been pushing for, for a long time now, is Personalized Dynamic Pricing (https://www.researchgate.net/publication/338776528_A_special...) which means that once the store knows who you are (either by your photo, or by detecting the phone in your pocket, or by your loyalty card) they will alter the prices at the register to make as much money from you as they think they can get away with. They can drive up the price on items you always buy, or lower the price on certain items to attract you to them now so they can charge you more later. They can use data like what you buy, as well as your income level, who the members of your household are, and what your shopping habits elsewhere are.
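To make the mechanics concrete, here's a toy sketch of how a scheme like that could work once the register knows who you are. The profile fields and weights are entirely made up for illustration - this is not any retailer's actual algorithm:

    # Toy illustration of personalized dynamic pricing.
    # Profile fields and weights are invented for the example.
    from dataclasses import dataclass

    @dataclass
    class ShopperProfile:
        income_band: float        # 0.0 (low) to 1.0 (high), bought from a data broker
        loyalty_to_item: float    # how reliably they buy this item anyway
        price_sensitivity: float  # how readily they switch on price

    def personalized_price(base_price: float, p: ShopperProfile) -> float:
        # Charge more when the shopper is well-off and habitual;
        # discount when they might defect to a competitor.
        markup = 0.10 * p.income_band + 0.10 * p.loyalty_to_item
        discount = 0.15 * p.price_sensitivity
        return round(base_price * (1 + markup - discount), 2)

    # Same cereal, two shoppers, two register prices.
    print(personalized_price(4.99, ShopperProfile(0.9, 0.8, 0.1)))  # 5.76
    print(personalized_price(4.99, ShopperProfile(0.2, 0.1, 0.9)))  # 4.47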
You gain a lot by pushing back on invasive tracking and violations of your privacy because those things exist only to let other people manipulate you and take more of your money. You'll likely never be told when or how the data someone somewhere managed to collect from you is later being used to screw you over, but it will be used to screw you over. Buy too much unhealthy food and your health insurance premiums go up. Buy a little too much alcohol, and now you've lost that job you wanted because your employer sees that and decides to hire the next candidate who has "better" habits. You never know what will prejudice someone against you, but all that data never goes away. It follows you for the rest of your life.
What you gain by pushing back is that there will be less ammo for others to use against you later. If that means driving an extra 10 minutes to the grocery store that doesn't record your face at the register, it's absolutely worth it for you.
That doesn't mean you can't ever go to disney world or that you have to walk around in Dazzle face paint to fool the cameras, but you should be aware of the risks to you and your children and you should be willing to take even small steps to protect them and yourself.
Laws are made by people. Those legislators sometimes respond to shifting Overton windows. Your actions and motivations can make a small difference. More action makes more difference.
Consider the Suffragettes. Few people today would think women should be deprived of a vote. In the early twentieth century too, many thought it would be a good idea, but didn't want to make any personal sacrifice for the cause. It took the single-minded actions of a few dedicated individuals and tacit support from the majority to drive the necessary changes.
Buran’s Razor
I’m not exactly sure why Disney wants to track the fuck out of me right now, but if they value my information for some reason maybe I should too. Maybe someday I’ll find out why they wanted my information, better be prepared now.
Just watching a few episodes of Black Mirror and I have all the incentive I need. To use a bad analogy - I don't need to be faster than the cheetah, just faster than my unfortunate neighbor.
But that assumes that the bad consequences of these things will only fall on those who participated in bringing them about. I think of the data/surveillance system as being similar to global warming. Sure, it's good if I try to be more efficient and support places who do, but if 90% of the world still burns coal like there's no problem, I still get screwed in the next 50 years. I don't get to plead with the climate for mercy because I tried, and I don't get to plead with whatever Black Mirror dystopia may come because I didn't help create it.
If anything, it's closer to that thought experiment around AI where if you tried to stop the killer AI from taking over the world, when it does arrive, you're the first to die because it knows you hate it. If we end up all living in Google's country or whatever, your friends with Android phones and Gmail accounts are going to be the ones faster than you and not the other way around.
None of this is to say you shouldn't try to effect change, but don't mistake principle for self-preservation.
Not exactly, but the sort of "I don't have to outrun the bear" mantra only works if the bear is going to stop at the first person. I'm certainly not saying to give up, but you need to prepare for things to get worse whether or not you participate in causing things to get worse. It's not reasonable to say you did your part to not be tracked and therefore you're guarded against Google taking over the world or whatever.
I don't go to extremes. That was the point of my comment.
I do avoid having data collected about me to a large degree, though, mostly by avoiding things that involve data collection where reasonably possible. So I won't go to a Disney property, I don't use SaaS services, etc. Once my current smartphone breaks, I won't be getting another smartphone.
Why? Simply because I don't like being spied on and resent the constant attempts at doing so. It's really no more complicated than that.
Well, there are a few other aspects (such as that none of the companies who are involved in the domestic surveillance industry are trustworthy), but "I don't want to be spied on" is the main thing.
Some would say that constant 24/7 surveillance is extreme, yet those who resist such behavior are labeled extremists.
We created the world where this extreme behavior is normalized; I don't consider it extreme to strive for other paths -- even if it may be out of lockstep with society. If anything, I view such calls for a return to basics as a moderating factor against such extreme abuse of non-corporate humans.
Maybe, but most people I interact with wouldn't. Except, perhaps, the smartphone part, but most people are far more dependent on their smartphones than I am. I don't view my efforts as going to extremes because they aren't imposing a huge burden or loss on me.
To me, the "why do you want privacy" question is a very odd one. In my worldview, my desire for it is literally no different than my desire to have curtains on my windows. I find it somewhat amazing that this is something others struggle to understand.
Roughly true, if you also avoid using the vast majority of smartphone apps. But for me, going that route is an even greater inconvenience than just not having a smartphone. It requires greater vigilance and caution.
The path of least resistance for me is to avoid smartphones and use a real pocket computer instead.
Intelligent systems look for outliers and evaders as potential threats. This guy going so far out of his way to evade detection most likely got him flagged and monitored to an extra degree.
What kind of meaningful data is Disney going to gather on people based on their limited range of actions within the theme park context? This dad drinks Coke and never drinks Pepsi, dislikes the tea-cup ride...
> As an academic exercise I guess it's interesting, but living this way must be truly exhausting.
Especially because it's not sufficient against a motivated adversary. Disney wants the data, but they're not going to expend extra effort to catch the few people that slip through; it's not worth it to them. For example, Disney could be using gait recognition in addition to all the other stuff they do, and that could significantly help to tie activities together, but this is not economical to implement right now.
Why is it exactly economical for China, but not Disney? They already have the video. They can do it at any time. They have night vision and lots of other tech so I doubt expenses will stop them.
I don't think Disney is really an adversary in this scenario (where an adversary is not purely economically transactional); the recognition is primarily geared towards (a) analyzing general patterns to optimize operations and (b) selling you more things you would be interested in buying. Their motivation is entirely economical.
These detection tools aren't perfect, and there is a diminishing return on getting that last few percent of people who slip through the cracks. At some point, it isn't worth the effort.
China, on the other hand, is motivated by more than mere economics. It is also interested in analyzing general patterns- they are a partial-command society given the extensive centralized planning that goes into the economy and social behaviors. However, the people who attempt to go unseen are precisely the most important people to observe- the bad actors, the malcontents, those most likely to cause trouble (乱) to an otherwise harmonized society.
Viewing everyone trying to sell you things, particularly the ones you are actively seeking out to buy entertainment from, as adversaries is a pretty self-defeating approach.
Companies wanting to sell people things isn't the problem. Using the most mundane details of our lives to manipulate us, and to extract as much money from us as possible, is a problem. So is failing to secure that data.
The first two reference something where there's no strong evidence showing that it actually happened (it's more likely that it was credential stuffing-- using data from other breaches to compromise Disney+ accounts using the same passwords), and the third was Starwood Hotels (now Marriott), not Disney.
As for the hotel, it's sort of a partnership deal: Disney owns the land, built it, leases the building to another company to run, takes a cut of the profits, and has it branded as part of their Walt Disney Collection of resorts. I think it's fair to give Disney the blame.
https://en.wikipedia.org/wiki/Walt_Disney_World_Dolphin
There's also this, which you could argue was outside of their control too, but I think once a company hands your data over to third parties they're on the hook for what happens as a result:
https://threatpost.com/epsilon-data-breach-expands-include-c...
There isn't an opt-out for things like facial tracking, and it doesn't bring benefit unless your goal is to be sold things based on being calculated. If they had a way to automatically track you for your benefit - detecting emotional distress, finding lost kids, etc. - it would not be adversarial.
My goal is to not be sold things I don't need or to add to their data, so I don't go to Disneyland.
Is there any benefit to the end user if they're not interested in being sold to? If I don't get one, it is adversarial and they are my enemy.
Some of their tracking is directly improving guest experiences in the parks though. Live data from magic bands helps with crowd control measures for example.
Sure, this is ultimately a business move, but it's not a manipulative tactic, it's actual product improvement. If you don't care for Disney to begin with that is fine, but I think the tracking absolutely can bring benefit to the entertainment value of the trip if you are a fan of the parks. I like having my ride photos automatically show up in my account and having my lunch pickup order come out with impeccable timing.
I'm not saying Disney doesn't also play psychological tricks to get you to buy more, and probably they are using personalized info for that. I'm just saying "it doesn't bring benefit unless your goal is to be sold things based on being calculated" is not entirely true either. It does improve things you are already paying for like how efficiently you can plan your activities each day in the parks.
Thanks - I haven't been to Disneyland in a long time. I'm so used to seeing tracking with no benefits that I didn't believe it could be used in a way that a customer like yourself would defend, saying it has uses you benefit from.
Setting up a surveillance system to part you from your money sounds adversarial to me in the combative sense. That said, adversarial doesn't necessarily imply hostility, does it?
You're right. But I'm not doing that. I view the data collection and the use the data is put to as adversarial, not necessarily the efforts to sell me things.
Although that can be very adversarial, too, depending on how the sales effort is conducted.
> I don't think Disney is really an adversary in this scenario (where an adversary is not purely economically transactional);
Club 33 and Disney's close, repeated ties with pedophiles (especially including the grooming of Disney kids) indicate you are probably wrong, but I'll admit we are mostly working with inductive evidence on this.
What is your claim about Club 33? As far as I know, it’s a kind of cool elite thing for people with money to burn (I had the opportunity to have lunch there once in the 80s—I kind of wonder whether the tourist couple they turned away when our group checked in were cast members whose role was to help enhance the exclusive factor), but beyond being something that’s exclusive for rich people, what’s the harm you see with it?
China isn't interested in it being economical. They have different goals with their systems.
Notice I also said that it's not economical _right now_. In the future it may very well become that.
I'm not aware of the actual state of b2b offerings for this kind of tech, but I'd imagine that when someone starts offering a reasonably priced turnkey solution for this sort of thing, Disney will start using it, along with everyone else.
Disney is just trying to manage their theme park more effectively, and sell you some stuffed animals and t-shirts. 99.99% of people are not going to any lengths to avoid the unsophisticated stuff they're already doing, so they gain nothing from spending any more effort on it.
I imagine OP understands this though - they're opting out on principle, not for practical reasons.
The majority of people I've seen at the parks have intentionally opted in to tracking measures for over a decade now. People literally pay extra to give Disney more information, because it makes the trip a lot smoother. It's part of the fun of Disney honestly, living in their bubble for a few days.
Anyway, just seconding your point - not only do most park guests make no effort to avoid tracking, but many want to be tracked, because it helps Disney do a better job. Sure this might mean they spend a bit more, but if it also means the already expensive trip is a better experience it's worth it for a lot of folks.
As an academic exercise it's interesting as a solo project or as an adventure with a fellow researcher. With the kids it's effectively spoiling their vacation while teaching them things they shouldn't have to know or understand for at least a few more years.
I could imagine that certain people would have a risk profile that might necessitate this kind of behavior -- but that kind of person wouldn't turn around and blog about it, 1) calling attention to it and 2) giving Disney more pieces of the puzzle if they were motivated enough to solve it.
You're right - the average person doesn't need to have that kind of stance. Even if they were a real target, much higher value targets slip through the cracks all the time. We're not nearly as good at data processing as we think (yet).
I'd add to your list: being aware of how this data is used and thinking before interacting with algorithmic content & advertising. Having worked in advertising during a younger & less jaded period of my life, knowing the aims and tactics they use helps reduce their effectiveness.
Eliminating tracking for other, more nefarious uses is very difficult and largely impractical against a motivated enough opponent. This is where we need governments to rein in corporate interests, and human rights & privacy advocates to rein in the governments as much as possible, as they will happily increase surveillance as much as they can in the name of security and a quiet life.
> Having worked in advertising during a younger & less jaded period of my life, knowing the aims and tactics they use helps reduce their effectiveness.
It's best to be aware of what they're doing and how, but I'm not sure how much it helps protect us. I can't seem to find it now, but I'm pretty sure I've seen research saying that awareness doesn't offer much defense. Our brains are simply susceptible to certain attacks and like an optical illusion that can't be unseen even when you know what's wrong, we're still influenced to some degree by the kinds of manipulations commonly used by advertisers.
Regarding exhausting, here's a quote from a different blog post of theirs describing how to avoid surveillance when making in-person purchases:
> I once took about 20 minutes at a Target checkout trying to get enough cash from the till to buy Christmas gifts for my kids and nephews. Then there was the time I held a bunch of kids' books at the counter at Barnes and Noble and drove around for 15 minutes until I found an ATM open on a Sunday evening in NJ. Getting cash out before a kid-related purchase is basically second nature to me now.
Yeah - this seems ridiculous, and defeats the goal of privacy too.
If you pay for everything with cash surely it's much easier to just get a bulk amount of cash out periodically.
If you go to the nearest ATM to withdraw just enough to buy whatever you're buying every time you're out that's almost as much information for someone trying to surveil you as if you'd paid with a card at the store.
Also, getting the cash out at the till to buy stuff at the same store is even more pointless. That's just paying with your card with an annoying extra step.
In all fairness, it's a stretch to call it "abuse": it's not uncommon for companies to preemptively extend a legally required service to those outside the technically required jurisdiction, simply because it's easier than having to account for who is where and whether that's acceptable within their jurisdiction at this time.
It's also not unreasonable to expect that more jurisdictions may eventually follow suit, and so having to dedicate resources to ensure every request comes from an applicable jurisdiction that they legally are required to handle...
...might just be less preferable than fulfilling some "delete data" requests from less demanding areas.
Besides, the idea that large corporations have free, unaccountable rein to abuse systems of all sorts, while individuals should be quaking in their boots or snubbing their nose at the mere possibility of inconveniencing an hourly worker under faceless brands, is a little over the top, IMO. It's just a request that can be denied.
Why would it be abuse if you just asked for the data to be deleted? As long as you do not misrepresent things and do so politely, of course.
If Disney wants to spent time and money and reputation to figure out if they legally need to delete the data they collected about you and only do so when that's the case, then that's their choice. Same as it was their choice to collect data in the first place.
If they instead want to be nice and consumer-orientated, as they like their public image to suggest, or at least save some bucks, then they will hit the delete button. They gotta have such a button by now anyway for legal requests from Californians.
> As an academic exercise I guess it's interesting, but living this way must be truly exhausting. Me?
I agree with the gist of what you're saying. My concern involves the invasive data collection of biometrics [face data] and IDs. It's a concern over why it's needed, and why they are so forceful in collecting it.
Face rec: The TSA and the airlines are trying to push this as if it's required. (There are a lot of documents and internal communications showing it's not.) Why do they need this - are our IDs not secure enough? Was their existing process not enough?
Porn: Louisiana is trying to push for authentication via your ID online.
Other orgs are trying to get copies of your license. Is this important for their operation? Is it important for them to keep access to this info just for the "societal harm" that is claimed over content that is age restricted?
Another piece: why is my data being non-consensually gobbled up by the creeps over at Clearview AI? Why is the MSG entertainment group trying to look for a reason to ban me before I enter?
All of this demonstrates a one-sided demand for this data, without any usable benefit for the people whose consent is being taken.
Maybe, but so is constantly being bombarded with ads, offers, etc. when you're trying to do something else. And the tracking is mainly to power those things.
Part of blocking tracking is just outright blocking ads, so yes, I definitely get fewer ads. Also, I'm able to see more content. My spouse often finds they can't access some random article (or not all of it) but it works fine for me with my extreme blocking.
Doesn't this do nothing for the cause of legislating away personalized ads, though? Not wanting ads at all is different from being a consumer of non-personalized ads and buying stuff from them when it's still relevant so that companies can still survive without your personal data when it's regulated.
I don’t know the answer to your question, but I know that in 30+ years of browsing the web, I have literally never once intentionally clicked on an ad of any sort. (I have been tricked into clicking on a few, and done a few by accident, of course.) If I ever saw an ad that had something interesting in it, I would open a new window and enter the domain name by hand, anyway. But I’m weird that way.
Yes, it sounds amazingly hard. But then, opting out doesn't have to be all or nothing. And it doesn't have to be all at once. Small steps are much better than no steps!
This post (same author) talks about how to be strategic and selective about your opt-outs. Not everyone has to be an extremist!
I'm also somewhere in between the two types. I don't like being tracked, so I limit tracking as much as I can, for as long as things stay convenient for me and everyone else. I run some form of ad blocking on all my devices. I have third-party cookies disabled, with a few exceptions for sites that legitimately need them.
But, I also have multiple loyalty apps installed on my phone, which is a stock, non-rooted Google Pixel. I also use VKontakte, Instagram and Twitter. With my IRL identity. My attitude is simple: yes, of course there still are endless ways to track me for all I care, but — what use is that tracking if there is zero technical possibility to non-consensually shove the products of processing that personalized information into me to persuade me into something? In other words, there probably still are ads creepily targeted at me, but they never reach my eyes. And with this I'm perfectly fine.
It feels like there's a distinct type of person who would go to great lengths to stick to some principle for the sake of sticking to that principle. Some people use Linux because they legitimately enjoy it and the tinkering that comes with it. But others use Linux because they have a principle that they must only use free software. I'm sure everyone on HN knows at least one such person. They hate it, they constantly tell everyone how this and that doesn't work with their setup so they can't fully participate in the society, but don't you dare tell them to, uh, "maybe look into other OSes that just work out of the box if you don't like Linux and its way of doing things". The author is most probably like this.
To be honest, this also makes you stand out. Maybe Disney doesn't really care, but when I worked on people's data, a bunch of "ghosts" entering our system without really leaving a trace would raise all kinds of eyebrows and maybe even get a team assigned to them to find out what was going on (one of my tasks was to find malicious actors, and this is exactly the kind of behaviour you expect from them).
At this point, surveillance is so massive that not leaving a trace is a trace of its own.
Adults aren't allowed to wear costumes in the park. You can wear clothing and accessories that strongly suggests a character[0], but no costumes.
(I guess this is because they want to avoid "Goofy" or "Donald" spouting crazy talk.)
[0] and this is apparently a big thing that enthusiasts do, on the level of cosplaying, see https://disneybound.co/ and search term is "disneybounding"
I mean, you could, like, just avoid Disneyland. I haven't been there since I was a kid and honestly I don't think my kids would care one way or another if they ever visited. (Now if you had to do this at Legoland, I'd be honestly disappointed)
We also haven't bought any "smart" TVs or IOT, or anything Alexa/Siri/etc -enabled.
At some point those kinds of purchases will be required, but I'll keep looking for non-invasive alternatives including... not participating.
Some people spend so much time making sure no one can see what they do that they don't have time to do the kind of things that make anyone else want to see what they do.
I think it was excessive. He didn't need a burner phone, or to worry about his car. He could have probably gone there, said he only has a flip phone, or used face paint to look like he was old and can't use a phone.
Honestly this sounds like fun to impose on your family. We're going to hide from Disney tracking! We're getting a fancy Uber, everyone gets a radio, and we're painting our faces! Yes, you can paint your arms too, kids!
When were you last at Disneyland? While you may not need a phone to enter, there's no other way to fastpass, check wait times (without actually walking to the ride), schedule <anything>... It would be like paying for high speed and throttling to dial-up
Fun-ish story (for me, at least): I worked with a fellow who was incredibly privacy conscious. Now, that means many things to many people. I've worked in academia where people had time to do things like dynamically tunnel connections to their home VPN concentrator only to be piped out through their handrolled nameserver. That was old hat compared to this gentleman. During one engagement, he and I were holding a seminar on location about secure coding practices. He insisted on using HDMI cables without ethernet connections, piped to his own projector of his choosing, and spent about 10 or so minutes finagling device drivers to get the projector to work with his librebooted OS. After that we spent 10 or so minutes trying to setup a cellspot router to extend his phone signal so that he could phone home to a concentrator setup like I mentioned earlier in order for him to pull the handout pdfs from his home server (also librebooted btw, thanks for asking). Every bathroom break, and there were only 2 over 8h, he would disassemble his getup and take it with him into the stall of the bathroom.
I don't know where this person is today, but I can only assume that the extent of his privacy consciousness has only continued to sink its roots deeper into his life. It struck me as a sort of paranoia that likely started off proper and good but grew malignant and degenerative over the years.
I know two people this story could easily apply to and for both of them that's not at all paranoid. Both of them have had the resources of various nation states thrown at them on multiple occasions and they are both still walking the earth last I checked.
To describe this as malignant would require you to be intimately familiar with everything they've been up to. There was a short period where I myself had very good reason to be that paranoid (and more, in fact) and it's not a memory I like to revisit much. Being paranoid is one thing but to actually know that you may be - for whatever reason - a legitimate target changes things considerably.
What do you have to do for "nation state sending ground operatives to do hardware-level attacks against your security" to become part of your threat model as an academic?
He was an armchair cryptographer. Not a particularly bright or studied one, but a cryptography enthusiast nonetheless. The kind of person to get all hand-wavey about the efficacy of timing attacks and POODLE.
Needing to go to extremes to avoid having their lives ruined can be reasonable for a lot of people depending on where they are and the kind of oppression they live under. Someone might be a whistleblower like Snowden, or in the witness protection program, or a homosexual, or have an abusive ex or stalker, or be seeking an abortion, or be a protestor/freedom fighter, etc.
All the data collection pushed on us, even if it's only for marketing, leaves a lot of people vulnerable.
Even the bit about taking the gear into the bathroom with him? The only threat model that thwarts is somebody physically tampering with it, which seems very paranoid to me.
You have a poor understanding of what kind of things are possible with hardware under your control for a couple of minutes, say the length of your average bathroom break.
I once had to hand in my laptop to some busybeaver border guard who wanted access to it (impossible: wiped-clean Chromebook, only to be re-installed at the destination). I told him that if he took it out of my sight he might as well keep it, because it would be useless to me.
> The only threat model that thwarts is somebody physically tampering with it,
The way you state 'the only' seems to present a total misunderstanding of the fact that physical access is the number one easiest way to compromise just about any type of computing device that exists. If, for example, you went to any data center and attempted to get physical access without permission, you would quickly find yourself accosted by armed personnel to prevent the physical tampering you're talking about.
In my work I must keep my laptop on my person, or otherwise locked up when I'm not using it, to prevent physical access by others. This is in no way unique in the computer security industry.
For the vast majority of threat models, having someone you have a little bit of rapport with watch your locked computer is perfectly adequate. Realistically the bigger threat is someone stealing the laptop to sell, not as part of some targeted assault on your security.
For that not to be adequate, your threat model needs to include field agents establishing a false sense of trust through some relationship, then leveraging that into an attack on physical security. At some point you're getting really close to "it's easier to bribe/blackmail/kidnap you" territory.
In some businesses the risk of being blackmailed is high, but it also comes with a significant risk of the blackmailed party working as a double agent. If the affected agent has no idea they've been compromised, it is unlikely they will change their behavior in any manner.
You seem to have a total misunderstanding about the threats this person faces.
If your job is running seminars about IT security to random companies, taking the projector with you to the bathroom is ridiculous and your clients will think you're a tin foil hat weirdo to boot.
Of course, if you're Snowden or Assange, your threat landscape is quite different and this would not be paranoid at all.
No, he doesn't connect anything, that's why he has his own projector and cables. And if he did, most companies would be much more concerned about the risk they present to company networks.
I think a lot about this subject and have a lot to say about this.
Rather than nit-pick, I want to showcase this particular item:
"I ended up going for the Disney lot, although they learned that they do have license plate readers. I was driving a rental car. Certainly, my name and driver's license are attached to that car, but that's through a different corporate database. A corporation that Disney does not own or have data rights to or share board members with ..."
(and later)
"Here I follow a different rule for obfuscation, which is to store data across corporate databases where the corporations have no prior relationship--or even an antagonistic one. I can be reasonably sure that, failing an acquisition, that data won't migrate."
This is wrong thinking.
Personal identifiers like phone number, license plate, address, etc., are commodities that are collected, digested and sold by many different third party providers.
He's thinking about Disney somehow comparing their license plate reader data with Hertz or something through some unlikely corporate agreement.
Far more likely is that both Disney and Hertz employ a third party data intel provider that gives them enterprise wide coverage and query for these, and other, identifiers while simultaneously acquiring the data to be made available via API to other "partners".
A good example of this is Ekata and their reverse phone product which, until recently, was available via Twilio API lookup:
/usr/local/bin/curl -s -X GET "https://lookups.twilio.com/v1/PhoneNumbers/$number?Type=carrier&Type=caller-name&AddOns=ekata_reverse_phone" -u $accountsid:$authtoken
... and would give you not only a reverse number lookup but also a list of "associated persons" as well as your address and number history.
I feel assured that APIs like this exist for license plates, SSNs, IMEI, etc.
I also strongly suspect that Disney and Hertz are both contributors to, and consumers of these APIs.
(Original author here.) Yes, I am aware of the identifiers that are sold through third-party data providers. If you read further, you'll also note that I decided not to care if Disneyland knows that I went; it was more about continuing to obfuscate my family, so I decided that was a risk I would accept. Of course, YMMV.
Curious why you consider D+ asking for birthday and gender to be deal breakers? Why would you even consider providing real data to those question? Just simply make up a name, birthday and gender. That's what I do.
It's not a dealbreaker for practical reasons, it's for moral ones. I don't think they should even ask for or collect that data, given the ability to de-anonymize that the additional information provides them. They don't need it and shouldn't ask. Especially as most people don't know they can just make stuff up, and never do.
I'm with you, I make up email addresses, mail addresses, credit card numbers, pseudonyms, genders, birthdates etc...
One minor note from a Disney nerd: Disney World no longer gives you the wristband (magicband) by default. You can buy one, but they want you to either use your phone or an RFID card they give you when you buy the ticket.
The rumor is that they were never able to make effective use of the long range data collection beacons that used the bands. Alternatively (and more likely IMO), they realized that knowing how many people were in an area was a matter of measuring how many unique devices scanned for the Wi-Fi AP in that area, which would work even if you didn’t have a band or the battery in the band died (the battery was necessary for the long range functionality, you could still get in the park via RFID no matter what).
Since they never launched the bands at Disneyland, they probably were able to run an A/B test and confirm that they didn't get additional useful data from the bands (or that their data science team wasn't able to use the data to improve guest experience, anyway).
> The rumor is that they were never able to make effective use of the long range data collection beacons that used the bands. Alternatively (and more likely IMO), they realized that knowing how many people were in an area was a matter of measuring how many unique devices scanned for the Wi-Fi AP in that area, which would work even if you didn’t have a band or the battery in the band died (the battery was necessary for the long range functionality, you could still get in the park via RFID no matter what).
IIRC both are true. MB was in development before phones were everywhere, so it's been a long, slow pivot basically.
Also, the two reasons I've seen for the data being thrown out weren't about lack of use; rather, the data was VERY bad/corrupted and not as accurate as they needed.
Correct, I forgot that. They never launched them as a free accessory for guests to help with payments/room key/FastPass/admission/data collection though, like MyMagic+ at Disney World. They are mostly a wearable ticket now (and I presume they can be used as room keys/payments at Disneyland too, but it’s been a minute since I’ve been there).
I can’t help but feel bad for the kids in this situation. I’m sure while they are young it could be fun to play spy, but at some point the extreme aversion to data collection just becomes a new flavor of paranoid helicopter parenting. The author even mentions these overprotective tendencies in the Public Books companion!
Like, when does it stop being "we're playing spy" and become "daddy won't let me come over and play because he says your toaster told Google about me"? The face painting in particular (especially given the OP's own admission that it's not clear how effective it is) feels like putting on tin-foil hats. If privacy concerns are that great, then the OP has an obligation to explain to his/her kids why they can't go to Disneyland.
> I can’t help but feel bad for the kids in this situation.
I feel worse for the kids whose parents don't care and have all their information collected before they're old enough to know better. They're handing children chromebooks and letting google collect their kid's test scores so they can sort children into 'smart' and 'dumb' buckets before they're out of primary school, letting youtube and tiktok babysit them the way previous generations did with television, making them carry cell phones at younger and younger ages etc.
I think there's some solid middle ground there, but those kids will be much better off having been made aware of the issues and having their information at least somewhat protected.
OP here. My kids know "Mommy doesn't like Google." If we were totally restrictive and opaque about it, it might feel like living in a cult. But I use data privacy techniques to teach our kids about how systems work, where data flows, how computers process information, and how to implement home-grown alternatives. Otherwise they'll think an Alexa "just works" when they talk to it, or that they're a "digital native" just because they can swipe on a gesture-based interface. They will grow up and make their own choices: my hope is to prepare them with a better understanding of how systems work under the hood, a richer grasp of the history of surveillance in society--and a relatively clean slate data trail.
Yea, it sucks for the kids whose parents do not care at all, but this has been the status quo for a long time with things like lack of involvement with education and just putting the kid in front of the TV. Obviously neither extreme is beneficial, but surely privacy neurosis will rub off on the kids and, more dangerously, isolate them socially.
I do agree that it's a part of the environment kids will grow up in, so it's important to teach them to be careful. Just like how it's important to teach kids to look both ways before crossing the street, to not play in the street, and to be increasingly mindful the busier the roads are, but I wouldn't call that "traffic neurosis" or worry that it will cause them to be dangerously isolated and unable to navigate a city.
Kids can be educated about the dangers of data collection, be mindful about the data they are giving companies, and be aware of the ways that data will be used against them, so that they can make smarter choices without cutting themselves off from the world they have to navigate and be a part of.
Do you think ethics would stop them? You should always assume that if a company can do something that will increase their profits, legal or otherwise, they will.
The Samsung keyboard that shipped with my cell phone was sending every single letter I typed to a third party whose privacy policy said explicitly they used that data for market research and to make determinations about the intelligence and level of education of the user. Companies are collecting analytics and monitoring people while they play video games which is then being used to group them into categories.
Lists of people with low intelligence, poor education, alzheimer’s or dementia are extremely valuable and data brokers are happy to sell them to advertisers and scammers. Google just put themselves in a position to collect that kind of data as early as possible. Why would you expect them not to use it?
>There's some mistakes in this, most notably that the app is not a requirement to go to Disneyland or Disney World.
As the piece notes, this is about the tech stack itself, not the actual experience at the park. In the companion piece[1] that is linked to at the very beginning of the article, it's clearly stated repeatedly that the app is not required, but that your experience may not be as "ideal" as you would like as a result.
To my knowledge, at Disney World there isn't a web client for Genie like there was for FastPass+; since it's all day-of, you're expected to do it in the park on your phone (I'd be happy to be proven wrong). Guest Services can probably still do it though, you're right.
You can book lightning lanes through the website (or at least you could last February). It's the most reliable way to get the ones that are very popular because the app is too slow, it's better to log in on a laptop from the hotel first thing in the morning.
Seems they left out a huge point - they had a spouse/family amenable to this. My wife and her family don't give a shit about data collection. They have no imagination of how it could be used in the future, at which point it would be too late. Even the slightest inconvenience (like the marginally beneficial switching to DuckDuckGo, Private Browsing, turning off location, or broken links due to PiHole) are met with annoyed resistance. It's actually a struggle to get them to even lock the door when they leave the house...
I care about privacy and find switching to DuckDuckGo to be annoying! As the one also trying to take care of privacy concerns in my home, I try to go for the lowest-friction options: Adblockers, paid Google accounts which have history tracking turned off and aren't supposed to be monetized, trying to be Apple-product exclusive, keeping social media accounts private, etc. I'm not going to win any further compromises and am content with these good-enough solves.
I suggest privacy advocates draw a distinction between potential sources of tracking and likely sources of tracking.
Tracking by a credit card: 100% happening. If that's a problem for you, you definitely should pay by cash or use a privacy-preserving card like the author does.
The app looking at history of WiFi hotspots to expose you: Pretty unlikely. Tethering to prevent "a record of a home wifi connection point" is really low value work.
Yeah and what are the odds Disney is correlating their license plate scanners with anything in the park? I’d assume the license plates only get looked at if a crime occurs
They actually list specific uses in a separate policy [1]
> our use of the ALPR Data is limited to the following purposes:
> * To enhance your experience while visiting such properties such as, for example, by assisting in locating a lost vehicle;
> * To prevent unauthorized use of our facilities; and
> * To detect, investigate and prevent activities that may violate our policies, be illegal, or otherwise impact the safety and security of our guests and/or third parties.
> I’d assume the license plates only get looked at if a crime occurs
Why do you assume that? Disney is a major multinational corporation. It's hard for me to believe that such companies are willing to leave money on the table.
Disgusting. A few years back we took the family to DL. I had read that they were going to take our pictures and I thought "not a chance" and prepared for a fight.
But I knew deep down they had me over a barrel. Gonna turn around and say no to the family (after an hour drive and $$$ spent for parking) at the front steps of DL? Paying cash not practical anymore either, was almost $500 for tix!
For whatever reason at the moment we arrived they were not prepared for pix and we walked in unscathed, and without apps. Just like the good 'ol days of... 2010?
I'm a privacy-focused person, but when we went to Disney World I kinda just accepted I was going to visit their sandbox and they were going to know everything I did while I was there. I don't want them tracking my behavior outside the park, but in the park, it's their territory, and there are countless strategies they can (and do) use to monitor it.
I think if I wasn't going with a wife and kid, it would've been fun to try to avoid data collection as the author did, but if you're also staying on resort as we did, your magic band is your hotel key and stuff too, so it's really kinda a whole-hog thing. We didn't exit Disney's municipal boundaries the entire time we were there, so it's not like they could track us anywhere else.
I guess I made my vacation to Disney World sort of a vacation from being a privacy advocate, and just went ahead and cried at the sight of my credit card statement when I got home. (Hot tip: The "character dinners" that your vacation planner will recommend are obscenely expensive, and they will definitely know you are a Midwesterner if you dare ask for a takeout container when you are leaving. It's really great when you spend like $60 a seat for one person who's not feeling well enough to eat that much, and a kid who's going to barely sample the food.)
The comment I would have to the author is this -- it is all well and good to demonstrate how difficult it is to opt out of marketing surveillance. Great. Completely on board. The question is, how did this make your experience of the park any better? Other than personal gratification of "yay this awkward face paint is stopping cameras from recognizing me", what is it, exactly, that you got that the other 50,000+ people in the park didn't get?
Because the completely opposite argument is much more compelling to the general public. Download this app, turn on your location data, and you get notifications for a little personal guided tour. Attach your payment information and you can mobile order food for a quick pickup and save time that you can spend going on the rides. Add your party contacts and you can set up fast-passes to jump the lines for your entire party all at once, again saving you time.
If the idea of opting out (which again, great, admirable, etc.) doesn't come with a clear benefit, nobody will do it, and the attached paper gets into Howard Hughes levels of data paranoia.
OP here: This is what the article in Public Books is all about. A lot of the "benefits" the Park presents are made up, manufactured conveniences, accompanied by manufactured inconveniences that make it hard for others to resist. The Park makes people feel like they get a lot of benefit out of playing along, while the park actually benefits from convincing people so easily to relinquish any control over their personal information. We should think more about whether a convenience or "benefit" on offer--a few dollars off, a few minutes shaved--is a lure designed to get us to sign our data away for their profit.
Disneyland in particular has had a steady, growing problem over the past 15-20 years, accelerated recently with the rise of photo-sharing-sites like Instagram, where they are fighting a balance between keeping the parks accessible and affordable, while also trying to limit overall attendance.
A family who books a vacation for a couple of days at Disneyland will struggle to see and experience everything that they want, for the simple reason that there are 50,000+ people (on quiet days!) all trying to do the same things. This isn't manufactured or made up. It is a real inconvenience, and one that existed before smartphones and mass data tracking. The real solution is in eliminating the annual pass program (and raising the ire of the millions of Southern California locals who go to the park on the regular) but that's for another conversation.
"A few minutes shaved" is absolutely a tangible benefit, 100%, especially for a family who has paid thousands of dollars to travel, stay, and visit the park. "the app was a necessity, a non-negotiable" - that "non-negotiable" is because compared to the experience even four years ago, pre-app, your day does allow for more experiences, and I suspect you felt that while breezing through the Space Mountain line.
Look, I do not work for Disney. Disney does not need me to "shill" for them. But "personal data autonomy" is not enough of a real, tangible benefit to the average person, especially compared to the real conveniences of even the slightest opt-in. It does not take a massive data surveillance program to know that our family are Star Wars fans. We literally do not care if the app is more likely to recommend we go on Star Tours more often. We do care that we're able to go on Star Tours without an hourlong wait.
Both this article and the non-tech companion article [1] skip over talking about why the author wants privacy. The failure to outline her objections makes the countermeasures seem untethered from any motivation; I think that is why so many comments here react negatively to the valuable anti-surveillance work she's doing here.
This could be an issue-by-issue analysis. For example: "I don't want targeted ads -> prevent collection of targeting data -> use a privacy credit card". This is the easiest way to argue for privacy and can cover a lot of ground; however, I worry it's too limited. This gives us the world of the "opt out" button, which can fix a specific issue but somehow still leaves a really nasty taste about the surveillance world we are in.
I'd love to see more writers make on-the-principle arguments for privacy. This author clearly has that depth of feeling so it's a real missed chance.
Original author here. I'm also in favor of more writing about privacy motivation, even if editors don't always want that all in the text they publish. I'm writing more about it at The Opt Out Project in addition to other places: for instance, the famous Pregnancy Experiment, where I kept my data about my (unborn) children away from digital detection: https://time.com/83200/privacy-internet-big-data-opt-out/
It is surprising that I had to scroll down so far to see this answer. Unlike government services, street cameras, and things that cannot be avoided, it is reasonably possible to avoid Disneyland altogether. I do it quite easily (but I am in a country that doesn't have a Disneyland). If OP doesn't want to participate in the Disneyland business model for monetisation of their brand, then don't do it (no Disney+ either, so that you don't have perpetually disappointed kids).
We went to Disneyland at the end of 2021 and it was unpleasant. The staff seemed stressed, there always seemed to be one or two people being assholes about the masking rules (they were outside off, inside on at that point), and the usual Disney level of attention to every detail was just not there. We chalked it up to pandemic stress, but also thought maybe new leadership was taking things in a bad direction - like some conveniences previously available in the app no longer being available as they were about to transition to a version where those services would be paid for or made more expensive. Anyway, my following opinion is based on pre-pandemic Disney.
Disney for me is a little like Apple in that you're handing yourself over to the corporate overlords and are reliant on their benevolence. These companies take some freedom in exchange for a clean and consistent experience. I don't like the surveillance state either, but going to Disneyland with two families was easy because of the app. We pre-booked rides, pre-ordered meals and snacks, kept up with where groups went if there was a ride split. It was easy to give kids some spending money we knew was only going to get spent in either of the parks they had access to. If you go further and pay for the hotels nearby, they're even more connected. And it wasn't creepy, because it's why we went. It was what I paid for. That experience you're trying to opt out of is a core part of their product. It's why their movies are on repeat in houses all over. It's why people will hand their kids and their grandparents alike an iPad. There's an expectation around what these companies provide and a certain level of assumed safety.
Now, in Disney's case, since we are paying for it, if we could just get some privacy guarantees when we surrender to their systems I'd love that. I'd even pay extra for it. I'm so ready for people to start selling me privacy tiers. I can pay to get rid of ads, lemme pay to not have my data sold or tracked. I'll vote and harass my reps accordingly, I'll use blockers and unique emails across services, I'll teach my family to do the same, and if I'm at a protest or driving someone to an abortion clinic sure let's talk burner phones and face paint - but I also just don't have it in me to miss out on experiences with friends or family because of my objections.
> We pre-booked rides, pre-ordered meals and snacks
I get that that's how Disneyland is now, but there's like zero serendipity?
When I was growing up, I'd often go in February/March, and if it happened to be rainy/drizzly in the morning, there were few lines and you could do whatever whenever. The "E-Ticket" rides would take 15-30 minutes in line if you went on the right days. Most of the restaurants would be fast and easy, although The Blue Bayou could still be crowded (OMG, best Monte Cristo ever though).
I visited a few months after you did; the masking rules were gone by then. Everything seemed pretty positive, other than the park being absolutely crammed with people. I don't know how they'll be able to maintain quality long term unless they reduce the daily headcount, which would mean we'd all be stuck booking even further into the future - but at least you'd be able to walk between rides in the evening.
I've long given up on keeping my data safe. What I mean is I try to restrict who has what data and provide junk when necessary, but after a while there's only so much you can do.
So I've done the opposite: I've made it impossible for them to use my data to target me. I block ads on all of my browsers, regularly reset my advertising ID, and hope that the amount of trash data I feed them makes whatever they have even more useless.
One day, I'll set up some sort of home server, use Nextcloud or something, and finally move out of Google's walled garden.
The cheapest method for defeating license plate recognition systems is simply not giving them the plate to begin with. Pull over right after you get off the freeway and cover the plates with two pieces of duct tape, or remove them. Illegal, obviously, but weigh the chances of getting pulled over against those cameras in the Disneyland parking lot, which don't take breaks. If you're white, and also not a drug dealer or a criminal on the lam, they'll just give you a ticket.
If you're dedicated to the cause, here's a device available via Amazon Prime that will hide it on demand. Not sure how easy it would be to install on a rental car though, given that it needs power and you don't have a garage.
As someone who regularly visits the parks and resorts, I don’t understand what they do with all this data. I use the iOS Disney+ app and their website to make reservations, and don’t attempt to block most of their ad tracking and telemetry. I’ve probably made 50+ visits to the Orlando parks and resorts in the last 3 years and our family spends considerable sums of money on their concessions.
So why does every restaurant host and hotel employee ask if it is our first time at the restaurant/hotel? And why doesn’t the app use historical data to promote experiences we are likely to enjoy? Why do they always ask where we are from, if we are parking a vehicle, and whether we (still) have food allergies or preferences?
Storing these preferences could save dozens of person-hours per family vacation and would not be difficult to implement, or at least to try.
I believe they, like many companies, are just hoarding the data in the hope that one day Someone will come along and figure out how to squeeze it or sell it, because Data Science. Or maybe it will become sentient and some creature will crawl out of the ocean of primordial data ooze.
Funny that the author considers D+ asking for birthday and gender to be a deal breaker. I'm always flabbergasted at people whose knee-jerk reaction to forms is to just fill in truthful data. As if it's a moral issue.
I have a hypothesis (please prove me wrong): it is not immoral to default to completely fictitious information when prompted on the web.
Signing up for D+? "Jose Conseco", Gender? Female.
Buying some shoes at a random retailer? Just make it all up. EVERYTHING.
Additionally, I've discovered that when paying by credit card, even the billing details can be entirely wrong, with the exception of zip code. There is just zero reason to provide your real address (assuming you're not having something shipped) for charging to a CC.
Think about it: you're signing up for HBO Max and they ask for your name. I'm genuinely curious how many people think there's any valid reason at all to answer truthfully. Why does HBO Max need to know your real name? You're just there to watch videos. This applies to close to 100% of places on the internet.
Some may consider it fraud, and the legal system might have a say in that too.
That said, I think there's a spectrum of acceptability for fictitious information that varies on the person and one requesting it. At one end is anything government or otherwise "real life" or "serious business" related, where I would not hesitate to provide real information. At the other end is accounts for various online-only entities, in which I tend to stay as far from my real identity as possible.
AFAIK CV-Dazzle is a hand-crafted adversarial pattern against face detection, specifically against a Viola-Jones style detector built on Haar cascades. The painted patterns invert the brightness relative to the first few filters applied, pushing you under the detector's threshold.
Modern face detectors (and recognizers) have much stronger priors for things like shape and illumination, so there's a chance they can separate the intrinsic images related to shape, texture, and illumination and still work. I still think there is some merit to the approach, since modern systems are trained on data for what most people look like (and not, e.g., the band members of Kiss). Even so, they're trained to be robust to occlusions. The most successful approach at this point is probably facial prosthetics plus contact lenses to hide your irises... unless they're using IR to look at the blood vessels in your face, etc.
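For anyone curious, here's roughly what that class of detector looks like in practice -- a minimal sketch using OpenCV's stock Viola-Jones/Haar cascade (the image filename and parameters are placeholders, not anything a park actually runs):

```python
import cv2

# OpenCV ships a Viola-Jones style frontal-face cascade; this is the kind of
# detector CV-Dazzle was originally designed to defeat.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("crowd_photo.jpg")            # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # Haar features operate on grayscale intensity

# detectMultiScale slides the cascade's light/dark rectangle filters over the image
# at multiple scales; dazzle makeup tries to invert exactly those contrasts.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"faces detected: {len(faces)}")
```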
I like the take on this, using a burner phone to blend in.
At what point does the lack of signal become the signal? If I were Disney and my tracking system showed you not oozing out any radio, or the AI never labeling your stills as "person staring like a zombie at their phone", I would red-flag the hell out of you.
No, it probably means you get assigned a personal security guard tracking every step you make until you leave the property. And that the moment you look too awkward, a very friendly guy is going to come up to you and start asking questions.
Disney uses all this data to "optimize" the park-- for profitability. They're not interested in you, exactly. They've gamified the park, which is what sucks about it now. I was very blessed to have the means to live near Disneyland and be able to afford yearly passes for a decade. I've been to Disneyland at least a thousand times-- and if you think surveillance is a new idea there, you are mistaken.
True story-- I worked as a sysadmin for a number of years, and as such found it useful to carry a knife with me to open boxes-- receiving equipment was a daily duty for me. My weapon of choice was a CRKT "snap lock" mini knife with a blade no larger than an inch. It was completely inoffensive, but it could open a package quite well, and due to the design it was quite safe to keep in your pocket. They don't seem to make this knife anymore-- but be aware it was not the rather large "snap lock" knife CRKT still makes-- this one was about the size of a key.
So, I arrive at Disneyland like any other day. Park in the mega-structure and when I arrive at the security gate, a very respectful gentleman in touristy clothes says to me, "Excuse me sir, can you come over here for a moment?"
He pulls me aside-- he says, "The knife in your pocket. You can't bring that into the park."
I am floored. How could he possibly know about a knife I literally keep on my key ring that doesn't even look like a knife? This is the day I learned that Disney employs security staff to pretend to be park guests.
I said, "Yes, I do have a knife, but I think if I was allowed to show it to you, you might agree that it's not a weapon. If you disagree, then I'll give it to you, so it doesn't enter the park, but don't throw it away because it's a great knife, so consider it a gift."
To his credit, he let me show him the knife-- and he said something to the effect of, "That's fine. Have a great day at the park."
To me, this was security done correctly. The guy identified a possible problem (my knife), and was allowed to exercise his own judgement about the situation.
Here [1] is a video someone recorded of a fight that happened in a Disney park. And here [2] is a Reddit thread discussing it. Expand the comments. Numerous people talked about Disney's infamous security and the apparent lack of it in the video. Not a single other person in that thread noticed the four plainclothes officers in the video, two of whom were present throughout almost the whole video.
Can you find them?
This day was grey-hat day. The first one shows up exactly 30 seconds after the incident starts. The one who does the choke-hold departs almost as fast as he arrived and acknowledges his coworker at 5:29. And no one saw them even from their armchairs. Caught on video and still complete ghosts.
Have you made any attempt to determine how effective such "personal responsibility" methods are at actually avoiding surveillance, and/or do you have any thoughts on how you might go about finding out? A follow-up on this topic, perhaps interviews with people who'd know, might be of interest.
I do like the piece, as a demonstration of the difficulty / cost / inconvenience of actually opting out of pervasive surveillance. As a work of performance art your blog post is excellent. (A point several people commenting in this thread seem to have spectacularly missed.)
Thanks! You totally grokked what I'm doing with this--I appreciate it!
Yes, I do think a lot about the effectiveness of personal responsibility, and its limits. Certainly I could request my logs and data from every company, although that takes even more time than opting out. In the meantime, I don't block ads, as those are my only clues as to what a corporate database thinks it knows about me (obviously I use blockers to stop info flowing from me to them; it's just that I want to see what they think they know). I also use lots of addresses and CC numbers so I can trace data migration and identify the culprit if it happens (
https://www.optoutproject.net/what-email-address-should-i-us...)
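A minimal sketch of that per-service alias idea, assuming a mail provider that supports plus-addressing (the mailbox, domain, and service names here are all made up):

```python
import hashlib

BASE_USER = "optout"      # made-up mailbox that supports plus-addressing
DOMAIN = "example.com"    # made-up domain

def canary_address(service: str) -> str:
    """Return a unique, per-service email alias so a future leak can be traced back."""
    tag = hashlib.sha256(service.lower().encode()).hexdigest()[:8]
    return f"{BASE_USER}+{service.lower()}.{tag}@{DOMAIN}"

# If unsolicited mail ever shows up at the 'shoestore' alias, the shoe store
# (or whoever they sold the list to) is the weak link.
for service in ("shoestore", "themepark", "catalog"):
    print(service, "->", canary_address(service))
```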
As for whether it's working... there are more quantitative ways to tell for sure. In the meantime, my Pregnancy Experiment started in 2013, and in the intervening ten years I have seen zero online ads for anything baby- or child-related, and have received an unsolicited catalog in the mail only about once every three years (thanks to the above technique, though, I always knew who the weak link was). When I tell other moms that, their jaws drop.
Logs are one option; I've been increasingly leaning toward various canaries / telltales myself, which ... it looks as if you're doing through email at least in part, and credit card numbers, which would of course have been my next suggestion. Variant spellings of your name, or, if you're ambitious, different postal mailing addresses, would be another option.
(Curiously, a lot of the methods for tracking surveillance ... are also used by marketing entities to track campaign effectiveness and such.)
On top of these, obscure, otherwise-unused URLs might also be used as canaries.
I'd especially like to see what dot-connecting capabilities various entities have, and am thinking of how these might be tested, e.g., by having two not-directly-linked tokens used and seeing whether or not there's plausibly been a connection made between the two.
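A rough sketch of how that paired-token test might be set up (everything here is hypothetical -- the aliases, name variants, and CSV log are just one way to keep the bookkeeping straight):

```python
import csv
import secrets

def make_token_pair(service: str) -> dict:
    """Create two tokens for one service that are linked nowhere else:
    an email alias and a variant name spelling. If later contact ever
    combines them, the dots were plausibly connected on their side."""
    suffix = secrets.token_hex(3)
    return {
        "service": service,
        "email_alias": f"optout+{service}.{suffix}@example.com",  # assumes plus-addressing
        "name_variant": f"Jane {suffix[:3].upper()}ers",          # made-up spelling variant
    }

pairs = [make_token_pair(s) for s in ("themepark", "shoestore")]

# Keep a private record so either token can be recognized when it resurfaces.
with open("canary_pairs.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["service", "email_alias", "name_variant"])
    writer.writeheader()
    writer.writerows(pairs)
```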
I'm realising that there's actually a substantial site and project behind your work; I'm going to take a look at that before I say too many more stupid things ;-)
NB: As one who read economics at Uni, I'm developing a much-belated mad respect for sociology.
If you happen to know of any good references / sources on methods --- for both individuals and organisations --- or even just the right keywords to use for researching the literature, I'd appreciate it. That's information which could stand to be more widely known.
(My search-fu is usually pretty good. There are times when I'm stymied simply by not knowing the language / terminology used within research / technology areas.)
Not annoyed. Spouse is fully on board. Kids think it's fun, especially as we use data privacy to explain how systems work under the hood. Extended family is supportive. Maybe people are surprised to hear that, but that's been my experience so far.
There's something exceptionally funny about putting this level of effort into anonymizing your trip to Disneyland, of all places: I was never a "Disney kid," but my understanding is that it's milquetoast in terms of advertising profiles. "Likes and went to Disneyland" is probably roughly as strong as "lives in a house" and "grew up in the US" in terms of personalization signals.
(This shouldn't be read as a criticism! I just think it's an amusing application of an otherwise valuable thought experiment/practice.)
> My understanding, from these conversations, is that it is still possible in Disneyland (not World) to evade because they rely on the app and credit card swipes to generate single user ID's, not on a wristband which is subject to infrared detection throughout the parks in Florida.
MagicBands (via MagicBand+) have finally been introduced at Disneyland Resort [0].
> Unlike an iPhone, my phone doesn't learn much about who is tethering to it, nor does it relay to the access point what is going on or who is accessing it.
So what are the chances that this set of countermeasures set off some red lights in the Disney master control room, and they put a tail on him/them for the rest of their visit?
Why does the author not simply send Disney a CCPA data deletion request, either instead, or in addition to the rest of the countermeasures outlined in the article?
And by opting out manually using these methods, they've gone from lost-in-the-crowd to unique - complete with a publicly findable name, now that they've written a whole article about it.
Unless you tell the other side, "hey don't track me", they can (and will!) legally use your aversion to tracking as another data-point!
of course, spreading tracking-avoidance methods helps with this! (as long as we can all agree on which methods to use...)
This seems like a long way to go to avoid some place that you could simply not go to. I've never been to Disney. I hate kids, crowds, and lines so instead of trying to find some way to get around it I just don't go.