> Anyone have experience with these sort of services?
Quite a bit. Often if you request removal or opt-out, you'll reappear in a matter of a few months in their system, regardless of whether you use a professional service as a proxy or do it yourself. The data brokers usually go out of their way to be annoying about it and will claim they can't do anything about you showing up in their aggregated sources later on. They'll never tell you what these sources are. A lot of them will share data with each other, stuff that's not public. It's entirely hostile and should be illegal. I am trying to craft a lawsuit angle at the moment but they feel totally unassailable.
I'm extremely skeptical of any services that claim they can guarantee 100% removal for any length of time longer than six months. From my technical viewpoint and experience, it is very much an unsolved problem.
my understanding is that there's a bit of a catch-22 with data removal - if you request that a data broker remove ALL of your information, it's impossible for them to keep you from reappearing in their sources later on because that would require them to retain your information (so they can filter you out if you appear again).
I’ve heard this claim, but they could use some sort of Bloom filter or cryptographic hashing to block profiles that contain previously-removed records.
There could also be a shared, trusted opt-out service that accepted information and returned a boolean saying “opt-out” or “opt-in”.
Ideally, it’d return “opt-out” in the no-information case.
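To sketch what I mean (toy code; the key format and filter parameters are made up for illustration, and the record-normalization problem still has to be solved upstream):

```python
import hashlib

class OptOutFilter:
    """Toy Bloom filter over normalized record keys.

    It answers "possibly opted out": false positives are possible, false
    negatives are impossible for keys actually added. That errs in the
    privacy-safe direction: when in doubt, suppress the record.
    """

    def __init__(self, size_bits=1 << 24, num_hashes=7):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key):
        # Derive k bit positions from k independent hashes of the key.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def possibly_opted_out(self, key):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

# A broker would check before (re)ingesting a record:
f = OptOutFilter()
f.add("ssn:123-45-6789")                      # hypothetical normalized key
assert f.possibly_opted_out("ssn:123-45-6789")
```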
Hash-based solutions aren't as easy as we might hope.
You store a hashed version of my SSN, or my phone number, to represent my opt-out? Someone can just hash every number from 000-00-0000 to 999-99-9999 and figure out mine from that.
You hash the entire contents of the profile - name+address+phone+email+DOB+SSN - and the moment a data source provides them with a profile containing only name+address+email, the missing fields mean the hashes won't match.
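To make both failure modes concrete (toy code; unsalted SHA-256 stands in for whatever a broker would actually use, and all the profile values are fictional):

```python
import hashlib

def h(s):
    return hashlib.sha256(s.encode()).hexdigest()

# Failure 1: the SSN space is only 10^9 values, so an unsalted hash is
# brute-forceable on commodity hardware (loop shown, not run here):
target = h("123-45-6789")                 # fictional SSN
# for n in range(10**9):
#     guess = f"{n // 10**6:03d}-{(n // 10**4) % 100:02d}-{n % 10**4:04d}"
#     if h(guess) == target:
#         break                           # SSN recovered

# Failure 2: hash the whole profile, and any missing field breaks the match.
full    = h("jane doe|123 main st|555-0100|jane@example.com|1980-01-01|123-45-6789")
partial = h("jane doe|123 main st|jane@example.com")
assert full != partial                    # the opt-out hash never matches
```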
A trusted third party will work a lot better IMHO.
And of course none of the data brokers have much reason to make opt-outs work well, in the absence of legislation and strict enforcement - it's in their commercial interests to say they "can't stop your data reappearing"
> Someone can just hash every number from 000-00-0000 to 999-99-9999 and figure out mine from that.
That's what salts are for, right? It wouldn't be too hard to issue a very large, known, public salt alongside each SSN.
> And of course none of the data brokers have much reason to make opt-outs work well, in the absence of legislation and strict enforcement - it's in their commercial interests to say they "can't stop your data reappearing"
If the salt is public, what’s the point? Then you can get all the salts, combine them with every possible SSN, and you’re back where you were before.
No, that's kind of the point of a salt: it doesn't need to be hidden. It's designed for a scenario where e.g. your database is hacked and the salts are visible as plaintext: https://en.wikipedia.org/wiki/Salt_(cryptography)
Since the salts are random, long, and unique to each SSN: a) you'll find no existing rainbow table that contains the correct plaintext for your SSN hash, and b) each SSN now requires its own brute-forcing, which is unhelpful for any of the other SSNs.
Combine that with a very expensive hashing method like PBKDF2 (I'm sure there's something better by now) and you've made it pretty dang hard for non-state actors to brute-force a significant chunk of SSNs. There are also peppers, which involve storing some additional global secrets on HSMs.
I'm sure the crypto nerds have like a dozen better methods than what I can come up with but the point is this is not a feasibility issue.
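For example, a minimal sketch of per-record salting with a deliberately slow KDF, using PBKDF2 from hashlib (the salt length and iteration count here are illustrative, not a vetted recommendation):

```python
import hashlib
import hmac
import os

def hash_ssn(ssn, salt=None):
    """Hash an SSN with a random per-record salt and a slow KDF."""
    if salt is None:
        salt = os.urandom(32)            # unique 256-bit salt per record
    digest = hashlib.pbkdf2_hmac(
        "sha256",
        ssn.encode(),                    # the low-entropy secret being protected
        salt,
        600_000,                         # work factor: make each guess slow
    )
    return salt, digest

def matches(ssn, salt, stored_digest):
    _, candidate = hash_ssn(ssn, salt)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_ssn("123-45-6789")   # fictional SSN
assert matches("123-45-6789", salt, digest)
```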
I’m sorry, but it’s not that simple. You can’t just say “add salt,” list the benefits of salt, and call the problem solved.
In a password database, salt is not secret because the password combined with it is secret and can be anything. Even if you know the salt for a particular user, in order to crack that user, you need to start hashing all possible passwords combined with that salt. If a user picks a dumb password like password123, then they are not safe if the salt leaks. Other users with password=password123 will not be immediately apparent because other users have different salts. You would have to try password123 combined with each user’s salt to identify all the users with password123.
You said “It wouldn't be too hard to issue a very large, known, public salt alongside each SSN.” That means there should be some theoretical service where you pass it an SSN and get back the salt, right? So what have you gained? Any attacker with an SSN can get the salt, and nothing was gained. Or if attackers don’t have SSNs, they can just ask for all the salts; the mapping from SSN to salt is public, so they know 000-00-0000 has salt1, 000-00-0001 has salt2, etc. So you haven’t increased the number of hashes attackers have to do to do whatever it is they want to do.
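In code, the “public salt per SSN” scheme collapses back to the unsalted case; lookup_salt below is a stand-in for the hypothetical published mapping:

```python
import hashlib

public_salts = {}

def lookup_salt(ssn):
    # Deterministic stand-in for "ask the public service for this SSN's salt".
    return public_salts.setdefault(
        ssn, hashlib.sha256(b"salt:" + ssn.encode()).digest()
    )

def stored_hash(ssn):
    return hashlib.sha256(lookup_salt(ssn) + ssn.encode()).hexdigest()

def crack(target_hash):
    # Same enumeration as the unsalted case: one hash per candidate SSN,
    # because the salt comes for free with each guess.
    for n in range(10**9):
        ssn = f"{n // 10**6:03d}-{(n // 10**4) % 100:02d}-{n % 10**4:04d}"
        if stored_hash(ssn) == target_hash:
            return ssn
    return None
```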
You’re right about commercial interests being at play. That’s why we don’t have laws like GDPR in the USA. Crypto nerds have thought about this long and hard, and if it were that easy, we wouldn’t need stupidly complex laws like GDPR. They would “just add salt.” Or other services would “just add salt” instead of relying on more complex and expensive forms of identity verification and protection.
You don’t need to be a crypto nerd to try to describe a flow where having a public, known salt per SSN helps with privacy. You don’t need to be a crypto nerd to design secure one-way hash functions that would plug into that flow.
Yep, you are right, complete brain fart on my end. Of course it doesn't work if it's required for the salt to be publicly mappable to the SSN, since that just circumvents the whole thing. I just didn't understand what you were saying in your earlier message.
"all the salts" * "all the SSNs" becomes a very big number. With a large enough but still reasonably sized salt, you can engineer it so that hashing all combinations takes an amount of time greater than the age of the universe even if you use all the computers in the world.
All the salts * all the ssns is a very large set but it’s irrelevant because in the above scenario each ssn has a public well known salt, you don’t have to test each salt against each possible ssn because the mapping from one to the other is known.
Even if such a service doesn’t exist, and you just have a list of all the salts without knowing which SSNs they map to, you’re just hand-waving about how hard it will be to hash the entire salt*SSN set.
Hashing a salt+SSN can’t take too long, because data brokers need to do it frequently in order to verify identities.
In this report, https://files.consumerfinance.gov/f/documents/cfpb_consumer-..., it says monthly volume of credit card marketing mail is in the hundreds of millions per month. Can we assume that each piece of mail is roughly associated with one instance of hashing a salt+ssn? Given that number, how expensive (in terms of time, compute cycles, whatever) can it possibly be to hash a salt+ssn? If we make it too expensive, expensive enough to support your “age of the universe” claims, credit markets would grind to a halt.
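Back of the envelope, under the (big) assumption that each mailing corresponds to one salt+SSN hash:

```python
hashes_per_month = 300_000_000          # "hundreds of millions" per the CFPB report
seconds_per_month = 30 * 24 * 3600      # ~2.6 million

for per_hash_seconds in (0.001, 0.1, 1.0):
    cores = hashes_per_month * per_hash_seconds / seconds_per_month
    print(f"{per_hash_seconds} s/hash -> ~{cores:,.0f} cores running nonstop")
# 0.001 s/hash -> ~0 cores; 0.1 -> ~12; 1.0 -> ~116
```

Whatever work factor you pick has to fit inside that live-verification budget, which caps how expensive each individual guess can be made.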
I’m quite familiar with how a salt works. One might say deeply familiar since I have worked on auth services for very large, very secure organizations.
The poster above me just said “add salt” and waved their hands without describing anything concrete, as if saying some magic words could solve hard problems.
So for a perfect match they'd need to have some sort of unique identifier that's present in the first set of data you ask them to remove, as well as being present in any subsequent "acquisitions" or "scrapes" of your data.
If these devs that scrape/dump/collate all this info are anything like the ones I've seen, and they're operating in countries like the US and UK, where you don't have individual identifiers that are particularly unique, then I'd say the chance of them getting such a "unique" key on you, to remove you perpetually, is next to impossible. And if it's even close to being "hard", they'll not even bother. Doubly so if this service/people/data is anything like the credit-score companies, which are notoriously bad at data deduplication and sanitization.
Likewise, if you want them to do some sort of removal using something other than a unique identifier, then you need some sort of function that determines closeness between the two records. From what I've heard, places like Interpol and countries' border-control and police agencies usually use first name, surname, and DOB as the matching combination. Amazingly unique and unchanging combination, that one! /s
Sorry, I value my legal rights over the viability of the data broker industry. If they can’t figure out a way to lawfully not collect my data, they should not collect data, period.
Which would never work, because real-life data is messy, so the hashes would not match. Even something as simple as SSN + DOB runs into loads of potential formatting and data entry issues you'll have to perfectly solve before such a system could work, and even that makes assumptions as to what data will be available from each dataset. Some may have only name and address. Some may include DOB, but the person might have lied about their DOB when filling out the form. The people entering it might have misspelled their name. It might be a person who put in a fake SSN because they're an illegal immigrant without a real one. Data correlation in the real world is a nightmare.
When you tell a data broker to delete all of the data about you, how can you be sure they get ALL of the data about you, including the records where your name is misspelled, or the DoB is wrong, or it lists an old address? Even worse if someone comes around later, discovers the orphan data while adding new data about you, and fixes the glitch, effectively undoing the deletion.
It's a catch-22: if you want them to not collect data about you, they need a full profile on you in order to reject new data. A profile that they will need to keep up to date, which is what they were doing already.
> Even something as simple as SSN + DOB runs into loads of potential formatting and data entry issues you'll have to perfectly solve
You don’t have to solve it perfectly to be an improvement.
Also, this is BS. Not every bit of data is perfectly formatted and structured, but both of your examples are structured data. You can 100% reliably and deterministically hash this data.
There’s so much in your argument that can be answered with “imperfect is better than the status quo”. If you gave someone the wrong DOB, it’s “not you” anyway; at least let me scrub my real data, even if the entry is imperfect for some people or some records.
> You don’t have to solve it perfectly to be an improvement.
They don't want to solve your problem. You aren't their customer. They want to comply with the letter of the request only insofar as it covers their own butt in terms of regulatory requirements and/or political optics.
The “solution” mentioned is political. A requirement that data on an individual is properly deleted when presented with the data would be “good”. A requirement that captures every nuance of mistakes would be “perfect”.
Hashing a birthday and SSN is deterministic. We could deterministically keep that data deleted. This would be better than we have today, and could be done reliably and affordably.
The companies can easily be required (by law) to implement the “good” solution. Everyone complaining it’s not “perfect” is stopping “good”.
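A sketch of what that “good” version could look like (the canonicalization rules and field choices here are my assumptions, not a spec):

```python
import hashlib
import re

def canonical_key(ssn, dob):
    """Normalize the two structured fields so formatting noise can't
    break the match: digits-only SSN, ISO date for DOB."""
    digits = re.sub(r"\D", "", ssn)
    if len(digits) != 9:
        raise ValueError("SSN must contain exactly 9 digits")
    # Assume DOB already arrives as YYYY-MM-DD; a real system would parse
    # the handful of common date formats here.
    return hashlib.sha256(f"{digits}|{dob}".encode()).hexdigest()

deleted = set()                                  # persisted suppression list

def process_deletion_request(ssn, dob):
    deleted.add(canonical_key(ssn, dob))

def may_ingest(ssn, dob):
    """Called on every new record before it enters the database."""
    return canonical_key(ssn, dob) not in deleted

process_deletion_request("123-45-6789", "1980-01-01")    # fictional person
assert not may_ingest("123456789", "1980-01-01")         # formatting differs, still blocked
```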
There's a trivial way to not re-add data that was removed: don't do it without user opt-in, and you admittedly have access to ask the user at the moment of data collection. If you don't have the ability to ask users to opt in, you probably shouldn't be collecting the data anyway, with very few exceptions like criminal records.
edit for clarity: by criminal records, I mean for the official management of them, not for scraping their content.
I've had a very bad experience with Liberty Mutual following a data opt-out from another service. They sent me on a runaround, ending with an email saying to follow "this link" to verify myself. (There was no link, only sketch.) I ended up getting a human on a phone through special means, and they sent me a fixed email with a working link.
I should be hearing back from them in the next 32 days, as this was 13 days ago.
I got a quote from them and immediately initiated a data removal request. It seems like it went through, got a link in the email. Thanks for the reminder that I might need to follow up to make sure they followed through.
It's hard to make collection, aggregation, and sharing of facts illegal.
Not to minimize the harm that can be done by such collections, but the law is justifiably looking for a scalpel treatment here to address the specific problem without putting the quest to understand reality on the wrong side of the line.
> It's hard to make collection, aggregation, and sharing of facts illegal.
Sure, but the US has a precedent in HIPAA. Not saying it's copy-paste, but... maybe it should be.
I would prefer the law be more restrictive than less, because I don't believe this is true:
> law is justifiably looking for a scalpel treatment here to address the specific problem without putting the quest to understand reality on the wrong side of the line.
I believe the law may use that noble goal as cover for the actual goal: restricting capital holders' ability to accumulate capital as little as possible. Data sharing isn't a public good in any way. It's mostly not even useful for the targeting purposes it claims. It's extremely reckless rent-seeking that knowingly allows innocent people to have their lives wrecked by identity theft.
As someone who helps care for elderly relatives with widely-dispersed out-of-state families, I can point to HIPAA as an excellent example of why crafting this kind of law is difficult.
I think we are going to discover, once people do the research, that HIPAA has done net harm by delaying flow of information for critical-care patients resulting in lack of patient compliance, confusion, and treatment error.
Yes, there is harm potential in insurance companies denying coverage or claims because they are privy to too much information about clients (a scenario that, I'd note, we could address directly by law, via a national healthcare system or a ban on denying coverage for various reasons), or in employers or hostile actors (including family) discovering medical facts about a patient. But I have to weigh that harm potential against my day-to-day reality of fighting uphill to get quality care, because every specialist, every facility, and every department needs a properly-updated HIPAA directive for a patient (and the divisions between these categories aren't clear to the average non-medical observer).
Huh, I wasn't aware of such a viewpoint. I've never had or heard of problems with HIPAA preventing timely or accurate care, even with my father going in and out of hospice toward the end of his fight with cancer. I'm really sorry to hear it. At the same time, I do have to wonder if that kind of problem genuinely outweighs the protection HIPAA has given millions of people against harms small and large. (I guess with the state of data privacy today, HIPAA may be basically useless, but that isn't exactly HIPAA's fault.)
> HIPAA has done net harm by delaying flow of information for critical-care patients resulting in lack of patient compliance, confusion, and treatment error.
You won't find any disagreement from me that HIPAA is very complicated. However, there's a certain level of whining and foot-dragging in the industry that we should take with a massive grain of salt. There are so many HIPAA-compliant and still convenient ways these days to handle patient communications, but the industry doesn't want to invest, doesn't care enough about patient experience, and then goes "sorry, HIPAA :-(((" every time.
With GDPR, after Schrems II happened and it became clear that the EU-US Privacy Shield was no longer a valid workaround, I personally observed companies (including the one I was in) suddenly moving mountains to complete, in just a few months, migration projects and privacy upgrades that the industry had previously deemed technically infeasible or impossible, cost-prohibitive, business-destroying, etc. And they still remained massively profitable and growing. If they had just done the right thing early on, it wouldn't have been on such a tight deadline either.
That was the final straw for me in terms of being very firmly convinced that we should be telling companies to shut up and comply a lot more because they will never do the right thing on their own even if it wasn't /that/ hard. Another approach here is to start holding them liable for the personal costs of data breaches etc and let the incentives take care of themselves. In fact, why not a bit of both?
Sure, I should probably have clarified "In the United States," where there's a First Amendment that most attempts to make fact-sharing illegal immediately fall afoul of.
There are definitely exceptions, but it puts strict scrutiny on any novel prior restraint of speech.
this is true and nothing new.. mass "gray market" personal-information services have leapt into markets since VISA and Mastercard fifty years ago, and somewhat before that with driving records, in the USA. The "pure land" of democracy in North America was never pure, and the Bad Old Ways have crept into the corners since the beginning.
The difference now though is an attempt to legislate personal data collection, such as the CCPA. I strongly believe they are violating the law, and that if I opt-out or request removal, an answer of "oh well nuthin we can do" is not acceptable when my data re-appears either on their platform or on another platform they provided data aggregation services to.
>The "pure land" of democracy in North America was never pure
don't mix your pet grievances together. Having full public knowledge of every person in your country is democratizing, frankly; an aid to democracy, not a hindrance. Not saying I want to live in that world, but it's not an impure democracy.
Norway (and others?) already publishes everybody's income statements. Not healthy imo, but I guess it would aid more accurate snitching (and envious resentment).