I read this and immediately thought "oh shit, yet another regulation for a small bootstrapped software business where we try to be honest while the big guys will still find a way to circumvent it." Thankfully, I looked into the fine print and was wrong. This bill only applies to corporations that do over $50,000,000 in revenue OR (EDITED from AND) have info on at least 1,000,000 customers. Of course, I don't mean to imply that smaller tech companies are all saints, but this bill clearly seems to be drafted for the bigger fish who are powerful enough to blatantly ignore privacy laws.
EDIT: To clarify one thing user downandout pointed out: this could also apply to app developers who have a million users. But overall, it should not affect bootstrappers or smaller tech businesses that don't hold that much consumer data, so my overall point still stands.
So then big corps do all their customer-information-risking behavior in spun-out, wholly owned subsidiaries, or even arm's-length non-owned ones. If a subsidiary succeeds, they acquire it for some fixed amount; if it screws up under this law, the company just folds and the parent is free from financial damage.
I am not an expert bill reader, but it seems like this loophole is covered, as there is another clause that says:
"(ii) is not substantially owned, oper
ated, or controlled by a person, partner
ship, or corporation that does not meet the
6 requirements under clause"
Tbh that's still a better outcome than the status quo, isn't it: a single hack is now limited in damage, and multiple hacks are required to do the equivalent of today's scenarios.
What if everyone contracts out the data collection to a single party? That only gives the data a larger exposure.
I'm really glad this is coming to light and I hope it passes. It will be very interesting to see how companies try to avoid it, but at first glance, it seems well thought out.
Software engineers tend to assume that (a) they are the smartest in the room, (b) legislation is exactly like code, (c) caselaw doesn't exist and (d) nobody has ever had to write complex rules before software was invented. All of which, we feel, qualifies us to poke holes in any legislation we happen to stumble across.
But, of course, it doesn't. Not even close.
By way of analogy, if someone looked at a link to a github repo and said "yeah well I can't see any GOTO 10 lines, I bet this will crash when the IP trace becomes Apache'd" I hope someone would be patient and polite in explaining the several levels of wrongness involved.
There are jurisdictions where caselaw doesn't exist (which a lot of software engineers conveniently forget), or where law is written differently from Anglo-American legislation (which software engineers also conveniently forget).
That the doctrinal basis is different does not change that law requires interpretation by experts using something more sophisticated than "it's just code".
Civil jurisdictions don't have caselaw as an independent source of legal rules, but they still have cases and they still have interpretation. It is not uncommon for an ancient Roman jurist's opinion to be cited in argument in Scots law, just as it is not uncommon to cite very old English cases in Australian law. And it is also common to have civil codes in common law jurisdictions -- the criminal law I was taught was based on a statutory instrument which asserted itself to be the whole of the criminal law and which was frequently amended whenever courts began to accrete rulings around it.
That ultimately doesn't matter. The bill has zero shot at passing and becoming law. Wyden doesn't have anywhere near the votes he would need, so it's essentially his fantasy idea of a bill.
There's no privacy law that is going to get passed by the US Government, focused on large corporations, that involves sending violators to prison for up to 20 years.
You read that wrong. In order to NOT be a "covered entity," you must meet ALL of the following criteria:
1) Revenue of less than $50 million; AND 2) Must not have info on 1 million or more people; AND 3) cannot be a data broker
That means an independent app developer who gets more than 1 million installs, or a website with more than 1 million users, IS a covered entity, regardless of revenue. Also, ANY "data broker," regardless of size, is covered.
This info is on pages 4 and 5.
Edit: How is a factual comment getting downvotes? OP read it wrong, and I told him so. There's nothing in this comment to disagree with. There are only facts.
No, it would kill all American software/web startups by limiting them to 999,999 users, unless they have millions of dollars in VC funding that they can use to comply with this law. It would strangle the startup community, as most startups (even those with 1M+ users) can never hope to have the resources to comply. You have to remember that the reason that startups get any funding is because investors hope that they will be highly successful. If there is a guarantee of a huge compliance bill once the company reaches 1M users, far fewer companies will be funded.
Getting 999K users, while a milestone, isn't necessarily enough to raise the money necessary to comply with this law.
I'm not GP, but it looks like the more burdensome things are on pages 26-33, and they are too lengthy to post here. I can see compliance costing significant sums that would be out of reach to a typical startup.
The question is not "is there anything that would clearly be burdensome?", but "am I confident enough that I am complying with these items, as retroactively interpreted by regulators?"
You need to pay a lawyer to evaluate that for you; that's the cost, regardless of who the bill's sponsor says this is intended to target.
Can you cite an example of one of these requirements that you wouldn't be confident in being able to comply with? Also: how much do you think a legal consult costs? For any one item, I think we're talking a couple hundred bucks.
Almost all of the language in the section we're referring to applies to just one requirement, which is to make the data tech companies retain about consumers available to those consumers upon request. That's something responsible companies already do, many because regulation already requires it of them.
Just popping in to say that I've never had an invoice less than $800 from any firm I've ever hired, and those were very small things. Data privacy-related stuff tends to be way more complicated, and if my company didn't have me, basic data privacy stuff would cost them tens of thousands a year just in legal.
Most competent lawyers cost $400+/hr. For them to review your internal compliance policies and procedures (including your opt-in/out procedures, etc.), privacy policy, and so on, you could easily be looking at a few hundred hours - at that rate, $80,000 to $120,000 or more. That doesn't include the external auditors that the bill wants you to have.
As you said in one of your comments, fortunately this bill as written will never come into law, both due to its implications, and the fact that its author is a single member of a minority party. This is one instance in which I am happy with our system of government.
This proposal doesn't require companies to do formal internal compliance reviews. It's not SOX or GLBA. For most startups, the legal overhead here would probably amount to a few phone calls with a lawyer.
My read is that it's less onerous than the California privacy statute that already covers a huge fraction of tech startups.
We do both security and privacy engineering work for our clients, most of whom are encumbered in one way or another by regs, and it is not the norm for legal to do line-item review of policies and procedures. SOC2 Type 1 audits are much closer to a mainstream practice, would almost certainly satisfy the "data protection" requirements in any rule the FTC would come up with, and certainly do not involve "a few hundred hours" of legal.
That's just not accurate. You should read pages 26-33 in detail. It wants external auditors to come in, and while consultation with a lawyer isn't required, companies would effectively have to use one to review everything they do, lest they be found non-compliant. That could easily range into hundreds of hours of legal work.
I believe I'm one of the "auditors or independent technical experts" this bill refers to (trust me, we don't need Wyden's help getting work), and for the most part the only time we talk to client legal is when we're negotiating our contract. Note also the "if reasonably possible" attached to getting external assessment.
You're referring to that specific provision, but again you aren't considering the fact that any business interested in complying will have to have an attorney review the law, and then review all aspects of their business, software implementation, and policies/procedures in order to ensure they are compliant. That's not a requirement of the law, but how else can they ensure that they are compliant?
It starts with the claim that this law could put Flappy Bird on the hook for decades of prison time. I rebut, and you say (paraphrased) "no, read the law, anyone with 1MM users could be sent to prison for failure to comply". This is obviously not true.
Then the claim becomes that pp26-33 of the statute has so many burdensome requirements that it would be impracticable for many startups to comply. I ask for specifics; none emerge. Instead, a new claim appears: every startup would be on the hook for "a couple hundred hours" of legal to verify their compliance.
But the proposal as stated doesn't require formal compliance reviews, making it hard to support an argument that this proposal would somehow cost more than many other regulations that do have that requirement, and for which my firm has done significant engineering and compliance work without spending a hundred hours talking to legal.
But, no, it turns out that's not the argument. The real argument is that the proposal requires auditors, for which legal will have to be deployed prophylactically. Now, the proposal does not in fact have an auditor requirement, but also, the clause that discusses auditors goes out of its way to make it clear that the types of third parties they're referring to are technical experts, which startups already use.
So the argument changes again. Now the argument is that regardless of the specific construction in the proposal (again, these specifics were all brought to the discussion by you!), it would be prohibitively expensive for startups because a lawyer would have to take time to verify the meaning of the law for the startup.
I point out that this is an argument that applies equally to pretty much any privacy or security law, and you respond that this one is a special case because of the prison time and fines (the "breathtaking" fines are part of the same clauses as the prison liability) --- thus resurrecting the original false claim.
This doesn't read to me like a good-faith argument.
It's of course fine to make the argument that any new regulation would impede startups and would therefore not be worth the trouble (there are other arguments against this proposal you could just as easily make; for instance, that the field isn't mature enough for us to have the FTC use rulemaking authority to establish cybersecurity requirements for startups).
But if those are the kinds of arguments you're making, make them. Don't move the goalposts.
It starts with the claim that this law could put Flappy Bird on the hook for decades of prison time. I rebut, and you say (paraphrased) "no, read the law, anyone with 1MM users could be sent to prison for failure to comply". This is obviously not true.
Actually, with specific regard to Flappy Bird, it is true: the game had more than 100 million installs, far surpassing the 50 million threshold that would expose its developer to criminal as well as civil penalties. So, in contrast to your statement, it actually is true.
Now, the proposal does not in fact have an auditor requirement, but also, the clause that discusses auditors goes out of its way to make it clear that the types of third parties they're referring to are technical experts, which startups already use.
I'm not sure what you mean here. There is an auditor requirement "where reasonable," and presumably "reasonable" would be entirely up to a court's discretion. Also, "technical experts" in the context of this law, wouldn't necessarily be the developer of the site, but rather technical experts who are trained in complying with this law. Likely, that means someone brought in by a law firm or professional auditing outfit, at enormous expense.
No, you're still not correct, because the problem with your claim isn't simply that you have to be a larger company to face prison time, but that there's only one offense in the bill that carries that threat: knowingly certifying fraudulent data protection reports. I'm like the 4th person on this (broader) thread to point that out, and this is at least the 3rd time I've pointed it out to you.
By the way, did Flappy Bird even collect NPI? Or is this an even sillier example?
there's only one offense in the bill that carries that threat: knowingly certifying fraudulent data protection reports.
That's what it says, but one would have to believe that failing to file such reports would also be a criminal violation in any final draft of the bill. Otherwise, what would be the point? Does it make sense to you that they would have a bill like this and provide a simple way to avoid it: just don't file? That appears to be an oversight by the author, but one that would undoubtedly be fixed.
By the way, did Flappy Bird even collect NPI?
Since this bill uses a vague and legally untested definition of "personal information," simply maintaining weblogs containing IP addresses could trigger this.
"It costs money to comply" is a tired argument that gets trotted out every time a new business regulation gets proposed, no matter what the regulation is. All regulations cost money to comply with. Are you arguing against all business regulations?
If your problem is actually with the number of users, please propose a specific number of users that would make a better limit.
EDIT: This is also the exact same kind of "sky is falling" rhetoric that came with HIPAA. Lo and behold, we still have large and small doctors' offices, and we have taken the first steps towards actually protecting customer health records.
There are (I have reason to believe) plenty of non-VC funded social media apps. Easy to make, and people think they’re going to be as big as Facebook. I think that type of app will still get made, as people who can’t sort out funding are people I don’t expect to know the law.
Only as much as the parent's argument was a non sequitur.
Parent mentioned startups with users approaching 1 million being unable to comply. If they can't find VC money at the million-user stage, then perhaps they didn't have a viable model in the first place.
(Plus, were all the companies I mentioned VC-funded from the first user? If not, the argument holds for them too.)
> gets more than 1 million installs, or a website with more than 1 million users,
Incorrect; that would only be true if they collected and stored personal info on their users. Hopefully we see more apps and websites stop collecting this info, or at a minimum start purging the data (i.e., if you visited a site once 5 years ago, they should not still have your data, but many do).
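To illustrate what I mean by purging, here's a minimal sketch, assuming a hypothetical SQLite "users" table with a "last_active" column stored as an ISO-8601 string (nothing in the bill prescribes this; it's just the kind of retention policy I have in mind):

    import sqlite3
    from datetime import datetime, timedelta

    RETENTION_DAYS = 2 * 365  # example policy: drop records idle for two years or more

    def purge_stale_users(db_path: str) -> int:
        """Delete user rows that haven't been active within the retention window."""
        # Assumes last_active is stored as an ISO-8601 string, so string comparison works.
        cutoff = (datetime.utcnow() - timedelta(days=RETENTION_DAYS)).isoformat()
        with sqlite3.connect(db_path) as conn:
            cur = conn.execute("DELETE FROM users WHERE last_active < ?", (cutoff,))
            return cur.rowcount  # number of records purged

    # e.g. run nightly from a scheduled job:
    # purged = purge_stale_users("app.db")

Something that simple covers the "don't keep data you no longer need" idea; the hard part is deciding on the policy, not writing the code.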
> (12) PERSONAL INFORMATION.—The term "personal information" means any information, regardless of how the information is collected, inferred, or obtained, that is reasonably linkable to a specific consumer or consumer device.
So if you use an email address for your users to log in, or even a username, you are collecting personal information under the vague nomenclature of this law. Device IDs might count too (though I think Apple at least now gives a per-app ID, which means it can't be linked to external sources much now, iirc).
That's analogous to a construction already in place in California privacy law, which is more prescriptive and onerous than this law is, so it's hard to make the argument that this federal act would wreck the startup ecosystem.
You are posting like this is a bad thing. I think it is great: companies need to be held accountable for gobbling up personal data, and should be discouraged from collecting anything, including email addresses. I get enough spam, thank you.
A lot of people can be tracked down by their online aliases unless the person has gone through a lot of work to make sure they keep their aliases separate (which isn't as easy as it sounds). That means that by definition most aliases and usernames can be googled to find a specific person and thus would fall under personal information according to the law.
You think email addresses are bad, but the masses hate having to remember usernames. Emails aren't used for logins because of marketing (those systems already required email verification in addition to usernames, so the email was collected either way). Furthermore, without an email it becomes impossible to recover your account if you forget your password, which most general users eventually do.
So this law isn't going to discourage companies from collecting your email address, it just has the potential to add burdens to companies that end up with 1 million signups (even for a free website).
You have said a lot of words, none of which changes my opinion.
Using email as either verification or a username has always been lazy and insecure, and should stop.
If normies cannot recover their Candy Crush account and need to sign up for a new one in order to protect privacy, I am fine with that, provided companies like King stop collecting data.
I am security and privacy first; convenience and "free" are about number 1,000,000,000,000,000 on my list of priorities. If a lot of free sites die, that is the price we pay for better data security and privacy. I am fine with that.
This has nothing to do with me... if more people were like me, Facebook, Twitter, etc. would not exist at all, and Google would have a massively different business model, more like when they started than what they have become.
I do, however, want ownership of my data, the right to demand these companies tell me what they collect on me (often without my permission; see Facebook's shadow profiles on people who do not have accounts), and the right to demand they delete said data.
If you don't put the information out there, then they won't have it. It's a simple cause and effect relationship. Not giving up this information will likely cause you to not be able to use some services, however you have no inherent right to access services run by private parties. They are offered to you under certain conditions, and if you choose not to comply with those conditions, you are free to not use the service. With regard to "shadow profiles," simply use incognito mode if you are this worried about it.
Let's be clear that this law won't pass, certainly not as it is written. In the US, it's perfectly legal for websites to track your behavior. Should you object to this, you have a simple remedy: use incognito mode.
So the app developer has to be able to demonstrate they followed some form of best practice with regard to user data.
I think you're downplaying the requirements of this law. You should read it, it's pretty onerous and carries decades in prison with it - even GDPR didn't go that far.
One interesting caveat, however, is that at least as written, I can't find anything imposing penalties for simply not filing the reports this law claims to require after all of the expensive audits etc it wants. It only imposes penalties for lying on the reports. I'm not sure if that was an oversight on the author's part or if that's intentional though. Any final version would likely "fix" that issue.
Again, I believe this is simply false. The provision carrying "decades in prison" applies only to companies making over a billion dollars in revenue, and only in the very limited case where a particular officer of the company knowingly mis-certifies a report to the FTC.
The plain language of the law says that your interpretation is not correct. The criminal provisions apply to companies with over $1 billion in revenue or those that have 1M or more users. That would expose a much larger range of independent developers to decades in prison.
A closer reading indicates you seem to be correct that the criminal provisions only apply to those larger entities. However, ALL of the provisions in pages 26-33, which are significantly burdensome, still apply to ALL covered entities, which you can hit by just having 1 million user accounts.
I corrected my AND to an OR, so yes, good point, but overall it still doesn't impact the "entire startup community". There are plenty of tech businesses that don't hit $50 million in revenue AND don't have a million users. I am talking about those.
True, but the issue is that getting 1M+ installs isn't under the control of the developer. Sometimes things go viral - look at Flappy Bird. Under this law, that guy (if he were in the US) could be looking at decades in prison unless he took enough investment money to comply.
This law also uses a very broad definition of "personal information" that could possibly include IP addresses. So it does have an effect on the entire community, in the sense that every startup hopes to surpass 1M users, and this law will punish them for it.
I don't think it will pass as is, but if it does, this is truly a "sky is falling" moment for US startups. Because it effectively limits non-VC backed startups to less than 1M users, it also makes sure that there will never be a competitor to Facebook or Google. Like GDPR, it locks in entrenched competitors.
Flappy Bird might have a million users, but its developer could opt not to collect information on them. If Apple/Google has that information, it's their issue, not the app developer's.
No, he couldn't. I think you need to read the draft more carefully. Flappy Bird, in the scenario you describe, is explicitly exempt from imprisonment under this proposal.
That seems to be an entirely incorrect interpretation. Any app with more than 1 million users would fall under this law. You're simply reading it wrong, as the OP of this thread initially did. Any entity with personal information - as that term is (very broadly) defined in this document - on 1 million or more users is fully exposed to its civil and criminal penalties. This includes developers that just get lucky and get 1 million or more installs, and who have no way to pay for compliance.
No, I think you're confused. Having 1MM users makes you a "Covered Entity" in this draft. But "Covered Entities" aren't required to file data protection reports to the FTC until they make $1B in revenue or have 50MM users.† And, again: the "decades of imprisonment" 'downandout is talking about refers only to the crime of deliberately misreporting those data protection reports. It is not the case that any failure to comply with this law has prison time attached.
Happy to be wrong about this; if I am, please offer a cite.
You are both right and wrong. Flappy Bird indeed had over 50MM users (in fact it had over 100MM users), and therefore its owner would have criminal liability under this law regardless of revenue. However you are right that the lower limit excludes people from criminal penalties. Having just 1MM user accounts still exposes them to the full brunt of the civil penalties available under this law that could easily bankrupt them. So if you have between 1MM and 50MM users, you won't go to prison, you'll just be broke.
I hope it passes. If there are significant problems, updates and changes can always be made. As for your comparison with the GDPR, I think it's way too early to start drawing conclusions.
The way GDPR hurt smaller companies wasn't so much that the regulation directly applied to smaller companies, but that larger companies, due to the liability the regulation presented, no longer felt comfortable working with smaller vendors who they felt couldn't provide assurance of GDPR compliance. External vendors became risky enough that everyone reduced the number of vendors and partners they wanted to work with (either directly or indirectly), which had the cascading impact of smaller companies being cut out of the ad ecosystem and Google becoming much more dominant.
Now, I'm not yet familiar with this draft nor with how it's likely to evolve from this point on, but you have to pick one approach or the other. You can make the regulation overarching enough that partners' lack of compliance affects your own compliance efforts, in which case smaller players are going to have a hard time competing and will be cut out of the ecosystem by legit big companies who have more to lose than you do. If you don't do that and it becomes relatively easy to outsource data-related liability to your partners, there will be a loophole where shady data collection and processing activities tend to aggregate and move toward entities that are either exempt from regulation or at least willing to take on liability for their customers for short-term gains. The latter is distinctly worse than the status quo because you're making the problem worse. The former is arguably bad for the startup ecosystem.
My understanding is that smaller players are, in the aggregate, much worse about pretty much everything, and the effectiveness of GDPR-style regulations depends on the impact they have on those smaller players, who are much further from compliance than the big tech companies, which were for the most part never too far from compliance and whose bottom line never depended on any of the alleged shady practices. Most people aren't aware of smaller ad-tech players or data vendors, so their (and their customers' or partners') wrongdoing is often blamed on the big tech companies, which are much more visible.
To a large extent, what happened in the ad ecosystem is that smaller shadier ad tech companies forced everyone else to become shadier - if they are out there promising marketers more data, better tracking and more accurate measurements and they are accomplishing that through questionable practices, that still raises the bar for what marketers expect and they are able to force their way into integration with other platforms or at least force everyone to do similar things. Ultimately, there's a trade-off between transparency and measurements for marketers and privacy and a world dominated by lots of small players who don't trust one another is one where marketers are forced to require proof that their money is being well-spent, which means more privacy-defeating tracking and measurements.
This is an extremely frustrating headline. The draft bill we're discussing does not appear to establish "prison time" for "data privacy violations".
The bill "covers" any entity with over 1MM users (Flappy Bird) or $50MM in revenue over 3 years (most mid-sized startups). It creates a compliance regime, violations of which can be pursued by the FTC under its "Unfair Trade Practices" authority.
A subset of Covered Entities (those with over 50MM users [very few startups] or $1B in revenue [virtually no startups]) are further obligated by this proposal to file annual Data Protection Reports. If the CEO, CISO, or Chief Privacy Officer of one of those entities deliberately certifies such a report knowing it to be false, those specific people are liable for imprisonment.
Actual failure to comply with data protection and privacy requirements are not enough to get you charged criminally in this proposal. The violation that can actually get you imprisoned in this proposal would constitute a deliberate attempt to defraud the government. You have to be a relatively big company, fail to comply with the requirements of this draft, and then lie about it to the FTC to end up in prison.
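To make the distinctions concrete, here is my reading of the draft's thresholds sketched as code. This is my interpretation of the discussion draft, not the bill's own text, and the function names are made up:

    # My reading of the discussion draft's thresholds -- an interpretation, not the bill's text.

    def is_covered_entity(users: int, revenue_usd: float) -> bool:
        # Covered entities are subject to the FTC compliance regime,
        # enforced under its Unfair Trade Practices authority.
        return users >= 1_000_000 or revenue_usd >= 50_000_000

    def must_file_data_protection_report(users: int, revenue_usd: float) -> bool:
        # Only the largest covered entities file annual Data Protection Reports.
        return users >= 50_000_000 or revenue_usd >= 1_000_000_000

    def faces_prison_exposure(files_report: bool, knowingly_certifies_false_report: bool) -> bool:
        # Prison time attaches to knowingly certifying a false report,
        # not to mere failure to comply with the data protection requirements.
        return files_report and knowingly_certifies_false_report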
(For what it's worth: I don't think this bill is going anywhere; it's a discussion draft by a single member of the minority party.)
> (For what it's worth: I don't think this bill is going anywhere; it's a discussion draft by a single member of the minority party.)
Ron Wyden is indeed a single member of the minority party, but he is establishing himself as one of the leading voices on these issues. Christopher Soghoian probably played a big role in drafting this bill as his Senior Advisor for Privacy and Cybersecurity.
You can also tell that from the fact that the article states that privacy regulations are backed by FAANG et al. They think an eventual bill would be so watered down, or matter so little, that they don't even mind stating up front that they support such a bill.
This is similar to a technique that I have used with my wife. She wants to know, say, if we can do this or that at some future date. I know that we won't end up doing it, so I don't put up a fight (even though I don't agree) and I actually support it fully. Later, if it looks like it might happen, I use my initial agreement to change the exact terms, and she is less likely to think I am gaming her. Or I figure out some other way out.
Curious if others think this is wrong and sneaky or smart.
I've said it before: when it comes to mass surveillance, intentionally malicious backdoors, and general societal loss of privacy, the solution should be primarily legislative, not technical.
Good example: every store with a camera uses ML to identify customers and passersby and shares this info with 3rd parties. Should we talk about how to best defeat facial and gait ML analysis, or is the answer simply criminally outlawing this practice?
Why can't I file a restraining order against big tech that states "if you identify activity and you correlate it with my identity, immediately erase it and take steps to avert similar activity data collection"? Because I feel big tech's abuse of this data poses a danger to my liberties and the free exercise of my civil rights.
Modern privacy laws are overdue as it is, and they need to be criminal, not civil.
Another one: ISPs would think twice about selling your cell-tower-correlated location data and web/DNS activity to 3rd parties if this meant jail time for the CEO. If this practice is outlawed with only fines and regulations as the penalty, the only person that can sue them is a well-resourced attorney general or a very, very wealthy person who can afford a multi-year legal battle. Unfortunately, even when big corps break the CFAA, the FBI won't even listen to civilian complaints.
It can't be criminal because corporations can't go to jail. The only punishment you can apply to a corporation is to take a little bit of its money, but not so much that it declares bankruptcy and hands all its assets to a new corporation formed by the same owners, who are protected by the limited liability structure from having to pay its debts.
That's what CEOs are for. All corporations have a structure where one person or a group of persons is accountable for decisions. For example, look at financial crimes (think Enron): executives are sent to jail all the time.
There have been a handful of prosecutions (and many hundreds of firings) for misuse of the FBI's NCIC database by law enforcement officials. For instance, ex-NYPD Sergeant Joseph Dwyer was convicted of conspiracy in federal court in 2016 for selling information from NCIC to private defense investigators.
Personally, I believe this kind of breach of trust should be prosecuted much more vigorously. But besides the fear of embarrassment, I suspect federal agencies are unwilling to compromise intelligence methods to prosecute misbehavior.
The key here is "misuse" under the FBI's guidelines, which are nowhere near as strict as they should be under a properly enforced 4th Amendment. Most of what the FBI collects in the first place should be unconstitutional for them to have if we had a properly enforced Constitution; unfortunately, we do not.
The big tech companies are edging towards complete monopolies in their spaces and a significant part of this is that they know everything and they're allowed to leverage that data. Why would you go to anybody else to advertise? Unbundling this, making advertising harder again will spread out the budget, probably push more towards publishers rather than networks. Again, not a bad thing.
It also means that new players get to compete. It's been very tough to compete with similar services on price when the incumbents can operate at zero "cost".
Not having your data traded under the table is just gravy.
I think it would be a pretty horrible thing if all these once free internet services suddenly cost money. Nobody seems to be thinking about the significant amount of people who wouldn't be able to afford monthly subscriptions to Facebook, Twitter, Reddit, etc. even if it did all add up to "only" $15/month. Poor people should not be priced out of the online spaces where modern discourse happens. Single mothers should not be priced out of getting to post baby pictures on Facebook. I think it's indicative of living in bubbles that nobody else seems to be giving this consideration any weight.
Currently, social media users are the goods being sold, so the companies running the networks don’t have much interest in offering tools and algorithms for high quality discussion. With paying customers the situation would be very different. (Not that it’s going to happen – I’m just saying that we already pay a hefty price for the “free” services.)
If I had to test that hypothesis I would consider the comments section on the NYT (a website that requires a subscription to access more than 10 articles a month). Just checking the comments section there shows that people are just as vitriolic there _even_ if they've posted on more than 10 articles that month.
I'm comfortable saying that paid networks won't in-and-of-themselves improve conversation.
What this bill might help with (or at least give law enforcement additional tools to prosecute) is preventing situations like Cambridge Analytica, where corporations with nefarious motives gather user profile information from social media in questionable ways and use it to try to manipulate people in areas well beyond mere marketing.
In other words, while I'm not sure if this will improve discourse, at least it mitigates a little bit of the questionable social media meddling that we've seen of late.
I kind of see jimmaswell's point that there is a risk of creating a "tiered Internet". But I'm not sure it's completely an either/or situation. It sounds like the bill also emphasizes more transparency about what is being done with the data and who it is sold to. As long as personal data is not shared haphazardly and in a leaky fashion, consumers might be just fine with the bargain they get.
Things like shopper loyalty programs already offer this sort of bargain: in exchange for allowing companies access to more granular shopper information, the shopper gets access to greater discounts. Loyalty programs currently are subject to various consumer protection rules. In contrast, the bargain social media has struck with users (free service in exchange for marketing data) is not well protected.
Personally, I'm fine with a little more transparency and disclosure in this fashion. I don't think this necessarily means the end of free social media per se. (I will say, though, that this is a sign, one of many, that the days of social media "moving fast and breaking things" are over.)
Given "the discourse" currently includes inflammatory crap by bots pulling people's strings, even a small monetary barrier to entry might slow that down.
They could still be freemium, or could use less targeted advertising. Would it be a catastrophe if Mark Zuckerberg only made $20B off Facebook instead of $40B?
Also, today, privacy conscious users are cut off from the main forums of discourse. Is that better?
I would expect it to drive monopolization, since you now have to have resources and competency in yet another thing. Seems like it would be harder on new entrants than on incumbents.
EDIT: since people are downvoting this, I should clarify that I’m not advocating against the policy, it just doesn’t seem intuitive to me that this policy would reduce monopolization (data privacy may well be worth the tradeoff). If I’m wrong I’d love to be corrected!
If you want to be GDPR compliant, I don't think you can do that. If you offer the "usage data gets sold" option, you also have to make that fully opt-in _and_ not deny the service if they don't opt in.
Under GDPR, if your data processing is based on user consent, then that consent must be given freely. That means you should provide equivalent service, regardless of consent being given. Providing one service for free and another for a fee is not equivalent, so I'd argue that such consent would be invalid, as it was not freely given.
However, consent is only one of several conditions for processing data. Another would be "legitimate interests" of the business. If you choose to process data based on it, then you do not need consent at all. In such case you can provide separate services: free with data sharing, and paid without data sharing.
The trick here is that you actually need to be able to prove that "legitimate interests" of your business require collection and processing of that data. And additionally such interests are overridden by "fundamental rights and freedoms" of users. Which is probably why most businesses went the consent route, as it appears to be more clear-cut.
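If it helps, here's a rough sketch of how one might record the lawful basis per processing purpose. GDPR doesn't prescribe any particular data structure; the names and entries below are made up purely for illustration:

    from dataclasses import dataclass
    from enum import Enum
    from typing import List

    class LawfulBasis(Enum):
        CONSENT = "consent"                            # must be freely given, specific, informed
        LEGITIMATE_INTERESTS = "legitimate interests"  # must be balanced against users' rights

    @dataclass
    class ProcessingPurpose:
        name: str
        basis: LawfulBasis
        data_categories: List[str]
        retention_days: int

    # Hypothetical entries, one per basis discussed above
    purposes = [
        ProcessingPurpose("account login", LawfulBasis.LEGITIMATE_INTERESTS, ["email address"], 730),
        ProcessingPurpose("ad personalization", LawfulBasis.CONSENT, ["browsing history"], 90),
    ]

The point is simply that consent and legitimate interests are tracked per purpose, not per user or per product as a whole.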
Thanks a lot for the insight; I wasn't familiar with the GDPR logic. I think it'd be very hard to make a business case for processing only non-paying users' data, so it's pretty easy to understand the consent option now.
Twitter wouldn’t need to pretend that it didn’t know that a substantial part of its user base is fraud-bots if it just charged some nominal fees for something other than advertising.
$50bn/year would be more revenue than FB had in 2017, though of course the majority of their 2.23bn monthly users would disappear if asked for even $0.01/mo.
It is an interesting thought experiment, though, to consider what facebook would look like if its revenue was derived from convincing users that it was worth a monthly subscription.
I remember when Facebook was still figuring itself out, back in the day: users could purchase digital gimmicks directly there, to send to other users. Some "pokes" or emojis or something like that.
I topped up my FB balance with like $2 or $5, in ?2006?. When Facebook (quickly) discontinued that silliness, my balance silently disappeared. MARK ZUCKERBERG, YOU OWE ME MONEY!
Running Facebook would become much cheaper if they could focus on being a social network instead of being an ad selling business. Right now they have to do both.
The majority of people I know have at least 2 FB accounts. Then consider bots and other things, and I'd imagine that quoted figure is at least double the number of real humans.
Which is bonkers to me personally considering how little I use it.
I'm fine with fines, but I hope prison time is restricted to intentional and malicious illicit behavior and not anything due to neglect or oversight. I'm not a fan of people being put in cages unless they're violent or have ruined people's lives intentionally.
It's not like people will be thrown in prison because their DB wasn't patched quickly enough. They have to knowingly and intentionally lie to the federal government in an annual report.
They have to knowingly and intentionally lie to the federal government in an annual report
So it's a nonstarter, since the people being prosecuted for these things will have lawyers adept at whittling down intent to only the most brazen and malicious behavior. Not only that, but Sarbanes-Oxley showed us how effective "annual report" red lines are.
"Wyden would also create a national “Do Not Track” system to stop companies from tracking internet users by sharing or selling data and targeting advertisements based on their personal information."
Why not just reform EULAs so that people have more power over what they're often blindly agreeing to? If people could easily see the details of what they're agreeing to and had more power over certain clauses, I think market forces would take the industry in a different direction, and there'd be more transparency. I don't always believe a market solution is optimal, but in this case, it seems right.
Companies will have a Chief Privacy Officer whose job is basically to provide oversight and, of course, absorb the risk. That person will probably be paid well.
I'm actually OK with that. We're always complaining that companies don't take security/privacy seriously because there's no incentive to do so. See e.g. the Equifax HN threads. Having a person in the C suite who'll end up in jail if the company seriously fucks up is, IMO, a net positive for the world.
That's exactly my hope. Only large companies benefit from such laws (including, potentially, GDPR); smaller ones get slowed down. With GDPR, many US newspaper outlets blocked access from outside the US.
Maybe. That compensation would still have to come out of the company's coffers. Either way, this law will definitely make large companies spend money - as it should, because that's what you buy back risk with.
I find the notion of a do-not-track registry disturbing. It would actually become an identifier for you that could be used to link all the disjointed information collected under that identity. Kind of the opposite of what it's intended to do.
The problematic aspect of regulation is that you don't just need to comply, you need to _convince the regulators_ that you've complied. A better approach is to create liability for certain specific failures if the company didn't follow reasonable best practices, which means you might get sued, but in that case it would be on the plaintiff to show that you didn't follow best practices, rather than the regulatory approach which makes it a burden on the company to prove to regulators (who have no incentive to keep costs of compliance down, or even limit themselves to cases where customers are in some sense harmed).
As you mentioned, Facebook, Google, and in large part Microsoft sell your attention. How do you propose they are going to pass the insurance tab on to you?
On one hand, I think this is a good thing. That is, I certainly would like to have more control over who uses my own data.
But, on the other hand, the scope of this bill has some risk of bringing about a technology winter. Most people outside of tech don't realize how much of the software they use has been indirectly subsidized by the ad and data brokering industries.
Isn't that kind of the point? I'm not anti-google/facebook in the least, and like to think that I'm knowingly trading some privacy in return for ad-supported services that have value to me. But it's tough to think of any compelling arguments for why companies making billions shouldn't be required to a) provide some minimal level of care over the data they collect, and b) disclose what they're doing with all that data. How will that bring about a tech winter?
What I mean is that it's become a major part of the US economy. Globally, this industry probably generates hundreds of billions of dollars, and those companies mostly spend their revenue on more software, more hardware, more research, more computer scientists, more computer engineers, etc. Indirectly, probably almost all of us here are partially paid from the ad-network value chain. And what about all of the open source products that have been funded by these companies?
I have been blocking advertising on my computers and phones for close to a decade now. I am ad-free. And yet the bubble persists. This industry is beyond mere logic.
How is this a bubble? Unlike previous bubbles, the current technology surge is actually funded by real value, real demand, real revenues, and gigantic profits.
"Bubble" may be wishful thinking, but can you really argue that it's not "labor mis-allocation?"
"The best minds of my generation are thinking about how to make people click ads. That sucks."
By intentionally sowing economic irrationality (i.e., beyond how irrational humans are already), advertising destroys societal value. Here I use "irrational" in the sense of "making self-harming economic decisions."
In order for society to function properly, it must be informed. If informing the public about the level of spying and data collection large corporations do in order to "subsidize" their products brings about a "technology winter," then I welcome the cold...
It would be nice if the government imposed the same requirements on its own departments for violations of privacy as it does on private companies.
Seems to me that government is eager to punish private companies for violations, while increasing its own storage of personal information on innocent people.
In this comments section, someone else said:
> [..] when it comes to mass surveillance, intentionally malicious backdoors and general societal loss of privacy, the solution should be primarily legislative, not technical.
I'd object to this, given that a legislative solution is unable to restrict government collection of private data, while a technical one isn't (case in point: cryptography).
Fortunately, this is just a "discussion draft" and I don't believe it would ever be passed as it is written. This clause would expose mom-and-pop app developers who have apps that happen to go viral and get more than 1 million installs to the same expensive, onerous requirements as an entity with $1 billion or more in revenue:
"Each covered entity that has not less than $1,000,000,000 per year in revenue and stores, shares, or uses personal information on more than 1,000,000 consumers or consumer devices or any covered entity that stores, shares, or uses personal information..."
Putting those two vastly different classes of entities under the same umbrella and exposing them to decades in prison seems like it would have a chilling effect on the startup community. You would just have to hope that your app/website doesn't get to 1 million users, otherwise you're exposed to requirements where the implementation will bankrupt a small team or independent developer.
I guess you could simply stop allowing new registrations at 999,999 people, but it seems like a bad idea to discourage businesses from growing beyond that.
Second degree murder varies from one jurisdiction to another. In California, I stand mostly corrected: the penalty is 15 years to life. At the 15-year mark they are eligible for parole.
I'm cynical: if bankers can get away with mortgage securities fraud and not go to jail, then how can a data breach at a tech company send people to jail?
Usually, drafting a bill in advance would be a bad case of premature optimization. Regulation by definition restricts freedom, which is a negative thing in itself. Drafting and maintaining regulations costs money directly and increases complexity of the legal system. Regulators aren't omniscient, they aren't able to consider the entire problem space and pinpoint every issue that needs to be proactively focused on.
When a problem is known and risk judged high enough to outweigh the costs mentioned above, a proactive bill can be introduced.
(Or, at least this is how it's supposed to work; in practice, you also need to add politics and lobbying to the mix.)
Silicon Valley has spent the last few years investing in lobbying, so I'm not holding my breath for a US GDPR. Especially not with the current administration.
Prison time for executives is the only thing that gets taken seriously. Fines to executives can just be paid by executive insurance, and unless you crank them up to 4% of global revenue like GDPR, fines to the company will likely simply become cost of doing business.
But threaten the executives with prison, and they'll suddenly make sure that the company complies with the law. I bet e.g. SOX would be taken a lot less seriously without those teeth.
Besides that, I don't see a reason why wilful privacy violations should not be met with prison terms. Don't want to go to prison? Don't collect/share data that you're not allowed to collect/share.
Even for negligence, there is precedent to send people to prison, although that usually requires the negligence to result in death. But the scale at which software mistakes can cause damage is often higher than the scale at which mechanical/structural engineering does: A collapsing building kills hundreds of people. A collapsing Equifax database doesn't kill anyone directly, but affects over a hundred million people, and has the potential to ruin their lives (imagine e.g. private Facebook profiles outing people in intolerant areas - would probably lead to a larger number of deaths than most modern-day building collapses).
The 2008 crash is full of executives who did not comply with the law, and did not take the threat of prison seriously. I think to this date you can count the number in prison on two hands, and most of those were more foot soldiers than masterminds.
So why pass a law if it can't or won't be enforced? And do you realize how political prosecutors offices are? Also, laws like this become patronage to the drafters when they're out of office. "Well you REALLY should hire our compliance services."
Laws only work when cultural norms already do 99% of the heavy lifting.
> Laws only work when cultural norms already do 99% of the heavy lifting.
And this is the aspect where a lot remains to be done in the USA. Many people who buy, sell or plain steal personal data don't perceive it as something negative. Some of these people are high-level executives and get praise as "disrupting the industry", some do their job quietly, and very often their victims don't even realize the extent of the harm done to them.
They published opinions about the future performance of certain securities. Those opinions were incorrect. Lying would mean misrepresenting the present state of those securities, which they didn't do. Everyone knew they were buying subprime mortgages. They just expected them to trade like prime mortgages.
> the only difference between lying and knowingly misleading the point in time of the subject in question?
If I say “this stock will rise in value” and then it doesn’t, that isn’t lying. If I say that while knowing the CEO is committing felonies, it still wouldn’t be lying, but it would be problematic. If I say “this is a share of Apple” when it’s actually Twitter, that’s lying. Bad forecasts aren’t lies, they’re mistakes.
The rating agencies’ role in the crisis is interesting and nuanced, and it is difficult to fault anyone other than the investors who over-relied on (and arguably misinterpreted) their guidance.
If you look at every security the agencies rated AAA, they performed as expected in cash flows. The underlying mortgages were mostly garbage, but some kept paying. That meant the top tranche of those structures, the tranches that got a high rating, kept paying.
If you held to maturity, the securities performed as promised. (Cf: if the government hadn’t bailed out AIG, they may not have.) The problem was they crashed in value in the interim because people began doubting if they would perform. That was a problem for liquidity-constrained investors, which was almost everyone in 2008. Rating agencies don’t say “this bond won’t trade at 50¢ on the dollar.” They say “this bond will probably pay you back.” And in the latter assessment, they were surprisingly accurate. The problem was people took the second to mean the first.
As the Devil's Advocate, one can argue that the fraud was perpetrated at the peon level of the loan originators, who originated loans to people that could not afford them. The executives' hands were washed clean by the ratings agencies that gave out AAA ratings. Those high ratings were "reasonable" because the loans were consolidated into CDOs, and the top tranche(s) of a CDO can handle a few missed payments from some of the debtors. In hindsight, many of those products probably should not have been AAA rated (especially ones that consolidated lower tranches of other CDOs). If there were a law stating that ratings agency executives go to jail when AAA rated products fail (i.e., make them strictly liable for top-rated products instead of relying on reputation alone), those executives would likely pay better attention to which products get AAA ratings. Looking back, there was no such law and the ratings agencies were free to give out improper ratings backed by their reputations.
Skipping back to private consumer data, a law that clearly states that executives go to jail when there is a breach of private consumer data would likely increase the chances that the executives will pay better attention to what data is collected, how data is stored, and how data is protected.
Tangential topic: an interesting regulatory idea would be to have mandatory jail time for all executives (and board members) of a company of a minimum size that is successfully sued in a class action for more than X dollars. Make it strictly liable so that no direct knowledge is required to convict so as to punish executives who allow their companies to harm the public. If an executive allows their company to harm the public, then that person is not fit to serve as an executive.
That reminds me of the old system of private prosecutors, which was itself highly dysfunctional. Especially with double jeopardy, given that not having it turns the system into a clear way to harass anyone. One obvious flaw is that, trivially, the best defense strategy isn't to hire a good defense attorney - it is to hire the absolute worst prosecutor against yourself.
This is infamously seen with prosecutors who have conflicts of interest, often in cases of outrage over police misconduct: the prosecutor proceeds to present the worst possible case in what is an /unopposed/ hearing for a crime. There is a reason the saying is 'they could indict a ham sandwich' - the standard is low enough that they just need to present a bare initial case to begin the process of proving guilt officially.
That sounds somewhat naive, but on the other hand it also might help in getting all these unenforced legacy laws off the books - the ones that leave everyone, at all times, with one foot in prison if someone decided to enforce every law.
I agree with it sounding naïve. The thing is, we've tried a lot of things that attempt to be pragmatic and reasonable, with limited success: innocent people get hurt, while bad actors weasel out. I know the standard is supposed to be to let a hundred guilty people go free rather than imprison one innocent person, but given how many people are in prison, there might be more innocent people in there than we'd like to admit.
That would end up using money against law enforcement or prosecution as a proxy for lack of fairness/effectiveness, and we already know that money is a bad proxy for morality. That’s why this law has prison time.
If the blockchain could send people to prison, then we’d probably have other problems.
Fair point. Maybe we should use this technique to protect software. Software engineers should go to jail for shipping known classes of vulnerabilities. SQL injection possible? You go to jail for ten years. Someone gets root on your webserver? Fifteen years.
It reads to me like the prison time in this bill is reserved for the crime of knowingly mis-certifying annual reports to the FTC on data protection (ie, for defrauding the government) --- those reports are also only required from businesses with $1B+ of revenue. The rest of the penalties in the bill seem to take the form of liability under FTC's "Unfair Trade Practices" authority.
Journalists are much better than me at obtaining and reporting information. This story just isn't very important; it's a "discussion draft" of controversial legislation by a lone member of the minority party. It wasn't allocated many resources by Reuters.
I disagree. The current standard is that privacy and security are only as important as needed to prevent lawsuits, unless there is a law that says otherwise. That is a pretty low bar.
With this, privacy and security must be taken seriously from the start. I am of the opinion that wanton neglect that results in massive consumer harm (think the Equifax hack) should warrant prison time.
If management asks you to cut corners to ship the app, ask them to put it in writing.
There's nothing wrong with iterating quickly, but developers are responsible for making sure their systems are secure and resisting shipping applications that aren't. It is a developer's responsibility to push back on shipping broken software. If they're forced to, they need to make sure there's a paper trail that shows that. Otherwise it's indistinguishable from a developer just having done a bad job of their own accord.
> It needs punishment for the management who doesnt allow time to set things up properly
Hard to show it was management's fault, and not the result of a developer who really didn't have the AWS skills (though that's probably still the fault of management, for not verifying skillsets before hiring or giving AWS keys)
Everything we know about iteration cycles, quick access to devops resources, etc, will change when prison time becomes an option.
> Hard to show it was management's fault, and not the result of a developer...
No. A leader is ultimately responsible for everything that happens or fails to happen under his or her leadership. Full stop.
The people in charge of your hypothetical developer are the only ones with the ability to put processes in place to prevent it from happening. They are the least-cost avoider. Therefore, the power and the responsibility belong there.
Strict liability for the least-cost avoider is a sound strategy from both a moral perspective and a law & economics perspective. When proving knowledge and proximate cause are difficult, and someone is clearly in charge, and the harm is great - you place the liability on the people in charge whenever something goes wrong, and be done with it.
As a member of upper management, how would you address this then? More QA? More management of development practices? Move away from "devops" and back toward a world where there's a clear isolation between ops and development?
Over the past few years, developers have seen more and more autonomy and power. I can't imagine there's a way to avoid walking some of that back (if it can be)
> As a member of upper management, how would you address this then?
By specifying what is and what isn't allowed wrt. user data in products. By ensuring it gets included in new employee training, and communicating that exposing the company to data-related risks is a serious, fireable offense. By having internal checks and audits that monitor data risk and keep it low. I.e. basically the same things you'd address risk of financial malfeasance.
> Move away from "devops" and back toward a world where there's a clear isolation between ops and development?
It's not really about "devops" vs. dev/ops separation - it's about not moving fast and breaking other people's things. You can solve that with good professional practices, but there needs to be an incentive to adopt them (as opposed to the existing incentives to ignore them in pursuit of short-term profit).
Say you run a medical office and you installed a cheap door lock that most burglars can easily defeat. A burglar breaks into the office and steals SSNs and other private info. Should the doctor who runs the office go to prison for that?
You’d have to walk it back to who owns the medical office. If the doctor owns the office, then that is where the responsibility ends.
Doctors already have a responsibility to safeguard their patients’ privacy - e.g. interviewing minors-turned-young-adults on their own and asking whether they have permission to share the patient’s health information with their parents or guardians.
If the doctor installed the lock manually and there was a reasonable expectation to install a lock of a certain strength, then yes the doctor would be responsible.
If the doctor hired someone else - e.g. a locksmith or the property management - then the people who were hired would be responsible, so long as they were informed that they needed a particular kind of lock, they were certified to perform the installation, and there were laws on the books for each of those kinds of responsibilities.
It definitely seems onerous, burdensome, and expensive, but the costs of these kinds of security breaches have been severely discounted by those who have the power to otherwise act.
The question isn't whether the door lock was great, but whether there was a door lock in the first place and whether it was locked. If not, do you continue to operate with an unlocked door, or do you lock the damn door?
You cannot stop all criminals, but you can take reasonable actions (due diligence) to make a reasonable effort according to industry standards, and to take timely corrective action once a breach is known.
And one solution is enforcing consequences for incompetence.
If a construction company sends a bunch of untrained yahoos out with explosives, well, maybe the yahoos should have known better, but the company absolutely should have known better and I have no problem holding them liable.
I think it would be reasonable to have a law that says you need to have X level of security for certain information. Then it would be criminal to provide less than that level of security, especially just to save money. I just think it would have to somehow have exemptions for honest mistakes, and clever attackers.
The problem with that is the devil is in the details: such a law fails to account properly for changes in technology and rules out far better alternatives.
Say, for instance, there's a requirement that passwords must be encrypted with DES. That would be a downright terrible law on several levels - the first being that the best way to secure passwords is not to keep them at all, only a salted hash. And DES itself is as laughable now as requiring banks to lock their vaults with a simple warded lock - the kind skeleton keys defeat, because shaving the teeth off the key leaves nothing for the wards to catch on as it turns the lock.
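(To make the hash-not-store point concrete - this is an illustration, not anything from the bill: a minimal sketch of salted password hashing using only Python's standard library. The iteration count and function names are illustrative choices, not a recommendation.)

    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None, iterations=600_000):
        # Derive a key from the password with PBKDF2-HMAC-SHA256.
        # Only the salt, iteration count, and digest are stored; the password is not.
        salt = salt if salt is not None else os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return salt, iterations, digest

    def verify_password(password, salt, iterations, expected_digest):
        # Re-derive the digest and compare in constant time.
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return hmac.compare_digest(candidate, expected_digest)

Losing a table of salts and digests still forces password resets, but unlike reversible DES "encryption" it doesn't hand an attacker every user's plaintext password.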
Failure to disclose a significant data breach isn't just about deceit against users, it's also about defrauding investors. To borrow a phrase from Matt Levine (Bloomberg), "everything is securities fraud," including failure to disclose a data breach certain to have a material effect on the company's stock price.
Failure to disclose (or unreasonable delay in disclosing) a massive data breach by the executives of a public company is already a criminal offense! But I don't know of any cases where criminal charges have actually been brought. It's possible that assembling a securities fraud case is extremely challenging, or that US Attorneys are not keen to go after tech co. execs. If the latter is true, I don't suspect a new data privacy law would be any better enforced.
That's already sort of the case in the investment management world with trading algorithms. Axa Rosenberg comes to mind. When that case first came down, a lot of people were wary of how the relationship between the PM and the programmers would shake out. For the most part it's been okay: PMs were required to become a bit more familiar with what the tech guys were doing and allow a little more oversight of the process, so that the programmers had time for a maker/checker process rather than coding it all in as fast as they could and hoping they didn't make a mistake.
Hire good developers. Or prove the developer is at fault. Then the developer should be jailed. Developers should be responsible for deploying good code. Software engineers act like they are professionals, want to get paid like they are professionals, want to be treated like they are professionals but don’t want to take on the ethical argument. Maybe they should be required to take ethical exams like other professions.
To those who say white collar crime is out of control, I agree, but we don't get it under control by criminalizing new things. We control it through fairer and better enforcement of things that are already illegal.
I strongly suspect the only people criminally prosecuted under such a bill will be patsies, leaving the politically powerful free to continue rear-ending Americans while prosecutors continue to whistle and look the other way, occasionally rounding up (and convicting) the usual suspects to appear busy, and laughing all the way to the bank when they run for higher office and the same politically powerful underwrite their campaigns.
> I strongly suspect the only people criminally prosecuted under such a bill will be patsies
The only people that can be prosecuted under it are the chief executive officer, the chief privacy officer, and the chief information security officer.
The thing that is criminal under this bill and thus can subject them to prosecution is, despite what most news stories imply, not violating privacy.
The bill requires the company to file an annual privacy report with the FTC. The aforementioned three offices are required to provide a written certification to accompany that filing certifying that the report follows the rules of the bill.
The crime is certifying a report that they know does not follow the rules.
The annual report has to describe in detail whether the company complied with the regulations in accordance with subparagraphs (A) and (B) of section 7(b)(1), and, to the extent that the company did not comply, to list which regulations were violated and how many consumers' personal information was impacted.
7(b)(1)(A) requires the company "to establish and implement reasonable cyber security and privacy policies, practices, and procedures to protect personal information used, stored, or shared by the covered entity from improper access, disclosure, exposure, or use".
7(b)(1)(B) requires the company "to implement reasonable physical, technical, and organizational measures to ensure that technologies or products used, produced, sold, offered, or leased by the covered entity that the covered entity knows or has reason to believe store, process, or otherwise interact with personal information are built and function consistently with reasonable data protection practices".
To be criminally liable the officer has to certify the report "knowing that the annual report accompanying the statement does not comport with all the requirements set forth in this section".
In other words, to be criminally liable the officer has to lie to the FTC.
The main risk it seems to me is that the officers might be given false information from underlings, leading the officers to believe the report is accurate when it is not. If the FTC discovers this, their initial suspicion will be that the officers were the ones that lied. If the officers keep good records of where they got the information they relied on when certifying the report, they should be OK, although it will certainly be something of a hassle.
This only applies to companies with $1 billion or more in annual revenue that handle personal information on more than 1 million consumers or consumer devices, or to companies that handle personal information on more than 50 million consumers or devices.
I'd expect a CPO or CISO at such a place is paid well enough to handle this.
This bill is linked from a prominent story on the front page of Wyden's website. It's hard to see how much easier he could have made it for you to read it. I found it in less than a minute.
Yeah, I'm not going to even bat an eye about millionaires and billionaires going to jail for actual crimes and damage done to significant parts of the population. We need to start putting more millionaires and billionaires in jail for crimes instead of black teenagers for having 2 grams of weed in their pocket.
When a $30B+ corp sits on a security vulnerability for 14 months while their customers in K-12 and local government sit exposed, something has to change. Students' futures and safety will be impacted.
Indeed. Most people have absolutely no idea what prison is, how it works, who actually suffers, the lasting secondary effects, and how little it changes anything.
Throwing around prison lightly is dangerous. For some reason data privacy gets people emotionally charged instead of thinking clearly, and it's easy to say "lock them up", but that's exactly the same attitude that led to such heavy criminalization of other things that now takes up resources, fills up courts, wastes lives, and accomplishes almost nothing.
It should be heavy per leaked datum fines like HIPAA. This would transform huge troves of personal data from assets to liabilities, which is what we need.
I wonder if the process could be modeled after the recording industry, where the owner of the information can sue for damages that are set at a pre-determined amount per violation.
agreed, but at the same time, we send small time druggies to prison for years, which is also madness.
these are all valuable humans, despite the othering identity politics of law and order rhetoric, and our focus should be bringing them back into the fold, not harshly and debilitatingly punishing them. very few people are so far gone that (long) prison terms make sense.
with that said, it'd be an injustice if we didn't also send privacy violators to prison, as it would give them an unfair advantage in the eyes of the law. above all, the legal system should strive evermore toward fairness. obviously the best solution is not to send any of these folks to prison.
ideally, privacy violators should have to face the consequences of their actions as part of their punishment (like personally making whole each victim, as an extreme example).
My pet theory is that this is a shot across the bow of BigCorp: telling them change is coming. Maybe this won’t pass, but be prepared to be held more accountable.
The FTC already has quite a lot of authority over privacy regulations (in the financial sector at least) and does next to nothing with that authority. They spend zero time affirmatively looking into regulatory compliance and only take (very limited) action after receiving numerous reports about egregious violations. I suspect this is far more of a "look at how much we're helping consumers" measure that will add some teeth to the few enforcement actions they do pursue, but won't do much by way of removing their head-in-the-sand approach to consumer privacy.
The bill defines covered entities in Sec. 2.(5)(A) and 2.(5)(B). In particular, companies with less than $50,000,000 in gross receipts and information on fewer than 1,000,000 customers are not covered by this legislation.
And even if those apply to your local coffee shop or whatever, Sec. 2(5)(B)(iii) further limits the definition of covered entity so that businesses that do not provide 3rd party access to information are not covered.
So Starbucks and other huge coffee chains/retail shops are the only organizations that would have to re-evaluate data collection from their public Wifi hotspots, and even then might be exempt depending on what they are collecting and how they are using that information. And, I should point out, these companies will need privacy experts on staff anyways, so this provision is highly unlikely to cause them to shutter their in-store Wifi networks...
Additionally, some of the more onerous requirements only apply to a subset of covered entities with yet larger gross receipts and yet larger numbers of tracked consumers.
But, unequivocally, your locally owned mom & pop coffee shop is excluded from consideration under this provision multiple times over.
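(To make the thresholds discussed above concrete, here is a rough sketch of the coverage logic in Python. It is a simplification of the comments in this thread, not the statutory text, and it ignores the Sec. 2(5)(B)(iii) third-party-access carve-out mentioned above.)

    def is_covered_entity(gross_receipts, num_consumers):
        # Per the summary above: below both the $50M gross-receipts floor and the
        # 1M-consumer floor, the business is not a covered entity at all.
        return gross_receipts >= 50_000_000 or num_consumers >= 1_000_000

    def must_file_certified_report(gross_receipts, num_consumers):
        # The criminal certification provisions discussed elsewhere in the thread
        # only kick in for a much larger subset: $1B+ revenue handling data on
        # 1M+ consumers or devices, or data on 50M+ consumers or devices.
        return (gross_receipts >= 1_000_000_000 and num_consumers > 1_000_000) \
            or num_consumers > 50_000_000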
Unless inflation happens...
This only applies to big businesses now, but in 25 years it will start affecting medium-sized firms, and eventually it will hit small businesses. This will create a morass of bureaucratic regulations stifling entrepreneurship...
Granting the assumption of monotonically increasing inflation at a wild rate, this is still only true ceteris paribus. I can't imagine inflation that would turn a small coffee shop chain into a $50M/year business (customer floor requirement notwithstanding) happening in a vacuum.
What he means is regulatory inflation, not the monetary kind. A few years from now the customer threshold would be amended down to a few thousand; ten years later, the thresholds would be abolished.
Regardless of their intended purpose, in a world of costly cell phone data plans, the widespread availability of public wifi has been fantastic when you need it. Would be a real bummer to lose that if nothing takes its place.
Just about the only threat that execs will take seriously is a posse of well-armed men. I’m personally not against dispensing some long overdue vigilante justice, but I wonder whether we’d know when to stop?
I could see this seriously effing with software jobs. Even if a company intends to keep their noses squeaky clean, every developer hired becomes an additional risk. Companies will try to control more of the process from the top down (and that works out great LOL) and they'll try to do more with less. Or GTFO of the country.
No, it means you shouldn't create a startup if you're not qualified to handle the data correctly. Way too many bootcamp grads are running around saving unencrypted PII in databases and using root AWS keys because of the "move fast and break things" mentality. (Obviously most startups wouldn't be affected by this law, but the principles still apply.)
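(As an illustration of the "don't store unencrypted PII" point rather than anything the bill mandates: a minimal sketch of encrypting a sensitive field before it reaches the database, using the third-party cryptography package. The function names are made up, and in a real system the key would come from a secrets manager or KMS, never be generated inline.)

    from cryptography.fernet import Fernet

    # Illustrative only: a real system would load this key from a secrets manager.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    def encrypt_pii(value):
        # Encrypt a sensitive field (e.g. an SSN) before writing it to the database.
        return fernet.encrypt(value.encode())

    def decrypt_pii(token):
        # Decrypt only at the point where the application genuinely needs plaintext.
        return fernet.decrypt(token).decode()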
That's preposterous. Healthcare startups are doing just fine managing HIPAA, which includes fines of up to $1M and prison time for violating patient privacy.
No. This will stop you from building the kind of startup that is built on a bad ethical foundation. Don’t collect data you can’t protect. It’s that simple!
Startups aren't even covered by this bill until they've gained lots of traction (1 million users and 50MM+ gross receipts). At which point, again, IFF their business is data hoarding, they will need to hire approx. one additional employee.
I'm sure startups/consultants will step in to provide regulatory compliance as a service as well, so maybe not even that.
Below is the link to the details on this bill:
https://www.wyden.senate.gov/news/press-releases/wyden-relea...