> Sure, there are multiple laws. The ones I've read all seem similar enough to me on the points people bring up.
The laws are template laws, but do occasionally differ in important ways. You've mentioned before that Texas includes a financial penalty for retaining user IDs beyond verification. You didn't mention that Texas is pretty much the only state that does this, and the majority of the other bills only allow for suing for harm and attorney's fees. Harm can be difficult to prove for information retention, and these provisions rely on individual action for enforcement.
You mention later in this comment that Utah includes provisions for ID-only verification. You don't mention that Utah is (as far as I can tell) one of the only states that offers this kind of detail, most merely mentioning that "government identification" could be used for verification.
These things matter. When we treat these bills as a single unit, we run the risk of building a composite bill that theoretically addresses every concern, even though that composite bill doesn't actually exist anywhere.
----
> Identity verification is not that mysterious.
Agreed. Do you believe that the security professionals who are intimately familiar with identity verification services and who know how the current services work are just... lying? Like, what do you think is happening here? This is not something complicated where there are a bunch of debates about how ID verification can work, we know how the ID verification services today work. And security professionals are saying there's a security risk.
Does the Texas AG know something that they don't? Is there some secret new ID verification system that only lawmakers know about? Like you say, this isn't that mysterious, ID verification online exposes users to privacy and security risks. It's straightforward, this is a known risk.
The fact is, there are no identity verification services I'm aware of that I think are secure enough to use for this level of transaction -- and every 3rd-party ID service I'm aware of works by retaining and accessing stored information about users.
The people talking about the security risks know how existing identity verification services work. They're not that complicated. They work by collecting and transmitting and cross-referencing personally identifying data, and that process is vulnerable to attack and data misuse.
----
> i.e. KBA, which is already a thing. These companies already know facts about everyone. You claim you're person X. They ask you to tell them a fact they already know. They check your answer against their database. They don't need to store anything you tell them.
Okay, are you listening to yourself?
> They check your answer against their database.
So personally identifying information is collected and stored. And that information is linked to requests to access potentially compromising or embarrassing material, at a level of granularity where those requests, if intercepted, can be used to tie personal identities to specific access attempts. By your own admission.
I don't know, you're agreeing with me and then saying "see, that means that data doesn't have to be stored." No, you just described data getting stored and held by a 3rd party (notably, a set of 3rd parties with historically awful security that have regularly been irresponsible with those databases) and then cross-referenced with individual access requests in a way that would necessarily reveal to these data brokers which individuals were interacting with which companies.
Sure, those services don't need to store your newly uploaded ID -- they already have it! But what comfort is that? They still have the ID either way. You are describing a system that can only exist by hoovering up and retaining huge amounts of data on individuals, and you're advocating that this system should be expanded.
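To make this concrete, here's a minimal sketch of what a KBA check structurally has to look like. All names are hypothetical (this is not any real vendor's API), but the shape is unavoidable: the check only works because the verifier already holds a database of facts about you, and each check itself creates a new record linking an identity to a requesting site.

```python
# Hypothetical KBA verifier. Illustrative only -- not a real service's API.
# The point: verification requires a pre-existing PII database, and the
# act of verifying generates new linkable data about who accessed what.

PII_DATABASE = {  # held by the verifier long before you show up
    "jane.doe": {"first_car": "civic", "street_1998": "elm st"},
}

ACCESS_LOG = []  # nothing stops the verifier from keeping this


def kba_verify(user_id: str, question: str, answer: str, requesting_site: str) -> bool:
    record = PII_DATABASE.get(user_id)
    if record is None:
        return False
    # The check itself produces new compromising data:
    # identity X attempted to access site Y.
    ACCESS_LOG.append((user_id, requesting_site))
    return record.get(question) == answer.lower()


assert kba_verify("jane.doe", "first_car", "Civic", "example-adult-site.example")
assert ACCESS_LOG == [("jane.doe", "example-adult-site.example")]
```

Even in the best case where the submitted answer is discarded, the stored database and the access log are both still there.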
And while we're on this subject, none of the laws I've read ban retaining records of this access or selling information about which individuals' identities are verified, even though that could be compromising or personal information. More PII and data is created during this process than just the ID you transmit, and I don't think a single law that I've read addresses that fact. But sure, the data broker that already has your ID won't store the image you sent them. That'll be a huge comfort to Texas users when those sites get hacked and leak access information about which users had their IDs verified for which services.
What you're describing is not a privacy-respecting system.
----
> The Utah law also allows the user to present a "data file from a state agency or an authorized agent of a state agency that contains all of the data elements visible on the face and back of a license or identification card and displays the current status of the license or identification card."
I avoided pushing this point too hard before, but reminder that there is no requirement in any of the laws I've read for state agents or authorized agents of the state to delete records of that request or to avoid linking those requests to individual services. The laws as written do not block government agencies from using this information to build detailed records of who accesses which services.
> No need for the site to save anything. Just check the signature and age.
This would not pass a check for fake IDs. Nor would it prevent shared IDs. The laws I've read provide no guarantee that a system that was trivially bypassed would be sufficient to ward off State action. Again with the ambiguity about what "reasonable" means, which is a major problem in these bills. "Don't violate privacy, but it has to work." Well, if all you're doing is OCR on a license and you're not cross-referencing that data or storing information about attempts, that is not a system that is hard to bypass.
Also as I mentioned above, there isn't just one law. Other laws do not go into this level of detail about what kinds of IDs are accepted or how they could be verified. Great that Utah does (although Utah's example is not sufficient to address concerns) -- that just leaves all of the other bills.
> I don't see what makes porn sites unique vs. any other e-commerce business that requires customers to identify themselves wrt. security.
Multiple things:
A) not all porn sites are e-commerce businesses, and not all platforms affected by these bills are porn sites. These bills are not typically restricted to commercial transactions -- merely accessing commercial sites requires verification, even without a business relationship.
B) e-commerce businesses with traditional verification requirements typically do not allow for anonymous usage in the first place. Many of them have extensive "know your customer" rules and are not concerned with protecting the privacy of their users -- quite the opposite, many of them are required to retain information about their users.
C) Security-wise they're not that different, and the criticism of these bills directly extends from knowledge about the security risks and bad practices of many of those e-commerce sites. Whether or not you understand the security implications, I promise you the organizations and security experts that are pushing back on these bills already understand that Flowroute exists.
Note that the theoretical instant, private identification that you seem to be proposing sites will implement doesn't exist for the companies that are relying on this verification today. Once again, I'm left pointing out that you're describing a happy-path scenario that isn't the case for any online identification system I can find. As far as I can tell, these services all store data about their users' individual identities.
----
> Also many grocery stores do scan IDs when you hand them to the cashier. Who knows what they're doing with that info. Wouldn't surprise me if they retain and sell it.
Shouldn't you check up on that before advocating that Internet ID verification is fine because it's just like local verification? Personally, before comparing digital ID verification to local ID verification, I'd want to make sure local verification isn't retaining and selling all of your data, because otherwise the comparison looks awful. Have you checked whether security professionals have also raised alarms about local storage of ID information? Because... they have, for the exact same reasons :)
Local ID verification ideally should not involve scanning an ID, and the fact that it sometimes does anyway is worrisome. It doesn't bode well for expanded digital ID verification.
If your point is "local verification doesn't require sending information to multiple parties across the Internet and yet companies still do it anyway, and we still don't know what's happening to your data in that scenario" then... I mean, you have to understand that's not something that is likely to make anybody feel more charitable to your argument, right? That's not something that makes online ID verification seem like a good idea.
----
Once again, I'll repeat:
- Texas's own language refers to these systems as storing user information.
- There are no ID verification systems that I'm aware of for online services that work without maintaining and storing information about users.
- Addressing long-term retention of submitted information is not sufficient to address the privacy and security concerns that researchers have brought up.
- None of the bills I've read are clear that an unverifiable zero-retention policy would be sufficient to avoid liability; this seems to be something you're just reading into the text as an assumption of good will.
What you're suggesting above about retention practices and the ability of ID verification services to do this without storing customer data isn't true -- but even if it was true (which it's not) it changes nothing. Regular transmission of this kind of information is dangerous, users should not be trained to submit this kind of information casually, especially not to sites that they don't have business relationships with. The transmission and collection of this information exposes users to risks to both privacy and security.
I don't think security professionals are lying. I think "security professional" is a meaningless descriptor like "thought leader" that one applies to themselves, and they shouldn't be given any specific credibility.
At the end of the day, I agree we should have stronger data protection and retention regulations, federally even. That's an orthogonal issue to whether adult services online should require some validation that the customer is an adult. It's not the first solution I'd reach for (I'd prefer requiring metadata to make client filtering easier), but the more I think about it, the more reasonable it seems. No one throws a fit when Instacart scans your ID for alcohol orders. Buying a gun online has even more stringent requirements where you need to go visit an FFL to pick up. Likewise in my area, marijuana is legal (modulo federal illegality), but delivery is not; you basically can't buy it online.
I don't see why porn is special here. The law banned distribution to minors long before the web existed. By default, sites (commercial ones at the very least) should be criminally liable for breaking the law if they distribute to minors, just like in-person stores are. They should be proposing systems that they believe are reasonable to meet their obligation, but they are not. Instead, they've gone from at least requiring credit cards to... absolutely nothing. They've frankly brought this on themselves.
The obvious elephant in the room to me is that none of this would even be controversial if sites hadn't moved to an ad-supported model. If you're paying for it, of course they need to know who you are for billing. Again, the more I think about it, the more reasonable it seems to me that if you're going to have that business model, then fine, but you need to at least do the checks you would've otherwise done during billing.
So perhaps the issue is
> sites that they don't have business relationships with
is simply not a good model. If a business doesn't want to establish itself as credible to its customers such that they can trust it to professionally handle their information, then maybe it shouldn't be in an adult-restricted industry where it needs to handle that information. If they don't want to handle that information, perhaps they can propose a system where they don't need to (I've commented elsewhere on HN[0] about an oauth-like system where the government could provide age gate tokens without knowing who the token is being issued to, or even whether the age required is over 18 or over 21. It's not that complicated. Why do we have no one in these industries making such a proposal to lawmakers? They've had 30 years to do it.).
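For what it's worth, the cryptographic building block for that kind of unlinkable age token already exists: blind signatures. A toy sketch of the flow, using textbook RSA with tiny demo parameters (NOT secure, purely illustrative -- a real deployment would use something like the RSA blind signature scheme standardized in RFC 9474):

```python
# Toy RSA blind-signature age token. The state signs a token without
# ever seeing it, so it can't link the token back to the person;
# sites verify with the state's public key and learn nothing else.
# Parameters are textbook-tiny demo values -- illustration only.

n, e, d = 3233, 17, 2753  # state's keypair: public (n, e), private d

# 1. User generates a token value and blinds it with random factor r.
token = 42                      # the "over 18" attestation value
r = 7                           # blinding factor, coprime to n
blinded = (token * pow(r, e, n)) % n

# 2. State signs the blinded value after checking the user's age
#    however it likes. It sees only `blinded`, never `token`.
blind_sig = pow(blinded, d, n)

# 3. User unblinds. The state cannot connect `sig` to the issuance.
sig = (blind_sig * pow(r, -1, n)) % n  # pow(r, -1, n): Python 3.8+

# 4. Any site verifies with the public key alone -- no identity attached.
assert pow(sig, e, n) == token
```

Whether lawmakers would accept such a scheme is a separate question, but "technically impossible" isn't the obstacle.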
> I don't think security professionals are lying. I think "security professional" is a meaningless descriptor like "thought leader" that one applies to themselves
You don't believe that people who study security for a living might know more about it? Certainly software security experts should be given more credibility about software security than politicians should be given. I'm not sure I've ever run into the view before that security research is a pseudoscience.
> At the end of the day, I agree we should have stronger data protection and retention regulations, federally even. That's an orthogonal issue to whether adult services online should require some validation that the customer is an adult.
In what way is that orthogonal? The lack of data protection and retention regulations is a big part of why this stuff is dangerous. This is a little silly, you agree that the existing standards and services are not sufficient, but you don't think that's relevant to whether or not their use should be massively expanded under the direction of the government?
Of course it's relevant.
----
> No one throws a fit when Instacart scans your ID for alcohol orders. Buying a gun online has even more stringent requirements where you need to go visit an FFL to pick up.
I already talked about this: not all of these sites are transactional. Also, note that porn is tied into normal political and social speech in a way that means it could never be made fully transactional and commercial without restricting a large portion of that speech.
Also, people do throw a fit about data privacy and about at the very least improving security for ID verification. To your point:
> I don't see why porn is special here.
It's not. These debates happen in other areas too: attempts to clamp down on hate speech, propaganda, to restrict information flow across state lines, to track copyright violations, to access E2EE messaging, etc, etc. What you're seeing is completely normal consumer advocacy for privacy, security, and free speech, but because the US is so conditioned to think that porn is some kind of special category, advocacy that probably wouldn't make you blink in other situations feels weird to you now. You may not be aware of the debates in other areas of customer tracking, but porn jumps out at you and you are aware of this specific debate... because everyone thinks porn is some special category.
Data scientists and security experts ruining a legislator's day by pointing out that the systems they imagine actually have huge security holes is normal. It only feels different to you because this time it's about porn.
----
> They should be proposing systems that they believe are reasonable to meet their obligation, but they are not.
So here's an interesting thing to research: they are. Every single one of these sites labels content in a way that allows it to be intercepted and blocked at the router layer or by parental controls on devices. They all self-identify, even in areas where they're not legally required to.
If you think that porn companies are sitting around and doing nothing, you really have not done much research in this space. They have made plenty of proposals about how to make filtering easier, but states have largely ignored those proposals because:
A) they would require pushing companies like Apple to develop competent parental controls, and that doesn't poll as well among Conservative voters,
B) the majority of these laws have backing from explicitly anti-porn advocacy groups who do not want parental controls, they want to ban porn.
----
> The obvious elephant in the room to me is that none of this would even be controversial if sites hadn't moved to an ad-supported model.
I would advise doing more research on this, there are controversies about this kind of ID requirement even for purely transactional data because it does expose people to privacy risks. I will also note that pushing an entire category of speech to require a transactional relationship would very likely be a violation of the 1st Amendment.
----
You have a couple of accusations here that are just straight-up false. Pornhub specifically called out the lack of government-backed ID services as a partial reason for their opposition to these bills, and has lobbied for states to build such a service. More importantly, Pornhub already does what you're advocating is your preferred solution: "I'd prefer requiring metadata to make client filtering easier".
Pornhub is pushing out metadata today. There's a full-on standard for it and everything (https://www.rtalabel.org/).
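And acting on that metadata requires zero information about the user. The RTA label is just a fixed string served in a meta tag (and, as I understand the rtalabel.org docs, optionally as an HTTP "Rating" header -- treat the exact delivery details here as my reading, not gospel). A filter check is a sketch like this:

```python
# Minimal sketch of a client/router-side filter acting on the RTA label.
# The label string is published by rtalabel.org; the "Rating" header name
# and meta-tag delivery are my reading of their docs (assumptions).
# Note: nothing about the user is collected or transmitted anywhere.

RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"


def should_block(headers: dict, html: str) -> bool:
    """Return True if the page self-identifies as adult content."""
    if headers.get("Rating") == RTA_LABEL:
        return True
    # Sites may also embed it as <meta name="RATING" content="...">
    return RTA_LABEL in html


assert should_block({"Rating": RTA_LABEL}, "")
assert should_block({}, '<meta name="RATING" content="RTA-5042-1996-1400-1577-RTA">')
assert not should_block({}, "<html><body>cat pictures</body></html>")
```

That's the entire mechanism. The hard part was never the sites emitting the label; it's the platforms shipping controls that act on it.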
What is actually happening is that Apple, Android, and routers don't provide sufficient parental controls to act on that metadata. But commercial porn sites are not in charge of what Apple and Android build. The argument these commercial sites have made is not that they should be allowed to market harmful content to children, but that states ignoring a workable solution in favor of a less practical one with greater security, privacy, and free speech costs is not a good use of legislation.
If this is the first time you're hearing about this -- I mean, I'm not surprised, like I mentioned above porn is a weird category of protected speech and as a result coverage gets weird around it. And the lobbying groups behind these bills have worked very hard to act as if porn sites are simply throwing content online or even deliberately targeting children. So it's not unexpected that you largely get one side of the story. Texas doesn't advertise that it outright ignored calls to legislate better parental controls. Louisiana doesn't advertise that there exists a labeling standard in use today that they are actively ignoring. I don't expect someone just now looking at the text of these bills to know that.
But knowing that now, you should do some deeper research on this and figure out what the status quo actually is. Unlike buying alcohol, porn is very directly speech and has been affirmed by the Supreme Court to be speech on multiple occasions. Porn is not always directly transactional content, it gets mixed into normal speech -- and a proposal to get rid of an entire monetization category is extreme. But people feel comfortable proposing restrictions that wouldn't really fly in any other speech category, because they're conditioned to believe that porn is something special and that porn companies are just sitting around happily showing dicks to kids or something.
----
You've advocated elsewhere that router-level blocks are sufficient to handle blocking for VPNs, foreign sites, etc... What porn companies are (and have been) proposing is exactly what you want. Require routers to offer parental controls that can act on the metadata that porn companies will happily attach to the content they serve. Legislate that this metadata must be attached to pornographic content. This would not only be a more private and secure solution, it would also be more effective. It would do a better job of protecting kids than a random OCR check on a drivers license.
Now that you know that, do you find it at all odd that all of these states have completely ignored that proposal and are instead pushing a solution that has obvious privacy and security risks and that is observably pushing websites to block their states? Does knowing this information help you understand what I mean when I say that these laws are less about protecting kids and more about banning porn?
They're orthogonal issues because you can address one, the other, neither, or both totally independently. We can have very strict data protection laws and also have strict id checking for regulated industries.
I'm well aware of RTA labels. I've pointed them out on similar threads. They're also not ideal (given that they're basically "yes/no" which will necessarily lead to arguments about what should be classified), but like I said, I'm inclined to prefer that kind of approach. Something like mandate commercial sites and commercial browsers (which is every major one) to implement it or something like it, with criminal liability for commercial porn sites that fail to do so.
That said, not all sites do implement it. e.g. reddit and redgifs do not, and reddit also hosts forums specifically targeted at children. Those two sites are very high traffic and are completely negligent here. Also, content can't be blocked at the router level if it's using TLS, which of course almost all of these sites do (you could potentially do SNI sniffing against a host blacklist, but even that will go away with ECH). Perhaps the "evil bit" could be used for that purpose at the IP layer so it works with TLS.
Generally, the more I think about it, it does seem "reasonable" to just say businesses dealing in adult restricted materials are liable for determining their customer is an adult (to a standard that a reasonable person would believe), and websites are not an exception unless it was e.g. a defacement. Let them figure out how to do it, and if the government can collect evidence that they failed to do so, they can charge them with distribution to minors. The sites can come up with their own system according to their risk tolerance. Basically, just raise (or introduce) the bar for negligence.
Alcohol distributors don't seem to have a problem doing this. Perhaps porn distributors can ask them for help.
> They're orthogonal issues because you can address one [...] independently. We can have very strict data protection laws and also have strict id checking for regulated industries.
We can not have secure ID checking without data protection laws. They're not orthogonal. This is the same conversation that comes up every time the government tries to mandate secure backdoors into encryption. You can't massively expand usage of an insecure technology and when it's pointed out that the current technology is insecure say, "well that's a separate issue, we don't have to worry about that right now." It's not a separate issue, you're massively expanding a technology that is currently insecure, just own it.
> I'm well aware of RTA labels.
Then why did you claim that porn industries weren't doing anything? I mean, I'm trying to be charitable here, it would be very reasonable for you not to be aware of those efforts, most people aren't aware of them. But you're saying you were?
You're telling me that when you said:
> They should be proposing systems that they believe are reasonable to meet their obligation, but they are not. Instead, they've gone from at least requiring credit cards to... absolutely nothing.
You knew that this was false -- like literally just straight-up wrong? When you commented that porn industries had 30 years to propose government ID systems to avoid handling this data themselves and hadn't... you knew that porn industries had actually proposed and lobbied for government ID systems?
So why did you say otherwise?
> They're also not ideal (given that they're basically "yes/no"
Come on, this is obviously not an issue for you because if it was, you wouldn't be supporting the current bills, which all implement binary "yes/no" classifications. We could debate whether or not broad classifications that refuse to distinguish between types of porn are good or bad, but you are currently arguing in favor of a binary classification for the purposes of liability, so I don't think that discussion would be a good use of time. Obviously you're OK with binary classification for age-verification, so this is not a real objection.
> reddit and redgifs do not
It's not clear that Reddit is liable under all of the laws proposed. Reddit hasn't pulled out of any of these states or added ID checks. Your argument against labeling rests on a site whose content isn't addressed under the proposed laws.
Also, if you don't like that Reddit doesn't currently use the unlegislated standard... legislate it. Pornhub isn't lobbying to block labeling laws. You can require Reddit to use a labeling standard.
> Also content can't be blocked at the router level if it's using TLS
My sibling in Christ, you proposed blocking sites and VPNs at the router level. This was your solution to foreign porn sites that aren't covered by these laws. Now suddenly that's not sufficient?
Regardless, we use per-page metadata all over the place on platforms like iOS and Android to enable functionality based on page contents -- from device support to PWA indicators. There is no reason why these platforms can't work those same indicators into content blocking tools. And the presence of headers on landing pages for sites like Pornhub can be used at the network level to block these sites entirely, which again... you proposed doing!
Blocking per-page content is just a bonus; the current bills don't address that concern. It's a mark of the superiority of labeling that it allows a level of granularity that current bills don't.
> Alcohol distributors don't seem to have a problem doing this.
I'll repeat, porn isn't always transactional and porn is rolled into normal political and social speech in a way that prevents making it purely transactional without limiting large categories of speech. It's not the same as alcohol.
Alcohol also isn't speech. Porn is.
> Generally the more I think about it, it does seem "reasonable" to just say businesses dealing in adult restricted materials are liable...
You're allowed to think it's reasonable. The problem is if you spread misinformation while defending that position. To summarize where this thread has gone, you've suggested:
- People shouldn't worry about data collection because the laws prevent it. This is false, many of the laws have limited liability and recourse for data collection, and most only target retention of ID information, not aggregate data collection about users' browsing habits. Additionally, none of the laws limit government collection of data.
- The laws are close enough to each other that they can be read interchangeably. This is false, although the laws are templates of each other they often differ on details, and the presence of a provision in one bill does not solve problems for other bills.
- Information does not need to be stored or collected to implement 3rd-party ID checks. This is false, there are no 3rd-party ID checking services that I'm aware of that do not collect and store information about users.
- Retention laws would solve the security problems. This is false and a misrepresentation of security professionals' criticism of the bills. Retention is one part of the security and privacy risk.
- Porn companies have not proposed any alternatives. This is false, they have -- both ID systems and labeling systems. What's wild about this one is that you're suggesting you knew that this was false when you said it, which is not something I would have suggested.
- Porn verification is identical to alcohol/gun verification. This is false, most porn consumption online is not via a transactional relationship.
----
Like I said, I don't care if you support the bills, that's fine. It's a free country, you can support whatever you want. Just don't spread misinformation while you're doing so.
You can of course have secure id checking without data protection laws: the companies doing the check can just not store information about the check, regardless of whether they are required to delete it. As long as they are not required to retain it, which I have not seen anywhere, they certainly can choose not to. Here though, the laws I've looked at all specify that they must not retain it. They could have higher penalties, but they already explicitly forbid it.
Like I said several times and have said in other similar threads, I'm inclined to think RTA headers are a "better" approach. Currently they're not consistently implemented on either end (e.g. Firefox doesn't support them, sites I mentioned don't send them), but it'd be a quick win to mandate that in commercial contexts, which would include Firefox.
But you don't have to look far to find people who think the filtering problem is entirely intractable (they're in this thread). I think it's worth trying the metadata approach more with commercial mandates to implement something along those lines. I can see why people could argue that's been tried enough (filters have existed for over 20 years, and access for children is still easy), and they need something more. It's not clear that they're even wrong, though I'd like to see us try still. But the more I consider it, it really doesn't seem like that big of a deal to just do ID checks. Presumably you'd do it once to establish an account that's above the age limit. Not the end of the world.
Maybe I'm wrong about these sites' lobbying efforts. Maybe most of them have been posting on their front page big banners asking people to tell their representatives to support mandatory metadata processing/filter enablement laws. I sort of doubt it, but it could be. I do know that some major sites (e.g. reddit) don't implement the metadata or any other controls.
It's not clear what the "percent of content" in these laws means, but when I looked at dumps last year, reddit looked to be ~40% porn by posts (obviously not if you consider comments to be content for counting). It is (or was, as of last year, if dumps are accurate) pretty close to being more porn than not. Certainly for a discussion about how porn sites behave, they are a major porn site with millions of users, and they do exactly nothing to turn away minors (in fact they obviously target them) or segregate the site.
I pointed out elsewhere that routers can block common VPN protocols (e.g. ipsec or wireguard). Of course they can do almost nothing to block something going over TLS:443, and soon they won't be able to do SNI sniffing either. So network filtering of sites is not possible anymore unless they stop using TLS.
Anyway, my point about worrying about data collection and retention is that people should worry about it to the same extent they do with eBay or some small shopify-based site. They should worry about it! But they shouldn't specifically worry about porn sites. And the laws here seem to all ban retention, which is good. Perhaps they could have higher penalties, but they do ban it. Generally e-commerce sites don't have retention regulations.
It's not clear to me how governments would get any records to retain, but sure they should disallow it.
3rd parties already store data that can be used for verification. I don't see KYC laws being undone anytime soon. There's no need to record any information about a verification occurring. I'm sure companies offering KYC services who are already used to operating in regulated environments can deal with not retaining submitted information.
I don't really understand your point about "transactional" relationships. If you have a business providing a service, they can follow relevant laws for their industry. If Total Wine decided to place an unmonitored "free beer" keg out front where children could get to it, they'd almost certainly end up in legal trouble.
Or perhaps a more direct analogy would be if you opened an adult theater with an automated ticket machine so no one checked who was coming in. Or a Redbox that took cash and rented adult movies with no checks. That business would never fly in person. Why is it different online?
> the companies doing the check can just not store information about the check, regardless of whether they are required to delete it.
Seriously? By that logic the bills themselves are orthogonal to porn, since sites can just institute ID requirements without being required to do it. There are no 3rd-party ID services that have good privacy handling or refuse to retain information. And in the absence of government-sponsored alternatives (which companies have asked for) this is a de-facto requirement to use 3rd-party ID services that put customer data at risk.
> Here though, the laws I've looked at all specify that they must not retain it. They could have higher penalties, but they already explicitly forbid it.
False. I already covered this:
> many of the laws have limited liability and recourse for data collection, and most only target retention of ID information, not aggregate data collection about users' browsing habits. Additionally, none of the laws limit government collection of data.
----
> I can see why people could argue that's been tried enough (filters have existed for over 20 years, and access for children is still easy)
People can argue a lot of stuff, that doesn't make any of it correct. If someone argues we've tried mandating labeling for online porn and legislating parental controls... we haven't. They're wrong. They can argue it if they want, but they're arguing fiction.
----
> Maybe I'm wrong about these sites' lobbying efforts. Maybe most of them have been posting on their front page big banners asking people to tell their representatives to support mandatory metadata processing/filter enablement laws.
Pornhub has in fact literally placed large banners in certain states lobbying about this topic and asking customers to go to their representatives and get involved. I've never seen a response from the company to any porn bill in which they don't put forward the idea of device-based filtering. They are constantly qualifying their responses with stuff like "of course, we also want to keep kids safe, so that's why we support local filtering and labeling laws".
It is strictly inaccurate to claim that these bills are the result of inaction from porn companies, or that porn companies have not proposed alternatives. You claimed that these companies had done nothing, but in reality they literally built a standard for this. And pushed for government ID verification as an alternative to 3rd-party services, too! :)
----
> It's not clear what the "percent of content" in these laws means
ie, the laws are over-ambiguous and don't clarify liability to an acceptable degree. A lot of things aren't clear in these laws. It's not clear what "reasonable" means. It's not clear what "damages" are for data retention. It's not clear what "retention" means in these laws!
They're badly written laws.
> Certainly for a discussion about how porn sites behave, they [Reddit] are a major porn site with millions of users, and they do exactly nothing to turn away minors (in fact they obviously target them) or segregate the site.
Multiple of these laws have already taken effect in various states. Reddit requires an ID in none of those states. No politician I'm aware of has talked about suing Reddit. If you have a problem with Reddit's handling of porn, these bills aren't doing anything about it.
Because of course they aren't, no AG is going to be so foolish as to try and force an ID requirement in order to view Reddit posts. But you know what would allow blocking porn on Reddit? Labeling requirements.
You want to know who else isn't covered by these laws? Non-commercial sites -- because placing these kinds of restrictions on non-commercial hobby sites would be far more likely to raise 1st Amendment questions (not that the laws as they exist don't already raise 1st Amendment questions). But you want to know how you could legislate filtering for smaller sites without raising those questions? Labeling requirements.
----
> I pointed out elsewhere that routers can block common VPN protocols (e.g. ipsec or wireguard). Of course they can do almost nothing to block something going over TLS:443
> "But what about sites outside of US jurisdiction (e.g. Russia)?" Require ISPs to have a setting for customers to opt into blocking them.
Well tbf, Russian sites famously never use TLS ever ;)
Look, routers do block websites all the time, including encrypted ones. Sites can be blocked via IP, but the more direct way is to block using DNS. TLS doesn't stop 1.1.1.3 (Cloudflare's family-filtering resolver) from working, and even once ECH comes in, any device capable of supporting ECH is also going to support setting custom DNS servers, including a local resolver managed by a router.
But maybe you don't want to use a router, fine. Maybe DNS is too hacky for you. That doesn't mean that iOS and Android devices can't also implement this kind of blocking.
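To make the DNS point concrete: a filtering resolver never needs to look inside a TLS connection at all; it just answers name lookups differently. A toy sketch of the mechanism behind services like 1.1.1.3 or a Pi-hole (the domain names and blocklist here are made-up examples, not real policy data):

```python
# Minimal sketch of router-style DNS filtering. The blocklist and domains
# below are hypothetical; a real resolver would forward unblocked queries
# upstream instead of reading from a dict.

BLOCKLIST = {"adult-site.example", "tracker.example"}
SINKHOLE = "0.0.0.0"  # "blocked" answer returned instead of the real IP

def filtered_lookup(domain: str, upstream: dict) -> str:
    """Return a sinkhole address if the domain (or any parent domain)
    is on the blocklist; otherwise fall through to the upstream answer."""
    labels = domain.lower().rstrip(".").split(".")
    # Check "cdn.adult-site.example", then "adult-site.example", then "example"
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return SINKHOLE
    return upstream[domain]

upstream = {"news.example": "203.0.113.7"}
print(filtered_lookup("news.example", upstream))            # normal resolution
print(filtered_lookup("cdn.adult-site.example", upstream))  # sinkholed, TLS never involved
```

Note that nothing here touches the content of any connection, which is why ECH and encrypted SNI don't defeat it; the only escape is switching DNS servers, which parental controls can lock down.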
----
> And the laws here seem to all ban retention, which is good. Perhaps they could have higher penalties, but they do ban it.
See above, this is false. I covered this already.
> It's not clear to me how governments would get any records to retain, but sure they should disallow it.
But they don't disallow it, do they? :) I'd like a lot of things in a theoretical version of the legislation, but unfortunately we're talking about the legislation that exists -- and the legislation that exists does not appear to bar indexing of consumer internet habits by the government (or by private businesses).
> Generally e-commerce sites don't have retention regulations.
Any site that accepts payments has de facto retention requirements, at the very least for tax purposes. And given how ambiguous most of these laws are about enforcement and what "reasonable" means, there is a heavy incentive for sites to retain at least some user metadata even if they can't retain the actual ID documents.
Also bear in mind that 3rd-party verification necessarily requires the collection and retention of information about every single person who can be verified through that service. Whether they retain the specific documents submitted or not, this is still an expansion of user surveillance -- and of course, the laws do not clearly ban collection of metadata and identifying information about user requests outside of the ID information itself. At the very least, I think it could be argued in court that the laws don't cover that information.
----
> I don't really understand your point about "transactional" relationships.
Not all porn is part of a transaction at all. Porn isn't always something you buy; it's not like alcohol. It's not a purely commercial product, it's not always tied to accounts, it's not always a thing you buy or order. And forcing it into that category would hamper a lot of speech, because porn is intrinsically tied up with political and social speech. Particularly where user content is concerned, porn can be extremely political, and the history of porn/decency laws in the US demonstrates that over and over again. Porn cannot be reduced to a singular transaction in the vein of buying a beer -- not just because it may not involve an exchange of money, but because porn is speech: it is communicative, it happens alongside and inside protected communication. Buying a beer is not like that.
Where the Internet is concerned it is actually a good thing that people can read Reddit and Twitter anonymously without making accounts and logging in. We don't want an Internet where every single site is a walled garden that requires a user account. It is a good thing that people can set up Mastodon servers that openly federate -- something that would be practically impossible if they required ID verification in order for anyone to view posts on the service. And again, it's not as simple as saying "well, but we'd only require it for porn." If you're requiring it for porn, you are requiring it for protected political speech. The implications are the same.
What people don't really acknowledge when talking about porn is that things can be inappropriate and harmful to children and also protected political and social speech that should not be restrained between adults. It cannot be reduced to a purely transactional "I would like to buy a smutty magazine" framework.
> Or a Redbox that took cash and rented adult movies with no checks.
As a sidenote, I strongly suspect that a Redbox that took cash and rented out R-rated movies would be legal in nearly every state. Did you know that it's not illegal for a parent to take a child to an R-rated movie, even one that contains sexual content? I wouldn't advise doing so, children shouldn't watch R-rated movies, that kind of content can be very harmful to them. But nobody will arrest you for it.
Did you know that compliance with movie ratings isn't legally mandated? Movie theaters actually have no legal obligation to keep children out of R-rated movies (and certainly no requirement to ask for IDs) -- the whole thing is a completely voluntary standard. Just a fun fact.
But to your broader question:
> Why is it different online?
Because mediums affect security risks and liabilities. Because it's online. Because asking for an ID to be uploaded before you look at a Reddit post has bigger security and privacy implications and as a result bigger speech implications than asking for an ID before you physically buy a beer from a liquor store. Because they're not the same thing.
There's a lot of stuff we do online that we don't do in physical spaces. In physical spaces I don't need to encrypt every single message I hand to someone else. On the Internet, we use TLS. Because mediums affect things. They always have affected things and they always will. And this is not new, newer mediums have been affecting how we write laws and regulate communication since the founding of this country.
----
We cycle back around to my previous point: you can think these laws are reasonable, it's a free country, you can think whatever you want. My problem is not whether or not you think the laws are reasonable, my problem is that you're spreading misinformation when talking about the laws.
I'm still waiting for an explanation of why you said that porn companies have done "absolutely nothing" and had proposed no standards when you apparently knew that was straight-up false and that porn companies had in fact proposed standards and advocated for them.
You're allowed to think that online IDs are no big deal; just don't say things that are provably untrue, that's all I'm asking.
Where exactly have they put forward a proposal? I checked Pornhub's FAQ and press sections, and their parent company Aylo's site. I see nothing. What is the proposal they have? That we put some liability on sites and browser/OS vendors to implement RTA headers? Including non-commercial (e.g. FOSS) distributors? That seems like a much larger abridgement of speech, and without it, you could trivially work around the filter by e.g. running Konqueror off a flash drive.
As far as I know, RTA headers date back to IE6 and have gone mostly unimplemented (neither Firefox nor Chromium has any parental control options in its settings). It's actually hard to find any information about the standard; I only know about it from trying to figure out how the old IE parental controls dialog was supposed to work. It's almost entirely undocumented.
This article[0] claims pornhub is in favor of mandating age verification, but at the "device level" whatever that means (likely it doesn't mean anything).
So what's been proposed? To which lawmakers have they presented these proposals? Have they just gone completely ignored? Where are their press releases urging people to get their solution passed?
You said they placed large banners in certain states. Why not in all states? Or are they only placing banners after they've already had regulation passed against them?
The ruling from the 00s was based on technology at the time, and considered what seemed to be the least invasive way to feasibly do it. At that time, you could actually run a network-level filter. Conceivably your ISP could do it for you. That is almost impossible now, and will be completely impossible soon (except very coarse filters like geo-bans or protocol filters).
So your remaining options are (1) do nothing, (2) put requirements on companies/customers (and use geo-network filters for sites outside your jurisdiction), or (3) put requirements on end-software/device providers (and porn companies). There is of course precedent for (3) (the V-Chip), but it's not even clear that that's less onerous than (2). Especially since in the meantime an industry has developed that can make ID verification take a couple of seconds. I can answer some questions without presenting any documents and satisfy bank KYC regulations; maybe some of the wording is overly vague, but it seems existing commercial ID-verification systems would fit the intent of "commercially reasonable" systems. As far as I understand, laws are supposed to be vague in the sense of saying things like "reasonable" precisely to allow "reasonable" to change over time.
Note also that this new crop of laws seems to all be about commercial services. Mastodon, etc. are not in scope (unless your Mastodon instance is a commercial porn site). They also have the now-standard exceptions for things with literary/artistic/political/educational value. The "porn is speech" issue is tautologically handled by saying the laws only apply to the non-speech variety.
> "Please contact your representatives before it is too late and demand device-based verification solutions that make the internet safer while also respecting your privacy."
This is silly. The language here could not be clearer. You asked for a banner ad, they literally put a banner ad up saying, "contact your representatives and ask for device-based verification." RTA headers are a completed standard for content filtering. The fact that they're not implemented widely is because they're not legislated and have never been legislated.
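For anyone unfamiliar with how lightweight RTA labeling is: a site serves the fixed string "RTA-5042-1996-1400-1577-RTA" either as an HTTP "Rating" header or in a meta tag, and a browser, router, or parental-control filter only has to string-match it. A rough sketch of the filter side (the sample page below is invented for illustration):

```python
# Sketch of how a client-side filter could honor the RTA label.
# Real filters would parse the <meta> tag properly; a substring check
# is enough to show how simple the standard is.

RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def is_rta_labeled(headers: dict, body: str) -> bool:
    """Return True if the response declares the RTA adult-content label."""
    if RTA_LABEL in headers.get("Rating", ""):
        return True
    return RTA_LABEL in body  # crude check for the <meta name="RATING"> variant

sample = '<html><head><meta name="RATING" content="RTA-5042-1996-1400-1577-RTA"></head></html>'
print(is_rta_labeled({}, sample))           # labeled -> filter can block
print(is_rta_labeled({}, "<html></html>"))  # unlabeled -> allowed through
```

The standard asks nothing of sites beyond emitting that one string; everything else is up to the client software, which is exactly why it only works if filtering is actually implemented or legislated on the client side.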
> So what's been proposed? To which lawmakers have they presented these proposals? Have they just gone completely ignored? Where are their press releases urging people to get their solution passed?
> The video’s release coincides with a previously unreported effort by Pornhub — and its private equity owners, Ethical Capital Partners (ECP) — to convince the world’s largest tech companies to intervene in the wider debate over age restrictions for digital porn and social media. [...] In recent weeks, ECP has lobbied Apple, Google and Microsoft to jointly develop a technological standard that might turn a user’s electronic device into the proof of age necessary to access restricted online content, according to Solomon Friedman, a partner at ECP. [...] One possible version of the idea, Friedman told CNN, would be for the tech companies to securely store a person’s age information on a device and for the operating system to provide websites requesting age verification with a yes-or-no answer on the owner’s behalf — allowing sites to block underage users without ever handling anyone’s personal information. [...] “We are willing to commit whatever resources are required to work proactively with those companies, with other technical service providers and as well with government,” Friedman said.
You are wrong. Porn companies are putting effort into this. You're moving goalposts for how much effort you think is fair, but what's wild is even with you moving those goalposts, you're still wrong.
What you're right about is that we haven't seen much progress in this area. Why not? Well, from the same article:
> But it is far from clear the effort is succeeding. Friedman declined to say how, or even if, the companies have responded to Pornhub’s communications. Microsoft declined to comment for this story; Apple and Google didn’t respond to requests for comment. [...] Friedman characterized the discussions as being in “early stages,” though his other remarks implied the talks may be largely one-sided.
So companies reach out to tech companies and encourage lawmakers to pass laws, they're ignored, and then you come along and say "well, they should have said something." They did. They have been saying something. You came along and argued that these companies have said nothing, that there's been complete silence. You're wrong.
----
> That we put some liability on sites and browser/OS vendors to implement RTA headers? Including non-commercial (e.g. FOSS) distributors? That seems like a much larger abridgement on speech
Well... it's not. It's not completely free of 1st Amendment concerns, but it certainly has fewer of them. Liability at the level of distributors is a much clearer 1st Amendment problem; we literally have Supreme Court precedent on the books saying that blocking distribution of porn can be unconstitutional. Congress passed laws about communication decency that got struck down -- that's why, despite you suggesting otherwise, there is no federal ban on distributing adult material to children. We tried it, and the Supreme Court ruled it unconstitutional (https://archive.nytimes.com/www.nytimes.com/library/cyber/we...). And all of those issues still exist for these bills.
This is something you should do more research on; I glossed over some of your earlier comments about "it's already illegal to give porn to kids", but since we're talking about 1st Amendment challenges, I should point out -- it's not federally illegal to give porn to kids, and attempts to make it illegal have been struck down before.
> and without it, you could trivially work around the filter by e.g. running Konqueror off a flash drive.
If I may quote a wise commenter: "It's not perfect, but that's a silly reason not to do it."
Part of parental controls on an iPhone, Windows, or Android device could be restricting installation of 3rd-party software. And it's not clear to me that legislation of non-commercial software or platforms is even required here. All of the major browsers (Firefox included) are commercial browsers owned by for-profit businesses. And that's before even questioning whether they'd need to be regulated at all: like I said earlier, every one of these browsers already has controls for setting DNS servers through administrative policies.
If you're worried about non-commercial escape-hatches, bear in mind that the current bills you're championing only apply to commercial sites, which would not limit resharing, re-uploading, or access to sites that are operating outside of the US. I promise you that shady porn sites will still be available in Utah after this decision. You seem to believe that's an easy problem to solve, but you also seem to believe that it's an impossible problem to solve when we talk about filters, so who knows? You seem to believe a lot of incompatible things.
Nor do they limit VPNs and it is not that hard to find a non-commercial or free VPN or proxy. Every Google Fi Android phone ships with a free VPN that doesn't monitor what you're accessing. Apple has been pushing for mobile VPN support through iCloud as well, although their setup is a bit more limited and doesn't (yet) obscure your state. At some point your child will have a VPN available to them literally just because they have an iPhone and you buy iCloud.
Device-based filtering using parental controls isn't perfect, but it is a better solution. Because even ignoring the constitutional issues, the privacy issues, and the security issues -- I hate to tell you this, but there's porn on Mastodon. As you've pointed out, there's porn on Reddit. None of these laws target those sites, none of those sites have been sued.
The current laws being passed do not protect kids from porn. Objectively, factually -- we know this because the laws are in effect, and Reddit and Mastodon still serve porn in those states. This is not something that's debatable.
----
> but at the "device level" whatever that means (likely it doesn't mean anything).
Very, very obviously, it means age verification would be handled through parental controls on the device. This is not complicated.
Yes, a law would need to elaborate on what parental controls are sufficient, but that's part of writing a law. You're confused about what "device level" protections are, but have no issue with laws offering zero definition of what reasonable standards of verification are?
Nevertheless, if you're genuinely somehow confused, Pornhub itself clarifies what it means by this on its own blog (where once again, it encourages users to push for alternate ID solutions): https://www.pornhub.com/blog/age-verification-in-the-news
----
> The ruling from the 00s was based on technology at the time, and considered what seemed to be the least invasive way to feasibly do it.
Citation very much needed; rulings from the 00s are still established case law. Nobody at the Supreme Court has said, "hey y'all, just ignore those, that was back when we thought networking was easier."
Also... you can still block network requests. A general reminder that uBlock Origin, 1.1.1.3, browser-level malware blocking, and Pi-holes are all things that work today and will continue to work for the foreseeable future, even with encrypted DNS lookups.
----
> You said they placed large banners in certain states. Why not in all states? Or are they only placing banners after they've already had regulation passed against them?
You're moving goalposts. The fact is, you claimed that porn companies had made zero effort to propose alternatives. That's not correct: they have proposed alternatives. You claimed that they had never come up with standards for labeling. That's wrong: they came up with standards all the way back in the IE6 era.
But now you can move to saying that the problem is that they didn't do enough advocacy. Personally, I feel (and the constitution agrees with me) that when an alternate solution for furthering state interests exists that doesn't abridge free speech, the state is obligated to pursue it -- that's part of what strict scrutiny requires.
----
> They also have the now-standard exceptions for things with literary/artistic/political/educational value. The "porn is speech" issue is tautologically handled by saying the laws only apply to the non-speech variety.
God, please grant me the confidence of an HN commenter saying that speech distinctions are handled by a bill saying "don't infringe speech." There is no reliable test for where to draw that line; it's silly to let the government decide where that line is on a case-by-case basis; there have been multiple Supreme Court cases pointing out that the government drawing that line case-by-case is unconstitutional; and we have political leaders on record saying that they want to use proposed Internet filtering laws to abridge LGBTQ+ rights.
You yourself aren't applying those qualifications when you think about this -- you've argued elsewhere that somewhere between 30-40% of Reddit content is porn. How much of that porn has artistic/literary/political/educational value? What percent of Reddit porn is and isn't speech? Of course, that's not an easy question to answer.
So they put up a banner after these laws went into effect, only in states affected. My original point was where were their banners during the last 20 years? Obviously people have felt there's an issue. They did not put forward their idea. Other people did (even if it's a bad one). The article you posted also claims
> In recent weeks, ECP has lobbied Apple, Google and Microsoft
i.e. they were not doing it until they found themselves being regulated.
Your quote indicates that device based age verification is not filtering:
> One possible version of the idea, Friedman told CNN, would be for the tech companies to securely store a person’s age information on a device and for the operating system to provide websites requesting age verification with a yes-or-no answer on the owner’s behalf
How you get that information is not specified. The rest of the article implies the idea is that your phone would store your government ID. What they're suggesting seems compatible with these laws. Their suggestion is even explicitly spelled out as acceptable in the Utah law. Utah seems to already have an app for the device side to handle the ID. This site seems to be a demo of how to query it?
Like now I really don't understand what they're suggesting. They seem to be happy with what's being asked of them (at least in Utah and Louisiana)? Maybe they're still upset with Texas (though where they lack an existing system, they provide stronger privacy liability for a third party), but what's the issue with Utah?
Why are they starting discussions with Apple and Google to build it? Shouldn't they be integrating with the wallet provider who already has?
Are they upset that the timeline for integration was too short or the id app was missing part of the implementation? Why don't they complain about that if so?
My read at this point is that this is more an attempt at a stalling tactic. They don't even seem to actually be against mandatory age verification, because at this point it seems to have already been thought through and implemented in a privacy-friendly, standardized way by at least two of these states.
On the tangent: most (all?) states have obscenity laws about giving e.g. porn to kids. Movie ratings are not mandatory because rated movies generally aren't obscenity lacking artistic merit. An R-rated movie will be safe; a porn movie likely not. The government doesn't decide the artistic-merit question; a jury does (it is a question of fact, not law).
Arizona, where I grew up, has a law specifically covering vending machines like Redbox, and it says that if you did want to make a porn Redbox, you'd need a way to ensure the customer is an adult (e.g. a membership card or token that you buy with an ID check). As far as I know no one's challenged it.
> So they put up a banner after these laws went into effect, only in states affected. My original point was where were their banners during the last 20 years?
No, your original point was, and I am literally quoting you here:
> They should be proposing systems that they believe are reasonable to meet their obligation, but they are not. Instead, they've gone from at least requiring credit cards to... absolutely nothing. They've frankly brought this on themselves.
This is categorically false. Not only are they proposing alternatives, not only have they pulled out only of states that do not offer a government ID system (even though it's offered criticism, Pornhub has not pulled out of Louisiana), they also proposed systems well before this legislation took effect -- like you said, the RTA standard has been around for ages.
No, Pornhub has not preemptively lobbied for it to be legislated, but that is hardly unusual and hardly a cause for criticism; companies generally don't preemptively lobby for themselves to be legislated unless they're shooting for regulatory capture. Quite frankly, usually when companies lobby against regulation, they don't put forward alternatives. It's unusual that content companies are going this far out of their way to try and help solve the problem instead of just pointing out flaws with the government proposal.
----
> Like now I really don't understand what they're suggesting.
There are several ways of approaching this: one is to do age verification using a standardized system -- ideally that system would be standardized on a federal level. Where states have such a system, Pornhub hasn't pulled out. This is the least-good solution, but it is a solution that Pornhub in specific seems to be generally fine with.
A better way of approaching this is to do age verification using a standardized system that is purely device-bound -- ie, a system where a flag is set purely locally, possibly with one-time verification through a company like Apple or Google, and where requesting websites are sent no data other than a general "yes/no" byte alongside requests. This would be a considerably better system for privacy and security, and it is the ideal that Pornhub in particular is advocating for. One reason why this system would be better is because once verified, verification data would never need to be transmitted off-device at any point. It would also not run the same risks of training customers to upload ID information to arbitrary websites, which is a large phishing risk.
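To illustrate how little data such a device-bound scheme needs to leak, here is a toy sketch of the "signed yes/no" idea described above. The key handling and token format are invented for illustration; a real design would use platform attestation with public-key signatures, not a shared HMAC secret:

```python
# Toy sketch of a device-bound age signal: the OS holds the age fact,
# and a site only ever receives a signed boolean bound to that site.
# Keys and token format here are hypothetical, for illustration only.
import hmac, hashlib, json

OS_KEY = b"device-secret"   # held by the OS / platform vendor
VERIFIER_KEY = OS_KEY       # in reality, sites would verify a public-key signature

def os_age_signal(is_adult: bool, site: str) -> dict:
    """OS side: emit a signed yes/no answer bound to the requesting site."""
    payload = json.dumps({"site": site, "over18": is_adult}).encode()
    tag = hmac.new(OS_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def site_checks_signal(token: dict, site: str) -> bool:
    """Site side: verify the signature and read only the boolean."""
    payload = token["payload"].encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False
    data = json.loads(payload)
    return data["site"] == site and data["over18"]

token = os_age_signal(True, "example-adult-site.test")
print(site_checks_signal(token, "example-adult-site.test"))  # verified; no ID data ever sent
```

The point of the design is visible in the token itself: the site learns a boolean and nothing else, and binding the answer to the requesting site prevents one site's token from being replayed elsewhere.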
Pornhub's stance on this is weaker than my own. I would prefer for this to be handled entirely through filtering. In practice, the vast majority of parents can easily enter an age into a device when creating an account, and then any standardized age verification system could pull from that parental control with no need to ever expose sensitive ID information to even Apple/Google/Microsoft. Or, even better, parents could be given the option to be more granular with their filters, relying on devices to filter specific content and pages based on their own determinations about what their children can and can't see.
Pornhub also advocates for filtering solutions, but is comfortable with verification/blocking if there are systems in place that make that secure and private.
I don't know the specifics of Utah's digital ID system, but given that Pornhub hasn't pulled out of Louisiana, I would guess the reason they have pulled out of Utah is because they believe that Utah's system isn't secure enough or comprehensive enough to meet their needs. I can only guess what the reason would be -- whether it's a lack of desktop support, or whether the app transmits more data than Pornhub would like to receive, or some other critique. Maybe they will eventually adopt that system in Utah.
But the biggest critique Pornhub has of these laws is the de facto requirement to use 3rd-party ID systems or to collect data themselves. Because they (very correctly) point out that 3rd-party ID systems have security risks, are generally run by shady companies, and generally teach users bad data and privacy habits. Again, their stance is less extreme than mine: Pornhub is only lobbying for a workable ID system, while I would argue that these ID systems are inherently insecure, inherently raise 1st Amendment questions, and as designed fundamentally do a worse job of protecting kids than labeling laws would. I would also argue that several of the states pushing these laws have directly proposed creating registries of trans and LGBTQ+ citizens, and that like the 3rd-party verification industry, those governments themselves should not be trusted with touching ID verification data at all (again, I would note that none of the bills bar collection of data for these purposes).
But Pornhub is OK with those systems... if they exist and are (somewhat) secure and private. Pornhub has some other critiques that I think are pretty reasonable (and that have been spelled out in the articles that I've linked), including the fact that the enforcement mechanisms (lawsuits rather than direct regulatory action) generally leave smaller and less responsible porn sites untouched and make kids more likely to visit them. And we've already covered how these laws fail to protect kids from porn spread on general social media like Reddit and on non-commercial sites like Mastodon. But the most basic critique Pornhub has is that the 3rd-party ID verification ecosystem as it exists today makes it dangerous to do this kind of verification.
> Why are they starting discussions with Apple and Google to build it? Shouldn't they be integrating with the wallet provider who already has?
A general solution here built into platforms is obviously preferable to a state-by-state solution, particularly given how bad most states are at building secure software. It makes a ton of sense to work with Apple and Google directly on this -- governments themselves should be working directly with Apple and Google on this.
----
> My read at this point is that this is more an attempt at stalling tactic.
Okay, think through this for a second. This doesn't make sense. Pornhub is pulling out of these states. Pornhub does not win in any of these interactions; there's no benefit to Pornhub to stalling, every day they stall hurts their business.
Paypal "stalls" when I try to withdraw money because they get something out of it, they get continued interest on the money they hold. Apple "stalls" on app store regulation because they get something out of it, they get continued revenue from the app store while regulators go back and forth with them. Pornhub doesn't get any of that -- they get zero revenue from these states while this is being litigated.
This does not make sense as an analysis. If Pornhub thinks that they're going to need to go back to these states, they lose more money the longer they wait. Clearly there's something else going on here other than just greed.
----
> Movie ratings are not mandatory because they are not obscenity without artistic merit. An R rated movie will be safe. A porn movie likely not.
What percentage of Reddit porn doesn't have artistic merit? This is nonsensical; you're still looking at a situation where 50 Shades of Grey and Game of Thrones are legal to show to children. By any reasonable definition that content is pornographic, it would rightly fall under NSFW classifications on most sites, and I think most adults would agree it shouldn't be shown to minors. But it's still legal, and you're arguing that this kind of content wouldn't be covered under these laws.
> As far as I know no one's challenged it.
This does not necessarily mean that if it were challenged, it would hold up. Most of the movie industry voluntarily restricts access beyond what the law requires. What we do know is that when these laws have been challenged, particularly at the federal level, and particularly where the Internet is concerned, they've been difficult to defend and have been struck down in high-profile cases (https://en.wikipedia.org/wiki/Communications_Decency_Act).
Regulations on technological capabilities are not free from constitutional risk, but they are far less likely to run into these problems.
Now, if your point here is that these filtering laws are only going to protect kids from X-rated full-on smut with no plot, and that artistic pornography won't be covered -- then these aren't effective laws. They're not protecting kids. Yes, we have obscenity laws in the United States, but if we're going to go in-depth on those laws, we have to start with the point that "porn" and "obscenity" are not the same thing legally speaking. Porn can be obscenity, but not all porn is classified that way. You draw a bright line between R rated movies and X rated movies, but it's not the government that makes that classification, it's a completely arbitrary industry-drawn line. Where content online is concerned, there is no easy test to determine whether a pornographic piece of art or video has artistic merit -- and in fact R-ratings are not based on artistic merit or social value, only on how graphic or disturbing the content is.
Yet the laws you're championing require making that distinction on such a large scale that we would be able to tell what percentage of a website consists of obscenity. It's not realistic, it can't be done without disregarding 1st Amendment concerns.
If you're trying to protect kids from porn, it is not enough to target obscenity -- there is plenty of 1st Amendment protected pornographic speech that should never be shown to children. Which is why filtering laws in these situations are preferable; they dodge (some) 1st Amendment concerns while allowing parents agency to filter material that would not fall under obscenity law, but that is still probably not a great thing for kids to look at.
I suppose I conjugated my verbs poorly, then; the mismatch between "should be" and "have been" may have hinted at that. But conceded: I should have written that they "should have been".
Like I said it's quite difficult to find information about this stuff. I don't even know if RTA is what IE used. It's not clear that anyone notable ever implemented it. I don't see it referenced on bugzilla.mozilla.org. Mozilla came up with their own proposal (Prefer: safe) in 2014 and actually submitted it to IETF, and didn't reference the Rating header. Did anyone try to tell them about it? They had like a 30% market share at the time. I can't find any references to it on issues.chromium.org either. I don't see any discussions on chromium's developer mailing list archives. I don't see it on the Android archives. Did they bring it to a lawmaker? To any standards body? To anyone?
Did they even reach out to tech companies like they said?
The howto for android https://www.rtalabel.org/index.html?content=howtoandroid just says you need to agree to their terms, gives no instructions, and has... an ad for travel services. Is there even an android implementation? This seems to be representative of the effort here.
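For what it's worth, the label itself is about as simple as a standard can get: RTA publishes one fixed string that sites embed in a meta tag or serve in a `Rating` HTTP header, so the entire client-side check is a substring match. A minimal sketch (the function name and dict-based interface are mine, purely illustrative):

```python
# The fixed label string published at rtalabel.org.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def page_is_rta_labeled(headers: dict, body: str) -> bool:
    """True if a response advertises the RTA label, either via the
    Rating HTTP header or a meta tag embedded in the HTML body."""
    if RTA_LABEL in headers.get("Rating", ""):
        return True
    return RTA_LABEL in body
```

A parental filter that blocks any page where this returns True is essentially the whole client-side burden of a labeling scheme, which makes the lack of browser uptake all the stranger.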
Anyway, my original point was that the whole discussion seems to be disingenuous. They say they want an on-device age verification, and they even said that specifically in response to Utah's law. But Utah explicitly allows that already.
The reporting sucks. They didn't link to the laws. Almost none of the articles about this even name the laws (e.g. SB 287) so you have to go searching for it. The reporters don't seem to bother to read the laws, even when they're only 2 pages long. That CNN article says Pornhub doesn't like Utah's law because they want on-device verification. Utah's law explicitly allows for that, and they already have a working system. It's in fact an ISO standard, and seems to have wide traction building among US states:
(Incidentally, that site seems to be exactly what it looks like when someone is actually advocating for a proposal)
Why don't the reporters ask for some clarification on what they don't like about the law? Or the system? On their face, their complaints seem to be silly.
It's also disingenuous to characterize KYC services as shady. Their main customers are banks, and they're going to undergo annual audits for SOC 2, ISO 27001, etc. because every bank requires that. Their entire business is legal compliance as a service. If the law says not to store your info, they won't.
Pornhub may not be used to people who think this way, but in the financial services sector where these vendors currently operate, compliance with the law is just an assumed baseline feature. It is entirely normal for customers to have their own security architects examine your architecture documents, have multi-month back-and-forths about how to ensure legal requirements will be met, and require annual third party audits and penetration tests of your system. A company I worked for had a system to help automate answering these kinds of questions because they come up constantly.
Service providers here also already have to deal with both retention requirements and non-retention requirements like CCPA, and figuring out which data has which requirements. Pornhub's use-case is less complicated.
They complain they don't want to store whatever info. But the laws don't say they need to, and in fact say they must not. If they need help, there are companies who sell exactly that service.
Why don't the reporters ask for clarification on what appear superficially to be contradictions?
> Like I said it's quite difficult to find information about this stuff.
Quite honestly, I don't think it is. I'm not an expert on this, I'm using the same search engines you're using. I'm able to find stuff online.
> I don't see it referenced on bugzilla.mozilla.org. Mozilla came up with their own proposal (Prefer: safe) in 2014 and actually submitted it to IETF, and didn't reference the Rating header. Did anyone try to tell them about it? They had like a 30% market share at the time. I can't find any references to it on issues.chromium.org either. I don't see any discussions on chromium's developer mailing list archives. I don't see it on the Android archives
This is a lot of critique that boils down to "browser makers and lawmakers didn't implement it." But porn companies are not in charge of browsers. I could ask the same question in the opposite direction -- lawmakers have literally entire teams of paid staff to research this stuff, they are literally required by law under strict scrutiny to research it... and like I said above, I'm able to find information when I search online. So why weren't they able to find anything?
I don't think this is an excuse; I don't think lawmakers need to be babied about looking for potential solutions to bills when strict scrutiny is in play. Strict scrutiny does not say that the government should be narrow and specific and research alternatives unless nobody sent them an official proposal on letter paper, in which case how were they to know, we can just do whatever, all rules are off. Strict scrutiny places an obligation on the government to do research.
----
> That CNN article says Pornhub doesn't like Utah's law because they want on-device verification. Utah's law explicitly allows for that, and they already have a working system. It's in fact an ISO standard, and seems to have wide traction building among US states:
Looking more at it, I will say that MDL looks reasonably interesting, there's stuff here that I like quite a bit. I will also say that it's not available on Windows, Mac, or Linux, and that it doesn't look like it will ever work via 3rd-party ROMs. But sure, other than that it looks promising. And maybe Pornhub will adopt it at some point, I do think this system looks like it would be an improvement over a lot of ID verification I'm forced to do for services with KYC rules. So I'm all for that.
I will also point out that it's not available in Texas. And we have talked about this, you can't treat these laws like they're some kind of composite whole where one state addressing a problem means the other states no longer have that problem. Okay, you think that Pornhub is being disingenuous about Utah? Fine. The original link at the top of this thread is about VPN usage surging in Texas, which does not implement an MDL standard.
----
> The reporting sucks. They didn't link to the laws. Almost none of the articles about this even name the laws (e.g. SB 287) so you have to go searching for it.
> [...] Why don't the reporters ask for clarification on what appear superficially to be contradictions?
This is not specific to these laws, all political reporting about bills has this problem. Every time that I want to find the original text of a bill that's being reported on by even mainstream sites, I have to search for it. Could it be better? Sure, I regularly advocate that reporters should link to bill text. Do reporters in most interviews tend to ask only softball questions (regardless of who they're interviewing)? Yes. Does that common problem get rid of criticisms of the bills? No, it doesn't.
----
> It's also disingenuous to characterize KYC services as shady. Their main customers are banks, and they're going to undergo annual audits for SOC 2, ISO 27001, etc. because every bank requires that.
I will 100% stand by my representation. Common KYC services are shady. Credit reporting services are shady. This entire information economy is shady; it doesn't matter if they're working with the government. We're only a few years out from Equifax (which is sometimes used for customer verification) leaking the financial information of nearly every adult in the US. But what, they work with banks? They work with the people who haven't learned how to do proper 2FA yet? They work with the people who retain massive amounts of customer information and offer credit cards that are privacy nightmares? I have bad news for you about bank privacy in the US. None of these companies have a good track record on this.
I fully stand behind my characterization of them: these services are shady and should not be expanded recklessly to other areas of our life. I think that's an easy conclusion to draw.
> Their entire business is legal compliance as a service. If the law says not to store your info, they wont.
3rd-party KYC services fundamentally cannot work without storing your info. Like, by definition -- the requirement is literally to know your customer. That involves... knowing them. And comparing pre-gathered information is still storing info. You cannot ask a "verify your identity by telling us something we already know" question without already knowing the answer.
> They complain they don't want to store whatever info. But the laws don't say they need to, and in fact say they must not.
We have been over this multiple times already: no, they do not. None of these laws ban storing metadata or linking identities to requests by these 3rd-party companies. There is nothing in these laws that clearly prevents a 3rd-party ID service from aggregating data about which users have accessed porn. None of these laws ban government storage of information (and once again, states have said that they want to have databases of LGBTQ+ citizens). The majority of these laws do not offer sufficient penalties to incentivize companies not to violate restrictions (user-brought lawsuits are not sufficient, data privacy laws get violated all the time). None of these laws clarify how long information can be retained, and most don't clarify what damages a user would actually be entitled to if their information was leaked.
----
I do want to loop back around to:
> Anyway, my original point was that the whole discussion seems to be disingenuous.
These bills have problems. At their best, even if MDL turns out to be great and private -- they're still going to increase user propensity to fall for phishing attacks, they still use a selective enforcement mechanism that will let off the worst actors, they still have 1st Amendment concerns, they still don't really address the majority of porn online (I will remind you that Reddit demands verification in zero of the states that have passed this legislation), they still have insufficient protections against data retention. They still require distinguishing between obscenity and porn on a scale that is impossible to do without abridging 1st-Amendment speech, and they still hew closely to similar federal attempts to legislate porn that have been ruled unconstitutional.
And we're reaching the point where we're basically arguing over "has Pornhub done enough? Why haven't they looked at this standard? Why didn't the government look at this standard? What are everyone's intentions?"
I want to take a step back and say that even if Pornhub did absolutely nothing (which again, I would argue they did not), that doesn't change anything at all about the objections to these bills. And if we're talking about disingenuous, it feels disingenuous to have a conversation that's constantly bouncing between incompatible statements like "this protects kids", and "R rated movies like 50 Shades of Grey wouldn't be covered", and "Mastodon wouldn't be affected" -- and to have all of those problems and contradictions swept under the rug in favor of "but Pornhub was asking for it."
We can look at the laws as implemented today and look at their effects and we can say objectively and indisputably -- they are not working. A lot of porn is still available in those states. So what the heck is the rest of this conversation? You don't need much evaluation beyond: you passed the law and r/insert-depraved-porn-sub is still available in your state without age verification, so... the law didn't work.
I do still feel like you're looking at this through a lens that misrepresents what most lobbying efforts and most political reporting look like on every issue. But you know what? It doesn't matter. You think that Pornhub should have gotten more involved, great, that's very idealistic. You want political reporting to get better, great, that's an effort I can get behind. It doesn't mean that these bills don't have 1st Amendment concerns, don't contradict themselves in talking about retention and data collection while advocating 3rd-party services that literally cannot operate without collecting data, it doesn't mean the bills aren't vague. And it doesn't mean the bills work. And I'm sorry if you don't like porn companies, but these are still bad laws. I'm sorry if you think that porn companies aren't playing nice, but you're still spreading claims about ID verification and 1st Amendment protections as they exist today that are just not true.
What is the disingenuous thing here: litigating whether or not Pornhub cares enough about kids, or dismissing obvious problems with legislation and spreading misinformation about that legislation just because you don't feel an industry was proactive enough in preempting it? I'll loop around again to -- I don't even care if you support the laws; fine. But don't say things about the laws that are not true.
----
> Sure, there are multiple laws. The ones I've read all seem similar enough to me on the points people bring up.
The laws are template laws, but do occasionally differ in important ways. You've mentioned before that Texas includes a financial penalty for retaining user IDs beyond verification. You didn't mention that Texas is pretty much the only state that does this, and the majority of the other bills only allow for suing for harm and attorney's fees. Harm can be difficult to prove for information retention, and these provisions rely on individual action for enforcement.
You mention later in this comment that Utah includes provisions for ID-only verification. You don't mention that Utah is (as far as I can tell) one of the only states that offers this kind of detail, most merely mentioning that "government identification" could be used for verification.
These things matter. When we treat these bills as a single unit, we run the risk of building a composite bill that theoretically addresses every concern, even though that composite bill doesn't actually exist anywhere.
----
> Identity verification is not that mysterious.
Agreed. Do you believe that the security professionals who are intimately familiar with identity verification services and who know how the current services work are just... lying? Like, what do you think is happening here? This is not something complicated where there are a bunch of debates about how ID verification can work, we know how the ID verification services today work. And security professionals are saying there's a security risk.
Does the Texas AG know something that they don't? Is there some secret new ID verification system that only lawmakers know about? Like you say, this isn't that mysterious, ID verification online exposes users to privacy and security risks. It's straightforward, this is a known risk.
The fact is, there are no identity verification services I'm aware of that I think are secure enough to use for this level of transaction -- and every 3rd-party ID service I'm aware of works by retaining and accessing stored information about users.
The people talking about the security risks know how existing identity verification services work. They're not that complicated. They work by collecting and transmitting and cross-referencing personally identifying data, and that process is vulnerable to attack and data misuse.
----
> i.e. KBA, which is already a thing. These companies already know facts about everyone. You claim you're person X. They ask you to tell them a fact they already know. They check your answer against their database. They don't need to store anything you tell them.
Okay, are you listening to yourself?
> They check your answer against their database.
So personally identifying information is collected and stored. And that information is linked to requests to access potentially compromising or embarrassing material on a level of granularity where those requests, if intercepted, can be used to link personal identities back to those requests. By your own admission.
I don't know, you're agreeing with me and then saying "see, that means that data doesn't have to be stored." No, you just described data getting stored and held by a 3rd-party (notably, a set of 3rd-parties that have had historically awful security and have regularly been irresponsible with those databases) and then cross-referenced with individual access requests in a way that necessarily identifies to these data brokers which individuals are interacting with which companies.
Sure, those services don't need to store your newly uploaded ID -- they already have it! But what comfort is that? They still have the ID either way. You are describing a system that can only exist by hoovering up and retaining huge amounts of data on individuals, and you're advocating that this system should be expanded.
And while we're on this subject, none of the laws I've read ban retaining records of this access or selling information about which individuals' identities are verified, even though that could be compromising or personal information. More PII and data is created during this process than just the ID you transmit, and I don't think a single law that I've read addresses that fact. But sure, the data broker that already has your ID won't store the image you sent them. That'll be a huge comfort to Texas users when those sites get hacked and leak access information about which users had their IDs verified for which services.
What you're describing is not a privacy-respecting system.
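To spell out the flow you just described (all names and data here are hypothetical; the point is structural):

```python
from datetime import datetime, timezone

# Hypothetical KBA provider -- structurally, the flow only works
# because the provider ALREADY holds the answers, and every check
# produces a record linking a person's identity to the site that
# requested verification.
KNOWN_FACTS = {  # pre-gathered PII, stored by the data broker
    "jane.doe": {"first_car": "civic", "street_grew_up_on": "elm"},
}
ACCESS_LOG = []  # generated as a side effect of every verification

def verify(user_id: str, question: str, answer: str, requesting_site: str) -> bool:
    facts = KNOWN_FACTS.get(user_id, {})
    ok = facts.get(question) == answer.lower()
    # Nothing in the laws discussed above forbids retaining or
    # selling this linkage of identity to requesting site.
    ACCESS_LOG.append((user_id, requesting_site, datetime.now(timezone.utc), ok))
    return ok
```

After one call to `verify("jane.doe", "first_car", "Civic", "adult-site.example")`, the broker's log ties Jane's identity to that site regardless of whether her answer itself is ever "retained" -- that log is the leak-in-waiting.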
----
> The Utah law also allows the user to present a "data file from a state agency or an authorized agent of a state agency that contains all of the data elements visible on the face and back of a license or identification card and displays the current status of the license or identification card."
I avoided pushing this point too hard before, but reminder that there is no requirement in any of the laws I've read for state agents or authorized agents of the state to delete records of that request or to avoid linking those requests to individual services. The laws as written do not block government agencies from using this information to build detailed records of who accesses which services.
> No need for the site to save anything. Just check the signature and age.
This would not pass a check for fake IDs, nor would it prevent shared IDs. The laws I've read provide no guarantee that a system that can be trivially bypassed would be sufficient to ward off state action. Again with the ambiguity about what "reasonable" means, which is a major problem in these bills: "Don't violate privacy, but it has to work." Well, if all you're doing is OCR on a license and you're not cross-referencing that data or storing information about attempts, that is not a system that is hard to bypass.
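To be concrete about what "just check the signature and age" amounts to, here's a toy sketch. A real mDL verification uses the issuer's public key and the ISO 18013-5 data model; I'm using an HMAC stand-in and a fixed "today" purely so the example is self-contained:

```python
import hmac, hashlib, json
from datetime import date

# Stand-in for a state signing key. A real scheme uses public-key
# signatures, not a shared secret -- illustrative only.
STATE_KEY = b"state-dmv-signing-key"

def sign_credential(cred: dict) -> bytes:
    payload = json.dumps(cred, sort_keys=True).encode()
    return hmac.new(STATE_KEY, payload, hashlib.sha256).digest()

def verify_age(cred: dict, sig: bytes, min_age: int = 18) -> bool:
    payload = json.dumps(cred, sort_keys=True).encode()
    if not hmac.compare_digest(sign_credential(cred), sig):
        return False                      # the signature check
    born = date.fromisoformat(cred["dob"])
    today = date(2024, 1, 1)              # fixed date for the example
    age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
    return age >= min_age                 # the age check
```

Note what the check doesn't do: it says nothing about who is presenting the credential, so a minor holding a parent's validly signed ID passes every line of it.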
Also as I mentioned above, there isn't just one law. Other laws do not go into this level of detail about what kinds of IDs are accepted or how they could be verified. Great that Utah does (although Utah's example is not sufficient to address concerns) -- that just leaves all of the other bills.
> I don't see what makes porn sites unique vs. any other e-commerce business that requires customers to identify themselves wrt. security.
Multiple things:
A) Not all porn sites are e-commerce businesses, and not all platforms affected by these bills are porn sites. These bills are not typically restricted to commercial transactions -- merely accessing commercial sites requires verification, even without a business relationship.
B) e-commerce businesses with traditional verification requirements typically do not allow for anonymous usage in the first place. Many of them have extensive "know your customer" rules and are not concerned with protecting the privacy of their users -- quite the opposite, many of them are required to retain information about their users.
C) Security-wise they're not that different, and the criticism of these bills directly extends from knowledge about the security risks and bad practices of many of those e-commerce sites. Whether or not you understand the security implications, I promise you the organizations and security experts that are pushing back on these bills already understand that Flowroute exists.
Note that the theoretical instant, private identification that you seem to be proposing sites will implement doesn't exist for the companies that are relying on this verification today. Once again, I'm left pointing out that you're describing a happy-path scenario that isn't the case for any online identification system I can find. As far as I can tell, these services all store data about their users' individual identities.
----
> Also many grocery stores do scan IDs when you hand them to the cashier. Who knows what they're doing with that info. Wouldn't surprise me if they retain and sell it.
Shouldn't you check up on that before advocating that Internet ID verification is fine because it's just like local verification? Me personally, before I compared digital ID verification to local ID verification, I might make sure that local verification isn't retaining and selling all of your data, because otherwise the comparison would look awful. Have you checked to see whether security professionals have also raised alarms about local storage of ID information? Because... they have, for the exact same reasons :)
Local ID verification ideally should not involve scanning an ID, and the fact that it sometimes does anyway is worrisome. It doesn't bode well for expanded digital ID verification.
If your point is "local verification doesn't require sending information to multiple parties across the Internet and yet companies still do it anyway, and we still don't know what's happening to your data in that scenario" then... I mean, you have to understand that's not something that is likely to make anybody feel more charitable to your argument, right? That's not something that makes online ID verification seem like a good idea.
----
Once again, I'll repeat:
- Texas's own language refers to these systems as storing user information.
- There are no ID verification systems that I'm aware of for online services that work without maintaining and storing information about users.
- Addressing long-term retention of submitted information is not sufficient to address the privacy and security concerns that researchers have brought up.
- None of the bills I've read are clear that an unverifiable zero-retention policy would be sufficient to avoid liability, this seems to be something you're just reading into the text as an assumption of good will.
What you're suggesting above about retention practices and the ability of ID verification services to do this without storing customer data isn't true -- but even if it were true (which it's not), it changes nothing. Regular transmission of this kind of information is dangerous, and users should not be trained to submit this kind of information casually, especially not to sites that they don't have business relationships with. The transmission and collection of this information exposes users to risks to both privacy and security.