Calif. Law to Protect Children's Privacy Could Lead to Invasive Age Verification (reason.com)
143 points by pseudolus on Oct 8, 2022 | 100 comments



I'm surprised how little this issue has come up in the past, given how focused many here are on privacy. This issue can be seen even today on YouTube. When I browse videos, I usually do so in FF's private browsing mode, primarily to prevent the recommendations from spilling over into my main account. However, videos marked as age-restricted cannot be viewed without logging into a Google account. Even yt-dlp couldn't download age-restricted videos without credentials when I last attempted it! (I hear there has since been a patch to fix this, but I have not tested whether it works.) And apparently, EU users are sometimes required to upload their government ID to YouTube to prove they are over 18.

I get that privacy issues can be most sensitive to those under 18, but it's important to recognize that enforcing far stronger privacy restrictions alongside strong age-verification expectations (no more "are you 18+? yes/no" or "what is your birthday?" questions) can only come at the expense of anonymous browsing.



Note that you have to trust that the proxy injecting age verification on behalf of the user is not logging data, being malicious, etc. If it were running targeted attacks, this would be very hard to detect: for example, say the proxy was run by China and a user requests a video that's clearly anti-China. To be clear, I'm NOT saying this is the case, just that it does require trusting a proxy, and proxies, generally speaking, are hostile to users.


I want to come back to this comment later and try these out. Thanks


"... can only come at the expense of anonymous browsing."

False dichotomy.

(Also known as false dilemma, bogus dilemma, either-or fallacy, black-and-white fallacy.)[FN1]

If the website protects the privacy of all persons, not just those under 18, then there is no problem complying with this law.

It's like the folks who claim the web cannot or would not exist unless advertising is permitted. It is a self-serving statement of an opinion. It has no evidentiary basis.

This false prediction and nonsense reasoning is probably accepted by generations that did not experience the internet before advertising was permitted, before online advertising became pervasive, and/or before online advertising became targeted and based on data collection. The fact is the internet and web did exist without advertising. There is no technical limitation that requires a website to collect data and/or serve ads.

Common retorts to such historical facts include such gems as

"I would never want to go back to 199x", or

"All the wonderful content of the web today would disappear."

1. Computers and networks are never going to return to 1990's prices and speeds. Speeds keep increasing and prices keep decreasing.

2. Who would have guessed, but today's commercial web content is actually being generated mostly by website users themselves rather than website operators. "Tech" companies operating high traffic websites have promoted their own "business", online advertising, to lure people into uploading to their websites without expecting payment in return. Like some sort of opaque lottery. This is why the web has become a cesspool of "clickbait" and garbage "content".

FN1. https://en.wikipedia.org/wiki/False_dilemma


Facebook collecting data with "consent" obtained through use of dark patterns.

https://www.thedailybeast.com/beyond-sketchy-facebook-demand...

https://www.theverge.com/2019/4/18/18485089/facebook-email-p...

Facebook does not sell user data. No need. They sell access to the targets of the collection to advertisers, including political campaigns, among other things. We have no way to know all the uses Facebook makes of the data they collect.

If Facebook were to go under, what would happen to all the data collected?

What should be most concerning is that Facebook buys data. When combined with the data they have collected about people, this creates a potential hazard the Facebook user has no way to assess. There is nothing in any Privacy Policy, including Facebook's, that suggests a website operator will not obtain data about the website users from other sources.

In addition, with use of "beacons" or similar, Facebook collects data about what websites Facebook users, and www users in general, visit outside of Facebook.

Perhaps people would not consent to Facebook holding so much data about them, from various sources. Far more than what they themselves have voluntarily submitted to Facebook. We will never know, because people are not given full information and the choice whether to opt out.

Facebook is not a credit bureau. When credit bureaus tried to collect this much data about people from various sources, the practice became the subject of federal regulation. For example, FCRA, FACTA, FCBA and Reg B.


Perhaps I worded that somewhat confusingly. I'm in favor of privacy regulations existing in the first place. What I'm against is having privacy regulations suddenly become stricter below age 18. Most website operators want to minimize the time and money spent on compliance, and the lowest-friction way to do that is to implement a privacy-invasive age gate and deny any user who does not pass it. (Either the website receives your identity directly, or the government learns which websites you've been accessing. I've since been informed that less invasive age gates are technically possible, but they would require competent regulations to implement.)

The usual ethical difference, as I've understood it from prior discussions, is that a legal adult user can always consent to a company using their data, so long as the data is knowingly given to the company, the user has been honestly informed of what that data is used for, and the user can revoke the company's access to their data at any point. (Some people recognize less-strict conditions as acceptable, I'm just taking these as a baseline.)

However, many of these new regulations outright prohibit minor users' data from being used in certain ways, even if those users have freely consented to it, with the justification that minors are unable to evaluate the full consequences of their data being used. Since businesses face major liability for violating these regulations, they have two options: either implement the strictest possible age-related regulation for every user, or implement an age gate that cannot be circumvented short of the user committing fraud.

In this particular regulation, some of the wording is vague, requiring a "compelling reason" to collect minor users' data, regardless of those users' consent. Businesses abhor that kind of liability: how do they protect every user as if they might be a vulnerable minor, under every possible interpretation of the regulation? The easier path by far is to require an age gate.

Thus, I'd argue that the better path would be to require uniform privacy regulations, recognizing that random websites don't want to be in the business of age verification. Either require the same strong privacy regulations for all users, or don't implement the new regulations for minors at all.


Do not collect data from anyone, including minors.

No one, whether they are 18 or 81, wants to consent to surreptitious data collection for undisclosed uses.

But "tech" companies cannot survive without conducting surveillance. And so it is the false "tradeoff" (false dilemma) that they must sell to the public to survive.^1

Why do so many "dark patterns" exist? Because no one in their right mind wants to consent to what is actually going on with the collected data at these websites, which need not be disclosed by law, so how would anyone on the outside even know? We let these websites get away with this nonsense in the case of adults, but with children it is just too egregious to allow. Hence the AADC. This by no means suggests that adults want to consent. The dark patterns target people of any age.

1. If we do not allow data collection and surveillance, then there can be no web. That is total BS. We just saw instructions on how to download Wikipedia yesterday on the HN front page. We can build hospitals with opioid money, but that does not provide a justification for selling opioids without restriction. We can also build hospitals with taxes or donations.


Out of curiosity, what exactly would you define as "collecting data"? There has been much debate over what that particular term encompasses. Clearly, if a website sells users' browsing data to third parties so they can build an ad profile, they would be collecting data in your sense. But on the other end, if a forum site stores the contents of its users' posts to display them to other users, then it's considered business as usual.

So what exact actions make up "collecting data"? Is it how long the data is stored? How the data is obtained from the users? Whether the users are aware that it is stored? What kind of data is stored? How the data is used by the website? Whether the data is transmitted to third parties? Some combination of all of these?

People both on this site and elsewhere have come up with different answers to all of these questions, and we need to nail it down if we want to have effective regulations.


I find it ridiculous having to log in to YT (where I have no account, nor will I) to view “age restricted material” when you can watch hardcore porn just by clicking “I’m over 18 years old”.


>> strong age-verification expectations [...] can only come at the expense of anonymous browsing

No, this is not true. There are numerous ways for a system to authenticate a user without the endpoint requesting the authentication knowing who the user is — or the system verifying the user knowing what endpoint is requesting the authentication; only the user knows both who they are and the endpoint they want access to.
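
To make that concrete, here's a minimal sketch of one such building block, a blind signature, using textbook RSA with toy parameters of my own choosing (real systems use vetted libraries and proper key sizes). The issuer signs a token it never sees, so it cannot link issuance to redemption:

```python
# Toy RSA blind-signature sketch (illustrative parameters only). The
# issuer, e.g. an age verifier, signs a token it never sees, so it
# cannot later link the signed token back to the user who requested it.
from math import gcd

# Issuer's toy RSA keypair
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

# User: pick a token m and a random blinding factor r coprime to n
m, r = 42, 7
assert gcd(r, n) == 1
blinded = (m * pow(r, e, n)) % n   # this is all the issuer ever sees

# Issuer: sign the blinded value without learning m
blinded_sig = pow(blinded, d, n)

# User: unblind, leaving a valid signature on m
sig = (blinded_sig * pow(r, -1, n)) % n

# Endpoint: verify with the issuer's public key; nothing here links
# the redemption back to the issuance
assert pow(sig, e, n) == m
```

Verification needs only the issuer's public key (n, e), and the issuer only ever saw `blinded`, so neither party alone can connect the user to the endpoint.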


In an ideal world, yes. Blind signatures and all that.

But that isn't how collection-maximizing surveillance companies work. Right now, the lowest friction is to ask for birthdays where they can get away with it, but never really verify. If the law gives them a reason to verify, then the incentive is to expand their plaintext-based all-knowing systems to perform verification.

Sure, we'd likely see a startup that promises to do privacy-respecting age verification, but it would go nowhere. What incentive would there be for Google et al. to adopt it, as opposed to, say, just requesting your driving license number with a message implying that California "made" them do this? The companies without the clout to demand personal info would then outsource the verification to the bigger surveillance companies.


Unfortunately, there are Thiel-like support services in the wings actively building business bridges at every step. Some elements in society really do try to get paid to surveil others, and that will continue. I predict only hard-won litigation, meaningful laws with enforcement, and public discussion like this will slow the slide down the slippery slope.


"Some"? The entire market cap of modern Silicon Valley is based upon monetizing surveillance.

This topic demonstrates one of the main pathologies of the US model of governance. Instead of pushing constructive actionable privacy restrictions for everyone, the government is poised to create another hook that merely emboldens corporate control.

What we need is a US GDPR with an analogous definition of consent - if implementing functionality to examine and delete my data is too hard, then just don't store surveillance records about me in the first place! But what we'll end up getting is ineffectual nonsense to be nullified with more clickwrap legalese that nobody reads.


> The entire market cap of modern Silicon Valley is based upon monetizing surveillance.

No, it's not... but the bulk of FB/Google dollars that are dwarfs a lot of other things. It's a sick situation, I can agree with that. It's important to speak out.


You have to ask yourself if we should be building these kinds of censorship tools to begin with. If I wasn't allowed to use the internet the way I did as a kid, I wouldn't be here today.


I first used the internet in my mid-twenties and am here today.


I'm not sure how this is related. What does your experience have to do with another's?


How could the user verify that the endpoint never learns anything about the user's identity, and that the system never learns anything about the requesting endpoint? In situations where privacy is particularly sensitive, either or both could have a motive to learn more about the user.


No matter how you design this, at some point it involves trusting that the third party who views your ID doesn't leak PII to other parties.

This problem doesn't exist with today's simple prompts that just ask for your birthday and blindly trust the end user.


Indeed. It seems like most proposals delegate this role to the government, since they already have your basic ID anyway. The question is, how can the website operators interact with the government in such a way that the websites don't learn which government ID you have, and the government doesn't learn which websites you're visiting?

Surprisingly (to me at least), certain cryptographic tricks can do a pretty big chunk of that, by having the website/government and user/government interactions occur ahead of time. But as far as I can see, there's still the issue with these tricks that 18+ people could share their access with under-18 people, which the government would want to prevent.


I actually do have an economical technical solution to the issue of remote authentication of a user's physical presence using zero-trust systems that makes sharing a one-time token impossible, but I'm currently not motivated to share it, since it's possible I might want to use it myself formally. It's super simple, and it's very possible someone else has already used it before. I realize you have to take my word for it, but to me, that's a non-issue.

Also, it does appear Privacy Pass covers the other aspects (it's discussed elsewhere on this page), but I'm still trying to work through whether it covers all aspects of a system that, at least for myself, I would be agreeable to using for things like age verification, citizenship, captchas, etc — basically anonymous verification of identity attributes.


Huh, I read up on Privacy Pass, and it really does seem to have the properties I'm looking for. Taking the government as the issuer/attester, it seems like the endpoint (origin) gives a challenge to the user (client), the user requests a token from the government (which, critically, the government can verify without knowing the user's ID), and uses this token to respond to the endpoint's challenge. Thus, the government might learn that some user accessed that particular endpoint (depending on the exact setup used), but cannot learn who it was. And since it does not know the user's identity, it cannot censor a particular user, only potentially a particular endpoint.
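
A toy sketch of that flow (not the actual Privacy Pass protocol, which uses a VOPRF; the parameter names and the blind-RSA stand-in are my own), with redemption made strictly one-time via a spent-token set:

```python
# Toy three-party flow: the "government" blind-signs ahead of time,
# and the user later redeems the token at the endpoint. Illustrative
# only; real Privacy Pass uses a VOPRF and proper key sizes.
import secrets
from math import gcd

P, Q = 61, 53                      # toy issuer keypair
N, E = P * Q, 17                   # issuer's public key
D = pow(E, -1, (P - 1) * (Q - 1))  # issuer's private key

# Issuance (ahead of time): the user blinds a random token; the issuer
# signs without ever seeing the token itself
token = secrets.randbelow(N - 2) + 2
while True:
    r = secrets.randbelow(N - 2) + 2
    if gcd(r, N) == 1:             # blinding factor must be invertible
        break
blinded = (token * pow(r, E, N)) % N
sig = (pow(blinded, D, N) * pow(r, -1, N)) % N   # unblind the signature

# Redemption (later): the endpoint checks the issuer's signature and
# marks the token spent, so each token works exactly once
spent = set()
assert pow(sig, E, N) == token
assert token not in spent
spent.add(token)
```

The issuer only ever sees `blinded`, and the endpoint only ever sees `(token, sig)`, so neither side on its own can link the redemption back to the issuance.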

I also don't really see how present physical identity could be proven in a privacy-preserving manner, if you don't trust the government or endpoints not to abuse that physical data. Alas, even if I take your word for it, I doubt that governments would implement such an indirect system for age verification in practice, especially since they might want to revoke any erroneously-issued tokens.

(And even if anonymous non-circumventable age gating were to become a reality, I still wouldn't be a fan of differential privacy regulations for minors, since many ordinary sites would become inaccessible to teenagers from following the path of least resistance. Either impose the regulations on all users, impose them gradually depending on age, or don't impose them at all.)


> which, critically, the government can verify without knowing the user's ID

Wouldn't they know your IP address from which the request is coming? Or potentially use other browser tracking and fingerprinting tricks?

Maybe I misunderstood but it didn't sound like you were describing blind tokens issued in advance.

> Thus, the government might learn that some user accessed that particular endpoint

In my view the government knowing the endpoint associated with an individual request is a critical shortcoming. It's just too short a crevasse for them to jump to get that missing piece (identity). Even if the protocol is sound there are other means (e.g. force an endpoint to hand over logs, associate with authentications via timing or other characteristics, and use other tracking metadata provided by the endpoint itself or other third parties or even ISPs to figure out who accessed what). No thank you.

Also creates an easy, centralized chokepoint for more widespread censorship. Simply put, the government is not a choice actor I would trust with this type of capability. The technology is not mature enough to truly, in practice, provide the protections needed to do this right.


I have seen it explained before; I believe it was done using a combination of differential privacy and zero-knowledge proofs, but I might be wrong. Currently trying to find a detailed explanation of how to implement such a system using existing technology. In the meantime, if anyone else is aware of such a system, please post a link.

EDIT: This appears to cover some aspects, still reviewing it and to confirm it covers all the system attributes I described:

https://crypto.stackexchange.com/questions/96232/zkp-prove-t...


Hmm, I don't think that protocol as described would be sufficient for the government. What would prevent Alice (age 18) from sharing her signed age statement with her friend Charlie (age 17), who passes it to Bob to falsify his age? The government would not want underage users to collude with 18+ users to gain access to age-gated endpoints.


Agree, still trying to find a full explanation; I've seen it described before. Aside from sharing the token, the other issue is that if the endpoint cached the token, it's unique and could be cross-referenced to deanonymize the user if a third party or government gained access to the list of tokens an endpoint collected.

Obviously still looking for an example, but if you're able to think of any additional issues that need to be accounted for, let me know.


Unless it's a blinded token. In which case the endpoint cannot match the token that was issued to the token that was spent. They can only verify that the token is valid.

I first saw them used with Cloudflare's Privacy Pass https://www.hcaptcha.com/privacy-pass; they've also been used with Google's trust token proposal.


Thanks, I know I have looked at Privacy Pass before; after a quick review of the related links, I'm not 100% sure if it covers all the system attributes I described. Still looking it over.

For those interested, here’s the related paper covering it:

https://www.petsymposium.org/2018/files/papers/issue3/popets...

And yes, my understanding was that tokens entered a pool of one-time tokens and neither the government nor the endpoint was able to cross-reference them to a specific user.


Would these be susceptible to the issuing organization refusing to issue new tokens to some individual?


If you load the link below and look for “Problem: Malleability”, it says: “It’s possible to create an arbitrary number of pairs of points that will be verified. The client can create these points by multiplying both T and sT by an arbitrary number a. If the client attempts to redeem aT and a(sT), the server will accept it. This effectively gives the client unlimited redemptions.”

Which to me suggests it might address a user being refused new tokens, but it might not account for the tokens being revoked, or for the authorizing entity ceasing to exist even though the attribute (for example, age) is uniform.

https://privacypass.github.io/protocol/

All in all, at the very least, it appears to be an extensive and extensible standard, assuming those involved are reasonable and there are no conflicting interests.
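
The quoted malleability problem is easy to demonstrate in a toy multiplicative group standing in for the elliptic-curve group (all numbers below are my own illustrative choices): from one valid (T, sT) pair, a client can mint arbitrarily many pairs that the server will accept.

```python
# Demonstrating the "Problem: Malleability" quoted above. The server's
# evaluation is modeled as exponentiation by a secret s; given one
# valid pair, raising both elements to any power a yields another
# pair that passes the server's check.
p = 1_000_003            # toy prime modulus
g = 5                    # base element
s = 123_457              # server's secret key

T = pow(g, 999, p)       # client's element
W = pow(T, s, p)         # server's evaluation, playing the role of sT

a = 777                  # arbitrary exponent chosen by a cheating client
T2, W2 = pow(T, a, p), pow(W, a, p)

# The server's check (recompute with its secret key) passes for the
# forged pair too
assert pow(T, s, p) == W
assert pow(T2, s, p) == W2
```

This is why, as I understand the linked write-up, the fixed protocol derives T by hashing a client-chosen seed into the group rather than letting the client submit T directly.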


Yep, that's another thing I was thinking of: if Alice uses the same token all the time for every endpoint, then the endpoints could trivially use that as a unique identifier to track Alice. Perhaps the government could create a service for Alice to create new age tokens whenever she wants.

Even then, if the government actively compelling endpoints to hand over their data is within the threat model, the government could inspect any cached temporary tokens to deanonymize the users. This could perhaps be mitigated with some way for users to irreversibly transform the temporary tokens given by the government, but I'm not sure if that's possible while retaining the ability for endpoints to verify the tokens.

Overall, though, the sharing problem seems to me to be the biggest issue by far with this scheme. What makes a user an individual human? In current implementations, their unique government ID (perhaps made illegal to falsify) is used for this. But I can't see how individual humans can be distinguished in a privacy-preserving way.


My memory was that the tokens enter a pool and that neither the government nor the endpoint is able to trace which user used which token, and they're one-time-use tokens, so users are unable to share them.

Never really gave the system much thought at the time, since governments rarely want less data or to provide anonymous access — and the average person does not care about anonymous access; in fact, in my experience they want everything logged. The same approach could easily be applied to numerous functions such as border crossings, but governments want to track who is crossing instead of just checking an individual's authorization to enter.


Another possible issue, for any scheme that uses government-issued temporary tokens (even blinded tokens) that have an expiration date: the government could stop issuing tokens to a user at any time, preventing them from accessing any gated sites after their current tokens expire.


Agree. More to the point, once some attributes are verified, for example age, it shouldn't require the government to reverify. That said, some attributes should, if needed, require real-time verification, for example whether an individual is legally allowed to drive, enter a country, etc.


Wouldn't caching verification data require the endpoints to hold user sessions? Thus, if the government withheld verification services for a user, wouldn't it prevent that user from creating new independent sessions on their current endpoints, as well as preventing that user from verifying on new endpoints?


Not 100% sure I understand you, but I believe this comment might address your question:

https://news.ycombinator.com/item?id=33133749

Ideally, endpoints would not even know the specific authentication provider, only that the authentication came from an authentication service that's authorized to confirm a given identity and related attributes; for example, being over X age is a binary attribute, and numerous entities should be able to authenticate that and have it be honored by a jurisdiction, similar to how passports are uniform.


Have there been instances of governments providing cryptographic identification services yet? That seems like a hugely beneficial thing they could do in this space (with the side benefit of social websites that aren't overrun with children). The only part that would be challenging is preventing the government authentication servers from knowing which websites you're visiting.


> (with the side benefit of social websites that aren't overrun with children)

I've commonly seen this cited as a benefit of uncircumventable age gates, but I think that most proposed systems I've seen are far stricter than they need to be for this: they have a hard line exactly at 18 or 21, and allow neither read nor write access to gated-off sites. Should a 17-year-old teenager be able to access no more social media than a 7-year-old child can? And must read access always be gated, even though gating write access would be sufficient to prevent this overrunning? Teenagers have to learn how to operate in the adult world at some point, and I worry that pushing so much access to age 18 all at once could have major unintended consequences.


Those are probably a reflection of legal issues. 18 year olds are clear from child protection laws, and 21 year olds can be shown ads for alcohol (and maybe tobacco, I think that's 21 in many states now).

As long as there is potential, significant liability attached to showing under-18s certain content, it's safer to just not show them anything questionable until they're 18.

I'm with you, I think a gradient would be developmentally better, but our legal system discourages that. I think the same thing exists across other parallels; people get a ton of options dumped on them at 18. It would probably be better to phase that in over time.


This is a fair point. Primarily the reason that children can dominate a website is that they have so much more time to participate (and probably are more likely to contribute in a low-effort way), so putting the mark at 16 or 17 would still significantly reduce the issue. I think having places that are certified <12 would be great for kids and could be over-moderated without great consequence. A spectrum of age gates would make a very cool internet, and I'd extend that as far up as ages go (35-55? Centenarians only? Why not?).


You don’t have to worry. Teenagers will circumvent any filter or restriction. When I was at school we circumvented the school filter and did whatever we liked on the network. Perhaps there’s a silver lining: training the next generation to overcome filters develops a certain level of technical proficiency.


Google and Apple are both implementing an ISO standard that will allow you to prove you are over 21, 18, etc. It’s just a super-verified boolean. The tech can also provide much more, but you can think of it like OAuth scopes: the user is shown what will be shared from a set of discrete properties.
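
For flavor, here's a rough sketch of how such a "verified boolean" can be disclosed without revealing the other document fields, using salted hashes (the general idea behind mdoc/SD-JWT-style selective disclosure; all field names and the elided issuer signature are my own simplifications, not the actual ISO encoding):

```python
# Salted-hash selective disclosure sketch. The issuer signs hashes of
# salted claims; the holder reveals only the "age_over_18" claim plus
# its salt; the verifier checks the hash appears in the signed list.
import hashlib
import secrets

def claim_hash(name, value, salt):
    return hashlib.sha256(f"{salt}:{name}={value}".encode()).hexdigest()

# Issuer: salt every claim, then sign the list of hashes (signature
# elided here; assume any standard signature over `signed_hashes`)
claims = {"name": "Alice Example", "birth_date": "1990-01-01",
          "age_over_18": True}
salts = {k: secrets.token_hex(8) for k in claims}
signed_hashes = sorted(claim_hash(k, v, salts[k]) for k, v in claims.items())

# Holder: disclose only the boolean, together with its salt
disclosure = ("age_over_18", True, salts["age_over_18"])

# Verifier: recompute the hash; the other claims stay hidden behind
# their unrevealed salts
name, value, salt = disclosure
assert claim_hash(name, value, salt) in signed_hashes
```

The verifier still needs to check the issuer's signature over `signed_hashes`; only the disclosure mechanics are shown here.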


There is no way those properties won't include PII 99.999% of the time. Your option will be to identify yourself or not access the site.


The option in "Sign In With Apple ID" to provide a randomized email address that proxies to your own, alongside the boolean, would do a lot to mitigate this. Of course, Apple can associate you with your usage of the site (and it's worth keeping a close eye if they start to pivot more towards advertising as a revenue stream), but the site itself would be unable to.


That is a yet-to-be-solved problem. But people are working to make sure that doesn't happen.


Trying to post everything in same spot, see this comment for more information:

https://news.ycombinator.com/item?id=33133010


That would require some privacy-focused government infrastructure. The reality is that pretty much all governments these days are all in on privacy invasion and monitoring.


You want yt-dlp on the latest version since YouTube breaks/throttles quite often otherwise. Try yt-dlp --update. (You could also try a custom repository for your package manager, though the best solution in this case is a rolling release distribution.)


Reddit added this as well within the last week. You can no longer view 18+ subreddits without being logged in.


old.reddit still allows this with just an “I’m sure” prompt


We should legalize the use of fake IDs with private organizations as long as there is no intention to commit a crime.

Only the government should need real IDs.

For example, if I am actually over 18 and intend to watch a video intended for over-18s, I should be able to use a fake ID for it.


Really strange to me that, given it's obviously possible to securely verify someone's age without requiring personally identifiable information at the endpoint requiring verification (or for the system verifying attributes to know the endpoint requesting verification), anyone would even want that information flowing through their systems — unless they wanted the information for other reasons. To me, unless the technological features I just mentioned are baked into the law, this feels like it's either intentional or the laws are being crafted by people that have no clue what they're doing.

More generally speaking, governments should start seeing the hoarding of data by itself and others as a national security threat, open source any government systems — and offer bounties for anyone able to maintain the objectives of a system while reducing the system’s access to sensitive data.

______________

* EDIT: 100% sure that it's technically possible to have an endpoint anonymously verify age and a verification system do so without knowing the endpoint requesting the verification, but I'm unable to find a technical explanation of how this might be done using existing technology; I might be wrong, but I believe it involved a combination of differential privacy and zero-knowledge proofs. Does anyone have a link to a detailed explanation of how to implement such a system?


It's the last one. It's just very poorly drafted legislation that hasn't been thought through. This is our one-party rule in action. With no meaningful opposition, laws are not scrutinized and unintended consequences are never considered.


Just look at AB5 for a perfect example.


There are 29 members of the state legislature who are not in the "one party" and exactly zero people voted against this bill at any stage. It passed 72-0 in one chamber and 33-0 in the other.


Arguably, "one party" in this sense could mean "rulers."


It could mean that, if you're juvenile or just stupid. It's a democratically-elected, bicameral, proportionally representative legislature with an independent executive elected in a non-partisan race.


It’s poorly thought out legislation, but it’s not like the GOP is known for writing good law.


IRMA [1] comes to mind. From their site:

With IRMA it is easy to log in and make yourself known, by disclosing only relevant attributes of yourself. For instance, in order to watch a certain movie online, you prove that you are older than 16, and nothing else.

Their docs[2] are pretty good.

[1] https://irma.app/?lang=en [2] https://irma.app/docs/what-is-irma/


Thanks, I agree the system provides privacy-protecting attribute verification (age, legal voter, citizen, income, etc.) — but after quickly reviewing it, it does not appear to support doing so anonymously, in a way where only the user knows both their own identity and the endpoint's identity, and neither the endpoint nor the government knows both, even if they or an attacker have access to the security tokens from the endpoint's and government's servers.

Did I miss something?

Another user suggested the Privacy Pass system, which I have reviewed before and am currently reviewing to see if it fits the system profile I am describing:

https://news.ycombinator.com/item?id=33133758


You just have to be comfortable with a single application on your phone holding all your details for you, it having implemented the protocol perfectly, and having amazing protections around the handling of the third-party request phases.

And this is all to improve the privacy of children?


Simple questions: CD or vinyl? That will identify everyone old enough to remember. Then the classic: how do you tame a horse in Minecraft? That identifies everyone under 18.


I understand the trick, but I think we all know people are clever, especially when they're motivated; bypassing systems like that would likely be trivial and might even result in the endpoints requiring age verification covertly helping users bypass them.


The goal isn't to actually block children. The goal is to satisfy the corporation's legal obligations: whatever scheme requires the least effort and the least information from the applicant, but still meets the minimum standard.

Better idea: webcam verification of government ID. Client-side/in-browser AI software matches only name and birthday before passing OK to the website. Zero information is actually transmitted/stored. Patent please.


What is to stop someone from modifying chromium to always report "yes" in that case? Or is the idea that we will add another widevine style opaque binary blob to our system that handles this with an encrypted response that can't be examined by the user?


Nothing would stop someone from hacking the system if everything was client-side.


Not following: how would the system know the document was authentic without transmitting data? I understand that client-side it would be easy to cross-reference the ID face and live face and scan the barcode to extract data attributes such as age. Also worth noting that, assuming the individual has a smartphone, scanning the document would likely provide no additional security, and there are numerous apps that provide real-time end-user verification using biometrics.


Perfection is always the enemy of "good enough to meet legal obligations". Compare the minimal ID checks for buying cigarettes ... or handguns.


This is not about perfection; as is, unless I'm missing something, your suggested solution does not solve the problem.

There are numerous ways for a system to authenticate a user without the endpoint requesting the authentication knowing who the user is, and without the system verifying the user knowing which endpoint is requesting the authentication; only the user knows both who they are and the endpoint they want access to.
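One classic construction with roughly this property is a blind signature: the issuer (say, a government) signs an age token without seeing it, and the website verifies the signature without learning who the user is. A toy sketch in Python, with tiny illustrative RSA numbers (not a production protocol; real deployments use something like Privacy Pass or anonymous credentials):

```python
import secrets
from math import gcd

# Signer ("government") RSA keypair -- toy-sized primes, demonstration only.
p, q = 1009, 1013
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def blind(msg: int) -> tuple[int, int]:
    """User blinds the token before sending it to the signer."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    return (msg * pow(r, e, n)) % n, r

def sign_blinded(blinded: int) -> int:
    """Signer attests the attribute without seeing the underlying token."""
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    """User strips the blinding factor, leaving an ordinary RSA signature."""
    return (blind_sig * pow(r, -1, n)) % n

def verify(msg: int, sig: int) -> bool:
    """Any website can check the signature with the public key (n, e) alone."""
    return pow(sig, e, n) == msg % n

token = 42  # stand-in for a hashed "age >= 18" attribute token
blinded, r = blind(token)
sig = unblind(sign_blinded(blinded), r)
print(verify(token, sig))
```

The signer never sees `token`, and the verifier never sees the user's identity: only the user holds both ends.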


If you attempted to set it to its dirtiest mode, the game Leisure Suit Larry would ask you a series of sports and pop culture questions from the 1970s in order to verify you were an adult (in the 1980s). Of course, kids just looked up the answers and swapped them on crib notes on the playground.


Somebody who was 10 when horses were added to minecraft is 19 now.

But minecraft has always been an "all ages" game, not a kids game. And it was first advertised/promoted on 4chan.


> CD or vinyl?

As someone who prefers tape I feel discriminated against.


You are. Tape is evil.


If having shock-resistant playback, analog audio, and easy recording is evil then call me Charles Manson, 'cause y'all can keep your record scratches and CD skips.


The Right Way to do this is to develop a standard for web pages to identify themselves as not suitable for children, and then build clients with a setting that locks out those pages (which could be disabled only with a passcode or some such security measure). Then it becomes the responsibility of parents to ensure that the lockout is enabled on the devices their children have access to.
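A minimal sketch of the client-side lockout, assuming a hypothetical `Content-Rating` response header that sites would set (not an existing standard; the closest real-world precedent is the RTA meta label):

```python
# Toy client-side lockout: refuse to render pages that declare themselves
# adult-only unless the passcode-protected setting allows it.
# "Content-Rating" is a hypothetical header name used for illustration.
from dataclasses import dataclass

@dataclass
class BrowserSettings:
    allow_adult_content: bool = False  # toggled only with a parent passcode

def should_render(headers: dict[str, str], settings: BrowserSettings) -> bool:
    rating = headers.get("Content-Rating", "unrated").lower()
    if rating == "adult" and not settings.allow_adult_content:
        return False
    return True

child_profile = BrowserSettings(allow_adult_content=False)
print(should_render({"Content-Rating": "adult"}, child_profile))  # blocked
print(should_render({}, child_profile))  # unrated pages render by default
```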


Reverse that.

All things should be considered UNRATED (adult) by default.

Adults that want to should set a flag in the OS account, which should be passed along (or optionally also set as a child account) by browsing software. In child safe mode such software would then follow a set of local policy decisions and refuse to operate with content not declared child safe. Such an account feature might also forward the data to other accounts for review (parent, guardian, teacher, etc).

THEN, the enforcement: sites which incorrectly claim a given rating should get hit with the charges that enables.


Agreed; the ISP is paid by an adult, an adult paid for the computer/TV/etc., and it's the adult who decides when said child can operate such equipment. Last I'm aware, said adults don't let children watch TV completely unrestricted (they lock adult content). It should be up to the parent/guardian to oversee what the kid is doing and lock them out, with younger kids getting more supervised attention. Keep our IDs/PII/phone numbers out of browsing the web.

We could go a step further by banning highly accurate fingerprinting and making things nice and generic, hopefully making ads as generic as the wonderful as-seen-on-TV ads.


I really like this. An industry enforced standard with consequences would stop most problems.


That would be reasonable and easily workable, but...

...daunting for millions of parents, who'd imagine (often rightly) that their kids, aided by friends & online instructions & such, would all-too-easily bypass the parental controls.

...disempowering for politicians lusting after attention & votes, who think that moral panics and big-government-control-freak "solutions" are their best friends.


No, the right way to do this is to stop trying to use legislation to do the jobs of parents. Stop expecting the world to nerf itself so you can be a lazier parent.


This was floated around in the UK for porn websites and was an absolute disaster for the government: https://theconversation.com/amp/why-age-verification-is-anot...


Germany did this for porn sites and there are 0 German porn sites anymore. At least 0 that officially do business out of Germany.


In Germany you have to verify yourself with Google to download dating apps.

They only accept credit cards (debit cards being the norm in Europe), effectively forcing you to send a photo of your passport to Google.

Very nice. Bravo.

And Germans supposedly pride themselves on their privacy concerns.


Czechia is a very nice country and is right across the border.


Can you access American porn sites in Germany without verification?


Yes. But no one claimed German legislation would make any sense :)


Google provides two alternatives for age verification: a credit card payment and sending an electronic copy of a valid government-issued ID.

This explanation seems to be quite terse, "If you use a credit card, any temporary authorization will be fully refunded. If you use an ID, Google will delete the image after verifying your age."

Both methods would require me to provide them (or some other entity) new personal data. In the case of credit card payment, credit card number (and thus what bank I am using, maybe also what kind of card I have) and my home address.

My government-issued ID also has other information, like a photo of me, my signature, social security number, all my given names, and the card number. I wonder if they would accept an electronic copy with all of those covered.


Once again, well-intentioned but bumbling idiots make things worse:

https://www.nytimes.com/2021/11/09/opinion/democrats-blue-st...


"The law also mandates that businesses make it obvious to children when they are being monitored or location-tracked (by a parent, guardian, or any other consumer)"

This just horrifies me. Silicon Valley companies have already made it incredibly hard to parent children, with things like disappearing photos, and locked-down phones given even to young children.

Like I get 16 years and up needing stuff like that, but younger than that? Parents should be able to see what's going on in their kids' lives. These days 90% of their life is on the phone, which is the one place blocked from parents.

In the past children had a limited circle of people in contact with them, and for the most part parents knew who was talking to their children. These days? It's the entire world who has access.


If you live in a state that allows ballot measures, then get involved. Looking at what's happening in legislatures in the US, it is unlikely they will produce anything with a sensible approach to privacy.

California has had a couple of iterations of the CCPA, and so far it is not looking that good either, but it's worth trying.


What I don't like about news like this is how abstract it is. Sure, it could lead to age information leaking, but my imagination isn't working to figure out how that would impact me in a concrete way. Can anyone share possible bad outcomes?


It forces internet usage to be attached to real identities, in order to make sure you're not breaking the laws regarding children, with all the bad, bad consequences that can come from that.

And if you don't see how that can be an issue, you have lived a blessed life in which you or your family & friends were never persecuted by the violent authoritarian arms of government that exist in liberal democracies today, or discriminated against for minority attribute X; both happen even in Scandinavia today.


OK, so the theory is that age will be another signal in profiling users for $purpose. Thanks.


It won't 'just' be age; it will be hard-verified identities vs. vague fingerprints dependent on the user playing ball with their browser software and not using VPNs. Otherwise how would they actually know the user's age?


More that the process of verifying age leaks a lot of unrelated information.


Maybe we should just parent better. Seriously. If your kid is so young that they can’t figure out if they should or shouldn’t be watching a particular YouTube video on their own, they shouldn’t be on YouTube to begin with.


KYC can easily be done with provable privacy, but that's not how it'll be implemented, because SV and the people that work there do not value consent or privacy.


why can’t we just protect everyone’s privacy?


This bill accomplishes nothing for the children, but (much like the GDPR) imposes onerous recordkeeping on site owners, which just serves to keep the bar to entry high. Great news for MAANGA! Bad news for startups.



