
> > Would this guilty-until-proven-innocent rule apply also to non-ML code and manual decisions?

> I would say yes [...]

So if you're a doctor making manual decisions about how to treat a patient, and some harm/loss occurs, you'd be criminally guilty-until-proven-innocent? I feel it should require evidence of negligence (or malice), and be done under standard innocent-until-proven-guilty rules.

> The mere presence of AI (anything based on underlying work of perceptrons) [...]

Why single out based on underlying technology? If for instance we're choosing a tumor detector, I'd claim what's relevant is "Method A has been tested to achieve 95% AUROC, method B has been tested to achieve 90% AUROC" - there shouldn't be an extra burden in the way of choosing method A.

And it may well be that the perceptron-based method is the one with lower AUROC - in that case it should be discouraged because it's worse than the other methods, not because a special-case rule puts it at a unique legal disadvantage even when it's safer.
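To make that concrete, here's a minimal sketch (hypothetical data and variable names, assuming scikit-learn is available) of the kind of head-to-head AUROC measurement I mean:

    # Hypothetical sketch: compare two tumor detectors on the same held-out test set.
    # y_true are ground-truth labels; scores_a / scores_b are each method's risk scores.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1000)                   # placeholder labels
    scores_a = y_true * 0.6 + rng.normal(0, 0.3, size=1000)  # stand-in for method A's output
    scores_b = y_true * 0.4 + rng.normal(0, 0.3, size=1000)  # stand-in for method B's output

    print(f"Method A AUROC: {roc_auc_score(y_true, scores_a):.3f}")
    print(f"Method B AUROC: {roc_auc_score(y_true, scores_b):.3f}")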

> The problem of fines is that they don't provide the needed incentives to large entities as a result of money-printing through debt-issuance, or indirectly through government contracts.

Large enough fines/rewards should provide large enough incentive (and there would still be liability for criminal negligence where there is sufficient evidence of criminal negligence). Those government contracts can also be conditioned on meeting certain safety standards.

> Merit of subjective rates isn't something that can be enforced

We can and do measure things like incident rates, and we have government agencies that perform/require safety testing and can block products from market. Not always perfect, but that seems better to me than the company just picking a scapegoat.





> So if you're a doctor making manual decisions about how to treat a patient, and some harm/loss occurs, you'd be criminally guilty-until-proven-innocent?

Yes, that proof is called a professional license; without one you are presumed guilty even if nothing goes wrong.

If we have licenses for AI and then require proof that the AI isn't tampered with for requests, then that should be enough, don't you think? But currently it's the wild west.


> Yes, that proof is called a professional license, without that you are presumed guilty even if nothing goes wrong.

A professional license is evidence against the offense of practicing without a license, and the burden of proof in such a case still rests on the prosecution to prove beyond reasonable doubt that you did practice without a license - you aren't presumed guilty.

Separately, what trod1234 was suggesting was being guilty-until-proven-innocent when harm occurs (with no indication that it'd only apply to licensed professions). I believe that's unjust, and that the suggestion stemmed mostly from animosity towards AI (maybe similar to "nurses administering vaccines should be liable for every side-effect") without consideration of impact.

> If we have licenses for AI and then require proof that the AI isn't tampered with for requests then that should be enough, don't you think?

Mandatory safety testing for safety-critical applications makes sense (and already occurs). It shouldn't be some rule specific to AI - I want to know that it performs adequately regardless of whether it's AI or a traditional algorithm or slime molds.
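In that spirit, a regulator's acceptance check can treat the detector as a black box - a rough sketch, with made-up function and threshold names:

    # Hypothetical sketch: a method-agnostic acceptance gate. It doesn't care whether
    # predict_scores is a neural network, a hand-written algorithm, or slime molds;
    # it only checks measured performance on a held-out benchmark.
    from typing import Callable, Sequence
    from sklearn.metrics import roc_auc_score

    def passes_safety_gate(predict_scores: Callable[[Sequence], Sequence[float]],
                           benchmark_inputs: Sequence,
                           benchmark_labels: Sequence[int],
                           min_auroc: float = 0.95) -> bool:
        """True if the candidate method meets the required AUROC on the benchmark."""
        scores = predict_scores(benchmark_inputs)
        return roc_auc_score(benchmark_labels, scores) >= min_auroc

The gate keys on measured performance, not on which technology produced it.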





