Hacker News

Obviously not. I want legislation which imposes liability on OpenAI and similar companies if they actively market their products for use in safety-critical fields and their product doesn’t perform as advertised.

If a system is providing incorrect medical diagnoses, or denying services to protected classes due to biases in the training data, someone should be held accountable.


