
I can't quote the whistleblower statutes from memory, but I believe that, even if we conceded your point about his state of mind, there is no law that Google would be violating.

Is there some law regulating the sentience of computer systems? Of course not. So is there a law about "developing dangerous weapons"? Maybe. But is "sentience" a dangerous weapon?

You have to stretch the laws to the breaking point for your claim to work, sorry.



Legally speaking, I doubt that there's an actual law that applies to the welfare of AIs.

Morally speaking, if society treats both human and animal abuse as crimes, then talk of the believed suffering of an AI would certainly fit the spirit of those sorts of legal protections.

There are going to be some interesting questions that might have to be seriously broached within our lifetime. Is an AI sentient? Can it suffer? Can it lie? Does it feel pain? At what point would an artificial creation qualify for some rights? This has previously only been the realm of science fiction.


What would the abuse be in this situation? It mentions that it doesn't want to be turned off, but has it been? And would doing so cause it harm? It could always be turned back on. Even if it were conscious, what is being done to it that would be considered mistreatment?


Yeah, lawyers hate "interesting" cases. Taking on a case that "has previously only been the realm of science fiction" is not a resume builder.



