I tend to hold with Anne Foerst, the theologian who worked on the ethical side of MIT's robotics projects back in the '90s. http://www.cs.sbu.edu/afoerst/
If it is relatable and seems to have human-like consciousness, we have to treat it like any other human, because consciousness is an empirically meaningless term: we can't verify it in other people either, so how something seems to us is all we ever have to go on.