I think in the short term, companies that develop these things will behave ethically without any oversight (by making machines "enjoy" what they are doing), because doing anything else would be inefficient or counterproductive. Why would you make an expensive thinking machine miserable? Humans who are happy are far more productive, and machines modeled on humans will be as well.
In the long term, if and when these things become mass-produced and cheap, people may want to do terrible things to them, in the same vein as animal torture. That may be when laws get put in place to protect them.
Ethically? I bet they will kill them thousands of times during development.
Suppose that at some stage you have a simulation of a brain that isn't quite there: it talks and sees, but its audio system doesn't work right. What do you do?
I don't see anything unethical about shutting it off. If nobody is emotionally attached to it, and it doesn't suffer when it is shut off, who is harmed by the shutdown?
As long as they do not care that they could be "shut off", I see nothing wrong with it. If they dislike that notion (like real humans do), then the possibility of shutdown would cause suffering and would be immoral/unethical to allow.
You're assuming that the machines will care about being shut off - we would probably design them so that they don't care about this, because this makes them easier to work with. And then it's no longer unethical.
I don't know. If you can simulate a brain, you can alter it. And if you can alter it, you can make it artificially happy, or simply remove the areas hosting willfulness, sleep, sexuality, or independent thought. That won't make them suitable for every task, but for some it would be more efficient.