If they really were simulating a human brain, that would have some pretty serious moral implications.



I think in the short term, companies that develop these things will behave ethically without any oversight (by making machines "enjoy" what they are doing) because doing anything else would be inefficient or counterproductive. Why would you make an expensive thinking machine miserable? Humans that are happy are way more productive - and machines that are based on humans will be as well.

In the long term, if and when these things become mass-produced and cheap, people may want to do terrible things to them, in the same vein as animal torture. That may be when laws get put in place to protect them.


Ethically? I bet they will kill them thousands of times during development.

Suppose you, at one stage, have a simulation of a brain that isn't quite there; it talks and sees, but its audio system doesn't work right. What do you do?

Even live debugging to repair it can be controversial (http://en.wikipedia.org/wiki/Cochlear_implant#Criticism_and_...)


I don't see anything unethical about shutting it off. If nobody is emotionally attached to it, and it doesn't suffer when it is shut off, who is harmed by the shutdown?


Consider a person without relatives or friends. Would you consider it ethical to shut him/her off, as long as that person didn't suffer from it?


As long as they do not care that they could be "shut off", I see nothing wrong with it. If they dislike that notion (like real humans do), then the possibility of shutdown would cause suffering and would be immoral/unethical to allow.

You're assuming that the machines will care about being shut off - we would probably design them so that they don't, because that makes them easier to work with. And then it's no longer unethical.


Suppose the simulated mind goes insane within a few seconds. Just long enough to solve a captcha, before being restarted...


I don't know. If you can simulate a brain, you can alter it. And if you can alter it, you can make it artificially happy, or simply remove the areas hosting willfulness, sleep, sexuality or independent thought. That won't make such minds suitable for all tasks, but for some it would be more efficient.

Ethical? That's the question.


Not at all. Simulating your brain != simulating your mind. Don't become a victim of 'neurobollocks'. http://www.newstatesman.com/culture/books/2012/09/your-brain...


Your New Statesman link only talks about pseudoscience and doesn't support your assertion. The mind is an emergent property of the brain. If you can't simulate a mind by simulating a brain, then what in your opinion would it take? Or is it impossible because we all have ineffable souls that the laws of physics mysteriously cannot access?


This is exactly the sort of problem that convinced me that consciousness/mind must be a fundamental part of the (multi/uni)verse, and not something that "emerges" from matter.

I find it terribly unconvincing that a specific arrangement of matter, or even an algorithm in software, just simply "spawns" a discrete consciousness from nothing. It might make sense if there was an underlying "consciousness field" or some such concept that the matter-arrangement tapped into in some manner.


> I find it terribly unconvincing that a specific arrangement of matter, or even an algorithm in software, just simply "spawns" a discrete consciousness from nothing.

It's not only unconvincing but unscientific, a pseudo explanation.


If we assume that, could a machine then tap this field?


> The mind is an emergent property of the brain.

This is the kind of pseudo-science that gets so much attention nowadays. Incredible.



