Hacker News

I'm happy that they're putting money into developing better diagnostic methods for mental disorders, because, frankly, the existing tools are terrible.

After 9 years of misdiagnoses by several different doctors, earlier this year one astute doctor was able to determine that I had bipolar II and not unipolar depression or SAD. I had mixed feelings about it (ironic), but I was put on the correct medications and they have made a world of difference. In fact, I've read that it can take close to a decade, on average, to diagnose types of bipolar properly, because doctors and patients simply don't communicate well. Any tools that shorten that gap would benefit individuals and society as a whole.




I think there is a lot of basic research to be done. I hope this research amounts to real advances!


Strangely enough it may be that our attempts to create artificial people (AGIs) will permit us to understand what mental disorders are all about.


Strangely enough it may be ____________ will permit us to understand what mental disorders are all about.

( ) Hypnotism ( ) Psychoactive drugs ( ) Functional programming ( ) Meditation ( ) Phrenology


None of your examples entail building a mind. Also there's a kind of irony in my example, since as a branch of computer science it is independently funded and seemingly unrelated to mental health.


But you're guilty of assuming your conclusion.

You have a model of the brain. You do not have a model of the mind. You assume that by simulating a brain in sufficient detail, a simulated mind will emerge. I find that a big pill to swallow.


Nah, I don't assume it. Hence 'may' and 'attempts'. But the explanation is simple enough. If you can discover what it takes to code a mind then you're also going to learn some of the systemic faults that minds, qua minds, can develop. Of course, I could be wrong about this. It could be that mental diseases are all purely down to hardware issues (brain health). But it seems like a bad bet. For example: addiction, or at least specific types of addiction, depend on people's culture and choices.


Is it ethical to experiment on artificial people? Wouldn't the artificial person describe itself as experiencing consciousness?


Sure, it isn't ethical to experiment w/o consent. But in order to program an AGI, you need first to conjecture an explanation of how the mind works. One may be able to deduce from that explanation some of the ways in which minds can go wrong. Also, assuming consent is given, it will likely be easier (and less physically invasive) to observe the internal state of an artificial person.





