
Initially I didn’t like the drug addict analogy, but maybe it’s a good one, maybe you’re right. A drug addict did have a choice, and exercised it, and addiction was the consequence. With AI we’re at the first stage where we’re not yet addicted, but once we become addicted, it will be very very hard to wean ourselves off of it if/when we find out it’s doing more damage than good. We do have a choice right now, but you might be right that we’ll lose it soon.


> A drug addict did have a choice, and exercised it, and addiction was the consequence

This sort of disregards the large percentage of people who get hooked on drugs while recovering from injuries or medical issues. After marijuana, painkillers are the second most commonly abused illegal substances.


That’s fair, you’re totally right. But this is exposing why the drug addict analogy isn’t a great fit for the use of AI. I went along with it to find common ground, but I don’t believe the existence of drug addicts is evidence that we lack any choice when it comes to building, deploying, or using AI, like @Herval_freire implied. Do you? I think we do have the freedom to exercise caution, or to regulate the use of certain dangerous machines, if we want.


We lack choice in aggregate. As an individual you have the choice of whether to do a drug, but if I introduce highly addictive drugs to a population, it's almost a guarantee that some part of that population will get addicted. In that sense it is not a choice.

The same goes for AI: if I introduce AI to a population, some part of that population will certainly exploit and use it. Logically, in order to compete, others will have to start using it as well.


We can and do choose to regulate addictive drugs, and we can similarly choose to regulate AI if necessary, right?


The fact that corporations and businesses will utilize AI to its maximum extent is completely deterministic. This will happen.

Whether humanity in aggregate will launch a delayed immune response against AI via government regulation is not predictable. Regulating addictive drugs has a certain moral clarity, but regulating technology to prevent job loss is less clear-cut. We didn't regulate calculators for replacing humans, did we?

As I said, aspects of aggregate human behavior are deterministic. But I have to emphasize the "aspects" part, because other aspects are genuinely not predictable.



