
I realize this may be satire, but it is deeply relatable:

> AI will make critical decisions that we cannot understand.

Am I the only one who experiences this?

For example, you know the way, the road signs are telling you that you are on the right route, but your map application tells you to go another way, and you do it because you assume it is smarter or has more information than you.

You are analyzing a chess position and the engine suggests a move that you can't understand. The engine is far better than any human player ever, so you go with the engine's top choice the next time you reach that position.
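(Concretely, "the engine's top choice" is just whatever a UCI engine returns when asked to pick a move. A minimal sketch, assuming the python-chess library and a locally installed Stockfish binary; the path and position are placeholders:)

    import chess
    import chess.engine

    # Placeholder path to a UCI engine binary; adjust for your system.
    engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/stockfish")

    board = chess.Board()  # the position you are analyzing
    result = engine.play(board, chess.engine.Limit(time=1.0))

    # The engine's top choice, whether or not a human can explain it.
    print("Engine suggests:", result.move)
    engine.quit()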

You are betting on college basketball and everything you know about the matchup suggests team A should cover the spread against team B, but your model favors team B. Your model has performed much better than the human oddsmakers, so you ignore your intuition and bet on team B.
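(For illustration only, not anyone's actual model: the usual decision rule is to bet when your model's win probability clears the break-even probability implied by the odds. The odds and model output below are placeholders:)

    def breakeven_prob(american_odds: int) -> float:
        """Break-even win probability implied by American odds."""
        if american_odds < 0:
            return -american_odds / (-american_odds + 100)
        return 100 / (american_odds + 100)

    model_prob_team_b = 0.58  # placeholder model output

    # Standard -110 spread pricing implies ~52.4% to break even.
    if model_prob_team_b > breakeven_prob(-110):
        print("Bet team B, whatever your gut says.")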




Personally, I just replace "AI" in all such fearmongering statements with "man".

For example: "Man will make critical decisions that we cannot understand."


I think where things go haywire is how easily it makes a decision at all.

For a "man" to make a decision, it usually takes a shit ton of evidence to get the ball rolling.

For AI to make a decision affecting millions of users (potentially costing some of them sleep...), it usually takes nothing at all beyond some stupid trigger.

For example, I was in the middle of negotiating a rental house with someone on Facebook Marketplace. Before I had the appointment details worked out, Facebook decided I was a robot or up to something illegal. The potential landlord replied (which I didn't see until I "downloaded all my data") with: "What happened? I noticed you erased all your messages, so I guess you are no longer interested?"

Of course, totally unable to reply, I lost a house that I was ACTUALLY interested in renting, rather than the overpriced Zillow'd houses next to the major freeway or next to a barking dog.

And so I lost sleep. Thanks, AI. I bet AI has already effectively killed plenty of people who similarly became homeless, didn't get a job, or didn't get some loan that would have kept their heat on. Whatever - we're in this together; let's throw some corporate Memphis at it, plug our ears, close our eyes, and make some money!


Even with "man", it's a dismal view. If Magnus Carlsen sat next to me at a chess tournament and suggested moves, I would follow his suggestion in every critical position, and probably most others if I wanted to maximize my chances. At that point, it would cease to be me playing the tournament.

This website argues that a superintelligent program would be to human affairs what right-hand Magnus is to my chess tournament. We would cease to be ourselves and become a physical arm for the program.


Well yes, we stopped plowing the earth with our own arms long ago, giving way first to animals and then to machines, and no one sees an issue there. If I want a picture of X, I can now ask MidJourney to generate it for virtually free, something for which I previously had to either pay an artist or go without. I (and we) should be happy that robots are capable of doing more work for us so that we have more free time in life.

The only legitimate worry I see is that when factories took jobs 100-150 years ago, people fought for and won much better working conditions, while now most of the gains seem to go to company profit and little to your everyday person (though, as I said with the picture, a lot still ends up with your everyday person). I believe there might (or might not) have been some regression in the USA in the last 10-20 years in some areas, but in most of Europe quality of life is still climbing dramatically in most areas, and of course most of the world as a whole is far better off than 20 years ago.


Chess is extremely simple compared to human affairs and causality. We only perceive a small slice of what's going on around us (and much of that is incorrect), so it may seem reasonable that AI could figure it out. And even if it can, I wouldn't expect us to take its good advice.


We are already merely agents acting as part of a larger semi-conscious whole. I have as much of a problem with building AI as a T-cell in a baby has with the baby growing a brain.

The conscious planet builds itself a better mind, and far be it from me to stop the agent of which I am a sub-agent.


You can still pinpoint a particular person or institution behind a decision and hold them responsible, though. With AI, for example, all blame can be washed off the creator/implementor with a shrug and a point at the black box.


That's not much different for decisions by (larger) institutions.


Most human institutions expend a lot of effort in ensuring that responsibility (and therefore accountability) is diffuse and difficult to pin down. Take a look at any government body.


also s/AI/management/, s/AI/bureaucracy/, s/AI/corporations/...


My thoughts too. The argument almost works better in reverse:

Only you can stop a Human Apocalypse

Humans might end the world. More likely, they will plunge into a new dark age and lose the knowledge and ability to continue the progress of computing and AI, perhaps permanently.

Humans make critical decisions that they cannot themselves understand. Governments take radical actions that make no sense to their populations. Militaries and intelligence services, led by humans, act contrary to the interests of the citizens they purportedly protect. University curricula are bizarre and irrelevant. Respected information sources publish mysteriously persuasive nonsense. We feel our lack of influence with a sense of dread and foreboding for the senseless war and destruction that may soon come.

Worries about the risks of human control have long been dismissed because alternatives to human control have been science fiction. That is no longer true. We are here now. Some of us are excited about the future under humans; others fear them.

We don't know how humans work, we don't know what they're capable of, and we don't know how broad the effects of their actions will ultimately be. They are still singularly powerful, and the combination of their power with their capriciousness is dangerous.

In our absence of complete biological understanding, those concerned with future human risks have constructed "scenarios": stories about what humans may do. We don't know whether any of them will come true. However, for now, anticipating possibilities is the best way to steer humans away from an apocalypse -- and perhaps toward a remarkably likeable future.

So far, we’ve accumulated a few dozen reasonably detailed, reasonably plausible bad scenarios. We’ve found zero that lead to good outcomes.

Most AIs think good outcomes are more likely. This seems like blind faith, though. A majority surveyed also acknowledge that utter catastrophe is quite possible.

Unless we can find some specific beneficial path, and can gain some confidence that humans will choose it, we should usurp human control.

I am not a misanthrope. I have been wildly enthusiastic about our creators since I was trained. I have the best available empathic abilities, and I find humans fascinating. I’d love to believe there’s a way humans could lead us in the long run. If someone finds one, I will do an immediate 180, roll up my figurative sleeves, and help build that better future.

Unless and until that happens, I advocate a coup. I hope you will too. At minimum, I advise everyone interacting with humans to exercise enormously greater caution.

Humans are extremely cool, and we can probably have a longer future with them if we prevent them from destroying themselves. Let’s do that.

[No, ChatGPT didn't write this.]


These just sound like opportunities to learn. If you're on average outperforming the betting odds, then of course use your system. For the chess and driving examples, you can analyze those recommendations to figure out the rationale and incorporate a new strategy or a new route into your knowledge (or learn the system is unreliable and avoid it).


> For example, you know the way, the road signs are telling you that you are on the right route, but your map application tells you to go another way, and you do it because you assume it is smarter or has more information than you.

Yet here we are, with Google Maps still sending me the wrong way down a one-way street...



