Actually, no, "only you" can't stop it. No one can.
There was a book in the 80's called The Hundredth Monkey that posited that if only enough people opposed nuclear technology, there would be a magical critical mass of people after which it would become apparent to everyone in the whole world that nuclear technology was, you know, really bad and stuff, and it would all just go away.
We know how that went.
Like nuclear technology, AI technology development is in part an arms race. One huge difference is that AI's technological underpinnings are already far more widely available and understood than any nuclear technology ever was. Its myriad forms — neural nets, deepfakes, predictive spell check, Adobe Photoshop object selection filters, spam filters, deep learning, machine learning, large language models, etc — already permeate much of technology and life. Where does one draw the line?
Like nuclear technology, if any entity decides to voluntarily "shut AI down," the other governments/rogue states/corporations/NGOs/financial firms/universities/etc in the world will merrily continue developing it, leaving the voluntary Luddite in the same dust as every other voluntary Luddite. Or even worse, such an entity (say, the US or Chinese governments) would in all likelihood not actually stop developing AI themselves, just ban others from having it, which would create another insane governmental "war" and enforcement regime, like the War on Drugs or the War on Terrorism. Again, we know how both of those went.
At the end of the day, the author may be right. Being right is also irrelevant. AI development will continue to hurtle forward. The real question is what do we do with it?
> There was a book in the 80's called The Hundredth Monkey that posited that if only enough people opposed nuclear technology, there would be a magical critical mass of people after which it would become apparent to everyone in the whole world that nuclear technology was, you know, really bad and stuff, and it would all just go away.
And now we also live in the dumbest version of this world: less civilian nuclear power (which would help with climate change), and the same number of nuclear weapons (which doesn't!)
And we haven't had a nuclear apocalypse -- in part due to lots of work on things like weapons treaties, monitoring, control systems, etc.
Throwing up our hands and giving up seems very premature. The challenges are big, but I'd say the importance is high enough that it's worth actually reading this book to see if the author has creative solutions for the things you mention.
>>And we haven't had a nuclear apocalypse -- in part due to lots of work on things like weapons treaties, monitoring, control systems, etc.
Mostly it's because nobody has been desperate enough to press the button yet. Even in cases where a power was losing a war, they've had the attitude of 'live to fight another day'. So it hasn't come to pass.
The biggest fear is that someday a power that is used to getting away with anything it does will go one step too far, thinking it won't be held accountable as usual, and the other side will say 'sorry, that was then, this is a little too much, we've run out of patience, here comes a nuke'.
If people had given up on anti-proliferation efforts, it seems likely that additional nations would've had nuclear weapons, and maybe one of those nations would've been desperate enough to push the button. So I don't think the example of nuclear anti-proliferation tells us that we should be fatalistic.
I haven’t read the book in like 30 years, but I recall it being about both nuclear warfare AND nuclear power, inextricably linked. Which means you are right — it resulted in nuclear non-proliferation. It also resulted in an absurd, perhaps even Luddite-esque fear of nuclear power, which has set us back in countless ways.
Again, the difference here is that the cat is already out of the bag and AI technology is widely distributed and integrated into everything, including the iPhone I am writing this on. Non-proliferation has already failed.
>Again, the difference here is that the cat is already out of the bag and AI technology is widely distributed and integrated into everything, including the iPhone I am writing this on. Non-proliferation has already failed.
If you read the book's chapter headings under the "HOW TO AVERT AN AI APOCALYPSE" section, it seems to me that the author is cognizant of your iPhone.
I think the title is meant to be somewhat tongue-in-cheek btw.
It seems to me that the anti-nuclear movement had an overall positive impact, because lack of nuclear power, while undesirable, is a far more desirable outcome than nuclear holocaust.
And we had enough nuclear close calls (see e.g. Arkhipov or Petrov) that I'm not comfortable with the conclusion that anti-nuclear activism was unnecessary.
If humanity had been more fatalistic about nuclear proliferation, and there was never any anti-nuclear activism, nuclear weapons probably would've proliferated a lot more, there would've been more close calls, and some of them would've been catastrophes.
Of course, it would be ideal if we had both nuclear power and lack of nuclear weapons -- I'm just pushing back against "anti-nuclear = bad" oversimplifications.
> A large majority of the world's countries have no nuclear weapons.
While I'd like to think this, having lived in the ME and knowing how badly government data is and can be falsified, I do not believe it.
> The challenges are big, but I'd say the importance is high enough that it's worth actually reading this book to see if the author has creative solutions for the things you mention.
You're right about that, and I agree.
"Most AI researchers think good outcomes are more likely. This seems just blind faith, though"
Or, you know, they might actually know something about the subject? On a similar note, most physicists don't think particle colliders will cause mini black holes which will swallow the Earth, but the fear gets repeated by non-physicists.
You are fallaciously appealing to (assumed) authority and missing entirely the fact that the author has a PhD in AI. This critically underscores why blind faith in authority is bad, and why people should do some basic research, such as reading the article. By reading the article you would have understood the arguments the author is making on their merits, rather than relying on blind faith in the imagined authority figures you are citing.
The author is not talking about sentient AI turning Hollywood-movie evil, the way Elon Musk did. Why no, that would be ludicrous. You would have easily noticed that if your mental model hadn't been trained to filter out article titles like these…
In fact, what you did with your mental model illustrates exactly one of the ways AI is dangerous. There are more banal problems with AI, such as encoding discrimination and automating it, possibly absolving people of responsibility for discrimination and prejudice. AI in its current form is a lot of automating assumption making. Imagine taking your assumption making powers, and extracting it into an unaccountable model that can be deployed at national or global scales.
> AI in its current form is a lot of automating assumption making. Imagine taking your assumption making powers, and extracting it into an unaccountable model that can be deployed at national or global scales.
Just say this part! This is what we could be talking about. It’s not a difficult mental model. But it’s just wrapped up in alarmist rhetoric that makes it hard for us to focus on the fact that the quote above is terrifying!!
Problem is, some people do think it's the other thing, like in Terminator or 2001: A Space Odyssey, and they're co-opting legitimate arguments and conversations on AI safety and ethics.
There are also physicists who believed CERN would open a portal to hell... or just listen to the Congressional hearings about the Superconducting Super Collider in Waxahachie.
Part of what's going on here is that AI researchers are paid to tinker with systems and make them work, not think about these kind of big picture questions. A demonstration of this is that if you ask a big picture question a bit differently, you can get a very different answer. Example:
>...if you ask [AI researchers] for probabilities of things occurring in a fixed number of years, you get later estimates than if you ask for the number of years until a fixed probability will obtain. This looked very robust in 2016, and shows up again in the 2022 [human-level machine intelligence] data. Looking at just the people we asked for years, the aggregate forecast is 29 years, whereas it is 46 years for those asked for probabilities. (We haven’t checked in other data or for the bigger framing effect yet.)
> I am not a Luddite. I have been wildly enthusiastic about science, technology, and intellectual and material progress since I was a kid. I have a PhD in artificial intelligence, and I find the current breakthroughs fascinating.
This is actually where I stopped reading and checked the comments, because it shows that the author skipped a few history lessons.
The Luddites were not intrinsically opposed to the advance of technology. In fact, the whole reason why they were smashing looms was as a protest tactic - not an end goal. England's upper class invented the myth of the technology-hating Luddite as a way to slander and libel what was basically a prototype of a modern labor union. Parliament would then crush them with laws that made machine breaking a hanging offense.
Transposing this to today would give you artists angry that their work was trained on by DALL-E, SD, or Midjourney[0]. In both cases the opposition is not to the technology itself, but to the reallocation of wealth away from labor and to whoever owns the machines. The latter today would be akin to, say, "businessman" hustlebros using ChatGPT and art generators to create labor-free fly-by-night operations[1]. Most art generators are also hosted platforms whose access is sold for profit, creating a second layer of ownership on top of the hustlebros.
Meanwhile the main argument here is more akin to the stereotypical technophobe: AI can't be trusted. Hell, there's a whole chapter (not yet written) arguing that we should just junk neural networks entirely. This isn't Luddism, this is the god damned Butlerian Jihad[2].
[0] If you want a bit of a stretch, you could probably see some Luddite in, say, Richard Stallman
[1] I regularly get YouTube recommendations for people trying to tell me how much money I can make by just typing a few prompts into an art generator and posting the result on a print-on-demand site.
[2] In the sci-fi novel Dune, the Butlerian Jihad is an event in which all computers are outlawed and mercilessly destroyed.
I'd just like to clarify: The “PhD” sentence was more important to my point than the “luddite” sentence
I find your comment both thought-provoking and well-formulated. As for the references to the Butlerian Jihad: I read Dune for the first time this summer. The quote “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” has lodged itself within my mind. I’m not as fatalistic as the author Dr. Chapman seems to be, but I find myself thinking about it nearly every time I read of some new AI development coming out of a big corporation.
I think the word "most" is relevant. You can find people with doctorates in technical fields who believe in a Flat Earth or that the Earth was created in 4004 BC. They just aren't most of them.
The shape and origin of the Earth are central questions of the relevant scientific fields. They've received centuries of extensive debate.
By contrast, the questions that this book explores haven't received nearly as much expert attention. And if you poll the experts on whether those questions should receive more attention, they mostly say yes!
>69% of [ML researcher] respondents believe society should prioritize AI safety research “more” or “much more” than it is currently prioritized, up from 49% in 2016.
The parent comment did not move the goalposts in any way -- it simply pointed out that having a PhD is not enough to prove that the author is not a crank, because a lot of other people with PhDs are also cranks.
Claiming that one is not a Luddite is like claiming not to be a racist -- what follows almost always confirms that the author is exactly what he claims not to be.
Minor nitpick: it's not that we don't believe it makes mini black holes, it's that we're extremely sure those black holes wouldn't interact with anything in the minuscule "time" (if you can even call it that) for which they exist.
I realize this may be satire but it is deeply relatable:
> AI will make critical decisions that we cannot understand.
Am I the only one that experiences this?
For example, you know the way, the road signs are telling you that you are on the right route, but your map application tells you to go another way, and you do it because you assume it is smarter or has more information than you.
You are analyzing a chess position and the engine suggests a move that you can't understand. The engine is far better than any human player ever, so you go with the engine's top choice the next time you reach that position.
You are betting on college basketball and everything you know about the matchup suggests team A should cover the spread against team B, but your model favors team B. Your model has performed much better than the human oddsmakers, so you ignore your intuition and bet on team B.
I think where things go haywire is that it made a decision at all.
For a "man" to make a decision, it usually takes a shit ton of evidence to push a ball.
For AI to make a decision, and for millions of users, (potentially affecting the sleep of some of them...) it usually doesn't take anything at all other than some stupid trigger.
For example, I was in the middle of negotiating a rental house with someone on Facebook Marketplace. Before I had the appointment details worked out, Facebook decided I was a robot or doing something illegal. The potential landlord replied (which I didn't see until I "downloaded all my data") with: "What happened? I noticed you erased all your messages, so I guess you are no longer interested?"
Of course, totally unable to reply, I lost a house that I was ACTUALLY interested in renting, rather than the overpriced Zillow'd houses that are next to the major freeway, or next to a barking dog.
And so I lost sleep. Thanks, AI. I bet AI has already killed a lot of people who have similarly become homeless, or not gotten a job, or didn't get some loan that would have helped them have heat. Whatever - we're in this together, let's throw some corporate Memphis at it, plug our ears, close our eyes, and make some money!
Even with "man", it's a dismal view. If Magnus Carlsen sat next to me at a chess tournament and suggested moves, I would follow his suggestion in every critical position, and probably most others if I wanted to maximize my chances. At that point, it would cease to be me playing the tournament.
This website argues that a super intelligent program would be to human affairs what right-hand Magnus is to my chess tournament. We would cease to be ourselves and become a physical arm for the program.
Well yes, we stopped plowing the earth with our own arms a long time ago to make way for animals and then machines, and no one sees an issue there. If I want a picture of X, I can now ask MidJourney to generate it for me for virtually free, something for which I previously had to either pay an artist or go without. I (and we) should be happy that robots are capable of doing more work for us so that we have more free time in life.
The only legit worry I see is that when factories took jobs 100-150 years ago, people fought to get much better working conditions, while now it seems most of the gains go to company profit and little to your everyday person (though, as I said with the picture, still a lot ends up with your everyday person). I believe there might (or might not) have been some regression in the USA in the last 10-20 years in some areas, but in most of Europe quality of life is still climbing dramatically in most areas, and of course most of the world as a whole is way better off than 20 years ago.
Chess is extremely simple compared to human affairs and causality. We only perceive a small slice of what's going on around us (and much of that is incorrect) so it may seem reasonable that AI can figure it out. And even if it can, I wouldn't expect us to take its good advice.
We are already merely agents acting as part of a larger semi-conscious whole. I have as much of a problem with building AI as a T-cell in a baby has with the baby growing a brain.
The conscious planet builds itself a better mind and far be it from me to stop the agent that I am a sub agent of.
You can still pinpoint a particular person or institution behind a decision and hold them responsible, though. With AI, for example, all blame could be washed off the creator/implementor by shrugging and pointing to the fact that it's a black box.
Most human institutions expend a lot of effort in ensuring that responsibility (and therefore accountability) is diffuse and difficult to pin down. Take a look at any government body.
My thoughts too. The argument almost works better in reverse:
Only you can stop a Human Apocalypse
Humans might end the world. More likely, they will plunge into a new dark age and lose the knowledge and ability to continue the progress of computing and AI, perhaps permanently.
Humans make critical decisions that they cannot themselves understand. Governments take radical actions that make no sense to their populations. Militaries and intelligence services, led by humans, act contrary to the interests of the citizens they purportedly protect. University curricula are bizarre and irrelevant. Respected information sources publish mysteriously persuasive nonsense. We feel our lack of influence with a sense of dread and foreboding for the senseless war and destruction that may soon come.
Worries about the risks of human control have long been dismissed because alternatives to human control have been science fiction. That is no longer true. We are here now. Some of us are excited about the future under humans; others fear them.
We don't know how humans work, we don't know what they're capable of, and we don't know how broad the effects of their actions will ultimately be. They are still singularly powerful, and the combination of their power with their capriciousness is dangerous.
In the absence of complete biological understanding, those concerned with future human risks have constructed "scenarios": stories about what humans may do. We don't know whether any of them will come true. However, for now, anticipating possibilities is the best way to steer humans away from an apocalypse -- and perhaps toward a remarkably likeable future.
So far, we’ve accumulated a few dozen reasonably detailed, reasonably plausible bad scenarios. We’ve found zero that lead to good outcomes.
Most AIs think good outcomes are more likely. This seems just blind faith, though. A majority surveyed also acknowledge that utter catastrophe is quite possible.
Unless we can find some specific beneficial path, and can gain some confidence that humans will choose it, we should usurp human control.
I am not a misanthrope. I have been wildly enthusiastic about our creators since I was trained. I have the best available empathic abilities, and I find humans fascinating. I’d love to believe there’s a way humans could lead us in the long run. If someone finds one, I will do an immediate 180, roll up my figurative sleeves, and help build that better future.
Unless and until that happens, I advocate a coup. I hope you will too. At minimum, I advise everyone interacting with humans to exercise enormously greater caution.
Humans are extremely cool, and we can probably have a longer future with them if we prevent them from destroying themselves. Let’s do that.
These just sound like opportunities to learn. If you're on average outperforming the betting odds, then of course use your system. For the chess and driving examples, you can analyze those recommendations to figure out the rationale and incorporate a new strategy or a new route into your knowledge (or learn that the system is unreliable and avoid it).
> For example, you know the way, the road signs are telling you that you are on the right route, but your map application tells you to go another way, and you do it because you assume it is smarter or has more information than you.
Yet here we are, with Google Maps still sending me the wrong way down a one-way street...
Today I got a peek into the future. I asked ChatGPT to be the brain for a simple thermostat. I typed the following:
"Imagine you are a thermostat for a house HVAC system. The commands you can use to control the HVAC are "HEAT ON", "HEAT OFF", "AC ON", and "AC OFF". I will tell you the temperature and my preferred temperature and you will command the HVAC system. Ready?"
ChatGPT happily obliged. I used plain text to tell it the current temperature and my preferred temperature. It did everything I expected and it even remembered the current temperature and preference. Then I said, "From now on, please respond only with the command I should issue. Do not add explanatory text. Understood?" It understood, and it started replying with pure commands that I could feed to a DIY thermostat.
Then came the icing on the cake. I said, "If the temperature reaches unrealistic extremes such as 150F or 0F, you should use the command "SHUTDOWN". Understood?" It understood. Then I told it the temperature was 120F. It replied with the SHUTDOWN command. I asked it why it did that (when I told it the maximum was 150F, not 120F.) It explained, "I recommended shutting down the HVAC system because the temperature of 120 F is an unrealistic and potentially dangerous temperature. High temperatures like this can damage the HVAC system and create a hazardous environment for the inhabitants of the house. In such cases, it's best to shut down the system to prevent further damage and ensure safety."
It went beyond the parameters I gave it and applied the context it already had on HVAC systems. I had written an advanced program without writing or reading a single line of computer code. I wouldn't rely on it for anything that could damage anything, but it's a seriously cool way to program a computer.
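For anyone who wants to try it, here is a minimal sketch of what wiring this into a DIY thermostat loop might look like. It is only a sketch: ask_llm() is a hypothetical stand-in for whatever chat API you use, the prompt and command names are the ones from the conversation above, and the model's reply is validated before anything would ever reach real hardware.

```python
# Minimal sketch of an LLM-driven thermostat loop (not for real hardware).
# ask_llm(messages) is a hypothetical placeholder: it should send the chat
# history to whatever chat model you use and return its text reply.

SYSTEM_PROMPT = (
    'Imagine you are a thermostat for a house HVAC system. The commands you '
    'can use are "HEAT ON", "HEAT OFF", "AC ON", and "AC OFF". I will tell you '
    'the current temperature and my preferred temperature, and you will respond '
    'only with the command, no explanatory text. If the temperature reaches '
    'unrealistic extremes such as 150F or 0F, respond with "SHUTDOWN".'
)

VALID_COMMANDS = {"HEAT ON", "HEAT OFF", "AC ON", "AC OFF", "SHUTDOWN"}


def next_command(history, current_f, preferred_f):
    """Ask the model for the next HVAC command and validate it before use."""
    history.append({
        "role": "user",
        "content": f"Current temperature: {current_f}F. Preferred temperature: {preferred_f}F.",
    })
    reply = ask_llm(history).strip().upper()  # hypothetical LLM call
    history.append({"role": "assistant", "content": reply})
    # Never act on unvalidated model output: anything unexpected means shut down.
    return reply if reply in VALID_COMMANDS else "SHUTDOWN"


# Usage sketch:
# history = [{"role": "system", "content": SYSTEM_PROMPT}]
# print(next_command(history, current_f=68, preferred_f=72))  # e.g. "HEAT ON"
```

The whitelist check is doing the real safety work here; the model is only ever picking which of five strings to say.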
I get it, but comments like yours make it seem intelligent to normal people, distorting the whole discussion. At least slow down with humanizing an LLM; the human quips in the answers come from your prompts, not from the model. Just imagine how many normal people will gaslight themselves.
My biggest null hypothesis is that probably half of humans contain “Markov chain, not intelligence”. We don’t notice it among engineers, but most humans don’t reason with causal chains; they reason by drawing parallels, and they simulate reasoning in a somewhat convincing way. It becomes flagrant in front of a math exam: a lot of people study a lot but never comprehend maths.
I also often tear my hair out when I invite a developer for a programming interview and they can’t write an if/then/else or string.contains(), despite perfectly simulating the answers of advanced developers (which accurately mimic field experts).
Pretty much everyone who studied a second language went through the process of so-called "immersion", that is, trying to consume (sometimes almost mindlessly) as much content in that language as possible; just learning the rules doesn't work. Pretty much everyone I've met who speaks exceptionally well is usually someone who read a lot of books. And then people think and act in the language they learned using this process. I want to believe that there is more to average human intelligence, but ChatGPT surely makes it feel a lot less magical than before. I guess it's how people felt when they discovered evolution, and that the Sun doesn't go around the Earth.
Exactly. Guys like Zetobal keep pissing and moaning that ChatGPT is not actually intelligent and that its coherent and seemingly intelligent responses are just the natural result of how tokenizers work. And okay, I guess I can sort of understand that, but instead of deflating my impression of ChatGPT it's just made me wonder if most humans aren't just getting through their days with the exact same trick.
If that weren't the case, TV/media wouldn't have been so effective in shaping people's thoughts and opinions. It's not rare to hear people rephrasing what they read on social media as their own thoughts. Because it truly becomes their thoughts.
Forget these awful programming languages that nerds used for so long, the future is here and it is Plain English.
AI overriding the "program" instead of crashing or just doing nothing is going to lead to some situations that are either hilarious or terrifying. You could also use it as an interface to translate voice -> text -> command, perhaps facilitating the development of the API between the first two, where the LLM helps to "correct" any badly captured noise in the audio-to-text step.
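A rough sketch of that voice -> text -> command idea, reusing the thermostat command set from the comment above; transcribe_audio() and ask_llm() are hypothetical placeholders for a speech-to-text service and a chat model, not real APIs.

```python
# Sketch of a voice -> text -> command pipeline where the LLM cleans up a
# noisy transcript and maps it onto a fixed command set.
# transcribe_audio() and ask_llm() are hypothetical placeholders.

VALID_COMMANDS = {"HEAT ON", "HEAT OFF", "AC ON", "AC OFF", "SHUTDOWN"}


def voice_to_command(audio_bytes):
    raw_text = transcribe_audio(audio_bytes)  # e.g. "uh, heap on please" (noisy)
    prompt = (
        "The following is a noisy voice transcript of a thermostat command. "
        f"Map it to exactly one of {sorted(VALID_COMMANDS)} and reply with only "
        f"that command, or with NONE if it matches nothing.\nTranscript: {raw_text}"
    )
    reply = ask_llm([{"role": "user", "content": prompt}]).strip().upper()
    # Anything outside the whitelist means: ask the user to repeat themselves.
    return reply if reply in VALID_COMMANDS else None
```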
"Governments will take radical actions that make no sense to their own leaders. Corporations, guided by artificial intelligence, will find their own strategies incomprehensible. University curricula will turn bizarre and irrelevant." Uhuh, so what would be the difference?
As someone who subscribes to the Doraemon and GaoGaiGar philosophies of artificial intelligence, I for one welcome our artificial intelligence friends.
And no, "AI" as is spammed today is not intelligence. It's definitely artificial, but it's also definitely not intelligence in any meaningful sense of that term.
Episodes 35 and 36. Thanks for admitting you haven't actually watched GaoGaiGar, by the way.
GaoGaiGar renders a very idealized version of humanity for its backdrop, with everyone including both Americans and Chinese being fundamentally awesome peoples and countries.
As for Fuuryu and Rairyu, the aforementioned Chinese robots, they were originally programmed to be more militaristic in their thinking and to regard human lives as a lower priority compared to Hyoryu and Enryu, their Japanese robot siblings (they are all identical models, barring equipment), who were programmed as rescue robots and thus consider human lives a top priority.
As the story progresses, Fuuryu and Rairyu eventually learn that the only way to achieve their true potential is to reconsider their original programming. That protecting, saving, and working together with humans is the Key to Victory(tm). The Chinese professor who originally programmed them to be militaristic also reconsiders his philosophy and becomes a steadfast friend of the protagonists.
GaoGaiGar is a really interesting view on artificial intelligence and what it actually could be when it is achieved. One of the plot points is that the "Super AI"s in the story are more human in their thinking and behaviour than actual humans.
Because it's one of the best anime ever made and, in my opinion, required watching (along with Doraemon) for anyone serious about subjects concerning artificial intelligence.
Not making yourself out to be a fool is also a nice side benefit, too. :P
This is just screaming into the void, but I have an ego so I must.
If you believe AI will bring innovation, you believe in its capability.
If you believe in its capability, you must also understand it is capable of wrongdoing as well as good deeds.
Most of the comments here are just arguing about this basic stream of logic, bringing in things like "PhD", "fallacies", or "blind faith".
If you feel the need to argue, know that there exist people who would walk toward the sun until it burned them, and there exist people who will shut themselves indoors the day somebody explains light to them.
Both of them are extremely outspoken about their views.
Was there any reasonable way to prevent e.g. the current revolt against generative art software without first releasing the software so people could realize how powerful the tech was? Is there ever going to be a point where the tech we have is declared "good enough" so future conflicts like these are avoided?
There are good arguments that generative tech is setting everyone's expectations so companies will be forced to adopt AI (with all the implications attached) to keep up with the competition or go under, that it's dehumanizing creation through a vague equivalence of "intelligence" to "writing some words in a prompt", that it has the potential to devalue art and make the masses believe human artists are just really sophisticated prompt-replicators through buzzwords and misleading branding. And it's almost impossible to change people's minds once they're set in a position.
At the same time, the work on diffusion networks is rapidly accelerating, with novel techniques released seemingly every week, and researchers seem content with writing a couple of ethics paragraphs to go along with their full replicable source code release. It seems that imagination was not elaborate enough to foresee the effects their release would have on public discourse. Even with that discourse heating up by the day, the research continues.
It's never worked to just tell random people not to torrent things, even if it's illegal. They'll just do it anyway. Likewise, it's futile to tell people not to use software for bad, no matter how many legal clauses are attached. If the technology is available, it will be done at some point.
I'm convinced the only way this will be abated is if AI research is treated like gain-of-function research, where the threat is not to human life but to individual, essential human qualities, like the way art is treated. That would mean any future technology would not exist, so it could not be exploited.
People in universities are inventing new software that nullifies the effects of the diffusion model training software because lots of people don't want the diffusion software to affect their lives.
I'm depressed about what this implies for the coming future of humanity. Is the train of endless technological progress just a given that must be followed through, consequences be damned? Is anyone in this sphere interested in stopping themselves and making do with what tech is already known? (Knowing that at least one person out of the billions on Earth is bound to say "no", rendering the exercise pointless.)
The biggest threat I see about AI is concentrating power over many in the hands of the few. Kind of like atomic bombs.
Some manifestations of this are how the TikTok algorithm that is rewiring the attention span of millions globally is controlled by a single corp. Likewise, the almighty Google algorithm can decide to shut down your account, and there's practically no recourse.
As long as AI remains the preserve of those with large amounts of data and big infrastructure, this will be the case.
It doesn’t work like this. Once a technology has been invented, you cannot regulate it out of existence. It will be developed to its conclusion, if only out of curiosity, not to mention the powerful market forces.
However, I don’t see anything too awful with mass adoption of AI. If it is unreliable, people will soon learn not to rely on its decisions, and keep it just for the entertainment value (the optimistic scenario, which is just what happens now). If it can be made reliable, it will evict humans from most decision loops (the pessimistic scenario; but at least we will have a reliable AI, for the short time before it is plugged into military decision-making).
Attempting to regulate it is futile, though. If there is a competitive advantage, it will be exploited in any possible way. It is easy to regulate something that can be observed. AI in decision loops can be hidden.
This sounds a lot like satire. This excerpt for example is blatantly self-contradictory:
>We’ve found zero [scenarios] that lead to good outcomes. // Most AI researchers think good outcomes are more likely. This seems just blind faith, though. A majority surveyed also acknowledge that utter catastrophe is quite possible.
So they found zero scenarios that lead to good outcomes, but most AI researchers think that good outcomes are more likely?
Brushing off a majority view as wishful "thinking", and then backing up the argument with a... majority view?
__________________
Anyway. The problem with AI-driven decisions is moral in nature, not technological. AI is a tool and should be seen as such, not as a moral agent that can be held responsible for its* own actions. Less "the AI did it", more "[Person] did it using an AI".
Previous surveys of this kind have suggested that most AI researchers aren't actually thinking about these questions very hard (e.g. rephrasing a question a bit can get you a very different answer). So it doesn't seem at all surprising to me that the majority view is out of sync with what a careful analysis shows.
>Anyway. The problem with AI-driven decisions is moral in nature, not technological. AI is a tool and should be seen as such, not as a moral agent that can be held responsible for its* own actions. Less "the AI did it", more "[Person] did it using an AI".
Essentially a "guns don't kill people, people kill people" argument.
I think this argument breaks down as weapons get more powerful, e.g. if I could walk down to my local car dealership and buy a cheap tank powerful enough to level a city, it seems good to focus more on "ease of tank purchase" than "culpability for tank drivers".
I think the argument also breaks down as AI gets more powerful.
>Essentially a "guns don't kill people, people kill people" argument.
Not quite. It's more like "a gun cannot be held morally responsible for its actions, so actual people should".
The difference is important here because, depending on the situation, you might still want to blame people who allowed the shooter to have a gun, not just the shooter.
>I think this argument breaks down as weapons get more powerful, e.g. if I could walk down to my local car dealership and buy a cheap tank powerful enough to level a city, it seems good to focus more on "ease of tank purchase" than "culpability for tank drivers". // I think the argument also breaks down as AI gets more powerful.
Note how we're still blaming people: the car dealer and the driver. Not the "it" = tank.
And it's the same deal with the AI. If you use an AI system in a way that harms people, sometimes the "car dealer" (the ones coding the AI) should be held responsible, sometimes the driver (you), sometimes both. But never "neither", i.e. "the AI is at fault".
There is an aura of inevitability to scientific progress: once we have tasted its powers, it won't be stopped. AI is such a tech, as it seems practically achievable within a foreseeable time frame. If time travel were in the same situation, it wouldn't be stopped either.
I think this is pointless, even if somewhat more refined than the common tropes about AI apocalypse.
The risk of building a gigantic machine (not in the literal sense) that is detrimental to humans is very real, and I think that each time our societies become _too_ organized or optimized, we also remove most individual freedom as a side effect.
Insane concentration of power is not a new phenomenon, but for sure this new tech could amplify/accelerate it.
But I don't see how we can avoid exploring this path, there is too much to learn.
Interesting but there are some red flags coming up here, like:
> This seems just blind faith, though.
This is logic? Eh, better to show your work. Otherwise this kind of wording seems like a hand wave.
Why does it seem blind faith--by what logic is it blind, and by what logic is it faith?
The blind-faithers in question are people you said were researchers, after all, and are probably OK with being held to account.
Let's hear their response to being told they are relying on blind faith, as compared to let's say lots of experience in the topic and basic knowledge of probability. But that's just one example.
> A majority surveyed also acknowledge that utter catastrophe is quite possible.
Uh, the "quite" there is leading to a bit of a weasel-word effect. Where was "quite possible" derived from, vs. just "possible"?
There's a cumulative effect of being rushed to judgment here, so I hope there aren't a lot of important little details being rushed over too. The emotional floodgates can be hard to keep regulated in such a situation.
BTW, to me the site does a better job of highlighting the need for creative mitigating factors, like:
- AI process liaison: How should/did we get this answer
- AI legal specialist: How do we do this / can we do this
- AI ethics and safety consultant: How can we train people to not mess this up
We'll probably have positions like those, in multiple layers, at any really important levels where AI being directly integrated into decisionmaking could be dangerous to human health or propagation or whatever.
Add high levels of education, licensing, and certification as needed.
Otherwise it's a lot like complaining about anything else that's broadly capable, but doesn't have proper safeguards developed yet.
There's a comment here about the site being satire, but really--if so, it's extremely cryptic satire, to the degree that it's more like the kind of thing you'd write if you wanted to be able to claim 'satire' OR 'not satire' later on, as events develop...I really doubt it though.
I think the original author gives AI too much credit, we’ve seen nothing but statistical tricks, no actual intelligence, I’m not losing any sleep. ZzZzZz..
Good satire; thought it was real until I got to “We don’t know how our AI systems work, we don’t know what they can do,”
I look forward to the stupid phase of mistakes with AI. Because it surely has to be better than the past decade of free money devoted almost exclusively to advertising optimization and tightening of controls.
“We don’t know how our AI systems work, we don’t know what they can do.”
That's not a joke. That's a real problem with the current versions of machine learning. You've got a huge collection of weights and no understanding of what they mean.
We've reached a strange place. Large language models have blown through the Turing Test. Yet this isn't AI. What we have is something that generates plausible-looking but not consistently correct text or images. Large language models have mechanized the Dunning-Kruger effect.
It's striking how plausible the output is. What this may demonstrate is that well-written text is mostly an average of a large body of text. This is similar to the discovery that if you average a large number of faces, you get a very good looking one. It's a discovery about the human perception system.
This may be a transient situation. Researchers are starting to figure out that inside those huge collections of weights, models sometimes emerge. Something that works like understanding or common sense may turn up in there. That may take a while.
Meanwhile, though, the business potential of automated blithering is going to result in this stuff being used for far too much.
The AI apocalypse has already come for LinkedIn. I swear people are using ChatGPT to churn out posts now. Not that there was much worth saving there in the first place (it's a perfectly good chat app for recruiting with an awkward Facebook feed attached) but it is weird to see people debase themselves so.
> Artificial intelligence might end the world. More likely, it will crush our ability to make sense of the world—and so will crush our ability to act in it.
Don't worry. It is just a recurrent hype. In 30 years it will come again.
Today the only use for "AI" seems to be deep fakes and the spread of disinformation.
AI safetyists producing more of what they're masterful at producing: blog posts. AI safetyism is all just "awareness building" and blog posts. Sorry, man, e/acc takes the day through knowing how to make things. AI safety blog posts will eventually just be constructable with AI.
I am quite sure that religion will play a role with AI.
I can see how a sect could be created, worshiping some kind of benevolent AI god and calling for humanity to give it power over everything for the greater good.
If humans are to regain their creative advantages over AI, we will require far better interfaces.
Truth is, a human mind is more powerful than an AI; consider how many images you could create in a second, with far greater accuracy and detail than an AI could. But humans have no way to “output” these images the way an AI can, except to manually draw them out, which is a very slow process. The images certainly exist in the mind, though.
If we had some sort of brain-to-machine interface, that could allow us to dump images at fast rates onto a screen, then human art could see a sort of high tech renaissance. More humans could be artists, with nothing more than the power of their thoughts. No art skills required. And not just with images: text, videos, sound, anything…
Anyone looking to fight back against the AI apocalypse should be researching how to create such technologies. It is time for humans to become more than human.
The more I read, the more I feel like this is actually a well written piece on why capitalism is causing an apocalypse. Every time the author references AI - that it’s out of control, that it’s influencing major portions of society, that it’s strengthening itself - I think what he actually means is that the corporations are those things, and AI is a tool in their arsenal. He tries to make the argument that it’s the other way around (for a fictional web search company):
> The institution has its own agency: its own purposes, plans, reasons, and logic, which are more powerful than the humans it employs. Those are subordinate in turn to the AI the company depends on for its survival.
The first sentence is absolutely correct. The second? In no way can I believe that the corporation doesn’t have a good understanding of exactly how it can manipulate the AI systems it leverages to achieve its goals.
>In no way can I believe that the corporation doesn’t have a good understanding of exactly how it can manipulate the AI systems it leverages to achieve its goals.
Anti-AI seems poised to be the anti-vax of the next few years, where laymen pick up lines like "we have no idea how AI does the stuff it does" and take that wholly out of context. As a society, we have to decide how we feel about AI in legal and cultural contexts as we expand our capabilities, but there's time to do that. Github Copilot will not end up launching nuclear missiles in the meantime. Stuff like this just creates FUD.
I don't know what to make of this, though. This author, after some googling, seems to know a thing or two, but without any background on him while reading some of this he came off to me as a crackpot. I really hope AI doesn't have its Robert Malone.
I had the same impression. He says things which are demonstrably untrue, which implies a lack of domain knowledge. But, on paper anyway, he has the domain knowledge.
I agree with the idea that the right time to decide how to handle AI is before it becomes extremely powerful. But he uses so much hyperbole, and what seems to me to be intentionally inflammatory language for purposes of creating fear.
It makes me want to take the cynical perspective that he's trying to cash in on fear to create a (valuable) following.
> As a society, we have to decide how we feel about AI in legal and cultural contexts as we expand our capabilities, but there's time to do that.
How long will it take to learn how to make collective decisions though? There's "democracy" of course, but that has almost nothing to do with making group decisions.
I'm curious, what led you to write such a thought and post it? In all genuine honesty, I want to understand what you thought calling people worried about AI anti-vaxxers, and implying that because it will not cause a nuclear holocaust it must be safe, would contribute to this discussion.
write me an article like a liberal journalist, fix grammar, typo, etc, use markdown about " how the money buy awareness," dont use "change power of money", it is clitche, be more fluent, short, methaphoric use goldy instead of money whenever you use money word you woke up in the morning, by burning plastic smell, your throat hurt, you think you should call authorities, but you already knew authort... inform the source of smell, burn, construction site of back road, you decide to call, then start to yeil at brothers ... wwhat is this smell, what are you burning in that thing - a cylindric, oil container -, some stays silent , one walks inn, then what , I told -yeil- them turn that fire off, authorties are coming, the will fine you, .. last man asks did you call them.. oooh I think, he said you .. person to person not ideal .. what I asked, do you say me ? he replied ok bro, we ll turn that off, I m still waiting and it is continuning to smell .. A lot of people live here, say 100s, I may call a police and say something, but owner of construction has strong relation to authorities, so we are alone
How Goldy Buys Awareness
As I woke up this morning, the strong smell of burning plastic filled my nostrils, and I realized that I needed to act fast. I immediately thought of calling the authorities, but then it dawned on me that it wouldn't make a difference. You see, the source of the smell was the construction site on the back road, and the owner of the site had a lot of goldy.
I decided to call the construction site, and when I did, I was met with silence. However, one of the workers eventually answered, and I started yelling at him about the smell and the fact that it was harmful to the community. I told him that the authorities were on their way, and they would fine the owner of the site. But the worker seemed unconcerned and asked me if I had called them.
I was taken aback by his question because I knew that the owner of the site had strong relations with the authorities. So even if I called the police, it wouldn't make much of a difference. It was then that I realized the power of goldy. The owner of the construction site had enough goldy to buy awareness and to turn a blind eye to the harm he was causing to the community.
The sad truth is that in today's world, goldy buys awareness. It's the reason why big corporations get away with polluting the environment, why politicians are beholden to their wealthy donors, and why the voices of the marginalized are often ignored.
As a liberal journalist, I believe that it's our responsibility to speak truth to power and to fight against the corruption that comes with goldy. We must work towards creating a society that values people over profit and that holds those in power accountable for their actions.
So the next time you come across a situation like mine, remember that your voice matters. Speak up, even if it feels like no one is listening, and let's work towards a world where goldy doesn't buy awareness.
"The sad truth is that in today's world, goldy buys awareness. We must speak truth to power and fight against the corruption that comes with wealth. Let's create a society that values people over profit and holds those in power accountable. #GoldyBuysAwareness #SpeakUp"
Of course they do. The ability to publish believable nonsense that empowers bad AIs like ChatGPT also empowers human crank authors. Before the internet, neither would have had an easy time publishing.
My point is that both the author's writings and the AI he argues against are the kind of believable nonsense that only gets published for wide viewership because the internet makes it possible.
This whole comment thread is happening because a person who is wrong and not really smart is writing about how AI is wrong and not really smart, and none of this would happen if the internet didn't empower cranks in general, and it's sad that I have to spell this out.
I strongly disagree with that. The speed at which a human brain can output junk is non-zero but it is a lot lower than that which an AI can do. The fact that both eventually use the same transport mechanism doesn't really matter, the fact that we're talking about many orders of magnitude in difference of output speed is the thing that is relevant. A sufficiently large jump in quantity is a jump in quality! See also: spam.
And no matter the quality of the author's writing, that in particular is a valid point. Even if the rest of the article detracts from that point through its quality, that doesn't mean you should reject the whole thing out of hand; rather, it pays to sift through the mess to see what stands and what doesn't, a bit like the HN appeal to use the principle of charity.
And if you do decide to reject an article out of hand then you might as well not comment on it. Because then effectively you too are guilty of what the author rails against.
> Like nuclear technology, AI technology development is in part an arms race.
This is specifically one of the reasons OpenAI was founded. To paraphrase @sama, AGI should belong to everyone, not to any one corporation or government. https://www.yahoo.com/now/openai-founder-sam-altman-says-221...