I do think one of the major weaknesses of “smart people” is that they tend to treat intelligence as the key aspect of basically everything. The reality, though, is that we have plenty of intelligence already. We know how to solve most of our problems. The challenges are much more about social dynamics and our will as a society to make things happen.
There’s a very big difference between knowing “how” to solve a problem in a broad sense, e.g. “if we shared more we could solve hunger”, and “how” to solve it in terms of developing discrete, detailed procedures that can be passed to actuators (humans, machines, institutions) and that account for any problems that may come up along the way.
Sure, there are some political problems where you have to convince people to comply. But consider a rich corporation building a building, one that contracts with other AI-driven corporations whenever possible; it could trivially surpass anyone doing it the old way by working out every non-physical task in a matter of minutes instead of hours/days/weeks, thanks to silicon’s superior compute and networking capabilities.
Even if we drop everything I’ve said above as hogwash, I think Vinge was talking about something a bit more directly intellectual, anyway: technological development. Sure, there are some empirical steps that inevitably take time, but I think it’s obvious why having 100,000 Einsteins in your basement would change the world.
100,000 Einsteins in your basement would be amazing. You'd have major breakthroughs in many fields. But at some point the gains will be marginal. All the problems solvable by sheer intellectual labor will run dry, and you'll be blocked on everything else.
An AI-driven corporation wouldn't be able to surpass anyone doing it the old way because they'd still have to wait for building permits and inspections.
Having worked with some very intelligent people, my own personal theory is that they forget they don't have expert-level knowledge in everything and end up making some pretty silly mistakes that far less smart people would never make. Whether this is hubris or being focused and ignoring "trivial" day-to-day matters is a question of personality.
Imagine a scenario where instead of AI, a billion dollar pill could make one person exponentially smarter and able to communicate with thousands of people per second.
That does not have the same appeal.
This provokes me to some musings on the theme.
We imagine superintelligence to be subservient, evenly distributed, and morally benign at least.
We don’t have a lot of basis for these assumptions.
What we imagine is that a superintelligence will act as a benevolent leader; a new oracle; the new god of humanity.
We are lonely and long to be freed of our burdens by servile labor, cured of our ills by a benevolent angel, and led to the promised land by an all-knowing god?
We imagine ourselves as the stewards of the planet but yearn for irrelevance in the shadow of a new and better steward.
In AI we are creating a new life form, one that will make humans obsolete and become our evolutionary legacy.
Perhaps this is the path of all technological intelligences?
Natural selection doesn’t magically stop applying to synthetic creatures, and human fitness for our environment is already plummeting with our prosperity.
As we replace labor with automation, we populate the world with our replacement, fertility rates drop, we live for the experience of living, and require yet more automation to carry the burdens we no longer deem worthy of our rarified attention.
I’m not sure any of this is objectively good, or bad. I kinda feel like it’s just the way of things, and I hope that our children, both natural and synthetic, will be better than we were.
As we prosper, will we have still fewer children? Will we seek more automation, companionship in benevolent and selfless synthetic intelligence, and more insulation from toil and strife, leading to yet more automation, prosperity, and childlessness?
Synthetic intelligence will probably not have to “take over”; it will merely fill the void we willingly abandon.
I suspect that in a thousand years, humans will be either primitive, or vanishingly rare. Or maybe just non-primitive humans will be rare, while humans returning to nature will proliferate prodigiously as we always have, assuming the environment is not too hostile to complex biological life.
A thousand years on is interesting. I'm guessing much of the earth will be kept as a kind of nature reserve for traditional humans, rather like we have reserves for lions and bears and the like today. Pure AI stuff may have moved to space in a Dyson-sphere-like setup. I'm not sure about enhanced humans and robots; maybe other areas of the planet similar to our normal urban areas. However it goes, it'll probably start playing out much sooner than in a thousand years.
Predictions are great, and useful in making good decisions today based on how you think the future might be affected… the danger lies in believing in your predictions.
As for me, I think many things, suppose a few, imagine lots, conjecture some, believe very little, and know, most of all, that I know nothing.
“Believing”, in anything, is a dangerous gambit. Knowing and believing are very distinct states of mind.
So you're saying that it's naive to suppose that everybody being much smarter than they are now would transform society, because any wide-scale societal change requires ongoing social cooperation between the many average-intelligence people society currently consists of?
Here’s a simpler way to put it: intelligence and social cooperation are not the same thing. Being good at math or science doesn’t mean you understand how to organize complex political groups, and never has.
People tend to think their special gift is what the world needs, and academically-minded smart people (by that I mean people that define their self-worth by intelligence level) are no different.
Yes, because you need to spend a lot of time doing social organization and thinking about it to get very good at it, just like you need to spend a lot of time doing math or science and thinking about it to get very good at it. And then you need to pick up patterns, respond well to unexpected situations and come up with creative solutions on top of that, which requires intelligence. If you look at the people who are the best at doing complex political organization, they'll probably all have above-average intelligence.
I don’t agree at all. Charismatic leaders tend to have both “in born” talent and experience gained over time. It’s not something that comes from sitting in a room and thinking about how to be a good leader.
Sure, some level of intelligence is required, which may be above average. But that is a necessary requirement, not a sufficient one. Raw intelligence is only useful to a certain extent here, and exceeding certain limits may actually be detrimental.
When it comes to "charismatic leaders" I like this quote from Frank Herbert:
"“I wrote the Dune series because I had this idea that charismatic leaders ought to come with a warning label on their forehead: "May be dangerous to your health." One of the most dangerous presidents we had in this century was John Kennedy because people said "Yes Sir Mr. Charismatic Leader what do we do next?" and we wound up in Vietnam. And I think probably the most valuable president of this century was Richard Nixon. Because he taught us to distrust government and he did it by example.”
Edit: Maybe what we really need to worry about is an AI developing charisma....
> Edit: Maybe what we really need to worry about is an AI developing charisma....
That is the most immediate worry, by a wide margin. It already seems dangerously charismatic, even before it has acquired any recognizable amount of "intelligence".
Not really a good example, honestly. Kennedy’s involvement in Vietnam was the culmination of the previous two decades of events (Korean War, Cuban Missile Crisis, Taiwan standoff, etc.), and not just a crusade he charismatically fooled everyone into joining. If anything, had Nixon won in 1960 (and defeated Kennedy), it’s possible that the war would have escalated more quickly.
Yeah - I really meant to only copy the first part of the quote - I agree that it is a bit unfair to Kennedy who I think did as much as anyone to stop the Cuban Missile Crisis becoming a hot war.
Someone with IQ 160 might have trouble empathizing with what IQ 100 people find convincing or compelling and not do that well with an average IQ 100 population. What if they were dealing with an average IQ 145 population that might be much closer to being on the same wavelength with them to begin with and tried to do social coordination now?
I guess it’s possible, but again I don’t think empathy and intelligence are correlated. Extremely intelligent people don’t seem any better at navigating the social spheres of high-intelligence spaces than regular people do in regular social spaces. If anything, they’re worse.
All of this is just an overvaluation of intelligence, in my opinion, and largely comes from arrogance.
The prisoner's dilemma is a well-known example of how rationality fails. Overcoming it requires something more than intelligence: a predisposition to cooperation, to trust, to faith. Some might say that is what separates Wisdom from Knowledge.
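For concreteness, here's a minimal sketch of the one-shot game (the payoff numbers are the usual textbook values, chosen only for illustration): defecting is each player's best response to every possible opponent move, yet mutual defection leaves both players worse off than mutual cooperation.

```python
# Minimal sketch of the one-shot prisoner's dilemma.
# Payoff numbers are standard illustrative values, not taken from the discussion above.
PAYOFFS = {
    # (my_move, their_move): (my_payoff, their_payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_move):
    """Move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Pure self-interested "rationality" says defect no matter what the other player does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection (1, 1) is worse for both than mutual cooperation (3, 3).
print(PAYOFFS[("defect", "defect")], "vs", PAYOFFS[("cooperate", "cooperate")])
```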
Why will it never be? If the adequate level of intelligence is something that roughly 0.1% of the populace naturally has, there seems to be a pretty big difference between that level of intelligence being stuck at 0.1% of the populace and it being available from virtual assistants that can be mass-produced and distributed to literally everyone on Earth.
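As a rough sanity check on that 0.1% figure: on a standard IQ scale (mean 100, SD 15) it corresponds to roughly three standard deviations above the mean, assuming a normal distribution. A quick sketch, with purely illustrative thresholds:

```python
from scipy.stats import norm

# Fraction of the population above a given IQ, assuming a normal distribution
# with mean 100 and standard deviation 15 (real-world tails deviate somewhat).
for iq in (130, 145, 160):
    fraction = norm.sf((iq - 100) / 15)
    print(f"IQ > {iq}: {fraction:.3%} of the population")

# IQ > 145 comes out to roughly 0.13%, in the same ballpark as the 0.1% figure.
```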
> An aligned AI is not AGI, or whatever they want to call it.
There are a few ways I can interpret that.
If you mean "alignment and competence are separate axies" then yes. That's well understood by the people running most of these labs. (Or at least, they know how to parrot the clichés stochastically :P)
If you mean "alignment precludes intelligence", then no.
Consider a divisive presidential election between Alice and Bob (no, this isn't a reference to the USA), each polling 50%: regardless of personal feelings about the candidates themselves, clearly the campaign teams are both competent and intelligent… yet each candidate is only aligned with 50% of the population.
Of any specific human to a nation? That's the example you replied to.
Of all the people of a nation to each other? Best we've done there is what we see in countries in normal times, with all the strife and struggles within.
We have yet to fully extend from nation to the world; the closest for that is the UN, which is even less in agreement with itself than are nations.
I think that's my point. The notion of maintaining an alignment, pro-human or whatever, for a replicable general AI doesn't seem to make sense. The traits of planning, learning, and goal setting don't seem concordant with maintaining an alignment. I think this discussion has veered too much toward anthropocentrism to be interesting, but alignment, however loosely defined here, isn't some constant for an individual through their life either. It can be imprecisely manipulated, especially in a population by outside forces, but it can't be directly controlled.
I think I understand, but let's check by rephrasing:
"Alignment" is only possible up to a vague approximation, and an entirely perfectly aligned with another entity would essentially be a shadow rather than a useful assistant because by being perfectly aligned the agent would act tired exactly when the person was tired, go shopping exactly when the human would, forget their keys exactly when the human would, respond exactly like the human to all ads and slogans, etc.?
I agree, though:
(1) this has already been observed: last year's OpenAI dev day had (IIRC) a story about a writer who fine-tuned a model on their Slack (?) messages; when they asked it to write something for them, the response was ~"sure, I'll get on it tomorrow".
(2) for many of those concerned with "solving alignment", it's sufficient for the agent to never try to kill everyone just to make more paperclips etc.