Most commenters seem to confuse needs with value. They think that once we are all fed and housed there will be no more jobs.
At least you've taken it to the next step -- tangible things. Congrats.
You need to ask yourself a question: why do rich people create things for each other? A millionaire grandmother may make a scarf for her grandchild. A middle-aged man may take time to create a scrapbook for a friend who is retiring. A billionaire may go to yard sales and haggle over the price of a toaster.
It's not about needs, and it's not about things. It's about trade, creation, giving, social interaction. These things are not going anywhere, no matter how many AIs there are. In another 100 years we're all just going to be the equivalent of today's billionaires. That means having purpose and creating things, i.e., continuing in some form of semi-structured creation and trade process.
Look at it this way. Describe the life of an early 21st century person to somebody from 1000 BC. They will have no idea why you work. Guaranteed meals every now and then? Communicate with anybody on the planet? Water, sewage, and light for the dark -- all without effort? We live in an incredibly far-fetched place beyond dreams. There's no point in working. From their perspective.
Giving and social interaction (of at least some kinds) are not immediately at risk. What's at risk is the participation of some large groups of people in trade. Ultimately the value of a person's labor depends on scarcity, just as the trade value of things or experiences does, and we've seen how lack of scarcity makes trade value crash. (The value of the Humble Bundle and Cory Doctorow's books depends on "giving" more than "trade", and on the latter only because of information asymmetry.)
The storm coming is that when we have duplicable, cheap, general AI, the value of any act of production will plummet to the cost of copying a mind and running it. Actually, that's already the case, but the cost of producing a new mind is quite high now. :)
When people talk about automating everything we now do (or can do), pro-automation people often say "well, comparative advantage means that there will always be something for humans to do to live", but comparative advantage depends on scarcity of productive agents. If copying and running an AI to solve a problem is cheaper than employing an already existing human, humans are in trouble, economically.
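To illustrate the break-even arithmetic (with numbers I've invented entirely for the sketch), here is when hiring a human stops being competitive with spinning up another AI copy:

```python
# Toy break-even calculation; every figure here is a made-up assumption.
human_wage_per_hour = 15.00   # hypothetical subsistence-level wage
ai_copy_cost = 50.00          # hypothetical one-time cost to duplicate a trained mind
ai_compute_per_hour = 0.25    # hypothetical hardware + electricity cost

def cheaper_agent(task_hours):
    """Return which agent does a task of the given length more cheaply."""
    human_cost = human_wage_per_hour * task_hours
    ai_cost = ai_copy_cost + ai_compute_per_hour * task_hours
    return "human" if human_cost < ai_cost else "AI copy"

for hours in (1, 4, 8, 40):
    print(hours, "hours ->", cheaper_agent(hours))
# Beyond the break-even point (~3.4 hours with these numbers), every
# marginal task goes to a fresh AI copy, and there is no price floor
# under human labor above the copy's marginal cost.
```

The exact numbers don't matter; the point is that once minds are copiable, the supply of "productive agents" is no longer scarce, and the comparative-advantage argument loses its premise.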
The actual storm coming is most probably very different.
When we have a general AI, it is likely it will start to optimize the world according to its programming. One such optimization would be to code an even more efficient AI (we assume the AI is a better programmer than its human fathers and mothers). And so on, until FOOM, we have a super-intelligence, capable of taking over computers, convincing humans, building companies, taking over the means of production, inventing new means of production, and basically taking over the world.
And of course, it will be unstoppable.
Now let's just hope that the original such AI has no bugs, especially in its goal system, and let's hope further that its initial goals are exactly in line with humanity's. We wouldn't want Clippy to tile the solar system with paper clips. Or Smiley to do the same with molecular smileys (as a proxy for human happiness). Or HAL 9000 to do the same with ultra-efficient computing devices so it can solve the Riemann Hypothesis… which would have the unfortunate side effect of killing us all.
To the extent you don't believe in an intelligence explosion, Robin Hanson describes the kind of world we could have instead. I dare say it's not pretty.
I'm more-or-less in agreement with this (Hanson's ems are the kind of mind I was imagining, above), but I was assuming something of a best case, where it turns out that there are hard limits to mindlike complexity. If it turns out that there aren't, none of this will matter. I don't have any particular hope that Eliezer et al. will construct a bug-free, airtight Greater Wish.
I'd say a hard limit isn't the real criterion for rejecting the Intelligence Explosion hypothesis. There is a hard limit, but most likely well above human level: a human-made substrate could most certainly think way faster than evolution-made neurons, and the software could probably at least get rid of biases.
What really matters is whether intelligence is likely to explode or not. I think it would be really foolish to count on it not exploding, unless we're positive it won't. The stakes are too high.
As for MIRI (as it is called now) actually pulling it off, especially as they are now, I don't have high hopes either. However, they do look like the current best bet. And they do plan to grow (they need money). And maybe, maybe they will convince the other AI scientists to be wary of new powerful magic. For once. If not them, maybe the Future of Humanity Institute.
> there is a hard limit, but most likely well above human level
I suppose you're thinking of the speed of light, but I meant a somewhat more prosaic limit of having nowhere to go. If at some point an intelligence of level n can't do much better than chance at finding an improvement to n, intelligence growth might be very slow. I was wrong to refer to this limit as "hard", but it seems like a pretty plausible scenario to me. Our current software industry suffers from this problem. In this future, the most intelligent agents might be only a few standard deviations above the brightest current humans.
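To make that concrete, here's a toy simulation of the diminishing-returns scenario. The assumptions are entirely mine (a 1/level chance of finding the next improvement, fixed-size steps), chosen only to show the shape of the curve:

```python
import random

def run(generations=10_000, seed=0):
    """Levels gained when the chance of improving falls as 1/level."""
    rng = random.Random(seed)
    level = 100.0                        # start at a nominal "human level"
    for _ in range(generations):
        if rng.random() < 1.0 / level:   # improvements get rarer as level rises
            level += 1.0                 # ...and each one is only a fixed step
    return level

print(run())  # roughly 170 after 10,000 tries: sqrt-like growth, no FOOM
```

Under these assumptions growth goes like the square root of time. Hold the success probability constant instead and growth turns linear; make each improvement raise the success rate and you get the explosion. Which regime we're in is exactly the open question.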
> a human-made substrate could most certainly think way faster than evolution-made neurons, and the software could probably at least get rid of biases.
I don't expect either of those to produce much effective increase in intelligence.
Speed increases aren't really the same as intelligence. Speeding up a dog's brain by a million times will not produce a more intelligent dog, only a faster one. (I'm not knocking faster thinking, by the way; it's just not the same as being able to think more complex thoughts).
The most intelligent things people do tend not to be the product of conscious, rational thought, but of loading up your mind with a lot of details about the problem you want to solve and waiting for systems below conscious thought to deliver answers. Therefore, learning how to be more rational will help only incrementally if you're already fairly rational.
> In this future, the most intelligent agents might be only a few standard deviations above the brightest current humans.
Current methods of doing software are reaching their limits. That doesn't mean we have reached the limit yet. See Squeak, and more recently the Viewpoints Research Institute's work: <http://vpri.org/html/work/ifnct.htm>. When I see Frank (basically a personal computing suite in 20K lines, compilers included), I see proof that we just do software wrong. The actual limit of what humans can program is probably still far off.
Fast intelligence isn't a panacea, but still: imagine an Em thinking 10 times faster than meatware, on a personal computer, capable of copying itself over the network. That alone would be pretty dangerous. Now give it perfect dedication, and enough common sense to avoid the most obvious mistakes… Sure, we could stop it… with another such Em. And then Hanson is back.
Eurisko is more legend than history, at this point. As far as I know, the source code was never available to anyone except Lenat, and most of the claims about how effective it was at the beginning were sourced directly from Lenat, as well. The fact that we've never seen anything similarly small and effective (and that Lenat abandoned the entire approach in favor of Cyc) makes me wonder how much of what Eurisko is reported to have done is exaggeration.
Your scenario with the Em that's copiable and ten times faster than a human is exactly what I started this with. :)
I believe technologists continue to make the mistake of assuming everything is digital. Perhaps it's a part of the job.
"comparative advantage depends on scarcity of productive agents"
People buy things (trade) based on perceived value. That perception can be based on perceived scarcity, sentiment, anger, fear, love, happiness -- the list is very long. The scarcity argument only holds true across large industries and populations, and that's only because of the fungibility of money and the fact that material goods are (rather) easily categorized. It's not going to keep working. Money will stay fungible, but we're going to see an explosion, unlike anything the world has seen, in the kinds of things that carry perceived value.
Which makes sense if you think about it: as mankind has progressed, the diversity of the things we trade has increased. This trend will continue.
You guys are confusing theory and reality. Macro-economics fails us here. That's a shame. Looks like some folks have more work to do :)
It doesn't matter if everything is digital, if the creators of everything can be.
> The scarcity argument only holds true across large industries and populations
I don't think this is so, unless you're talking about handcrafted-by-genuine-Ukrainian-American underwater-woven baskets. For anything that can be copied (and that segment is growing way faster than any other), the price will fall at least to just above the cost, which is low indeed. This is true for services as well, once minds to do the service can be copied.
> Ultimately the value of a person's labor depends on scarcity, just as the trade value of things or experiences do, and we've seen how lack of scarcity makes trade value crash.
Apologies for being pedantic, but you're confusing cost--i.e., market price--with value. These are two different things, and neither depends on scarcity (I assume you mean the supply/demand balance of both commodities and labor itself) alone. There are many factors beyond scarcity that go into determining the value of a commodity (including the commodity of labor) to both society and economy, and they may or may not be included in determining the cost of said commodity--and the two calculations are not guaranteed to be equal.
From almost any perspective--economic, historical, philosophical, psychological, etc.--the cost and value of an item are rarely equal. This is especially critical in analyzing and theorizing on trade relations, as the value of a commodity to one agent is often an independent determining factor in calculating the cost of that commodity by another agent.
Scarcity creates a lower bound to cost. If the lower bound for every good and service drops so low that no human can make a living on the margin between material cost and finished cost, then humans will be out of work except for charity. There might well be boutique human-crafted items, but counting on that for the survival of 7-15 billion humans seems premature. :)
This may well be true, but this is still only a matter of cost, not value. Again, I apologize for being pedantic on the matter, but my point was that the value of a commodity, including labor, is a very different thing from the cost of that commodity.
I don't think that I am confusing things with value, but perhaps I did not explain my meaning well enough.
The problem with cheap AI and robots is not that people will not want to create things for each other and do stuff - sure they will still want to make stuff.
The problem is that hardly anyone will pay them for doing stuff, and therefore it will be impossible for them to make a living. So, the non-rich people (who don't own the robot corporations) would not be able to afford food and shelter (and they'll die as a consequence).
Unless government or benevolent fellows decide to help the destitute population and provide for them, which I don't really see happening, as the plutocrats would probably treat the poor just like white slave owners in the US used to treat their black slaves, or like people treat cows. Even though resources will be abundant and it won't take much effort or sacrifice to help us, they would rather spend their resources on building faster spaceships, riding space slides, or skiing on Mars, or something.
You're looking at things from the viewpoint of society. There your argument makes sense -- of course "people" will continue to work. But that argument, that we'll just continue to work on things higher up Maslow's hierarchy, doesn't hold for individuals who don't hold an ownership stake in these technological advances. Robot Mega Corp and its owners have no need for your grandmother's scarf, so what is she going to provide in trade for her more basic needs? Not having a job could be a real problem for her.
Incidentally, minimums, where earnings less than $X are increased to $X, are terrible and break incentives near the minimum. Much better are base incomes, guaranteed uniformly to even the wealthy.
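To make the incentive break concrete (all numbers hypothetical), compare what a worker actually nets under the two schemes:

```python
# A top-up minimum taxes the first dollars earned at effectively 100%;
# a base income paid to everyone leaves each earned dollar worth a dollar.
FLOOR = 20_000   # "earnings below $X are increased to $X"
GRANT = 20_000   # base income paid uniformly, even to the wealthy

def net_topup(earned):
    return max(earned, FLOOR)

def net_base_income(earned):
    return earned + GRANT

for earned in (0, 10_000, 20_000, 30_000):
    print(f"earned {earned:>6}: top-up {net_topup(earned):>6}, "
          f"base income {net_base_income(earned):>6}")
# Under the top-up, any work paying less than 20,000 changes nothing;
# under the base income there is no dead zone. (How the grant is funded
# by taxes is left out of this sketch entirely.)
```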
> Robot Mega Corp and its owners have no need for your grandmother's scarf, so what is she going to provide in trade for her more basic needs?
This is assuming that the robots are owned by a monopoly or a cartel. Such a case is an obvious candidate for government to step in and break them up.
On the other hand, if the robots are owned by companies that aggressively compete with each other, how much do goods cost when they're made by robots using robot-produced raw materials? The more humans we replace by robots, the lower the cost of goods will be and the easier it is for charity or government to provide them gratis to the public.
Yes, I don't think I phrased it well, but I didn't intend to assume that there is a single Robot Mega Corp.
> The more humans we replace by robots, the lower the cost of goods will be
Most physical goods have two parts: materials and labor. Even if robots were to bring the labor cost to approximately zero, we still have the material cost. So you're right that you don't necessarily have to own the robots to be successful, but you do have to own resources.
Even "resources" primarily only cost money because of the labor it takes to discover them and then remove them from the ground and refine them. In theory there are some things that are genuinely scarce (e.g. energy or specific elements) but so what? Most of them have substitutes, and the fewer labor costs have to be paid the more substitutes become viable. Have the robots mass produce wind turbines or solar panels out of low-scarcity materials, or mine space, etc.
Even in the most pessimistic case, where you have a valuable scarce resource with no substitutes, you still have something to tax, which will produce revenue that can be used to supply necessities to the public.
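As a sketch of that claim (with an invented cost structure and invented numbers), here's what happens to a good's price if every layer of labor, including the labor buried in its inputs, is automated away:

```python
# Hypothetical decomposition: a good's cost = its direct labor
# (shrinkable by automation) + scarcity rent on truly scarce inputs
# + the cost of intermediate inputs, which decompose the same way.
def cost(labor, rent, inputs, automation):
    return labor * (1 - automation) + rent + sum(
        cost(*part, automation) for part in inputs)

ore    = (5.0, 1.0, [])        # mostly digging labor; small rent on the deposit
metal  = (8.0, 0.0, [ore])     # refining is labor applied to ore
widget = (12.0, 0.0, [metal])  # assembly is labor applied to metal

for a in (0.0, 0.5, 1.0):
    print(f"automation {a:.0%}: widget costs {cost(*widget, a):.2f}")
# 26.00 -> 13.50 -> 1.00: at full automation only the scarcity rent
# remains, which is exactly the taxable residue described above.
```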
> There's no point in working. From their perspective.
But that's not the way things panned out.