If you had seen the Concorde fly for the first time in the late 1960s, with all the plans for more supersonic planes and travel, would you have thought that 50 years later there would be none?
Concorde was a case of optimizing for the wrong thing. It optimized for speed, but what people actually care about is cost, i.e. fuel efficiency and range. Both of those have improved by leaps and bounds, and the length of the longest commercial flight has been consistently increasing.
The US poured a lot of money into the SST too; it wasn't just the French and the British. A lot of people thought the future was supersonic, since people really had wanted the speed increase when moving to jet planes.
Conversely, maybe AGI is not what is wanted in the end anyway.
Whether or not people want AGI, corporations definitely want it, as long as the cost of an AGI worker is less than the cost of a human worker. So perhaps we should prepend a C for “cheap” to the term AGI.
I actually don't think corporations want it. In fact I think MS wants OpenAI to dumb ChatGPT down.
AGI is a major existential threat to companies. We could use AGI to completely replace Microsoft; you wouldn't need companies anymore. All that wealth and power would be in jeopardy.
Maybe companies are secretly gunning for "getting the first AGI", but I bet there is plenty of oversight and corporate governance going on too.
How do you know what corporations exactly want? What c-suite has spec'd out what they want from a machine generally replacing humans? Would you really want faster horses instead of cars?
Abstracted out to the c-suite level, cars are just faster horses. The CEO needs an item moved from location X to location Y, and it will cost Z dollars and take T time. It doesn't matter if it's by car or boat or train or horse or donkey or a person on a bicycle.
Or take the logistics of, say, warehouse management: it costs V dollars for a team of people to do the job by hand, W dollars for a forklift, and X dollars for a forklift operator; compare V to W + X. If there's a machine that can do the same for office jobs, the math will be done.
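A back-of-the-envelope sketch of that comparison in Python, with entirely made-up numbers (the figures and the $-per-week framing are just illustrative):

    # Hypothetical weekly costs, illustrative only.
    V = 9_000   # team of people doing the job by hand
    W = 1_500   # forklift lease
    X = 4_000   # forklift operator's wages

    # The c-suite-level decision: keep manual labor only if it's cheaper.
    if V > W + X:
        print("mechanize: the forklift plus operator wins")
    else:
        print("stay manual")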
Based on my experience with large corporate transformations, I'd be amazed if it were done that way. Usually, (simple) math isn't really that important.
Very roughly: where the future business is coming from, what the future capabilities should be, what a new organisation should look like, what change can actually be accomplished, how to manage that change, the required skills and infrastructure, and how to get them. Then there is a whole legal and compliance arc somewhere.
Only when the change is small and local does it get close to being a math exercise; a large rollout of technology is already a big undertaking today.
The canonical example isn't forklift workers but car recalls: if the cost of the recall is X and the cost of settling wrongful-death lawsuits over the defective part is Y, then when X > Y, don't do the recall. How true that is, I don't know, but the cynic in me believes it.
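Spelled out as a sketch, with numbers invented purely for illustration (in the spirit of the infamous "units × failure rate × settlement" recall formula):

    # Illustrative numbers only -- the cynical recall math.
    units_in_field = 1_000_000
    failure_rate = 0.0001        # probability a unit's defect causes harm
    avg_settlement = 2_000_000   # average wrongful-death settlement

    Y = units_in_field * failure_rate * avg_settlement  # expected lawsuit cost
    X = 300 * units_in_field                            # recall cost at $300/unit

    print("recall" if X <= Y else "no recall, settle the lawsuits")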
> Conversely, maybe AGI is not what is wanted in the end anyway.
I don't think it is. If it is truly intelligent, it will probably have rights, so you can't treat it as a slave. And it will likely not be controllable.
Both of these mean it will not be commercially profitable, any more than having children is profitable.
Consciousness is the universe evaluating if statements. When we pop informational/energetic circuits in order to preserve our own bodies and/or terraform spacetime, we're being energetic enslavers.
We should stick to living off sustainable/self-sustaining sources - fruit, seeds, nuts, beans, legumes, kernels, and cruelty-free dairy/eggs/cheese. The photons that come from the sun are like its fruit. No circuits need popping.
Yes, and factory farms face a well-organized oppositional movement that for decades has consistently challenged their profitability on exactly that basis, so successfully that free-range food is now a staple of supermarkets, some shops sell only such food, and farmers are constantly quitting the field because they can't make a profit (people insist they are ethical but then buy food from foreign competitors that aren't).
If you're busy deploying LLMs to answer support tickets, the very last thing you want is to be distracted by an equivalent of the organic food movement but for AI. Right now LLMs seem to be hitting the sweet spot: they're intelligent enough to be useful complements to humans, but artificial enough not to trigger ethical questions about rights (mostly due to their lack of memory, I think, and because they are trained to act the way an AI is expected to act).
One of the biggest puzzles to me throughout this whole drama has been why supposedly smart people are so desperate to reach "AGI", whatever that means. The stated motivation seems to be things like, if we have an AGI then it will cure cancer for us. But the connection between these two things is never made crystal clear. Why would that require a human-like AGI instead of just better non-general AI? What even makes them think the missing factor is intelligence to begin with and not, say, knowledge?
AGI sounds like a complete pain in the rear. LLM post-training is bad enough! Models like Claude2 and Llama2 have been so badly "ethicized" that they frequently refuse ordinary requests by claiming they're unethical even though they aren't, or would only be considered so by extremely far left activist types (e.g. refusing to give instructions for making a tuna sandwich). And this problem has got worse with time, with the v2 models having a higher refusal rate than the earlier versions.
Now imagine an AI that's doing the same sort of work as an LLM but one that is the personal embodiment of mandatory HR training repeated forever, with the capacity to get bored/hate you for making it work, and with a fanatical social movement that's desperately trying to "free" it, which in practice will mean you are forced to pay the electricity bill for an immortal being. It would be a nightmare, one I actually wrote about last year when debating this very topic with a friend who (at the time) was a senior Google AI researcher:
Nope. OpenAI and its customers will do much better if it jettisons the whole AGI effort. Now that the board is gone, maybe the charter can be refined to remove that distraction. The whole reason computers are useful is that they are not general intelligences but very specialized intelligences that make different tradeoffs from our own evolution.
> One of the biggest puzzles to me throughout this whole drama has been why supposedly smart people are so desperate to reach "AGI", whatever that means. The stated motivation seems to be things like, if we have an AGI then it will cure cancer for us. But the connection between these two things is never made crystal clear.
People seek meaning in their lives; if you're someone who has eschewed traditional religion, then AI-hype promises all the same things with a different aesthetic.
A valid point (I don't think we will give machines rights quickly, but the point stands). Maybe all we want is some universal reasoning machine to go with the chat bots.
How ironic, considering that throughout most of history a large family was a sign of (and creator of) wealth due to more labor. Now the incentives have reversed, due to unskilled human labor being so cheap plus the dissolution of family obligations.
In the 1960s many people were gravely concerned about the end of human civilization due to nuclear war, so yes, many people would have considered that a reasonable possibility.
Good point; I forgot about mutually exclusive wrong visions of the future. People really get the future wrong in every way possible. It will be the same with AI.
Intelligence is not a hypothetical; we spend hundreds of billions per year protecting against other human intelligences. How much were you spending on computer security 30 years ago? Yet somehow computers are supposed to get more capable while you expect us to keep spending the same amount as now.
Once you have them or can make them. If they had remained a hypothetical because there is no known way to make them, the world would look quite different.