
Assuming AGI doesn't kill us all, I would imagine the argument for UBI will become much easier to defend once it causes 100x, 1000x, 10000x etc growth in the economy. Our job is mostly to hang on until one of those two outcomes occurs.


It's impossible to grow an economy without consumers, which this also eliminates. Standard metrics probably won't be much use here.


This is basically the gist of my comment, thank you for rephrasing it so concisely.


The thing is that the economy does not make sense without people. The economy is a way to allocate human work and resources, and to provide incentives for humans to collaborate, factoring in the available resource limits.

Now if AGI makes people's work redundant and makes the economy grow 100-10000x... what does that measure mean at all? Can it produce lots of stuff not needed or affordable by anybody? So do we just hand out welfare tickets to take care of the consumption of the ferocious production a kind of paperclip-maximizer is doing? I suggest reading the Philip K. Dick story Autofac; it might turn out prophetic.

Will that "growth" have any meaning then? Actually the current "print money and give it to the rich" style of economic growth is pretty much this, so with algorithmic trading multiplying that money automatically... have we already reached that inflection point?


This isn’t complicated. Economic growth means cheaper access to things people want.

Imagine a list of things many people wish to happen in physical reality. We’ll have more of that.

-Better healthcare

-Curing most things that destroy quality of life

-Curing aging and age-related death

-Much better treatment for all sources of mental suffering

-Far better and cheaper and reversible body modification

-More free time to spend at whatever you want

-Everything much cheaper

-Bigger and better homes and living spaces

-Bigger, faster, cheaper transport

-Easier to organize meaningful social interaction

-Better and more immersive entertainment

-More time to spend with close friends and loved ones


Ageing is not an illness.


Out of curiosity, I looked up the definition of illness. Seems to be so loosely defined that it can be either a disease or a patient's personal experience, including "lethargy, depression, loss of appetite, sleepiness, hyperalgesia, and inability to concentrate", which (possibly excluding hyperalgesia, I've not heard of that before now) are associated with aging.

Regardless of whether "illness" is or is not a terminological inexactitude, it looks like ageing is a chronic progressive terminal genetic disorder. I think "cure" is an appropriate term in this case.


Involuntary ageing is the very worst tragedy of human life.

Funny that this kind of ideological conflict will likely be a key fulcrum of the machine intelligence revolution. We will have a very loud minority that attempts to forcefully prevent all other humans from having the voluntary choice to avoid suffering.

Are you in it?


Wow, really good job with this satire account!


Really? I can't even imagine an economy of, like, sentient dogs?

Or paper wasps: https://www.bloomberg.com/features/2017-biological-markets/


> The thing is that the economy does not make sense without people. The economy is a way to allocate human work and resources, and to provide incentives for humans to collaborate, factoring in the available resource limits.

I disagree with the underlying presumption. We've been using animal labour since at least the domestication of wolves, and mechanical work since at least the ancient Greeks invented water mills. Even with regard to humans and incentives, slave labour (regardless of the name they want to give it) is still part of official US prison policy.

Economics is a way to allocate resources towards production, it isn't limited to just human labour as a resource to be allocated.

And it's capitalism specifically which is trying to equate(/combine?) the economy with incentives, not economics as a whole.

> Now if AGI makes people's work redundant and makes the economy grow 100-10000x... what does that measure mean at all?

From the point of view of a serf in 1700, the industrial revolution(s) did this.

Most of the population worked on farms back then; now it's something close to 1% of the population, and we've gone from a constant threat of famine and starvation to such things almost never affecting developed nations. So 100x productivity output per worker is a decent approximation, even in terms of just what the world of that era knew.
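The arithmetic behind that approximation can be sketched with assumed round numbers (the shares and the output multiple here are illustrative guesses, not census or FAO data):

```python
# Back-of-envelope check on the ~100x claim.
# These figures are assumed round numbers for illustration, not real data:
farm_share_1700 = 0.80   # roughly 4 in 5 workers on farms circa 1700
farm_share_now = 0.01    # "something close to 1%" today in developed nations
output_multiple = 1.5    # assume modern food output per capita is ~1.5x higher

# If a similar (or larger) food supply now comes from far fewer workers,
# output per farm worker scales roughly as the ratio of the two shares:
productivity_gain = (farm_share_1700 / farm_share_now) * output_multiple
print(f"~{productivity_gain:.0f}x output per farm worker")
```

With these inputs it lands at ~120x, i.e. the same order of magnitude as the 100x figure above; the point survives even if the assumed shares are off by a fair margin.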

Same deal, at least if this goes well. What's your idea of supreme luxury? Super yacht? Mansion? Both at the same time, each with their own swimming pool and staff of cleaners and cooks, plus a helicopter to get between them? With a fully automated economy, all 8 billion of us can have that — plus other things beyond that, things as far beyond our current expectations as Google Translate's augmented reality mode is from the expectations of a completely illiterate literal peasant in 1700.

> Can it produce lots of stuff not needed or affordable by anybody?

Note that while society does now have an obesity problem, we're not literally drowning in 100 times as much food as we can eat; instead, we became satisfied and the economy shifted, so that a large fraction of the population gained luxuries and time undreamed of to even the richest kings and emperors of 1700.

So "no" to "not needed".

I'm not sure what you mean by "or affordable" in this case? Who/what is setting the price of whatever it is you're imagining in this case, and why would they task an AI to make something at a price that nobody can pay?

> So do we just hand out welfare tickets to take care of the consumption of the ferocious production a kind of paperclip-maximizer is doing? I suggest reading the Philip K. Dick story Autofac; it might turn out prophetic.

Could end up like that. Plenty of possible failure modes with AI. That's part of the whole AI alignment and AI safety topics.

But mainly, UBI is the other side of the equation: taking care of human needs in a world where we add zero economic value because AI is just better at everything.


> With a fully automated economy, all 8 billion of us can have that

We probably can't. I mean why stop at humans? Let's give every pet the same luxury, or ... in the limit we could give this to every living being. Ultimately someone is going to draw the line who gets what and who is useful or not "for the greater good".

It just happens that many living beings don't contribute to the goals of whoever is in charge and if they get in the way or cause resource waste nobody will care about them, humans or not.

Human rights and democracy are all cool, but I think we've just witnessed enough workarounds that render them pretty much null and void.


Exactly right. It's playing out like a bankruptcy: "Slowly at first, then all at once".

Humans have rights insofar as they're able to enforce them. Individually by withholding their labor (muscle or brain power), or collectively with pitchforks if need be.

Once labor is a dime a dozen and pitchforks ineffective (OP's premise of a "fully automated economy"), human rights and democracy go the way of the dodo, inevitably. Nature loves to optimize away inefficiencies.

Although the "fully automated" bit is quite a stretch at the moment. The end-to-end supply chain required to produce & sustain advanced machinery and AI is too complex, a far cry from "LOL let's buy some GPU and run chatbots".


> Although the "fully automated" bit is quite a stretch at the moment. The end-to-end supply chain required to produce & sustain advanced machinery and AI is too complex, a far cry from "LOL let's buy some GPU and run chatbots".

It's ahead of us, and that's good because we're not ready for it yet either.

But how far ahead? Nobody knows. For all its flaws, ChatGPT's capabilities were the stuff of SciFi three years ago.

We might hit a dead end, or have an investment bubble followed by a collapse, either of which may lead to another AI winter and us doing nothing interesting in this sector for 20 years. Or someone might already have a method of learning as quickly and from as few examples as humans manage, and they're keeping quiet until they figure out how to be sure it's not the equivalent of a dark triad personality in a human.

If I were forced to gamble (which I kind of am, by thinking about a mortgage for a new house), I don't think we'll get a complete AGI in less than 6 years at the fastest. My modal guess is 10 years, with a long tail.

Even when we finally get AGI, there's a roll-out period of unclear duration, because the speed of rollout depends in part on how much hardware is needed to run the AGI, but also on the human reaction to it: if it needs the equivalent of a supercomputer, this will definitely be a slow rollout; but it still won't be instant even if it's an app that runs on a smartphone — it's amazing how many people don't know what theirs can already do.


> We probably can't. I mean why stop at humans? Let's give every pet the same luxury, or ... in the limit we could give this to every living being. Ultimately someone is going to draw the line who gets what and who is useful or not "for the greater good".

Eh.

A line, drawn somewhere, sure.

Humans being humans, there's a good chance the rules on UBI will be tightened to exclude more and more people — we already see that with existing benefits systems.

But none of that means we couldn't do it.

Your example is pets. OK, give each pet their own mansion and servants, too. Why not? Hell, make it an entire O'Neill Cylinder each — if you've got full automation, it's no big deal, as (for reasonable assumptions on safety factors etc.) there's enough mass in Venus to make 500 billion O'Neill Cylinders of 8 km radius by 32 km length. That's close to the order-of-magnitude best guess for the total number of individual mammals on Earth.

Web app to play with your size/safety/floor count/material options: https://spacecalcs.com/calcs/oneill-cylinder/
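The Venus figure can be sanity-checked with a rough mass budget. The hull areal density below is an assumed round number (structure plus shielding), not a value taken from the linked calculator:

```python
import math

# Rough mass budget behind the "500 billion cylinders from Venus" figure.
# HULL_KG_PER_M2 is an assumed illustrative value; real designs vary widely.
VENUS_MASS_KG = 4.87e24      # approximate mass of Venus
RADIUS_M = 8_000.0           # 8 km radius, as above
LENGTH_M = 32_000.0          # 32 km length
HULL_KG_PER_M2 = 5_000.0     # assumed areal density of hull + shielding

# Hull area: cylinder wall plus two flat end caps.
hull_area = 2 * math.pi * RADIUS_M * LENGTH_M + 2 * math.pi * RADIUS_M**2
mass_per_cylinder = hull_area * HULL_KG_PER_M2
n_cylinders = VENUS_MASS_KG / mass_per_cylinder

print(f"{n_cylinders:.1e} cylinders")   # on the order of 5e11, ~500 billion
```

At ~5 tonnes of hull per square metre (a couple of metres of steel-equivalent) the count comes out just under 5e11, so the 500 billion figure is at least internally plausible.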

> It just happens that many living beings don't contribute to the goals of whoever is in charge and if they get in the way or cause resource waste nobody will care about them, humans or not.

Sure, yes, this is a big part of AI alignment and AI safety: will it lead to humans being akin to pets, or to something even less than pets? We don't care about termite mounds when we're building roads. A Vogon Constructor Fleet by any other name will be an equally bitter pill, and Earth is probably slightly easier to begin disassembling than Venus.


First, don't count on AI being aligned at all. States that are behind in the AI race will take more and more risks with alignment to catch up. Without a doubt, one of the first use cases of AI will be as a cyberweapon to hack and disrupt critical systems. If you are in a race to achieve that, alignment will be very narrow to begin with.

Regarding pets vs. humans: the main difference is really that humans are capable of understanding and communicating the long-term consequences of AI and unchecked power, which makes them a threat, so it's not a big leap to see where this is heading.


> First, don't count on AI being aligned at all.

I don't. Even in the ideal state: aligned with whom? Even if we knew what we were doing, which we don't, it's all the unsolved problems in ethics, law, governance, economics, and the meaning of the word "good", rolled into one.

> Without a doubt, one if the first use cases of the AI will be as a cyberweapon to hack and disrupt critical systems.

AI or AGI? You don't even need an LLM to automate hacking; even the Morris worm performed automated attacks.

> humans are capable of understanding and communicating the long term consequences of AI and unchecked power

The evidence does not support this as a generalisation over all humans: even though I can see many possible ways AI might go wrong, the reason for my belief in the danger is that I expect at least one such long-term consequence to be missed.

But also, I'm not sure you got my point about humans being treated like pets: it's not a cause of a bad outcome, it is one of the better outcomes.


It's always nice to see someone else on Hacker News who has pretty much independently derived most of my conclusions on their own terms. I have little to add except nodding in agreement.

Kudos, unless we both turn out to be wrong of course.


AGI in the sense that it's so smart that it decides to kill us all, without any way of human control, is pretty much impossible.



