Orbital slots are literally scarce resources, as is radio spectrum. If you have people just doing whatever, you'll get Kessler syndrome, especially as our orbits are filled with more satellites each year. Similarly, you just can't have random folks blasting out radio signals at random.

Yes, satellites are robots. However, they have no agency. The incentive structure decides whether we get Kessler syndrome, and it then directs humans to solve those problems with robots.

So, yes, they are either directly analogous to land or a literal form of it.



Space is much more than circular orbits around Earth, and it is not a scarce resource: it's big enough that you could disassemble the Earth, all the planets, all the stars, all the galaxies into atoms and give them so much padding that the result would still count as an extraordinarily hard vacuum. Something like 3.5 cubic meters per atom, though at that scale "size" becomes a non-trivial question because space is expanding.
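Rough numbers behind that 3.5 m³ figure (a back-of-envelope sketch; the ~10^80 atom count and the ~46.5 billion light-year comoving radius are the usual round estimates, not something stated above):

    import math

    LY_M = 9.461e15                  # metres per light year
    radius_m = 46.5e9 * LY_M         # comoving radius of the observable universe, ~4.4e26 m
    volume_m3 = 4 / 3 * math.pi * radius_m ** 3   # ~3.6e80 m^3

    atoms = 1e80                     # common order-of-magnitude estimate

    print(f"volume per atom ~ {volume_m3 / atoms:.1f} m^3")   # ~3.6 m^3, in line with the 3.5 quoted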

Which reminds me of a blog post I want to write.

> Similarly you just can't have random folks blasting out radio signals at random.

That's literally what the universe as a whole does.

You may not want it, but you can definitely do it.

> Yes, satellites are robots. However, they have no agency.

Given this context is "AI", define "agency" in a way that doesn't exclude the people making the robots and the AI.

> The incentive structure decides whether we get Kessler syndrome, and it then directs humans to solve those problems with robots.

Human general problem-solving capacities do not extend to small numbers such as merely 7.8e20.

For example, consider the earlier example of the Moon: if its entire mass is converted into personal robots and we all try to land them, the oceans boil from the heat of all of them performing atmospheric braking.

And then we all get buried under a several-mile-thick layer of robots.
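For anyone who wants to check the arithmetic, a sketch: the ~94 kg per robot is my assumption (it's what reproduces the 7.8e20 count above; the original derivation isn't shown), as are the ~11 km/s lunar-return entry speed and the textbook values for the Moon's mass, the oceans' mass, and water's specific and latent heat.

    MOON_MASS_KG = 7.34e22
    ROBOT_MASS_KG = 94                         # assumed; yields ~7.8e20 robots
    robots = MOON_MASS_KG / ROBOT_MASS_KG

    # Kinetic energy shed while braking from lunar-return speed (~11 km/s)
    V_ENTRY_M_S = 11e3
    braking_heat_j = 0.5 * MOON_MASS_KG * V_ENTRY_M_S ** 2      # ~4.4e30 J

    # Energy to heat the oceans from ~15 C to boiling and vaporise them
    OCEAN_MASS_KG = 1.4e21
    ocean_boil_j = OCEAN_MASS_KG * (4186 * 85 + 2.26e6)         # ~3.7e27 J

    # Depth if the Moon's volume were spread evenly over Earth's surface
    MOON_VOLUME_M3 = 2.2e19
    EARTH_SURFACE_M2 = 5.1e14
    depth_km = MOON_VOLUME_M3 / EARTH_SURFACE_M2 / 1e3

    print(f"robots: {robots:.1e}")                                       # ~7.8e20
    print(f"braking heat vs ocean boiling: {braking_heat_j / ocean_boil_j:.0f}x")  # ~1200x
    print(f"burial depth: {depth_km:.0f} km")                            # ~43 km, well past 'several miles'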

This doesn't prevent people from building them. The incentive structures as they currently exist point in that direction: toward a Nash equilibrium that sucks.

Humans do not even know how to create an incentive structure sufficient to prevent each other from trading in known carcinogens for personal consumption, even when they are labelled with explicit images of traumatic surgical interventions and the words "THIS CAUSES CANCER" in big bold capital letters on the outside.

If anyone knew how to do so for AI, the entire question of AI alignment would already be solved.

(Solved at one level, at least: we're still going to have to care about mesa-optimisers because alignment is a game of telephone).



