> How many human neurons do you have to use before you get an actual human consciousness?
How many grains of sand make a heap?
(I don't know the relative importance of organic neurons as hardware vs. whatever specific connectivity architecture they happen to have in us, so the same might apply to perceptrons).
China AI and chip investments are off the charts. Note that China is scheduled to attempt a Mars sample return mission with Tianwen-3, ahead of any competing team. China has the industrial support of its non-democratic government, and it is willing to bear the risk for its future. That might be more stable funding than commercial SpaceX or a high-deficit US NASA.
There's a massive gap between "probe" and "colonise". I meant the latter, and I don't see that kind of risk-taking from China: their first few crewed launches were not even announced as such until after the launches had succeeded, implying they were not willing to risk the optics of a public failure.
I think there's a reasonable chance the actual Sputnik 2.0 will be them making a permanent lunar base before the US does. Musk doesn't seem to care about going to the moon beyond it helping prove (and fund) the Starship project, even though Starship could put a station there in a handful of landings.
I also think the moon is a better choice for the same reason I think China will prefer it over Mars: when things go wrong, it's much, much easier to act like your emergency rescue mission was the actual mission plan all along. "No, the 阿波罗-十三 (Apollo-13) module didn't have an explosion in the oxygen tank, we were simply venting unnecessary resources as part of a planned exercise. This mission was never intended to land on the Moon." If that kind of thing happens on a Mars mission, everyone just dies.
> China AI and chip investments are off the charts
That's as may be, but none of that changes my point. The best chip fabrication reaches atomic resolution in (by my guess) 2032… and then what? China isn't going to beat that.
AI is limited by electrical power, both for training (data centres) and inference (for cars and other real-time robotics). The power envelope is whatever it is, but I'm expecting 5x hardware efficiency improvements for a fixed power envelope by 2030 (slower than Moore's Law used to suggest). Algorithmic efficiency improves at the same time, which I'm assuming brings the combined rate back in line with Moore's, but that still means going from a car (with ~1 kW spare for compute) to a robot (with 100 W spare for compute) will take 5 years (or 10 years if you assume 3 kW for the car and 30 W for the robot). And the global power grid is 2 TW, which is about 250 W/person, so humanoid robots could be in the awkward position of driving up demand for electricity so much that people literally can't keep the lights on, while taking all our jobs and yet still being less than half of all labour.
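Rough numbers behind those estimates, as a back-of-envelope sketch (the 18-month doubling period for combined hardware + algorithmic efficiency and the ~8 billion population figure are my assumptions, not from the comment above):

```python
import math

# Assumed Moore's-law-like rate: combined efficiency doubles every ~18 months.
DOUBLING_PERIOD_YEARS = 1.5

def years_to_shrink(power_from_w: float, power_to_w: float) -> float:
    """Years until today's workload fits into a smaller power budget."""
    factor = power_from_w / power_to_w      # e.g. 1 kW -> 100 W is a 10x shrink
    doublings = math.log2(factor)           # efficiency doublings needed
    return doublings * DOUBLING_PERIOD_YEARS

print(years_to_shrink(1_000, 100))  # car -> robot: ~5 years
print(years_to_shrink(3_000, 30))   # pessimistic case: ~10 years
print(2e12 / 8e9)                   # 2 TW grid over ~8 billion people: 250 W/person
```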
And that's equally true regardless of whether China does or doesn't take a lead over the US, or whether the EU gets organised and builds its own EUV fab, etc.
Indeed, though looking at the rate of change in various fields, I feel everything goes weird some time around 2032 or so: https://benwheatley.github.io/blog/2024/03/23-17.24.34.html