
I also don't think instrumental convergence is a risk from LLMs.

But: using up all resources for computational power might well kill a lot of — not all — humans.

Why? Imagine that some future version can produce human-quality output at human speed for an electrical power draw of 1 kW. At current prices, running that continuously would cost about the same as the UN abject-poverty threshold; but deployed at the scale of the human workforce, it's also four times the current global electricity supply, which means electricity prices would have to rise until human and AI labour were equally priced. I think that happens at a level where most people can no longer afford electricity for things like "keeping food refrigerated", let alone "keeping the AC or heat pumps running so it's possible to survive summer heatstroke or winter freezes".
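Rough arithmetic behind that claim, as a sketch. All inputs here are my own assumed round numbers (price per kWh, poverty line, population, global generation), not figures from the comment, and the supply multiple is quite sensitive to which baseline you pick:

```python
# Back-of-envelope check: what does a 1 kW continuous draw cost,
# and what would one such draw per person mean for global supply?
# Every constant below is an assumed rough 2020s value.

HOURS_PER_YEAR = 24 * 365            # 8,760 hours

power_kw = 1.0                       # assumed draw of one human-equivalent AI
price_per_kwh = 0.10                 # assumed $/kWh, rough global average
poverty_line_per_day = 2.15          # assumed UN extreme-poverty line, $/day

annual_cost = power_kw * HOURS_PER_YEAR * price_per_kwh
print(f"Running continuously: ${annual_cost:,.0f}/yr "
      f"(~${annual_cost / 365:.2f}/day vs ${poverty_line_per_day}/day poverty line)")

population = 8e9                     # assumed world population
global_generation_twh = 30_000       # assumed annual generation, TWh/yr

demand_twh = population * power_kw * HOURS_PER_YEAR / 1e9  # kWh -> TWh
print(f"One such AI per person: {demand_twh:,.0f} TWh/yr, "
      f"{demand_twh / global_generation_twh:.1f}x current generation")
```

With these particular inputs the per-unit cost lands in the same ballpark as the poverty line, and total demand comes out at a low single-digit multiple of current generation; a different assumed baseline (e.g. workers rather than total population, or a lower per-kWh price) shifts the multiple.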

Devil's in the details though: if some future AI only gets that good around 2030 or so, renewables are likely to be at that level of supply all by themselves, and to exceed it shortly after, and then this particular conflict doesn't happen. Hopefully by that point, AI-driven tractors and harvesters etc. get us our food, UBI, etc. That's only a good future if you don't need to work, because if you do still need to work, you're uncompetitive and out of luck.



