https://wiki.lesswrong.com/wiki/Basic_AI_drives

The above link is about AI, but it could apply to any alien species just as well.
Any organism living in a thermodynamic universe and subject to competition and evolutionary pressures will develop instinctual drives that align with resource acquisition, replication, and survival. As the above link details, this surprisingly holds true even when you remove evolution and consider designed minds.
Even if a hypothetical civilization didn’t care about resource acquisition, replication, and survival, it would be out-competed by one that did. So where is that civilization’s ever-expanding sphere of Dyson clouds?
The authors referenced at the above link don't use that seemingly unwarranted tone of certainty. I can't see anything of the same flavour in Omohundro's papers, and Bostrom's conclusion is the sensible-sounding:
It should be emphasized that the existence of convergent instrumental reasons, even if they apply to and are recognized by a particular agent, does not imply that the agent’s behavior is easily predictable. An agent might well think of ways of pursuing the relevant instrumental values that do not readily occur to us. This is especially true for a superintelligence, which could devise extremely clever but counterintuitive plans to realize its goals, possibly even exploiting as-yet undiscovered physical phenomena. What is predictable is that the convergent instrumental values would be pursued and used to realize the agent’s final goals, not the specific actions that the agent would take to achieve this.
I’m not sure what you’re picking up on other than academic prose. The very quote you give is quite certain that convergent instrumental goals would be pursued.
There are a lot of problems with the dark forest hypothesis, the most critical for this purpose being that it is not a stable outcome. A civilization with the capability to create a Dyson sphere and harness the power of an entire star could easily defeat any invader attracted to its presence, e.g. with gamma-ray lasers that fry anything and everything that might be hostile.
Even with the most advanced technology we can imagine, you can't expect to detect a neighboring civilization, cross the gap between stars with an invading fleet, and arrive with enough power to overwhelm defenses every single time. Local development of solar system resources is inherently an exponential process, whereas mustering of interstellar resources is quadratic. A potential dark forest / wolf entity (I much prefer Alastair Reynolds over Liu Cixin) would have to arrive within the (cosmically brief) window of opportunity after a developing civilization announces its presence but before it achieves Type II status on the Kardashev scale. Statistically, someone will eventually be lucky enough to avoid the wolves long enough to get that far, and then they'd have the capability to defend themselves against any plausible enemy.
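To make that window argument concrete, here is a minimal toy model in Python. All the constants are made up for illustration (a 10-year doubling time for the defender's local industry, an arbitrary scale factor giving the attacker a huge head start); the structural point it sketches is only that an exponential process eventually overtakes any polynomial one, so the strike window is finite:

```python
# Toy illustration of the "window of opportunity" argument above.
# Not a physical simulation: the doubling time and the attacker's
# scale factor are assumed numbers chosen purely for illustration.

DOUBLING_YEARS = 10.0   # assumed doubling time of the defender's local industry
ATTACKER_COEFF = 1e6    # assumed head start of the quadratically growing attacker

def defender_power(t: float) -> float:
    """Exponential growth from developing home-system resources."""
    return 2.0 ** (t / DOUBLING_YEARS)

def attacker_power(t: float) -> float:
    """Quadratic growth from mustering resources across interstellar gaps."""
    return ATTACKER_COEFF * t ** 2

# Find when the defender permanently overtakes the attacker: past this
# point the exponential wins and the dark-forest strike window is closed.
t = 1.0
while attacker_power(t) >= defender_power(t):
    t += 1.0
print(f"Under these toy numbers the window closes after ~{t:.0f} years.")
```

The exact crossover year depends entirely on the assumed constants; changing them moves the window but never makes it unbounded.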
Where are the Dyson spheres of those civilizations?