
The point I found most interesting is what the author calls "robustness".

> Another advantage of AlphaEvolve was robustness: it was relatively easy to set up AlphaEvolve to work on a broad array of problems, without extensive need to call on domain knowledge of the specific task in order to tune hyperparameters.

In the software world, "robustness" usually implies "resistance to failures", so I would call this something different, more like "ease of integration". There are many problems where, in theory, a pre-LLM AI could solve them, but you would have to implement all that explicit modeling, and that's too much work.

To pick a random problem: why does no superhuman AI exist for most video games? I think most of the difficulty is not in the AI algorithm; it's that the traditional approach to game playing involves programming a model of the game, and for most video games that's an enormous amount of work, too much for someone to do in their spare time.

LLMs, on the other hand, are decent at integrating with many different sorts of systems, because they can just interoperate with text. They're not quite good enough at video yet for "any video game" to fall. But for a lot of these problems where the difficulty is "integration" rather than "algorithmic", the LLM strategy seems promising, as in the sketch below.
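
To make the "integration" point concrete, here's a rough sketch of driving a turn-based game through text: serialize the state into a prompt, ask the model for a move, and parse the reply. Both query_llm and the game interface are hypothetical placeholders rather than any real API; the point is that the only game-specific work is writing the state out as text, not hand-building a full model of the game.

    # Sketch of "integration via text" for a turn-based game.
    # query_llm is a hypothetical stand-in for whatever model API you use;
    # the game-state/move representation is likewise illustrative.

    def query_llm(prompt: str) -> str:
        """Hypothetical call to a language model; swap in a real client."""
        raise NotImplementedError

    def play_turn(game_state: dict, legal_moves: list[str]) -> str:
        prompt = (
            "You are playing a turn-based game.\n"
            f"State: {game_state}\n"
            f"Legal moves: {', '.join(legal_moves)}\n"
            "Reply with exactly one legal move."
        )
        reply = query_llm(prompt).strip()
        # If the model replies with something illegal, fall back to a default.
        return reply if reply in legal_moves else legal_moves[0]

Nothing here is specific to one game: the loop only needs a way to print the state and a list of legal moves, which is exactly the "ease of integration" being described.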

Looks like he's updated the text, striking through "robustness" and substituting "adaptability".


