
In this case AlphaEvolve doesn't write proofs; it uses the LLM to write Python code (or any language, really) that produces numerical inputs to a problem.

They just try out the inputs on the problem they care about. If the code gives better results, they keep it around. They actually keep a few of the previous versions that worked well as inspiration in the LLM's prompt.

If the LLM is hallucinating nonsense, it will just produce broken code that gives horrible results, and that idea will be thrown away.
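Concretely, the loop looks something like this. This is a toy sketch, not the real system: llm_propose here is a fake that just nudges a constant in the best program so the example runs, standing in for the actual model call, and the objective is a made-up one (get solve() to return something close to pi).

    import math
    import random

    def score(candidate_src: str) -> float:
        # Run the candidate program and measure its output. Hallucinated
        # or broken code raises, scores -inf, and gets thrown away; that
        # is the whole error-correction mechanism.
        env: dict = {}
        try:
            exec(candidate_src, env)    # candidate is expected to define solve()
            x = env["solve"]()
            return -(x - math.pi) ** 2  # toy objective: output close to pi
        except Exception:
            return float("-inf")

    def llm_propose(elites: list[tuple[float, str]]) -> str:
        # Stand-in for the LLM call. A real system would prompt the model
        # with the best programs so far and ask for an improved variant;
        # here we just perturb the constant in the current best program.
        _, best_src = max(elites)
        old = float(best_src.split("return ")[1])
        return f"def solve():\n    return {old + random.uniform(-0.5, 0.5)}"

    seed = "def solve():\n    return 1.0"
    population = [(score(seed), seed)]
    for _ in range(200):
        child = llm_propose(population)
        population.append((score(child), child))
        population.sort(reverse=True)
        population = population[:4]     # keep a few of the best versions around

    print(population[0])                # best score and the program that got it

The key design point is in score(): you never have to trust the LLM's reasoning, because every proposal is verified by actually running it and measuring the result.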


