Edit: I'd be curious which other papers/authors folks would add who are doing interesting work here.

One starting point is to look at the great program-synthesis, mechanized-proof, and PL folks who have been exploring neural synthesis, and the papers they write + cite:

* Arjun Guha: StarCoder LLM, LLM-based type inference, etc.

* Sumit Gulwani & Rishabh Singh (whose pre-LLM work is on the syllabus)

* Nikhil Swamy (F*), Sorin Lerner (Coq)

* A practical, well-funded intersection, because of its urgency, is security: niches like smart contracts, plus broader bug finding & repair. E.g., the simpler applied methods from Trail of Bits, and any academics they cite. The same goes for DARPA challenge participants: they likely have relevant non-DARPA papers.

* I've been curious about the machine-assisted proof community Terence Tao has fallen into here as well

* Edit: There are a bunch of Devin-like teams (ex: the Princeton team markets a lot, ...), and while interesting empirically, they are generally not as principled, which is the topic here, so I'm not listing them. A lot of OOPSLA, MSR, etc. papers are doing incremental work here afaict, so there's sorting to do

An important distinction for me is whether the work is an agentic system (which much of it is, and which can more easily use heavier & classic methods), training an existing architecture (often a stepping stone to more interesting work), or building a novel architecture (via traditional NLP+NN tricks vs synthesis-informed ones). Most practical results today have come from clever agentic setups, better training sets, and basic NLP-flavored architecture tweaks. I suspect that's because those are the easiest and it's still early days, so right now I look more for diversity of exploration than for depth in any one track.
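To make the agentic bucket concrete, here is a minimal sketch of why those systems can lean on heavier & classic methods. This is hypothetical Python, all names mine (propose_candidate, check, agent_loop), with mypy standing in for whatever symbolic tool you trust; the model only proposes, the tool decides:

    import os
    import subprocess
    import tempfile

    def propose_candidate(spec, feedback):
        # Placeholder for the neural side: any LLM completion call goes here,
        # fed the task spec plus the checker's last diagnostics.
        raise NotImplementedError("wire up a model here")

    def check(source):
        # The classic-method side: run a real type checker (mypy) on the
        # candidate. A test suite, fuzzer, or prover fits the same slot.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(source)
            path = f.name
        try:
            result = subprocess.run(["mypy", path], capture_output=True, text=True)
            return result.returncode == 0, result.stdout
        finally:
            os.unlink(path)

    def agent_loop(spec, max_iters=5):
        feedback = ""
        for _ in range(max_iters):
            candidate = propose_candidate(spec, feedback)
            ok, feedback = check(candidate)
            if ok:
                return candidate  # accepted because the checker passed
        return None  # surface failure rather than return an unverified guess

The division of labor is the point: the model is untrusted, the checker is trusted, and swapping mypy for a prover (F*, Coq) or a test suite changes the guarantees without touching the neural side.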