
Thanks for offering your time!

1. How do you/y'all feel about the ethics of publishing this kind of detailed instructional material for self-replicating machines on the open internet, given the potential risk of behavioral singularities from as-yet-uncharted intelligence-explosion dynamics? I mean this question in the most friendly, least “gotcha” way possible — feel free to answer flippantly!

2. What would you say was the “frame problem” of this work that blocked progress, either recently or in your historical readings? I’m just starting to examine this literature after intentionally not engaging while designing my own systems, and it seems pretty damn robust in a post-frame-problem world. What might I be missing out of naïveté?




1. I've been a doomer since before the term was invented, and I never had any worry about synthesis research based on symbolic techniques. Gödel is our friend here: such systems aren't powerful enough to model themselves. The best you get is stuff like https://ieeexplore.ieee.org/abstract/document/7886678?casa_t... . An important component of the Sketch synthesizer is its simplifier, and that paper used Sketch to synthesize a better simplifier, which speeds up synthesis times by 15%-60%. But that's a one-time improvement. The exponential takeoff has to come from somewhere else.
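To give a flavor of what Sketch-style synthesis does, here's a toy sketch in Python. The template, spec, and brute-force search are my illustration only; the real tool writes holes as ?? and hands the search to a SAT solver.

    # Toy illustration of Sketch-style synthesis (not the real tool):
    # write a template with a hole plus a spec, then search for a hole
    # value that makes the template match the spec on all tested inputs.

    def template(hole: int, x: int) -> int:
        return x * hole              # "x * ??" in Sketch notation

    def spec(x: int) -> int:
        return x + x                 # the desired behavior

    # Brute-force search over a small range of hole values; a real
    # synthesizer encodes this as a constraint problem for a solver.
    hole = next(h for h in range(16)
                if all(template(h, x) == spec(x) for x in range(-50, 50)))
    print(hole)  # -> 2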

2. Let me see if I understand your question: I think you're saying "The frame problem of AI and philosophy, https://plato.stanford.edu/entries/frame-problem/ , was the major blocker in AI systems, and now it's been solved by deep learning. What is the corresponding problem in program synthesis?" I think your "pretty damn robust" comment is important, but I haven't yet figured out what you mean.

Part of the reason I found this confusing is that the literal frame problem also appears in PL and synthesis, primarily in the verification of heap-based imperative programs. See the "frame rule" of separation logic.
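For reference, the frame rule itself (this is the standard statement from separation logic, not my own notation):

    \frac{\{P\}\; C\; \{Q\}}{\{P * R\}\; C\; \{Q * R\}}
    \qquad \text{(side condition: } C \text{ modifies no variable free in } R\text{)}

It's what makes local reasoning sound: a proof about C carries over unchanged when an untouched part of the heap, R, is framed in.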

So anyway, what do I see as the big challenges stopping synthesis from reaching its dream of being able to generate well-designed large programs with little input?

My two answers:

First, there's the problem of finding good abstractions. You see this in very old work: proving an imperative program correct is easy to automate EXCEPT for the problem of finding loop invariants. This has remained a challenging problem in the 50 years since it was first described. Craig interpolation is the classic solution, but it only goes so far. The synthesis view is: figuring out how to write loops is not straightforward because you need to summarize infinitely many possible states. And that's just loops; designing data structures and finding good invariants for stateful applications is harder still.
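To make that concrete, here's a minimal sketch in Python; the function and the invariant are my illustrative choices, with asserts standing in for a real verifier:

    # The asserts stand in for a verifier. The hard part is *discovering*
    # the invariant below, not checking it once you have it.

    def sum_to(n: int) -> int:
        # Precondition: n >= 0.
        # Postcondition to prove: result == n * (n + 1) // 2
        total, i = 0, 0
        while i < n:
            # The loop invariant a verifier must find to close the proof;
            # it summarizes every intermediate state in one formula.
            assert total == i * (i + 1) // 2 and 0 <= i <= n
            i += 1
            total += i
        assert total == n * (n + 1) // 2  # follows from the invariant and i == n
        return total

    sum_to(10)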

Second is what I like to call the "semantic veil." Generally, many different goals and abstractions yield the same chunk of code. See my analogy to dairy-free chocolate chips in https://www.pathsensitive.com/2023/03/modules-matter-most-fo... . So deducing the intentions of a piece of code from just the code itself is literally impossible. Unfortunately, the way humans design programs is much better understood in terms of these higher-level organizational structures than in terms of the code itself. This is an area where I'm excited about the potential of LLMs: they can make excellent use of priors and natural-language information to guess said intentions.
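A tiny illustration of the veil (my example, not from the post):

    # Two unrelated intentions, one identical implementation. From the
    # code alone there is no way to recover which goal the author had:

    def f(x: float) -> float:
        return max(0.0, x)

    # Reading 1: clamp a measured quantity to be non-negative after
    #            floating-point error.
    # Reading 2: the ReLU activation function from neural networks.
    # Same code; different abstraction, and different future evolution.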



