The big problem I see with program synthesis is that specifications will need to be exceedingly complex in order to cover all cases (http://www.commitstrip.com/en/2016/08/25/a-very-comprehensiv...). Even then, you’re not going to be able to predict exactly how these black box programs will behave.


It seems like any sufficiently detailed specification is pretty much already code or is trivial to translate into code. A language for providing those specifications to a synthesizer would likely balloon into a full-blown programming language, defeating the entire point of this approach. At that level of complexity, a synthesizer just becomes an especially aggressive compiler.

That said, maybe there's a need for something like that. A programmer could write an exceedingly thorough set of unit tests and let the synthesizer worry about how to satisfy those tests. Writing tests is almost universally simpler and easier than writing implementations. I can see the potential productivity-enhancing value of that, especially in situations where I have to work in an unfamiliar framework. I might not know how to write an Electron app, but I could certainly write the unit tests to specify the front-end behavior I want. A synthesizer could save me from having to learn all the nitty-gritty, framework-specific techniques. Basically TDD without having to do the actual implementation. And eventually, if the application required performance enhancements, a developer with the correct expertise could tune my synthesized code.

Because, right now, developing specifications is by far the hardest part of coding. But implementation is still not trivial in most cases. For certain use cases, it might be helpful to free up developer time to focus on specifications and QA instead of feature crunching.
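
To make the workflow concrete, here's a rough sketch in Python. The `synthesize` entry point is entirely hypothetical; it just stands in for whatever interface a real synthesizer would expose:

    # `synthesize` is a hypothetical synthesizer API, not a real library.
    def test_lowercases():
        assert slugify("Hello World") == "hello-world"

    def test_strips_punctuation():
        assert slugify("Rock & Roll!") == "rock-roll"

    def test_collapses_whitespace():
        assert slugify("a   b") == "a-b"

    # The synthesizer's job: produce any `slugify` that passes the suite.
    slugify = synthesize(tests=[test_lowercases,
                                test_strips_punctuation,
                                test_collapses_whitespace])

The tests are the spec; the implementation is whatever happens to satisfy them.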


Synthesising an entire application is not going to happen even if you managed to lock down its behaviour with unit tests, because the search space for a program of size n grows exponentially with n.

What could work is synthesis as a more intelligent form of code completion for small snippets.
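
The exponential wall is easy to see with a toy enumerative synthesizer (my own illustration, not any real system). With four operations there are 4^n candidate programs of length n:

    from itertools import product

    # Toy DSL: a program is a sequence of unary ops applied to the input.
    OPS = {"inc": lambda x: x + 1,
           "dbl": lambda x: x * 2,
           "neg": lambda x: -x,
           "sqr": lambda x: x * x}

    def run(prog, x):
        for op in prog:
            x = OPS[op](x)
        return x

    def synthesize(examples, max_len):
        # Brute force: len(OPS) ** n candidates at size n -- the exponential wall.
        for n in range(max_len + 1):
            for prog in product(OPS, repeat=n):
                if all(run(prog, x) == y for x, y in examples):
                    return prog
        return None

    print(synthesize([(1, 8), (2, 18)], max_len=4))  # ('inc', 'sqr', 'dbl')

Each extra operation in the target multiplies the search by four, which is why pure enumeration tops out at small snippets.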


The same exponential search-space problem applies to chess, and we still managed to achieve high-quality results by using heuristic algorithms and ML/statistics to augment the tree search.


I agree, but what if you added a test-case generator like QuickCheck [1]? Then the system could refine the initial spec using the programmer/designer as an "oracle". You give it the initial specs, it generates tests for the edge cases and ambiguities, and then asks you, "What should happen here?"

[1] https://en.wikipedia.org/wiki/QuickCheck
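
Something like Python's hypothesis (a QuickCheck descendant) already does the counterexample-finding half; the "ask the oracle" step is the missing piece. A sketch with a deliberately naive spec:

    from hypothesis import given, strategies as st

    def split_csv(line):
        # Deliberately naive implementation of the initial spec.
        return line.split(",")

    @given(st.lists(st.text()))
    def test_round_trip(fields):
        # Initial spec: joining then splitting should round-trip.
        assert split_csv(",".join(fields)) == fields

Running this immediately surfaces the ambiguities: fields == [] and fields == ["a,b"] both fail the round trip, which is exactly where the system would turn to the human and ask what should happen.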


Specs can be made simpler by giving sufficiently high-level objectives. Theorems can be very concise, yet need the introduction of various lemmas to be proven. Counterexample-Guided Inductive Synthesis (CEGIS, e.g. built on a solver like Z3) can offer certain guarantees.
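
A minimal CEGIS loop fits in a few lines of z3py; here the program template is just a*x + b and the spec is a fixed target function, chosen only to keep the sketch small:

    from z3 import Int, Solver, Not, sat

    a, b, x = Int('a'), Int('b'), Int('x')

    def spec(inp, out):
        # Target behaviour the synthesized program must match.
        return out == 3 * inp + 2

    examples = [0]  # seed input
    while True:
        # Synthesis phase: find constants consistent with all examples so far.
        synth = Solver()
        for e in examples:
            synth.add(spec(e, a * e + b))
        assert synth.check() == sat, "no program fits the template"
        m = synth.model()
        ca = m.eval(a, model_completion=True).as_long()
        cb = m.eval(b, model_completion=True).as_long()

        # Verification phase: look for an input where the candidate fails.
        verify = Solver()
        verify.add(Not(spec(x, ca * x + cb)))
        if verify.check() != sat:
            print(f"synthesized: {ca}*x + {cb}")  # 3*x + 2
            break
        examples.append(verify.model()[x].as_long())  # counterexample

The verifier's counterexamples drive the refinement, so nobody has to enumerate the edge cases by hand.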

Giving the computer the freedom to find those intermediate steps is nothing new (e.g. Koza's Automatically Defined Functions); more recently, techniques like OpenAI's Hindsight Experience Replay give the machine a strategy for creating its own intermediate goals, letting it "learn on its own" to navigate towards the harder objective.

Quite often in the world there is a lot of complexity, but most of the time it either emerges from simple rules (hi Wolfram :) ) or comes bundled as artificial complexity at the boundary (i.e. you need to interface with existing *ware). Artificial complexity will result in complex specifications (2nd principle?), though you can try to regularize them (e.g. by putting various constraints on the code's size and form), relax them, or approximate them probabilistically with a black box, where we have the whole literature on neural architecture search (AutoML).

Of course, once you've written it down with simple rules, we call it a game, and we already know how to obtain superhuman performance at those.


There's probably a lot of scope for synthesizing optimized versions of plainly written reference code, in which case there's already a bulletproof specification.

If you didn't have to worry about performance, I think programming would be immensely easier.
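
That framing can even be made machine-checkable today. A sketch with z3, treating a plainly written popcount as the spec and proving a classic bit-twiddling version equivalent (the constants are the standard Hacker's Delight ones):

    from z3 import BitVec, LShR, prove

    def popcount_ref(x):
        # Plainly written reference: test each of the 32 bits.
        return sum(LShR(x, i) & 1 for i in range(32))

    def popcount_fast(x):
        # Candidate optimized version (classic bit-twiddling).
        x = x - (LShR(x, 1) & 0x55555555)
        x = (x & 0x33333333) + (LShR(x, 2) & 0x33333333)
        x = (x + LShR(x, 4)) & 0x0F0F0F0F
        return LShR(x * 0x01010101, 24)

    x = BitVec('x', 32)
    prove(popcount_ref(x) == popcount_fast(x))  # prints "proved"

A synthesizer searching for the fast version only needs the slow one to know when it's done.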


Smart contracts even include the incentive for others to search for these uncovered cases :)


Even if it's a bit of a gimmick in its current state, it's a great step towards something truly useful. Technology like this could be amazing in fields like medicine and education.


Not necessarily. There are plenty of projects in their infancy that follow semver correctly. I'd argue that a project with a high major version number is more likely to be indicative of improper usage.


I don’t think a high major version number tells you that. Maybe they left 0.x.y (unstable) too early and were just honest about their early churn since then, which is about as semver as you can get.

But one of the main semver violations I see in the wild is a project slotting breaking changes into the minor version number, because they want to avoid high major version numbers for some reason, or have some romantic idea of what a major version bump “should be”.

