
>the realization that translating constraints to a model (variables, structure etc) is 90% of the work and the most difficult part.

LLMs can help a lot there. I've been wanting to write an LLM => constraint model adapter that does it for you. It's such low-hanging fruit; I wonder if anyone else would benefit from it, though.
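To make that concrete, here's a rough sketch of what such an adapter could look like, assuming the official OpenAI Python client; the prompt, model name, and JSON spec format are all illustrative assumptions, not an existing tool:

    # Sketch only: prompt an LLM to turn a plain-English problem statement
    # into a structured constraint spec that a solver front end could consume.
    from openai import OpenAI  # assumes the official openai Python package

    client = OpenAI()

    PROMPT = (
        "Translate the following problem into a JSON object with "
        "'variables' (name, domain) and 'constraints' (linear expressions). "
        "Return only the JSON.\n\n{problem}"
    )

    def problem_to_spec(problem: str) -> str:
        # Model name is an assumption; any capable chat model would do.
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": PROMPT.format(problem=problem)}],
        )
        return resp.choices[0].message.content  # JSON spec to feed into a model builder

The interesting work is still validating the spec and mapping it onto actual solver variables, which is where that 90% lives.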

Indeed, it seems like an obvious thing to do. But just as you noted, it's not very clear that LLMs can really improve over Prolog in terms of expressiveness and practicality, given that Prolog was designed with natural language parsing in mind and is a concise formalism based on predicate logic (and ultimately propositional and first-order logic), with constraint domain embeddings such as for arithmetic. Prolog syntax is also the starting point for most constraint solvers, and Prolog evaluation is often referred to as the basis for generalization into constraint solving. Though I'm not sure that generalization bears much value, tbh, when the breakthrough successes in constraint solving came from domain-specific techniques (SAT solvers, interval propagation, arc consistency/finite domain propagation, etc.).

They're already very good at it. I've been using the OR-Tools CP-SAT solver for a large bin packing problem at work (via https://github.com/ankane/or-tools-ruby), and ChatGPT was a big help working out the details of some of the constraints and objectives.
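In case it helps anyone picture the model side, here's a minimal CP-SAT bin packing sketch in Python (or-tools-ruby wraps the same solver); the item sizes and capacity are made up for illustration:

    # Minimal bin packing with OR-Tools CP-SAT (Python API).
    from ortools.sat.python import cp_model

    sizes = [4, 7, 3, 5, 6]   # illustrative item sizes
    capacity = 10             # illustrative bin capacity
    n = len(sizes)            # worst case: one bin per item

    model = cp_model.CpModel()
    # x[i][b] = 1 if item i is placed in bin b; used[b] = 1 if bin b holds anything.
    x = [[model.NewBoolVar(f"x_{i}_{b}") for b in range(n)] for i in range(n)]
    used = [model.NewBoolVar(f"used_{b}") for b in range(n)]

    for i in range(n):
        model.AddExactlyOne(x[i][b] for b in range(n))   # each item in exactly one bin
    for b in range(n):
        # respect capacity and tie bin usage to assignments
        model.Add(sum(sizes[i] * x[i][b] for i in range(n)) <= capacity * used[b])

    model.Minimize(sum(used))                            # use as few bins as possible

    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        print("bins used:", int(solver.ObjectiveValue()))

The real problem will need more structure than this, but the LLM conversation is mostly about getting constraints like that capacity/usage link right.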

It is indeed a very good fit. There is some cool research about it: https://github.com/skadio/ner4opt

I think that I would. Using natural language to describe the problem and constraints would be much better than figuring out mid-project that the variable structure I've chosen doesn't let me express a particular constraint. Defining the right structure is an art at this point.


