
I've noticed that puzzles which can be solved by CP-SAT's presolver alone, so that the SAT search never even needs to be invoked, basically adhere to this (no backtracking, known rules), e.g.:

    #Variables: 121 (91 primary variables)
      - 121 Booleans in [0,1]
    #kLinear1: 200 (#enforced: 200)
    #kLinear2: 1
    #kLinear3: 2
    #kLinearN: 30 (#terms: 355)

    Presolve summary:
      - 1 affine relations were detected.
      - rule 'affine: new relation' was applied 1 time.
      - rule 'at_most_one: empty or all false' was applied 148 times.
      - rule 'at_most_one: removed literals' was applied 148 times.
      - rule 'at_most_one: satisfied' was applied 36 times.
      - rule 'deductions: 200 stored' was applied 1 time.
      - rule 'exactly_one: removed literals' was applied 2 times.
      - rule 'exactly_one: satisfied' was applied 31 times.
      - rule 'linear: empty' was applied 1 time.
      - rule 'linear: fixed or dup variables' was applied 12 times.
      - rule 'linear: positive equal one' was applied 31 times.
      - rule 'linear: reduced variable domains' was applied 1 time.
      - rule 'linear: remapped using affine relations' was applied 4 times.
      - rule 'presolve: 120 unused variables removed.' was applied 1 time.
      - rule 'presolve: iteration' was applied 2 times.

    Presolved satisfaction model '': (model_fingerprint: 0xa5b85c5e198ed849)
    #Variables: 0 (0 primary variables)

    The solution hint is complete and is feasible.

    #1       0.00s main
      a    a    a    a    a    a    a    a    a    a   *A* 
      a    a    a    b    b    b    b   *B*   a    a    a  
      a    a   *C*   b    d    d    d    b    b    a    a  
      a    c    c    d    d   *E*   d    d    b    b    a  
      a    c    d   *D*   d    e    d    d    d    b    a  
      a    f    d    d    d    e    e    e    d   *G*   a  
      a   *F*   d    d    d    d    d    d    d    g    a  
      a    f    f    d    d    d    d    d   *H*   g    a  
     *I*   i    f    f    d    d    d    h    h    a    a  
      i    i    i    f   *J*   j    j    j    a    a    a  
      i    i    i    i    i    k   *K*   j    a    a    a
Together with validating that there is only one solution, you could probably use this to make the search for good boards more guided than random creation.
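
As a minimal sketch of that uniqueness check (assuming OR-Tools' CP-SAT Python API; building the model for the actual puzzle is left out), you can enumerate solutions and bail out as soon as a second one shows up:

    from ortools.sat.python import cp_model

    class SolutionCounter(cp_model.CpSolverSolutionCallback):
        """Counts solutions; stops the search once more than one is found."""
        def __init__(self):
            super().__init__()
            self.count = 0

        def OnSolutionCallback(self):
            self.count += 1
            if self.count > 1:
                self.StopSearch()  # a second solution already disqualifies the board

    def has_unique_solution(model: cp_model.CpModel) -> bool:
        solver = cp_model.CpSolver()
        solver.parameters.enumerate_all_solutions = True
        counter = SolutionCounter()
        solver.Solve(model, counter)
        return counter.count == 1

Boards where this returns True (and where the presolve log shows 0 remaining variables) would be good candidates.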

All of the above is true, but there is a difference between solving quicker and admitting that we gave it the context:

I do agree with you that an LLM should not always start from scratch.

In a way it is like an animal to which we have given the ultimate human instinct.

What has nature given us? Homo erectus was 2 million years ago.

A weird world we live in.

What is context?


Weirdly, it has gotten to the point that I have embedded this into my workflow and will often prompt:

> "Good work so far, now I want to take it to another step (somewhat related but feeling it too hard): <short description>. Do you think we can do it in this conversation or is it better to start fresh? If so, prepare an initial prompt for your next fresh instantiation."

Sometimes the model says that it might be better to start fresh, and prepares a good summary prompt (including a final 'see you later'), whereas in other cases it assures me it can continue.

I have a lot of notebooks with "initial prompts to explore forward". But given the sycophancy going on as well as one-step RL (sigh) post-training [1], it indeed seems AI platforms would like to keep the conversation going.

[1] RL in post-training has little to do with real RL and just uses one-shot preference mechanisms inside an RL-inspired training loop. There is very little work on long-term preferences/conversations, as that would increase requirements exponentially.


Is there any reason to think that LLMs have the introspective ability to answer your question effectively? I just default to having them provide a summary that I can use to start the next conversation, because I'm unclear on how an LLM would know it's losing the plot due to a long context window.


A bit of a rant, but this is the kind of fact-checking I wish the media and all our EU "trusted sources" would have jumped on, instead of going for the most trivial and idiotic cases that only a toddler (or a journalist) would get stumped by. (Example: recent posts on TikTok 'claiming to be images from Pakistan but taken from Battlefield 3...' again. Who is impressed or even surprised by this kind of investigation?)

Much more interesting, but also with more effort required, so of course it never happens.

It would have a more beneficial societal effect, because it is this kind of article, neutrally written and deeply investigated, that would truly make people capable of discovering for themselves: "maybe I should question things a bit more".


That, and there is a big incentive to just sell content. Sensational, eye-catching, controversial content will grab more readers.



From an age perspective (but the crowd here will not like that): before, I trusted that I could always find it back, so I didn't need to save it. Now I can't anymore, but I don't care so much.


I am not so sure, but indeed it is perhaps also a sad realization.

You compare this to "a human", but also admit there is high variation.

And I would say there are a lot of humans being paid ~$3,400 per month. Not for a single task, true, but honestly for no value-creating task at all. Just for their time.

So what if we think in terms of output rather than time?


Some more interesting approaches in the same space:

- https://github.com/openai/evolution-strategies-starter

- https://cloud.google.com/blog/topics/developers-practitioner...

And perhaps closest:

- https://weightagnostic.github.io/

Which also showed that you can make NNs weight-agnostic and just let the architecture evolve using a GA.

Even though these approaches are cool, and NEAT is arguably easier to get started with than RL (at least judging by how many AI YouTubers start with NEAT first), they never seemed to fully take off. Knowing about metaheuristics is still a good tool to have in your toolbox, though, IMO.
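
For flavor, here is a minimal sketch of the kind of evolution strategy the first link's starter code implements (toy code from memory, not the actual repo): perturb the parameters with Gaussian noise, score each perturbation, and move along the reward-weighted average of the noise.

    import numpy as np

    def evolution_strategy(fitness, dim, pop_size=50, sigma=0.1, lr=0.02, iters=200):
        """OpenAI-style ES: estimate a search gradient from random perturbations
        of the parameters and step along it. No backpropagation involved."""
        rng = np.random.default_rng(0)
        theta = np.zeros(dim)
        for _ in range(iters):
            noise = rng.standard_normal((pop_size, dim))  # one perturbation per candidate
            rewards = np.array([fitness(theta + sigma * n) for n in noise])
            rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
            theta += lr / (pop_size * sigma) * noise.T @ rewards
        return theta

    # Toy usage: climb towards the maximum of a simple concave function.
    best = evolution_strategy(lambda w: -np.sum((w - 3.0) ** 2), dim=5)

The same loop, with the fitness function evaluating a rollout in an environment, is essentially what gets scaled up in that repo.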


A few weeks ago I was planning to design a model I could send to a local 3D printer, to replace a broken piece in the house for which I knew it would be impossible to find an exact replacement.

I looked through a couple of open source/free offerings and found them all frustrating. Either the focus on ease of use was too limiting, the focus was too much on blobby, clay-like modeling rather than strong parametric models (many online tools), or they were too pushy about making you pay, or the UI was unintuitive (FreeCAD).

OpenSCAD was the one that allowed me to get the model done, and I loved the code-first, parametric-first approach and way of thinking. That said, I also found POV-Ray enjoyable to play around with back in the 2000s. Build123D looks interesting as well, thanks for recommending it.


The major advantage of Build123D for your use case -- sending it to someone else to fabricate it -- is STEP output support.

This really expands your options for what you can make and who you can ask to make it. There are now some online fabrication places that will do CNC from mesh formats, but really the only way to have proper control is sending them a STEP file.
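
For the kind of part described above, a minimal Build123D sketch (in its algebra mode; API names like export_step are from memory, so double-check against the docs) looks something like this:

    # A parametric plate with a through-hole, exported as STEP.
    from build123d import Box, Cylinder, export_step

    thickness = 2            # tweak the dimensions and re-export: that's the
    length, width = 40, 20   # parametric-first appeal mentioned above
    hole_radius = 3

    plate = Box(length, width, thickness)
    plate -= Cylinder(hole_radius, thickness)   # subtract a centered hole

    export_step(plate, "replacement_part.step")  # what the fab shop wants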


I follow RL from the sidelines (I have dabbled with it myself), and have seen some of the cool videos the article also lists. I think one of the key points (and a bit of a personal nitpick) the article makes is this:

> Thus far, every attempt at training a Trackmania-playing program has trained the program on one map at a time. As a result, no matter how well the network did on one track, it would have to be retrained - probably significantly retrained

This is a crucial aspect when talking about RL. Most Trackmania AI attempts focus on one track at a time, which is not really a problem since they want to outperform the best human racers on that individual track.

However, it is this nuance that a lot of more business oriented users don't get when being sold on some fancy new RL project. In the real world (think self-driving cars), we typically want agents to be way more able to generalize.

Most of the RL techniques we have do rather well in these kinds of constrained environments (in a sense, they eventually start overfitting on the given environment), but making them behave well in more varied environments is much harder. A lot of beginner RL tutorials also fail to make this explicit, and will e.g. show how to train an agent to find the exit of a maze without ever trying it on a newly generated maze :).
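
To make that last point concrete, here is a self-contained toy sketch (not from any particular tutorial): tabular Q-learning on a 5x5 gridworld where only the goal position varies. Trained against a single fixed goal, the agent looks solved; evaluated on freshly sampled goals, the success rate collapses.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 5  # 5x5 gridworld; the agent always starts at (0, 0)

    def run_episode(Q, goal, learn=True, eps=0.1, alpha=0.5, gamma=0.95):
        """One episode; returns 1.0 if the goal is reached within 50 steps."""
        pos = (0, 0)
        for _ in range(50):
            s = pos[0] * N + pos[1]
            if learn and rng.random() < eps:
                a = int(rng.integers(4))    # explore
            else:
                a = int(np.argmax(Q[s]))    # exploit
            dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][a]
            nxt = (min(max(pos[0] + dr, 0), N - 1),
                   min(max(pos[1] + dc, 0), N - 1))
            r = 1.0 if nxt == goal else 0.0
            if learn:
                s2 = nxt[0] * N + nxt[1]
                target = r + (0.0 if r else gamma * Q[s2].max())  # terminal at goal
                Q[s, a] += alpha * (target - Q[s, a])
            pos = nxt
            if r:
                return 1.0
        return 0.0

    Q = np.full((N * N, 4), 1.0)   # optimistic init so the greedy policy explores
    for _ in range(500):           # train on ONE fixed goal only
        run_episode(Q, goal=(4, 4))

    print("fixed goal :", run_episode(Q, (4, 4), learn=False))
    fresh = [tuple(rng.integers(N, size=2)) for _ in range(20)]
    print("fresh goals:", np.mean([run_episode(Q, g, learn=False) for g in fresh]))

Swap the fixed goal for a randomized one during training and the gap closes; that, in miniature, is the single-track versus multi-track issue.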


By the end of the article, and in the subsequent article, they're no longer doing it one track at a time.


At first I thought you were talking about some Rocket League AI stuff haha

