
LLMs, to a first approximation, literally "just" do one thing: given some text, predict the text that follows it. There is nothing magical.

It turns out you can create clever prompts that use that functionality to do a huge variety of tasks, though.

For instance, you can prompt it like:

    The following is the contents of main.py:

    ```
    <some simple code here>
    ```

    This code will print the following:
And then GPT will do its best to predict what the code prints out. For simple programs, this will give the appearance that it is "running" the program. With copious print statements, it can actually "run" fairly complicated programs, such as Dijkstra's algorithm: https://twitter.com/GrantSlatton/status/1600950846216237057
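For concreteness, here is a minimal sketch of that prompt pattern driven programmatically. It assumes the OpenAI Python SDK and a plain text-completion endpoint; the client usage, model name, and the toy program are illustrative assumptions, not part of the original comment:

    # Sketch: ask an LLM to "run" a program by having it predict the text
    # that follows the program's source in a prompt.
    # Assumes the OpenAI Python SDK (pip install openai) and an API key in
    # the OPENAI_API_KEY environment variable; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()

    source = """\
    for i in range(3):
        print(i * i)
    """

    prompt = (
        "The following is the contents of main.py:\n\n"
        "```\n" + source + "```\n\n"
        "This code will print the following:\n"
    )

    # The model only continues the text; for a simple program the most
    # plausible continuation happens to be the program's actual output.
    completion = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # assumed model; any completion model works
        prompt=prompt,
        max_tokens=50,
        temperature=0,
    )
    print(completion.choices[0].text)
    # If the model predicts correctly, this prints 0, 1, 4 on separate lines.

Note that nothing here executes the program: the model is just predicting a plausible continuation of the prompt, which is why the illusion breaks down as programs get more complex unless intermediate state is made explicit (e.g. with copious print statements).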
