
>I suspect we're going to discover at some point, or maybe OpenAI already did, that training on code isn't just a neat trick to get an LLM that can knock out scripts.

This is already fairly well known:

https://arxiv.org/abs/2210.07128



Thanks for the link. That paper seems a bit different, though. They ask the model to do reasoning by emitting serialized graphs in a custom declarative data format, which it naturally struggles with because it hasn't seen that format before. Then they switch to asking it to emit code, and it does better. But what I meant was more that code training helps the model reason and speak better even in English, where no code is being emitted at all.
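To make the contrast concrete, here is a minimal sketch (not taken from the paper; the format and step names are invented for illustration) of the two representations being compared: the same reasoning graph serialized in an ad-hoc declarative format versus as ordinary Python code, which is much closer to what a code-trained model saw during pretraining.

```python
# Hypothetical custom declarative format, unlike anything in pretraining data:
custom_format = """
node buy_ingredients; node mix_batter; node bake;
edge buy_ingredients -> mix_batter;
edge mix_batter -> bake;
"""

# The same graph expressed as idiomatic Python code:
class Step:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

bake = Step("bake")
mix = Step("mix_batter", [bake])
plan = Step("buy_ingredients", [mix])

def flatten(step):
    """Depth-first list of step names in the plan."""
    return [step.name] + [n for c in step.children for n in flatten(c)]

print(flatten(plan))  # ['buy_ingredients', 'mix_batter', 'bake']
```

The content is identical in both cases; only the surface form differs, which is why a model trained heavily on code handles the second far better than the first.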


To be fair, Codex was much better than GPT-3 on reasoning benchmarks like MMLU and the like. And people have noticed that code-trained models tend to reason better. I don't know if a paper was ever published about that, though.


Thought can be seen as a process that encompasses both rational and irrational thinking. Rational thought, as in programming languages, involves precise logic, determinism, and the ability to simulate outcomes. Human language, like English, on the other hand, embraces subjective interpretation and approximation, allowing for the expression of emotion and nuanced understanding.

Thought, as a cognitive process, can bridge the gap between these two realms, enabling individuals to move back and forth between rational and irrational modes of thinking, depending on the context and objectives at hand.

With data, unstructured text could be considered "irrational" and structured text (like code or a column in a database) could be considered "rational".



