
What about the Toolformer paper? And things like LangChain?

The speed at which this space has been changing over the last few months will render the argument obsolete pretty fast. Toolformer is just one example, and even the focus on "predicting words" seems pretty myopic. From what I have seen, these models keep getting upgraded with new layers of training that enhance their capabilities. So if it doesn't currently do X, but all it takes is the right annotations on a training set for it to suddenly do X, then where will the goalposts be moved? Now that the genie is out of the bottle, there is a very fast feedback loop for finding flaws, errors, or deficiencies and plugging those gaps.

Today I asked ChatGPT to write a bullet-point list of steps for accomplishing a particular goal, then asked it to write a bit of software in C# that executes those steps. I kept the program simple and just asked for each step to print what it would do, and when I then asked what the output of the program would be, it gave me the correct answer. What really floored me, though, was when I asked it, somewhat ambiguously, "can you make it easier to read?" and it modified the code to add indentation for each level of nested subtasks. It's a fairly trivial program to write on your own, but all arguments about "they don't do context or logic" sort of become moot for me if I can give it an instruction and it does what I want.
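
For concreteness, here is a minimal C# sketch of the kind of program I mean. The step names and nesting are made up (the real list came from ChatGPT), but the indent-by-depth printing is the sort of change it made when asked to make the output easier to read:

    using System;
    using System.Collections.Generic;

    class Program
    {
        // A step has a description and, optionally, nested subtasks.
        record Step(string Description, List<Step>? Subtasks = null);

        static void Main()
        {
            // Hypothetical plan standing in for the generated bullet list.
            var plan = new List<Step>
            {
                new Step("Gather inputs", new List<Step>
                {
                    new Step("Check what is already available"),
                    new Step("List what is missing"),
                }),
                new Step("Prepare workspace"),
                new Step("Execute the plan", new List<Step>
                {
                    new Step("Run each step in order"),
                }),
            };

            PrintSteps(plan, depth: 0);
        }

        // Print each step, indented two spaces per nesting level --
        // the "easier to read" change.
        static void PrintSteps(List<Step> steps, int depth)
        {
            foreach (var step in steps)
            {
                Console.WriteLine($"{new string(' ', depth * 2)}- {step.Description}");
                if (step.Subtasks is not null)
                    PrintSteps(step.Subtasks, depth + 1);
            }
        }
    }

Trivial, like I said, but the interesting part wasn't the code itself: it was that the model inferred "easier to read" meant indenting by nesting depth, without being told.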


