From a first-principles standpoint, it does not really make sense to use an LLM to do fundamental analysis directly. Maybe you can use an LLM to write some Python code that does the fundamental analysis. But skipping that model-building step and feeding raw numbers straight into a language model does not make intuitive sense to me.

I am surprised at the results in the paper. The biggest red flag is that the researchers are not sure why the LLMs show predictive ability. Maybe they didn't control for some lookahead bias.
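To illustrate the kind of lookahead bias that can creep in: a toy sketch (entirely hypothetical, simulated data) where a "signal" accidentally uses the same period's return it is supposed to predict, versus a properly lagged version that only uses past information.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated daily returns with no real structure at all.
returns = rng.normal(0, 0.01, 1000)

# Biased: the "signal" peeks at the same day's return it tries to predict.
biased_signal = np.sign(returns)

# Correct: shift by one day so only prior information is used.
lagged_signal = np.sign(np.roll(returns, 1))
lagged_signal[0] = 0  # no prior day for the first observation

biased_corr = np.corrcoef(biased_signal, returns)[0, 1]
lagged_corr = np.corrcoef(lagged_signal, returns)[0, 1]
print(round(biased_corr, 2))  # strong spurious "predictive ability"
print(round(lagged_corr, 2))  # near zero: no real edge in random data
```

If an evaluation pipeline leaks even a hint of future information into the LLM's context (or into its pretraining data), you get the first number while believing you measured the second.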
