Hacker News

GPT and transformers are not the end game for logical analysis. It is true that pretrained LLMs struggle to capture factual knowledge from text alone. KEPLER, a unified model for Knowledge Embedding and Pretrained Language Representation published in TACL [0], is one proposal for integrating factual knowledge more directly into language models.
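Roughly, KEPLER encodes entity descriptions with the language model and trains those embeddings with a TransE-style knowledge-embedding loss jointly alongside the usual masked-language-modeling loss. A minimal sketch of the knowledge-embedding side, using toy vectors in place of encoder outputs (all names and values here are illustrative, not the paper's implementation):

```python
import numpy as np

def transe_score(h, r, t):
    """TransE-style energy: lower means the triple (head, relation, tail)
    is judged more plausible."""
    return float(np.linalg.norm(h + r - t))

def ke_margin_loss(pos_score, neg_score, margin=1.0):
    """Margin ranking loss: push the true triple's score below the
    corrupted triple's score by at least `margin`."""
    return max(0.0, margin + pos_score - neg_score)

# Toy vectors standing in for encoder outputs over entity descriptions
h = np.array([1.0, 0.0])        # head entity
r = np.array([0.0, 1.0])        # relation
t = np.array([1.0, 1.0])        # true tail: h + r == t exactly
t_bad = np.array([0.0, 0.0])    # corrupted tail (negative sample)

pos = transe_score(h, r, t)     # 0.0 for the exact triple
neg = transe_score(h, r, t_bad)
loss = ke_margin_loss(pos, neg)
print(pos, neg, loss)
```

In the actual model this loss is summed with the MLM loss and both are backpropagated through the same encoder, which is what ties the factual structure to the language representation.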

Knowledge embeddings could be created by anyone who wants to better simulate a particular system, including kids, given the right exploratory environment.

[0] https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00360...


