
It's just concept space. The entire LLM operates in that space once the input has passed through the embedding layer. It's not really that novel at all.



This was my thought. Literally everything inside a neural network is a "latent space", starting with the embeddings used to map categorical features in the first layer.
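
For concreteness, a minimal PyTorch sketch of the point (the vocab size, width, and single encoder layer are illustrative assumptions, not any particular model):

  import torch
  import torch.nn as nn

  vocab_size, d_model = 50_000, 512  # illustrative sizes, not a real model

  # The embedding layer maps discrete token IDs into the continuous
  # latent space; everything downstream operates on these vectors.
  embed = nn.Embedding(vocab_size, d_model)

  token_ids = torch.tensor([[17, 942, 3]])  # (batch=1, seq=3), arbitrary IDs
  hidden = embed(token_ids)                 # (1, 3, 512) latent vectors

  # A transformer block consumes and produces vectors in that same space;
  # it never sees the original token IDs again.
  block = nn.TransformerEncoderLayer(d_model=d_model, nhead=8,
                                     batch_first=True)
  hidden = block(hidden)                    # still (1, 3, 512)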

Latent space is literally where the magic happens.


Completely agree. Have you seen this?

https://sakana.ai/asal/



