
It does sound appealing, doesn't it :-)

Regarding the model: as described in the article, you can create specialized ML models on a millisecond scale. This means the usual computational, latency, and consistency issues disappear, because you never need to update a model.
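
To make that concrete, here's a rough sketch of how I read the idea, assuming a nearest-neighbor lookup plus a small throwaway model per request (the article may do it differently; the toy data, k=200, and the Ridge model are just placeholders):

    # Per-request model creation: fit a tiny model on the neighborhood of
    # each incoming query instead of maintaining one global model.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X_pool = rng.normal(size=(100_000, 20))         # historical samples
    y_pool = X_pool @ rng.normal(size=20) + 0.1 * rng.normal(size=100_000)

    index = NearestNeighbors(n_neighbors=200).fit(X_pool)

    def predict(x_query):
        # 1. find the query's neighborhood in the sample pool
        _, idx = index.kneighbors(x_query.reshape(1, -1))
        # 2. fit a disposable model on just those samples (millisecond scale
        #    for a few hundred points), so nothing global ever needs updating
        local = Ridge().fit(X_pool[idx[0]], y_pool[idx[0]])
        return local.predict(x_query.reshape(1, -1))[0]

    print(predict(rng.normal(size=20)))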

The less appealing side of this approach is that there are currently limitations on scaling: it works fine with 100k samples, becomes noticeably slower with 1M samples, and likely won't work at the 10M to 100M scale. Still, as described in the article, I believe this issue is solvable in the near future, and it isn't a problem for many or most applications.
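
For a rough sense of where that scaling wall could come from (this is my assumption, not a claim from the article), the per-query cost of a brute-force neighborhood search grows linearly with the pool size:

    import time
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    query = rng.normal(size=(1, 20))
    for n in (100_000, 1_000_000):
        pool = rng.normal(size=(n, 20)).astype(np.float32)
        index = NearestNeighbors(n_neighbors=200, algorithm="brute").fit(pool)
        t0 = time.perf_counter()
        index.kneighbors(query)
        print(n, f"{(time.perf_counter() - t0) * 1e3:.1f} ms per query")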



