Hacker News

I think this is version 1 of what's going to become the new 'PC'.

Future versions will get more capable, smaller, and more portable.

Can be used to train new types of models (not just LLMs).

I assume the GPU can do 3D graphics.

Several of these in a cluster could run multiple powerful models in real time (vision, llm, OCR, 3D navigation, etc).

If successful, millions of such units will be distributed around the world within 1-2 years.

A p2p network of millions of such devices would be a very powerful thing indeed.



> A p2p network of millions of such devices would be a very powerful thing indeed.

If you think RAM speeds are slow for transformer inference, imagine what a 100 Mb/s network link would be like.
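To put rough numbers on that gap (the figures below are illustrative assumptions, not from the thread): comparing a typical local memory bandwidth of ~100 GB/s against a 100 Mb/s network link:

```python
# Back-of-envelope bandwidth comparison; both figures are assumptions.
ram_bw = 100e9       # ~100 GB/s local memory bandwidth (assumed)
net_bw = 100e6 / 8   # 100 Mb/s link = 12.5 MB/s

ratio = ram_bw / net_bw
print(f"local RAM is ~{ratio:,.0f}x faster than the network link")
```

Even a conservative estimate puts the link several thousand times slower than local memory, which is why naively spreading one model's weights over such a network is a non-starter.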


Depends on the details, as always.

If this hypothetical future is one where mixture-of-experts models are predominant, with each expert fitting on a single node, then the nodes only need enough bandwidth to accept inputs and return responses; they won't need the much higher bandwidth required to spread a single model across the planet.



