
Also, AI doesn't seem that network-intensive to me. Chatting with an LLM is nothing, and image and voice processing/generation is not that much either; compute dominates the costs, and latency is high no matter where you are. The models themselves are fairly big, and so is the training data, but you don't need to move them very often.
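A rough back-of-envelope sketch of the point: a streamed chat response uses orders of magnitude less bandwidth than even a single video stream, while model weights are large but moved rarely. All figures here are illustrative assumptions, not measurements.

```python
# Back-of-envelope bandwidth comparison (all figures are rough assumptions).

TOKENS_PER_SEC = 50          # assumed LLM streaming rate
BYTES_PER_TOKEN = 4          # assumed average UTF-8 bytes per token
chat_bps = TOKENS_PER_SEC * BYTES_PER_TOKEN * 8   # bits per second

VIDEO_BPS = 5_000_000        # assumed 1080p video stream, ~5 Mbit/s

MODEL_PARAMS = 70e9          # assumed 70B-parameter model
BYTES_PER_PARAM = 2          # fp16 weights
model_bytes = MODEL_PARAMS * BYTES_PER_PARAM      # moved rarely, not streamed

print(f"chat stream:  {chat_bps:,.0f} bit/s")
print(f"video stream: {VIDEO_BPS:,} bit/s (~{VIDEO_BPS / chat_bps:,.0f}x chat)")
print(f"model size:   {model_bytes / 1e9:.0f} GB")
```

Under these assumptions, a chat stream is roughly three thousand times lighter than a video stream.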

I would understand it better if this were for VR, or simply for video/communication.




AI data centers are not latency sensitive because their purpose is model estimation, not service delivery.

But they still need fibre-optic connections to move immense amounts of data.
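A quick sketch of why throughput, not latency, is what matters for bulk data movement: even over a fat pipe, shipping a large training corpus takes hours, so round-trip latency is irrelevant. The corpus size and link speed below are assumed figures for illustration.

```python
# Rough transfer-time estimate for bulk training data (figures are assumptions).

DATASET_BYTES = 1e15         # assumed 1 PB training corpus
LINK_BPS = 400e9             # assumed 400 Gbit/s fibre link

seconds = DATASET_BYTES * 8 / LINK_BPS
print(f"~{seconds / 3600:.1f} hours to move 1 PB over 400 Gbit/s")
```

At these numbers the transfer takes on the order of five to six hours; adding 50 ms of latency to that changes nothing.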



