
Maxing out a CPU is easy; keeping it fed with data, and being able to save that data back out, is hard.



Yes, but the work units (frames) are large enough that I'm still surprised.

Maybe they're not as parallelizable as I'd expect, e.g. if there's serial work created by reusing scene layout algorithms between frames.


A scene will have many thousands of assets (trees, cars, people, etc.); each one has its geo, which could be millions of polygons (although they use sub-Ds).

each "polygon" could have a 16k texture on it. You're pulling TBs of textures and other assets in each frame.


Hmm, yes I see. TBs? Interesting. I'd like to hear a talk about these things.

Naively I would expect (as is the case for my MUCH smaller-scale system) that I can compensate for network/disk-bound and non-multithreaded stages by merely running two concurrent frames.

On a larger scale I would expect to be able to estimate RAM-cheap frames, and always have one of them running per machine, but at SCHED_IDLE priority, so that it only gets CPU when the "main" frame is blocked on disk or network, or on a non-parallelizable stage. By starving one frame of CPU, it's much more likely that it'll need CPU during the short intervals when it's allowed to get it.
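
A minimal sketch of what I mean, assuming Linux (the render command, flags, and frame numbers are made up; os.sched_setscheduler is Linux-only):

    # Launch a "filler" frame at SCHED_IDLE so it only runs when the main
    # frame is blocked on I/O or a serial stage.
    import os
    import subprocess

    def render(frame, idle=False):
        def demote():
            if idle:
                # SCHED_IDLE: only scheduled when nothing else wants the CPU.
                os.sched_setscheduler(0, os.SCHED_IDLE, os.sched_param(0))
        return subprocess.Popen(
            ["render_frame", f"--frame={frame}"],   # hypothetical renderer CLI
            preexec_fn=demote,
        )

    main_frame = render(101)                # normal priority, gets the CPU first
    filler_frame = render(102, idle=True)   # RAM-cheap frame, soaks up idle cycles
    main_frame.wait()
    filler_frame.wait()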



