
That calculation is incorrect. You need to fit both the model (140GB) and the KV cache (~5GB per 32k-token sequence in FP8 with FlashAttention 2) multiplied by the batch size into VRAM.

If the goal is to run an FP16 70B model as fast as possible, you would want 8 GPUs with P2P, for a total of 192GB VRAM. The model is then split across all 8 GPUs with 8-way tensor parallelism, letting you make use of the full 8TB/s of aggregate memory bandwidth on every iteration. That leaves roughly 50GB, spread across the GPUs, for KV cache pages, so you can raise the batch size to 8 (or maybe more).
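
To make the arithmetic concrete, here's a rough back-of-the-envelope sketch in Python. The per-token KV size assumes a Llama-2-70B-style GQA layout (80 layers, 8 KV heads, head dim 128); treat the numbers as illustrative, not measured:

  GPUS = 8
  VRAM_PER_GPU_GB = 24                    # e.g. RTX 4090
  model_gb = 70e9 * 2 / 1e9               # 70B params * 2 bytes (FP16) = 140 GB

  # Per-token KV cache: 80 layers * 8 KV heads * 128 head_dim * 2 (K and V) * 1 byte (FP8)
  kv_bytes_per_token = 80 * 8 * 128 * 2
  kv_gb_per_seq = kv_bytes_per_token * 32_768 / 1e9    # ~5.4 GB at 32k context

  free_for_kv_gb = GPUS * VRAM_PER_GPU_GB - model_gb   # 192 - 140 = ~52 GB
  print(f"max batch at 32k context ~= {free_for_kv_gb / kv_gb_per_seq:.1f}")  # ~9-10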




I’ve got a few 4090s that I’m planning on doing this with. Would appreciate even the smallest directional tip on splitting the model in a way you think is likely to work.


The split is done automatically by the inference engine if you enable tensor parallelism. TensorRT-LLM, vLLM and aphrodite-engine can all do this out of the box. The main thing is just that you need either 4 or 8 GPUs for it to work on current models.
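
For example, with vLLM it's a single argument. Sketch only; the model name below is an assumption, so swap in whatever checkpoint you're actually serving:

  from vllm import LLM, SamplingParams

  # tensor_parallel_size shards every layer's weights across the GPUs.
  # 2, 4 or 8 all divide Llama-2-70B's 8 KV heads evenly, so they all work.
  llm = LLM(model="meta-llama/Llama-2-70b-chat-hf",
            tensor_parallel_size=8,
            dtype="float16")

  out = llm.generate(["Hello, world"], SamplingParams(max_tokens=32))
  print(out[0].outputs[0].text)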


Thank you! Can I run with 2 GPUs, or with heterogeneous GPUs that have the same amount of VRAM? I will try; just curious whether you have already tried.


2 GPUs works fine too, as long as your model fits. Mixing different GPUs with the same VRAM, however, is pretty sketchy: sometimes it works, sometimes it doesn't. Either way, you'd be limited by the performance of the slower GPU.


All right, thank you. I can run it on 2x 4090 and just put the 3090s in a different machine.


I know there's some overhead; it's not my calculation.

https://www.tweaktown.com/news/97110/tinycorps-new-tinybox-a...

Quote: "Runs 70B FP16 LLaMA-2 out of the box using tinygrad"

Related: https://github.com/tinygrad/tinygrad/issues/3791




