Layer-wise inferencing and batching: Small VRAM doesn't limit LLM throughput (verdagon.dev)
5 points by one-punch 6 months ago