QLoRA 4-bit finetuning of LLMs (github.com/artidoro)
7 points by kashifr on May 24, 2023 | hide | past | favorite | 1 comment


An efficient finetuning approach that reduces memory usage enough to finetune a 65B-parameter model on a single 48 GB GPU while preserving full 16-bit finetuning task performance!
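QLoRA's memory savings come from storing the frozen base weights in 4 bits and dequantizing them on the fly while training small LoRA adapters. The paper's actual format is NF4 (4-bit NormalFloat with quantile-spaced levels); as a rough illustration of the storage idea only, here is a toy 4-bit absmax block quantizer, with all names hypothetical:

```python
# Toy sketch of 4-bit absmax block quantization (NOT the paper's NF4
# code): each block of weights is stored as 4-bit signed integers plus
# one float scale, then approximately reconstructed on the fly.

def quantize_4bit(block):
    """Map a block of floats to 4-bit signed codes in [-8, 7] plus a scale."""
    scale = max(abs(x) for x in block) or 1.0
    codes = [max(-8, min(7, round(x / scale * 7))) for x in block]
    return codes, scale

def dequantize_4bit(codes, scale):
    """Recover approximate floats from the 4-bit codes and the scale."""
    return [c / 7 * scale for c in codes]

weights = [0.42, -1.3, 0.05, 0.91]
codes, scale = quantize_4bit(weights)
approx = dequantize_4bit(codes, scale)

# Each code fits in 4 bits, so storage is ~4x smaller than fp16,
# at the cost of a bounded per-weight reconstruction error.
assert all(-8 <= c <= 7 for c in codes)
assert all(abs(w - a) <= scale / 7 for w, a in zip(weights, approx))
```

The real method adds quantile-based NF4 levels, double quantization of the scales, and paged optimizers, but the core trade (tiny per-weight error for a roughly 4x memory cut on the frozen weights) is the same.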



