Hacker News
QLoRA 4-bit finetuning of LLMs (github.com/artidoro)
7 points by kashifr on May 24, 2023
1 comment
kashifr on May 24, 2023:
An efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit finetuning task performance!
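As a rough illustration of why 4-bit quantization makes this fit, here is a back-of-envelope sketch of the weight memory involved. It counts model weights only; a real finetuning run also needs room for activations, the LoRA adapter parameters, and their optimizer state, and the function name here is just for illustration.

```python
def weight_memory_gib(n_params: float, bits_per_param: int) -> float:
    """Memory needed to store model weights at the given precision, in GiB."""
    return n_params * bits_per_param / 8 / (1024 ** 3)

n = 65e9  # 65B parameters

# Full 16-bit weights alone exceed a 48 GB GPU.
fp16_gib = weight_memory_gib(n, 16)  # roughly 121 GiB

# 4-bit quantized weights fit, leaving headroom for adapters and activations.
nf4_gib = weight_memory_gib(n, 4)    # roughly 30 GiB

print(f"16-bit: {fp16_gib:.0f} GiB, 4-bit: {nf4_gib:.0f} GiB")
```

This is the core of the memory saving: quantizing the frozen base weights to 4 bits cuts their footprint by 4x, while only the small LoRA adapters are trained in higher precision.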