Hacker News
indeyets on Sept 12, 2023 | on: Fine-tune your own Llama 2 to replace GPT-3.5/4
The plan was to do it in-house. And buying 8xA100 is a bit too much ;)
FrostKiwi on Sept 13, 2023
I'm in exactly the same boat: aiming to fine-tune Llama 2 70B on 2xA100, with the hope of having one A100 run an 8-bit quantized 70B model 24/7.
If you have any experiences to share, successes or failures, please do.
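For what it's worth, the back-of-envelope math for why a single 80 GB A100 can hold an 8-bit 70B model (a rough sketch; it only counts weights and ignores KV-cache and activation overhead, which add several more GB in practice):

```python
def weight_vram_gb(n_params_billion: float, bits_per_param: int) -> float:
    """Approximate VRAM (GB) needed just to hold the model weights."""
    bytes_per_param = bits_per_param / 8
    return n_params_billion * 1e9 * bytes_per_param / 1e9

# fp16 weights: ~140 GB -> does not fit on one 80 GB A100
print(weight_vram_gb(70, 16))  # 140.0
# int8 weights: ~70 GB -> fits on one 80 GB A100, with ~10 GB headroom
print(weight_vram_gb(70, 8))   # 70.0
```

So 8-bit is roughly the cutoff for single-card 70B inference on an 80 GB card; anything left over goes to the KV cache, which is why long contexts or large batches can still push it over.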