int_19h on March 8, 2024 | on: Fine tune a 70B language model at home
They do, but for inference at least, it's memory bandwidth that is the primary limiting factor for home LLMs right now, not raw compute.
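A rough back-of-envelope sketch of why bandwidth dominates during single-stream decoding: each generated token has to stream essentially all of the model's weights from memory once, so tokens/sec is bounded above by roughly (memory bandwidth) / (model size in memory). The bandwidth figures and the 4-bit quantization assumption below are illustrative, not benchmarks from the thread.

    # Sketch: why memory bandwidth bounds home LLM decode speed.
    # Upper bound: tokens/sec <= bandwidth / bytes of weights read per token.
    # All hardware figures below are assumed round numbers for illustration.

    def rough_tokens_per_sec(params_billion: float, bytes_per_param: float,
                             mem_bandwidth_gb_s: float) -> float:
        """Upper bound on decode tokens/sec ~= bandwidth / in-memory model size."""
        model_size_gb = params_billion * bytes_per_param  # 1e9 params * bytes = GB
        return mem_bandwidth_gb_s / model_size_gb

    # Hypothetical home setups (assumed bandwidths, not measurements):
    setups = {
        "RTX 4090 (~1000 GB/s)": 1000.0,
        "M2 Ultra (~800 GB/s)": 800.0,
        "DDR5 dual-channel (~80 GB/s)": 80.0,
    }

    for name, bw in setups.items():
        # A 70B model at ~4 bits/param is ~0.5 bytes/param, so ~35 GB of weights.
        tps = rough_tokens_per_sec(70, 0.5, bw)
        print(f"{name}: <= {tps:.1f} tokens/s for a 4-bit 70B model")

Under these assumptions the ceiling is on the order of tens of tokens/sec even on fast unified-memory or GPU setups, and single digits on ordinary desktop DRAM, regardless of how much compute sits idle.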
sroussey on March 9, 2024
Wonder if the Apple Silicon Ultra series will start using HBM3(e) on desktop in the future.