psynister
4 months ago
on: Ask HN: Which LLMs can run locally on most consume...
Check out Ollama; it's built to run models locally. Llama 3 8B runs great locally for me, but 70B is very slow. Plenty of options.
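
For anyone who wants to try it, here's a rough sketch of talking to a locally running model from Python. It assumes Ollama is installed, `ollama pull llama3:8b` has already been run, and the server is listening on its default port (11434, per Ollama's docs); the model tag and prompt are just examples, not anything specific from this thread.

    import requests

    # Ask the local Ollama server for a single, non-streamed completion.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3:8b",            # the 8B tag mentioned above
            "prompt": "Why is the sky blue?",
            "stream": False,                 # one JSON object instead of a stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])

Setting "stream" to false keeps the example simple; by default the endpoint streams partial responses line by line, which is what you'd want in an interactive app.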