Hacker News
int_19h, 10 months ago | on: Llama.cpp guide – Running LLMs locally on any hard...
The biggest frustration with Ollama is that it's very opinionated about how it stores models. If Ollama is all you use, that doesn't matter much, but it's frustrating when the underlying GGUF file needs to be shared with other tools.
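To make the sharing pain concrete: Ollama keeps pulled models as content-addressed blobs rather than as named .gguf files, so pointing another tool such as llama.cpp at the weights means digging through its store. Below is a minimal sketch, assuming Ollama's default on-disk layout (~/.ollama/models with manifests/ and blobs/ directories, and the application/vnd.ollama.image.model media type for the weights layer); that layout is an internal implementation detail and may differ across versions and platforms, so treat the paths here as assumptions.

```python
import json
from pathlib import Path

# Assumed default Ollama store layout (an internal detail, may change):
#   ~/.ollama/models/manifests/registry.ollama.ai/library/<name>/<tag>
#   ~/.ollama/models/blobs/sha256-<digest>
MODELS_DIR = Path.home() / ".ollama" / "models"

def find_gguf_blob(name: str, tag: str = "latest") -> Path:
    """Return the path of the GGUF weights blob for a pulled Ollama model."""
    manifest_path = (MODELS_DIR / "manifests" / "registry.ollama.ai"
                     / "library" / name / tag)
    manifest = json.loads(manifest_path.read_text())
    for layer in manifest["layers"]:
        # The weights layer is tagged with this media type in the manifest.
        if layer["mediaType"] == "application/vnd.ollama.image.model":
            # Digest "sha256:<hash>" maps to blob file "sha256-<hash>".
            digest = layer["digest"].replace(":", "-")
            return MODELS_DIR / "blobs" / digest
    raise FileNotFoundError(f"no model layer in manifest for {name}:{tag}")

if __name__ == "__main__":
    # "llama3" is a placeholder model name; use whatever you have pulled.
    blob = find_gguf_blob("llama3")
    # The blob is a plain GGUF file, so other tools can read it directly,
    # e.g. with llama.cpp:  llama-cli -m <blob path> -p "hello"
    print(blob)
```

The blob itself is an ordinary GGUF file, so once located it can be symlinked or passed by path to anything that consumes GGUF, but the opaque sha256 filename is exactly the opinionated bit the comment is complaining about.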