pantulis on April 21, 2025 | on: Gemma 3 QAT Models: Bringing AI to Consumer GPUs
How did you manage to run open-codex against a local Ollama? I keep getting 400 errors no matter what I try with the --provider and --model options.
pantulis on April 21, 2025
Never mind: I found your Leanpub book and followed its instructions, and I at least have it running with qwen-2.5 now. I'll investigate what happens with Gemma.