Hacker News
dsp_person
23 days ago
on:
Lumo: Privacy-first AI assistant
So is this aimed at small models only? Are there any advantages to these models compared to what I can run locally on a 16GB VRAM GPU?
It would be nice to have something at the level of, say, Claude 3.5.
Alex-Programs
22 days ago
Yeah, proper V3/R1/K2/Qwen 235B are the point at which open LLMs become worth using.
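To put the model sizes in this thread in perspective, here is a rough back-of-envelope VRAM estimate. The figures of ~0.5 bytes per parameter (4-bit quantization) and ~20% overhead for KV cache and activations are assumptions for illustration, not numbers from the thread:

```python
def vram_gb(params_billion, bytes_per_param=0.5, overhead=1.2):
    """Approximate GiB of VRAM to run a model, assuming 4-bit
    quantized weights plus ~20% overhead for KV cache/activations."""
    return params_billion * 1e9 * bytes_per_param * overhead / 2**30

for b in (7, 14, 32, 70, 235):
    print(f"{b:>3}B params -> ~{vram_gb(b):.1f} GiB")
```

Under these assumptions, a 16GB card comfortably fits models up to roughly the 14B range, a 32B model already overflows it, and a 235B model needs on the order of 130 GiB even quantized, which is why models of that class are usually accessed as a hosted service rather than run locally.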