Hacker News
malnourish | 35 days ago | on: Qwen3-Omni: Native Omni AI model for text, image a...
The sales case for having LLMs at the edge is to run inference everywhere on everything. Video games won't go to the cloud for every AI call, but they will use on-device models that will run on the next iteration of hardware.