The only way to get hardware to this sort of efficiency is to embed the model in the hardware itself.

This exists[0], but the chip in question is physically large and won't fit on a phone.

[0] https://www.anuragk.com/blog/posts/Taalas.html



I think you're ignoring the inevitable march of progress. Phones will get big enough to hold it soon.


Instead of slapping on an extra battery pack, it will be an onboard LLM chip. It could have lifecycles just like phones do.

Getting bigger (foldable) phones, without losing battery life, and running useable models in the same form-factor is a pretty big ask.


I think the future is the model becoming lighter, not the hardware becoming heavier.


The hardware will become heavier regardless I'm afraid.


Good. It's ridiculously tiny and lightweight these days.

Especially with phones; the first thing everyone does after buying their new uber-thin iPhone is buy a case for it, which doubles its thickness.


I think for many reasons this will become the dominant paradigm for end user devices.

Moore's law will shrink it to 8mm soon. I think it'll be like a microSD card you plug in.

Or we develop a new silicon process that can mimic synaptic weights in biology. Synapses have plasticity.


One big bottleneck is SRAM cost. Even an 8B-parameter model would probably end up costing hundreds of dollars to run locally on that kind of hardware. That's especially unpalatable if model quality keeps advancing year over year.
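Rough arithmetic behind the "hundreds of dollars" claim. The bit-cell area and wafer cost below are assumed ballpark figures for an advanced node, not vendor data:

```python
# Back-of-envelope: silicon needed to hold an 8B-parameter model
# entirely in on-chip SRAM. All inputs are rough assumptions.
import math

params = 8e9
bits_per_weight = 8            # assume 8-bit quantization
sram_bitcell_um2 = 0.021       # assumed 6T SRAM bit-cell area, leading-edge node
wafer_cost_usd = 17_000        # assumed price of one advanced-node 300mm wafer
wafer_area_mm2 = math.pi * (300 / 2) ** 2

total_bits = params * bits_per_weight
die_area_mm2 = total_bits * sram_bitcell_um2 / 1e6   # um^2 -> mm^2
silicon_cost = wafer_cost_usd * die_area_mm2 / wafer_area_mm2

# ~1,300 mm^2 of raw SRAM: more than a single reticle (~850 mm^2),
# so this would have to span multiple dies before yield losses.
print(f"SRAM area: {die_area_mm2:,.0f} mm^2")
print(f"Raw silicon cost, ignoring yield and packaging: ${silicon_cost:,.0f}")
```

Even before yield, packaging, and margin, the raw silicon alone lands in the hundreds of dollars under these assumptions.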

> Or we develop a new silicon process that can mimic synaptic weights in biology. Synapses have plasticity.

It's amazing to me that people consider this to be more realistic than FAANG collaborating on a CUDA-killer. I guess Nvidia really does deserve their valuation.


> bottleneck is SRAM cost

Not for this approach


That's actually pretty cool, but I'd hate to freeze a model's weights into silicon without an incredibly specific and broad use case.


Depends on cost IMO - if I could buy a Kimi K2.5 chip for a couple of hundred dollars today I would probably do it.


I mean if it was small enough to fit in an iPhone why not? Every year you would fabricate the new chip with the best model. They do it already with the camera pipeline chips.


Sounds like just the sort of thing FPGAs were made for.

The $$$ would probably make my eyes bleed tho.


Current FPGAs would have terrible performance. We might need a new architecture combining ASIC-level LLM performance with sparse reconfiguration support.


Wouldn't it be the opposite of freezing weights?



