
Incredibly low power draw at the low end, which allows for more mobility and more diverse sources of energy. Like if you could wear meaningful compute powered by body kinetic energy or some flexible solar thing, I think that unlocks computing happening in many places, with many physical and virtual things working in concert. An ESP-32 in BLE sleep can sip tiny amounts of energy, but it can't do any meaningful compute and its top-end speed is low. I want a chip that can burst to 5 GHz and rest at 0.05 mA. That would make it feasible to generate or stream a massive bitrate from a wearable. Then you can do sick-ass offline-first edge compute and drive one of these haptics devices, projectors, VR, whatever, with a tiny machine or local cluster. Wearable k3s, if you will. Imagine driving any display from your person, with local and remote indexes and objects of everything you want or need, interfaces to people and things, replay of everything, as offline as you want or need.
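For what it's worth, that duty-cycled "sip tiny amounts of energy" pattern is how people squeeze these parts today. A minimal ESP-IDF sketch (the 10 s interval is a placeholder, and the ~10 uA figure is a datasheet ballpark, not a measurement):

    #include <stdio.h>
    #include "esp_sleep.h"

    void app_main(void)
    {
        // Burst phase: do the actual work here (read sensors, crunch, TX over BLE).
        printf("awake: doing the burst of work\n");

        // Then duty-cycle: wake again in 10 seconds. In deep sleep the ESP32
        // itself draws on the order of 10 uA; board components add more.
        esp_sleep_enable_timer_wakeup(10ULL * 1000000ULL); // interval in microseconds
        esp_deep_sleep_start(); // never returns; wakeup resets back into app_main
    }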

If it was fast enough, and didn’t require a whole backpack to drive and power it, then I think a computer like that would feel like a superpower.

Hang on, is that actually true though? I was under the impression that shrinking process nodes represented a trade-off in terms of power consumption on the low end. There must be reasons why nobody makes a 14nm microcontroller, right?

Shrinking the process node means you get less power consumption per transistor flip, but it can also increase static leakage current, which hurts designs that aim for energy budgets under, say, 100 uA.

I could be wrong, and I think that leakage can be mitigated by the lower operating voltages on smaller process nodes, but I don't believe it is as simple as "smaller process = more power efficient". If you're talking about GHz-scale application processors it holds true, but getting that sort of chip to idle at 0.05mA might be hard.
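The toy version of that trade-off: dynamic current scales with how often you actually clock (roughly C·V·f), while leakage is a tax you pay every second. With made-up but plausible numbers (all assumed, none from a real datasheet), a chip that is idle 99.99% of the time can still have its budget eaten by leakage:

    #include <stdio.h>

    int main(void)
    {
        double i_leak  = 20e-6;  // static leakage (A), assumed for an un-gated small node
        double c_eff   = 0.2e-9; // effective switched capacitance (F), assumed
        double v       = 0.7;    // core voltage (V), assumed
        double f_burst = 1e9;    // burst clock (Hz)
        double duty    = 1e-4;   // fraction of time actually clocking

        double i_dyn = c_eff * v * duty * f_burst; // ~14 uA of dynamic current
        printf("leakage %.0f uA vs dynamic %.0f uA\n", i_leak * 1e6, i_dyn * 1e6);
        return 0; // leakage is the bigger slice despite the 0.01% duty cycle
    }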

RAM can also consume a lot of power, if you have gigabytes of it. So until we have cheap high-density NVRAM, you might need a sort of 'hibernate' mode to get really low power consumption. And if you did that, you'd need to burn a bunch of energy to wake up and go back to sleep...busy, busy, busy.
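And that wake/sleep overhead has a clean break-even: hibernating only wins when the sleep interval exceeds the transition energy divided by the power you save. A sketch with assumed numbers:

    #include <stdio.h>

    int main(void)
    {
        double e_transition = 0.050; // J to suspend state to flash and restore it, assumed
        double p_idle       = 0.005; // W in light sleep with RAM kept alive, assumed
        double p_hibernate  = 5e-6;  // W fully hibernated, assumed

        // Hibernate saves (p_idle - p_hibernate) each second but costs e_transition once:
        double break_even = e_transition / (p_idle - p_hibernate);
        printf("hibernate pays off after ~%.0f s asleep\n", break_even); // ~10 s
        return 0;
    }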


I really am out of my depth to speak to the specifics of 3nm or how the hell physics even works at that scale. I can just assert that mass adoption of ubiquitous, tactile computing depends on speed, locality, and energy consumption.

That said, I found this really interesting: https://www.pcgamesn.com/samsung-3nm-production-performance

They seem to have shipped a novel gate design rather than just a yet-smaller FinFET.

You're spot on that CPU power consumption is just a part of the equation!


The chief reason no one makes a 14nm MCU is that the cost of designing on leading-edge nodes is staggering, and MCUs are a small market that doesn't make that much money. Just as importantly, the MCU often tends to have a lot of analog and RF content on it, which actually gets harder as the nodes shrink.



