If speed were held back only by gate switching time, then sure, but I'd have thought that propagation delays between gates would still be fairly relevant.

Making the clock 1,000,000 times faster would mean the silicon would have to be 1,000,000 times shorter (in each dimension), so I guess such designs could support some super-high clock rates for specialist applications with small gate arrays, but for general-purpose computing, hmm, I'm not so sure.
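
As a rough sanity check on that scaling, here's a small Python sketch of how far light gets in one clock period (the frequencies are illustrative picks of mine, not numbers from the article):

    # How far light travels in one clock period at a few example frequencies.
    # The frequencies are illustrative assumptions, not figures from the article.
    C = 299_792_458  # speed of light in vacuum, m/s

    for freq_hz, label in [(3e9, "3 GHz (a current CPU)"),
                           (3e12, "3 THz"),
                           (3e15, "3 PHz (~1,000,000x faster)")]:
        distance_m = C / freq_hz  # distance covered in one clock period
        print(f"{label}: light travels {distance_m * 1e6:,.2f} um per cycle")

At a million times today's clock rates, a signal can only cross on the order of 100 nm per cycle, which is roughly the point being made above.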




Propagation delay isn't purely about distance: it's about the time needed for the output to settle in reaction to inputs. That includes capacitive delays: containers of electrons having to fill up.

Say we are talking about some gate with a 250 picosecond propagation delay.

But light can travel 7.5 cm in that time, which is way, way larger than the chip on which that gate is found, let alone the gate itself. That tells you the bottleneck in the gate isn't the input-to-output distance, which is tiny.
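
For concreteness, the same back-of-the-envelope number in a couple of lines of Python (250 ps is the figure from this comment; the rest is just the speed of light):

    # Distance light covers during a 250 ps gate propagation delay.
    C = 299_792_458         # speed of light in vacuum, m/s
    gate_delay_s = 250e-12  # 250 ps, the example figure above

    print(f"{C * gate_delay_s * 100:.1f} cm")  # -> 7.5 cm: far bigger than any die,
                                               # so distance isn't the bottleneck here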


Yeah, the article focuses on computing, but I think it could enable totally new electronic devices like frequency/phase-controllable LEDs, light field displays and cameras, ultra-fast IR-based Wi-Fi, etc...


I could see this potentially enabling very-long-baseline interferometry (VLBI) at optical frequencies, allowing even higher resolutions than the Event Horizon Telescope.


I think that's fast enough for gravity gradiometry on a chip.


That is, by just putting a clock on each corner and counting their relative ticks.
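
For scale, here's a hedged sketch of what those relative ticks would have to resolve, using the standard weak-field time-dilation formula (the 1 cm separation is my own assumption, not from the thread):

    # Fractional rate difference between two clocks at slightly different heights,
    # weak-field approximation: delta_f / f ~= g * delta_h / c^2.
    # The 1 cm separation is an illustrative assumption.
    g = 9.81          # Earth's surface gravity, m/s^2
    C = 299_792_458   # speed of light, m/s
    delta_h_m = 0.01  # assumed height difference between on-chip clocks, m

    fractional_shift = g * delta_h_m / C**2
    print(f"{fractional_shift:.1e}")  # -> ~1.1e-18 fractional tick-rate difference

That parts-in-10^18 level is why clock rates and stabilities this extreme are what would make counting relative ticks across a chip plausible at all.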


Think pipelining ...



