
I think it usually refers to accuracy relative to a full-scale range of 1: typically 3 decimal places means 1000 ppm, and 4 decimal places means 100 ppm.
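The decimal-places-to-ppm relation above is just a power of ten; a trivial sketch:

```python
# Resolution as parts-per-million of a full-scale range of 1:
# each extra decimal place divides the smallest step by 10.
def resolution_ppm(decimal_places: int) -> float:
    """Smallest representable step, expressed in ppm of full scale."""
    return 1e6 / 10**decimal_places

print(resolution_ppm(3))  # 1000.0 ppm
print(resolution_ppm(4))  # 100.0 ppm
```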

The typical problems with analog computers are many... precision of components (e.g. gain or attenuation) is limited to ~0.1% for resistors and ~1% for capacitors (inductors aren't typically used). You can try to tune things ratiometrically to get higher accuracy, but at the cost of increased noise and temperature sensitivity. The more complex the system, the more things can go wrong... so you end up needing either simple systems or simple tools (digital).
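The ratiometric point is worth making concrete. A gain stage set by a resistor ratio (say R2/R1 in an inverting amplifier) sees up to roughly double the individual tolerance in the worst case when the two parts vary independently, but the ratio error vanishes when the parts are matched and drift together. A toy Monte Carlo sketch (the 0.1% figure is from the comment above; the matched/unmatched split is my illustration):

```python
import random

def worst_gain_error_pct(tol, matched=False, n=10000):
    """Monte Carlo worst-case error of a nominal R2/R1 = 1 gain ratio.
    tol: fractional resistor tolerance (0.001 = 0.1%).
    matched: if True, both resistors share the same deviation
    (ratiometric matching), so absolute tolerance cancels in the ratio."""
    worst = 0.0
    for _ in range(n):
        d1 = random.uniform(-tol, tol)
        d2 = d1 if matched else random.uniform(-tol, tol)
        gain = (1 + d2) / (1 + d1)
        worst = max(worst, abs(gain - 1) * 100)
    return worst

print(worst_gain_error_pct(0.001))                # up to ~0.2% for independent 0.1% parts
print(worst_gain_error_pct(0.001, matched=True))  # 0% when ratiometrically matched
```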

The other typical problem is that if you build a filter (e.g. a transfer function with a summer or differencer), you will tend to clip the dynamic range at one end or the other pretty quickly: against a maximum voltage (integrators) or against a minimum noise floor (differentiators). You can play some games with log converters, but accuracy still really matters, and drift or gain error over time is rarely acceptable.
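The integrator side of that is easy to see in a toy model: any small DC offset at the input gets integrated without bound until the output slams into the supply rail, and the dynamic range is gone. The component values here are made up for illustration:

```python
# Toy op-amp integrator with supply-rail clipping: a constant input offset
# integrates up until the output saturates at the rail.
def integrate(v_in, dt=1e-3, gain=100.0, rail=10.0, steps=2000):
    v_out = 0.0
    trace = []
    for _ in range(steps):
        v_out += gain * v_in * dt
        v_out = max(-rail, min(rail, v_out))  # output can't exceed the supply
        trace.append(v_out)
    return trace

trace = integrate(0.1)  # 100 mV offset: the ideal output would reach 20 V,
print(trace[-1])        # but the real one sticks at the 10 V rail
```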

The best way to use analog computers is with negative feedback to null the input. They do that amazingly well... so you can build a temperature controller, missile tracker, or actuator that only has to minimize an error signal, with high gain correcting for any inaccuracy or offset.
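The feedback-nulling point can be sketched with a bit of algebra: wrap loop gain K around a plant with a gain error and an offset, and the steady-state error shrinks by roughly 1/(1 + K). The plant numbers below are arbitrary illustrations:

```python
# Steady state of a proportional loop y = plant_gain*K*(setpoint - y) + offset:
# solving for y gives y = (plant_gain*K*setpoint + offset) / (1 + plant_gain*K).
def steady_state_error(setpoint, K, plant_gain=0.9, plant_offset=0.3):
    """Residual error of a high-gain loop around an imperfect analog plant."""
    y = (plant_gain * K * setpoint + plant_offset) / (1 + plant_gain * K)
    return setpoint - y

print(steady_state_error(1.0, K=1))     # large residual with low loop gain
print(steady_state_error(1.0, K=1000))  # error essentially nulled by high gain
```

Neither the 10% plant gain error nor the 0.3 V offset matters much once the loop gain is high, which is exactly why the nulling configuration is so forgiving of component inaccuracy.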



A big technical EE problem for analog computers is interconnects, with their EMI/EMC and impedance issues. The analog specs for on-chip digital circuitry are much more forgiving to develop around. You can work around the interconnect issues on analog computers by dumping lots of power into the driver and input circuits, but eventually some joker is going to point out that it would be electrically cheaper (in terms of current/power draw, etc.) to transmit that 0-to-5-volt signal over something like I2C or SPI, and then you're on a fast slippery slope to turning your analog computer into an exercise in DSP programming. At some level of complexity, the interconnect cable driver circuitry is going to be power hungry enough that it's cheaper to emulate the whole thing in floating point on a digital computer.

If you make a graph of PITA vs. bit resolution, we're all pretty comfortable emulating digital computers on analog real-world circuits using binary ones and zeros. Surely the gain is very small, and the PITA much larger, if you implement digital computers on ternary +/-/0 analog circuits. Some think the graph is U-shaped: at some resolution level, the PITA of high-resolution analog falls below that of digital, and it starts making sense. Many, like me, think the graph never U-shapes, so nothing is "better" at emulating digital computers than analog physical circuits based on binary 0/1. AFAIK no one has built a modern floating-point accelerator out of op-amps and A/D and D/A converters, so I find it unlikely it's useful.

A two-transistor NAND gate is, after all, just an analog circuit driven with simple binary signals. All computers are analog; it's just that the popular digital ones are only defined and well behaved when using binary analog signals.

There is some audiophile effect going on, too. Surely an MP3 codec running on a vacuum-tube op-amp would sound more mellow, and all that.


I always hear such things from EEs. So, what are your thoughts on stuff like this, in terms of analog "always" being more expensive or power hungry:

http://www.cisl.columbia.edu/grads/gcowan/vlsianalog.pdf

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.325...

Now, I won't argue with "cheaper to develop," since analog design is manual, with a lot of issues to contend with. I'm just wondering if there are more applications that can get huge speedups at lower power or cost than digital. I know the ASIC makers in power-sensitive spaces are already moving parts of their chips to analog for power reduction. That's what mixed-signal people tell me, anyway: the specifics are often secret. So I have to dig into CompSci looking for what they've tried.


The analogue neural net stuff is quite a reasonable example, because it's specifically trying to mimic a real analogue system, and neural nets tend to be noise-tolerant.

Couple of points from your lower link:

- return of "wafer-scale"! Nice.

- "the average power consumption is expected to stay below 1 kW for a single wafer"; not bad, but you're still going to need to cool that

- actually a hybrid system: long range comms is digital and multiplexed to save wiring, converted to analogue at the synapse

- "All analog parameters are stored in non-volatile single-poly floating-gate analog storage cells developed for the FACETS project" => basically analogue Flash? A development of MLC I suppose

On reading the whole thing, it seems the magic is actually in choosing which bits to make digital. The "long range" neural events are sent as differential 6-bit bursts, multiplexed, which they claim saves significant power.


> AFAIK no one has built a modern floating point accelerator using opamps and A/D and D/A converters,

This is one of the smartest things I've read on HN. I guess you are correct. Although, who knows: perhaps a differential equation solver could be faster as D/A -> analog computer -> A/D?
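The appeal of that pipeline is that an RC op-amp integrator physically realizes dy/dt = -y/(RC), so "solving" the ODE is just loading the initial condition through the D/A, letting the circuit settle, and sampling the output with the A/D. A numerical emulation of that hypothetical flow (component values are made up):

```python
import math

# Emulating a hypothetical D/A -> analog RC integrator -> A/D flow.
# The integrator realizes dy/dt = -y / (R*C); here we step it numerically,
# where the real circuit would do the integration "for free" in continuous time.
R, C = 10e3, 1e-6            # 10 kOhm, 1 uF -> time constant tau = 10 ms
tau = R * C
dt, t_end = 1e-5, 0.05       # sample step and run time
y = 1.0                      # initial condition, loaded via the D/A
t = 0.0
while t < t_end:
    y += dt * (-y / tau)     # what the integrator capacitor does continuously
    t += dt
print(y, math.exp(-t_end / tau))  # A/D sample vs. the exact solution e^(-t/tau)
```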


> ~0.1% for resistors and ~1% for capacitors

Don't forget the temperature coefficients! Then there's irreducible noise like Johnson noise. As you say, the best use is in (properly stabilised) feedback systems that seek to minimise a difference.
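For scale, the Johnson noise floor is v_rms = sqrt(4 kB T R B); a quick calculation for an audio-bandwidth example:

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_vrms(R_ohms, bandwidth_hz, T_kelvin=300.0):
    """RMS Johnson-Nyquist noise voltage of a resistor: sqrt(4*kB*T*R*B)."""
    return math.sqrt(4 * kB * T_kelvin * R_ohms * bandwidth_hz)

# A 10 kOhm resistor over a 20 kHz bandwidth at room temperature:
print(johnson_noise_vrms(10e3, 20e3))  # ~1.8 uV rms, irreducible at this R and T
```

That sets a hard floor on resolution for any analog signal path using resistances of that order, independent of how well the components are trimmed.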



