Analog Computers (degruyter.com)
166 points by StreamBright on July 6, 2017 | 102 comments



A good summary of analog computers can be found in the Wikipedia article:

https://en.wikipedia.org/wiki/Analog_computer

Takeaway: Analog computers are limited in precision and by "analog noise"; the precision of the components used determines the precision of the output. Usually no more than 3 or 4 decimal places are possible, at least with the tech that was used in their heyday. I would say that is still close to the case even today. Of course, one could do things like cryogenic cooling, but it becomes a cost factor at that point.

Something else that wasn't mentioned:

The ADALINE/MADALINE memistor technology is an analog component: a hardware equivalent of the McCulloch–Pitts neuron model (perceptron).

The memistor is NOT to be confused with the memristor, which is a different technology; the memistor was a 3-terminal device (memory transistor):

https://en.wikipedia.org/wiki/ADALINE

...whereas a memristor is a two-terminal device ("memory resistor"):

https://en.wikipedia.org/wiki/Memristor


The high point of analogue computing for control systems may have been Concorde. It normally operated in fly-by-wire through an analogue interconnection: the so-called synchro/resolver system, which is an AC servo control system. The flight computers (mostly but not entirely analogue) provided autothrottle and autostabilisation.

("Somewhere" in the Concorde megathread is a description of its analogue computers: http://www.pprune.org/tech-log/423988-concorde-question.html )


Analog (or analogue) computers have their uses; however, in my experience, analogue electronic computers have many problems:

- limited dynamic range of perhaps 30dB (1000)

- it is easy to saturate a signal (there is no overflow bit)

- oscillations are easy to induce, but once again hard to detect, especially in a circuit in the middle of a calculation chain

- noise gets amplified across the system


Reminded me of A.K. Dewdney's "Computer Recreations" column in "Scientific American" back in the day (I think a lot of people may have hated on his column since it followed in the shadow of Martin Gardner's famous "Mathematical Games"). One of the "analog computers" he mentioned was using dried spaghetti to sort numbers, where the length of each spaghetti noodle represented the magnitude of a number. Of course, grabbing them into a bundle in your hand and setting the bundle on end on a flat surface would "sort" the values.
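A minimal sketch of the idea in Python, if anyone wants to play with it: the "grab the bundle and pull off the tallest strand" step is simulated here as repeatedly taking the maximum, which is just selection sort in disguise (the physical version does that lookup in O(1) with your palm).

    # Digital simulation of Dewdney's spaghetti sort.
    # Each number becomes a "strand" whose length is its value; standing the
    # bundle on a table aligns the bottoms, and your hand finds the tallest.
    def spaghetti_sort(values):
        strands = list(values)        # cut one strand per value
        result = []
        while strands:
            tallest = max(strands)    # the strand your palm touches first
            result.append(tallest)
            strands.remove(tallest)   # pull it out of the bundle
        return result[::-1]           # ascending order

    print(spaghetti_sort([5, 1, 4, 2, 3]))   # [1, 2, 3, 4, 5]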

Would the surface tension across a film that accurately finds the shortest path between n points be considered a "computation"?

One thing fascinating about "analog computers" is the way they seem to be practically instantaneous regardless of n. That is perhaps part of the efficiency reflected in the article.


> One thing fascinating about "analog computers" is the way they seem to be practically instantaneous regardless of n. That is perhaps part of the efficiency reflected in the article.

If this were true, it would have profound implications. It's probably not true.

http://www.scottaaronson.com/papers/npcomplete.pdf


His soap bubble analogy is pretty cool. There is a direct analogy for filtering, where it's impossible to get an instantaneous, perfect response out of a filter, whether DSP-simulated or hardware. It would seem that merely filtering a signal is much simpler than simulating the airflow across a wing or whatever.

There are also conceptual issues: the airflow across a wing is, ideally, constant under stable, low angles of attack, but during "fun times" (which is precisely when you'd want a model instead of a test pilot) the airflow will vary over time (to the general detriment of flying ability...), so what it means to instantly "solve" a wing is unclear in itself.


It's "true". The catch is that the analog computer's circuit size (which is analog to the digital computer's time) has to scale with n.


But it's not true - the interesting components (the op amps, multipliers, etc) all have bandwidth limitations.


I remember that article. This is Spaghetti Sort from Wikipedia https://en.m.wikipedia.org/wiki/Spaghetti_sort


> Usually no more than 3 or 4 decimal places are possible

By that do you mean accurate to 1 part in 100 (3dp) or 1000 (4dp) or what? Since the scale of a representation is arbitrary, I'm not sure what dp means here.


Good question. I guess precision ultimately comes down to a fraction of the max voltage swing allowed by the computer. For example, if voltage goes from -5 to +5 volts, the voltage swing is 10 V, and if noise allows 0.1 mV of precision, then the precision is 1/100000 of the full voltage swing.

In the end, this could simply be expressed in decibels, though. Signal-to-noise, as in classic analog systems.
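To put numbers on that (the -5 to +5 V swing and 0.1 mV noise floor are just the example figures above, not measurements of any real machine), a quick back-of-the-envelope in Python:

    import math

    full_scale = 10.0    # volts, the -5 V to +5 V swing from the example
    noise = 0.1e-3       # volts, assumed smallest resolvable step

    levels = full_scale / noise           # 100000 distinguishable levels
    snr_db = 20 * math.log10(levels)      # ~100 dB (amplitude convention)
    bits = math.log2(levels)              # ~16.6 equivalent bits

    print(f"{levels:.0f} levels, {snr_db:.0f} dB, {bits:.1f} bits")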


I think it usually refers to accuracy out of a range of 1. Typically 3 decimal places means 1000 ppm and 4dp means 100 ppm.

The typical problems with analog computers are many... precision of components (e.g. gain or attenuation) is limited to ~0.1% for resistors and ~1% for capacitors (inductors aren't typically used). You can try to tune things (ratiometrically) to get higher accuracy, but at the cost of increased noise and temperature sensitivity. The more complex the system, the more things can go wrong... so you end up needing simple systems or simple tools (digital).

The typical problem is that if you build a filter (e.g. a transfer function with a summer or differencer), then you will tend to clip the dynamic range pretty quickly, either with a maximum voltage (integrators) or a minimum noise level (differentiators). You can play some games with log converters, but accuracy really still matters, and drift or gain error over time is rarely acceptable.

The best way to use analog computers is with negative feedback to null the input. They do that amazingly well... so you can build a temperature controller, missile tracker, or actuator that only minimizes an error so that high gain corrects for any inaccuracy or offset.
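A toy numerical illustration of why the feedback trick works (this is just arithmetic, not a model of any particular analog hardware): with enough loop gain, the closed-loop behaviour is set by the feedback network rather than by the sloppy forward-path gain.

    # Closed-loop gain = A / (1 + A*B): for large A it approaches 1/B, so a
    # +/-20% error in the forward gain A barely shows up at the output.
    def closed_loop(A, B):
        return A / (1 + A * B)

    B = 0.1                          # feedback fraction (assume good resistors)
    for A in (1e4, 1.2e4, 0.8e4):    # forward gain with +/-20% component error
        print(A, closed_loop(A, B))  # all ~9.99, i.e. ~1/B = 10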


A big technical EE problem for analog computers is interconnects and their EMI/EMC interference issues and impedance issues. The analog specs for on-chip digital circuitry are much more relaxed to develop around. You can work around the interconnect issues on analog computers by dumping lots of power into the driver and input circuits, but eventually some joker is going to point out that it would be electrically cheaper (in terms of current/power draw, etc.) to transmit that 0 to 5 volt signal using something like I2C or SPI, and then you're on a fast slippery slope to turning your analog computer into an exercise in DSP programming. At some point of complexity the interconnect cable driver circuitry is going to be power hungry enough that it's cheaper to emulate the whole thing in floating point on a digital computer.

If you make a graph of PITA vs bit resolution, we're all pretty comfortable emulating digital computers on analog real-world circuits using binary ones and zeros. Surely the gain would be very small and the PITA much larger if you implemented digital computers on ternary +/-/0 analog signals. Some think the graph is U-shaped, and that at some resolution level the PITA of high-resolution analog falls below the payoff, so it makes sense. Many, like me, think that graph never turns back down such that anything is "better" at emulating digital computers than analog physical circuits based on binary 0/1. AFAIK no one has built a modern floating-point accelerator using opamps and A/D and D/A converters, so I find it unlikely it's useful.

A two-transistor NAND gate is, after all, just an analog computer using simple binary signals. All computers are analog; it's just that the popular digital ones are only defined and well behaved when using binary analog signals.

There is some audiophile effect going on. Surely an mp3 codec running on a vacuum-tube opamp would sound more mellow and all that.


I always hear such things from EE's. So, what are your thoughts on stuff like this, in terms of analog "always" being more expensive or power hungry:

http://www.cisl.columbia.edu/grads/gcowan/vlsianalog.pdf

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.325...

Now, I won't argue with cheaper to develop since analog is manual with a lot of issues to contend with. I'm just wondering if there are more applications that can get huge speedups at lower power or cost than digital. I know the ASIC makers in power-sensitive spaces are already moving parts of their chips to analog for power reduction. That's what mixed-signal people tell me anyway: the specifics are often secret. So, I have to dig into CompSci looking for what they've tried.


The analogue neural net stuff is quite a reasonable example, because it's specifically trying to mimic a real analogue system and tends to be noise-tolerant.

Couple of points from your lower link:

- return of "wafer-scale"! Nice.

- " the average power consumption is expected to stay below 1 kW for a single wafer"; not bad but you're still going to need to cool that

- actually a hybrid system: long range comms is digital and multiplexed to save wiring, converted to analogue at the synapse

- "All analog parameters are stored in non-volatile single-poly floating-gate analog storage cells developed for the FACETS project" => basically analogue Flash? A development of MLC I suppose

On reading the whole thing, it seems the magic is actually in choosing which bits to make digital. The "long range" neural events are sent as differential 6-bit bursts, multiplexed, which they claim saves significant power.


> AFAIK no one has built a modern floating point accelerator using opamps and A/D and D/A converters,

This is one of the smartest things I've read on HN. I guess you are correct. Although who knows, perhaps a differential equation solver could be faster using D/A -> analog computer -> A/D?


> ~0.1% for resistors and ~1% for capacitors

Don't forget the temperature compensation! Then there's irreducible noise like Johnson noise. As you say, the best use is in (properly stabilised) feedback systems which seek to minimise a difference.


Kind of seems like if you are measuring the analog computer's virility by the number of decimal places it can represent, maybe you are misusing the machine. I mean, how many decimal places can you or I do in our head in real time?


If you're for example an archer you can "calculate" angles and velocities to a pretty high precision.


I agree, but I don't think it is "calculated" in decimal places (if that makes sense). Sort of like how slide rules didn't give you "decimal precision".


Decimal places are just a convenient shorthand for describing the rough order of magnitude of the available precision. "Three decimal places" means roughly 30dB or 0.1%.

Slide rules absolutely give you decimal places. A decent slide rule might give you three decimal places of accuracy. A really good one might give you six decimal places, or 0.0001%, or 60dB. You could more precisely quantify their accuracy than just a rough order of magnitude, so perhaps the accuracy would be 55dB or 62dB, but "decimal places" gives you a sufficiently good idea of the accuracy for most purposes.

To bring it back to the digital comparison, a really great slide rule that's accurate to six decimal places is equivalent to a digital computer with 20 bits of output. If you put in a ton of work building an incredibly precise slide rule you might be able to add another order of magnitude and get seven decimal places. On the digital side, you'd only need to add 3 or 4 more bits to match that improvement.
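The bookkeeping between decimal places and bits is just logarithms, roughly 3.3 bits per decimal digit; a throwaway Python check of the figures above:

    import math

    for places in (3, 4, 6, 7):
        bits = places * math.log2(10)   # bits needed for 10**places levels
        print(f"{places} decimal places ~ {bits:.1f} bits")
    # 3 ~ 10.0, 4 ~ 13.3, 6 ~ 19.9, 7 ~ 23.3 -- hence "20 bits" and "3 or 4 more bits"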


Already at the start of the thread I wondered whether a ruler counts as an analogue computer. Now what about a straightedge?


You're right that the normal measure of precision for analogue systems is either a percentage error or a signal-to-noise ratio in dB.


I seem to recall there were clever but well-known techniques in analog to get higher accuracy than that of your actual components, through negative feedback IIRC. So why is it correct to say that the precision of the components is what limits output precision? Wouldn't the technique potentially make a difference? (and yeah I know accuracy != precision but I'm using them loosely... the distinction doesn't seem relevant here)


I haven't thought it through, but feedback lets you do a few (perhaps connected?) things: 1. explore a trade-off between gain and bandwidth; 2. reject disturbances and nonlinearities.

So you could have a high gain but "low precision" (in the sense of deviating from an ideal, not in the sense of not being noisy) component, and through feedback you can make a low gain, high precision (having desired properties, not low noise) component.


"I haven't thought it through"

I don't know the math of such things but did take a stab at it. My idea was doing something similar to what we do for high or unlimited precision on digital computers: they usually emulate the higher precision using a series of lower-precision, primitive operations. My thought was that you could probably implement higher precision in analog if you could build a similar emulator with operations acting within the precision common in analog components. That's all I could guess at, though, since I'm in over my head here.
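For what it's worth, the established digital version of that trick is compensated (Kahan) summation or double-double arithmetic, where two low-precision values carry a higher-precision result between them. A rough sketch of the flavour in Python (this is the digital technique as an analogy, not a claim about how you'd wire it up in analog):

    # Kahan summation: the running error term "c" recovers precision that a
    # single low-precision accumulator would throw away.
    def kahan_sum(xs):
        total, c = 0.0, 0.0
        for x in xs:
            y = x - c             # apply the stored correction
            t = total + y         # this add may lose low-order bits
            c = (t - total) - y   # recover what was lost
            total = t
        return total

    vals = [1e16] + [1.0] * 10000
    print(sum(vals) - 1e16, kahan_sum(vals) - 1e16)   # ~0 vs ~10000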

One other thing I always note is that the brain seems to be mostly analog. Look at all it can do, which includes memory and high-precision math. So there are almost certainly some tricks we can use to do something similar with analog. Maybe an analog/digital hybrid. The wafer-scale project on NNs shows the potential, especially if it were made 3D with a cooling system:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.325...


You know, I think that might have been what it was -- it was using e.g. 5%-accurate resistors to get a 2%-accurate circuit.

Funny enough, that is the difference between accuracy vs. precision, which I thought was irrelevant here. Thanks for pointing that out. :)


While that may be true, those 3-4 decimal places are really _warm_


I have fond memories of building an analog computer, as a project from Popular Electronics, that simulated a lunar lander mission. At reset you had fuel, altitude, and horizontal and vertical velocity. Your inputs were an angle and a thrust knob (two potentiometers), plus a comparator that latched when altitude reached 0, based on your velocities being less than 1 m/s. It was tremendous fun to play, but no graphics, just some mA meters to tell you your status.
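Here's a rough digital re-creation in Python for anyone curious what that box was computing. Only the two knobs (angle, thrust) and the "velocities under 1 m/s at altitude 0" latch are from the description above; lunar gravity is standard, and the starting state, fuel budget, and autopilot are made-up numbers.

    import math

    g, dt = 1.62, 0.1                            # lunar gravity (m/s^2), step (s)
    alt, vx, vy, fuel = 100.0, 0.0, -10.0, 200.0

    def step(angle_deg, thrust):
        """One Euler step; angle and thrust stand in for the two potentiometers."""
        global alt, vx, vy, fuel
        if fuel <= 0.0:
            thrust = 0.0
        a = math.radians(angle_deg)
        vx += -thrust * math.sin(a) * dt
        vy += (thrust * math.cos(a) - g) * dt
        alt += vy * dt
        fuel -= thrust * dt

    while alt > 0.0:
        v_target = -max(0.5, alt / 15.0)         # descend more slowly near the ground
        step(0.0, max(0.0, g - 2.0 * (vy - v_target)))

    safe = abs(vx) < 1.0 and abs(vy) < 1.0       # the comparator latch
    print(f"touchdown at {vy:.2f} m/s, fuel left {fuel:.0f}: {'safe' if safe else 'crash'}")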


Sounded cool, so searched for it. Page 41:

http://www.americanradiohistory.com/Archive-Elementary-Elect...


Great! That is definitely the thing I built, so other than the wrong magazine name and it going up instead of down, it's exactly like I remember it :-)


I printed the article to (hopefully) build in the future.


If any of you guys are into analog computing, just as an FYI, there's a sub-reddit dedicated to the topic (full disclosure, I started this particular one). analogcomputing.reddit.com

There's not a ton of content there yet, but please feel free to add anything you come across. I'm a fan of the idea and suspect, like the author of this piece, that there is "something there".


How long have subreddits been accessible through a subdomain? Is this something the subreddit mods have to turn on?


It's been this way since I started using the site in 2009.


til.reddit.com !


For as long as I can remember and I've been using Reddit since about 2008.


My boss at my previous job at Fisher & Paykel Healthcare in New Zealand is using analogue computing today.

He developed Pertecs, which is a rudimentary analog computer paradigm written in C.

http://tcode.auckland.ac.nz/~mark/Signal%20Processing%3A%20P...

He got me to write some code to compile schematic diagrams into the XML config files. He's also done something similar now to compile from LaTeX, and he ported the controller from a Mac Mini to a Raspberry Pi.

Pertecs is being used to control an artificial lung, which is used for research into obstructive sleep apnoea (snoring).

The problem is, he's retiring, and I'm probably the only other person in the world who knows how to use his program. I would move back there, but the immigration policy got more difficult (minimum salary of $75k), so I'm seriously wondering whether I should stay in Taiwan longer and try to naturalise here.


I believe the $75k minimum salary is for non-skilled jobs (the point being, if you're in a non-skilled but 'high' paying role we're still interested in you). The minimum salary for skilled jobs is $50k [1][2]

That and I doubt you would find many people working at F&P Healthcare that earn less than $75k.

[1] https://www.immigration.govt.nz/about-us/media-centre/news-n... [2] https://www.immigration.govt.nz/employ-migrants/hire-a-candi...


"we're" - it sounds like you're connected to NZ immigration! I can fill you in with more details if you're interested.

$75k is 4x my current salary in Taiwan. Yes, I know the economy is totally different in NZ, but I don't have high hopes that changing country will suddenly make me become rich. My boss here pays me the minimum that the government allows for a Masters graduate on a foreigner work visa.

The other consideration is my girlfriend. She applied for Working Holiday, but wasn't one of the 600 lucky ones. We were in an internet café with the fastest connection in Kaohsiung, but the site just wouldn't load in time. She's 30, so she can't try again next year. If we wanted to get a partnership visa, we would have to live together and share a bank account for 1 year. Getting married doesn't even help, just living arrangements.

We're getting kind of sidetracked from the original topic of analog computers, but if it's something you want to talk about more, then just search for my name on Facebook and send me a message. It would be nice to personify the immigration forms.


It's strange that the article does not mention anything about hydraulic macroeconomics and MONIAC; they were once used to verify theories in economics.

https://en.wikipedia.org/wiki/MONIAC https://en.wikipedia.org/wiki/Hydraulic_macroeconomics


So this looks interesting, but my first thought is how can a conserved quantity like water model something like money that is created and destroyed?


I don't think there's any requirement that the amount of water in the model remains constant. In principle you could drain or open valves to add more.


It's too bad MONIAC couldn't model the Stagflation of the 70's or it might be a useful model.


A related topic - running digital logic at subthreshold voltages, where transitions usually (but not always) happen correctly. It can be useful if you're attempting to measure something probabilistically anyway

https://pdfs.semanticscholar.org/7244/1c8377b1dfde1909d21463...


So, Keith Emerson's Moog Synthesizer[0] was an analog computer, yes?

[0] http://i.telegraph.co.uk/multimedia/archive/03593/emerson6_3...


Certainly. Most modular synths have all the basics: sum, divide, multiply, add, XOR, etc.

And don’t forget, analogue random is the shit. In your face entropy!


Cool, I had to come back and post this link of a block diagram of the Moog:

http://cdm.link/app/uploads/2016/03/dangeroussynth.jpg


I was going to speak on this as the pictures look just like my Eurorack. The module I have called Maths does all the arithmetic functions needed.


It sure is. As are all analog modular synths. It's just that the "problems" worked on by such analog computers are ...different.


Excellent. So maybe all that time I’ve wasted playing with analogue modular synthesis will finally pay off!(?)


My thoughts exactly. beep boop


The author certainly has a point, but I wonder if analog computers (in the sense of DA/AD + ICs that can be plugged into general purpose systems) suffer from a lack of economies of scale? Maybe energy + depreciation costs of general purpose computers are still lower than ordering a minimum of 10~100k units of some custom analog computer w/ good enough quality control.

Apparently something similar to an FPGA exists for analog signals [1]; I wonder how popular/practical it is.

[1] https://books.google.com.br/books?id=qjnnBwAAQBAJ&pg=PA93&lp...


I've got some FPAA chips like the ones I assume you're referring to; mine are Anadigm ones, and I need to get round to using them. The downside is that the ones I've got simply use switched capacitors to create filters, so there will be some form of discretization in the temporal domain, I guess.


I'm hoping that analog computing will come back in the form of photonic analog computing. This would be more powerful than quantum digital computing (you know, the things that people are wasting time on).

Fun fact: with analog computing, one could imagine achieving Real Computation (https://en.wikipedia.org/wiki/Real_computation), which is above and beyond Turing completeness.

Note the fun sentence "If real computation were physically realizable, one could use it to solve NP-complete problems, and even #P-complete problems, in polynomial time. ".

We are on a wrong evolutionary branch of computing. Bits are lame-o-rama, whereas differentiable signals are pure unadulterated flavortown.


What you're describing is generally accepted to be physically unrealizable. In fact, the sentence that follows your quoted sentence cites two commonly known physical limitations that prevent the existence of your "computational class above and beyond Turing".

Whether or not there exist physically realizable computations that are not computable by a turing machine is an open question, but most physicists and computational complexity theorists seem to believe there does not exist such a class.

https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis


>"computational class above and beyond Turing"

what does it mean?


Roughly, a computer that can solve, in polynomial time, problems that have no polynomial-time algorithm on a Turing machine (which is the same class of computer we use today).


Read unlimited as arbitrary precision.

I'm familiar with the Turing thesis but he's wrong.


Have you read Scott Aaronson's NP-Complete Problems and Physical Reality [1] ? He goes into some detail about why analog computing is not thought to be physically realizable (Section 6).

[1] http://www.scottaaronson.com/papers/npcomplete.pdf


I'm passingly familiar. I'm not convinced. I can't really explain to you why. I feel like it makes many assumptions about the architecture and workings of such a machine.

I know Scott Aaronson and all, but I won't believe it until someone tries to build one and fails.


> I'm familiar with the Turing thesis but he's wrong.

Could you at least explain in what way you think he is wrong? Surely you must guess how participants on a programming forum will react to a statement like that.


The extended Church–Turing thesis states that an analog computer can be efficiently simulated on a Turing machine. It can be simulated, but not efficiently.

Look into the work of Lenore Blum. She wrote a book, "Complexity and Real Computation".


> I'm familiar with the Turing thesis but he's wrong.

You're going to have to back up a statement like that with a whole lot of supporting evidence if you want to be taken seriously.


Look into the work of Lenore Blum. She wrote a book, "Complexity and Real Computation".


Care to elaborate?


> Luckily, with today’s electronic technology it is possible to build integrated circuits containing not only the basic computing elements but also a crossbar that can be programmed from an attached digital computer, thus eliminating the rat’s nest of wires altogether.

This is the most important paragraph in the entire article.

Analog computers can be made very small. It'd take an ASIC, but the 741 OpAmp was less than 100 transistors. A more modern OpAmp might be under 1000 transistors... although noise issues would abound.

Bernd Ulmann has developed a methodology that performs non-trivial computations (such as: http://analogparadigm.com/downloads/alpaca_4.pdf), but it's still hand-programmed by connecting wires together.

If it were digitally programmed with a digital crossbar switch (consisting of CMOS Analog Gates instead), then it'd be controllable by a real computer.

I think what Ulmann is arguing here... is to use analog computers as a "differential equation accelerator". Perform a lot of computations in the digital world, but if you need to simulate a differential equation, then simulate it on an analog circuit instead.

And there are a large number of interesting mathematical problems that are described as differential equations.
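To make the "differential equation accelerator" idea concrete, here is the canonical textbook patch, y'' = -y, sketched numerically in Python: two integrators in a loop with a sign inversion, with the digital host only setting initial conditions and reading the output. (This is just a numerical stand-in for the integrator loop, not Ulmann's hardware.)

    # Two cascaded integrators plus a sign flip implement y'' = -y, the classic
    # analog-computer patch for a harmonic oscillator. Each "integrator" is
    # modeled as simple accumulation.
    dt = 1e-3
    y, dy = 1.0, 0.0                  # initial conditions set on the integrators

    trace = []
    for _ in range(int(6.5 / dt)):    # run a bit more than one period (2*pi)
        ddy = -y                      # the inverter feeding back -y
        dy += ddy * dt                # first integrator:  y'' -> y'
        y  += dy * dt                 # second integrator: y'  -> y
        trace.append(y)

    print(round(trace[int(3.14159 / dt)], 3))   # close to -1.0, i.e. cos(pi)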

-----------------

The main issues, as far as I can see, would be the multiplier, logarithm, and exponential functions. IIRC, these are created using a bipolar transistor... and modern manufacturing doesn't really mix BJT with MOSFET.

I mean, IGBT transistors exist, but modern computers are basically MOSFET all the way down. MOSFETs would be able to make a lot of things though: digital potentiometers / variable resistors... the crossbar switch, capacitors, resistors, and OpAmps.

And all of those can simulate addition, subtraction, derivatives and integrals. More than enough to build a "differential equation accelerator" that the author proposes.


> Instead, you program it by changing the interconnection between its many computing elements – kind of like a brain

This is one of the worst "brain" analogies I've read lately.


>What causes this difference? First of all, the brain is a specialized computer, so to speak, while systems like TaihuLight and TSUBAME3.0 are much more general-purpose machines, capable of tackling a wide variety of problems.

That's why digital computers have so far been so much more successful than analog computers. While analog computers may have the potential to be "better", they'd require massive global hardware and software changes to become the next big thing. People are lazy, and there's no way they'd want to port everything over to a completely different paradigm just for better energy efficiency and speed.


Analog computing makes a lot of sense in the context of genetic algorithms and "deep learning" and I wouldn't be surprised if there's already some ASICs under design using those principles. One big challenge is that the design kits from the foundries aren't likely to include all the analog computer cells that would be needed (but perhaps for example a current mirror into a MIM capacitor could make for an integrator?).


At the ISCA conference last week, Yoshua Bengio, of DNN fame, talked about their early efforts in analog computing. Here are some related slides of his: http://www.iro.umontreal.ca/~bengioy/talks/Brains+Bits-NIPS2...


Genetic algos are inherently discrete, so I can't see analog computing being all that useful for them.


You can look into neuromorphic computing, there are several projects that use digital-analog hybrid systems to implement neural networks.


>The human brain is a great example – its processing power is estimated at about 38 petaflops, about two-fifths of that of TaihuLight.

Huh? So we now have computers more powerful than the human brain? I thought that was still some decades off. And how would one even measure such a thing? In the apples-to-apples comparison, a stupid human trick floating-point calculation savant might manage 1 flop/s.


In speed, yes; in terms of continuous parallel computing power at low energy levels, technology hasn't come even close. In terms of sensory input and processing, not even close.

I find it amusing that there is much hype about computer systems beating humans in very specialised areas, such as go and chess.

But the missing piece here is that the human is still doing this while continuously processing all the sensory input that is occurring to that human, dealing with so much more than what the computer system is dealing with. The computer system is dealing with one and only one subject matter at a speed many magnitudes faster and is only just getting ahead.


> one and only one subject matter at a speed many magnitudes faster and is only just getting ahead.

+1

Kasparov didn't simulate 200 million moves per second to make his move.


If you see an estimation of human brain computation power, it's probably best to assume it's nonsense. There are wildly different estimates, and as computers have gotten faster, the estimates seem to have risen, which suggests ego is involved.


I recall checking a few years ago, and it would have taken about 40,000 high-end GPUs to match the common estimates of the computing power of the brain. It's no doubt much lower now.

The problem is that it takes far more than raw computing power to make AI. We have sufficient computing power but we don't know how to use it, not even close.

As for how it's measured, it's basically a matter of guesstimating the computing power of a single neuron based on its inputs, outputs, and the computation it appears to do to map between them, and then multiplying by the number of neurons in the brain. This is horribly imprecise so estimates vary a lot (describing it as "38" gives the estimate way too much credit, should probably say 10 or 100 instead) but they're probably in the very rough ballpark.
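To make the fuzziness concrete, the usual back-of-the-envelope goes roughly like this; every number below is a commonly quoted ballpark figure, not a measurement, and reasonable people pick values an order of magnitude apart:

    neurons = 8.6e10            # often-quoted human neuron count
    synapses_per_neuron = 1e3   # rough average; estimates run ~1e3..1e4
    firing_rate_hz = 10.0       # average spike rate, also heavily debated
    ops_per_event = 1           # treat each synaptic event as one "operation"

    ops_per_second = neurons * synapses_per_neuron * firing_rate_hz * ops_per_event
    print(f"{ops_per_second:.1e} ops/s")   # ~8.6e14; the assumptions above easily
                                           # swing this by a couple orders of magnitude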


Something that might be interesting is some kind of analog fpga for neural networks. Seems like NN weighting would translate well.


At last, we will have a truly random number generator! Not the simulated fake rand() using timestamps as seed.


Most modern Intel and AMD CPUs have a truly random number generator, generated in a non-digital way.


It would be interesting to simulate that airflow in LTSpice, given that that analog contraption can do it.


this feels like forcibly renaming electronics engineering as programming


First I've heard of analog computers, but it sounds super interesting. So basically the structure of the system defines the algorithm and is therefore very specialized and efficient?


The brain is not an analog computer.


Well it's definitely not a Turing machine.


But it can imagine a Turing machine (well enough to write software for same).

I think the meme of the Universal Machine is causing some sort of phase transition in humanity as it propagates.


It is however definitely Turing-complete.


You say that as if it were a feat.


At least, Turing's was.


>In analog computers there are no algorithms, no loops, nothing as they know it. Instead there are a couple of basic, yet powerful computing elements that have to be interconnected cleverly in order to set up an electronic analog of some mathematically described problem.

This is exactly why the digital computer has won over the analog computer.


Uhm, no. Analogue and digital computers use fundamentally different kinds of computation. Analogue computers were not used for "programming", nor was it really necessary for the simulations done on them.

Essentially, with an analogue computer you have a rack full of analogue building blocks and you build an electronic system equivalent to your real-world system from them. Then you can apply inputs and observe outputs. Often, the inputs were connected to sensors in a device, and the outputs were connected to actuators or recorders.

When analogue computers were already in wide use, there were maybe three digital computers on the whole planet. Later still, from the 60s to perhaps the early 80s, "analogue computers" could (to varying degrees) perform some simulations orders of magnitude faster than contemporary digital computers. Only when digital computers became fast, cheap, and easy enough to do these did they become a replacement; however, moving from an analogue computer to a digital program could be quite difficult, since the two operate in vastly different ways.

Large systems have often used digital computers since the ~70s, e.g. for recording and analyzing outputs: a company in my home town developed test rigs for performance and crash testing of cars (and also did the testing to some extent); they still had a massive hybrid computer in the 80s (multiple analogue racks plus, I think, two DG Nova systems).


Stop comparing brain with computer


The author's so-called analog computers can be built using FPGAs. And the term he might be reaching for is dataflow programming.


Not really. FPGAs are fundamentally digital and pretty much give you a bunch of logic gates to work with ("Field-Programmable Gate Array"). The author's proposed architecture would instead provide an array of components that perform analog operations, such as summing, multiplication, and integration or differentiation, over analog voltages.


FPGAs are fundamentally analog, depending on whether "fundamental" means what was in the designer's head or what you actually fabricated. You are thinking about them and using them as if they were digital.

Adrian Thompson at Sussex University used a genetic algorithm to auto-design FPGA circuits in the early 90s. Since no one told the GA that FPGAs were supposed to be logic circuits, it happily used the FPGA as an analog machine.

Even an Intel i7 chip is an analog machine that approximately implements the i7 computer design. They throw away the ones that don't approximate it up to tolerance.


Obviously. But good luck implementing a human-comprehensible analog differential equation solver on one, without the help of a genetic algorithm, that doesn't depend (as Adrian Thompson's circuit did) on the temperature, the quirks of that specific board, and the effect of components which aren't even physically wired to it.

The difference isn't that FPGAs don't operate on analog voltages deep down (who said they don't?). The difference is in the set of tools and tolerances they give you, and in that sense FPGAs are only an analog coprocessor in the sense that a car can, technically, be used as a sailboat.


I think it is an interesting reminder that the world we live in is wholly analog, and while this includes systems with discrete sets of equilibrium states, using digital devices to perform analog functions looks like hacking rather than something one would normally be advised to do. The reason is that while such devices may all behave identically, as designed, in the digital domain, their analog characteristics may vary wildly between versions and even between individual specimens, as well as with, say, temperature, and do so in a completely unspecified way, so any attempt at serious analog design based on them would seem impractical.


> The author's so-called analog computers

Why do you say "so-called"?

> can be built using FPGAs.

Also, FPAA's (Field Programmable Analog Array)[1]

[1]: https://en.wikipedia.org/wiki/Field-programmable_analog_arra...


Yeah I remember reading the chatter about FPAA's back in the 90's.

It seemed like an exciting idea, but it never took off perhaps because the kind of accuracy that makes it worthwhile was not achievable?


Maybe it was the wrong term. My point was we have them and they are used, but they are not cost-efficient in general. So saying that they will be the future of computing is a bit ridiculous, imo.


Fair enough. I think they will be part of the future of computing, but saying they are "the future of computing" is probably hyperbole.



