I wish there were something like that for computability theory.
Presumably for historical reasons, professors of theoretical computer science love to talk about abstract machines like finite-state machines, pushdown automata, Turing machines etc. Not about logical circuits.
But arguably, logical gates are much more conceptually primitive than most of those automata above! They are basically an implementation of propositional logic, albeit potentially with a dimension of time and delay. And they are nonetheless somewhat close to how actual computers work. So why do they ignore them as a model of computation?
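To make "implementation of propositional logic" concrete, here is a minimal sketch in Python (the function names are mine, purely for illustration): every connective is wired from NAND alone, and a truth-table check confirms the usual identities.

```python
def nand(a, b):
    return 1 - (a & b)

def not_(a):        return nand(a, a)
def and_(a, b):     return nand(nand(a, b), nand(a, b))
def or_(a, b):      return nand(not_(a), not_(b))
def implies(a, b):  return or_(not_(a), b)

# truth-table check against the propositional connectives
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert implies(a, b) == int((not a) or b)
```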
My guess is that they don't talk about them because they only awkwardly harmonize with the classical models of computation: FSMs, TMs and the like, and the neat hierarchy of computability they place them in (which languages in the Chomsky hierarchy they recognize).
For example, Turing machines have two kinds of state: objects called "states", and the states of the tape cells. The former form a finite set, while the latter are unbounded in number. Pushdown automata make a similar distinction between two kinds of state (control states and the stack). Logical circuits, on the other hand, don't distinguish two kinds of state in such a way. It's all just circuits.
The classical abstract machines have other problems as well. Arguably, among abstract machines, a real CPU is most similar to a universal Turing machine, because it can execute arbitrary programs. But according to the theory of computation, CPUs are merely equivalent to finite-state machines, because they lack an equivalent of the infinite tape. Infinity is a strange detail to emphasize here: logical circuits are merely "potentially infinite" (you can always add more gates) in the same way finite-state machines are merely potentially infinite (you can always add more states), yet that doesn't make the two similar to each other. The relevant difference seems to be that circuits with delay allow for "memory" (like flip-flops), while finite-state automata don't.
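The flip-flop point is easy to make concrete: two cross-coupled NAND gates already form an SR latch, i.e. one bit of memory. A toy sketch in Python, where iterating the gate equations until they settle is my stand-in for propagation delay:

```python
def nand(a, b):
    return 1 - (a & b)

def sr_latch(s, r, q, qbar):
    """Active-low set/reset inputs; returns the settled (q, qbar)."""
    for _ in range(4):                       # let the feedback loop settle ("delay")
        q, qbar = nand(s, qbar), nand(r, q)
    return q, qbar

q, qbar = sr_latch(0, 1, 0, 1)     # pulse "set" (active low): q becomes 1
q, qbar = sr_latch(1, 1, q, qbar)  # both inputs idle: the bit is remembered
assert q == 1 and qbar == 0
```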
I would like to see a course or book on theoretical computer science that tackles this issue: "From NAND to Turing Machines".
> But arguably, logical gates are much more conceptually primitive than most of those automata above! They are basically an implementation of propositional logic, albeit potentially with a dimension of time and delay. And they are nonetheless somewhat close to how actual computers work. So why do they ignore them as a model of computation?
Simple: logic circuits for propositional logic are not Turing-complete, and in a lecture on theoretical computer science one wants to teach more sophisticated, mathematical models of computation.
> Simple: logic circuits for propositional logic are not Turing-complete, and in a lecture on theoretical computer science one wants to teach more sophisticated, mathematical models of computation.
Logical circuits with delay (sequential [1] rather than combinational logic) are indeed not Turing complete -- but only in the same uninteresting sense that a CPU is not Turing complete: in a conceptually irrelevant way. As I argued, Turing machines are precisely not "much more sophisticated" than CPUs or the circuits they are made of. Turing machines merely have infinite rather than potentially infinite memory.
You could modify the concept of a Turing machine such that its tape is potentially rather than actually infinite, and it would change nothing of substance. Yet this machine would suddenly only be considered equivalent to a finite-state machine.
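Here is a minimal sketch of such a machine in Python (the transition table and tape encoding are mine, just for illustration). The tape is a dict that grows only when a cell is first touched, so it is finite at every instant, and the two kinds of state are plainly visible as the finite control versus the unbounded tape:

```python
# delta maps (state, symbol) -> (next state, symbol to write, head move);
# the machine halts as soon as no rule applies.
def run(delta, state, tape, pos=0):
    while (state, tape.get(pos, "_")) in delta:
        state, tape[pos], move = delta[(state, tape.get(pos, "_"))]
        pos += move
    return tape

# example machine: binary increment, least-significant bit at cell 0
inc = {
    ("carry", "1"): ("carry", "0", +1),  # 1 + carry = 0, propagate the carry
    ("carry", "0"): ("done",  "1",  0),  # 0 + carry = 1, stop
    ("carry", "_"): ("done",  "1",  0),  # blank cell: the tape grows on demand
}
tape = dict(enumerate("111"))            # the number 7, LSB first
assert run(inc, "carry", tape) == dict(enumerate("0001"))  # 7 + 1 = 8
```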
Apart from that, a perhaps theoretically interesting point is that combinational logic (circuits without time/delay) is apparently computationally weaker than finite-state machines. Wikipedia doesn't say anything about that explicitly, but the "classes of automata" image [2] included in the article does suggest it. That, assuming it's true, should be taught in theoretical computer science class, not in computer engineering (if the latter mentions it at all).
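If it is true, the intuition at least is simple: a combinational circuit is a pure Boolean function of a fixed number of inputs, while even a one-state-bit FSM can consume an input sequence of any length. A toy illustration in Python (names mine):

```python
def parity_combinational(a, b, c):   # a fixed 3-input circuit, no state:
    return a ^ b ^ c                 # it can never handle a 4th input

def parity_fsm(bits):                # one bit of state, inputs of any length
    state = 0                        # the stored value (a flip-flop, say)
    for b in bits:                   # one clock tick per input symbol
        state ^= b                   # next-state logic: a single XOR gate
    return state

assert parity_combinational(1, 0, 1) == parity_fsm([1, 0, 1]) == 0
assert parity_fsm([1, 0, 1, 1, 0, 1]) == 0   # no fixed-arity circuit covers this
```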
But, much more importantly, they -- the theoretical computer science professors, not the engineering guys -- should explain the theoretical relation between circuits with delay (sequential logic) and various automata, including Turing machines.
The likely fact is that they ignore circuits because circuits don't fit well into their supposedly neat picture of abstract machines. Hence, I suspect, their appeal to infinity as the important difference between the models of computation in question.
as you say, the chomsky hierarchy is surely an influence, but i don't think that's the main thing
i think it's probably because a turing machine can do universal computation and can be described completely in a couple of paragraphs, while you need at least several hundred logic gates to describe something that can do universal computation (if you give it infinite memory), and then you still don't have a convincing argument that what it does is universal in any interesting sense
until you bring in turing machines, anyway
you can write a turing machine in about one line of c; the description of a computer in terms of nands doesn't give you that
That seems like a somewhat strange comparison. Logical circuits are arguably not much more complex than finite-state machines, and theoretical computer science professors have no aversion to those. So it doesn't seem like they find logical circuits too primitive. If anything, they seem to strive for primitiveness: Turing machines are much more conceptually primitive than things like register machines or actual CPUs.
Moreover, a logical circuit itself is actually easier to describe than a Turing machine. Both logical circuits with delay and Turing machines can do "universal computation" for any reasonable sense of the term (demanding an infinite element is arguably not reasonable).
And giving a "convincing argument" that they are universal is quite difficult even for Turing machines. What would such an argument even be? In practice, it's less any single (positive) "convincing argument" than the absence of known counterexamples that makes us believe the Church-Turing thesis (that any problem we would intuitively consider "computable" is also computable with a Turing machine or logical circuit, and vice versa).
a nand gate is not too primitive, of course, but building an actual cpu out of nand gates involves several hundred of them, and that circuit is harder to describe than a universal turing machine
turing's original paper gave a convincing argument that turing machines were universal; he said that they could do anything a mathematician could do, since the mathematician can only hold a finite number of symbols from a finite set in his memory at once, and can only fit a finite number of symbols from a finite set on a page in his field of view. so the squares on the turing-machine tape were originally pages in a notebook
strangeness doesn't guarantee insightfulness, but it's a necessary precondition to it; if you continue to dismiss all strange arguments you will dismiss everything from which you could possibly learn anything, preserving your ignorance like a precious jewel
> a nand gate is not too primitive, of course, but building an actual cpu out of nand gates involves several hundred of them, and that circuit is harder to describe than a universal turing machine
Building a CPU with a Turing machine would also be quite complex -- probably less complex than with NAND gates, but only because logical circuits are more primitive.
> turing's original paper gave a convincing argument that turing machines were universal; he said that they could do anything a mathematician could do, since the mathematician can only hold a finite number of symbols from a finite set in his memory at once, and can only fit a finite number of symbols from a finite set on a page in his field of view. so the squares on the turing-machine tape were originally pages in a notebook
Yet you didn't mention the one property that mainstream theoretical computer science (professors) hold to be the relevant one: that the tape is infinitely long. Without it, the machine is considered equivalent to a mere finite-state machine -- which is, of course, absurd. I think the main difference between FSMs and Turing machines (or logic circuits with delay) is that the latter allow for some sort of "memory", which is (I would guess) a special type of state that finite-state machines don't support.
Judging from the overview, they seem to implement classical abstract machines using Ruby. Implementing something relatively simple (an abstract machine) with something complex (Ruby) is quite unilluminating. I was asking for someone who implements them using logical circuits, while also explaining how their different "power" (in terms of the languages in the Chomsky hierarchy they recognize) is reflected in the circuits.
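For concreteness, this is the kind of construction I mean, sketched in Python rather than in actual hardware (the state encoding and gate equations are mine): the DFA "does the input contain the substring 11?" compiled into a state register plus NAND-only next-state logic.

```python
# State encoding (s1, s0):
#   00 = last symbol was 0, no "11" yet
#   01 = last symbol was 1, no "11" yet
#   1x = "11" seen (absorbing, accepting)

def nand(a, b):
    return 1 - (a & b)

def step(s1, s0, x):
    # next s1 = s1 OR (s0 AND x); next s0 = x
    # using AND(a,b) = NOT NAND(a,b); OR(a,b) = NAND(NOT a, NOT b); NOT a = NAND(a,a)
    s0_and_x = nand(nand(s0, x), nand(s0, x))          # s0 AND x
    n1 = nand(nand(s1, s1), nand(s0_and_x, s0_and_x))  # s1 OR (s0 AND x)
    n0 = nand(nand(x, x), nand(x, x))                  # just x, via a double NOT
    return n1, n0

def accepts(bits):
    s1 = s0 = 0                    # the register's (flip-flops') initial contents
    for x in bits:                 # one clock tick per input symbol
        s1, s0 = step(s1, s0, x)   # the register loads the next-state outputs
    return s1 == 1                 # accept iff the absorbing state was reached

assert accepts([0, 1, 1, 0]) and not accepts([1, 0, 1, 0])
```

The same recipe works for any finite-state machine: encode the states in flip-flops and synthesize the next-state truth table out of gates. What it conspicuously cannot give you is the unbounded tape, which is exactly where the circuit picture and the Turing-machine picture would have to be reconciled.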