If you like his stuff you should check out the sigbovik proceedings each year. Most of his videos are about his papers there, and he has even more papers he doesn't make videos about! I don't think anybody else there is writing papers as interesting as his (about half of the proceedings are just jokes with no real content), but there are other gems in a similar "work hard on a fundamentally silly problem" vein.
This one is my least favorite of his, actually. The big problem is that it just didn't really work. Tom7 seems to start these projects only a couple weeks before the sigbovik deadline (an incredible feat I marvel at regularly), so it is no surprise that some of them end up falling apart (the Harder Drives with the cue carts failed too), but I really wish this one had worked.
The C compiler that only emits printable bytes is my favorite of his by a mile. The executables can only jump forward, and only by an offset between two modestly large numbers. This makes laying out the code in the executable a really interesting problem, and it relies on weird old behavior where the instruction pointer can wrap on overflow. Wild!
One of my favorites was an old one (I think 2013?) about "DollarCoin", a new cryptocurrency that is minted using a video of someone lighting a dollar bill on fire.
I just got a terrible idea: combine this with another project that built Tetris in Conway's Game of Life. That would enable executing an arbitrary program in Tetris, simulated in Game of Life.
This is an excellent link, well known in the "community" around this.
To add to it, here's a famous and really clear sub-two-minute video of the scales involved, for those who've never seen them: https://www.youtube.com/watch?v=xP5-iIeKXE8 (and that's "just" the Game of Life in the Game of Life).
I think I would enjoy helping push the state of the art in this field, but it's far beyond my abilities for the moment - which may never change without years of research in my free time, something I realistically can't do until I'm retired.
If it weren't, I would love to work on a new library built specifically for ideas like yours, with an API aimed at working with metapixel-based "hardware", but with the lowest-level constructs (maybe by far) being things like RAM or CPU caches as building blocks - while still being able to zoom down to individual cells in the renderer. But it's basically R&D and HPC combined if one wants to offer an interesting alternative to the established libraries.
From what I know of upcoming hardware, such a library may have to wait a couple of years and basically target (as in, full buy-in, maybe at the programming-language level, certainly in tooling) the next Apple Silicon-like (or beyond) breakthrough in commonly accessible hardware for a nice, "free" performance boost. Maybe GPUs are next?
That's really interesting, I did not know about that, thanks for the link! It's highly specific but could make a lot of cool "wow-effect" products or software suddenly viable, especially if it percolates to your average consumer hardware one way or another.
the claim is that this is theoretically possible in stock tetris with only one modification: an infinite board (with a floor).
the infinite board solves piece randomization, because you can have a "junkyard" where you dump pieces until you get the one you want. it also solves the time pressure, because blocks spawned at row infinity will never reach the canvas until you hard-drop them. finally, it prevents rows from clearing, because they have infinite width (so the game won't interfere with your construction).
their concept for gates is that if you nudge a block at a certain row as it's falling, it will either shift one column over or be blocked. this page is a good visualization:
> In practice, IL programs direct the agent to construct a pile near the origin that progressively grows taller and wider. That being the case, it can work with a Tetris implementation that defines “row infinity” as a finite row whose index increases as a function of the number of spawns. In such an implementation, the agent emulates a semihard drop with a finite number of soft drops.
For the method described in the paper to work, the infinite version of the game requires a semihard drop operation or something analogous as a legal move. In modern versions of Tetris, the rotation systems permit very unusual operations such as wall climbing or infinite lock delays, not to mention rotating in physically impossible ways. A semihard drop is not a big ask on an infinite playfield. It's within the spirit of the game.
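A toy sketch of that emulation (hypothetical Python, invented names; the paper's construction is more careful): if "row infinity" is materialized as a finite spawn row that grows with every spawn, a semihard drop to any target row is just a bounded loop of soft drops:

    # Toy model: "row infinity" is a finite spawn row that grows as a
    # function of the number of spawns, so it recedes forever yet every
    # drop from it takes only finitely many soft drops.
    class ToyPlayfield:
        def __init__(self, spawn_row=1000, growth=10):
            self.spawn_row = spawn_row
            self.growth = growth

        def spawn(self):
            self.spawn_row += self.growth  # "row infinity" moves up
            return self.spawn_row          # starting row of the new piece

    def semihard_drop(piece_row, target_row):
        while piece_row > target_row:
            piece_row -= 1  # one soft drop per iteration
        return piece_row

    field = ToyPlayfield()
    print(semihard_drop(field.spawn(), 42))  # -> 42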
You want to know how the Tetromino does memory allocation and garbage collection?
I don't think the infinite board is that big of a deal. Turing Machines and Lambda Calculus have no limits on their memory space and we implement analogues of them in our limited memory space.
i'm talking about the theoretical model, not the implementation details. i'm focusing on the definition of "infinity" here:
> On an infinite playfield, tetrominoes spawn at “row infinity” and column zero. When a newly spawned piece falls, it never gets closer to the floor due to the nature of infinity. This means, in finite—though potentially vast—time, the agent can shift the piece into any finite column. And once in position, the agent can hard drop the piece.
gate construction relies on "nudging" a block at a specific row as it is falling. but if blocks start at row infinity, the only way to get them down should be to hard-drop them -- which precludes nudging.
they've defined a new "semi-hard-drop" operation, and introduced it as a "generalization" of legal moves, but it isn't.
Why isn't it? It just hides the passage of time. Oh no, the computer gets a higher score... but it's not playing Tetris for the score; the high score doesn't matter.
For instance: "For instance, if the agent emulates a semihard drop with a finite number of soft drops, then it will let a gravity drop serve as the final soft drop. "
And you can't have an infinite tape, but no one is out there saying the Turing Machine is fake.
Consider this: our physical implementations of Turing machines work even though we don't have infinite memory. We just run into some limitations because of it.
The Tetrino also works. You can create it and run it on a finite board. It would be limited, just as our physical machines are limited without access to infinite memory.
i'm not saying it's impossible to model an infinite board in software (that's quite possible). i'm saying that if it's possible to soft-drop a block to an arbitrary row, then the model of infinity is faulty and inconsistent. and, unfortunately, both the infinite board and the soft-drop are fundamental underpinnings required by the proposed mechanism for computation.
And I'm saying you don't need the board to be infinite to do work, you just need it to be infinite to compute all work. Just like you need an infinite tape for a Turing machine to be able to compute all work.
NES Tetris has an RNG that can be manipulated so you always get the piece you want, as long as you never need duplicates. You don't even need a junkyard.
> the infinite board solves piece randomization, because you can have a "junkyard" where you dump pieces until you get the one you want. it also solves the time pressure, because blocks spawned at row infinity will never reach the canvas until you hard-drop them. finally, it prevents rows from clearing, because they have infinite width (so the game won't interfere with your construction).
Not to mention that you need unbounded memory anyway.
It seems like I quite often see posts here about how $thing$ can perform universal computation, or simulate a UTM, or whatever. And obviously the HN readership selects for these articles.
But is there any significance to be found in all these things having this attribute? Does it tell us anything about the universality of computation as a property of the universe?
I seem to recall that sed and C++ template instantiation are known to be Turing complete - and I can kind of understand that, as they are designed to manipulate information, even if it is surprising that their limited syntax provides enough "power" for universality. But Tetris was not designed with this capability, yet it has it. Same with Minecraft - within which it is apparently possible to implement a Turing machine.
I've seen claims that DNA replication and other naturally occurring systems may be capable of universal computation too.
"Does it tell us anything about the universality of computation as a property of the universe?"
It tells us that computation is counter-intuitively easy to access. Our intuition says that computers are hard; look how hard they are for us to build, after all! But instead the reality is that if you try to exclude computation from a non-trivial system, it's actually fairly hard.
(I think part of this intuition may also be based on our intuitive experience mostly being with von Neumann machines. Contra some other contrarians, I think we work with them for a good reason; as wild as they can be, they are much tamer than many other systems capable of universal computation. Compare with trying to work in Rule 110 directly as our primary computation method, or cellular automata in general. Many of these accidental TC systems are very strange and would be hard to work with. Getting a universal computation system nice and friendly to humans accidentally is pretty hard. Getting a universal computation system without that qualification is like falling off a bike.)
https://www.gwern.net/Turing-complete is a good overview of this concept and includes many interesting consequences. I also particularly recommend the section titled "Security Implications".
I guess the main takeaway is that you do not need much in order for something to be Turing-complete. On the other hand, this should hardly be surprising: all you need is something to store a state, the ability to alter the state in a few different ways depending on the current state, and of course the ability to do this over and over again. The more surprising thing should maybe be that this is all you need and that adding more features does not buy you any additional power. Then again, having some state and being able to manipulate it conditionally is a very generic building block - what else could you possibly do? So maybe it should not even be that surprising that this is all you need.
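To make those ingredients concrete, here is a minimal sketch in Python: a dict as the tape, a transition table for the conditional updates, and a loop. The table is the classic 2-state busy beaver, which halts after six steps with four 1s written:

    # A minimal Turing machine: state, conditional updates, and a loop.
    # Transition table: (state, symbol) -> (write, head move, next state).
    # This is the classic 2-state busy beaver.
    table = {
        ("A", 0): (1, +1, "B"),
        ("A", 1): (1, -1, "B"),
        ("B", 0): (1, -1, "A"),
        ("B", 1): (1, +1, "HALT"),
    }

    tape, head, state = {}, 0, "A"
    while state != "HALT":
        write, move, state = table[(state, tape.get(head, 0))]
        tape[head] = write
        head += move

    print(sum(tape.values()))  # -> 4, after six steps

That's the whole model; everything else in computing is convenience layered on top.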
Interestingly, DNA computation can behave like a non-deterministic Turing machine. [0] However, as you mention, there is a clear trade-off between resources and time complexity. In this instance, the amount of DNA needed to solve an NP-Complete problem with a large input, e.g. n = 256, would require more atoms than there are in the observable universe given that the amount of DNA needed grows at a rate of 2^n.
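A rough back-of-the-envelope check of that scale, assuming (made-up round numbers) strands of ~300 bases and ~30 atoms per base, against the commonly cited ~10^80 atoms in the observable universe:

    # 2^n candidate strands for n = 256; per-strand sizes are assumptions.
    n = 256
    strands = 2 ** n            # ~1.2e77 candidate strands
    atoms = strands * 300 * 30  # ~1.0e81 atoms of DNA
    print(f"{atoms:.1e}")       # comfortably past ~1e80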
Any Turing machine can superficially behave like a non-deterministic Turing machine; you just have to bite the bullet and brute-force the solution in exponential time or use an exponential number of processing units. But this is exactly what separates the two types of machines: requiring or not requiring an exponential amount of resources. So when I said they can be superficially similar, that actually means they are not similar, as the one thing we ignored - the required resources - is exactly the thing that separates them.
So I think saying DNA computation can behave like a non-deterministic Turing machine is only true in the same sense that a deterministic Turing machine can behave like a non-deterministic Turing machine, i.e. not very true. The difference is the trade-off - exponentially slower vs exponentially more processing units - and there might be some use for this if you have, for some n, enough room for exponentially many DNA molecules but not enough time to wait for exponentially many clock cycles.
The primary advantage of a DNA computer might be that you can make use of all three dimensions, while processors are mostly limited to two dimensions. I did not read the paper you linked to, but the DNA computations I have seen were all very slow processes, so at least in the cases I have seen, the DNA computation has to make up for many orders of magnitude of processing speed with parallelism before there is even the possibility of an advantage.
"The more surprising thing should maybe be that this is all you need and that adding more features does not buy you any additional power."
The "additional power" is what questions such as P-completeness and NP-completeness is about, right? Turing-completeness, as you say, is just a very minimal set of requirements on which additional requirements are added. Otherwise we'd all just be using massively parallel arithmetic calculators as our computers.
I am not sure that I understand your question. There are different models of computation [1] like Turing machines, lambda calculus or rewriting systems (finite state machines are the odd one out, being strictly weaker). A priori it could be possible that they all have different powers, that each of them is capable of solving different problems or requires different amounts of resources like time and space to solve the same problem. But, as it turned out, this is not the case: within some polynomial factor they can all solve the same problems with the same amount of resources.
Only if you add some magical feature, like an oracle that provides the answer to an NP-complete problem in constant time, do you get something more powerful than a Turing machine, at least as far as we know. There you get the difference between problems in P, which can be solved in polynomial time on a Turing machine, and problems in NP, which can be solved in polynomial time with the magic device but - as far as we know - require exponential time on a Turing machine.
Quantum computers occupy a middle ground: as far as we know they are more powerful than classical machines for some problems, but they are not magic. Using massively parallel computers is a trade-off between space and time - you need more silicon but less time - but in overall resources you are not any better off. In practice it might still matter: being able to predict the weather for tomorrow is much more useful if you can finish the calculation before tomorrow, so using more silicon and less time is a trade-off that gains you something. Also some technicalities apply, like Turing machines having an infinite amount of memory, but this does not really affect the larger picture.
You know far more than I do. From what little I understand the specialness of a Turing machine is that it can be used as a universal calculator, not that it can be used as a universal calculator fast.
I guess the point I was trying to make is that even if CPUs only use a Turing-complete set of gates at the fine scale, the way those gates are arranged and connected (for things like massively parallel operations) is what sets a modern CPU apart from a basic Turing machine, allowing it more computational speed than an equivalent number of parallel basic Turing machines.
> From what little I understand the specialness of a Turing machine is that it can be used as a universal calculator, not that it can be used as a universal calculator fast.
This depends on how you want to understand fast. A Turing machine updates only a few bits per cycle while a modern computer easily updates many thousands if not millions of bits per cycle. A Turing machine also accesses its memory very slowly because the head moves one cell per cycle - good luck reading a bit stored five gigabytes away from the current position of the head. So yes, in that sense a Turing machine is slow - random access memory in particular is a big advantage in practice - but in theoretical terms, having to scan through the entire memory just gets swept under the rug of polynomial factors.
On the other hand, you can build a Turing machine that emulates your modern computer, even though it might need a trillion cycles to emulate a single cycle of it. But mathematically you can just factor this out: call a trillion cycles one cycle, and now both are the same speed - mathematically, of course, not in any physical implementation.
The other way, which you allude to, using many Turing machines in parallel, is probably more realistic. You can probably build a small Turing machine with the equivalent of a few hundred or maybe a few thousand gates. And you can program them to simulate a single logic gate in maybe ten or so cycles. Now rebuild your real computer with all gates replaced by those small Turing machines.
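As a toy illustration (an invented layout, not a real gate-level design), here is a tiny Turing machine in the same style as the sketch above, computing the NAND of two tape cells in three steps:

    # Toy sketch: a Turing machine computing NAND of two tape cells.
    # Tape layout: [a, b, out]; the head starts at cell 0.
    table = {
        ("read_a", 0): (0, +1, "seen0"),  # remember a = 0
        ("read_a", 1): (1, +1, "seen1"),  # remember a = 1
        ("seen0", 0): (0, +1, "write1"),  # a = 0: NAND is 1 regardless of b
        ("seen0", 1): (1, +1, "write1"),
        ("seen1", 0): (0, +1, "write1"),  # a = 1: NAND is NOT b
        ("seen1", 1): (1, +1, "write0"),
        ("write0", 0): (0, 0, "HALT"),
        ("write0", 1): (0, 0, "HALT"),
        ("write1", 0): (1, 0, "HALT"),
        ("write1", 1): (1, 0, "HALT"),
    }

    def nand(a, b):
        tape, head, state = [a, b, 0], 0, "read_a"
        while state != "HALT":
            write, move, state = table[(state, tape[head])]
            tape[head] = write
            head += move
        return tape[2]

    print([nand(a, b) for a in (0, 1) for b in (0, 1)])  # -> [1, 1, 1, 0]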
My gut feeling is that you could probably build an 80s or 90s era processor this way using current technology. A 1982 Intel 80286 has 134,000 transistors, which puts the number of gates well below 100,000, while a GeForce RTX 4090 has 16,384 shader cores, each way more powerful than our tiny gate-simulating Turing machine needs to be. So if you are willing to give up 30 years of Moore's law, you can probably have a Turing-machine-powered computer.
Heck, we have transistor-level simulations of old processors [1][2] - somebody with enough enthusiasm could probably do this at the gate level, and then replacing gates with Turing machine simulations would be easy. Finally you would have to get them running on a GPU for decent speed, though I am not sure if GPUs would be any good for that kind of task; I have never programmed one. But this would be a processor running some code, simulated at the gate level, with the gates implemented by Turing machines simulated on a real processor.
The only difference between a Turing machine and a finite state machine is that the Turing machine has a writeable tape with a movable head.
There is not much to it.
I think the deciding factor is that you can compose individual operations into new operations and use any previous result as input into the next computation.
Imho the fact that a Turing machine is capable of universal computation is surprising/insightful. I'm not sure the fact that other simple systems are then also complete adds much more surprise/insight, even if it's difficult to predict which ones will be.
What I think people often miss is the difference between computational completeness and computational "power". Yes, Rule 110 is complete, but what that means in practice is that you've shifted most of the work (for solving a real problem) onto specifying the initial conditions.
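A minimal Rule 110 stepper makes the point (plain Python; treating cells beyond the edges as dead is a simplifying assumption): the update rule is trivial, and all of the computational content would have to be packed into the initial row:

    # Rule 110: each new cell depends only on the three cells above it.
    RULE = 110  # binary 01101110: one output bit per 3-cell neighborhood

    def step(row):
        padded = [0] + row + [0]  # cells beyond the edges count as dead
        return [
            (RULE >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)
        ]

    row = [0] * 31 + [1]  # one live cell; actual computation would need
                          # a painstakingly constructed initial row
    for _ in range(16):
        print("".join(".#"[c] for c in row))
        row = step(row)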
Maybe Tetris is Turing complete but it's still a lot easier to compute 2+2 on a pocket calculator.
Yes, you can indeed implement a Turing machine with DNA by modifying DNA strands to simulate Wang tiles; [0] this is formalized in the abstract tile assembly model. [1] You can also build emojis with DNA. [2]
Yep. I first read that when I was twelve. I wouldn't say I understood much back then, or even now after a number of re-readings. But it's an amazing book. "Sprawling" is one word I would use to describe it.
Contrary to what the title suggests, the article/page is not a proof of Tetris's Turing completeness but an actual implementation, UI and all. It's quite different from the other articles you're mentioning.
> Is there any significance of a beautiful painting of a magnificent sunset over mountain meadow with horses grazing?
Yes, but it was designed that way. By a person. For other people. It's unlikely to be perceived as "beautiful" by a whale or an oak tree or whatever.
But if things that weren't designed to do computation, or even designed at all, nevertheless appear to be capable of it, what does that tell us about computation? Given that the fundamental physical behaviour of the universe can (maybe) be described computationally, is computation special in some way?
I don't think it needs to tell us anything about computation. It can be just a fun thing to do. We've got decades of "surprise this is Turing Complete" results. One more doesn't tell us anything about computation. But it's still fun!
Think of it like writing a quine in some weird language or something. We already know it can be done. But it is still cool!
Imagine you're stranded on an island or in a forest, stuck with only what you find in nature, or perhaps with very primitive technology (e.g. basic metal working only):
What's the simplest way you could build some (mechanical) logic gates to do some form of useful computation?
I've seen some made from Lego, and 3D-printed ones, but they all look very complicated, e.g. requiring rubber bands or springs, and they don't look practical to combine into a useful circuit.
If you have basic metal working facilities you aren't far from being able to draw wire, and then you can make relays. It might take a while to replicate Konrad Zuse's machines, but I think it might be simpler to do that than to go for purely mechanical computers, see https://en.wikipedia.org/wiki/Z3_(computer)
And of course the serious calculator enthusiast would have a Curta. Needs a bit more than basic metal working I suspect: https://en.wikipedia.org/wiki/Curta
Tetris is officially defined by The Tetris Company to have a 10x40 playfield with only 10x20 of it visible. They do not allow for arbitrary sizes. If you have to change the game for it to be capable of universal computation, then you shouldn't claim that the game is capable of it.
Just a note: this doesn't seem to establish an isomorphism with _actively playing_ Tetris with its randomized pieces. It just explains how one could create a Turing machine based on controlled positioning of the pieces. So essentially this is boring.
The Tetris part starts at 15:00, but I'd highly recommend watching it all.