Building a ternary computer at home: a software emulator (github.com/ssloy)
153 points by haqreu on April 13, 2020 | hide | past | favorite | 30 comments



Beautiful!

I once tried using ternary to explain to a semi-retired mainframe programmer how we wanted to modify a binary field to have more than two choices. He was so mad and insisted you can't just MAKE UP MATH when explaining your project goals. This project would have made his head explode.


I do think a set/enum is a better structure than a ternary field for T/F/null, even if you defined the elements of the set to be { TRUE, FALSE, NULL }. Because, I believe, most of us see a field containing T/F and assume it's a binary field, not a ternary one. Even if someone notices that the field is null-able, they might assume it's a mistake.

Unless you've been burned by that before and remember to check those assumptions.


I prefer having a nullable version of other types so it has to be explicitly wrapped. bool vs nullable(bool)
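Both approaches from the comments above can be sketched in a few lines of Python (the names `Tristate` and `describe` are made up for illustration; neither comment specifies an implementation):

```python
from enum import Enum
from typing import Optional

# Option 1: an explicit three-valued enum, so no reader can
# mistake the field for a plain boolean.
class Tristate(Enum):
    TRUE = 1
    FALSE = 2
    NULL = 3

# Option 2 (the parent comment's preference): wrap the boolean so
# the third state lives in the type, not hidden in nullable data.
def describe(flag: Optional[bool]) -> str:
    if flag is None:
        return "unset"
    return "yes" if flag else "no"

print(describe(True))   # yes
print(describe(None))   # unset
```

The wrapped version forces every caller to handle the `None` case explicitly, which is the point of making the third state visible.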


If you're interested in research, the International Symposium on Multivalued Logic has been studying this area for 50 years: http://www.mvl.jpn.org/ISMVL2020/

As far as practical applications of non-binary circuitry, the Intel 8087 math co-processor used 4-level circuitry in its ROM. (This chip was used in the IBM PC.) Intel needed to do this to fit the large microcode ROM on the chip. The chip's logic, though, was regular binary. http://www.righto.com/2018/09/two-bits-per-transistor-high-d...


> As far as practical applications of non-binary circuitry, the Intel 8087 math co-processor used 4-level circuitry in its ROM. (This chip was used in the IBM PC.) Intel needed to do this to fit the large microcode ROM on the chip.

Also, as noted at the bottom of that article, most modern flash storage uses multi-level cell technologies that allow two, three, or four bits of data to be stored per memory cell rather than just one. This obviously significantly increases data density and allows for cheaper drives at the cost of reduced write performance, reduced endurance, and more error correction being necessary for reliable operation.


Awesome, I hope it works.

How do you make ternary gates out of a DG403? (It’s an analog switch that would pass ternary, but the switch control inputs are binary.)

https://www.vishay.com/docs/70049/dg401.pdf


Here is the schematic:

https://hsto.org/webt/4a/mb/ad/4ambad_bocdk6_0khdqve6grzjs.p...

Two ternary multiplexers built from two DG403 chips.


"It has four main registers R1-R4 and nine extra registers R5-R13"

I found that ‘4’ a weird choice for a ternary computer, until I read that R1 is special.

Register-to-register moves always involve R1. There are 12 such moves into R1 and 12 out of R1. That leaves 3 trit patterns in a 3-trit value for “increment R1”, “decrement R1” and “NOP”.

Also, this architecture doesn't allow reading data from, or writing data to, memory. The memory is only there for storing programs. So, it's almost as if this has one register, 12 words of data memory and 729 words of program memory.

Weird, but economical on the hardware, I guess, so it keeps the cost and amount of work down. Also, there’s room for extension commands, so the above may change.
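The opcode arithmetic in the comment above checks out; here is the counting as a quick sketch (the counts come from the comment, and the mnemonics INC/DEC/NOP are just the three leftover operations it names):

```python
# A 3-trit instruction word has 3**3 = 27 possible patterns.
TRITS_PER_INSTRUCTION = 3
patterns = 3 ** TRITS_PER_INSTRUCTION

# With 13 registers total, moves always involving R1 pair it
# with one of the other 12 registers, in each direction.
moves_into_r1 = 12    # R1 <- R2..R13
moves_out_of_r1 = 12  # R2..R13 <- R1

remaining = patterns - moves_into_r1 - moves_out_of_r1
print(remaining)  # 3 -> exactly enough for INC R1, DEC R1 and NOP
```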


Very cool project! I'm curious about the relatively small registers though. 3 trits feels pretty limiting, and though there's the 6-trit word example, it feels like it would get pretty cumbersome on the software side pretty quickly vs. something like native 6- or 9-trit registers. Perhaps something relating to complexities on the hardware side?
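For scale, assuming balanced ternary (which this project uses), an n-trit register covers a symmetric range of (3^n - 1)/2 around zero; a sketch:

```python
# Representable range of an n-trit balanced-ternary register:
# each trit is -1, 0 or +1, so values run from -(3**n - 1)//2
# to +(3**n - 1)//2.
def trit_range(n):
    m = (3 ** n - 1) // 2
    return -m, m

print(trit_range(3))  # (-13, 13)
print(trit_range(6))  # (-364, 364)
print(trit_range(9))  # (-9841, 9841)
```

So 3 trits give only -13..13, which is why the 6-trit word tricks come up so quickly in practice.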


Would logic with more than 2 states let us continue to increase computational density?

I know we are doing this for SSDs but they're simple and (mostly) homogeneous.


Adding more states to an electronic system is trading robustness and noise insensitivity for better performance.

Think about it this way: as the number of states approaches infinity, you're back to analog computing.

It's a design parameter, not something that lets us break past limits on computational density, which right now is heat removal and quantum tunneling in transistors.


What do you think would be harder to implement in a fab on a budget in the future? A chip full of large gates that can handle 7 voltage levels stably, or a chip with 5x [1] as many of the smallest gates that physics and logistics allow one to build?

[1] If memory serves, adders and cache access and a bunch of other logic typically require O(n·log₂n) gates in binary, but O(n·log₃n) in ternary, which means as the native integer size increases, bases greater than 2 scale better.
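Taking the footnote's growth rates at face value, the gap between the two is easy to compute (a sketch, assuming those O(n·log n) figures apply):

```python
import math

def binary_gates(n):
    return n * math.log2(n)

def ternary_gates(n):
    return n * math.log(n, 3)

for n in (16, 64, 256):
    print(n, round(ternary_gates(n) / binary_gates(n), 3))
# The ratio is log_3(2) ~= 0.631 for every n, i.e. roughly a
# constant ~37% fewer (but more complex) gates, not an
# advantage that grows with word size.
```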


There's a huge constant factor involved, which is why we still use binary. As in many algorithms, asymptotic analysis doesn't tell the full story.


What could be the main benefits of Ternary vs. Binary computing in real world applications?


It is mathematically the most "economical" integer base in terms of space, when considering complexity of digit representation along with word length [1].

[1] Hayes, B. (2001). Third base. American Scientist 89: 490-494 (http://lrss.fri.uni-lj.si/sl/teaching/ont/lectures/third_bas...)
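The "radix economy" argument from Hayes' article reduces to a one-line function: the cost of representing N in base r is roughly r·log_r(N) = (r/ln r)·ln N, so only the factor r/ln r matters. A quick check:

```python
import math

# Radix economy: cost per number is proportional to r / ln(r).
def economy(r):
    return r / math.log(r)

for r in (2, math.e, 3, 4, 10):
    print(f"base {r:>5.3f}: {economy(r):.4f}")
# e minimizes r/ln r (~2.718); among integers, 3 (~2.731)
# narrowly beats 2 (~2.885), hence "most economical integer base".
```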


Of course the "optimal one" would be base "e" but nobody is going to be able to do anything remotely reasonable with it... (pg. 491 of that manuscript).


Well, you could... if you want to give up discrete integer representation...

Anyway, whether ternary is actually optimal in practice depends just as much on how efficiently ternary logic can be implemented on a silicon process. If you increase integer storage efficiency by 10% but circuit density decreases, maybe you're not getting enough value.


Base 7 is fairly close to e^2, and 20 is almost exactly e^3. Base 20 would fix many problems with floating point math, wouldn't it?

By the time someone built a base 20 computer it would probably be an anachronism anyway.


Okay but what's the significance of (non-unity) powers of e? e is the optimum of a specific function[1], and you lose the benefits for higher values.

It's true that you can conveniently represent the numbers by using integer powers of the base (like representing binary with hexadecimal), but the whole reason e is a "good" base in the first place is because you're (partially) minimizing the number of distinct symbols needed to represent a number, and you lose that once you go to a higher base.

Plus, using a slightly-off integer like 7 breaks the integer-power-mapping anyway.

[1] x ^ (N/x) for any N.
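The footnote's claim is easy to verify numerically (a sketch; the choice N = 10 is arbitrary, since the optimum is at x = e for any N > 0):

```python
import math

# The footnote's function x**(N/x) peaks at x = e for any N > 0:
# maximizing (N/x)*ln(x) gives N*(1 - ln x)/x**2 = 0, i.e. x = e.
N = 10.0
f = lambda x: x ** (N / x)

for x in (2.0, math.e, 3.0, 7.0):
    print(f"{x:.3f}: {f(x):.2f}")
# f(e) is the largest; 7 is not special under this criterion,
# whatever its proximity to e**2.
```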


I'm replying from memory, so this might be worth researching for yourself. I remember a cryptography conference where someone talked about how important it is to use constant-time algorithms to avoid leaking information about the number of bits set in keys and messages, and demonstrated that using balanced ternary allowed some really cool techniques.

Don't know if any of that has made it into the literature, but you could have a look.


It is also an interesting way to represent Canonical Signed Digits (where each digit can be 0, 1 or -1).
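A minimal sketch of that representation (this is the standard non-adjacent-form conversion, not anything from the linked project): each digit is -1, 0 or +1, and no two adjacent digits are both non-zero, which is what makes it canonical.

```python
def to_csd(n: int) -> list[int]:
    """Canonical signed-digit (non-adjacent form) of n:
    digits in {-1, 0, 1}, least-significant first,
    with no two adjacent non-zero digits."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)  # +1 or -1, chosen so the next bit is 0
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

print(to_csd(7))   # [-1, 0, 0, 1]   i.e. 8 - 1
print(to_csd(12))  # [0, 0, -1, 0, 1]  i.e. 16 - 4
```

CSD is popular in multiplier hardware because the guaranteed zeros between non-zero digits cut the number of partial products.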


Fun!


trinary.cc (don't go there except in the wayback machine, it's dead and replaced with a spam site) used to be a great reference on three-valued logic.


Of course, people DID build ternary computers.

The most notable of these is Setun' (I had a chance to play with it at Moscow State University).

https://en.wikipedia.org/wiki/Setun


Setun is the only example, and it had a binary memory.


There was also the Canadian QTC-1. [0]

[0] https://jglobal.jst.go.jp/en/detail?JGLOBAL_ID=2009020829793...


This is ROM only, not a fully functioning computer.


Head to the end of the paper about the ROM [0], and you'll find the QTC-1 was the computer said ROM was designed for, not the ROM design itself.

[0] https://wwwee.ee.bgu.ac.il/~kushnero/ternary/Using%20CMOS%20...


AFAIK it was never built, only partially designed.


Wow something that runs IOTA natively (lol)



