I once tried using ternary to explain to a semi-retired mainframe programmer how we wanted to modify a binary field to have more than two choices. He was so mad and insisted you can't just MAKE UP MATH when explaining your project goals. This project would have made his head explode.
I do think a set/enum is a better structure than a nullable T/F field, even if you define the elements of the set to be { TRUE, FALSE, NULL }. Most of us, I believe, see a field containing T/F and assume it's binary, not ternary. Even someone who notices that the field is nullable might assume that's a mistake.
Unless you've been burned by that before and remember to check those assumptions.
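For what it's worth, here's roughly what the enum version of the grandparent's suggestion might look like in Python (the names are made up for illustration, not anyone's actual schema):

    from enum import Enum

    class Tristate(Enum):
        TRUE = "T"
        FALSE = "F"
        UNKNOWN = "?"  # the third state is explicit, not a null that looks like a bug

    # a reader of this type can't mistake the field for a plain binary flag
    record_state = Tristate.UNKNOWN
    if record_state is Tristate.UNKNOWN:
        print("value not yet determined")

That makes the third state self-documenting, which is exactly the point: nobody has to remember to check whether null is meaningful.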
If you're interested in research, the International Symposium on Multiple-Valued Logic has been studying this area for 50 years: http://www.mvl.jpn.org/ISMVL2020/
As far as practical applications of non-binary circuitry, the Intel 8087 math co-processor used 4-level circuitry in its ROM. (This chip was used in the IBM PC.) Intel needed to do this to fit the large microcode ROM on the chip. The chip's logic, though, was regular binary. http://www.righto.com/2018/09/two-bits-per-transistor-high-d...
> As far as practical applications of non-binary circuitry, the Intel 8087 math co-processor used 4-level circuitry in its ROM. (This chip was used in the IBM PC.) Intel needed to do this to fit the large microcode ROM on the chip.
Also, as noted at the bottom of that article, most modern flash storage uses multi-level cell technologies that allow two, three, or four bits of data to be stored per memory cell rather than just one. This obviously significantly increases data density and allows for cheaper drives at the cost of reduced write performance, reduced endurance, and more error correction being necessary for reliable operation.
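The relationship is exponential, which is why it stops around four bits in practice: storing b bits per cell means reliably distinguishing 2^b analog levels, so the voltage margins shrink fast. A quick sketch (the cell-type names are the standard industry terms; the rest is just arithmetic):

    # voltage levels a flash cell must distinguish to store b bits
    for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
        print(f"{name}: {bits} bit(s)/cell -> {2**bits} levels")

Going from QLC to a hypothetical 5-bit cell would mean telling apart 32 levels per cell, which is where the error-correction cost starts to swamp the density win.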
“It has four main registers R1-R4 and nine extra registers R5-R13”
I found that ‘4’ to be a weird choice for a ternary computer, until I read that R1 is special.
Register-to-register moves always involve R1. There are 12 such moves into R1 and 12 out of R1. That leaves 3 of the 3³ = 27 possible patterns in a 3-trit value for “increment R1”, “decrement R1”, and “NOP”.
Also, this architecture doesn’t allow reading data from, or writing data to, memory; the memory is only there for storing programs. So it’s almost as if this has one register, 12 words of data memory, and 729 words of program memory.
Weird, but economical on the hardware, I guess, so it keeps the cost and amount of work down. Also, there’s room for extension commands, so the above may change.
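To make the 24+3 split concrete, here's a hypothetical decoder for that 3-trit move/misc opcode (the exact ordering is my guess, not the project's; only the counts come from the description above):

    def decode(op):
        # op is the value of a 3-trit field, 0..26 (3**3 = 27 patterns)
        assert 0 <= op < 27
        if op < 12:
            return f"MOV R1, R{op + 2}"    # 12 moves into R1, from R2..R13
        if op < 24:
            return f"MOV R{op - 10}, R1"   # 12 moves out of R1, to R2..R13
        return ("INC R1", "DEC R1", "NOP")[op - 24]  # the 3 leftover patterns

24 + 3 = 27 exactly, so the field has no wasted encodings, which is presumably the appeal.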
Very cool project! I'm curious about the relatively small registers though. 3 trits feels pretty limiting, and though there's the 6-trit word example, it feels like it would get pretty cumbersome on the software side pretty quickly vs. something like native 6- or 9-trit registers. Perhaps something relating to complexities on the hardware side?
Adding more states to an electronic system trades robustness and noise immunity for better performance.
Think about it this way: as the number of states approaches infinity, you're back to analog computing.
It's a design parameter, not something that lets us break past the limits on computational density, which right now are heat removal and quantum tunneling in transistors.
What do you think would be harder to implement in a fab on a budget in the future? A chip full of large gates that can handle 7 voltage levels stably, or a chip with 5x [1] as many of the smallest gates that physics and logistics allow one to build?
[1] If memory serves, adders, cache access, and a bunch of other logic typically require O(n·log₂n) gates in binary, but O(n·log₃n) in ternary, which means that as the native integer size increases, bases greater than 2 scale better.
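Plugging numbers into that footnote's model (the O(n·log_b n) gate counts are taken at face value, constants ignored):

    import math

    for bits in (32, 64, 128, 256):
        trits = math.ceil(bits * math.log(2, 3))  # trits covering the same range
        g2 = bits * math.log2(bits)               # ~gates, binary model
        g3 = trits * math.log(trits, 3)           # ~gates, ternary model
        print(f"{bits}-bit range: binary ~{g2:.0f}, ternary ~{g3:.0f}")

The gap widens as the word size grows, which is the scaling claim above; whether a real ternary gate costs anything close to a real binary gate is the open question.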
It is mathematically the most "economical" integer base in terms of space, when considering complexity of digit representation along with word length [1].
Of course the "optimal one" would be base "e" but nobody is going to be able to do anything remotely reasonable with it... (pg. 491 of that manuscript).
Well, you could... if you want to give up discrete integer representation...
Anyway, whether ternary is actually optimal in practice depends just as much on how efficiently ternary logic can be implemented on a silicon process. If you increase integer storage efficiency by 10% but circuit density decreases, maybe you're not getting enough value.
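For the curious, the "economy" metric from that manuscript is easy to compute: cost = (number of digit levels) × (digits needed for a given range). A quick numeric check (N is an arbitrary range I picked; any large value shows the same ordering):

    import math

    N = 10**6
    for base in (2, math.e, 3, 4, 10):
        cost = base * math.log(N, base)  # radix economy: levels * word length
        print(f"base {base:.3f}: cost {cost:.1f}")

Base e minimizes the cost, base 3 comes within about 0.5% of it, and base 2 is roughly 6% worse, hence "most economical integer base".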
Okay but what's the significance of (non-unity) powers of e? e is the optimum of a specific function[1], and you lose the benefits for higher values.
It's true that you can conveniently represent the numbers by using integer powers of the base (like representing binary with hexadecimal), but the whole reason e is a "good" base in the first place is because you're (partially) minimizing the number of distinct symbols needed to represent a number, and you lose that once you go to a higher base.
Plus, using a nearby integer like 7 (≈ e²) breaks the integer-power mapping anyway.
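For reference, the function in question is the radix-economy cost b·log_b(N) = (b/ln b)·ln N, and b/ln b has a unique minimum at b = e, so everything above e (including e², e³, ...) sits on the rising side of the curve. A two-line check, assuming sympy is installed:

    import sympy as sp

    b = sp.symbols("b", positive=True)
    cost = b / sp.log(b)                   # radix economy per unit of ln(N)
    print(sp.solve(sp.diff(cost, b), b))   # -> [E], i.e. Euler's number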
I'm replying from memory, so this might be worth researching for yourself, but I remember a cryptography conference where someone was talking about how important it is to use constant-time algorithms to prevent leaking information about the number of bits set in keys and messages. They demonstrated that using balanced ternary allowed some really cool techniques.
Don't know if any of that has made it into the literature, but you could have a look.
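I can't source the talk either, but one relevant property is easy to show: in balanced ternary every integer has a unique representation over the digits {-1, 0, +1}, and negation is just flipping digit signs (no separate sign bit or two's-complement carry chain). A minimal conversion sketch:

    def to_balanced_ternary(n):
        # returns digits in {-1, 0, 1}, least significant first
        digits = []
        while n != 0:
            r = n % 3
            if r == 0:
                digits.append(0);  n //= 3
            elif r == 1:
                digits.append(1);  n = (n - 1) // 3
            else:                  # r == 2 means digit -1, carry 1
                digits.append(-1); n = (n + 1) // 3
        return digits or [0]

    assert to_balanced_ternary(5) == [-1, -1, 1]  # 9 - 3 - 1 = 5
    assert to_balanced_ternary(-5) == [1, 1, -1]  # negation flips every digit

Whether that symmetry actually buys constant-time behaviour in a real implementation is exactly the part I'd want to see in the literature.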