
Transmeta paid Linus to work on the Linux kernel for 6 years

https://www.theregister.com/2003/06/17/linus_torvalds_leaves...



Transputers were a 1980s CPU innovation that didn't live up to their original hype, and have little to no connection with Transmeta.


Aha, no, Transmeta was a totally different thing, from the early 2000s. The idea there was a special "Very Long Instruction Word" processor, kind of the opposite of RISC, where a lot of operations would be packed into a single 128-bit instruction word. Think of it as a hell of a wide horizontal-microcode architecture, if RISC is kind of a vertical-microcode architecture.

It was pretty clever. You loaded x86 code (or, in principle, Java bytecode or whatever else you had a translator for) and it would build up a cache of translations, converting x86 instructions on the fly into the Crusoe's native instruction set, that ludicrous SUV of an ISA. The chips were physically smaller and far less power-hungry than an equivalent x86 part, though the translation overhead meant they generally couldn't match it clock-for-clock.
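The translate-once-and-cache idea can be sketched in a few lines. This is a toy illustration only: every name here is invented, and Transmeta's actual Code Morphing Software was firmware doing profiling, instruction scheduling and speculation, not a dictionary lookup.

```python
# Toy sketch of dynamic translation with a hot-block cache. All names
# are invented for illustration; the real Code Morphing Software was
# vastly more sophisticated.

HOT_THRESHOLD = 2  # translate a block once it has run this many times

exec_counts = {}        # guest block -> times executed so far
translation_cache = {}  # guest block -> cached "native" translation

def interpret(block):
    # Stand-in for slow, one-instruction-at-a-time emulation.
    return f"interpreted:{block}"

def translate(block):
    # Stand-in for compiling a guest block to native VLIW code.
    return f"native:{block}"

def execute(block):
    # Already translated? Run the cached native code directly.
    if block in translation_cache:
        return translation_cache[block]
    # Otherwise interpret it, and translate once the block gets hot.
    exec_counts[block] = exec_counts.get(block, 0) + 1
    if exec_counts[block] >= HOT_THRESHOLD:
        translation_cache[block] = translate(block)
    return interpret(block)
```

The payoff is that a loop body pays the translation cost once and then runs at native speed on every later iteration, which is why this scheme does well on repetitive code and poorly on code it sees only once.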

25 years ago they were going to be the future of computing, and people stayed away in droves. Bummer.

No no no, though, the transputer was a totally different thing. That was from 40-odd years ago, and - like the ARM chips we now use in everything - was developed in the UK, by a company (Inmos) that did pretty okay for a while and then succumbed to poor management.

They were kind of like RISC processors. Much has been made of "you programmed them directly in microcode!", but you could say the same of any hardwired CPU, like the good ol' 6502, where the byte read on an instruction fetch directly gates things on and off.

The key was that they had very fast for the time (around 10-20 Mbit/s) serial links that would connect them in a grid to other transputers on a board. Want to run more simultaneous tasks? Fire in more chips!

You could get whole machines based on transputers, or you could get an ISA card that plugged into a 16-bit slot in your PC and carried maybe eight modules, each about the size of a Raspberry Pi Zero (and nowhere near as powerful). I remember being blown away in the late 80s by one of these in a fairly chunky 386SX-16, rendering 640x480x256-colour Mandelbrot sets in about a *second*.
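For scale, the per-pixel escape-time loop those demos were racing through looks roughly like this (a minimal sketch with made-up parameter choices; real renderers of the era leaned on heavily optimised integer maths):

```python
# Minimal escape-time Mandelbrot loop. The viewport bounds and
# iteration limits below are illustrative, not from any particular
# renderer.

def mandelbrot_iterations(c, max_iter=255):
    """Count iterations of z = z*z + c before |z| escapes past 2."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter  # never escaped: treat as inside the set

def render(width=640, height=480, max_iter=255):
    """Return a grid of iteration counts (i.e. pixel colour indices)."""
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel grid onto the complex-plane region
            # around the Mandelbrot set.
            c = complex(-2.5 + 3.5 * x / width, -1.25 + 2.5 * y / height)
            row.append(mandelbrot_iterations(c, max_iter))
        rows.append(row)
    return rows
```

Every pixel is independent of every other, which is exactly why the problem carved up so neatly across a grid of transputers: hand each chip a band of rows and let the links collect the results.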

Again, they were going to revolutionise computing, this was the way the world was going, and yet by the time unrelated Belgian techno anthem Pump Up The Jam came out, transputers were yet another footnote in computing history.


Wow, the Mandelbrot set example really put things into perspective.

Unoptimized code could easily take tens of minutes to render the Mandelbrot set at 640x480x256 on a 486. Fractint (from Bert Tyler and the Stone Soup Group) was fast, but would still take tens of seconds, if not longer -- my memory is a little hazy on this count.


Around that time I worked in a shop that had an Amstrad PC2386 as one of our demo machines - the flagship of what was really quite a budget computer range, with a 386DX at 20MHz and a whopping 8MB of RAM (ordered with an upgrade from the base-spec 4MB, but we didn't spring for the full 16MB because that would just be ridiculous).

Fractint ran blindingly fast on that compared to pretty much everything else we had at the time, and it too could display at 640x480 in 256 colours. We kept it round the back and only showed it to our most serious customers, and our Fractint-loving mates who came round after hours to play with it.

It still took all night to render a Lyapunov set.



