Hacker News
Strange chip: Teardown of a vintage IBM token ring controller (righto.com)
146 points by picture on Feb 28, 2021 | 66 comments



I worked on a token ring network back in the 90s. There were more employees than network ports. So every time someone complained of not having network connectivity, we'd find a network cable where the light on the port wasn't lit up from activity, usually because someone was out sick or on vacation. If everyone decided to come into the office, we would have had a problem, but I don't think it ever happened while I was there.


The cards are supposed to bypass themselves when powered off; that's what those relays were for.

Later they had these MAU devices that let you build star-like topologies, where the MAU would bypass inactive ports.


Yeah, we had an MAU, but I’d forgotten the term.


Token-ring networks could have ruled the world. But IBM’s insistence on licensing fees made the equipment expensive compared to Ethernet.

Token-ring was faster: 16 Mbps with no collisions, vs Ethernet's 10 Mbps in a perfect world... and in reality Ethernet was slower still once you account for collisions and retries. Not so with token ring.

If IBM had licensed the tech without fees, the hardware would have been competitive with Ethernet and today we’d all be using 1 Gbps token ring in our homes.

But now I’d be surprised if a token-ring driver even exists for Windows 10 or MacOS.


The biggest problem with token passing systems is the cost of losing the token (machine holding it crashes) or failing to yield (when you then have a pope-antipope situation).

Ethernet's system is immediately worse (everybody has to do the backoff, and on a crowded network that can be painful) but not as bad on an amortized basis. Also, adding more hosts is pretty much automatic. Consider it a positive example of the "worse is better" paradigm.
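
For anyone who hasn't seen it, here's a toy sketch (mine, not from any real driver) of the truncated binary exponential backoff that classic 10 Mbps CSMA/CD uses; this is the wait that every colliding station has to sit through:

    import random

    def backoff_delay_us(attempt, slot_time_us=51.2):
        # Classic 10 Mbps Ethernet: after the n-th collision, wait a random
        # number of slot times in [0, 2^min(n,10) - 1]; give up after 16 tries.
        # The slot time is 512 bit times = 51.2 us at 10 Mbps.
        if attempt > 16:
            raise RuntimeError("excessive collisions, frame dropped")
        k = min(attempt, 10)
        return random.randint(0, 2 ** k - 1) * slot_time_us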


But Ethernet worked around that problem pretty quickly with switches.

I'm sure Token Ring would have solved it the same way.

I think the point about IBM's licensing is valid.


> But Ethernet worked around that problem pretty quickly with switches. I'm sure Token Ring would have solved it the same way.

Perhaps, but "the problem" was shared access to a single piece of wire. Switches solved the problem by ending sharing - each piece of wire had exactly one computer.

So sure, a switch would have fixed Token Ring's lost token problem - by eliminating the token. I'm not sure it could be called "Token Ring" at that point as there is no token, and no ring.


Back in typing/HyperCard programming class in the mid 1990s, one of my friends jammed the Mac lab's AppleTalk network. He told me he did it by taking his computer off the network at the moment he held the AppleTalk token.

He didn't seem to be lying... he did something using the mouse/keyboard on his machine and my machine couldn't use the network, and then after a couple of kids complained about the network being down, he discreetly did something on his machine and things were fixed.

I was always confused as to how he was fast enough, or even how his computer having the token would be user-visible without installing specialist tools on the school's computers. If he were to start a large file transfer, would that cause him to hold the AppleTalk token for a long time, and give him visibility via the progress bar?

I thought there was a sub-second upper bound on how long the token could be held, even if your machine still had data to send. Am I mistaken about AppleTalk token ring networking?


This reminds me of anecdotes I’ve heard about how MacOS’ lack of preemptive multitasking meant that if somebody opened and held the Apple menu on their machine, it would receive but not yield the token, because network processing was essentially suspended while the OS followed the user’s every action in that menu's GUI.


There was no memory protection on those machines so nothing to stop you from patching the network driver while it was running.


I think HyperCard and AppleScript were the only programming environments installed on those machines, and he was a big PC proponent, so I don't think he had a Mac at home. I don't think he used any specialized tools, and I don't think he was using AppleScript to bit-bang binaries into the network driver's region of memory.

That does remind me of a story I heard, that the zero page was mapped, and the first 64 bits of the zero page were initialized to zero at startup. So, if you dereferenced or double-dereferenced NULL as an int*, float*, double*, int**, float**, double**, etc., there would be no error and you'd get 0 or 0.0. Apparently, the Excel port for Macs had quite a few NULL double-dereferences, either intentionally taking advantage of this "feature" or by accident. Many developers installed a system extension that would set the first 32 bits of memory to a value larger than the amount of installed memory, guaranteeing an error if NULL were double-dereferenced.


In the article I originally mentioned "worse is better" regarding Ethernet, but I figured it was too much of a tangent and deleted it. So it's interesting to see you mention it too. Maybe I should have left it in :-)


Editing, especially self-editing, is hard.

For the article itself, it probably is tangential and thus you were correct to excise it. I was commenting on a claim about IBM (hardly the only token-passing network) which was itself tangential to the article's topic.


> Consider it a positive example of the "worse is better" paradigm.

I'd never heard of this before. TIL.

https://en.wikipedia.org/wiki/Worse_is_better


That’s not the only reason Token Ring lost out. Token passing yields deterministic access to the wire, whereas CSMA/CD granted burst access on demand. At Wall Street banks we observed what this difference meant for the latency of delivering market data to traders. The bank using Token Ring had to segment its network into smaller pools of stations using routers to decrease that latency.
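
To put rough numbers on that determinism (and on why big rings hurt latency), a back-of-the-envelope of my own, assuming the commonly cited 802.5 default token-holding time of about 10 ms and ignoring ring propagation delay:

    def worst_case_wait_ms(stations, tht_ms=10.0):
        # Upper bound on how long a station can wait for the token:
        # every other station may hold it for up to the token-holding time.
        return (stations - 1) * tht_ms

    for n in (20, 50, 200):
        print(n, "stations ->", worst_case_wait_ms(n), "ms worst case")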


Licensing (alongside the MCA debacle) was definitely a driver, as was the more complex topology for even a simple token ring network compared to a simple ethernet network.

Ultimately the brute-force improvement in efficiency given by developing ethernet switching settled the argument, IMO. Once you could practically utilize the bulk of the theoretical bandwidth of ethernet, token ring was toast. Switching was the Pentium Pro of network architectures.


The only way Token Ring could have ruled the world was if it was developed outside IBM. Instead it came out of the same corporate cultural moment that birthed MCA and the PS/2 line and APPC over SNA (to rival, sort of, GOSIP and TCP/IP).

IBM Networking Systems spent much of the early 1990s dithering and thrashing about how to compete with Novell and Microsoft's networking and almost completely missed the rise of TCP/IP both in the enterprise as well as the public Internet. When I first started doing networking development for OS/2 at IBM I had to get special permission to get the $1000+ TCP/IP networking kit because it was "owned" by NSD which really, desperately, wanted everyone to develop applications for APPC. APPC was tightly tied to Token Ring and thus they really also minimized R&D into TCP/IP over TR networks.

Although IBM "woke up" to the Internet and personal computer networking with the Lotus deal, it took NSD another year or more to really shift gears, and by then IBM had had enough and dumped the assets onto Cisco for a song.

If kens is still around, an odd source for token ring information might be Carnegie Mellon, which implemented a MASSIVE TR network circa 1986 or 1987 with Type 1 connectors in every dorm room on campus so students could access Andrew from …anywhere.


Am somewhat incredulous that the original announcement letter for the infamous Type 1 cable is online: https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?appname=s...


Thanks for the info!


> Token-ring networks could have ruled the world.

Unlikely. People forget that a star topology (Token Ring with MAU--the MAU was ferociously expensive) was expensive in that day and age--both in terms of technology (lots more VLSI) and infrastructure (everything was mostly wired up with RJ-11/shitty wiring for voice and nobody had RJ45/Cat5).

Ethernet installations evolved from coax to hubs to switches as the price came down.

Yes, eventually star topology won, but it got there in steps--and Token Ring couldn't do that.


I never wired a token-ring network with RJ-11. That's ridiculous. We had 10BaseT with various connectors (fat and what have you). RJ-11 with 2 twisted pairs for token-ring sounds painful. I never saw that.


Erm, that's exactly my point--that I apparently wasn't clear enough on.

People didn't have RJ-45 and Cat5 back in 1988-1990. The existing ports in every office building were RJ11 voice (Cat3 garbage) and maybe coax for some reason.

So, normally, networking started as "wire the computers in that room". That was generally coax at the beginning. And you would keep adding computers.

Eventually, somebody got tired of always debugging where the networking break in the coax was and managed to finagle a single hub in that room. But only one hub because they were kinda expensive.

At some point you had overflow and then it went "Connect the main room to another room or two" which meant an actual cable pull somehow (probably a midnight session through the drop ceiling) and probably went into a hub in the main room--but only one hub per room max because they were kinda expensive.

And, finally, you got enough computers that congestion was now an issue and you got to use an actual switch (which was really expensive--so generally you only used that to connect to the hubs and isolate them) and could now get people to officially pull cable.

This played out from about 1988 (coax dominates) to about 1998 (TCP/IP has finally won and ethernet switches dominate and hubs disappear), I would say. For most of that range, a system based on Ethernet hardware would have been dramatically cheaper than one based on Token Ring hardware until Ethernet switches became ubiquitous (by which time Token Ring was dead)--even if the interface cards were the same price.

And, remember, some of the Ethernet cards were GARBAGE, yet people beat them into submission because "real" ones like the DEC Tulip cards were so expensive. One of the reasons for the ascendancy of Linux was its support for so much complete garbage in terms of hardware.

And, to add insult to injury, 10Base Ethernet would generally work on Cat3 garbage that Token Ring simply would have no hope on. And that meant a dramatic decrease in your physical plant support cost.


Adding my personal experience from the nineties in Berne, Switzerland: compared to Ethernet, the cables were thicker and more cumbersome to bend, you had two of them, and they were chained to the neighboring computers. That was long ago and I was only a user then, not an admin. When my office relocated they also switched to Ethernet. Everybody seemed to be relieved and happy about Ethernet. Just a personal anecdote.


Ethernet 10Base5 used similar cables and a similar wiring topology.


When we relocated, the new building already had the ethernet cables we know today. I was very impressed by the cabinets with the switches. The floors had little covers with power and ethernet. That was in the 90s.



Linux used to have ARCnet drivers.


It used to have drivers for https://en.wikipedia.org/wiki/Fiber_Distributed_Data_Interfa... too, which is another token-passing network. Or at least there were Linux drivers for them; I can't remember exactly anymore. I worked for a while in government institutions which used nothing else for networking. FDDI I mean, not Linux; that was an experimental exception. Which was funny, because hey, government, modern equipment? Nope... TERMINALS! Which were plugged into the ring via matchbox-like media converters going from 100 Mbit/s down to 19,200 bit/s with weird parity and stop bits. At least they didn't need a separate PSU (wall wart) because the full DB25 of the RS-232 powered them somehow.


Suppose I wanted to play around and experiment with token ring networks. Could I cobble it together somehow with common devices? Is it possible to emulate them in any meaningful way in software? Just curious as someone with no networking experience below the IP layer.


If you only wanted 10 or 16 Mbps (e.g. to interface with historical equipment) you could bit-bang it with a cheap 200 MHz microcontroller. Then the only custom electronics needed is the analog bits.
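
The bit-level side is easy enough to model in plain software, too. Here's a toy encoder for differential Manchester, the line code 802.5 uses; it just shows the logic a bit-banging MCU would have to reproduce at 16 Mbps (32 Mbaud on the wire), not anything resembling a real implementation:

    def diff_manchester_encode(bits, start_level=1):
        # Differential Manchester (802.5 style): every bit has a mid-bit
        # transition; a 0 also has a transition at the start of the bit
        # period, a 1 does not. Returns two half-bit line levels per bit.
        level = start_level
        out = []
        for b in bits:
            if b == 0:
                level ^= 1        # extra transition at the bit boundary
            out.append(level)
            level ^= 1            # mandatory mid-bit transition
            out.append(level)
        return out

    print(diff_manchester_encode([1, 0, 1, 1, 0]))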


Connectors were bulky and the wires thick compared to Ethernet.


Which was just IBM over-engineering. Token ring ran fine on cat5 with RJ45.


They did shrink the connectors later on and move away from the heavy shielded cabling. There’s no reason token-ring can’t work over RJ-45 with 4 twisted pairs.


When tearing down some of this technology, there's almost an undercurrent of "how this works is a mystery today" -- but the humans who worked on some of these micro-marvels are probably still alive. Have you had success in finding people who worked on these designs, for example in the case of the "universal controller (UC) architecture", which might merit an article all its own?


I've asked around a bit, but haven't found anyone with information on the UC architecture.


It's probably buried somewhere in the IBM Journal of Research and Development.


I wonder if it's buried somewhere in Usenet.


Author here if anyone wants to discuss IBM's chips.


> IBM calls this "microcode", but it's unclear if this is microcode in the usual sense or just firmware instructions.

IBM had all sorts of unconventional usages of the word "microcode", e.g. parts of the OS/400 operating system were referred to as the "Horizontal and vertical microcode" (they were in fact the kernel of the operating system)


What is the strangest thing you’ve ever seen in a chip? (e.g. has anything ever hinted at either a lucky accident that relied on physical laws that weren’t understood or tech that seemed too advanced to have been developed by the team that developed it?)

Also, the strange parts of the chip spell DDB, which may be relevant, as V DDB is mentioned in one of the MAU design patents. Or DDB could be the initials of the team or the designers. It could also have served a practical purpose.


I've seen a few things on chips that don't make sense, such as wires to nowhere. Then I figured out that these were bug fixes where they had cut connections.

Occasionally I find interesting chip art such as a tiger on a Dallas Semiconductor chip: https://en.wikipedia.org/wiki/Chip_art


Chip art is a lot of fun! Years ago when I was working at a startup, we had a wooden statue in the office of a monkey holding a cell phone. Over time he was further accessorized with a hardhat and an official company badge. On one of our prototype tapeouts, we made a not-insignificant effort to render a proper photo of it onto the top layer metal.

The tricky thing was getting multiple colors (shades, really) using what amounts to a single color. Back then, we didn't have any fancy filters like "sketch mode" to turn it into a line drawing, and we were limited to some extent by process design rules for metal size, spacing, density, etc.

We ended up opening the image in GIMP, converting it to grayscale, and then to true black and white (1-bit color) by upscaling and applying a dithering filter that preserves the shades by making the average density of black pixels in an area match the gray level of the original pixel. Then we wrote a script that mapped black pixels to solid metal and white pixels to empty space, on a grid laid out so that all design rules were met.
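
If anyone wants to play with the same idea, a rough Python/Pillow sketch of that flow might look like the following; the grid pitch and the output step are made up for illustration (the real script obviously fed our layout tools and design rules):

    from PIL import Image

    CELL_UM = 2.0   # assumed metal "pixel" pitch that satisfies the design rules

    img = Image.open("monkey.png").convert("L")            # grayscale
    img = img.resize((img.width * 4, img.height * 4))      # upscale first
    bw = img.convert("1")                                  # 1-bit; Pillow dithers by default,
                                                           # so gray shades become dot density
    rects = []
    for y in range(bw.height):
        for x in range(bw.width):
            if bw.getpixel((x, y)) == 0:                   # black pixel -> solid metal square
                rects.append((x * CELL_UM, y * CELL_UM, CELL_UM, CELL_UM))
    # rects would then be emitted as top-metal shapes in the layout (e.g. GDSII)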

It wasn't a perfect result but I think it turned out alright! https://imgur.com/a/AkB10A0


What was the function of that piece of silicon?


The whole die was a cellular transceiver. If I remember correctly, that particular spot happened to be empty on the top layer. Foundries require a minimum density of metal on every layer, so we would have had to put dummy pieces of metal there anyway. We figured, why not put a picture.


In footnote #6 you ask which devices are PMOS and which are NMOS. My guess is that the PMOS are at the bottom, so you have a NOR gate.

In typical CMOS processes the PMOS transistors have lower carrier mobility than NMOS transistors. Holes are slower than electrons.

So in standard cells the PMOS transistors are made physically larger to compensate. This helps the device output H->L and L->H transitions be more symmetric.
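
As a rough illustration of the sizing (the mobility ratio is process dependent; something like 2 to 3x is typical):

    mu_n_over_mu_p = 2.5            # assumed electron/hole mobility ratio
    w_n = 1.0                       # NMOS width, arbitrary units
    w_p = w_n * mu_n_over_mu_p      # widen the PMOS for roughly equal drive strength
    print("W_p / W_n =", w_p / w_n) # -> 2.5, i.e. the PMOS is drawn ~2.5x wider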


This was another great post. I hope to see you do an Ethernet chip/card post in the future. I had a couple of questions about the following:

>"The block diagram below shows the complex functionality of the chip. Starting in the upper right, the analog front end circuitry communicates with the ring. The analog front end extracts the clock and data from the network signals."

Do all non-optical network cards have a similar analog circuit as well? Is this generally the transceiver chip on the card?

>"The chip's logic is implemented with a CMOS standard cell library and consists of about 24,000 gates. The idea of standard-cell logic is that each function (such as a NAND gate or latch) has a standard layout."

Are these cell libraries the same as an IP block that you would license today when designing a chip? Did cell libraries become common around the time of this chip?


> Do all non-optical network cards have a similar analog circuit as well? Is this generally the transceiver chip on the card?

Even optical cards have this kind of circuitry in the PHY chip. While the SFP module usually contains a surprising amount of logic, most of it has to do with configuration and testing, and in the end it is just a pair of LEDs with configurable analog amplifiers.

On the other hand, for modern Ethernet over twisted pair (1 Gbps and up) the analog interface circuitry is significantly more complex (and power hungry), because calling the thing baseband (the "BASE" in "1000BASE-T") somewhat stretches the definition of the word. It uses various line coding and signal processing tricks to squeeze all the bandwidth out of the wire.
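
For a sense of the numbers involved, the usual back-of-the-envelope for 1000BASE-T (my summary, not quoting the spec) goes like this:

    pairs = 4              # all four pairs used in both directions at once (echo cancellation)
    symbol_rate_mbd = 125  # megabaud per pair, same symbol rate as 100BASE-TX
    bits_per_symbol = 2    # PAM-5 carries 2 data bits per symbol; the extra level helps coding
    print(pairs * symbol_rate_mbd * bits_per_symbol, "Mbps")   # -> 1000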


>"because calling the thing baseband (the "base" in "1000-base-T") somewhat stretches the definition of the word. It uses various line coding and signal processing tricks to squeeze all the bandwith out of the wire."

Interesting. Can you elaborate on why using "base" is a stretch here? I don't think I've heard this before. It's been a while since I've looked at layer 1 but isn't Ethernet just Manchester encoding? What other signal tricks are generally used?

Might you or anyone else have any good resources for Ethernet PHY circuits?


I vaguely recall the terms, hopefully correctly.

High speed on copper, for networking anyway, has gone all analog now. That's more of a 'broad'-band (in the literal sense) than 'base'-band (in the single frequency, off/on) sense.

Edit:

The Wikipedia page for broadband says it better: "The key difference is that what is typically considered a broadband signal in this sense is a signal that occupies multiple (non-masking, orthogonal) passbands, thus allowing for much higher throughput over a single medium but with additional complexity in the transmitter/receiver circuitry." -- https://en.wikipedia.org/wiki/Broadband


I haven't looked at Ethernet chips in detail, but they have similar analog circuitry. A "PHY" (physical layer) module does the analog encoding and decoding.

Standard cell libraries are lower-level than IP blocks since you're dealing with gates rather than functional units. I'm sure someone here knows about how they are licensed.

On the chip I looked at, the analog module and the CPU were treated as IP blocks. These blocks were built by IBM so the intellectual property itself wasn't an issue. But the blocks were designed by other teams and essentially dropped onto the chip unchanged. For the revised version of the chip, they redesigned the logic but kept the original analog and CPU blocks.


Could the mystery analog loops be impedance matching / baluns? That was my first thought; the way they stand out bare on the chip seems similar to other RF magic.


My _guess_ was some sort of (attempt at?) very minor inductance / capacitive correction factor; leaning a bit more towards capacitive for the lack of more loops. In college I tried to do something similar in a term project by using all of the spare surface space on the PCB as a capacitive fill attached to the power supply.

If they were test pads something more like the solder ball or a normal pad might be expected.


The mystery loops might be some sort of impedance matching. 16 megahertz seems low for that sort of magic, but I don't know.


This flip chip was likely made at the IBM Bromont plant in Quebec.

I visited it a long time ago.

If you x-ray (?) or break the ceramic substrate (with the actual pins) you might find it to be a complex multi-layer piece ...


I had to go on a fishing expedition to find a token ring card that had Linux drivers (IIRC it was an ISA bus card) for my shiny new Pentium Pro machine back in the mid '90s.

Those were the days: ‘hey boss, can I build a machine?’ ‘Sure, get a quote for the parts and send it over.’ Nobody gave a shit that it wasn't a standard build, or that nobody but me had root, etc etc.


I was sorting out some old boxes just the other day and in one I found an old IBM PCMCIA Token Ring adapter, in its box with the manual and connecting cable. I put it aside; it's so old that I'll never use it, but it's probably niche enough today that it felt worth keeping or selling rather than e-wasting it.


I encountered the weird token ring connector for the first time when I joined IBM in the early 1990s. The proprietary connector got replaced with a standard RJ45 jack later on, but by that time it was clear that Ethernet had won.


IBM's logic synthesis tool (equivalent to Design Compiler) was called "BooleDozer". It may have been used for this chip, but I don't know if the time periods overlapped.


I'd be curious to see a comparison with the die of something like the ubiquitous Realtek NICs. They're definitely a mixed-signal design, given that everything is on one chip.


Realtek RTL8019AS and RTL8029AS are a straight reimplementation (with a few improvements in native mode, like full duplex, sleep mode, PnP, and auto polarity) of the National Semiconductor DP8390, which in turn was the basis for the Novell NetWare NE1000/NE2000 standard. https://en.wikipedia.org/wiki/NE1000

If you really want to go back in time to the hardcore beginning, you need to look at something like the 3Com EtherLink 3C501, aka IBM Ethernet 4 (IE-4) https://www.os2museum.com/wp/emulating-etherlink/ made somewhat famous in networking circles by the Linux kernel driver comment "Don’t purchase this card, even as a joke." The 3C501 itself was an ASIC shrink of the earlier 3C500 design, the first original IBM Ethernet card, here in all of its huge glory: https://static.wixstatic.com/media/a03cac_005e3e9eb62b47c292...

It's surprising to learn that the cheap and supposedly crappy RTL8019AS was technically better than what most would call the top-of-the-line 3Com 3C509B, thanks to twice the buffer size (16 vs 8 KB).

The Computer History Museum's "Oral History of Kanwal Rekhi" (Excelan, Novell CTO) on YouTube https://www.youtube.com/watch?v=ox0e7yVgsXM has some interesting stories about early Ethernet adapters made by Excelan and Novell, their strategies, Lite products, and bonkers ideas (Mormon-run, trying to compete with Microsoft, hate of Unix despite owning it).


That vintage Ethernet card you linked to is quite something. I like the big metal-can tuned inductors, like a 1970s TV set. It has a lot of blue bodge wires to fix things. My favorite is the chip with four X's of wire across it. Clearly they designed something backwards.


I didn't know the actual data got passed from host to host... I was always under the impression that the data was broadcast and merely the token or "talking stick" got passed from host to host in a ring.


What you described is known as a "Token Bus". The one developed by General Motors became an IEEE standard (802.4) for a while as an alternative to Ethernet (802.3) and the Token Ring (802.5) but was quickly forgotten.

A more popular token bus was ARCNET.

https://en.wikipedia.org/wiki/ARCNET

While IBM's Token Ring was the most famous one there were others:

https://en.wikipedia.org/wiki/Cambridge_Ring_(computer_netwo...

https://en.wikipedia.org/wiki/Fiber_Distributed_Data_Interfa...

https://en.wikipedia.org/wiki/Scalable_Coherent_Interface


Do companies ever release their engineering documents for vintage systems?


I don't know about official releases of documents, but there are a lot of vintage documents on bitsavers.org.


IBM has a very long history of diligent notetaking, maintaining several journals dedicated to internal development stretching back into the '50s. While I certainly haven't read everything, I've read enough to feel comfortable saying that the quality went into sharp decline as transparent marketing replaced useful engineering. I suspect I'm not the only one who felt that way, because they recently shuttered those operations and, tragically, handed everything (from what I can tell) over to the paywalls. I generally don't feel much one way or the other about corporations, but seeing IBM decay like this genuinely makes me gloomy.

https://sci-hub.se/10.1109/4.45001 https://sci-hub.se/10.1147/rd.342.0416 https://sci-hub.se/10.1147/rd.342.0428



