Reverse-engineering Ethernet backoff on the Intel 82586 network chip's die (righto.com)
200 points by zdw on Oct 31, 2023 | 34 comments



Author here. This chip is pretty obscure, but I was looking at it for another project and figured I might as well write it up. Any questions?


I just wanted to say thank you for all of these write-ups, and for the restoration and reverse engineering work that you do for the retrocomputing community.

Here's a question: how has this implementation changed in modern nano-scale ethernet implementations? Modern ethernet cards are asked to do much more offloading and processing, potentially including application protocols like TLS and DMA not just from memory but directly from NVMe devices as well. Given that we can now spam out transistors by the billions, are things like 10-bit counters still implemented via clever dynamic logic in hardware, or is there a more brute-force approach in use today?


I looked at the datasheet of a random modern Ethernet chip (ENC28J60) and it's simpler than I expected, doing much less than the old Intel chip, although it includes the low level "PHY" circuitry, which was a separate Intel chip in the olden days. I expect that the newer chip has things like 10-bit counters, but they would be implemented with standard cell logic (i.e. computer-generated layout of gates) rather than the hand-optimized circuitry of the Intel chip.

On the other hand, you can get a chip like the W5300 which includes the whole TCP/IP stack along with ARP and ICMP, presumably running on an internal microcontroller.

https://ww1.microchip.com/downloads/aemDocuments/documents/O...

https://www.wiznet.io/wp-content/uploads/wiznethome/Chip/W53...


High-end NICs like Mellanox ConnectX are probably more similar to the 82586 in terms of breadth of functionality. They have (R)DMA, hardware timestamping, encryption, etc., although I wouldn't expect to see much hand-crafted logic design outside of the very high-speed signal paths like the serdes.


Minor detail whilst I read through this (I'm on a quest to understand my childhood Apple IIe, and at just the level to start understanding ripple counters): I think it's 10Mbit or 10Mb/sec ethernet, not 10MB/sec.


Thanks, I've fixed that.


Hi Ken, thanks for another excellent article! I just wanted to add that I recently bought an Intel EtherExpress 8/16 for my 486 PC and it uses this chip. So although the chip may be obscure, it still finds its way into the hands of new users even today. :)

The fact that I just got this card and had to search around the net for drivers and documentation, and now your die photos are available - I'm just amazed. What a cool chip and card.


> The idea of Carrier Sense is that the "carrier" signal on the network indicates that the network is idle.

Wouldn't a carrier indicate that the network is busy? Actually, Ethernet is a baseband system and so doesn't actually have a carrier, but for Alohanet you would only have the carrier when a transmitter was on even if it wasn't actually sending 1s or 0s at that instant.
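
To make the terminology concrete, here's a minimal C sketch of the CSMA/CD transmit loop being discussed, treating "carrier present" as "medium busy" (all the function names are hypothetical stand-ins for what the chip does in silicon, not a real driver interface):

  #include <stdbool.h>

  /* Hypothetical hardware hooks -- illustrative only. */
  extern bool carrier_present(void);              /* medium busy? */
  extern bool transmit_frame(const void *frame);  /* false on collision */
  extern void send_jam(void);                     /* reinforce collision */
  extern void wait_slots(unsigned n);             /* wait n slot times */
  extern unsigned backoff_slots(int collisions);  /* exponential backoff */

  void csma_cd_send(const void *frame)
  {
      for (int attempt = 1; attempt <= 16; attempt++) {
          while (carrier_present())
              ;                                /* defer: carrier = busy */
          if (transmit_frame(frame))
              return;                          /* no collision: done */
          send_jam();                          /* make collision obvious */
          wait_slots(backoff_slots(attempt));  /* back off, then retry */
      }
      /* 16 straight collisions: give up and report an error */
  }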


That's nice simple logic that is laid out in a way that makes your reverse engineering straightforward. I really appreciate these articles because sometimes I build discrete logic circuits for fun so I see a lot of beauty in these things.


One part I didn't follow was where the pseudorandom element is introduced. This seems like it can implement variable delays in powers of 2 based on the mask.


The first counter simply counts, so when you sample it you get a pseudorandom number. (Assuming the sampling time is random.) Then applying the mask gives you the power of 2 scaling.
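
As a minimal C sketch of that scheme (the 10-bit cap is a detail from the 802.3 standard, added here for context):

  #include <stdint.h>

  /* Free-running counter: the chip just lets this increment continuously,
   * so its value at the (random) instant of a collision is pseudorandom. */
  static uint16_t free_running_counter;

  /* After n collisions, wait a delay uniform in [0, 2^n - 1] slot times. */
  unsigned backoff_slots(int collisions)
  {
      int n = collisions < 10 ? collisions : 10;  /* 802.3 caps n at 10 */
      uint16_t mask = (uint16_t)((1u << n) - 1);  /* keep n low bits */
      return free_running_counter & mask;         /* 0 .. 2^n - 1 */
  }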


Thanks, makes total sense. If the first counter is free-running, then it should look pretty random when sampled at collisions, since those occur at random times themselves. Love how simple the solutions are when transistors were expensive.


I love Ken's blog posts and this one is right up my alley.

> introduced in 1973, Ethernet is the predominant way of wiring computers together.

That sentence does not really convey how hard it was for Ethernet to win the network L1/L2 protocol wars. It was fairly popular right from the start because it worked well enough (better than it should have) and was very simple to understand, configure, and operate. It also kept evolving with new media.

But there were other systems it had to compete with, in some use cases (ISPs) well into the late 1990s/early 2000s.

Engineering workstations (Sun/SGI/IBM/HP) were early adopters but often had to integrate with mainframe/minicomputers running things like:

  - Token Ring: IBM was the 800lb gorilla at the time
  - FDDI: supported 100 Mbps over MMF!
  - DEC LAT
In the PC sector:

  - Novell/IPX: very popular in the 80s and persisted well into the 1990s. Many DOS games, like Doom, *only* supported Novell/IPX
  - Macintosh: AppleTalk
  - There were many other proprietary PC/DOS protocols but they were not memorable
In the ISP sector:

  - Ethernet was very popular but consistently lagged in supporting the fastest link speeds that ISPs needed. The industry rapidly shifted to 1G (1999) and 10G (2003) Ethernet once they became widely available.
  - FDDI: 100 Mbps over MMF. The MAE-EAST peering exchange in the mid 1990s used DEC Gigaswitches with a unique switched 100 Mbps full-duplex FDDI mode where each port was its own separate FDDI domain.
  - ATM: the telco types' last gasp at optimizing for circuit-switched networks (phone calls) at the expense of packet-switched networks like TCP/IP. ATM over Sonet/SDH, however, initially supported much faster link speeds.
  - Packet over Sonet/SDH: allowed the use of full-size TCP/IP packets over high-speed fiber (Sonet/SDH) links without ATM segmentation. This is what many OC3-OC48 ISP circuits were running before 1GE/10GE took over.
There were probably many dozens of other competitors that even I'm too young to remember. Some of these were later modified to run over Ethernet as you can see here:

https://en.wikipedia.org/wiki/Ethertype


Didn't IPX always run on top of Ethernet? I thought it was pretty much a direct derivative of Xerox PARC's research network protocol.


It ran on top of several things. I remember the company I worked for (Alfa Systems) uploading IPX drivers to Novell for testing over a modem using IPX as a protocol.

Alfa designed Sage Mainlan: originally a Z8530 + RS485(?) PC card, followed by a 10 Mbps Ethernet card with our own chip design (Enzo), fabbed as an ASIC by Toshiba. We wrote IPX drivers for both versions.

Interestingly, we could hang systems with the 3Com cards in our test systems if we ran at full speed, and at some point we had the full 500 metres of thick Ethernet in the office.

The IPX version that came with NetWare 3 was rather nice. I seem to recall it had a buffer of segments that got filled in by the different layers of the network stack as needed, along with some fancy protocol filtering so your code saw only the data packets it was interested in.
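
If memory serves, that was the ECB (Event Control Block) fragment list from the IPX API. Roughly, it looked like this, though the field names and layout here are from memory and illustrative, not the exact NetWare 3 structure:

  #include <stdint.h>

  /* Each fragment points at a buffer owned by a different layer
   * (header here, payload there), so each layer fills in its own
   * piece without copying. Approximation only. */
  struct fragment {
      void     *address;         /* far pointer in real-mode DOS */
      uint16_t  size;
  };

  struct ecb {
      void           *esr;             /* completion callback routine */
      uint8_t         in_use;
      uint8_t         completion_code;
      uint16_t        socket;          /* the "protocol filter" key */
      uint16_t        fragment_count;
      struct fragment fragments[4];    /* scatter/gather segment list */
  };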


Did IPX run on ARCnet? I remember ARCnet being popular in the DOS world.


Seems so: https://www.cisco.com/en/US/docs/internetworking/troubleshoo...

> NetWare runs on Ethernet/IEEE 802.3, Token Ring/IEEE 802.5, Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), and ARCnet. NetWare also works over synchronous wide-area network (WAN) links using the Point-to-Point Protocol (PPP)


It could, but not the further step of running on top of TCP/IP, which is why software like Kali [0] was needed if you wanted to play over the internet.

[0] https://en.wikipedia.org/wiki/Kali_(software)


There is a second generation of this chip, the 82596SX/DX. Much faster 32-bit bus, still 10 Mbit/s on the Ethernet side of things. "Software compatible". Way bigger buffers.

http://bitsavers.informatik.uni-stuttgart.de/components/inte...


There is still a Linux driver for the 825xx chips, if you want to see how the software side works.

https://github.com/torvalds/linux/blob/master/drivers/net/et...


As far as I know, all of Intel's Ethernet controllers, with the exception of the most recent ones like the horribly-named i2xx (is that a lowercase i, uppercase I, or lowercase L? Even Intel seems to be confused about that on its site), have an 825xx part number. The 82574 is commonly emulated for VMs.


From the initial article I expected some bus-mastering DMA, or at least ISA DMA, but it's a big shared memory window. For ISA, network card choices were:

Bus mastering - fancy and fast, but troublesome with lots of RAM and/or in protected mode: http://www.os2museum.com/wp/vds-borne-out-of-necessity/

ISA DMA - slow; for example, the AMD LANCE Am7990 as in the Novell NE2100. Maximum original 8237 DMA controller throughput was ~1 MB/s while stealing 100% of the ISA bus, stalling the CPU completely.

Memory sharing - also problematic because you lose precious DOS memory.

Port IO - like the good old NE2000. Small software interface footprint, fast with 286/V20 'REP INSW/OUTSW' (sketched below).
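
For that last option, here's a rough sketch of the REP INSW read path in C with GCC inline assembly. The port address is hypothetical, and a real NE2000 driver would program the card's remote-DMA registers before pulling data:

  #include <stdint.h>

  #define NIC_DATA_PORT 0x310  /* hypothetical I/O base + data offset */

  /* One REP INSW burst: the CPU pulls `count` 16-bit words from the
   * card's data port straight into memory -- the small, fast software
   * interface described above. */
  static inline void nic_read_words(uint16_t port, uint16_t *buf, int count)
  {
      __asm__ volatile ("rep insw"
                        : "+D"(buf), "+c"(count)
                        : "d"(port)
                        : "memory");
  }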


Ken, not mentioned in the article, but I wondered about the history of this design. Some distant memory that it may have been a DEC design or DEC-involved? Obviously there were TTL-based Ethernet implementations in the 70s (Metcalfe's famous self-wire-wrapped prototype[1] being the first). Perhaps this chip was a VLSI re-do of one of those designs?

[1] https://americanhistory.si.edu/collections/search/object/nma...


Robert Garner is writing a book on the history of Ethernet; I'm leaving those historical details to him :-)


The Intel 82586 was co-architected by Bob Beach (Intel Santa Clara) and Dono Van-Mierop (Intel Haifa). Bob had previously designed the iSBC 550 10-Mb (dual-board) Multibus Ethernet adapter, deployed in Intel's MDS-80 microcomputer development systems. The AMD LANCE/Am7990 Ethernet controller chip was architected at DEC Tewksbury and designed at AMD Santa Clara. I've spoken with all the key players.

My technical history book will cover Ethernet's first 15 years, from its Alohanet inspiration through invention at PARC, initial chip and system products, CSMA/CD, IEEE 802.3 standardization, and the appearance of twisted pair. I've spoken with over 120 participants so far. (Btw, the Alto-I 3-Mb Ethernet adapter was primarily designed by David Boggs, although, as they jointly debugged it, Bob Metcalfe knew every gate. Bob authored the Alto-I Ethernet adapter microcode and the initial PUP protocol layer.)


I can't fathom how there are humans who can understand this stuff. I "just" do software and microcontrollers.


I work in ASIC design, and while it seems like gobbledygook, it's not too hard to grok with some experience. Ken's blogs help a lot in that regard. For context, compared to modern chips these are extremely simple, but at the time the tools for chip design were limited, so it was a bit of art and science in one. Now there is a lot of automation to help 'draw' the layout using standard cells for digital, but analog design is still an art (and an incredibly complex one). So, at least for digital, we've to a first order abstracted designers away from the individual gates and wires, and design is more akin to programming, though in languages designed for hardware. I feel fortunate to have seen both sides, having done both full-custom design and highly EDA-driven design.


To give some perspective, a chip like this from 1982 would have been designed by a very small team, maybe just 2-3 people, using primitive 1970s technology with no Internet access. Today you could learn how to design a chip like this in a few years, maybe faster if you just stick to logic design.


> using primitive 1970s technology

Including, but not limited to, miles of Rubylith.[0]

[0] https://en.wikipedia.org/wiki/Rubylith


Well, square feet of Rubylith maybe. I haven't been able to find the exact date that Intel switched from Rubylith to digitally-generated masks, but I'm pretty sure they had switched by 1982.


Chips at that level are often pretty simple. For fun I sometimes build out circuits with 74xx or 54xx parts that are about this complex on a breadboard and drive them with an Arduino.
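
If anyone wants to try the same thing, a minimal Arduino sketch along these lines just pulses a clock and reads an output back. The pin numbers and the specific counter part are assumptions:

  /* Hypothetical hookup: pin 2 drives a 74HC161 counter's clock input,
   * pin 3 reads its Q0 output, which should toggle on each pulse. */
  const int CLOCK_PIN = 2;
  const int Q0_PIN = 3;

  void setup() {
      pinMode(CLOCK_PIN, OUTPUT);
      pinMode(Q0_PIN, INPUT);
      Serial.begin(9600);
  }

  void loop() {
      digitalWrite(CLOCK_PIN, HIGH);        /* rising edge advances count */
      digitalWrite(CLOCK_PIN, LOW);
      Serial.println(digitalRead(Q0_PIN));  /* observe Q0 toggling */
      delay(500);
  }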


I’m always blown away by people who can reverse engineer dies.

I'm curious: can heat be used to detect interesting parts of a die? I.e., decap, then re-run the target functionality and see what's heating up?


Great article! This led me down a rabbit hole, exploring collisions in Wi-Fi and the corresponding CSMA/CA protocol. I hadn't put that much thought into how this worked in practice.


Brings back memories. I worked at DEC when Ethernet first came out, and I remember a few companies producing great benchmarks by not backing off.



