
The 20-bit address was likely a direct result of Intel's decision to put the 8086 in a 40-pin package.

They had already multiplexed the 16-bit data bus onto 16 of the 20 address pins. Even so, with only 40 pins, that 20-bit address bus took up half of the total chip pinout.

And at the time the 8086 was being designed (circa 1976-1978), with other microprocessors of the era having 16-bit or smaller address buses, the jump to 1M of possible total address space was likely seen as enormous. We look back now, comfortable with 12+GB of RAM as a common size, and see 1M as small. But when your common size is 16-64k, having 1M as a possibility would seem huge.




Packaging makes a lot of sense. I've handled a 68000 in DIP64 and it's just comically huge, and trying to fit the cursed thing into a socket quickly explains why DIPs larger than 40 are ultra rare.

I'm sure there must be architectures that use a multiplexed low/high address bus: a latch signal that says "the 16 bits on the address bus right now are the segment", then a moment later "okay, here's the offset", leaving it to the decoding circuitry on the motherboard to determine how much to overlap them, if at all. Doing it this way, you could scale the same chip from 64kB to 4GB, and the decision would be part of the system architecture rather than of the processor. (You could even have mode bits, like the infamous A20 Gate, that would vary the overlap and thus the addressable space...)
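
To make the idea concrete, here's a rough C sketch of the board-side decode I'm imagining: a latched 16-bit "segment" word followed by a 16-bit "offset" word, with the overlap chosen by the system. All the names and widths are made up for illustration.

    #include <stdint.h>

    /* Hypothetical board-side decode: the CPU drives a 16-bit "segment"
     * word on the shared address pins, strobes a latch, then drives the
     * 16-bit "offset".  The system, not the CPU, chooses the overlap. */
    static uint32_t decode_multiplexed_address(uint16_t segment,
                                               uint16_t offset,
                                               unsigned segment_shift)
    {
        /* segment_shift = 0  -> halves fully overlap, ~64 kB of space
         * segment_shift = 4  -> 8086-style 20-bit space (1 MB)
         * segment_shift = 16 -> no overlap, a full 32-bit space (4 GB) */
        return ((uint32_t)segment << segment_shift) + (uint32_t)offset;
    }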

But, yeah, it was surely seen as unnecessary at the time. Nobody was expecting the x86 to spawn forty-plus years of descendants, and even though Moore's Law was over a decade old at the time, it seems like nobody was wrapping their head around its full implications.


> I'm sure there must be architectures that use a multiplexed low/high address bus, like a latch signal that says "the 16 bits on the address bus right now are the segment", then a moment later "okay here's the offset"

Most modern memory interfaces do that; see https://en.wikipedia.org/wiki/SDRAM#Control_signals for an older example (DDR4 and DDR5 are more complicated). The high address bits are sent first (while RAS is active), then the low address bits (while CAS is active).
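
As a rough sketch (not any particular controller's code), the idea is just to split the flat address in two and drive the same pins twice, row half first:

    #include <stdint.h>

    /* Illustrative split for a hypothetical DRAM whose 4096 rows and
     * 1024 columns share one set of address pins. */
    #define ROW_BITS 12
    #define COL_BITS 10

    struct dram_addr {
        uint16_t row;   /* driven first, latched while RAS is active  */
        uint16_t col;   /* driven second, latched while CAS is active */
    };

    static struct dram_addr split_dram_address(uint32_t addr)
    {
        struct dram_addr a;
        a.col = (uint16_t)(addr & ((1u << COL_BITS) - 1u));
        a.row = (uint16_t)((addr >> COL_BITS) & ((1u << ROW_BITS) - 1u));
        return a;
    }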


I think DRAM always worked this way. It's because the chips are physically organized into rows and columns, and the row must be selected first.


It seems obvious to access DRAM that way, but multiplexing the address pins was a big innovation from Mostek for the 4K RAM chips, and they "bet the company" on this idea. Earlier DRAMs weren't multiplexed; Intel just kept adding address pins and ended up with a 22-pin 4K chip. Mostek multiplexed the pins so they could use a 16-pin package.

The hard part is figuring out how to avoid slowing down the chip while you're waiting for both parts of the address. Mostek figured out how to implement the timing so the chip can access the row of memory cells while the column address is getting loaded. This required a bunch of clock generating circuitry on the chip, so it wasn't trivial.

I discuss this in more detail in one of my blog posts: https://www.righto.com/2020/11/reverse-engineering-classic-m...


That's great, and it dovetails perfectly with a video I just watched on the topic, which I really enjoyed. It's a little flashy with the animations at first, but once it gets into the meat of things, they're used well:

https://www.youtube.com/watch?v=7J7X7aZvMXQ


Hahaha, somehow I convinced myself that the rationale for the segment register scheme was a long-term plan to accommodate virtual memory and memory protection. The idea being that you would only have to validate access constraints when a segment register was loaded, rather than on every single memory access.
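
The idea isn't crazy, though: do the expensive checks once when a segment register is loaded, and make ordinary accesses just an add against a cached base. A hand-wavy C sketch (every name here is hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical descriptor validated once, when the segment register
     * is loaded; after that, each access is just base + offset. */
    struct seg_desc {
        uint32_t base;
        bool     writable;
    };

    static struct seg_desc cached_seg;      /* the "loaded" segment */

    static bool load_segment(const struct seg_desc *d, bool need_write)
    {
        if (need_write && !d->writable)
            return false;                   /* validation paid once, here */
        cached_seg = *d;
        return true;
    }

    static uint32_t translate(uint16_t offset)
    {
        return cached_seg.base + offset;    /* per-access cost: one add */
    }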


If you read through the PDF Ken links to in another note here, you find this quote starting the "Objectives and Constraints of the 8086" section:

"The processor was to be assembly-language-level-compatible with the 8080 so that existing 8080 software could be reassembled and correctly executed on the 8086. To allow for this, the 8080 register set and instruction set appear as logical subsets of the 8086 registers and instructions."

The segment registers provide a way to address more than 64k in total, while also maintaining this "assembly-language-level-compatib[ility]" with existing 8080 programs.
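
Concretely, the 8086 forms a 20-bit physical address as segment * 16 + offset, which is how two 16-bit values reach a 1 MB space (a quick C sketch of the well-known calculation):

    #include <stdint.h>

    /* 8086 real-mode address formation: physical = segment * 16 + offset.
     * Each segment register selects a 64 kB window that can start on any
     * 16-byte boundary within the 1 MB (20-bit) space. */
    static uint32_t phys_addr(uint16_t segment, uint16_t offset)
    {
        return ((uint32_t)segment << 4) + (uint32_t)offset;
    }

    /* e.g. phys_addr(0xF000, 0xFFF0) == 0xFFFF0, where the 8086 begins
     * executing after reset; note that 0xFFFF:0x0000 aliases the same
     * physical address. */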


It was not so much an engineering decision as a simple requirement for the chip to be feasible. At the time, a 40-pin DIL was effectively the largest package that could be made at production scale.


Intel's decision to use a 40-pin package was mainly down to its weird drive for small integrated circuits. The Texas Instruments TMS9900 (1976) used a 64-pin package, for instance, as did the Motorola 68000 (1979).

For the longest time, Intel was fixated on 16-pin chips, which is why the Intel 4004 processor was crammed into a 16-pin package. The 8008 designers were lucky that they were allowed 18 pins. The Oral History of Federico Faggin [1] describes how 16-pin packages were a completely silly requirement, but the "God-given 16 pins" was like a religion at Intel. He hated this requirement because it was throwing away performance. When Intel was forced to 18 pins by the 1103 memory chip, it "was like the sky had dropped from heaven" and he had "never seen so many long faces at Intel."

[1] pages 55-56 of http://archive.computerhistory.org/resources/text/Oral_Histo...


I'm really curious about the arguments for sticking to those lower pin counts. Was it for compatibility? Or packaging costs? Like, what was the downside of using more pins (up to a point, obviously, but it seems like other manufacturers had no issues going with higher pin counts!)


I think the argument for low pin counts was that Intel had a lot of investment in manufacturing and testing to support 16 pin packages, so it would cost money to upgrade to larger packages. But from what I read, it seems like one of those things that turned into a cultural, emotionally-invested issue rather than an accounting issue.


Doubtful. 42- and 48-pin DIPs of the same width as the 40-pin package were a thing around that time, or not much later.



