This would make concrete, and bring coherence to, the grab bag of skills and experience I have. Though I think it would be worth 10x as much in a small group setting. It is like trying to recover the source code of a binary where you don't even know the source language.
At what number of layers does it become difficult to reverse engineer a processor from die photos? I would think that at some point the functionality would be too obscured to be able to understand the internal operation.
I've been able to handle the Pentium with 3 metal layers. The trick is that I can remove metal layers to see what is underneath, either chemically or with sanding. Shrinking feature size is a bigger problem since an optical microscope only goes down to about 800 nm.
I haven't seen any chips with a solid metal top layer, since that wouldn't be very useful. Some chips have thick power and ground distribution on the top layer, so the top is essentially solid. Secure chips often cover the top layer with a wire that goes back and forth, so the wire will break if you try to get underneath for probing.
Interesting! What is the reason for the 800nm limit? I have successfully photographed my own designs down to 130nm with optical microscopes, though not with metal layer removal. The resolution isn't perfect, but features were clearly visible.
The first thing I thought was that he was referring to the wavelength of visible light, which is generally between 400 and 800nm IIRC. I take it your 130nm optical microscopes are imaging using ultraviolet?
Regardless, let's just get this man a scanning tunneling microscope already. :D
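For reference, the usual back-of-the-envelope bound here is the Abbe diffraction limit; the numbers below are illustrative assumptions (green light, a good dry objective), not anyone's actual setup:

    d = λ / (2 · NA) ≈ 550 nm / (2 × 0.95) ≈ 290 nm

So a conventional visible-light microscope bottoms out around 300nm of true resolution. Smaller features like 130nm can still show up as contrast (you can tell something is there and see the overall pattern) without being genuinely resolved, and UV illumination or oil immersion pushes the limit lower.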
I'd love to see an analysis of the impact of byte ordering on CPU implementation. Does little vs. big endian make any difference to the complexity of the algorithms and circuits?
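One concrete place the choice shows up is addressing: on a little-endian machine, a pointer to a value is also a pointer to its least significant byte, whatever the width. A minimal C sketch of that property (the variable names are mine, just for illustration):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void) {
        uint32_t x = 0x11223344;
        uint8_t b[4];
        memcpy(b, &x, sizeof b);

        /* Little endian stores 44 33 22 11; big endian stores 11 22 33 44. */
        printf("bytes in memory: %02x %02x %02x %02x\n", b[0], b[1], b[2], b[3]);

        /* On little endian, a narrower read at the same address still
           yields the low-order part -- the classic argument that little
           endian simplifies multi-precision arithmetic, since carries
           propagate from low addresses upward. */
        uint16_t low;
        memcpy(&low, &x, sizeof low);
        printf("first two bytes as u16: %04x\n", low);  /* 3344 on LE */
        return 0;
    }

For the datapath itself, my understanding is that it mostly washes out on a full-width ALU; the differences show up in corner cases like unaligned accesses and byte-lane steering.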
So Epictronics recently looked at the 386SX, the version with the 16-bit external bus, which was slower than the 286 at the same clock. What changed between that and this? Was the major difference the double clock hit on fetch? Or did it also have a shorter prefetch queue, like the 8088?
The 386SX was slower than a 286 at the same clock only for legacy 16-bit programs, and only for those that did not use a floating-point coprocessor, since the 80387 coprocessors available for the 386SX were much faster at the same clock frequency than the 80287 available for the 286.
Moreover, there was only a small window when the 286 and 386SX overlapped in clock frequency. In later years the 286 could be found only at 12 MHz or 16 MHz, while the 386SX was available at 25 MHz or 33 MHz, so the 386SX was noticeably faster at running any program.
Rewriting or recompiling a program as a 32-bit executable could gain a lot of performance, but it is true that in the early years of 386DX and 386SX most users were still using 16-bit MS-DOS applications.
I remember reading about naive circuits like ripple-carry, where a signal has to propagate across the whole width of a register before it's valid. These seem like they'd only work in systems with very slow clocks relative to the logic itself.
In this writeup, something that jumps out at me is the use of the equality bus and the Manchester carry chain, and I'm sure there are more similar tricks for doing things quickly.
When did the transition happen? Or were the shortcuts always used, and the naive implementations exist only in textbooks?
Well, the Manchester carry chain dates back to 1959. Even the 6502 uses carry skip to increment the PC. As word sizes became larger and transistors became cheaper, implementations became more complex and optimized. And mainframes have been using these tricks forever.
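To make the delay difference concrete, here's a toy C model of worst-case carry delay; the unit gate delays are an assumption for illustration, not real timing:

    #include <stdio.h>

    /* Ripple carry: the worst-case carry path walks every bit. */
    static int ripple_delay(int width) { return width; }

    /* Carry skip with fixed-size blocks: worst case, the carry is
       generated in the first block, skips each middle block in one
       unit, and ripples through the last block. */
    static int skip_delay(int width, int block) {
        int blocks = width / block;  /* assumes width % block == 0, blocks >= 2 */
        return block + (blocks - 2) + block;
    }

    int main(void) {
        printf("32-bit ripple:  %d units\n", ripple_delay(32));   /* 32 */
        printf("32-bit skip(4): %d units\n", skip_delay(32, 4));  /* 14 */
        return 0;
    }

The Manchester chain attacks the same path differently, passing the carry along a fast pass-transistor chain instead of through a full gate per bit.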
As I understand it, you can use slower carry propagation techniques in parts of a design that aren't on the timing critical path. Speeding up logic that isn't on the critical path won't speed up your circuit; it just wastes space and power.
Clock dividers (for example, for PLLs and for generating sampling clocks) commonly use simple ripple carry because nobody is looking at multiple bits at a time.
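A toy model of why that's fine for dividers (flip-flop delays as illustrative units): in a ripple counter, bit i only toggles off bit i-1's edge, so the counter takes up to one delay per toggled stage to settle, but the divider's consumer only ever taps a single output bit:

    #include <stdio.h>

    int main(void) {
        enum { STAGES = 4 };  /* divide-by-16: output is bit 3 */
        unsigned count = 0;

        for (int edge = 1; edge <= 8; edge++) {
            unsigned prev = count;
            count = (count + 1) & ((1u << STAGES) - 1);

            /* Highest changed bit = how far the toggle rippled, i.e.
               how many flip-flop delays until the count is settled. */
            unsigned changed = prev ^ count;
            int depth = 0;
            while (changed >>= 1) depth++;
            printf("edge %d: count=%2u, settles after %d delays\n",
                   edge, count, depth + 1);
        }
        return 0;
    }

Since only the top bit is used as the divided clock, the staggered settling of the lower bits never matters.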
I wrote blitters in assembly back in those days for my hobby games as a teenager. When I could actually target the 386 with its dword moves, it felt blisteringly fast. Maybe the 386 didn't run 286 code much faster, but I recall the chip being one of the most mind-blowing target-machine upgrades I experienced. Much later I recall the FPU-supported quadword copy on the 486DX, and of course the P6 meeting MMX in the Pentium II. Good times.
You're 100% right that the 386 had a huge amount of changes that were pivotal in the future of x86 and the ability to write good/fast code.
I think a bigger challenge back then was the lack of software that could take advantage of it. Given the nascent state of the industry, lots of folks wrote for the 'lowest common denominator' and kept it at that (i.e. the expense of the hardware needed to test things like switching routines based on CPU sniffing).
And even then, of course, sometimes folks were lazy. One of my (least) favorite examples of this is the PC 'version' (it's not at all the original) of Mega Man 3. On a 486/33 you had the choice between almost impossibly twitchy-fast and dog slow, thanks to the turbo button. Or the fun thing where Turbo Pascal-compiled apps could start crapping out if the CPU was too fast...
Sorry, I digress. The 386 was a seemingly small step that was actually a leap forward. Folks just had to catch up.
I was programming in Turbo Pascal at the time, which was still 16-bit. But when I upgraded my 286 to a Cyrix 486 on a 386 motherboard[1], I could utilize the full 32-bit registers by prefixing assembly instructions with 0x66 using db[1].
This was a huge boost for a lot of my 3D rendering code, despite the prefix not being free compared to pure 32-bit mode.
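For anyone curious what that looks like at the byte level, here's a sketch; the specific MOV is my example, not the parent's code. In a 16-bit code segment, the 0x66 operand-size prefix widens the next instruction to 32-bit operands on a 386+:

    #include <stdio.h>

    /* 89 D8 is "mov ax, bx" in a 16-bit segment; prefixing the same
       opcode bytes with 0x66 turns it into "mov eax, ebx". */
    static const unsigned char mov16[] = { 0x89, 0xD8 };        /* mov ax, bx   */
    static const unsigned char mov32[] = { 0x66, 0x89, 0xD8 };  /* mov eax, ebx */

    int main(void) {
        printf("16-bit form: %02X %02X\n", mov16[0], mov16[1]);
        printf("32-bit form: %02X %02X %02X\n", mov32[0], mov32[1], mov32[2]);
        return 0;
    }

In Turbo Pascal's built-in assembler that prefix is emitted with db $66 immediately before the 16-bit instruction, which is exactly the trick described above.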
Imagine how it felt going from an 8086 @ 8 MHz to an 80486SX (the cheapo version without an FPU) @ 33 MHz, with blazingly fast REP MOVSD over some form of proto local bus that Compaq implemented using a Tseng Labs ET4000/W32i VGA chip.
The 286 in the benchmark was using 60 ns Siemens RAM, and it was a 25 MHz unit that virtually no one ever saw in the wild; the 286s people actually bought topped out at 12 MHz.
Microsoft and IBM were both developing OS/2 together. There were a lot of disagreements between the two companies. IBM wanted to keep supporting the 286.
> OS/2 1.x targets the Intel 80286 processor and DOS fundamentally does not. IBM insisted on supporting the 80286 processor, with its 16-bit segmented memory mode, because of commitments made to customers who had purchased many 80286-based PS/2s as a result of IBM's promises surrounding OS/2.[30] Until release 2.0 in April 1992, OS/2 ran in 16-bit protected mode and therefore could not benefit from the Intel 80386's much simpler 32-bit flat memory model and virtual 8086 mode features. This was especially painful in providing support for DOS applications. While, in 1988, Windows/386 2.1 could run several cooperatively multitasked DOS applications, including expanded memory (EMS) emulation, OS/2 1.3, released in 1991, was still limited to one 640 kB "DOS box".
> Given these issues, Microsoft started to work in parallel on a version of Windows which was more future-oriented and more portable. The hiring of Dave Cutler, former VAX/VMS architect, in 1988 created an immediate competition with the OS/2 team, as Cutler did not think much of the OS/2 technology and wanted to build on his work on the MICA project at Digital rather than creating a "DOS plus". His NT OS/2 was a completely new architecture.[31]
DOS extenders had started appearing in the 1980s, but they weren't a real OS; then again, I would barely call DOS an OS either.
> In 1987, SCO ported Xenix to the 386 processor, a 32-bit chip, after securing knowledge from Microsoft insiders that Microsoft was no longer developing Xenix.[41] Xenix System V Release 2.3.1 introduced support for i386, SCSI and TCP/IP. SCO's Xenix System V/386 was the first 32-bit operating system available on the market for the x86 CPU architecture.
I hope that some day the tedious part of what you do can be automated (AI?), so that you (or others) can spend your time on whatever aspect is most interesting, vs. all the grunt work needed to get to the point where you understand what you're looking at.
Btw, any 4-bit CPUs/uCs in your collection? Back in the day I had a small databook (OKI, early '90s IIRC) that had a bunch of those. These seem to have sort of disappeared (e.g. I never saw a PDF of that particular databook on sites like Bitsavers).
For the most part, the account posts comments on HN that previously appeared on reddit discussions of the same article (you can check this with Google). My guess is that it's an experiment in karma farming.