Having tested an x86 processor from a 3rd-party manufacturer (not the big ones, and not even the 2nd league), these were the trouble spots:
- BIOS, including the ACPI tables. Windows XP would boot; with Linux, some distributions booted, some ran but with instabilities, and some would not boot at all.
- Minor incompatibilities in every nook and cranny of the x86 spec.
- Drivers for everything your board does differently.
- The x86 legacy infrastructure. I'm not sure how much of it you need just to boot and make Windows work, for example. It may get nasty: the A20 gate? Chained interrupt handlers? DMA controllers? (A sketch of one of these warts follows below.)
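A quick aside on the A20 point above: the sketch below shows the "fast A20" enable via System Control Port A (0x92), one of the legacy warts you still trip over during bring-up. This is not from the project; it assumes freestanding x86 code built with GCC/Clang inline asm, and real firmware usually also tries the keyboard-controller method as a fallback.

    #include <cstdint>

    // Port I/O helpers (GCC/Clang inline asm, x86 only).
    static inline std::uint8_t inb(std::uint16_t port) {
        std::uint8_t v;
        __asm__ __volatile__("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    static inline void outb(std::uint16_t port, std::uint8_t value) {
        __asm__ __volatile__("outb %0, %1" : : "a"(value), "Nd"(port));
    }

    // "Fast A20": set bit 1 of port 0x92 to ungate the A20 line, taking care
    // not to touch bit 0, which triggers a fast reset.
    static void enable_a20_fast() {
        std::uint8_t v = inb(0x92);
        if (!(v & 0x02))
            outb(0x92, static_cast<std::uint8_t>(v | 0x02));
    }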
It looks like he only wants to implement an 8086, so ACPI and booting Windows/Linux are not a possibility.
I'm not saying that it's simple to implement an 8086, but it should be significantly easier than implementing an 80386/80486 (which is the minimum one would expect today when something is called x86).
OS bringup doesn't seem to be part of the goal. It looks like he's just going to run code directly from memory to test it without worrying about needing to get an OS running on it first.
If this is his final EE project, as stated on the page, then in my opinion the main focus is a working proof of concept of an x86 processor implemented on an FPGA.
He wouldn't have the time to build a fully market-ready x86 processor all by himself, I think.
This is awesome. One of the running bets I've had is how many chapters of Hennessy [1] you can implement on an FPGA. Early on it was hard to do more than basic RISC architectures, the 6502, etc. Then you could do the Z80, which was a good CISC variant with some excellent code tests. The 8086 and 68000 make for good follow-on targets. At some point we should be able to do a VAX; it's sort of a local maximum of CISCiness.
That's the one! I happen to have a fairly complete collection of all of the 'chip' VAX CPUs (for Qbus), starting with the KA610 (MicroVAX I) and running through the KA692 (VAX 4000/700a), and it's fascinating to watch the architecture go from a nearly pure microcode implementation to a nearly pure 'gate' implementation. From the perspective of the tradeoffs between microcode and gates, it is really an excellent tutorial on computer architecture.
If we want a microcoded architecture, I'd prefer the PDP-10, but that's me.
How much more complex is a mostly-microcode VAX implementation compared to a MIPS? The point about being able to move stepwise up the hardware complexity ladder by progressively replacing microcode with gates is a really good one, though.
My god, the ability to use this for tracing code makes me more excited than anything I've seen in a while. The debugging facilities on x86 are, well, limited to the point of being damn near useless. I may spend some time hacking solid trace functionality into this, if it ends up being an open core.
There are already several open x86 cores. You're not likely to find these as useful for modern code, though: most people are finally using x86_64 these days, and a lot of the instructions in use (and a lot of the characteristics of current Intel chips, from SMT to the trace cache) are very different.
You might try instrumenting Bochs instead, which might run your code faster, require less hardware, and give you more accurate results. In many ways, Bochs is much better suited to the kind of thing you want to do.
I like the project, but for tracing code, you could probably do better with Bochs. Bochs lets you set breakpoints at certain addresses and CPU states, for example.
If Bochs doesn't do enough, it would probably be easier to hack it up to do what you need.
For such purposes, binary instrumentation is probably a more appropriate tool. There are various options out there; I personally prefer Pin [1] because it's extremely robust and gives you good control over the overhead of the instrumentation.
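If it helps, a Pin tool for coarse tracing doesn't have to be large. The sketch below loosely follows the classic instruction-counting example from Pin's documentation; the headers and build setup come from whatever Pin kit you're using, so treat it as a starting point rather than a drop-in tool.

    #include <iostream>
    #include "pin.H"

    static UINT64 icount = 0;

    // Analysis routine: runs before every executed instruction.
    VOID docount() { icount++; }

    // Instrumentation routine: called when an instruction is first translated.
    VOID Instruction(INS ins, VOID *v) {
        INS_InsertCall(ins, IPOINT_BEFORE, (AFUNPTR)docount, IARG_END);
    }

    VOID Fini(INT32 code, VOID *v) {
        std::cerr << "Executed " << icount << " instructions" << std::endl;
    }

    int main(int argc, char *argv[]) {
        if (PIN_Init(argc, argv)) return 1;
        INS_AddInstrumentFunction(Instruction, 0);
        PIN_AddFiniFunction(Fini, 0);
        PIN_StartProgram();   // never returns
        return 0;
    }

You then launch it as something like "pin -t ./mytool.so -- ./program-under-test" and swap the counter for whatever per-instruction logging you actually need.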
For an undergrad OS course, we had to build an OS from scratch. Part of that was doing the dance to get from real mode to protected mode. We had a bug in our boot loader that had us pretty stumped; to solve it, we ended up hacking debugging printfs into the "CPU" inside QEMU and found the problem very quickly.
I don't want to be working at that level every day but it sure was a fun project.
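For anyone curious what that hack looks like in spirit: the sketch below is not QEMU's actual code (its real translation internals are more involved), just the hypothetical shape of dumping guest registers from an emulator's per-instruction step while the boot sector runs. CpuState and cpu_step are invented names.

    #include <cstdint>
    #include <cstdio>

    struct CpuState {
        std::uint32_t eip, esp, cr0;
    };

    void cpu_step(CpuState &cpu) {
        // The BIOS loads the boot sector at 0000:7C00, so only trace that
        // window to keep the log readable.
        if (cpu.eip >= 0x7C00 && cpu.eip < 0x7E00)
            std::fprintf(stderr, "EIP=%08x ESP=%08x CR0=%08x\n",
                         cpu.eip, cpu.esp, cpu.cr0);

        // ... fetch, decode, and execute one instruction as usual ...
    }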
I remember dumping registers to text-mode screen memory so that I wouldn't waste a register. Half of the result landed in the color attributes, so sometimes I couldn't read all of the value because it was flashing green on green. I preferred Bochs' port E9 hack.
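For reference, the port E9 trick needs almost nothing on the guest side: Bochs, with the port_e9_hack option enabled in the bochsrc, echoes any byte written to I/O port 0xE9 to its console/log. A minimal sketch, assuming freestanding x86 code with GCC/Clang inline asm:

    // Write debug text to Bochs' debug port instead of poking 0xB8000
    // (and accidentally painting it green on green).
    static inline void e9_putc(char c) {
        __asm__ __volatile__("outb %0, %1"
                             : : "a"(c), "Nd"(static_cast<unsigned short>(0xE9)));
    }

    static void e9_puts(const char *s) {
        while (*s)
            e9_putc(*s++);
    }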
Oh, OS courses at the university... good old times. (Some participants complained that going from zero to bare-metal x86 was too difficult.)
So in my day-to-day job I'm actually a component design engineer, working mostly on design validation at a company that's "involved" in x86 development. I'm sort of curious: what debugging features are you hoping for, or what's missing that's negatively impacting your workflow?
Honestly, even when I'm debugging hardware where I have the high-level specifications, the microarchitecture spec, and the SystemVerilog files that implement the design, it's still kind of a pain to trace things.
The other downside of having all the signals is that little things can be unintentionally misleading. As an example, I was working with a coworker trying to trace a memory transaction through some complicated logic blocks, and there was a point we originally missed where the bottom few bits of an address are no longer needed by the hardware. Past that point the bottom bits were reused to communicate transaction properties alongside the significant address bits. There was a bit of confusion about why we were reading a "bad address" before we realized what had happened.
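To make the failure mode concrete, here is an invented illustration (none of this is the actual design): once an address is known to be cache-line aligned, its low bits are "free" and get reused for transaction properties, and anything that later displays the packed value as a plain address shows a confusing "bad address".

    #include <cstdint>
    #include <cstdio>

    constexpr std::uint64_t kLineMask = ~std::uint64_t{0x3F};  // 64-byte lines
    constexpr std::uint64_t kPropMask = 0x3F;                  // low 6 bits reused

    std::uint64_t pack(std::uint64_t addr, std::uint64_t props) {
        return (addr & kLineMask) | (props & kPropMask);
    }

    std::uint64_t address(std::uint64_t packed)    { return packed & kLineMask; }
    std::uint64_t properties(std::uint64_t packed) { return packed & kPropMask; }

    int main() {
        std::uint64_t raw = pack(0x123456780ULL, 0x2A);
        // Dumping "raw" without masking is what looks like a bad address.
        std::printf("raw=%#llx addr=%#llx props=%#llx\n",
                    static_cast<unsigned long long>(raw),
                    static_cast<unsigned long long>(address(raw)),
                    static_cast<unsigned long long>(properties(raw)));
    }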
Not sure what he's trying to accomplish, but it seems like a small system based on a CPU, memory, a VGA controller (? not seen), and a 16550 UART, plus a lot of tests to prove the implementation is correct?
Hmm, besides being from 2009, this looks very, very incomplete: there's almost no real functional code, just a multiplier, a divider, and a simple RAM module (and I'm not sure whether any of those work properly...). So all in all, not too exciting, unfortunately.
Actively developing an x86 FPGA project isn't easy. The ML403 boards originally sold for $495 each, and most SoC/processor development requires large FPGAs, whose boards cost even more.
This is awesome! Maybe one day we can extend it to more modern members of the x86 family (the 286 and 386) that introduced more opcodes (hence the complicated ISA encoding that x86 has today) and more operating modes (unreal mode, protected mode, SMM, ...).
Apparently, at -1, I guess there are stupid questions in this world. Thanks for the replies though, guys; I didn't know EE stood for that. Electrical engineering doesn't involve any FPGAs in Norway.
Why?