FPGA x86 Processor (code.google.com)
75 points by Cieplak on Feb 23, 2013 | 36 comments



Good luck, this is very, very hard

Why?

Having tested an x86 processor from a 3rd-party manufacturer (not the big ones, and not even the second league), these are the pain points:

- BIOS, including the ACPI stuff. Windows XP would boot; with Linux, some distros would boot, some would work but with instabilities, and some would not boot at all

- Minor incompatibilities in every nook and cranny of the x86 spec

- Drivers for everything your board does differently

- The x86 legacy infrastructure. Not sure how much of it you need just to boot and make Windows work, for example. It may get nasty: the A20 gate? Chained interrupt handlers? DMA controllers? (See the A20 sketch below for a taste.)
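For a flavor of that legacy, here is a minimal sketch of the classic "fast A20" enable via System Control Port A (port 0x92). It assumes bare-metal/ring-0 code and GCC-style inline assembly; real firmware typically also falls back to the keyboard-controller method on older chipsets.

    #include <stdint.h>

    // Port I/O helpers (x86, GCC/Clang inline asm, privileged code only).
    static inline uint8_t inb(uint16_t port) {
        uint8_t v;
        asm volatile("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    static inline void outb(uint16_t port, uint8_t v) {
        asm volatile("outb %0, %1" : : "a"(v), "Nd"(port));
    }

    // "Fast A20": bit 1 of port 0x92 gates the A20 line.
    // Bit 0 triggers a fast CPU reset, so it must stay clear.
    void enable_a20_fast() {
        uint8_t v = inb(0x92);
        if (!(v & 0x02)) {
            outb(0x92, (v | 0x02) & ~0x01);
        }
    }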


It looks like he only wants to emulate the 8086, so ACPI and booting Windows/Linux are not a possibility anyway.

Not saying that it's simple to implement an 8086, but it should be significantly easier than implementing an 80386/80486 (which is what one would expect today as a minimum when referring to it as x86).


Ah, that changes the picture.

Maybe he can boot FreeDOS then (not sure if it uses i386 mode, probably not)

It would be nice to boot something, to show it working, maybe even play an old game.


OS bringup doesn't seem to be part of the goal. It looks like he's just going to run code directly from memory to test it without worrying about needing to get an OS running on it first.


I'm not sure if that's the focus of the project.

If this is his final EE project, as stated on the page, then in my opinion the main focus is to have a working proof of concept of an x86 processor implemented on an FPGA.

He wouldn't have the time to build a fully market-ready x86 processor all by himself, I think.


Just reuse SeaBIOS like QEMU and KVM do! http://www.seabios.org/SeaBIOS


AFAIK a lot of the IBM PC legacy hardware is already required just to run a legacy PC BIOS.


This is awesome. One of the running bets I've had is how many chapters of Hennessy [1] you can implement in an FPGA. Early on it was hard to do more than basic RISC architectures, the 6502, etc. Then you could do the Z80, which is a good CISC variant with some excellent code tests. The 8086 and 68000 make for good follow-on targets. At some point we should be able to do a VAX; it's sort of a local maximum of CISCiness.

[1] http://www.amazon.com/Computer-Architecture-Quantitative-App...


> At some point we should be able to do a VAX

You mean a processor where all of the complex opcodes are implemented in loadable microcode?


That's the one! I happen to have a fairly complete collection of all of the 'chip' VAX CPUs (for Qbus), starting with the KA610 (MicroVAX I) through the KA692 (VAX 4000/700a), and it's fascinating to watch the architecture go from a nearly pure microcode implementation to a nearly pure 'gate' implementation. From the perspective of looking at the tradeoffs of microcode vs. gates, it is really an excellent tutorial on computer architecture.


If we want a microcoded architecture, I'd prefer the PDP-10, but that's me.

How much more complex is a mostly-microcode VAX implementation compared to a MIPS? The point about being able to move stepwise up the hardware complexity ladder by progressively replacing microcode with gates is a really good one, though.


> I'd prefer the PDP-10

yeah, or a Foonly...


You're thinking of the Perq (http://www.chiark.greenend.org.uk/~pmaydell/PERQ/). The VAX was probably implemented with microcode, but it wasn't that modifiable.


Timestamps on the source code repo seem to date this to July 2009.


My god, the ability to use this for tracing code makes me more excited than anything I've seen in a while. The debugging facilities on x86 are, well, limited to the point of being damn near useless. I may spend some time hacking solid trace functionality into this, if it ends up being an open core.


There are already several open x86 cores. You're not likely to find these as useful for modern code though, since most people are finally using x86_64 these days, and both the instructions in use and the processor characteristics of modern Intel chips (from SMT to the trace cache) are very different from these cores.

You might try instrumenting Bochs instead, which would likely run your code faster, require less hardware, and give you more accurate results. In many ways, Bochs is much better suited to the type of thing you want.
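If anyone wants to go that route: Bochs can be built with instrumentation support (the --enable-instrumentation configure option), and you supply the callbacks yourself. Below is a minimal sketch of an instruction/interrupt counter; the callback names and signatures are assumed from the stub files shipped in the Bochs source tree, so check instrument/stubs/ in your version before relying on them.

    // instrument.cc sketch for a Bochs build with instrumentation enabled.
    // Callback names/signatures assumed from Bochs's instrumentation stubs;
    // verify against instrument/stubs/instrument.{h,cc} in your source tree.
    #include <cstdio>
    #include "bochs.h"
    #include "cpu/cpu.h"

    static unsigned long long icount[16];   // sized generously for this sketch

    void bx_instr_initialize(unsigned cpu)
    {
        icount[cpu] = 0;
    }

    void bx_instr_before_execution(unsigned cpu, bxInstruction_c *i)
    {
        icount[cpu]++;   // count every instruction the emulated CPU executes
    }

    void bx_instr_interrupt(unsigned cpu, unsigned vector)
    {
        fprintf(stderr, "cpu %u: interrupt 0x%02x after %llu instructions\n",
                cpu, vector, icount[cpu]);
    }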


I like the project, but for tracing code, you could probably do better with Bochs. Bochs lets you set breakpoints at certain addresses and CPU states, for example.

If bochs doesn't do enough, it would probably be easier to hack it up to do what you need.


For such purposes, binary instrumentation is probably a more appropriate tool. There are various options out there; I personally prefer Pin [1] because it's extremely robust and gives you good control over the overhead of instrumenting.

edit: grammar

[1] http://software.intel.com/en-us/articles/pin-a-dynamic-binar...
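For anyone curious, the canonical starting point is something like Pin's instruction-trace example: a pintool that logs the address of every executed instruction. A lightly trimmed sketch along the lines of the itrace sample shipped with the Pin kit (the output file name and build path below are just examples):

    // itrace-style pintool: log the address of every executed instruction.
    // Build with the Pin kit, then run e.g.:
    //   pin -t obj-intel64/itrace.so -- ./your_program
    #include <cstdio>
    #include "pin.H"

    static FILE *trace;

    // Analysis routine: called before every executed instruction.
    static VOID RecordIp(VOID *ip) { fprintf(trace, "%p\n", ip); }

    // Instrumentation routine: called once per static instruction when first seen.
    static VOID Instruction(INS ins, VOID *v)
    {
        INS_InsertCall(ins, IPOINT_BEFORE, (AFUNPTR)RecordIp,
                       IARG_INST_PTR, IARG_END);
    }

    static VOID Fini(INT32 code, VOID *v) { fclose(trace); }

    int main(int argc, char *argv[])
    {
        trace = fopen("itrace.out", "w");
        if (PIN_Init(argc, argv)) return 1;
        INS_AddInstrumentFunction(Instruction, 0);
        PIN_AddFiniFunction(Fini, 0);
        PIN_StartProgram();   // never returns
        return 0;
    }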


For an undergrad OS course, we had to build an OS from scratch. Part of that was doing the dance to get from real mode to protected mode. We had a bug in our boot loader that we were pretty stumped with; to solve it, we ended up hacking debugging printfs into the "CPU" inside QEMU and found the problem very quickly.

I don't want to be working at that level every day but it sure was a fun project.


I remember dumping registers to text-mode screen memory so that I wouldn't waste a register. Half of the result landed in the color attribute bytes, so sometimes I couldn't read all of the value because it was flashing green on green. I preferred Bochs's port E9 hack.

Oh, OS courses at university... good old times. (Some participants complained that going from zero to bare-metal x86 was too difficult.)
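(For reference, the port E9 hack is just an OUT to port 0xE9: with the hack enabled in the bochsrc, Bochs echoes every byte written there to its console. A minimal sketch, again assuming bare-metal/ring-0 code and GCC-style inline assembly:)

    #include <stdint.h>

    static inline void outb(uint16_t port, uint8_t v) {
        asm volatile("outb %0, %1" : : "a"(v), "Nd"(port));
    }

    // Debug print that needs no video memory: with the port E9 hack enabled
    // in the bochsrc (typically "port_e9_hack: enabled=1"), Bochs dumps every
    // byte written to port 0xE9 straight to its console.
    void e9_puts(const char *s) {
        while (*s) outb(0xE9, (uint8_t)*s++);
    }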


So in my day-to-day job I'm actually a component design engineer working mostly on design validation at a company that's "involved" in x86 development. I'm sort of curious: what debugging features are you hoping for, or what's missing that's negatively impacting your workflow?

Honestly, even when I'm debugging hardware where I have the high-level specifications, the microarchitecture spec, and the SystemVerilog files that implement the design, it's still kind of a pain to trace things.

The other downside of having all the signals is that little things can be unintentionally misleading. As an example, I was working with a coworker trying to trace a memory transaction through some complicated logic blocks, and there was a point we originally missed where the bottom few bits of the address aren't needed by the hardware; past that point, those bits were reused to carry transaction properties alongside the significant address bits. There was a bit of confusion about why we were reading a "bad address" before we realized what had happened.


Most of the implementation is based on Zet (http://zet.aluzina.org/index.php/Zet_processor) and cpu86 (http://www.ht-lab.com/freecores/cpu8086/cpu86.html), as mentioned in the project description. The Zet implementation can already run several DOS games on a pretty low-end FPGA board like the DE1, but only does the older 16-bit instructions.

Not sure what he's trying to accomplish, but it seems like a small system based on a CPU, memory, a VGA controller (not seen?), and a 16550 UART, plus a lot of tests to prove the implementation is correct?


Yeah, I have posted that several times here but I keep getting shadowbanned for some reason. Here is my implementation of a Zet-based PC-XT SoC that I did about 4 years ago: https://github.com/donnaware/ZBC---The-Zero-Board-Computer


will have a look at this :-)


Hmm, besides being from 2009, this looks very, very incomplete; there's almost no real functional code, just a multiplier, a divider, and a simple RAM module (and I'm not sure whether any of those work properly...). So all in all, not too exciting, unfortunately.


Actively developing an x86 FPGA project isn't easy. The ML403 boards originally sold for $495 each, and most SoC/processor development requires large FPGAs whose boards cost even more.


This is awesome! Maybe one day we can extend it to more modern members of the x86 family (286, 386) that introduced more opcodes (hence the complicated ISA encoding that x86 has) and operating modes (unreal mode, protected mode, SMM, ...).


That seems insane. Even decoding x86 is insanely complicated.
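To give a feel for why, here's a toy sketch (a hypothetical helper, not from the project) that works out the length of a few 8086 instructions: optional prefixes, an opcode, an optional ModRM byte whose mod/rm fields decide how many displacement bytes follow, and an optional immediate. A real decoder has to get this right for every opcode, plus the operand- and address-size prefixes that later CPUs added.

    #include <cstddef>
    #include <cstdint>

    // Toy 8086 decode sketch: instruction length for a tiny opcode subset.
    // Illustrative only; a real decoder handles hundreds of opcode forms.

    // Bytes consumed by a 16-bit ModRM byte plus its displacement.
    static size_t modrm_bytes(uint8_t modrm) {
        uint8_t mod = modrm >> 6, rm = modrm & 7;
        if (mod == 0) return (rm == 6) ? 3 : 1;  // mod=00, rm=110: direct disp16
        if (mod == 1) return 2;                  // ModRM + disp8
        if (mod == 2) return 3;                  // ModRM + disp16
        return 1;                                // mod=11: register operand
    }

    // Returns the length in bytes of the instruction at p, or 0 if it is not
    // one of the handful of opcodes this toy understands.
    size_t insn_length_8086(const uint8_t *p) {
        size_t len = 0;
        // Prefixes: segment overrides, LOCK, REPNE/REP (one byte each, stackable).
        while (p[len] == 0x26 || p[len] == 0x2E || p[len] == 0x36 || p[len] == 0x3E ||
               p[len] == 0xF0 || p[len] == 0xF2 || p[len] == 0xF3)
            len++;
        uint8_t op = p[len++];
        if (op == 0x90) return len;                    // NOP
        if (op >= 0x88 && op <= 0x8B)                  // MOV r/m,r and MOV r,r/m
            return len + modrm_bytes(p[len]);
        if (op >= 0xB0 && op <= 0xB7) return len + 1;  // MOV r8, imm8
        if (op >= 0xB8 && op <= 0xBF) return len + 2;  // MOV r16, imm16
        return 0;                                      // not handled in this sketch
    }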


Awesome, that's a lot of connections to manage. Do you have any problems with memory or IO constraints on the Xilinx?


I recognize Norton Commander in the first photo. Good times.


EE? Extended Essay at the IB?

That's only supposed to be 4000 words. A project like this could turn into a book.


Much more likely EE => Electrical Engineering. This is probably a senior project or something like that.


Apparently, at -1, I guess there are stupid questions in this world. Thanks for the replies though, guys; I didn't know EE stood for that. Electrical engineering doesn't involve any FPGAs in Norway.


I think EE stands for Electronic Engineering in this context.


What kind of clock rate could this achieve? As fast as a historic 8086?


The first thing that I think of when I see this is bitcoin mining! Maybe it could give the Jalapeño / Butterfly Labs guys a run for their ... satoshis.



