I've been working on an annotated disassembly of the Yamaha DX7 ROM; it's been quite a trip so far. Figuring out all the hardware, how the operators and the envelope generators work by looking at the code, the MIDI implementation and so on.
I work a little bit on it every week on Saturday evening, it's my way of winding down from a busy week. Other people play Sudoku ;)
In case you're wondering why: I'd like to de-solder the CPU from a broken motherboard that I have (besides two working ones) to see how hard it is to take over the hardware by simulating the CPU with a RasPi, but for that I have to know exactly how the bloody thing works in the first place and it is quite a complex beast. The ROM is 16K.
This I would like to know more about. Last year I started to get into electronics by creating my own synth using an Arduino. As a software boffin, it was marvellous to learn about PWM, timers and the MIDI protocol (along with everything else). Obviously, I read up on the various synths over the years to see how they did their sound generation.
Where do you even start with your project? How do you even get to the disassembled code?
Dump the ROM with an EPROM programmer used in read-back mode, then try to figure out what chip it is (found a schematic somewhere). Turns out there are two CPUs in it: one which is mask programmable and handles all the A/D and the buttons and keys, and a second one which drives the whole thing. The two CPUs talk over some funky parallel link (think 'LapLink', but for two old 8-bitters sitting 5 cm away from each other).
After you know the CPU it is relatively straightforward: you try to figure out the memory map by looking at the address decoding circuitry, and from there you can infer things like where the vectors are pointing (usually they are at the end of the memory map in CPUs from that era, alternatively at '0'). That will give you a good idea of where the ROM should be located (in this case from $C000 upwards).
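To make the vector-hunting step concrete, here's a minimal Python sketch of pulling the reset vector out of a ROM dump. The filename-free setup, the 16K-at-$C000 mapping and the fake image are my assumptions for illustration; 6800-family CPUs (like the DX7's HD6303) fetch their reset vector from $FFFE/$FFFF, stored big-endian.

```python
# Sketch: locate the reset vector in a 16K ROM image assumed mapped at $C000.
# 6800-family CPUs read the reset vector from $FFFE/$FFFF, high byte first.

ROM_BASE = 0xC000
ROM_SIZE = 0x4000  # 16K, so the image covers $C000-$FFFF

def reset_vector(rom: bytes) -> int:
    assert len(rom) == ROM_SIZE
    # $FFFE maps to offset $3FFE in the image; big-endian on this family
    hi = rom[0xFFFE - ROM_BASE]
    lo = rom[0xFFFF - ROM_BASE]
    return (hi << 8) | lo

# Fake image whose reset vector points at $C000, just to show the mechanics:
rom = bytearray(ROM_SIZE)
rom[0x3FFE], rom[0x3FFF] = 0xC0, 0x00
print(hex(reset_vector(bytes(rom))))  # -> 0xc000
```

If the vector you extract points outside the address range your decoding logic assigns to the ROM, you've probably guessed the base address wrong, which is a cheap sanity check before committing evenings to tracing code.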
Then you spend lots of evenings with cups of tea and some quiet music on, trying to figure out what all the instructions are trying to achieve. A good starting point is any ASCII strings, then tables of pointers to code and strings, then trace the boot vector until you come to the inevitable main loop and trace inwards from there. The IRQ and NMI vectors are also useful entry points to dig around in (NMI not used here). It's a job that requires quite a bit of patience, and a false start somewhere can cost you dearly. I love the little 'aha!' moments whenever I've sussed out what a particular bit of code does. For instance, the DX7 can display the battery voltage, the patch numbers and other information in decimal, like it should. So then you come across this repeated subtraction test against the contents of a table, and it turns out to be the binary-to-decimal converter.
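For anyone curious what that kind of routine looks like, here's a rough Python model of a table-driven repeated-subtraction converter. The table values and function name are mine; the actual ROM routine will differ in detail, but the idea is the same: no divide instruction, so you subtract powers of ten and count.

```python
# Model of a table-driven repeated-subtraction binary-to-decimal converter,
# the kind of routine you'd find in an 8-bit ROM with no divide instruction.

POWERS = [10000, 1000, 100, 10, 1]  # the "table" the code walks through

def to_decimal_digits(value: int) -> list[int]:
    digits = []
    for p in POWERS:
        count = 0
        while value >= p:   # repeated subtraction instead of division
            value -= p
            count += 1      # the count of subtractions is the digit
        digits.append(count)
    return digits

print(to_decimal_digits(1234))  # -> [0, 1, 2, 3, 4]
```

On the real CPU each digit count fits in a register and the loop is a compare, a subtract and an increment, which is why this pattern is so recognisable once you've seen it once.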
It's going to be a while before this is done but it is very interesting.
Brilliant comment, this is what I come to HN for. I'm starting to get into electronics myself and as a developer I fear it's only a matter of time until I get sucked into this side of things...
That chip reminds me of the Elektor 'GDP', the Graphics Display Processor, a pretty quick 2D accelerator. I wrote a MOS replacement driver for it for the BBC micro that was good enough to run almost anything on except software that wrote directly to the screen.
I did similar synth reverse-engineering of Korg's DS-10 synth for the Nintendo DS, so not much desoldering involved :-) The goal was to get it running as a VST plugin, but ARM emulation wasn't fast enough, so I wrote a static-JIT decompiler and decompiled the synth portion of the code into C (see "c-code.h").
That's a very cool project, kudos on the stamina, that must have taken you a really long time. And thanks for the reminder to fix the pictures on my own blog, exact same story.
I'd definitely read a full writeup on this if you ever get a chance. So many of these little proprietary systems risk being completely lost - just look at the amount of effort that went into being able to emulate the various SNES chips.
I actually did this a few years ago, for my work on what's now the Dexed engine. But it's likely I've lost the work; it was probably on my work computer, and I've left the job. Happy to chat if you want to see what rusty recollections I have.
Oh cool, that would be worth gold in terms of time saved.
I'm in the midst of a pretty intense commercial job this week (but not intense enough to stop me from commenting on HN ;) ), my email is jacques@mattheij.com (I do not see an email in your profile).
I'd love to verify details like the memory map, and anything you still remember about how the button dispatch worked would let me identify a lot of code; details on the guts of those two main synth chips would help me tremendously.
I don't want to shit on him though. It's a great exercise. It's only 256 bytes, and you can compare to the original's comments when you're done to see if you were on the right track.
Oh, absolutely. It is in no way bad, I'm just wondering how one gets this far down the retro rabbit hole (An Apple I is a pretty obscure piece of hardware) and misses a thing it was well-known for. Obviously, it happens.
Why do you think Wozniak would care? The man is a multi-millionaire on account of his Apple shares, and has better things to do these days than bum single bytes out of 6502 code ;)
My post is made for the benefit of current and future 6502 programmers.
I guess you must not be a 6502 programmer, because if you were, your response would be, like, "dude, fuck, that's awesome, holy shit, let me take some notes". Because that was what my response was the first time I saw this code. And that's why I'm passing it on.
6502 zero-page addressing effectively made it a RISC by today’s standards. Done right, it made for fast and compact code.
Sometimes I’d throw in an extra instruction as an alternate entry point into a subroutine. We’d be scrounging for ROM space and would ad-hoc a subroutine onto the end of a different one.
Time delays were another good reason. An extra cycle mattered when you needed to sync with something.
Made for fun times when debugging but you did what you did.
> I guess you must not be a 6502 programmer, because if you were, your response would be, like, "dude, fuck, that's awesome, holy shit, let me take some notes".
It is weird, isn't it? You might almost think that I just typed out the first thing that sprang to mind, and then clicked 'reply', without even wondering whether it made any sense... well, I couldn't possibly comment.
But I do stand by what I said, no matter how much its actual value might be in line with what you paid to see it. Wozniak is massively rich because of his Apple shares, and I claim that does indeed limit his interest in bumming single bytes out of some 6502 code he wrote 40 years ago. And I do also claim it limits how concerned he might be that somebody had spotted some slack in his code.
While I'm on this theme, in fact let me guess that you are not a 6502 programmer yourself either ;) Because if you were, even if you'd never seen the nybble->hex trick before, even if you were actually seriously all like "dude, fuck, that's [etc.]" - not that this isn't a reasonable response to this trick, because it is awesome, and rather surprising - you'd still at least be very well aware that in 2018 this is a hobby that even its devotees, e.g., me, would freely consider quixotic. (And that's being very generous.) And you would therefore appreciate the humour.
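For anyone who hasn't seen it, the nybble->hex trick being referred to is (as far as I know) the carry-flag trick from the Apple I monitor's hex print routine. Here's a Python model of the 6502 sequence AND #$0F / ORA #$30 / CMP #$3A / BCC / ADC #$06; the clever part is that the CMP does double duty, both testing digit-vs-letter and leaving the carry set so that ADC #$06 effectively adds 7:

```python
# Model of the 6502 nybble-to-ASCII-hex trick:
#   AND #$0F   ; isolate the low nybble
#   ORA #$30   ; 0-9 become '0'..'9'; 10-15 become ':' ';' '<' '=' '>' '?'
#   CMP #$3A   ; carry set iff result is past '9'
#   BCC done   ; digits are already correct ASCII
#   ADC #$06   ; carry is still set, so this adds 7 -> 'A'..'F'

def nybble_to_hex(a: int) -> str:
    a &= 0x0F
    a |= 0x30
    carry = a >= 0x3A              # what CMP #$3A does to the carry flag
    if carry:                      # BCC not taken
        a = (a + 0x06 + 1) & 0xFF  # ADC adds its operand plus the carry
    return chr(a)

print(''.join(nybble_to_hex(n) for n in range(16)))  # -> 0123456789ABCDEF
```

Reusing the carry left behind by the comparison saves the separate CLC/adjust you'd otherwise need, which is exactly the kind of byte-bumming being joked about above.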
256 bytes is a reasonable amount for a "first stage bootstrap"; in other words, it's useful enough to enter a bigger program such as an assembler, through which you could eventually write a compiler... https://news.ycombinator.com/item?id=17851311
I wonder how big implementing the equivalent functionality with an x86 PC would be --- my guess is somewhat shorter, because the x86 ISA is denser than 6502.
The constrained space means Woz used a lot of tricks, like having a routine "fall through" to another instead of calling it explicitly.
Basically tail-call optimisation with jump elimination. Nearly all compilers can do it too if you have the optimisation settings right.
> I wonder how big implementing the equivalent functionality with an x86 PC would be --- my guess is somewhat shorter, because the x86 ISA is denser than 6502.
(grain of salt alert: I don't know a ton about modern PCs at this low a level)
Eh, maybe if a modern PC were just an Apple I with an x86 chip dropped into it - although even then, don't underestimate the 6502 for code density (or overestimate x86) when it comes to simple bit-banging and byte-bashing. And remember, your x86-64 CPU when it starts up isn't much more than a hot-rodded 8086, so we're comparing the 6502 to that - real mode x86 with 16-bit registers and all - rather than something more modern.
But my guess is that the complexity of modern I/O would absolutely crush you. For keyboard and video alone, the absolute bare minimum, you need to speak to the USB controller and across the PCI-E bus to the graphics card, and my intuition tells me this hardware wasn't designed to be easily programmable in a microscopic number of bytes.
(I also don't know how much stuff you strictly need to do to get a stable system, but looking through the pages of Advanced Options in my BIOS concerning things like memory timing and who knows what else makes me think the motherboard itself needs a fair amount of configuration on start up...)
Yes, it all depends where you start; a similar program written as a boot sector would be quite short, but by the time you've got there a lot of BIOS and ACPI code has already run.
This really takes me back. I loved programming in 65C02 assembly. It was simple, yet complex enough to teach you how computers worked.
Random aside.
One thing I miss is that I never had an assembler for my Apple //e. The small assembly programs I wrote, I had to write using the bare bones mini-assembler that you got by doing
6502 was an amazing assembly language. I wrote a VT-100 terminal emulator from scratch in a week or so way back then. Thankfully I had an Apple III with an assembler to build it in. The syntax was so simple to learn, but you could build interesting and sophisticated code by taking advantage of the quirks and tricks in the design. Of course I remember none of it any more. I am still in awe of Woz building the Apple I ROM without any tools at all.
This got me thinking about my time as a teenager teaching myself assembly on my C64 (which of course used the 6502 instruction set). I was trying to recall the assembler I used and I finally dredged it out of my memory: Develop-64 by French Silk Software, a now-forgotten program that I used a lot. My mom ordered it for me (her credit card) and she mentioned that the person on the phone wondered how in the hell some kid in northern Canada had ever heard of them. Good times.
There are references to a "develop64_cracked" on a retro FTP server that's 404ing for that file; this RAR seems to contain the file that was on the FTP server.
Also, backspacing the file off the path in the first URL gives a listing of some other releases, but this file isn't listed. Woot.
I wish I could use bold for this line - THIS is how you keep data alive, people!! :D
Also for reference, the reocities link in the above URL is dead, but oocities.com works. Unfortunately the geocities paths are not in oocities' archive.
And these links were near the top of Google's results.
Still got the book for that board. Friend of mine had a box of the things controlling a set of greenhouses and other sheds. They were the equivalent of the arduino back in the day. Reliable as well.
6502 was the first assembly language I learned and still my favorite. Nowadays we see “programmers” who can’t even read assembly, let alone write it :(
Between the 6800, 6809, Z80 and the 6502 I far preferred the 6809, but I did a lot more programming for the 6502 because I got a machine with much better hardware using that chip. After that it was 68K (ST), and then PCs felt like a step down again.
In high school, we had access to ORC/ASM which was a high level assembler. I spent a long time trying to understand how to program high-speed graphics in machine language (the Apple II had a painful mechanism for accessing graphics).
You might be thinking of ORCA/M (which is 'macro' spelled backwards), ORC/ASM would have been a bit too edgy even in those less-corporate days.
There is a lot of NES programming info out there these days and if you haven't had a chance to, it's entertaining to take a look at it - it's a machine with the same CPU but real graphics hardware (for the time). Bitblitting into the mildly non-linear Apple ][ frame buffer will look pleasantly simple. The NES can do an awful lot more, of course.
Learning 6502 assembly, and subsequently implementing worm, and then tetris, on an Apple ][ from scratch, were the greatest things I've ever done to advance my understanding of computer programming and computing architecture in general.