Minimal single-board computer based on Motorola 68000 (github.com/74hc595)
186 points by homarp on July 29, 2020 | 104 comments



Designer of the rosco-m68k (which was mentioned in another comment - thanks!) here. Always good to see another 68k SBC on the block. Looking forward to seeing how this evolves; it would be especially nice to make it so $0 isn't permanently a ROM address (to allow the interrupt vectors to be changed). Even on the 68010 I had to do some magic in the address decoder to make ROM temporarily appear at $0 at reset.


Most of the boards I've seen that solve this have a shift register that counts the first 4 bus cycles. The data-in pin is tied to 5V and the reset is tied to the reset pin on the 68k; IIRC, the clock pin is connected to the AS signal. If the 5th bit is zero, the ROM is selected. The addresses written into the first 8 bytes of the ROM are the initial (supervisor) stack pointer (typically the end of your RAM for embedded systems) and the ROM entry point. At the start of the 5th bus cycle, the 5th bit goes high, the ROM is no longer mapped at the lower addresses, and the CPU has jumped to the true ROM entry point.
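Roughly, in Python (a behavioral sketch, not a netlist; the names are illustrative):

```python
# Sketch of the reset-overlay shift register described above.
# /RESET clears the register, data-in is tied high, and each /AS
# assertion clocks in a 1; ROM overlays the low addresses only
# while the 5th output bit is still 0.

class ResetOverlay:
    def __init__(self):
        self.bits = 0                        # cleared by /RESET

    def as_clock(self):
        self.bits = (self.bits << 1) | 1     # data-in tied to 5V shifts in a 1

    def rom_at_zero(self):
        return (self.bits >> 4) & 1 == 0     # 5th bit low -> ROM overlays $0

ov = ResetOverlay()
for bus_cycle in range(1, 7):
    ov.as_clock()
    print(f"bus cycle {bus_cycle}: ROM overlaid at $0? {ov.rom_at_zero()}")
# Cycles 1-4 fetch the initial SSP and PC from ROM; from cycle 5 on,
# the overlay is gone and the low addresses decode normally.
```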

You could also of course have a register to flip the ROM being active in the lower addresses (default selected) that you later disable.

You can also make a nifty DTACK generator by using another shift register. The clock is attached to the system clock, the data-in pin to 5V, and the (synchronous, active when AS is deasserted) reset pin to AS. You then just AND the output of your chip select generators with an output bit of this shift register. (N)OR all the AND gate outputs together and wire that to DTACK (active low for the DIP parts).

The first rising edge of the system clock after AS is asserted is the exact clock phase in which DTACK should be asserted for zero wait states. Anything AND'd with the first bit of the shift register is a 0 WS peripheral, the second bit is 1 WS, the third bit is 2 WS, etc. The DTACK de-assertion at the end of the bus cycle also works correctly: AS deasserts in S7 (which is a falling edge of the system clock), and the next rising edge of the system clock is in S0, which is when DTACK is supposed to be deasserted (since the reset is synchronous, that clock edge will cause the shift register to reset and all the outputs will go low).
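The wait-state selection then falls out of which tap you AND with; in Python terms (a model of the idea, not a netlist):

```python
# DTACK shift register model: /AS deasserted holds the register in
# synchronous reset; once /AS asserts, each rising clock edge shifts
# in a 1. Tap N going high is the assert point for an N-wait-state
# peripheral's DTACK.

def dtack_asserted(clocks_since_as, wait_states):
    bits = (1 << clocks_since_as) - 1        # ones shifted in so far
    return (bits >> wait_states) & 1 == 1    # has tap N gone high yet?

for ws in range(3):
    fires = next(n for n in range(1, 10) if dtack_asserted(n, ws))
    print(f"{ws} wait state(s): DTACK asserts on rising edge #{fires} after /AS")
```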

My current side project is interfacing a 68010 (12.5 MHz) to an FPGA. Most of the issues are around bus voltage-level translation, since FPGAs are only 3.3V tolerant these days. I have machinations to build a paged MMU for it that can address more than 16 MiB. I'd like to use a bigger m68k (68030 / 68040) in the future, but I'm starting with what I'm more comfortable with; the "Texas Cockroach" chips have fewer signals to worry about. I'm also learning KiCad and how to use PCBWay assembly, since I have trouble with small surface-mount parts (high-speed/density bus translators; currently designing with the 74LVC16T245).

It's a bit hard to stay motivated when someone with a bit of patience and motivation (who has taken a decent computer architecture course) can build a 100+ MHz 32-bit RISC-V system in Verilog on a ~$100 FPGA devkit.


You're right, that's how most boards handle having ROM low for the first four cycles, and that's how mine does it. I use a 74LS174 hooked up just like you say.

Early in the project I did a DTACK generator similar to what you describe, but now it's handled by a GAL, which allows it to be zero-wait-state in certain address spaces while supporting external DTACK for IO devices. This also allows me to easily tri-state the signal so expansion boards can generate their own DTACK.

Your side-project sounds interesting, I'd love to take a look :) is it online anywhere?


I've got to actually start writing things down about my personal projects :)

At this point I've still just been learning about the bus timing. I've got an m68k hooked up to a 16 MHz Arduino Mega which is running the m68k at 4 MHz so I can get 4 samples per clock of the m68k (https://imgur.com/CxSlIHL). I can actually drive the CPU with the Arduino, since I can manage DTACK just quickly enough with some inline AVR assembly.

Hit a small snag though. Turns out the 68010's I ordered are fakes.

Ended up with the same parts you did (saw your post at https://hackaday.io/project/164305-roscom68k/log/175626-fake...) (what I received: https://imgur.com/WCwRpHJ).

I was fairly naive about fakes when I started ordering parts, and I've basically ended up with all fakes. Got some Harris 80C286-25s which are fakes (the internet seems to suggest they are rebadged 20 MHz parts, so at least they are the same base part (static-core 286)), but I don't have the capacity to test them quite yet. I've also ended up with a 50 MHz 68030 that seems to actually be a 33 MHz 68030. Again, I can't test it yet, so I'm really hoping it's not an EC part rebadged. I wanted the MMU.


Awesome :) I've played around a bit interfacing Arduino to mine too, and found I had to use AVR assembly to get the timing right.

Yep, those are exactly the same as some of the fakes I have here. Adeleparts definitely rings a bell as the source of some of mine too.

Sadly it seems to be quite common to remark the "lesser" 030/040/060s to suggest they are the fully-capable ones :( Fingers crossed yours isn't like that!


If you don't mind me asking, do you know what the fan-out capability of the "big DIP" m68k's is? I've not noticed a hobby board built around one that has bus transceivers or line drivers, but all of the boards built around things like the 8088 and 8086 have them. I know those required latches to de-multiplex the bus, but the datasheets for the Intel parts recommend line drivers and bus transceivers except for simple "minimum-mode" systems.


The Atari ST had its own way of dealing with this that didn't use the standard 68k trick.

As explained by EmuTOS contributor Vincent Rivière: "Note that such "standard trick" is not used on Atari ST hardware. The first 8 bytes of the address space are always remapped to the start of the ROM. So the first 8 bytes of RAM are always inaccessible, as they are shadowed by the ROM. Actual usable RAM starts at address 8, even on cold boot."

Basically, as I understand it, RAM "starts" at 0x000000 but is only writable at 0x000008 and beyond. So the first 8 bytes are actually ROM and are used to jump to the rest of the ROM. The rest of the vectors, from 0x000008 on, are written by the OS as it boots.

A little fiddly in the address decoder, but maybe less complicated than timer tricks and the like.
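In Python terms the decode is just (a sketch; the ROM base shown is the early-ST one, used purely for illustration):

```python
# Atari ST-style decode as described above: reads of the first 8 bytes
# always come from ROM (reset SSP + initial PC); everything else in low
# memory is RAM, and the OS fills in the remaining vectors from 0x8 up.

ROM_BASE = 0xFC0000                        # TOS ROM base on early STs

def decode_read(addr):
    if addr < 0x8:
        return ("ROM", ROM_BASE + addr)    # shadowed reset vectors
    return ("RAM", addr)

for a in (0x0, 0x4, 0x8, 0x64):
    print(hex(a), "->", decode_read(a))
```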

Your 68k + FPGA project sounds interesting. There's another one that's been discussed on the EmuTOS list lately that uses an older MAX10 FPGA, which is 5V tolerant. But 68000, not 68010. I think mainly because the 68000 was produced in packages that could do higher clock rates, while the 68010 never made it above the low teens.

But have you thought of just using Coldfire?


Author here. I really like the rosco-m68k! I never intended my project to be a "competitor"--one day I saw that Jameco was selling 68000s, ordered a couple, and designed this board to "scratch an itch" and learn about the 68k. It was a quick and dirty project, hence the cut corners w.r.t. memory map and ROM fixed at 0x000000.


Hi! I got started pretty much the same way, I used the 68k family a lot many years ago, but then a couple of years ago I randomly decided to order a 68010 that popped up on eBay and, well, here we are!

I love seeing other boards, and don't think of it as competition - I think the more of us there are keeping this stuff alive, the better!

I really like your project, I'll be keeping an eye out for updates :)


Same issue on the Atari ST; the custom chips just permanently mapped the first 8 bytes of RAM to the first 8 bytes of ROM. The remaining vectors stayed in RAM.

Other systems I've seen just put a ROM at 0x00000000 and do an indirection (just a couple instructions of overhead).


The memory map hurts.

68000 doesn't have a vector base register like 010+. Instead, the vector base is always 0x0, which here is in ROM, which is too much of a restriction. Installing a 010 instead should allow for getting around this.

Also blatantly missing is an NMI switch.

Still, it always makes me happy when I see open SBC designs based on the 68k family. Retrobrew[0] has a bunch of them, and they are less restrictive, or use 030 instead of 000/010.

[0]: https://www.retrobrewcomputers.org/


Project author here, I agree. This was a toy project that I allocated a fixed amount of time to, hence the cut corners. I'm working on another design that uses programmable logic to handle multiple interrupt sources (yes, including an NMI button) and allow either ROM or RAM to be banked into address 0.


I'm looking forward to that :)


Unfortunately, it seems like the 68010 is the only one completely out of production. You can find all the others easily on Digikey, even the '060.

It seems like the 68030 would be ideal for both power and simplicity; 32-bit, better than the '020, yet easier to build than an '040 or '060 because it still has dynamic range bus addressing[1].

[1]http://s100computers.com/My%20System%20Pages/68030%20Board/6... (ctrl-f for dynamic range bus addressing)


Out of production matters most at scale, not so much for a hobbyist.

010s are easy to get from China. Some of them might be modern-made clones, but I couldn't care less, provided they're equivalent. This many years later, it shouldn't be hard.

030 is indeed great, although there's no DIP variant anymore.


You do have to be careful with 68010's from China... I have a couple hundred remarked 68000s that I bought as 68010s before I got good at spotting fakes and knowing who to deal with.

I just keep them in a box to take them out of circulation, it's not even worth the hassle of returning them...


The ones I have do have the VBR, so they are legit 010.

Of course, buying from China is what it is.

They are however 8/10MHz. Likely pulled from boards. 14MHz DIP are supposed to exist, but unobtanium.


> Also blatantly missing is a NMI switch

Like a "programmer's key?" ;-) https://en.wikipedia.org/wiki/Programmer%27s_key


I've had to make myself one for the Amiga 500.

Just IPL lines -> diodes -> switch -> GND.

The IPL lines can be found on CPU socket, Paula socket and left expansion port.


Yeah I was going to comment that this could boot headless EmuTOS, but the choice to put the vectors in ROM means it can't.


Also worth checking out: Bill Shen (plasmo) has his Tiny68K system (in a number of variations, including '020) and one for the RC2014 ecosystem. The original:

https://www.retrobrewcomputers.org/doku.php?id=boards:sbc:ti...

More recently: https://www.retrobrewcomputers.org/doku.php?id=builderpages:...


> Due to the minimal address decoding circuitry, accessing certain memory regions will cause multiple devices to be selected. This should be avoided.

Two bus drivers enter, one bus driver leaves!


Clarification on the "Forbidden (multiple devices selected)" bit: am I reading correctly that the memory addressing is, essentially, a little buggy as a side effect of optimizing for simplicity, and that this results in mapping multiple things to certain addresses?

Also, I somehow didn't realize that you could buy what appears to be a new 68000, and for $8.95 (https://www.jameco.com/shop/ProductDisplay?catalogId=10001&l...). In my defense, last time I searched it wasn't obvious, mostly because nobody labels it as a "68000"; it's a 68HC000P-12, which only in the details is listed as a "6800" (sic) family - I assume that's just a typo. And to be fair, I'm sure most people looking for a 68000-series part know how to look for it; it's like expecting people to know that an 80486 is an x86 usually called a 486. Just a bit of friction for a newbie.


This was quite common. You only decode as much as you need, and if that results in phantom appearances of devices or ROM, that is perfectly OK as long as it doesn't interfere with the operation of the device. Memory map aesthetics are important, but sometimes circuit simplicity is more important.


I think it's still 'buggy' if the result is a bus fight (that might actually damage chips)


It shouldn't, normally, unless you designed the map wrong - that would be a real fault. In some systems it was possible to use dynamic map changes to position ROM over RAM and then do weird things like getting the CPU to write to ROM, but I've never heard of that actually damaging anything, though speculation was rampant that in a tight loop it should be possible. We sure tried ;)

But when decoding banks of addresses you'd typically decode just one chip at a time, though possibly in multiple, otherwise vacant, locations.


They are probably using A[19] as chip select for one device, A[18] for a different one, etc.

So if you read from 0x000c0000, you get a conflict.

I wouldn't call it a bug if it's done deliberately to save some gates. :-)
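Something like this, in Python (the A-line-per-device assignment is a guess about this board, not read off its schematic):

```python
# Partial decoding sketch: each device's chip select is a single raw
# address line, so addresses with more than one of those lines high
# select several devices at once (a bus conflict).

SELECTS = {"ROM": 19, "RAM": 18, "IO": 17}   # hypothetical A-line per device

def selected(addr):
    return [dev for dev, bit in SELECTS.items() if (addr >> bit) & 1]

print(selected(0x080000))   # ['ROM']         - only A19 high
print(selected(0x0C0000))   # ['ROM', 'RAM']  - A19 and A18 high: conflict
```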


Heh, clever. But you'd be even smarter to hook A19 to ~CS and A18 to ~EN to get the same effect without the conflict.


That would require two additional NOT gates, which, in turn, would require an additional chip, which given the layout of the board, would require additional board space.

But what would the benefit be, other than a somewhat cleaner address space? You don't get to actually use the space that isn't mapped in this scenario--RAM, or whatever, doesn't magically appear there. You also end up having to worry about additional gate delay.

So, from the software's perspective, you go from, "Don't use these addresses, unpredictable things will happen." to "Don't use these addresses, nothing will happen."

Either way, code that uses these addresses is buggy. So you are paying extra hardware for very minimal benefit.


> So, from the software's perspective, you go from, "Don't use these addresses, unpredictable things will happen." to "Don't use these addresses, nothing will happen."

If the address decoder allows certain address ranges to select more than one device (which appears to be the case), the problem is far more serious: don't use those addresses [even for reads!] because it will literally fry the output drivers of the conflicting chips.


Bus conflict damage is largely a myth, at least for modern ICs. Output drivers are a lot more robust than you think. More likely the extra current will cause a reset or glitch if the power supply circuitry/decoupling can't keep up with the current spike.


Really? So a CMOS driver that's driving a high logic level which is connected to a CMOS driver that's driving a low logic level won't get destroyed by the resultant short circuit?

As far as I understand (and I admit that I might be wrong here) a typical CMOS driver outputs a high logic level by connecting the output pin to Vcc (via a low-resistance FET) and it outputs a low logic level by connecting the output pin to GND (also via a low-resistance FET). And the circuit traces on the bus are also fairly low resistance. Wouldn't this short circuit result in dangerously high currents flowing?

Certainly high enough (>100mA) to violate the device's "absolute maximum ratings", the ones that you aren't supposed to exceed even momentarily?

I'd be very interested to hear more about the robustness of output drivers and the amount of abuse they can tolerate.


You're correct about how the outputs work, but the practical reality is that currents end up limited enough by the FET's on resistance that nothing gets damaged. Yes, it's outside the spec, but exceeding the AMRs doesn't mean your chip dies. I had a 5V microcontroller survive being put across 12V once. And I had a Threadripper motherboard die by shorting out its CPU Vcore FET, which would send the PSU's 12V rail into the CPU, and the CPU survived (PSU shut down before any damage was done).

If shorting a pin to the opposite rail destroyed your IC people would be destroying Arduinos left and right with trivial mistakes while experimenting, and they wouldn't be able to get away with having no I/O protection :-). You usually don't get >100mA out of a single IO line short - maybe 50mA. Having a bunch of paralleled bus contention can cause more damage (not to the drivers, but to power routing and other shared resources), but at that point you should be hitting PSU current protection limits (which are more important for overall design robustness).
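Back-of-envelope, with assumed on-resistances (illustrative values, not from a datasheet):

```python
# Contention current for two fighting CMOS drivers is limited by the
# series on-resistance of the two output FETs.

vcc = 5.0          # supply voltage, volts
r_on_high = 50.0   # assumed on-resistance of the driver pulling high, ohms
r_on_low = 50.0    # assumed on-resistance of the driver pulling low, ohms

i_short = vcc / (r_on_high + r_on_low)
print(f"contention current ~ {i_short * 1000:.0f} mA")  # ~50 mA
```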

Console modchips of olde (PS1/2/GameCube/etc) worked by overdriving bus lines with a stronger driver (often multiple lines ganged together). No consoles were hurt by this.

I did part of the design and board layout for Glasgow revC, an FPGA-based USB interface board, and I stress tested our IO level shifter chips for short circuit robustness. Leaving the outputs shorted hard to the opposite rail overnight did no apparent harm (bypassing the protection resistors), other than getting the chip nice and toasty for the duration of the test.

Edit: just remembered a personal anecdote. I only recall ever killing an output driver with a short, on a typical microcontroller, once in my life (could've been a fluke). And I've done many stupid experiments. Shorting outputs briefly for experimentation or unorthodox workarounds is solidly in the "no big deal" category in my mind, and it's practically never been a problem. E.g. "I don't know which side is TX and RX in this UART, so I'll just try both" "This device is bricked so let me short out the flash to force it to fall back to bootloader mode", etc.


> I stress tested our IO level shifter chips for short circuit robustness. Leaving the outputs shorted hard to the opposite rail overnight did no apparent harm (bypassing the protection resistors), other than getting the chip nice and toasty for the duration of the test.

That's impressive!

Thanks for the reply, sounds like I can be a bit less cautious without fear of blowing things up.


I've never seen this happen on the time scales that a CPU runs at. Such short-duration shorts would cause a local voltage drop, but nothing you could not handle with a small cap.

Have you actually ever fried a chip because of a bus conflict?


Yes - not a short-term one though; something got too hot for too long. It really depends on how the gates are built, the width of the bonding wires, etc.


No, you would not need NOT gates, you'd just have the chip appear at the different spot. They're address lines, after all.


That's fair; I hesitated to call it buggy, because a well-understood bug that's documented and easy to work around or ignore and which nets you some benefit is only barely a bug at all.


I'd classify it under "hack that is fine if you know what you're doing". Fancier 68K systems would use a PAL for address decoding, primitive programmable logic.


PALs are expensive and required programming. '138s and '154s were cheap and work out of the box.


I used to feel that way, until I replaced about ten 7400 series in my address decoder with one 16v8 that costs around a dollar and frees me from propagation delay concerns... YMMV of course :)


Yes but how long is Microchip going to continue producing the 16V8? They're the only remaining manufacturer. At least with 74xx138 decoders you have several suppliers available -- not to mention the choice of multiple logic families to select from.

If you're willing to accept address aliasing (i.e. chips appearing at more than one memory address) you can simply use multiple '138 decoders in parallel to make a 1-of-24 or 1-of-32 decoder. In fact that's precisely why the '138 has so many enable inputs: the one active-high (G1) and two active-low (~G2A and ~G2B) enable inputs let you build a 1-of-24 decoder using three '138 chips and zero additional glue.
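The enable wiring works out like this (Python truth-table sketch; the choice of which address bits steer the enables is just one possible arrangement):

```python
# 1-of-24 decoder from three 74x138s with zero glue: each '138 decodes
# A2..A0, and the two high bits A4..A3 pick which chip is enabled via
# its G1 (active high) and /G2A, /G2B (active low) enable inputs.

def enabled(g1, g2a_n, g2b_n):
    return g1 == 1 and g2a_n == 0 and g2b_n == 0

def decode24(a):
    a4, a3, low = (a >> 4) & 1, (a >> 3) & 1, a & 0b111
    chips = [
        enabled(1,  a4, a3),   # chip 0 (outputs 0-7):   a4=0, a3=0
        enabled(a3, a4, 0),    # chip 1 (outputs 8-15):  a4=0, a3=1
        enabled(a4, a3, 0),    # chip 2 (outputs 16-23): a4=1, a3=0
    ]
    for n, en in enumerate(chips):
        if en:
            return n * 8 + low
    return None                # a4=a3=1: nothing selected

print(decode24(5), decode24(13), decode24(21), decode24(29))  # 5 13 21 None
```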


Fair point, but I happen to have a few thousand ATF16V8BQLs in stock, so I'm not worried for the time being :) Originally (on breadboard) the project didn't use any programmable logic, but board space dictated that I switch to them when I did the first PCB.


That's the difference between doing it professionally and doing it on a hobby budget.


Doesn't matter too much for hobby projects though, even if they go EOL there's still plenty on eBay.


Partial decoding, causing aliases, is indeed common and mostly not a problem. What this system appears to have is overlapping decoding. I can't say I've seen any system until now that was deliberately designed that way.


The 68010 is sort of a "fixed" 68000. Restoring state after a page fault doesn't work right in the 68000.

If it had, 68000 machines with MMUs would have worked right, and the history of computing might have been more Motorola and less Intel. (The Lisa and the Apollo Domain did have 68000s with MMUs and horrible kludges to make them work.)


I'm a huge 68000 fan (I am happy it helped make the programmer I am today), but support for virtual memory was hardly the reason IBM selected the 8088 over the 68000 for the PC.

Looking back at the history of 68000 based computers in the 1980s, there were so many missed opportunities. Both Commodore and Apple could have shipped systems with higher clocked 68000s in 1985, but didn't. Commodore didn't ship a real 32 bit system (the Amiga 3000) with the 68030 until 1990, 3 years after the 68030 was released in 1987. Apple at least shipped the IIx in 1988, but it was priced so high as to not be a real option in the home market.

In contrast, anything Intel shipped was brought to market far more quickly thanks to the booming clone market. There was even quite a bit of innovation in external caches for 386 and 486 based systems - I remember reading reviews in Byte magazine in 1987 highlighting the performance differences of the same CPU in a wide variety of systems.

The 68000 series had so many architectural advantages over 8086/8088, but that one choice by IBM for the PC effectively doomed m68k to the dustbin of history. That still makes me sad to this day.


808x also had architectural consistency/continuity with the popular CP/M systems that already existed. 68000 had continuity with nothing.

Motorola had a habit of throwing away architectures when the new thing came out. 6809 was only assembly level compatible with the 6800. 68000 was completely different than the 6809 & 6800. They pushed the 88000 as the new hotness to Apple, NeXT, etc. but it was a total failure and the new hotness became PowerPC. At no point was there backwards compatibility. Apple had to engineer their own solution for 68k -> PowerPC migration, and it was crashy.

Intel just kept plodding along with the x86 instruction set forever, keeping customers in a constant -- if ugly -- upgrade chain, until they made the Motorola mistake with Itanium. They smartly backpedaled from that, though.

68k still had half a chance in the late 80s / early 90s -- not to own the whole market, but to have a crack at a segment of it -- but Motorola flubbed it. Coldfire is really good, but it was too late, inconsistent with their PowerPC strategy, and targeted at the embedded market.


The 68000 was so powerful that I'm sure porting assembler to it from 8080/Z80 wouldn't have been a big deal.

What was the timing between Coldfire and Arm? It's interesting to see how history turned on that axis too.


Two explanations I read in the late 80's.

When the decision was made you had the 8088 and the 68000. The 8088 had an 8-bit bus and needed 8 DRAMs; the 68000 had a 16-bit bus and needed 16. And memory was about half the cost of a PC, so the 8088's minimum BOM was cheaper.

Second, I think the 8088's instruction set was designed to make porting 8080/Z80 assembler easy. I can't say that was important in practice once things were bootstrapped, but it might have got them to market faster.

I also think partly the dominant OS in the late 70's was CP/M, which ran on Intel/Zilog processors - and Intel, maker of the 8088, was of course the manufacturer of the 8080. And my experience with Motorola in the 80's is they weren't exactly eager to sell into smaller markets. Of course now Intel is far worse and has been for the last 35 years.


68k was too new, too large a die (lots of articles at the time said it might be unmanufacturable due to low yield), too costly.


The 68008 (1982) had an 8-bit bus, and some 68000 versions made much later supported both 8- and 16-bit buses.

But of course, this was way too late for IBM PC (released 1981).


Not IBM. The Macintosh. The Macintosh came out with a good GUI bolted onto a DOS-type OS. No processes, no threads, no swapping, no paging, no hard drive. The Lisa had all of that, but cost too much. However, it was actually useful. The Mac was a money-losing toy until it got better hardware, especially a hard drive and more memory, in the form of the Macintosh SE. But then Apple was tied to an OS cut down to fit the original 128K floppy-only machine. Which they stayed with far too long.


It was IBM that built the IBM PC that led to the market for Intel x86 CPUs in clones. The Mac, Amiga, Atari ST, Sun... all of these combined were not enough to support Motorola with the resources needed to keep developing the 68000 series.


I remember reading reviews of some overengineered Compaq 386-SX with cache onboard :)


Hard agree. The 68010's changes are few and individually minimal, but they add up and constitute what the 68000 should have been but wasn't.

Off the top of my head, besides the proper stack frame for recovering from a bus error that you describe: there's the vector base register, so the vectored interrupts and exceptions aren't forcibly located at 0x0 anymore; an instruction that should have been privileged from the start (MOVE from SR) is now privileged, with a non-privileged alternative (MOVE from CCR) provided for the specific cases where that's needed; and a hack to speed up short loops (one instruction + conditional branch) is implemented, saving instruction fetches in those cases and making it slightly faster than the 68000.


Wasn't the kludge in some Apollos that they had two 68000s arranged so they did exactly the same thing, but with one delayed? If the leading one hit a page fault, they would generate an NMI for the trailing one before it too hit the page fault? (I'm sure I heard of someone doing that if it wasn't Apollo).

Even after the 68K family got working page-fault handling, there was an important difference between it and that on Intel processors. The Intel processors used "instruction restart": if the page fault happened somewhere in the middle or at the end of an instruction, the processor discarded any work it had done to that point. After the fault was resolved, the instruction would restart from the beginning.

The 68K family used "instruction continuation". When it got a page fault it would include in the exception stack frame enough internal processor state so that when it returned from the exception processing it could continue from where it left off.

Continuation is presumably more efficient than restart because you aren't discarding any work--although with continuation you have to write a bigger stack frame (and later read a bigger stack frame) because of that internal state as opposed to restart which can use the same small stack frame that ordinary interrupts and exceptions use so I'm not sure that you actually come out ahead with continuation. We aren't talking a VAX here with instructions like "evaluate polynomial" that might do a lot of work before faulting.

We had a hard-to-track-down problem with instruction continuation. I was working at a small 68K workstation company (Callan Data Systems) which ran a swapping version of Unix. I was hacking out the process and memory handling to replace it with demand-paged virtual memory code, and it was going quite well--except that occasionally when I would ^C a process the damn thing would hang hard.

What was happening was that sometimes a process would page fault, the kernel would handle it, and while that was going on the kernel would let some other user process run. By the time it finished getting the faulted page, and it was time to resume the first process (the one that had page faulted) I had hit ^C on that process and so there was a SIGINT to deliver.

The way it delivered signals to user process signal handlers was by diddling the user stack frame so that it looked like the user process itself had called the signal handler just before whatever interrupt or exception had caused the process to enter kernel mode. Then when the kernel returns to user space, it returns to the signal handler.

That turned out to not be a good thing if the exception stack frame was a continuation stack frame. The processor was very much not happy to try to continue an instruction that was not the instruction whose internal state was in the stack frame.

The fix: when the kernel has a signal to deliver, first check if the stack frame is a page fault frame. If it is, instead of delivering the signal right away set the trace flag and return to user mode. That returns to the interrupted instruction, continues it, and when it finishes generates a trace interrupt, which has a normal stack frame. The trace interrupt handler can then clear the trace flag, and just go ahead and do the normal return to user mode processing which will have no trouble delivering the signal now that we are only dealing with a normal stack frame.
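In rough pseudocode (a Python-flavored sketch of the logic as described; the names are invented, not the actual kernel's):

```python
# Defer signal delivery when returning to a page-fault (continuation)
# frame: set the trace flag, let the continued instruction finish, and
# deliver the signal from the trace trap's normal frame instead.

TRACE_FLAG = 0x8000                  # trace bit in the 68k status register

class Frame:
    def __init__(self, is_page_fault_frame):
        self.is_page_fault_frame = is_page_fault_frame
        self.sr = 0

def deliver(frame, sig):
    print(f"delivering signal {sig} on a normal frame")

def return_to_user(frame, pending_signal):
    if pending_signal and frame.is_page_fault_frame:
        frame.sr |= TRACE_FLAG       # finish the continued instruction first
        print("continuation frame: deferring delivery via trace flag")
    elif pending_signal:
        deliver(frame, pending_signal)

def trace_trap_handler(frame, pending_signal):
    frame.sr &= ~TRACE_FLAG              # instruction has now completed
    frame.is_page_fault_frame = False    # trace trap pushed a normal frame
    return_to_user(frame, pending_signal)

f = Frame(is_page_fault_frame=True)
return_to_user(f, pending_signal=2)      # ^C arrived while fault was serviced
trace_trap_handler(f, pending_signal=2)  # now safe to deliver
```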


> Wasn't the kludge in some Apollos that they had two 68000s arranged so they did exactly the same thing, but with one delayed? If the leading one hit a page fault, they would generate an NMI for the trailing one before it too hit the page fault?

Wow, that's the kludgiest kludge and the stupidest clever thing I've ever heard. Wikipedia agrees with you so it seems you remember correctly.

"This system used two 68000 processors and implemented virtual memory (which the 68000 wasn't theoretically capable of) by stopping one processor when there was a page fault and having the other processor handle the fault, then release the primary processor when the page fault was handled."


It's the kind of thing that would only make sense in the workstation market where customers were willing to pay higher cost.


The Lisa had an even worse kludge. The instruction continuation was broken for instructions which incremented registers in addition to doing a memory reference. The solution was to have the compiler not generate instructions with index register incrementation. Slower, of course.

When Motorola finally did come out with an MMU for the 68010, the 68451, it was terrible.[1] It was a segmented MMU. Sort of like the 80286, but worse. And on a 32-bit machine, which didn't need that approach just to address memory. The 68030, with an on-chip MMU, finally got it right. But that didn't come out until 1987. The Macintosh SE/30, which used it, came out in 1989. By then, the IBM PC was too far ahead.

[1] https://en.wikipedia.org/wiki/Motorola_68451


Rochester Electronics has a license to manufacture new Freescale/Motorola 68___ IP - https://www.rocelec.com/search?q=mc68 shows active production of the 68020


Just needs a frame buffer and you almost have a Sun-1.


I was just about to say that. More to the point, though, you'd need an FPGA for the custom MMU, which might be a sticking point these days, although there is a Sun-2 emulator:

https://news.ycombinator.com/item?id=22350986

https://github.com/lisper/emulator-sun-2


Or an Atari ST. Just sayin'. :-)


Another new 68k board that came out recently is Rosco:

https://rosco-m68k.com/

It's for sale on Tindie, and the person who made it seems to be focusing a lot on making a toolchain available etc. If you want to actually program the 68k it looks like a good bet.


Great. At 12MHz and 1M RAM that's still above an Atari ST or Amiga 500.

There used to be plenty of such computers based on 8-bit to 16-bit discrete CPUs (Z80, 680x, 8255, etc.), with schematics in electronics magazines.

This was great because everything was simple enough that you could fully understand and use 100% of the hardware yourself and code 100% of the software yourself. And all components were standard discrete ones (like in this project), not SMCs, so they were also very easy to handle.

You did not even need a PCB. I once built such a simple computer based on a Z80 using good old wire wrap [1] on a prototyping board, which was quite a common thing to do, but quite a torture to be honest, and I would not want to try it with a 68000...

[1] https://en.wikipedia.org/wiki/Wire_wrap


Having worked on a fairly large 68K board that was wire wrapped I would not recommend it.


I built one with graphics, floppy and SCSI. Wasn't too bad; mind you, I was 19 so I had infinite spare time, and I worked the summer for a defense contractor so I had unlimited wire-wrap wire and gold-plated sockets.


I worked on a prototype board that was getting a little older (I did not wrap the original); apparently the tension of the wrapping tool was off, or something else did not go according to plan during the original build, because quite a few of the connections were flaky and had to be redone. This can be pretty tricky when there are sometimes 6 wires running to one pin and the whole thing is a rat's nest of identically colored wires.

I did get it all sorted out and working again but that wasn't my idea of having fun. Fortunately I got into software from hardware or I would have likely given up. You know you have a nasty problem when tapping the board can make it crash. Shades of Gollum and Coke.


Wire wrapped connections can actually be more reliable than soldered but a 16/32-bit 68K design is going to be at least twice the work (and twice the wires) of an 8-bit Z-80 design...


They indeed can be. When wrapped properly... I liked working on that board once it was brought up but the first two weeks seemed to last forever with minimal progress. Wire wrapped boards critically rely on the tension of the wire during the wrapping to make good metallic contact in such a way that oxidization can't make the connections degrade.

The big advantage over soldering in my view is that you can 'unwrap' without damage to the pins in case of errors. If you do that with a closely populated board that has wires that have been soldered you will likely cause some damage.


>Great. At 12MHz and 1M RAM that's still above an Atari ST or Amiga 500.

If CPU clock and base ram is all you care about, sure.

The Amiga and the ST(E) chipsets (the plain ST was pretty sad, actually) did far more than some CPU paired with some RAM.


So what are the options for proper digital video (DisplayPort/HDMI) generation on retro systems like this?

Other than the typical (and ugly) "duct-tape a Raspberry Pi to it" and/or "software composite video to a converter dongle" approaches.


The OSSC[0] is my go-to. It's OSHW; it takes component input (RGB, VGA, YCrCb) and audio, and outputs HDMI. Its line-by-line processing means virtually no latency is added.

[0]: https://videogameperfection.com/products/open-source-scan-co...


FPGA makes all things possible. But then you ask yourself why you did this when you could have done the whole CPU there, too...

As others have mentioned, there are plenty of chips that will take R, G, B + sync and spit out HDMI/DVI. But then you still have the issue of how to generate video in the first place. The Atari ST just had a simple 'shifter' type setup which stole RAM on the off cycle and spat (planar) video RAM out to DACs, and if one is avoiding an FPGA one can do that with just some 74-series counters and bus multiplexing... but it'd be a lot of chips... and an FPGA makes it sooo much simpler...


You can duct-tape a vintage ISA VGA card on instead (http://tinyvga.com/avr-isa-vga) plus a $5 VGA-to-HDMI cable/dongle.


We were taught on the newer Coldfire processors which use a modified m68k instruction set.

I never understood the reason for the split address and data registers; it also seemed like the address registers had a few bits intended for segmentation?

Does anyone know the rationale behind the split address vs. data register design? Most other instruction sets use the same registers for both data and addressing; what advantage do specialized address registers give?


Out of my ass I think three things.

One issue is gate loading. Connecting a large number of registers to a bus limits the max throughput. I remember some RISC processor where they flubbed that.

The second is that the 68000 had two ALUs, one for data and the other for addresses, so it can do address and data calculations in parallel.

Third, separation means you need fewer bits to encode the register addresses in the instruction word.


I think Gibbon1 has hit the nail on the head in reply to this - it makes the instruction encoding smaller. It also allows the registers to be treated differently on-chip. Combined, this makes stuff like the register-indirect-with-offset addressing modes on 68k both small and fast.
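For a concrete example, here's the 68000 MOVE word assembled field by field (standard encoding; the instruction picked is arbitrary): each operand costs just 3 register bits plus 3 mode bits, and the mode decides whether those 3 bits name D0-D7 or A0-A7.

```python
# 68000 MOVE.W opcode layout: 0011 | dst reg | dst mode | src mode | src reg

def move_w(dst_reg, dst_mode, src_mode, src_reg):
    return (0b0011 << 12) | (dst_reg << 9) | (dst_mode << 6) \
         | (src_mode << 3) | src_reg

# MOVE.W (A0),D1 - source mode 010 (address register indirect) names A0,
# destination mode 000 (data register direct) names D1.
print(hex(move_w(dst_reg=1, dst_mode=0b000, src_mode=0b010, src_reg=0)))
# -> 0x3210
```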


Two ALUs?


Yup. The Signetics 68070 that was used in CD-i was like a 68000, but slower because it didn't have that second ALU.


I wonder why they used a 68000? The 68020 was a significant upgrade, and other than not being a DIP package it made a great SBC.


Jameco was stocking the TI 68k earlier this year. Not certain what the author's motivations were, but I picked one up after playing with the 6502 (their choice of vasm and the EEPROM programmer are the same ones Ben Eater took off the shelf - coincidence?). The DIP package is convenient, at least until you see it comically take up 3/4 of a typical breadboard.

Edit: yes, an opportunistic find at Jameco https://news.ycombinator.com/item?id=23993221


He did mention through-hole being desirable. A DIP 16 MHz 68010 could be an easy upgrade, and you would get virtual memory addressing.


68010's still need an MMU for virtual memory. So do 68020's, now that I think of it.


It might have been the 68030 that I was thinking of.


I see some evidence that an MMU is optional: https://news.ycombinator.com/item?id=7684824


MMU is optional, unless you want virtual memory. The 68010 fixes a bug in the 68000 whereby it didn’t stack enough information to recover from an address or bus error, but you still need an MMU if you want to actually do virtual memory in any performant fashion.


My understanding was the 68010 had "support" for virtual memory when used with an MMU due to improved exception recovery handling. For example, when accessing an invalid / unmapped memory address. See https://en.wikipedia.org/wiki/Motorola_68010 68000's didn't have that capability. An MMU of some sort would still be needed to implement the virtual-to-physical mapping of memory addresses.


>A DIP 16Mhz 68010 could be an easy upgrade.

Do you know how to source these? I can't even find the 14MHz ones, which I could use.


I'm just wondering if it wouldn't be easier to do without the ROM and map the 16MB to RAM, and then carve out the last 64KB at the 16th megabyte for IO space for devices, using an AVR as a bootloader to download code into the RAM? That would make it easier to manage the memory and interrupt vectors, yes?
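i.e. a decode as simple as this (a sketch of the map being proposed, nothing more):

```python
# All-RAM map with the top 64 KiB of the 68000's 16 MiB space as I/O.

IO_BASE = 0xFF0000                   # last 64 KiB of the 24-bit space

def decode(addr):
    addr &= 0xFFFFFF                 # 68000 drives 24 address bits
    return "IO" if addr >= IO_BASE else "RAM"

print(decode(0x000000), decode(0xFEFFFF), decode(0xFF0000))  # RAM RAM IO
```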


If you want the same instruction set and an even smaller board you could use the 68008 (https://en.wikipedia.org/wiki/Motorola_68008). They're going to be hard to find, though.


It's been too long since I last played around like this. This put a bee in my bonnet and has me wanting to try something similar. It'll be a disaster no doubt, but I'll learn from my mistakes. Thanks for posting.


68K single board computers are fun. Here's the one I did in college in 1982 [1][2]. 6 MHz 68K (I think). 4K 16-bit words of EEPROM. 1K 16-bit words of static RAM. Two RS-232 ports.

The way the class worked was that they would supply the processor, and I think they may have also supplied the RAM, but everything else you were on your own for. Anything you see in those photos that makes you wonder "Why the heck did he use that!?" probably has the answer "It was cheap".

That's why it is on an S-100 bus prototyping card--I found that at a surplus store. That's why the reset button says "CLR"--the Caltech EE stockroom had for some inexplicable reason a box full of cheap buttons from some old calculator. That's why the power connector is weird--found that and its mate at the same surplus store I found a power supply.

I put a nice feature in the RS-232 connections. Note that the cable between the RS-232 connectors and the board plugs into an ordinary DIP socket, which is not a keyed connector. The way it is wired up is that it works both ways, but one of the ways is like using a null modem.

There was one very amusing incident when I was writing software for it. The 68K cross assembler ran on Caltech's IBM 370. There was some HP workstation in the lab that you could enter your code on and it could submit it to the 370 for assembly and retrieve the output.

The HP workstation was a few years old, and no one really knew much about it. It ran some weird OS and nobody had bothered to learn much about it--they just all knew enough to edit, submit stuff to the 370, and do simple file manipulation.

The thing was full of several years accumulated projects from students, research code from professors, and who knows what else, so space was tight and no one really knew what was safe to delete.

One day I'm using it to submit my code to the assembler, and I notice that in a lot of the file commands there was some letter or digit (I forget which) that you had to include but that didn't seem to have an obvious purpose.

So I did the obvious thing--I tried one of the commands but with that letter or digit incremented.

It turned out that was the drive specifier, and I was now using the second drive in the workstation--a drive that nobody else knew it had and was completely empty. They had been struggling with lack of free space on this thing for years, and all that time there was a second drive in it just sitting there empty!

[1] https://i.imgur.com/Ts9wcfW.png

[2] https://i.imgur.com/3D4rvdC.jpg


It's funny how much this looks like the internals to my original Palm Pilot which I disassembled not too long ago :)


No PDF diagram is poor form :(


As someone who has done a fair bit of tinkering with stuff like Arduinos, NodeMCUs, ESP32s, etc. along with Pis and similar, what sort of itch does this scratch for people that those wouldn't? From looking at the Rosco version, it doesn't seem like it's a cost savings or anything, and the hardware is certainly much weaker than modern options.


To be clear, I'm genuinely curious - this isn't intended as an insult to any of the folks who have worked on these projects.


Most of the "modern" embedded stuff comes as a complete system-on-chip these days, i.e. all the peripherals and memory are integrated into the CPU on a take-it-or-leave-it basis. It's also rare for chips to offer an external parallel bus interface (the classic A0-A15 and D0-D7 pins), which means that any extra peripheral devices need to hang off an I2C/SPI port where the CPU core can't access them directly.

And the available chips (AVR, ESP32, LPC2xxx, etc.) are all proprietary designs with a single manufacturer. Even the ARM chips, which generally use some variant of a Cortex-M core, have wildly different peripherals. So migrating between families is difficult/impossible.

In contrast, "classic" chips like 8051, 6502, x86 and 680xx all have external parallel bus interfaces and are (or were) produced by multiple manufacturers (often as part of "second source" agreements).

So when building a system using these chips, the designer has a large degree of flexibility and freedom to design the system architecture. Whereas building something using modern embedded chips is mostly an exercise in parametric search trying to find an existing chip which offers exactly the right set of peripherals for the intended design. It reduces the system designer to a mere consumer of off-the-shelf SoCs instead of being a true builder/architect.


That's helpful; thanks. I guess coming from much more of a software background, having the ability to write C code (which feels like an acceptable veneer over the hardware to me and works well across chips) and having programmatic access to the pins feels pretty empowering and all-encompassing. However, it does make sense to me that someone coming from a hardware-first view of the world would feel those barriers to direct hardware access much more acutely and recognize the limitations that I don't.

I appreciate the thoughtful and detailed response.


This. Well said.


Compared to Arduino & other microcontrollers, a 68k has more capabilities - you have the real system bus available to you, which I don't think you do on an Arduino (or a RasPi, for that matter).

Compared to a raspberry pi, a 68k SBC is more "knowable". There aren't any proprietary blobs there... You can still wrap your head around the whole computer- it's simpler.


Nostalgia might be a factor.

But I think the bigger appeal is that a system like this can be understood in a much more complete way than what you can achieve with a Pi or even an ESP32.



