Building a SuperH-compatible CPU from scratch [video] (youtube.com)
102 points by jsnell on June 12, 2016 | 24 comments



Cliff's notes:

- http://j-core.org/ ("Webpage was written by developer who knows <p> tags ... If anyone would like to donate a stylesheet, that would be welcome ..."), git repo coming soon

- Corporation is https://twitter.com/seinstruments

- goals were: license no IP, in order to have a truly open source SoC that you can trust; a minimal implementation that can run Linux; an open source toolchain

- minimal implementation: 32-bit addressing, no MMU, a DRAM controller, a UART

- named "jcore" to avoid infringing on Hitachi's brand

- target market includes IoT, (AC) power-quality monitors

- in order to avoid the heavily patent-encumbered semiconductor space, the strategy is to reimplement an architecture whose patents have just expired (Hitachi SH2, SH4)

- VHDL is BSD-licensed

- The cheapest dev board (3rd party, minimal FPGA dev board with little/no I/O) is only $50 + shipping from India

- discussion of the break-even point for IC fabrication

- How is this distinct from OpenRISC? Automated transformation from high-level design to low-level design outputs

- Chuckles from audience when simulator is revealed to have been written in Ada

- jcore design doesn't fit in biggest FPGA (Xilinx ICE40)

- "We've got linux booting to a shell prompt on real hardware so you can start with a known working environment and then break it."

- SH-FDPIC is working w/musl


A bit of a clarification: the jcore design doesn't fit in the biggest FPGA with purely open source tooling, which is made by Lattice, not Xilinx. 7k logic cells is tiny for an FPGA these days -- the Spartan 6, which they also mentioned, is an older design that has up to 19 times the logic cells and can fit a few cores and DSP modules, but it needs a closed source toolchain.


They built an open source CPU to run an open source OS, namely Linux, to make IoT applications possible. Okay.

But then they mention at the beginning that with Linux you need a minimum of 8 MiB of RAM, because "less is awkward".

To be honest, 8 MiB is a lot for embedded, for IoT. Normally we deal with way less than 1 MiB. Especially in IoT I need low power and few components. A state-of-the-art Cortex-M0 has somewhere around 256 KiB of RAM. With that they basically prove that Linux is not suited for IoT. Is that the point they want to make? Really?


It's only a matter of time before that kind of RAM size is normal for very low-power and low-cost devices though, and if running Linux makes your development faster or easier or whatever, then it's good to have it as an option.

It really depends on the application, how much processing power you need, etc. but more options are only a good thing.

Remember that the 'T' in IoT doesn't just mean little things - these guys are talking about devices that monitor large transformers and power distribution equipment (critical electrical infrastructure). It's likely that an IoT device for that would be doing a lot more than, say, a smart meter for a home, for which Linux probably would be overkill.


> It's only a matter of time before that kind of RAM size is normal for very low-power and low-cost devices though

Really? SRAM isn't cheap and takes power; DRAM is cheaper (especially once process/packaging technology means you can place it on the same die or in the same package as the processor) but still costs power. Fundamentally, if you want very low power you're gonna go for the lowest RAM possible. Ditching it is an easy way to save power. And even if RAM is cheap, using 1/32nd of it (256 KiB vs. 8 MiB) makes things cheaper still. Important if you're going for very low cost.


Once you go outside the size of RAM chips that are used in mass-market devices, the cost goes up by a lot.


The problem I have with that is that I hear it every decade about niche x. And people are right, but as soon as costs drop enough that niche x can support the full desktop stack, there's now a niche y where you're still stuck with hundreds of bytes of memory, the market is 10x as big, and you're using the same tools you were using a decade ago.

An 8-bit microcontroller will be with us forever because it is the smallest computer that is useful for more than trivial tasks; it will just keep getting smaller and smaller. I won't be surprised if the first spaceship to land in a different solar system has trillions of 8-bit computers on it the size of atoms.


They built it for high-assurance applications where being able to review absolutely everything from the ground up is what matters foremost. IoT is just an application the CPU is also well suited for.


How old is this video? What is described would barely have been state of the art in the late 90s: a 5-stage pipeline written in VHDL. This is now on every student's resume, as a project for their computer architecture class.

Their proposed "two-process" coding methodology for VHDL is weird. They pretend their entire CPU core is written in a total of 3 VHDL processes.
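
For context, the two-process method (popularized by Jiri Gaisler, of LEON fame) keeps all state in a single record: one purely combinational process computes the next state into a variable, and one clocked process registers it. A minimal sketch of the style, using a toy counter rather than anything from the actual jcore source:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity counter is
      port (
        clk   : in  std_logic;
        rst   : in  std_logic;
        en    : in  std_logic;
        count : out unsigned(7 downto 0));
    end entity;

    architecture two_proc of counter is
      -- all of the state lives in one record
      type reg_type is record
        cnt : unsigned(7 downto 0);
      end record;
      signal r, rin : reg_type;
    begin
      -- process 1: purely combinational next-state logic
      comb : process(r, en)
        variable v : reg_type;
      begin
        v := r;                    -- default: hold current state
        if en = '1' then
          v.cnt := r.cnt + 1;      -- the algorithm, as plain sequential code
        end if;
        rin   <= v;                -- drive the next-state signal
        count <= r.cnt;            -- outputs come from registers
      end process;

      -- process 2: the only clocked process; just registers the next state
      regs : process(clk)
      begin
        if rising_edge(clk) then
          if rst = '1' then
            r.cnt <= (others => '0');
          else
            r <= rin;
          end if;
        end if;
      end process;
    end architecture;

Scaled up, an entire core really can collapse into a handful of processes like this, which is presumably what's behind their claim; whether that's "weird" or just disciplined is a matter of taste.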

I guess their only argument is that they target a 180nm process (from 1991), which costs only $25K.


They intentionally chose an old design because they know all the patents have expired. Thus, they don't have to worry about licensing woes or anything like that. There are a number of situations where you don't need a fast processor, just a processor that is fast enough. Additionally, an older core means it's a smaller core, so they can stick more of them on an FPGA if they need to respond to several realtime events at once.


And not just for the ISA; there are tons of patents on everything from microarchitecture optimizations to ways of synthesizing things. "Up-to-date" textbooks that today's academics are using probably include plenty of infringing material, because a good part of why they learn that is to eventually work for patent-holding firms to improve their I.P.

On the flip side, trade secrets obscure many great tricks in this industry, especially in analog, where firms avoid exposing their constructions to try to reduce the number of patent suits they get hit with, plus for competitive advantage with consideration for cloners that ignore patent rules. So one side says we can't use many good techniques, where the other says we can't know them.

The "joys" of OSS hardware...


I consider it completely sufficient if there is any true OSS hardware which is able to run Linux with acceptable performance. Of course, big applications like LibreOffice won't be much fun on such systems. However, we have TeX etc., which runs well even on small systems. I remember writing my own TeX documents on an 8 MHz Atari ST, and it worked well!

I agree with the previous poster: "you don't need a fast processor, just a processor that is fast enough."

If some day we had a small OSS CPU for Linux with acceptable performance which could be implemented in pure SMD, like the MOnSter 6502 - that would be nice!


I always repost the same ones:

http://www.gaisler.com/index.php/products/processors/leon3

http://www.oracle.com/technetwork/systems/opensparc/index.ht...

Gaisler's would be the easier one to implement in an ASIC. The Leon4 is a 4-core variant. In any case, the Leon3 and key I.P. are GPL'd specifically for open source to build on. The SPARC ISA only requires a $99 fee to use on a SPARC-compatible line. OSS people just keep ignoring it. Meanwhile, academics have built on it, and the Leon3FT variant is often used in space applications.

So, get either booting on an FPGA, install Linux, and have at it. :)



RISC-V is the best open one to get on right now. The ISA is well designed and open. There are simulators, high-speed CPUs, compilers, and so on for it. Surprisingly, many big companies are also backing it, on top of the academics doing it. There are at least a dozen RISC-V implementations in progress.

Many examples here:

http://www.lowrisc.org/blog/2016/01/third-risc-v-workshop-da...

Arduino-style implementation:

https://github.com/pulp-platform/pulpino

Note: OpenRISC is not RISC-V. It was a competing ISA that's fallen by the wayside as RISC-V's popularity soared. It did get used in Milkymist, IIRC. Best to ignore it, except maybe out of personal curiosity, in favor of RISC-V and SPARC.


Also very interesting: http://hwacha.org


The main problem with software these days is the web browser. You can find niche browser engines that are somewhat lightweight, but 80% of sites would not work correctly with them.


The date on the video shows April 4, 2016. "Every student's resume", and yet I wager none have released their design in order to foster an open computing community.

The video clearly indicates their motivation to make silicon that won't get a C&D as soon as they get to market.


> This is now on every student's resume, as a project for their computer architecture class.

AFAICS the point is not to create the bestest fastest microarchitecture ever; I'm sure nobody, including the guys behind this jcore project, harbors any illusions of competing with high-performance cores from the likes of Intel or IBM on a shoestring budget.

But rather, the point was to pick a decent ISA (for instance, SuperH is apparently the basis for ARM Thumb, so code density is quite good) with a (hopefully!) patent-free implementation, and with an ecosystem so you can actually use it. I'm pretty sure "every student's comp arch class project" doesn't include upstreamed support in the Linux kernel, GCC, binutils, GDB, strace, etc.


Looking forward to the day when they reach SH4 level so the Debian SH4 port can be run on it:

https://wiki.debian.org/SH4


Ah, I have fond memories of running NetBSD and Linux (JLime) on SH3 handhelds. SH3/4 represents a meaningful step forward; hope they are able to implement it soon-ish:

> The sh4 processor (dreamcast) has an mmu, but the last sh4 patents don't expire until 2016.


I found this talk to be really interesting, as SuperH chips were used in so many applications in the '90s. One significant application I'm familiar with is Roland synthesizers. It's very interesting that this group decided to implement this for their own application.

But I'm curious why it wouldn't be easier to order SH CPUs from Renesas for modern applications, although I understand the implementation wouldn't be open all the way down. Does anyone have thoughts on this?


Renesas CPUs are not open source.


"Can I walk them through the numbers we ran?"

"No."

These guys are a well oiled presentation machine.



