Backers of Open-Source Chips Launch Startup (wsj.com)
66 points by poindontcare on Aug 13, 2016 | 16 comments



I recently looked into buying a new laptop and found minifree.org, which supplies Libreboot BIOS chips.

What astounded me was that Intel (post-2008) and AMD (post-2011) ship separate management chips, which means essentially any motherboard is rooted from the start.

I still need to dive into the complexities (I knew about Intel ME but assumed we could get around it somehow), but we need these open chips more than ever.


AFAIK, AMD's PSP doesn't have access to the network, unlike Intel's ME. Intel AMT is actually a useful feature for some large organizations.


The NSA is a pretty large organisation, yes.


AMD PSP has access to all the hardware in your machine. Now, whether the software running on it has drivers to control your ethernet hardware is another question, but if you could answer that, there wouldn't be a trust issue with these kinds of binary blobs.





I wonder if open-sourcing can help keep Moore's Law going.


Other way around. The end of Moore's law means that open source chips actually have a chance to compete:

http://spectrum.ieee.org/semiconductors/design/the-death-of-...


Here's my theory on why open-source products (given enough time to develop) are often higher quality than their closed-source non-free counterparts:

Industry has a vested interest in maximizing returns. This means that it tends to pump out very practical, usable products, but often gets stuck at local maxima. It makes more sense for Intel to keep hammering away at x64 than to switch to a possibly better architecture, even if switching might pay off in the very long run. OSS is driven not only by industry, but also by academics (who have a vested interest in doing highly experimental research) and hobbyists (who have a vested interest in quality and simplicity). This additional set of interests means that OSS is being pulled in more directions and is more likely to escape local maxima. If software were entirely driven by industry, we'd probably still be using FORTRAN or something.

Now, the problem is that open source projects tend to have fewer resources and less time pressure, so development is slower. This is fine for computer science, where an academic who develops a novel language or algorithm can reasonably expect it to be cutting-edge for quite a while and still in use decades in the future.

On the other hand, processor design moves so fast that the only way to keep up is to dump tons of money and man-hours into pursuing the latest and greatest as aggressively as possible.

As the pace of processor development slows down (if it does), it will give a chance for the slower, but higher-quality, work of academics and hobbyists to catch up.

I am looking forward to all sorts of esoteric architectures, built on all sorts of esoteric theories, coming out of the woodwork. We're very likely to find some good stuff that we haven't thought of yet.


I think it really depends on which type of software and which aspect of quality you look at. The good open-source software is mostly written at companies (Linux, for example), though I guess having a few hobbyists and academics in there kind of helps. One field where open source struggles is usability, as you usually need to pay usability experts and conduct expensive experiments.

You're right that the hardware field is totally different from the software field; however, you have to see that instruction sets, which are what RISC-V is about, hardly ever change. Today's x86 processors are still compatible with the 8086 from 1978 (and assembly-compatible with the 8008 from 1972). Similarly, ARM dates from 1985.

The thing that's keeping hobbyists out is that producing ASICs is prohibitively expensive.


No, that's not going to help. Moore's Law is about transistor density, and RISC-V is still dependent on the same silicon process technology that existing architectures are.

We may see some improvement in performance due to the new architecture, and maybe even more so with the Mill CPU architecture. But the end of the road for silicon lithography is already in sight, and these new architectures won't fix that.


Moore's law is about the number of transistors, not density [1]. But I still see no reason open-sourcing would boost it. It's a capital-intensive process, which is not where open source shines and where open-source efforts are typically limited.

[1] https://en.m.wikipedia.org/wiki/Moore%27s_law
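
To put rough numbers on that formulation, here's a quick sketch (the 4004 starting point is commonly cited; the fixed two-year doubling cadence is a simplifying assumption, not fab data):

    # Rough sketch of Moore's-law growth: transistor count doubling
    # roughly every two years, starting from the Intel 4004
    # (1971, ~2,300 transistors). The fixed cadence is an assumption.
    base_year, base_count = 1971, 2300

    for year in range(1971, 2017, 9):
        count = base_count * 2 ** ((year - base_year) / 2.0)
        print(f"{year}: ~{count:,.0f} transistors")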


Those two go hand in hand. You can't increase area arbitrarily: the more area your chip has, the lower your yield, which in turn makes it uneconomical to produce.

The reason is simple: even a single manufacturing fault can make a chip unusable. If you assume, e.g., a fixed number of faults per wafer, you can easily see how increasing the chip area increases the area you have to throw away for every fault.
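
To make that concrete, here's a minimal sketch under a simple Poisson defect model (the defect density and die areas below are made-up illustrative numbers, not real fab data):

    # Die yield vs. die area under a Poisson defect model: defects
    # land independently at density D per cm^2, so a die of area A
    # is fault-free with probability exp(-D * A).
    import math

    D = 0.1  # defects per cm^2 (illustrative)

    for area_cm2 in (0.5, 1.0, 2.0, 4.0, 8.0):
        die_yield = math.exp(-D * area_cm2)
        print(f"die area {area_cm2:4.1f} cm^2 -> yield {die_yield:6.1%}")

Note that under this model doubling the die area squares the yield fraction, so big dies get uneconomical fast.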

As you wrote, the semiconductor business is very capital intensive, so no one can afford to do that (except for some niche applications, like R&D, space, etc.).


Doubtful. If the companies with multi-billion-dollar IP, talent, and fabs are having trouble making chips achieve the same performance increases every 18 months as they did in the past, sharing only the IP (and not Intel's, but that of a smaller player) still leaves several problems unsolved.

But it's still an effort to share knowledge, which is good.


I imagine this allows for crowdsourcing, and who knows how many ideas are out there. Sort of like Gauss's method for the sum of a series (1 + 2 + … + n = n(n+1)/2 instead of adding term by term).



