A tangent, but bear with me: after finishing the really good book The End of the World Is Just the Beginning, I think it makes a lot of sense to continue building cutting-edge tech that requires international supply chains, BUT also to have locally manufactured tech good enough to power locally sourced computers, run tractors, etc. International supply chains have enriched many areas of the planet, but to assume that they will last seems very risky. Always have a Plan B.
EDIT: call this Plan B Tech.
Plan B is also vital for computer freedom. In an ideal world, we would be able to make good-enough chips at home, just like we can make software at home. Chip fabrication currently costs billions of dollars, so fabs are centralized operations vulnerable to regulation. The ability to make computers at home would preempt any attempt to regulate encryption, for example.
Nice! And in this ideal world, we could also make steel tools in our decentralised garden furnaces. The steel industry costs the world billions of dollars currently; imagine if we could all make our own iron tools without paying for patents, e.g. Torx-head screwdrivers from our back yard! :^)
That is totally possible though. Plenty of people know metal working and can make tools such as knives. It's not as efficient as industrial mass production but it's possible.
I think the tongue-in-cheek comes from historical context: exactly this was attempted during the Great Leap Forward in Communist China, which was, as with anything Communist, a huge leap and huge amount of progress backwards.
I'm not educated about Chinese history, but I will read about it. I'd like to clarify that I don't mean anything radical like replacing fabs with home production. Free software never replaced software companies; they coexist.
I just wish it was possible so that we always have the means to produce 100% freedom-respecting general purpose computer hardware that's viable for daily use. That way we're not forced to accept the status quo when corporations start bundling suspicious stuff like IME into their processors.
We already have the means to produce freedom respecting software ourselves but that gets us nowhere if the chip makers start requiring cryptographic signatures before executing software. What good is free software if we can't run it?
It might not be that easy. The dilemma is this: if a company produces a good product, it will probably gain market share beyond just its region.
Today's international supply chains grew organically, driven by markets in which inferior companies and their products get replaced by better (sometimes only slightly better) ones.
However, I do agree that a Plan B is required in many fields, but I suspect you'd need a huge amount of government subsidies to keep it running.
One way to do this is to make domestic production a requirement for government contracts. Then the government contracts ensure domestic production exists, and pays off a lot of their fixed costs, which may make them competitive in the global market. Or maybe competitors are still cheaper.
But more than one country can do this, which ensures supply chain diversity. And if one of them falters, the others may be more expensive, but they exist.
> Today's international supply chains grew organically, driven by markets in which inferior companies and their products get replaced by better (sometimes only slightly better) ones.
I would say corruption played an important part, especially in Eastern Europe. MS products are expensive for poor countries. Coca-Cola without "exclusive deals" would not have the same market share (hello Wrigley, hello Lindt, hello Intel). The shitty bananas in Europe have nothing to do with organic growth of the market.
To me it seems that RISC-V is China's 'Plan B': their systems are the first to use it, and SBCs already exist in China. I've even seen IoT cores from Chinese vendors moving to RISC-V, like newer variants of the popular ESP32 from the Shanghai company Espressif.
AFAIK C906 is not open-source.
The open-source variant of C906 is called OpenC906, and we don't know the eventual difference between C906 and OpenC906.
Even if the C906 and OpenC906 were the same there would be no way to verify that the cores implemented in the hardware you receive are the same as those described anyway.
I'd agree, and I'd say that RISC-V's rise has much to do with this sentiment.
An open source patent/trade-agreement unencumbered architecture is of great interest to many countries for precisely the reason that they can build it locally yet harvest innovation from across the pond (without paying a foreign company).
It is a Plan B if you live in China. The West is not under threat of losing access to ARM.
I doubt it will come to fruition though; in ten years our world will be more globalised than it is today. Trade wars, the pandemic, and even the Ukraine war are short bumps on a big fundamental trend that is here to stay.
To what extent do we need computers? They feel indispensable, but we could go back to more labor-intensive information systems. And that wouldn't be all bad.
It's a good question. If we didn't have small microprocessors, the alternatives would tend to be more material-intensive (e.g. mechanical governors or clockwork) or less efficient.
We'd lose the Internet and cellphones, have to go back to mechanical telephone exchanges.
I think we would miss CNC a lot, it's how the majority of production work gets done now in many industries, the manual machines are in the corner for one-offs.
The computer controlled machines also tend to be making parts or doing QC or measurement to support the non computerised ones. So sure your injection moulder or die cutter might not need too many chips but wait until the molds and tooling wear out.
Although who could send your factory orders anyway...
Payments, payroll, inventory, invoicing. Small words but huge implications.
We'd lose basically all capacity to print (billboards, t-shirts, books, office memos, etc.), which seems bad. We used to have less digital ways to do all that stuff, but they went away. It's not easy to go back.
Medical imaging gone except for maybe the x-ray.
Behind the scenes, all kinds of process control would disappear, which would require massive rework. We'd have to retool most industrial processes to get computers out of the control loops. We'd lose the electric grid until people figured out how to decomputerise it. Trains, traffic lights, airlines, cars and shipping would probably all be impacted in a variety of fundamental ways.
The postal service would need to be re-architected (current reliance on parcel sorters and scanners).
We'd have to move back to analog TV and radio, and media production workflows would change dramatically.
It's an interesting exercise to try and figure out what industries would hurt the most if chips disappeared tomorrow, I'm pretty sure it would be a catastrophe but it's not easy to follow it all through.
Mechanical solutions tend to be slower and also less reliable (more moving parts = more points of failure)
Aviation would just cease to function; nearly every commercial airliner is heavily dependent on computer control, to say nothing of things like the reservation systems, which are also very complex (pricing flights is a demanding use case for algorithms).
Airplanes do predate computers, and there are plenty of airplanes still flying that remain operable without electrical power, even some commercial airliners.
Maybe, but you're talking about "a generation, or more" kind of lead time. For a good start you have roughly no pilots able to safely fly your computer-free planes and no instructors either. So once you've managed to design the controls, you need to bootstrap your education pipeline on using them.
I think you're imagining that current airliners and current flight instruction work very differently than they actually do. A very large fraction of flight instruction is carried out in planes that predate the embedded-computers era, starting in roughly 01975, and the more recent planes that carry most commercial air traffic are designed to simulate those computer-free planes as closely as possible.
Yeah, maybe there would be a huge spike in aviation risk, so instead of one flight in ten million crashing, one flight in a hundred thousand would crash, but that's still not enough for aviation to become a dominant cause of death for weekly business air commuters. People would freak out when they watched the news but only in countries that hand over the reins of society to nervous nellies would it be a real obstacle to aviation.
I was actually thinking about the thing you mentioned. While the flight properties of modern planes kept changing, we were using computers to emulate flying a much smaller plane from half a century ago. Now that emulation would be gone.
Concorde is literally a museum piece that hasn't flown in 20 years, because even in retirement it was extremely old, and it is extremely noisy and gas-guzzling.
This was widely done with tube computers in the 1950s. The first commercial computer, Univac, was introduced in 1951.
> Medical imaging gone except for maybe the x-ray.
The first commercial CT scanner in the 1970s used a Data General Nova minicomputer, which used small-scale integrated circuit logic chips, but did not include a microprocessor.
> Behind the scenes all kinds of process control would disappear ...
The early PDP-8 minicomputers, introduced in 1965, used discrete transistors and no integrated circuits. They were used in industrial control applications.
Farmers use GPS and other technologies to plant crops with inch-level accuracy while maximizing land utilization and minimizing fuel and fertilizer. Take that away and somebody's going to starve (or start a war to keep from starving). This might play out in Africa soon with the collapse of grain imports from Ukraine.
We need chips in everything.
Cars could work without them but would be much less efficient and bad for the environment.
Lots of very important medical devices rely on chips or full computers.
I would hate to see the media industry roll back to lower tech, leaving only the big and very biased outlets' information to get through. Imagine how hard it would be to keep them accountable.
Is it possible to live without computers? Yes, but you don't want to.
I feel like that's true, but the scale is misleading. We could run most of the things we need with a very small amount of chip production. A lot of usage could be removed (most entertainment), and a lot of business use could be scaled down (how many businesses basically need just the equivalent of a spreadsheet and a filing system?). There will of course be special uses requiring hi-tech chips. But we could seriously scale down our electronics usage without a negative civilisational impact. The pocket device I'm writing this on has enough power to run what an office of 10 people needs, if we adjust the software to match.
If processors were much more interchangeable than they are, we could scavenge chips from less important devices, or at least production that was destined for them, and put them in more important ones. I hope RISC-V can help provide this kind of buffer against short term shocks. For major civilizational collapse where we can't even build new fabs, then we probably won't have much need for new chips anyway, so it's moot.
> Cars could work without them but would be much less efficient and bad for the environment.
Not as much as you think. Modern cars aren't a whole lot more efficient than they were 30 years ago, and are just about as clean. Most of the electronics in modern cars is for relatively useless stuff like lane-keeping assist and that thing that makes the indicators blink in a cool pattern.
You can have cars just as clean as modern ones with 1980s-level microcontrollers in the ECUs. You don't even really need catalytic converters, either, because of the closed-loop fuelling systems that use a lambda sensor to measure how complete combustion is.
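In case it helps to see what that closed-loop control amounts to, here's a minimal sketch of the idea in C. To be clear, the function names, ADC calibration and constants are invented for illustration and aren't any real ECU's API; the point is just that the feedback loop is tiny, easily within reach of a 1980s-class microcontroller.

```c
#include <stdint.h>

/* Hypothetical hardware accessors -- placeholders, not a real ECU API. */
extern uint16_t read_lambda_adc(void);        /* narrowband O2 sensor, 0..1023     */
extern uint16_t base_pulse_from_map(void);    /* open-loop value from RPM/load map */
extern void     set_injector_pulse_us(uint16_t us);

#define STOICH_ADC 450   /* assumed ADC reading at lambda = 1 (made-up calibration) */

/* Integral-only fuel trim, roughly how early narrowband closed-loop systems work:
 * sensor reads lean -> add a little fuel; sensor reads rich -> remove a little.
 * Called periodically, e.g. once per engine cycle. */
void fuel_control_tick(void)
{
    static int16_t trim_us = 0;                 /* running correction, microseconds */
    int16_t error = STOICH_ADC - (int16_t)read_lambda_adc();

    trim_us += (error > 0) ? 1 : -1;            /* nudge toward stoichiometric */
    if (trim_us >  200) trim_us =  200;         /* clamp the trim's authority  */
    if (trim_us < -200) trim_us = -200;

    set_injector_pulse_us((uint16_t)(base_pulse_from_map() + trim_us));
}
```

Real systems layer short-term and long-term trims on top of a base map and drop to open loop at full throttle, but the core feedback step is no more than this.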
We could rid cities of pollution right now, completely, by converting all the internal combustion engine vehicles to run on propane instead of petrol or diesel. This doesn't make finance companies or car companies any money, so it won't happen.
You don't need chips for computers. Until the mid-1960s no computers used chips. The early PDP-8 models (introduced beginning in 1965) were desktop minicomputers made entirely with discrete transistors and magnetic core memories. They were used in lots of automated process-control applications.
There is some confusion in this question and many of the responses here between any electronics (computation and control using tubes dates from WWII at least), transistors (which became widely available in the mid-1950s; all-transistor computers reached the market in 1959), small-scale integrated circuits that contained, for example, several logic gates or a few flip-flops (which reached the market in the mid-to-late 1960s and were used to build computers but also many simpler electronic control devices), and finally microprocessors, which reached the market in the first half of the 1970s.
Any electronics that reached the market before the mid or late 1970s did not use microprocessors. For example, the first commercial CT scanner used a Data General Nova minicomputer, which used small-scale integrated-circuit logic gates, but did not use a microprocessor. The original Pong arcade game used around 60 small-scale integrated-circuit chips, but no microprocessor.
If we didn't have integrated circuits, but were stuck with 1959 discrete transistors, or we didn't have microprocessors, but were stuck with 1974 small-scale integrated circuits, we could still have a pretty hi-tech world. There would be more emphasis on optimized design of special-purpose devices - that original Pong game is an example.
Something happened in the 1950s - 1960s economically. [1] It is really as remarkable in its own right, as the industrial revolution had been. The rate of economic growth, slow but steady since the industrial revolution, started to accelerate.
It's probably impossible to really quantify this kind of thing. And the causes were numerous (it's also an era of relative peace, for example). Still, I've always suspected much, maybe most, of it is due to a mix of telecommunications, computer automation, and computer-based knowledge amplification.
The C906 is very slow, but so is, it appears, the Pi Zero.
The good news is that there are a lot of faster options right around the corner. The Pine folks are working on releasing the Star64 (quad-core SiFive U74 @ 1.5GHz) [1] and I know of at least two other RISC-V SBCs in the pipeline. I'm not quite ready to declare "2022 is the year of the RISC-V desktop" yet, though.
Yeah, it's already legacy hardware; it should be compared against the Zero 2 if anything, since the original Zero was comically underpowered.
But the speed is all but irrelevant; the issue with these alternative boards is always software support. There's no point in using them even if they're twice as fast if I can't apt-get anything and have to compile stuff from source, wasting ten times as much time. Is there even an arch tag for RISC-V yet, like armhf and arm64? I'd assume there is, but I can't find it, and the support is likely to be abysmal this early on.
Debian, Ubuntu, and Fedora have had RISC-V distributions for many years. Others like FreeBSD etc. also exist. I often take a break and spend a day working entirely on a RISC-V host (a BeagleV beta). Everything I care about works 100% the same, most notably Emacs and all the dev tools (Rust, C, Haskell).
The only thing that I hit in the (old) Fedora 33/RISC-V is Firefox's lacking support for WASM, but that could be working in the latest version.
If you want to try it for yourself under QEMU, I'd recommend following the instructions here: https://wiki.ubuntu.com/RISC-V
SiFive released Linux-capable RISC-V dev kits years ago, and for a long time now there have been many options besides what SiFive offers. Only a few, however, are "cheap".
Maybe not mad, but certainly not very good at searching.
CORRECTION: 2010 was the _founding_ of the RISC-V project. I don't have the date of the 1.0 release, but nobody would ship hardware based on a 1.0. Release 2.2 (user)/1.11 (priv) wasn't released until 2018! In light of this, hardware is actually coming out pretty quickly.
Hardware is expensive and takes a long time. In comparison, Arm was founded in 1990. When was the first Linux-capable Arm-based dev board available for purchase?
I'm probably as unhappy with the delays as anyone, but the momentum has not slowed and better options _will_ become available for sale.
Advanced RISC Machines Ltd was founded in 1991, but one of their parent companies (Acorn) had already shipped over 100,000 ARM-based desktop computers at that point.
The RISC-V ISA design was started in 2010. The ARM ISA design was started in 1983.
> Arm was founded in 1990. When was the first Linux-capable Arm-based dev board available for purchase?
I think the first Linux-capable ARM-based dev board was available for purchase in 01989: the Acorn/BBC A3000 had an ARM2; the outdated https://www.arm.linux.org.uk/machines/riscpc/ says, "The support for these machines is now beginning to become increasingly difficult, however there is still support for them in the Linux kernel."
2. Advanced RISC Machines, Ltd., was spun out of Acorn in 01990, but the Acorn RISC Machine project started in 01983 and the ARM1 was first fabbed successfully in 01985. So the A3000 was shipped six years after the design effort began and four years after the first working ARM hardware.
Your use of zero before the actual year, as though you’re a COBOL-addled Long Now fanatic, makes me want to think the value is in octal. C brain damage on my part, no doubt.
D1 is indeed the first mass-produced Linux-capable RISC-V SoC.
SiFive made something like 500 HiFive Unleashed boards and probably 2000-3000 HiFive Unmatched. They all used effectively prototype chips, made on an MPW / shuttle run.
When the D1 came out I heard the initial production batch was 2 million chips. As with the SoCs that go into Raspberry Pis, the SBCs are a side-line.
Indeed there are some challenging packages that we've made great efforts to port, and those efforts mainly happened at the upstream level: adding RISC-V support to ldc, to chromium's V8 (rv64 & rv32, both as joint efforts; great thanks to my colleague luyahan), to lldb-server (WIP), and to crystal (WIP), etc.
As soon as upstreams accept our PRs, all distros with a RISC-V port can benefit from them, which IMO sounds like a better porting style than keeping a huge patch set in a downstream arch-specific repo :-D
BTW we also have a CI/CD service available for upstream open-source developers to monitor their builds: https://ci.rvperf.org
It would be great if your page could include instructions for how to try this (Ubuntu does this very well - https://wiki.ubuntu.com/RISC-V) or at least a pointer to steps to follow, thanks.
I've been using a RPi 4 with 8GB RAM and Ubuntu Mate as the OS for a full year now as my daily workstation. I code, I design... even video-editing and opening multiple tabs with YouTube works perfectly fine (although maybe not as fast as a beefier machine).
P.S. Besides, it's perfectly silent, since a passive aluminum cooling block with a rib structure cools it enough.
I bought the Mango Pi MQ-Pro a month ago, and while at first the software support was missing (only Tina Linux was available), later some ISOs started to appear. Right now I'm using the Armbian headless image based on Ubuntu 22.04 [0], which works almost perfectly and allows me to work on porting things to RISC-V.
I think this is going to be the real make-or-break thing about RISC-V: early tests have shown super impressive SIMD/vector benchmarks compared to ARM/x86, but whether or not that will make it into production is another question entirely. I've got high hopes for RISC-V, but its acceleration/HPC workload performance is going to determine whether it topples ARM or becomes the next Itanium.
Unlike Itanium, RISC-V is an ISA and there are many many different implementations of the ISA. You cannot conclude anything about the former based on a few instances of the latter. Patience.
I'm not deep in on this stuff, but isn't Itanium essentially the only implementation of IA-64? Presumably Intel was pushing the envelope on their implementation of the ISA.
According to https://en.wikipedia.org/wiki/Itanium this is actually clear as mud, and it seems Itanium was both a family of implementations as well as ISAs (thus variants of IA-64?). It's also the _only_ implementation of IA-64.
Going back to the original question (can RISC-V "become another Itanium"?): not as long as there are people using, supporting and evolving it. Unlike Itanium, RISC-V already has the support of very many companies, and the membership list of RISC-V International just keeps growing.
The only thing that could threaten it would be if a better alternative appeared or if x86 or Arm suddenly got the same unrestricted license. The former is not impossible, but it would be a huge undertaking, and by definition we would be in a better place. The latter seems essentially impossible.
As there's no technical reason why a RISC-V implementation couldn't have roughly comparable performance to an Arm core (iso-effort and iso-technology), it's "simply" a matter of sufficient investment before we'll have the equivalents of the X1 etc. There are very many companies working on high-performance implementations. One of the high-profile ones is Rivos, but there are _many_ others.
The nvgpu/Tegra drivers are all open source and GPL (nvgpu for Tegra has been open source for a long time), but until recently they were on an old 4.19 kernel. The latest is 5.10. There's also the usual firmware binary blob stuff, as with everyone else. There are rumors about them working on mainline support for the Orin SoC, and Orin is (AFAIK) designed primarily to use UEFI as its boot mechanism. So it may be looking a lot better soon as far as fully custom distros are concerned. The userspace is still all closed, however.
I suspect a lot of the push for upstreaming (and the original Tegra GPU drivers being GPL) is from their automotive/industrial customers who want continuity guarantees.
The hardware that was sold could still work perfectly fine instead of having to be trashed, if only the existing drivers were recompiled to run on a recent distribution.
Yes, and Imagination Technologies has been around for a while, designing, among other things, embedded GPUs (dating back to PowerVR days).
They've always had nasty drivers and no documentation, but this is apparently changing, with some new mesa driver funded by the company itself announced recently.
I would like more SBC options, rather than the solitary RPi we have right now. The SD card shield popped off the RPi I have, and I had to resolder it back on. Two years ago, I would have just trashed it and bought another.
I just run mainline Linux on the RockPro64, like I do on all my computers. It seems to work fine for my purposes, although I don't use every feature of the board so there could be some hidden problem I've missed.
I agree that it would be a mistake to buy a wildcat board which needs a special patched Linux from the vendor. Mainline support, or no deal.
You have so many options! I have Raspberry, Banana, Mango and Orange Pi boards sat here. NanoPi, Rock Pi, Radxa, Beaglebone etc all have alternatives available too.
So many options and alternatives from all of them, yet the Raspberry Pi has by far the most software support, technical support, documentation and hardware compatibility and updates compared to the alternatives.
With most of the other boards, it is just one kernel release (if you're lucky, three releases) and then they have moved on to the latest SBC and dropped support and releases. The Raspberry Pi Foundation still continues to support older boards.
It just seems that, with the many Raspberry Pi users, the documentation and the ecosystem built around it are the reason for its success, rather than the technical specs.
I'm still bummed about the discontinuation of the H2. Every alternative that I've been able to find with comparable specs is at least twice as expensive.
Maybe this is a feature. It would be a shame to trash a computer because its card slot broke.
Incidentally, I've been trying to repurpose a Pi Zero W on which I blew out the USB bus. A bunch of people in my local group expressed interest, but each one I contacted in turn said "give it to the next guy".
According to public statements by the execs at RPi, the availability issues are due almost entirely to unexpected new demand exceeding their planned production volume (which they claim was quite large and scaled up from previous generations).
They report that their suppliers have been delivering their parts orders on time and that RPi is shipping the full volume of units it always intended for 2022. They are just getting bought up at a much higher rate than expected. They also say some of the difficulty in buying single units through online consumer retail channels is because RPi is prioritizing filling advance orders from volume integrators.
I have no idea how accurate what they're saying is, but fulfilling advance commitments from volume integrators first makes sense, as those are long-term customers who are counting on the orders to ship their own products. Shorting those orders would be fairly disastrous, as it would flag RPi as an unreliable supplier and reduce future design wins.
MCUs are basically locked to a vendor currently. You choose a chip, build your design around it, and hope there won't be a shortage from that vendor and that it's high enough on the vendor's priority list not to hurt your product.
If there were a RISC-V chip that met a particular standard (x voltage range, y clock speed, same supporting circuitry and software), then we'd have multiple chip vendors creating the same part, and hopefully it would turn into a jellybean chip.
I don't see a standardized pin definition coming soon in something as complex as a Linux-compatible SoC with a DRAM interface; that isn't a thing sometimes even within a single sub-class of products from the same company.
Maybe there will be a gold standard of RISC-V SoC like there was the Intel 8086, and bunch of clones could show up. That was close to happening with ATmega32U, but ultimately the clones disappeared.
More likely a standard emerges in an SBC with only a few buses? The advantage there could be doing your high-density routing and PCB manufacturing with a smaller trace-width etc. process.
How is that different from ARM? There are plenty of ARM MCUs out there these days, and it seems like ARM has defeated MIPS in the embedded space. But once you step away from the core ARM bits, it's all proprietary.
Hah, it does! Though the 2 production runs they've done sold out pretty quickly. I think some 3rd party stores on AliExpress snapped some up and are reselling them, though if you follow their official twitter (@mangopi_sbc) they post their official page when there's more stock. They recently posted a picture of new boards being manufactured so there's hope!
In this regard RISC-V has a whole lot of catching up to do if it wants to be something other than a footnote in the story of the ARM takeover of the world.
I know the openness of the RISC-V platform is ideal to hardcore FOSS fans, but they're far outnumbered by those who will be satisfied with the cheap, fast, and plentiful ARM chips currently on the market, and the cheaper and faster ones sure to come.
I would not describe the Raspberry Pi as plentiful right now, especially if you want the 4GB/8GB variants. Mine was recently destroyed and I can't get a replacement.
I mean, it's been two years of limited supply for the rpi and the RISC-V board under discussion is launching now. So if it has decent supply, that would leave it with the advantage over the rpi.
I've bought the Mango Pi and a few other cheap RISC-V SBCs. The performance is slow (a single 1GHz core) but enough for small hobbies. I was hoping the different ISA would make shellcode incompatible.
But IMO RISC-V, like many Berkeley products, is a pointless waste of resources to make a political point. There's no compelling reason to recompile all our code just to save fabless design houses a license fee. And nobody, save a privileged, well-educated and well-funded few, is in need of a free ISA.
It is, though these 2 boards are the closest in terms of specifications "on paper" as they both offer a single 1GHz core, 512MB of RAM (unless you go for the 1GB MQ Pro) etc. Comparing it to the RPi Zero 2 felt a bit pointless in a standalone head-to-head piece. There'll be a separate post in a few weeks comparing all of the "zero" style boards from the various vendors that may give you what you're looking for!
On the one hand I would say that RISC-V SBCs are not yet for anyone who cares only about price. RISC-V cores and SoCs with Linux SBC capability are in an early stage. RISC-V has already taken over a lot of the microcontroller world, based on price and freedom.
On the other hand, I would say that the price of a number of these D1-based RISC-V boards is now within the range of a Pi Zero plus lunch at McDonald's, so if you only want one of them, the price difference is basically irrelevant.
A year ago a Pi 3-equivalent RISC-V board (the HiFive Unmatched) cost $665, which was a big drop from 2018's HiFive Unleashed plus Microsemi expansion board for $2998 total. Since December there's been the VisionFive for about $180 (although with only 2 cores, not 4). In a few months there will be the newly announced Pine64 Star64, with an expected price of $60 with 4 GB RAM or $80 with 8 GB RAM (they've said "about the same price and performance as the Quartz64").
Those are pretty rapid price drops.
RISC-V SBCs with considerably better performance than a Pi 4 (similar to the RK3588 boards that just came out a couple of months ago) will be demonstrated working in the next couple of months and probably shipping early next year. They will, again, probably be quite expensive at first, but existing is the hard part. Price is then just a matter of mass production economics.
I think the problem so far is that no RISC-V chip has offered any USP (unique selling point) other than being licence-free. For example, Raspberry Pis exploded in popularity because they were incredibly cheap and had a lot of support.
Being license-free is a huge selling point, though. Arm makes over 1 billion USD in license fees per year. If RISC-V reaches anything like comparable maturity to Arm, there are a lot of companies that would rather not pay their slice of that billion.
SD cards and SSDs have a latency on the order of 500µs, while even a Raspberry Pi 3 has a memory latency[1] of around 200ns. That's three orders of magnitude difference in latency, which would destroy performance.
In addition, memory bandwidth is still at least an order of magnitude higher.
Old computers were vastly weaker in a lot of ways, but memory latency isn't one of those things. A 6502 was also hitting memory every few hundred nanoseconds.
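If anyone wants to sanity-check those latency numbers on their own hardware, the standard trick is a dependent-load ("pointer chasing") loop, which the CPU can't prefetch or overlap. A rough sketch, with an arbitrary buffer size and no attempt to control for frequency scaling or TLB effects:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Each element stores the index of the next element to visit, so every load
 * depends on the previous one and the CPU cannot overlap or prefetch them. */
int main(void)
{
    const size_t n = (64u * 1024 * 1024) / sizeof(size_t);   /* 64 MiB, well past any cache */
    size_t *chain = malloc(n * sizeof *chain);
    if (!chain) return 1;

    /* Large odd stride so successive elements land far apart; a random
     * permutation would defeat stride prefetchers even more thoroughly. */
    const size_t stride = 4099;
    for (size_t i = 0; i < n; i++)
        chain[i] = (i + stride) % n;

    const size_t iters = 20 * 1000 * 1000;
    size_t idx = 0;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++)
        idx = chain[idx];                       /* serialized, dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    /* Print idx so the compiler can't optimize the chase away. */
    printf("%.1f ns per dependent load (final idx %zu)\n", ns / iters, idx);
    free(chain);
    return 0;
}
```

On a current desktop you'd expect something like 70-120 ns per load out of DRAM; if each access instead had to come from an SD card or SSD (e.g. via a page fault), it would be hundreds of microseconds.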
You could make it work with Optane except oops that just got discontinued.
SSD speeds will probably continue increasing, so removing RAM would take us about 20 years back into the past, with the difference that these machines would have seemingly "unlimited" memory. Imagine Windows 2000 never complaining about being out of memory.
> SSD speeds will probably continue increasing, so removing RAM would take us about 20 years back into the past
I think you significantly overestimate how much they can reduce latency for SSDs. Keep in mind, even an 80286 from 1982 had a memory latency in the 200ns range[1].
Getting rid of DRAM would take us back a lot farther than 20 years. A typical desktop PC circa 2000 using PC133 RAM would have had a memory access latency in the tens of nanoseconds.
I've had Home Assistant running on the same 32GB SD card for over 3 years now, and the DB is several GB thanks to several very chatty Z-Wave devices. Zero issues so far. So it's not a given it'll die in a couple of months.
Huho! Finally a RISC-V 64-bit SoC with tons of GPIOs! But for a keyboard controller, power would be provided via the USB-C from the host; it seems the MangoPi expects power on a dedicated USB-C connector. Am I wrong?
According to the schematics[1], both HOST and OTG ports have VBUS directly connected to the same VCCIN net, so either can provide power as far as I can see.
Those are good omens. It is way overkill (WiFi/Bluetooth/GPU/video) for a keyboard controller, but at least it exists. If they follow Raspberry down the same road, we may get a MangoPi Nano (hopefully with a 64-bit RISC-V CPU and a good bunch of GPIOs).
Dhrystone 2: 253.2 on the Mango Pi MQ Pro, 202 on the Raspberry Pi Zero. I don't know what this number is; presumably it's not Dhrystone MIPS (VAX MIPS), because that should be closer to 1000 for both chips, and it can't possibly be Dhrystones per second, because a VAX MIPS is 1757 Dhrystones per second, so they ought to be about 1.8 million Dhrystones per second. Reading https://github.com/kdlucas/byte-unixbench/blob/master/UnixBe... leaves me no wiser.
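My best guess (and it is only a guess) is that these are UnixBench index values, i.e. the raw loops-per-second result divided by the suite's built-in baseline (116,700 lps for Dhrystone 2) and multiplied by 10. Under that assumption the MQ Pro's 253.2 would mean

$$ \text{lps} \approx \frac{253.2}{10}\times 116{,}700 \approx 2.95\times 10^{6}, \qquad \text{DMIPS} \approx \frac{2.95\times 10^{6}}{1757} \approx 1680, $$

i.e. roughly 1.7 DMIPS/MHz at 1 GHz, which is at least a plausible figure; but that's me reverse-engineering the number, not something the report spells out.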
File Copy 1024: 124.4 on the MQ Pro, 86 on the Raspberry Pi Zero. The UnixBench README says this is measured in the number of characters that can be written, read, and copied in 10 seconds; if this were correct it would mean the MQ Pro were copying 12.4 bytes per second and the Raspberry Pi Zero W was copying 8.6 bytes per second. It seems inescapable that Bret's results are incorrect by several orders of magnitude here.
C copy backwards: 1197.4 MB/s (not MiB/s as I previously read incorrectly) on the MQ Pro, 157.2 on the Zero W. Something went wrong here.
Standard memcpy: 1200.9 MB/s on the MQ Pro, 424.8 on the Zero W.
Standard memset: 2650.6 MB/s on the MQ Pro, 1699.9 on the Zero W. This seems surprisingly slow; you'd think an 8x-unrolled loop of SD instructions on a 1GHz RISC-V would memset about 5 gigabytes per second, if it got one instruction per clock, which it normally ought to. The XuanTie C906 core in the Allwinner D1 is 64-bit, which is potentially an advantage for this; the analogous code for ARMv6 can only write 32 bits per instruction. (The ARM has an STM instruction that stores multiple registers, but it doesn't actually run faster.) I'm not sure what happened to the extra factor of 2 in performance. Similar remarks apply to the memcpy results above.
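For what it's worth, the kind of loop I have in mind looks roughly like the sketch below (just a sketch: real memset implementations handle alignment and tails, which this ignores, and the function name is mine). Eight 64-bit stores per iteration, which an RV64 compiler lowers to SD instructions:

```c
#include <stddef.h>
#include <stdint.h>

/* 8x-unrolled 64-bit store loop: on RV64 each assignment compiles to one SD
 * instruction, so the inner loop is 8 stores plus ~2 instructions of loop
 * overhead per 64 bytes.  A single-issue 1 GHz core retiring one instruction
 * per clock would therefore top out around 5-6 GB/s, if the memory system
 * keeps up.  Assumes dst is 8-byte aligned and n_bytes is a multiple of 64. */
void fill64(uint64_t *dst, uint64_t value, size_t n_bytes)
{
    size_t n_words = n_bytes / sizeof(uint64_t);
    for (size_t i = 0; i < n_words; i += 8) {
        dst[i + 0] = value;
        dst[i + 1] = value;
        dst[i + 2] = value;
        dst[i + 3] = value;
        dst[i + 4] = value;
        dst[i + 5] = value;
        dst[i + 6] = value;
        dst[i + 7] = value;
    }
}
```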
Amazon Basics 64GB MicroSD card: MQ Pro reads sequentially at 11.48 MB/s, writes sequentially at 10.77 MB/s; Zero W reads sequentially at 21.36 MB/s, writes sequentially at 19.6 MB/s. (I'm ignoring the random reads and writes because he doesn't specify the read and write size, yet he gives the results in MB/s instead of IOPS.) Note that these are six orders of magnitude faster than the 12.4 and 8.6 bytes per second given earlier.
Unfortunately Weber doesn't link to the benchmark code or document his compilation and execution environment. Presumably the memset and memcpy results are largely measuring the performance of the libc functions, for example, so reproducing them would require knowing if he's using glibc, musl, or a C library he wrote himself.
Mostly I feel like these benchmarking results are not well enough specified to be useful, which is a shame. I'd like to be able to use this kind of benchmark to predict the performance of a system within a factor of 5 or so, but these results are too irreproducible for that.
I got about the same speed (1100 MB/s) for huge in-RAM copies. The in-cache speed peaked at 3771 MB/s for an 8 KB copy with custom code (shown on that page) using the D1's 128 bit vector registers, 2058 MB/s using the standard glibc function.
150 to 400 MB/s on the Pi Zero is very believable. The D1 has pretty strong RAM performance. The HiFive Unmatched was about 180 MB/s when I tested it. You need top-end DRAM controller IP and also a prefetch engine to get good read speeds, and it's no surprise if an ARM11 doesn't have that. SiFive also doesn't have it on their self-produced chips. Hopefully Horse Creek has the good Intel IP for peripherals and RAM.
I have no idea what's going on with the Dhrystone. SiFive's in-order single-issue RISC-V cores get about 1.6 DMIPS/MHz on 32 bit cores, 1.7 on 64 bit cores. I haven't measured but I'd expect the C906 to also be around 1.7 i.e. 1700 VAX MIPS at 1 GHz.
memcpy and memset are usually memory-bound, so it isn't surprising that they don't reach 1 instruction per clock. I wonder if this is due to slow memory instead.
Seems likely. Or maybe they don't have any cache and so it's spending half the memory bandwidth on instruction fetch? That can't be the Pi's excuse, though.