Sipeed Longan Nano – RISC-V Development Board (seeedstudio.com)
110 points by childintime on Aug 30, 2019 | 95 comments



Datasheets are here: http://dl.sipeed.com/LONGAN/Nano/DOC/

From my experience with GD's STM32 clones, they did not have real flash, but copied all your code from SPI-like internal flash to main SRAM on startup. I wonder if they pulled the same stunt with the GD32V.

EDIT: what is inside stm32 clone: https://zeptobars.com/en/read/GD32F103CBT6-mcm-serial-flash-...

EDIT2: original page now returns 404, it was this board: http://dl.sipeed.com/LONGAN/Nano/Spec/Sipeed%20longan%20nano... for $4.90 (https://www.seeedstudio.com/catalogsearch/result/?cat=&q=GD3...)


It looks like it executes directly out of flash. The instruction bus isn't hooked up to the main AHB matrix, but instead goes straight into the flash controller. It doesn't look like it can even execute out of SRAM, for the same reason. It also looks like you can't DMA from flash to SRAM or peripherals, which isn't the biggest deal, but could have made for some neat efficiencies.


The datasheet has a diagram indicating that, but the user manual claims there's a bus matrix which provides SRAM connectivity from the CPU I/D ports and the DMA units. The documentation is... not amazing.


Executing from flash is patent encumbered.


Do what now? People have been executing out of flash for longer than patents are valid.


The STM32 has its special ART accelerator cache to prefetch instruction lines from flash and significantly reduce wait states. A clone can't replicate the same performance with a bog-standard flash peripheral.

https://www.embedded.com/electronics-blogs/break-points/4440...


I know about ART. I actually have some contributions to the STM32F4 reference manual, as my team had access to pre-production silicon. There are bugs in the prefetching on that thing on early revs and you need to flip a chicken bit.

I still don't see anything in there that's not the usual way flash is accessed when it's mapped into the address space for XIP applications. Hell, a lot of the time people will stick in something nicer than a dog-simple direct-mapped cache like ST's implementation.


What's novel about that?


Pardon my ignorance, but besides being an open-source ISA, what's the reason for RISC-V's popularity? The buzz alone has piqued my interest, but I'm just trying to understand the "why".


No ARM-like revenue-based royalties, and it's broadly supported, including mainline Linux kernel support. So companies like Western Digital are all in. And if someone makes an "open source" CPU with it, to match the open-source ISA, it's a win for trusted computing.

Also popular here because some folks imagine a new ecosystem of cheap Rpi type boards with no magical binary blobs. See, for example, lowrisc.org (though they seem far from shipping)


> Also popular here because some folks imagine a new ecosystem of cheap Rpi type boards with no magical binary blobs. See, for example, lowrisc.org (though they seem far from shipping)

I thought that the biggest blob in most Linux systems is the GPU driver, and RISC-V doesn't solve this.


The AMD graphics drivers are completely open source.


The drivers are, but not the firmware running on the GPU (I think it's called the Video BIOS). I'm not sure if a completely free GPU exists. But this is not something that RISC-V is concerned with.


There are actually RISC-V extensions designed for numerical computation, though I don't recall if you could actually do a shader core or something like that based on RISC-V.

At any rate, I hadn't heard about anyone trying to make a completely open GPU. That would be cool though.


"Video BIOS" (these days, more likely an EFI GOP driver) runs on the CPU and provides display output before the OS is booted.

On-GPU firmware is not that interesting or concerning, you can consider it part of the hardware.


"On-GPU firmware is not that interesting or concerning, you can consider it part of the hardware. "

It depends on how it can communicate with the external world. If it only eats numbers and draws pixels, then no problem, but if its driver nature (i.e., running at higher privileges than anything, including root/administrator) allows it to create a covert channel with other hardware (say a network chip) and send vital information to the outside world, then it becomes a huge potential vulnerability. In open-source systems, CPU/GPU/system chipset firmware and closed device drivers are the places where malware can hide unnoticed for ages, and incidentally the ones where it can do the most damage due to the aforementioned privileges, so those should be the first parts of a system for which we demand total openness.

A simple program with no access to files (therefore considered safe) that reads the CPU load and populates a remote graph with numbers can become an effective spying tool if paired with another seemingly innocuous program which has no network access (also considered safe) but reads files and busy-loads one CPU core with values derived from some encoding of the data it reads. They're 100% safe by themselves, but once paired (say because they're written by the same entity, or by different entities obeying the same government/s) they can exfiltrate information pretty easily. Back on the firmware topic, we can't know whether there are any seemingly unrelated device drivers talking to each other unbeknownst to the system administrator, but should they do so, that would be the most dangerous backdooring toolkit ever conceived. As long as they stay closed, there's no way to know whether they do other things.
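
To make that pairing concrete, here's a minimal, purely illustrative sketch in Python (the function names, slot length and threshold are made up, and psutil is assumed to be installed): the file-reading program encodes bits as bursts of CPU load, and the load-watching program samples per-core utilization to recover them.

    # Hypothetical sketch of the CPU-load covert channel described above.
    import time
    import psutil  # third-party; assumed installed for the receiver side

    def send_bits(bits, slot=1.0):
        # Sender: has file access but no network. Encodes each bit as one
        # time slot of either busy-looping (1) or sleeping (0).
        for bit in bits:
            end = time.time() + slot
            if bit:
                while time.time() < end:   # busy-load one core -> high utilization
                    pass
            else:
                time.sleep(slot)           # stay idle -> low utilization

    def receive_bits(n, slot=1.0, threshold=50.0):
        # Receiver: has network access but no file access. Samples per-core
        # CPU utilization once per slot and decodes a busy core as a 1.
        return [1 if max(psutil.cpu_percent(interval=slot, percpu=True)) > threshold
                else 0
                for _ in range(n)]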


> On-GPU firmware is not that interesting or concerning

So people don't like it if kernel drivers are closed source, but if all the functionality is moved to firmware and the open kernel driver just pokes the closed firmware that does all the work it's fine? What? How does that logic work?


Go ask the FSF with their questionable RYF certification. TLDR: As long as the firmware blobs are in ROM, who cares?


Why is it neither interesting nor concerning? From a security perspective it would seem to be both-- hostile firmware could at the least read secrets from your screen, and I'd be shocked if it couldn't convince the corresponding kernel driver to misbehave in interesting ways. Not knowing what is actually in the blob, you have to assume it is at least potentially that malicious.


Correct me if I'm wrong (and I really hope I am), but it doesn't appear to be possible to use "pro" features like OpenCL with the open source AMD drivers, at least not with my 2400G APU. The last two times I tried, I had to install some blob that failed part-way through the installation and rendered my system unbootable.


There's a mystery boot blob in the RPi that's pretty famous.


I'm not sure what you are talking about. There is no such thing as a mystery boot blob in the Raspberry Pi. It is well known that the VideoCore chip is the main processor of the Raspberry Pi. It runs a proprietary RTOS called ThreadX. As an afterthought, someone added a bunch of ARM cores to the VideoCore chip, which obviously have to cooperate with the main processor to do anything at all.

https://www.heise.de/select/ct/2016/8/1460193213013079/conte...


So you're saying there's source available for the boot binary?


This is because of the specific architecture of the BCM2836, BCM2837, and BCM2711B0, which are really VideoCore chips with ARM cores bolted on.


> I thought that the biggest blob in most Linux systems is the GPU driver, and RISC-V doesn't solve this.

It makes no sense to counter a hypetrain with intelligent arguments. :-)

Of course, you are right.


There are already a lot of Allwinner-based Pi clones with no magical binary blobs, though - depending on the board, the only blob may be the non-modifiable boot ROM, which is actually quite convenient for low-level hackers since it just loads your own bootloader from one of several devices and gets out of the way. This includes support for raw NAND flash, so in theory you can avoid even having a binary blob in your storage device if you get one of the few boards with raw NAND onboard.


The ARM license cost is nothing compared to what they have invested in testing and QA. One round of ASICs that fails is significantly more expensive.

If you are jumping on RISC-V to save money, you have your priorities very wrong. On the other hand, if you are Huawei and can't do business with ARM...


This argument used to come up when I worked at Cambridge Silicon Radio, where we made ASICs. We made our own CPU designs for those ASICs.

Some of the engineers thought making our own CPU designs was a stupid idea and some thought that switching to ARM designs was a stupid idea. The only fact I have is that our CPU designs were not the main cause of bugs in our chips. If you asked the digital designers, they'd say things like, "Why are software engineers so interested in CPU designs? The kinds of embedded CPUs we need are easy to design. Implementing a correct power management state machine is hard. Implementing an efficient WiFi PHY is hard. Get off my lawn..."

Some problems with putting ARM CPUs in our chips were: a) integration engineering costs (i.e., finding a way to multiplex some existing pins on the package with the JTAG debug port), b) having to do price and contract negotiation with ARM (that part of the company was less efficient than the digital engineering team), c) finding that the change you wanted to make to the chip was incompatible with the ARM contract you'd just spent 6 months negotiating.


But you also left developers with really hard-to-use dev tools based on a fork of GCC. The architecture also left you doing all kinds of mental gymnastics when writing code to make sure you were keeping the size down.

That said, the chips certainly did their job at a good price point once you got everything stable and got fixes made to the binary blobs that you had no way to debug.


> But you also left developers with really hard-to-use dev tools based on a fork of GCC

Yep. Sorry about that. Not that it was my fault. Anyhow, I guess my belief is that a company similar to CSR could put RISC-V cores in their chips now, which would avoid the problem you describe and some of the problems I described with using ARM.


Ah, the good old XAP processor. For a while you guys were marketing a BC03 or BC04 era chip as a general-purpose microcontroller with digital audio functions.


I believe CSR went out of business after burning an investment of $1bn with nothing to show for it.

Not trying to be hostile, just pointing out how costly "let's design our own SoC, how hard can it be?" is.


Are we talking about the same CSR? It's still in business. It's now part of Qualcomm: https://en.m.wikipedia.org/wiki/CSR_(company)


Hmmm.. I stand corrected.

I know a few people who joined ARM after CSR closed down but that could be just some offices shutting down.

(not an arm employee, btw)


The offices in Cambridge didn't shut down. Roughly speaking, Samsung bought one building (WiFi), and a year later Qualcomm bought the other (BT + audio). Both Samsung and Qualcomm had rounds of redundancies but have also both hired lots of people. My guess is both offices are about as full as they were ten years ago.


1.5% of your per-unit revenue for a hard drive seems significant. WD seems to agree, and they don't seem like a risk-prone company.


1.5% does sound significant, but does WD save 100% of that 1.5% by doing it in-house?

Furthermore, that 1.5% covered more than just the CPU (peripherals, bus technologies, patents, manufacturing know-how).

I'm not saying WD is wrong, but I think they are overestimating how much they can save. This is probably just to get a better deal from ARM.


1.5% of a big number is still a big number. Western Digital's revenue in 2018 was $18 billion. Even if only 0.15% of their revenue went to ARM, that would be $27M, or enough to hire 100 chip designers at $270K each.
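
In code form, a quick back-of-the-envelope check of those figures (illustrative only; the revenue and rate are the numbers above, and the $270K salary is just for the sake of the example):

    # Back-of-the-envelope check of the royalty numbers above (illustrative only).
    revenue_2018 = 18e9       # Western Digital 2018 revenue, USD
    royalty_rate = 0.0015     # 0.15% of revenue, a tenth of the 1.5% per-unit figure
    royalty = revenue_2018 * royalty_rate     # 27,000,000 USD
    designers = royalty / 270_000             # 100 designers at $270K each
    print(f"royalty = ${royalty / 1e6:.0f}M -> about {designers:.0f} chip designers")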

That would be enough people for a very advanced chip, but WD doesn't need to build the next supercomputer. Instead, they want a chip customized to handle their specific workload with high throughput, and probably some specialized instructions to put some algorithms in hardware. Moreover, their RISC-V chip design expense should decrease over time, both because they don't need to change chip micro-architectures very often and because as more companies switch to shared, open hardware, the costs are reduced.

Another example is Nvidia. They continue to hold an ARM license for several ARM designs and another license for their own custom Denver/Carmel designs. Despite this, they still opted to use RISC-V in their GPUs.

These companies manufacture chip designs for a living. They certainly understand the costs involved and have determined that the immediate and long-term costs are lower than the benefits provided.


From their public statements, this is not to get a better deal from ARM. They are all-in with RISC-V and have already made public their first core (among other things), which is better than the equivalent from ARM. At this point I don't think ARM could bring much to the table for WD. They've invested way beyond what makes sense for getting better pricing.


It's not for the whole unit, just the chip? Isn't it Qualcomm with the whole-device royalties?


Seems you may be right, but how would you calculate the "chip selling price" for a chip inside a hard drive? It's not clear to me what the royalty charge is for a company like WD.


Probably they've negotiated a nominal value with their ARM rep based on whatever they're currently using.


> One round of ASICs that fails is significantly more expensive.

Does that change with ARM? If you're licensing the ISA, or even bits of HDL, you're still fabbing your own stuff, right?

If it were the choice between pre-made RISC-V and pre-made ARM, then the licensing could play into a cost difference between models, but otherwise they'd both be pre-fabbed, ready-to-use products?


But then why bother if you are just using someone else's pre-made macros with a fixed architecture (CPU + memories + peripherals + power & clock distribution)? At this stage, would it even be cheaper?

I think there is some merit to RISC-V, but people pushing hard for its use RIGHT NOW don't seem to understand the challenges of SoC design and manufacturing.


As someone who works at a famous mixed-signal IC company, I note that almost everything I've heard about it comes from HN and not from internal discussion. Instruction sets are just not where the leading edge of value creation is. And the design and manufacture costs of ICs remain pretty irreducible. People expecting lots of open source hardware to suddenly start popping up are going to be disappointed.

An instruction set is an API. It's the architecture behind it that matters.


The problem is that you can't have open-source chips without an open-source ISA. Without an open-source ISA we can't have an open-hardware movement for silicon.

Yes, the internal design is the important part, and having an ISA that is designed properly will make that a lot easier; we have seen both high- and low-performance chips that can compete with ARM speeds while having taken far less development time.

RISC-V is designed to allow individual companies to add their own secret sauce and to have a toolchain that makes that easy.

We should remember that RISC-V is still incredibly young; it only officially escaped from university three years ago.


It's going to start popping up as soon as Chinese chip manufacturers get on the bandwagon. They will want the extra margin that royalty-free cores give them. The rest of the world will be satisfied with cheap ARM devices.


A lot of innovation starts at universities. They seem to like RISC-V.

And with the ecosystem available, there's less of a need to port to ARM.


Targeting all use cases with x86 is not unusual?


I'm not sure what the argument is here?


It's been quite a while (several years) since I read the original paper [1] on RISC-V's design but I remember it being very elegant, with a lot of things fixed that other ISAs have been stuck with for decades. It's truly modern in the sense that it demonstrates how much we've learned in the last few decades, without being such a radical departure that it requires rearchitecting all software for it.

[1] https://people.eecs.berkeley.edu/~krste/papers/EECS-2016-1.p...


For me it is clear RISC-V will become the de facto standard in processors; soon it'll even compete on the high end. So besides being cheaper:

- as a developer my RISC-V (assembly and system) skills will remain relevant and valuable

- by using RISC-V MCUs now I will avoid future family migrations

- RISC-V cores are already the MCUs of choice embedded in FPGAs and AI chips, and I won't need to learn another tool before being able to utilize them

- RISC-V enables collective community-enabled innovation (which was the prime driver for creating it)

- Rust runs on RISC-V MCUs (besides ARM)

- RISC-V simplicity simply means less bugs


> RISC-V simplicity simply means less bugs

Fewer bugs in your code, or fewer bugs to encounter in the platform?

I've found that my bugs are usually processor-independent, and that most of the bugs in SoCs are not in the CPU cores, but in the support components around them (e.g., DMA engines and timer registers with bizarre behavior).

Well . . . and cache coherency issues. But everyone has cache coherency bugs. :-)


It's not just your code, but the toolchain.

Developers of compilers, debuggers and such are way more likely to introduce bugs with CISC architectures. Complexity doesn't end at the edge of the chip. It goes down the chain.

Debugging RISC object code is also easier, for the same reason.


The ability to write or design reliable compilers/debuggers that people will use in the real world, and estimations of "hardware complexity", have very little to do with how difficult it is to emit object code for some ISA, I'm afraid to tell you.

RISC is a meaningless term today for describing design complexity. People see 'cmov' in x86, or the boot process, or variable-length instructions, or whatever, and think, "wow, how horribly complex this all is, it must be a huge design problem, and it's probably because of the instruction set" -- but when they see some modern ARM machine do superscalar OoO execution in order to execute an instruction over multiple cycles, with the compiler scheduling instructions around that particular latency/throughput so it can generate optimal code for that uarch, it's like, wow, this is all so simple, it must make everything so much easier. It doesn't, but it's worth asking what the debate even means at that point. All the actual complexity is elsewhere, and is (mostly) independent of the ISA.

Side note, but I have RISC-V hardware on my desk and one of the errata for this particular silicon is "the MMU may not catch and deliver all memory access violations correctly" -- meaning you just don't have memory protection sometimes, in some cases, on some days with some shirt colors! I'm pretty sure the ISA doesn't mandate that, and I'm also pretty sure it didn't come from the instruction decoder. Luckily, in this case, programmers have repeatedly proven they are extremely reliable at handling and estimating issues of memory safety (reliably bad at it).


>The ability to write or design reliable compilers/debuggers that people will use in the real world, and estimations of "hardware complexity", have very little to do with how difficult it is to emit object code for some ISA, I'm afraid to tell you.

Itanium is a counterexample to your argument.

>All the actual complexity is elsewhere, and is (mostly) independent of the ISA.

Which renders ISA complexity difficult to defend, particularly as RISC-V has demonstrated code density competitive with amd64.

Complexity is inherently bad, thus the use of complexity needs strong justification.


Thank you for the less -> fewer correction. OMG, I am horrible at grammar but for some reason this one drives me crazy and I hear it all the time in our industry.


It's not a correction, because "less" is also correct for countable things [1]. "Less" has been used this way for the entire existence of English, and the mistaken idea that it's wrong is relatively recent.

[1] http://itre.cis.upenn.edu/~myl/languagelog/archives/003775.h...


Not intended as a correction, it's just how I english.


> Well . . . and cache coherency issues. But everyone has cache coherency bugs. :-)

RISC-V is trying to make that a thing of the past.


"Can't have cache coherency bugs, if your ISA doesn't require cache coherency!"


Maybe I need to elaborate. The RISC-V foundation is currently working to strictly define both strong and weak cache coherency modes, and a separate formal verification group is working to define and validate a formal model of cache coherency behavior for the RISC-V ISA. There have been numerous updates on this project at the past couple of RISC-V workshops. When this is all done, ensuring a given implementation is bug-free with respect to cache coherency will be a mechanical process (at least, that is the goal).


You are confusing memory consistency with cache coherency.


On a multi-core system it applies to both.


Almost all of those points are true of any mainstream architecture (x86, ARM aren't going anywhere, your skills are valuable today, you don't have to throw away any code anytime soon, Rust works today on both), and the others ("chip of choice for AI", "de facto standard", "simplicity means less bugs") are either entirely arguable or outright naivety; they don't say anything unique. The ISA being experimented with and extensions being made is certainly an undeniable plus, though.

People on this website just hype it to outrageous levels because the ISA is all 95% of them interact with at any level, and sometimes not even then, so all other factors/considerations are completely ignored in favor of just believing it's really The Best (and I say that as someone who owns real RISC-V hardware and am writing my own emulator, and want it to succeed.)

But you don't have to just make stuff up to sell it. You can just say "It's a pretty good ISA, it's freely available/modifiable, and there's a ton of good tooling already available for it". That's a pretty good sell on its own, to be perfectly honest.


> x86, ARM aren't going anywhere

x86 has never been a thing in MCUs, so ARM has more to lose.

> Almost all of those points are true of any mainstream architecture

No, that isn't true at all, not when you consider the end-game: they are not open source, they are not simple enough for that to make a difference, and they'll therefore never attract the collective creativity. Maybe some other new architecture will, though. Everybody should make their own bets.

> People on this website just hype it to outrageous levels

Though RISC-V certainly is in the hype phase, the hype is also entirely justified. From that point of view, it seems you have a vested interest in some other architecture and see that investment threatened.


Just to be clear, the ISA is open source. The design or implementation does not have to be.


Cost.

ARM and MIPS require licensing fees, and spinning your own new architecture will make your chips irrelevant. RISC-V doesn't charge you to use their instruction set, and it's already supported by developer tooling.



What particularly bugs me are the false claims that say using RISC-V will make you more secure.


The lack of speculative execution or a Management Engine strongly suggests these claims are true.


These things have NOTHING to do with the ISA.


Correct, but all current implementations lack them, and Patterson has recommended against speculative execution as being more trouble than it's worth.


That won’t stop any company from doing what they want.


The weapon manufacturers investing in the RISC-V foundation might disagree.


RISC-V changes the economics of processor development. It doesn't improve security for end users.


I assume what people are referring to when they say that is the embedded security processors in Intel and AMD chips.


The openness means it is accessible to academics, who are lately working hard on proving formal correctness of the spec, which we don't have for commercial designs.

It is unfortunate that they froze the base instruction set before people who understand performance optimization got a good look at it. But, ultimately, if the instruction density is no worse than Intel's, it won't be handicapped by that until something else comes along. When that happens, it will make the transition to the next one easier.


Note that most RISC-V chips arise in China. As seen in the fight between Trump and Huawei (ARM almost ended its contract with Huawei), I think RISC-V is becoming a great alternative to ARM for Chinese companies.


It's the new thing.

I don't think it is significantly better than existing ones (including the now fully open-source MIPS). And it is already significantly fragmented.

But it's the new thing and where the mindshare currently is. Many new ideas are tested on RISC-V, and there exist multiple open-source implementations.


A combination of simplicity (at least before you start adding extensions) and good software support. It's easy to understand, like the old 8-bit ISAs, but you can still use modern compilers.


> It's easy to understand like the old 8-bit ISAs

Oh, that's kinda cool. Playing with 8-bit processors is what first got me interested in low-level computing and helped a lot of things "click" for me.


Previous ISAs were made for computers that don't exist anymore.


I've been using the ESP8266 but have been looking to learn more about the STM32 and RISC-V since they're closer to the hardware.

From a commercial standpoint, the WROOM-02 is FCC pre-certified. An example of a Sipeed dev board with WiFi on board is this one, which uses the ESP8265:

https://www.seeedstudio.com/Sipeed-MAIX-I-module-WiFi-versio...

If someone were to use an STM32 for business logic, why wouldn't they just use an ESP8266, since it's a microcontroller with WiFi and the WROOM-02 is pre-certified?

Also, for clarification: the STM32 is based on the ARM architecture, whereas the Sipeed Longan Nano competes against the STM32 but is RISC-V based?


How can I distinguish news discussing a processor core design (seemingly, usually, marketed as a "core") from news discussing hardware that I can actually buy (a processor "chip")?

I would have thought that 1. https://blog.westerndigital.com/risc-v-swerv-core-open-sourc... 2. https://greenwaves-technologies.com/ai_processor_gap8/ and 3. https://syntacore.com/page/products/processor-ip/scr1

would be pieces of hardware that I can buy. Instead, they seem to be designs for hardware which conform to the RISC-V ISA, which could then be fabbed by a semiconductor manufacturer or flashed onto an FPGA.

What marketing/technical terms distinguish the designs of these systems from their implementations? "Core", "processor", "MCU"/"microcontroller", for example? Are any/all of these uniquely constrained to describing either a design (on paper) or that design's physical implementation in hardware?


As a tinkerer:

1. I can read the official specs without agreeing to an onerous license.

2. There are several open source implementations I can choose to run on an FPGA.

3. I can choose the simplest combination of features. Want a multiply instruction, but no branch prediction? Want caching but no MMU requiring multi-level address virtualization?

A large processor vendor has to employ a small army of processor designers. The trend will always be in the direction of adding more features and increasing complexity. Customers will write code to require those features, and the cycle continues.

Only very rarely does a fresh design appear with the right combination of attributes to give it a chance of being a viable platform for a long time. This is what makes RISC-V attractive to me.


A longan is an Asian fruit related to the lychee.

FYI ICYWW NTIM


Beware! Opinionated comment!

Seeedstudio is so dismissive of its customers. Delivery of my order failed due to the local mail service, and the person on the other side of the customer "support" email refused even to file a request with the mail service. I'm not even talking about compensation and/or a resend.


I wonder who that [-] was.


404 page not found - maybe the product was de-listed?


Well, I'm getting a 404 too, so I'm guessing the page was removed.


Page works fine for me. Maybe it was a temporary problem?


It's still 404 for me, so I opened it in private mode and it appeared.


I had the same issue until I turned off my ad blocker.


This was an instant click but it seems the site is being hugged down.



