Keep in mind the spec being violated is the privileged spec, not the core instruction set (which is what people think of when they think of an ISA violation). Personally, I think the privileged spec is a little overzealous about the implementation-specific details it specifies, and a little more should have been left to chip vendors.
For the uninitiated, what kind of things? I mean, are these things that are necessary for desktop operating systems but overkill when the chip is used as a DSP or something like that? These aren't dumb people working on this, so I wonder what the trade-off is.
Exactly that - they specify things that you need for an OS on a desktop/server system but that are complete overkill for embedded applications. I think the goal was to have one version of "RISC-V Linux" without vendor-specific extensions, but the chip vendors are used to their extensions, and there are good reasons to allow them some more customization (to allow design tradeoffs).
Also, in the ARM ecosystem, some of the things that the RISC-V spec specifies (like MMU details) are vendor-specific, leading to a minor headache for OS writers, but give chip manufacturers more freedom. Renesas may have assumed they had the same freedom.
The Arm ecosystem is a shitshow for this reason, with massive efforts required to port to each new SBC. We hope to standardize RISC-V enough to avoid at least the worst of this (Linux drivers will still be a problem, but you should be able to boot a standard single image on most boards even if some of the hardware won't always work at the start).
So this sounds like a deliberate design choice, like when Intel went from pins to balls, shifting the complexity to the motherboard. There are always trade-offs. I guess the idea is that the overhead is worth the benefit to the ecosystem, and if a vendor really needs to, they can create a chip that is only conformant at the user level but not at the privileged level, which is still better than starting a custom ISA from scratch.
On the other hand, it sounds from the thread like the result of the deviation from the spec is "you can't run userspace binaries unless you built them with a binutils that's working around this", so it's not just a "weirdness the kernel has to deal with" kind of thing.
If everyone is understanding it correctly then that appears to be the case. Apparently user-space addresses from 0x20000 to 0x3ffff are not mapped by the MMU in the expected way, but are directly mapped to the same SRAM for every program.
A statically linked "HelloWorld" on my VisionFive 2 starts from 0x10000 and runs up to 0x4ea8e, so smack through that whole memory region.
The only way to make programs compiled with a standard binutils (or on another RISC-V machine, or a standard OS running in a VM) work would be for the kernel to memcpy() that 128k region in and out on every address space switch.
It's really an awful bug (or design decision) if you want to run standard OSes and standard code on it.
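If anyone wants to check whether their own binaries land in that window, here's a rough sketch (my own illustration, not the actual binutils workaround) that scans /proc/self/maps and flags any mapping overlapping the 0x20000-0x3ffff range mentioned above:

    /* Sketch: scan /proc/self/maps and flag any mapping that overlaps the
     * 0x20000-0x3ffff window reported in this thread. Build it statically
     * and run it to see whether its own text segment falls in that range. */
    #include <stdio.h>

    #define BAD_LO 0x20000UL
    #define BAD_HI 0x40000UL   /* exclusive end of the reported 128 KiB window */

    int main(void)
    {
        FILE *f = fopen("/proc/self/maps", "r");
        if (!f) { perror("maps"); return 1; }

        char line[512];
        while (fgets(line, sizeof line, f)) {
            unsigned long start, end;
            if (sscanf(line, "%lx-%lx", &start, &end) != 2)
                continue;
            /* overlap test against the problematic window */
            if (start < BAD_HI && end > BAD_LO)
                printf("OVERLAP: %s", line);
        }
        fclose(f);
        return 0;
    }

A static binary whose text starts at 0x10000 (as measured above) should report an overlap; one built with the patched binutils presumably wouldn't.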
On the other hand, the privileged spec is also designed to provide process isolation and other security features. It may not seem as drastic as if a core instruction were misinterpreted, but this is IMO not the layer you want unexpected deviations from spec in.
It doesn't. Effectively the architecture spec defines behaviour so that it shouldn't matter what you choose for the text segment starting address, but this implementation's non-standard behaviour means it does matter, because there's a weirdly behaving address range you have to avoid.
This implementation has a very incompatible (and problematic) deviation from the privileged ISA spec.
It seems to act as if there were a hardcoded, stuck TLB entry that cannot be removed, so the whole system has to work around it. And to add insult to injury, it affects a virtual memory address range that happens to be used by most Linux programs.
IMHO ASUS is doing a disservice to RISC-V by releasing a board with such a chip. They should have used something else or skipped this generation.
Did I miss the pricing? I quite fancy the StarFive VisionFive 2 that was mentioned here a while back (assuming software improves a bit), but once again it's not clear what this new board offers in terms of RISC-V extensions/level of support. btw I noticed today that Pine64 are also planning a cheap RISC-V board, maybe next month.
The VisionFive 2 is currently the best RISC-V SBC option, at least until TH1520 becomes available (next month?), depending on the price.
I can vouch for the VF2 hardware - works well. Alas PCIe is oddly slow, but I'm hopeful that firmware will improve that.
EDIT: The Pine Star64 is (should be) the exact same SoC, so it should be basically the same as the VF2 (small differences like a PCIe edge connector instead of an M.2 slot, etc.).
People are getting around 250 MB/s from an SSD on the PCIe (M.2) on VisionFive 2. Ok, so you'd ideally like 400 MB/s from 1 lane, but it's still 10x better than SD card.
It's probably being limited because the RAM-to-RAM copy speed isn't much more than that! I get 475 MB/s memcpy() speed for 64 MB copies on my VisionFive 2. The much slower CPU on the AllWinner D1 manages 1100 MB/s on the same code. Somehow the SiFive-based SoCs have never been good on RAM speed -- the HiFive Unleashed and Unmatched were even worse than this.
Hopefully the Horse Creek SoC has some good Intel DDR IP in it.
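For anyone who wants to reproduce those numbers, a minimal sketch along these lines is what I'd use (the buffer size matches the 64 MB copies above; the iteration count and everything else are arbitrary choices, not the exact code behind the figures quoted):

    /* Minimal memcpy() bandwidth sketch: copies a 64 MB buffer repeatedly
     * and reports MB/s. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define SIZE (64UL * 1024 * 1024)   /* 64 MB, matching the copies above */
    #define ITERS 10

    int main(void)
    {
        char *src = malloc(SIZE), *dst = malloc(SIZE);
        if (!src || !dst) { perror("malloc"); return 1; }
        memset(src, 1, SIZE);   /* touch the pages so they're really allocated */
        memset(dst, 2, SIZE);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ITERS; i++)
            memcpy(dst, src, SIZE);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.0f MB/s\n", (SIZE / 1e6) * ITERS / secs);
        return 0;
    }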
I'm getting ~ 180 MB/s on a Samsung SM8?? which hits around 3 GB/s on a desktop. I'm using a 5.1V 3.5A supply so that ought not be the issue (but I'll try another PSU now).
Horse Creek would be nice - should it ever become available for purchase.
EDIT: Well, I'm shocked. With a far more beefy supply (12.3V/0.7A) I now hit 295+ MB/s, not great but so much better than before. Thanks for the hint.
Nobody called me on my nonsense so I will: the 5.1V 3.5A = 17.8W was the PSU _rating_, whereas the 12.1V 0.7A = 8.5W was actually _measured_. I'm not an EE and I don't understand the power section of that board, but it's something to look out for.
Thanks. I saw there is a Pine64 EU store now, so fingers crossed they will stock it, although working PCIe would also be very nice! I hadn't heard of the TH1520 until now; 2.5 GHz is going to give the Pi 4 a better run for its money.
It looks like they're going to be running the bare chip at 1.85 GHz stock. Will probably need to add a heatsink and maybe fan to get to the rated 2.5 GHz.
And, yes, with its OoO CPU cores it should match a Pi 4 at the same MHz, but do more MHz.
Thanks, I only scanned Europe. But anyway that is quite a premium over the VF2 at around $70? And I got the impression the Star64 will also be in that lower price bracket, but let's wait and see.
Amen to this. I wish the USB-IF would officially deprecate the entire mini and micro line, stating that they are not allowed to be used and will not be certified compliant in new hardware designs unless the new design is intended to be a physically identical drop-in replacement for an older design that used those ports.
There is no good excuse for these ports' continued use in new designs, just penny pinching nonsense.
Host-side ports can be full size A or C, device side ports can be full size B or C, anything else is just being cheap.
USB-IF should be forced to use PCs equipped only with Micro A and Mini A ports. And to connect their peripherals, they must first dig through a large bag of Micro/Mini B cables to find a single Micro/Mini A cable.
Although that might be so cruel that it violates the Geneva Convention.
> Or it was the only one they could get in volume. The last 3 years have been merry hell on supply chains.
This is Asus. They make a phone that has two USB-C ports, plus a variety of mobile accessories that all have one. They also make laptops, desktops, and motherboards that generally have 2-4 of them.
I'd be shocked if the sales of the entire Tinkerboard line added up to even the mobile products alone, much less the PC lines.
I see people complain about Micro-USB in various places, and am not quite sure what the problem is. You're connecting to some relatively small, low-power device, not powering a beefy laptop. Is it problematic to have too many cables in the drawer that fit?
It's an older standard, and if nothing else, the reversibility of type-c is worth the 10 cents.
And because one of the ports is USB OTG, one of the intended uses is the micro-B connector serving as the host connector, which is a misuse of that connector's originally intended function anyway.
Thanks, just seems like much ado about nothing, other than that last part.
I know I've got a wealth of devices that have the various formats, mostly micro.
Their functionality still seems fine, so the anguish that pops up from time to time seems a bit much.
And a lot of power (probably the same as or more than the device itself). Some space on the board too. While I think it would be nice to include on the board, it's surely better to leave it off and let people use an adapter for their use case.