BeagleV-Ahead open-source RISC-V single board computer (beagleboard.org)
200 points by rapnie 11 months ago | 128 comments



Very interesting for... other reasons.

I thought that BeagleBoard.org was more tied to TI chips than this; I didn't expect a non-TI chip to show up on one of their boards.

Historically, Beagle has aimed at SBCs that are "weaker" than Rasp. Pi but far more "open". I trust their branding and the company. I don't consider them fully open (like Linux4Sam / Microchip's stuff, which has open source U-Boot and other bootloaders immediately available for download). But open source is a "sliding scale", and Beagle has consistently been "more open" than most competitors. Don't let perfect be the enemy of good here; there will be far better drivers than Rasp. Pi (the biggest-name competitor, for sure).

BeagleBoards tend to be in the 1W to 5W region, while Rasp. Pis are consistently 5W+ (the most recent Rasp. Pi even ships with a 20W+ adapter, though it probably averages half that). The RP4 averages around 6W, so well above BeagleBoard stuff. Note that Rasp. Pi wins in overall compute-per-watt metrics, but also note that Xeon / EPYC are far more efficient in compute-per-watt. So... nobody should be using Rasp. Pi (or Beaglebones) for efficient compute. These are computers that "must" fit inside the 6W or 2W or whatever power envelope you need. (Ex: running off of 1x 18650 Li-ion cell, or a solar-battery setup trying to shrink the size of the power system)
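To make the battery example concrete, here's a quick back-of-the-envelope sketch (the 18650 figures are nominal assumptions, not measurements):

```python
# Rough runtime from a single 18650 Li-ion cell (assumed: 3000 mAh, 3.7 V nominal).
CELL_WH = 3.0 * 3.7  # amp-hours * nominal volts = ~11.1 Wh

for board, watts in [("BeagleBone-class (2 W)", 2.0), ("RPi 4-class (6 W)", 6.0)]:
    hours = CELL_WH / watts
    print(f"{board}: roughly {hours:.1f} hours per cell")
```

A board idling at 2 W runs nearly three times as long on the same cell as one idling at 6 W, which is the whole point of the power-envelope argument.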


The previous BeagleBoards also have a nice setup with a pair of PRUs (Programmable Realtime Units) that share memory with the main CPU. These MCU-like PRUs were leveraged in a lot of projects that needed low-latency external IO with precise timing, like audio processing, driving LED arrays, etc.

This RISC-V BeagleBoard doesn't have anything like that. There are some entities in the space working towards it though... lowRISC.org has a concept of "Minion Cores".


The AM3358 (aka: the Beaglebone Black/Green chip) looks like it is barely on the cusp of "doable" in KiCad, though probably with some significant effort. It's an older chip, but the fact that it's ready to be laid out by an advanced hobbyist is very intriguing to me.

That being said: the latest chips from TI have significant improvements. The PRUs were always wonky, and their programs hard to grok. The more recent AM625 keeps the PRU for backwards compatibility (and buffs it up a bit), but IMO the on-board 400 MHz realtime Cortex-M4F is the more "obvious" realtime processor to use. The quad-core Cortex-A53 should be plenty for most hobbyists as well.

Upgrading to DDR4 / LPDDR4 means tighter tolerances however, and the 441-pin (aka: 21x21 balls) AM625 probably needs 8 layers and maybe blind vias... while the older AM3358 (Beaglebone Black) can use DDR2 with much looser tolerances and only has 324 pins (aka 18x18). So 6 layers seems possible to me, maybe with a few pins disconnected.


Also, the main killer feature of both Beaglebones and Raspberry Pis is long-running mainline Linux support and hardware availability. I am not so convinced this SoC is going to be around for that long.


It has a better chance than almost any other SoC.


You typically pay a premium for Beagleboards for the IO features (though at a glance this particular board doesn't look particularly interesting)

If you want an open RPi equivalent, I'd look more at LibreComputer boards. Cheap and open


Well, we're paying a premium for Beagleboards because they're lower volume and that's pretty much it. Volume is king for prices in the computer world.

But... you get better IO, yes, but also, more importantly IMO, more open hardware designs. Ex: I'm pretty confident in being able to get an AM335x TI chip and (with enough effort in a PCB CAD tool) create my own Beaglebone Black/Green. (Note that OSHPark offers 6-layer boards, which should be good enough to route out impedance-controlled DDR to a BGA chip like the AM335x series... though it'd be tight.)

BeagleBone AI and BeaglePlay have upped the complexity, so I'm not sure if they'd fit on 6 layers, but they're open-enough designs that if I were to do a custom 8-layer or 10-layer run, it's probably doable.


If you want to make your own boards, I'd suggest looking at the Breadbee:

https://news.ycombinator.com/item?id=36470660

https://www.cnx-software.com/2020/04/14/breadbee-tiny-embedd...

It's outside my area of expertise... but I don't think making your own Beagleboard would be as easy as you say. At least I've not seen it done. Have you done that?


No, but I'm getting close to the point where I'll attempt it.

The OSD335x is the SiP version with integrated DDR RAM, removing the most difficult part of the PCB design (ie: impedance-matched / delay-matched DDR). It's more expensive than a raw AM3358 from TI, but it'd be my first step.

Then when I want to try the more difficult DDR routing, it's available, and the Beaglebone Black/Green serve as reference designs for me to study.


> LibreComputer boards. Cheap and open

True. Unfortunately they're impossible to purchase outside of Amazon, and they're not at any EU shop that supports PayPal.


Yeah, they weirdly don't even have a Taobao shop... so they can't be bought in China, as far as I can tell. It's a bit strange.


They have an Aliexpress store that appears to ship from China: https://www.aliexpress.com/store/1102551393


Right, but Aliexpress doesn't do delivery in China! :))

(hard to believe, but it's true)


>BeagleBoards tend to be in the 1W to 5W region, while Rasp. Pis consistently are 5W+ (with the most recent Rasp. Pi even having a 20W+ adapter but probably averages at half that). RP4 averaging around 6W, so well above BeagleBoard stuff.

Please look at JH7110 instead, as it has much higher efficiency and draws much less power.

Boards include the VisionFive 2, Star64, Pinetab-V as well as Milk-V Mars.


>I thought that Beagle Company was more tied to TI chips than this, I didn't expect a non-TI chip to come out of their boards.

I would say it's simpler than that: TI failed to release a RISC-V chip in a timely manner, whereas BeagleV (correctly) identified the direction the industry is moving towards.


There was a "strong" product, the BeagleBoard-X15. It came out before the Pi 4 and had better performance than a Pi 3B. However, it was on the expensive side at over $200.


Aren't there many ARM chips which are more efficient (compute per watt) and which also use much less power when idle than what the Raspberry Pi uses?


RP5 changes the specs again and I haven't bothered to run the math yet.

Broadcom / Rasp. Pi chips have historically been _VERY_ bad at idle compared to their peers, though. So I think it's safe to say that the RP5 will also be bad at idle (though I haven't read the spec sheets or seen any such tests yet).


Seems a good guess. https://www.tomshardware.com/reviews/raspberry-pi-T;

“The Raspberry Pi 5 is the hottest of all the Raspberry Pi boards we have ever used. At idle, without any added cooling, it sits at around 50.5 degrees Celsius and consumes around 2.7 Watts“

If that were classified as "electrical and electronic household and office equipment", they could not legally sell it in the EU (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%... is not yet in force, but its predecessor already limits standby power usage to under 0.5W).
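Annualized, the gap between those two figures is easy to see; a quick sketch using the numbers above:

```python
# Energy over a year of always-on idle vs. the 0.5 W standby cap.
HOURS_PER_YEAR = 24 * 365

pi5_idle_kwh = 2.7 * HOURS_PER_YEAR / 1000     # measured idle draw from the review
standby_cap_kwh = 0.5 * HOURS_PER_YEAR / 1000  # EU standby power limit

print(f"Pi 5 idle: {pi5_idle_kwh:.1f} kWh/year vs cap: {standby_cap_kwh:.1f} kWh/year")
```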


IANAL, but I think the right comparison for standby power usage would be the power usage in sleep, not in idle.


Yes, though I would say the beaglebone is also not stunning in this regard.



Thanks for pointing these out. I was not sure whether I should wait for Horse Creek or get the MilkV now, but with all these bugs I'd rather wait and see which one will be less buggy.


Yikes, that last one seems bad (although I don't think there is too much software making use of floating point exceptions).


A floating point hardware bug? Was this designed by the Intel Pentium team or what?


Sorry if it’s a dumb question but how do you verify open source hardware? With software I have the code and I can build and deploy my own version. But how do you do that with an open source CPU? I can read the code [0] but will the CPU as the physical object itself be the same? And the microcode running inside?

[0] https://github.com/T-head-Semi/openc910


I don't think that "open source cores" make sense outside of FPGA contexts. But FPGAs are incredibly closed and are poor performers for "just" CPU-emulation purposes. (FPGAs are best when you truly need custom hardware, and they are often paired with premade cores, like Xilinx's Zynq ARM-core-plus-FPGA system.)

So... yeah. We're all beholden to the factories that make these chips. At best, we can design systems that port between factories but unless you plan to build your own chip factory, I don't think there's much of a solution (or need) for open source CPUs.

I think there's a push for 180nm (ex: 2001-era) open source chip designs. But with many microprocessors available from 28nm or 40nm processes today (ex: 2012-era), I don't think the 180nm class chips stand a chance in any practical matter outside of weird space / radiation-hardened situations (180nm is better vs radiation apparently for reasons I don't fully understand).

------------

From "Beagleboard" perspective, their version of "open source" means freely available schematics and hardware reference designs, making it easy to build your own board and possibly even custom-boot your own versions of the Beagleboard.

At least... the previous paragraph is based on "reputation" as opposed to true analysis. I'd have to actually go through the documentation and think about PCB design carefully to really know.

For example, the "BeagleBone Black" was cloned by SeeedStudio and turned into the "BeagleBone Green", a ground-up redesign of the board with the same chips, proving that a second company could in fact take these hardware schematics and build a different product from them.


> I don't think the 180nm class chips stand a chance in any practical matter outside of weird space / radiation-hardened situations

If it was good enough for the fastest PC processors in 2000, and for the PS2 and GameCube, why wouldn't it be suitable for something like this?


Because chips supporting open-source software are available at the 28nm or 40nm level at much cheaper prices, higher performance, and lower power usage. I believe that 22nm is expected (in the long term) to become the cost-efficiency king, but it's not an economic reality yet.

But 28nm and 40nm are the cost-efficiency kings today. So what could a 180nm design ever offer the typical consumer outside of radiation hardening?

And similarly, 5nm or 3nm server chips (like Xeon or EPYC) will always be at the forefront of power/performance/cost; you can't beat the physics of shrinking transistors or the economics involved (though the most advanced nodes will need more and more volume to be cost-effective, as it's become harder to build on them).

There's almost no reason to ever pick an ancient 180nm design in a low-volume run when millions of 40nm or 28nm chips are being made, and/or billions of superior 5nm chips are being made.
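The intuition behind the cost argument is just the square of the linear shrink; a rough sketch:

```python
# Die area scales roughly with the square of the feature size, so the same
# logic at a smaller node needs far less silicon (first-order approximation;
# real designs don't shrink perfectly).
def area_shrink(old_nm: float, new_nm: float) -> float:
    return (old_nm / new_nm) ** 2

print(f"180nm -> 40nm: ~{area_shrink(180, 40):.0f}x less die area")
print(f"180nm -> 28nm: ~{area_shrink(180, 28):.0f}x less die area")
```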


Microcontrollers for low-end hobbyist use, household appliances, or industrial use don't necessarily need state-of-the-art nodes. Price and bulk availability matter. And high-end nodes actually have disadvantages in these areas as their supply is limited, as several car makers quite recently had to learn the hard way.

Also, it doesn't really make sense to use high-end nodes for low-end controllers because the smaller the chip gets, the higher the packaging cost as it gets more difficult to handle. Microcontroller designers already throw in tons of extra features because there is now too much space available.

Finally, it is easier to build up and certify a supply chain for an ancient node than for a high-end node as all the patents have expired by now and the process is much more rugged. In the current political climate, supply chain robustness and auditability might trump pure cost for some applications.


My understanding is that 200mm / 8-inch wafers on the older nodes are a literally dying technology. They're inefficient, they're costlier; they're just worse in all possible attributes.

180nm, and other similar nodes from the late 90s, run on 200mm wafers. And since area scales with the radius squared, the modern 300mm wafer contains more than double the mm^2 and therefore more than double the chip area.

On top of that, the 28nm process shrinks the transistors by orders of magnitude in area.

It's infeasible for 180nm to remain cost-efficient against 40nm, 28nm, or 22nm.

Top-of-the-line 5nm or 3nm is far more expensive. But a 10+ year old 40nm fab has most of its one-time costs already paid for, and overall a more cost-effective process thanks to the upgrade to 300mm wafers.
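The wafer-area claim is simple geometry, sketched in code:

```python
import math

# Usable area of a (circular) wafer, ignoring edge exclusion for simplicity.
def wafer_area_mm2(diameter_mm: float) -> float:
    radius = diameter_mm / 2
    return math.pi * radius ** 2

ratio = wafer_area_mm2(300) / wafer_area_mm2(200)
print(f"A 300mm wafer has {ratio:.2f}x the area of a 200mm wafer")  # 2.25x
```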


Thanks for the reply. I was not necessarily arguing for the ancient processes, but for the existence of tradeoffs between the process generations that don't instantly make the older processes obsolete. However, depending on how important having domestic chips is rated, seemingly obsolete processes might still have the advantage of lower setup cost.


The leading-edge nodes will have a substantial cost premium, but about a decade behind the leading edge is the cheapest per transistor, because when you go back further, the increase in transistor size makes chips less efficient to manufacture.


The old nodes are cheap because the investment has been fully paid off. There are only running costs and maintenance remaining. However, expanding productions of older nodes will be more expensive, especially if the equipment is not available in bulk anymore. Probably not as expensive as a new node though.


Right. Pre-EUV, the only reason older nodes were cheap is that you didn't have to build a new fab. Now though, EUV machines are expensive enough that it looks likely the pre-EUV nodes may remain cheaper pretty much indefinitely. 22/14nm is pretty power-efficient and a lot simpler to make.


"something like this" I think I would replace with "embedded applications"... too late to edit, of course.


It's less "how do you", and more "can you?". And according to the Novena open-hardware laptop guy, you can't: https://www.bunniestudios.com/blog/?cat=28

>Based on these experiences, I’ve concluded that open hardware is precisely as trustworthy as closed hardware. Which is to say, I have no inherent reason to trust either at all.


Thx for that link! Deserves its own HN post imho.

It lays out the issues quite well. But... there's several things people here are forgetting:

OSHW isn't just about chips, but also boards, case designs & possibly more. Those other things can be inspected easily.

OSHW is also about supply security. Vendor folds? Take existing design & have another supplier manufacture it.

US buyer doesn't trust Chinese vendor? Nothing stopping that buyer from taking design to say, a US or EU based manufacturer. This includes IC designs.

OSHW allows modification & thus exchange of parts. So any part considered not 'open enough' can be exchanged for a more acceptable one.

Not saying anything about the practicality of this. But with closed designs these options don't exist or are much, much more difficult.

And any vendor found to have tampered with an open design's manufacture, would see its reputation ruined (let's be honest, manufacturer would be the 'best' place to plant a backdoor, right?).


That's actually something we're working on right now in the RISC-V community. There are formal verifications that can be done at multiple hardware levels, but the community needs to ratify more in the way of standards.

After all... I assume you wouldn't be comfortable until you can run Qualcomm's verifier (or a libre, community-supported option) on TI's chips, or a similar model of 'trust but verify'.


You can't, but you could create a test bench and verify that the behavior is cycle-accurate against a simulated openc910. That's probably better than doing nothing.
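Conceptually, such a differential test bench boils down to comparing execution traces; a toy sketch (the trace format here is made up, not any real tool's output):

```python
# Toy differential check: does a trace captured from real silicon match a
# trace from a simulated openc910? Entries are (cycle, pc, retired_instruction).
def traces_match(hw_trace, sim_trace):
    if len(hw_trace) != len(sim_trace):
        return False
    return all(h == s for h, s in zip(hw_trace, sim_trace))

hw  = [(0, 0x8000_0000, "addi"), (1, 0x8000_0004, "lw")]
sim = [(0, 0x8000_0000, "addi"), (1, 0x8000_0004, "lw")]
print(traces_match(hw, sim))  # True
```

Any divergence (a different retired instruction, or the same instruction on a different cycle) flags a behavioral mismatch worth investigating.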

We know that openC910 isn't the exact same chip as the C910, as it doesn't include the draft vector extension.


There really is no open-source hardware, not below the centimeter level. You have to trust some company to actually make your chip. I guess you could get an electron microscope and look at every transistor, but that's probably not realistic, even for most companies.


You don’t. Open hardware is a positive for the company and almost entirely neutral for the consumer.

I guess you could say It’s a positive for us too, since you are not subsidizing the ARM royalties with every tech purchase you make. But I wouldn’t bet on this cost cutting being passed down to us.


> Open hardware is a positive for the company and almost entirely neutral for the consumer.

I'd say it's the other way around, actually. And possibly a negative for the company in the current market, since it also needs to worry about competition based on their sources. The same as with open source software.

It's entirely positive for the consumer, since it gives them the choice to choose between several manufacturers of the same design, or even manufacture the product themselves, if they have the capability. Open hardware is the end goal of the right to repair movement, in this sense.

Even if the consumer doesn't have the capability to verify that the product they're using was built from open sources, they still retain their rights to modify, build and distribute their own versions. This is what "open" means. Note that this is the case with OSS too. It would be very difficult to prove that a specific binary was built from a specific source, but there's usually a degree of trust that the binaries provided by the developers were built from the same source the user has access to. The developers can also provide a mechanism for this to be validated, via checksums, reproducible builds, etc. I imagine that a similar mechanism might exist for hardware as well.
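For the software half of that analogy, checksum verification is the simplest such mechanism; a minimal sketch:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Digest of an artifact, to compare against a published checksum."""
    return hashlib.sha256(data).hexdigest()

published = sha256_hex(b"release binary contents")  # what the developer publishes
local = sha256_hex(b"release binary contents")      # what you compute on your download
assert published == local, "download does not match the published checksum"
```

This only proves the bytes match what the developer published; tying the binary back to the source still requires reproducible builds, and the hardware equivalent is harder still.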


RISC-V isn't open hardware though, it's open architecture. Open architectures make it easier to design new chips. That isn't necessarily going to translate into cheaper or more easily repairable or more open-to-the-user products.


This isn't about RISC-V, but specifically about the BeagleV-Ahead board. They have an OSHW certification[1], and provide schematics and design documents[2].

Whether that will provide tangible benefits to the consumer is hard to say, as it will depend on how popular this design ends up being, among many other factors. Unlike with software, most end users don't have the capability to build their own versions, and have to rely on other manufacturers to do that. Still, this is surely a step in the right direction, and is more consumer-friendly than traditional closed architectures, platforms and products.

[1]: https://certification.oshwa.org/us002535.html

[2]: https://git.beagleboard.org/beaglev-ahead/beaglev-ahead


Since a CPU is the end-product, much like a binary compiled from open source project would be, open hardware is closer to "you can build it yourself" rather than "trust the already compiled version"


OpenC910 and XuanTie C910 are different cores. The latter has RVV 0.7.1 for example while the open-source version does not have it.


What is TI's angle here? (Asked naively by a software person.)

Is TI hoping to grab RISC-V developers/enthusiasts with the Beagleboard now (keep them from moving to Chinese RISC-V SBCs), and later migrate these users to a TI RISC-V SoC & Beagleboard when it's ready?

Is this a royalties negotiating tactic with Arm?

Odd negotiating tactic with US lawmakers (e.g., "If we're restricted too much in international engineering collaboration, we'll have to source a black box from China, and put it into everything")?

Symbolic deal with some entity in China?

Some Beagleboard business unit trying to increase its own profitability, not just a devboard to promote TI chip products?

Something else?


>> What is TI's angle here?

This isn't TI. Read the bottom of the page, it's a Michigan based nonprofit.


Is there an evolving relationship between TI and BeagleBoard.org?

> The BeagleBoard is a low-power open-source single-board computer produced by Texas Instruments in association with Digi-Key and Newark element14. The BeagleBoard was also designed with open source software development in mind, and as a way of demonstrating the Texas Instrument's OMAP3530 system-on-a-chip.[8]

https://en.wikipedia.org/wiki/BeagleBoard

That page includes the BeagleV-Ahead.


TI kinda sorta helped start BeagleBoard. The founder of BeagleBoard was a former TI sales or applications engineer. TI has always had a strong hand in BB hardware design because it’s effectively a reference design for them.


Similar to how the Raspberry Pi was developed by former Broadcom engineers and as such uses their CPUs.


In both cases, there's more to the explanation than that a person used to work for the company. That explanation could be significant and nonobvious. The explanation could also change over time.


I'm not sure I would buy a RISC-V board yet. In particular, it is still at the stage where knowledge about the platform has to be hard-coded rather than probed at runtime. For example, the location of MMIO devices like the CLINT (timer/interrupt controller) and physical memory attributes. The spec says something like "the platform must provide a method for software to discover this", but it doesn't specify how, and as far as I know the current method is to read the manual!

There is a placeholder CSR, mconfigptr, that I believe is intended to fix this by pointing to a DeviceTree blob (someone correct me if I'm wrong), but none of that is standardised yet.
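For reference, this is the kind of thing a DeviceTree describes: a hypothetical fragment locating a CLINT, so the OS can probe the address rather than hard-code it (the address below is illustrative, borrowed from QEMU's virt machine layout):

```dts
/* Hypothetical fragment: software reads the CLINT's MMIO base and size
   from 'reg' instead of baking platform knowledge into the kernel. */
clint@2000000 {
    compatible = "riscv,clint0";
    reg = <0x0 0x2000000 0x0 0x10000>;
    interrupts-extended = <&cpu0_intc 3>, <&cpu0_intc 7>;
};
```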

I work on RISC-V, and I think it's great (ARM should be very, very worried), but if you buy this you're on the bleeding edge; don't expect the software experience you'd get on x86 or even an RPi. Give it a few years and I expect the story will be very different.


>In particular it is still in the stage where knowledge about the platform has to be hard coded rather than probed at runtime.

Avoidance of fixed memory addresses is intentional and by design.

The boot and platform specifications cover how to deal with this.

In its simplest form, when your code runs, you get a pointer to a DTB in a specific register.

More complex variants include the RISC-V UEFI protocol (ratified) as well as the (work in progress spec) ACPI protocol.


I think the solution is simple: ACPI


I don't have any experience with ACPI, but it doesn't sound like a sane alternative to DeviceTree:

https://stackoverflow.com/a/56235466/265521

> ACPI is the unprofessional, hackish attempt of bios and board vendors to solve a small subset of the problems that DT already solved long ago.

> In November 2003, Linus Torvalds—author of the Linux kernel—described ACPI as "a complete design disaster in every way".

I'm not sure Linus has the best design sense but it doesn't sound like he is wrong here:

> Much of the firmware ACPI functionality is provided in bytecode of ACPI Machine Language (AML), a Turing-complete, domain-specific low-level language, stored in the ACPI tables.[7] To make use of the ACPI tables, the operating system must have an interpreter for the AML bytecode. A reference AML interpreter implementation is provided by the ACPI Component Architecture (ACPICA). At the BIOS development time, AML bytecode is compiled from the ASL (ACPI Source Language) code.

Wtf.


What would you use instead of bytecode? I think the problem with ACPI is the clunky implementation, not the basic ideas themselves.


I'm still learning about this stuff to be honest, but I don't really understand what problems would need bytecode in the first place? What is implemented in ACPI in bytecode?


CPU- and OS-independent methods: for example, to shut down, reboot, or change the video mode. It's much easier to run sandboxed bytecode on any platform.
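For flavor, this is roughly what such a method looks like in ASL before it's compiled to AML bytecode (a hypothetical example; `PWRD` is a made-up register field defined elsewhere in the table):

```asl
// Hypothetical ASL: a power-off control method. The OS's AML interpreter
// executes the compiled bytecode, so the same table works on any CPU.
Method (_OFF, 0, NotSerialized)
{
    Store (One, PWRD)  // write a vendor-defined power-down register field
}
```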


microUSB, microHDMI

Why? This came up in the thread about the new RPi. These connectors have no place in 2023.


I was half-expecting the text to continue "microUSB 3.0 is indeed weird..." before I realized what it is. There is practically no implementation ambiguity in microUSB 3.0. No USB-PD, USB-BC, or USB Billboard class, just plain old (if there was such a thing) USB 3.0. So it's a non-ideal but zero-risk option that comes at a tolerable cost.

MicroHDMI, idk.


Blame the SoC manufacturer for not supporting something newer?

If you want to support Thunderbolt or DP, you're looking at a lot of extra hardware (if the SoC even has a way of talking that quickly off-chip) or you need support directly on the SoC. If you don't support Thunderbolt or DP, what other video interface is there but some variant of HDMI?

As for the lack of USB-C interfaces on this or the RPi, think for a minute about why you don't see 7-port USB-C hubs, and why you see a maximum of 4 USB-C ports on laptops. USB-C, even before PD, might have to source 5V at 3A. That's 15W of power for every port, which is nearly the entire power spec for a 7-port USB 2.0 hub!
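The power math in that paragraph, written out as a quick sketch:

```python
# Why multi-port USB-C hubs are rare: each USB-C port may need to source 5 V at 3 A.
PORTS = 7
usb_c_budget = PORTS * 5 * 3.0  # 105 W if every port sources the full 15 W
usb2_budget = PORTS * 5 * 0.5   # 17.5 W: USB 2.0 allows 500 mA per port

print(f"7-port USB-C worst case: {usb_c_budget:.0f} W vs USB 2.0 hub: {usb2_budget:.1f} W")
```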

I will, however, concede the point about the really stupid micro-USB 3.0 connector for power. That's just incredibly dumb. This is a Chinese-made board (shrug); presumably LCSC was running a special. I do agree that any system like this should be designed with USB-C PD, since peripherals are going to suck down power even if the board itself can work with 5V/900mA.


Maybe to conserve board space?

I'm curious why you think these connectors have no place in 2023... They are fragile, sure, but I'm not sure they are entirely useless.


They're fragile, and they don't save any appreciable board space over USB-C. And at this price ($150), USB-C is easily justified.


Altmode DP has been challenging for board builders to integrate, as far as I've seen. I guess it is a complete replacement for micro USB, though.


If so, I'd much rather have miniDP. Better connector, better standard, it talks HDMI anyway if you want to, and you will need an adapter anyway.


Also, if you include miniDP and don't include HDMI, you don't have to pay the HDMI licensing fees.


Many of these SoCs have HDMI directly from the IC without needing additional hardware converters. For something like DP you need a converter IC, which adds expense and board space/complexity. Something like MIPI -> DP; I think TI makes some chipsets that do this.

The size is probably to save space on the board - looks like they try to keep to relatively the same size and mounting holes as previous boards.

I have an entire box of these mini-to-full-size converter cables because of boards like this. OK when messing around, but I can see the frustration if you're not used to it.


Also has a US $150 price tag.


How long before we have RISC V processors powering Linux PCs with no “Secure Enclave” that has root on your system? Also playing games on it would be nice.


You probably want a secure enclave, as otherwise you won't have a good way to secure keystores. With WebAuthn and passkeys becoming more widely adopted, not having a secure computation environment means you will be at risk of having your login data exfiltrated by random apps.


I’m more concerned about tech companies being more of a “superuser” on the machine I have paid for and own, than me.

Stallman saw this coming decades ago. And I firmly believe he will be proven right in due time.

https://www.gnu.org/philosophy/can-you-trust.en.html


> I’m more concerned about tech companies being more of a “superuser” on the machine I have paid for and own, than me.

Don't run Intel and AMD CPUs then!


It would be nice to have a RISC-V option that doesn't include these.


Many people saw it coming even before Stallman.


That would probably be fine by me if and only if:

a) the source code of the secure enclave is 100% open source
b) I can compile my own version of it
c) I can run my own version of it
d) I face no repercussions (i.e. services not working, DRM not working, ...) if I choose to do so

This is all fine and dandy for key storage purposes; you actually want all of these to guarantee that your keys are safe. But modern enclaves are primarily used for DRM, and this just doesn't work if I can just patch a way into my enclave to get the key if I really want to.

So, I'd much rather have a system with no enclave which I can attach a HSM to than a secure "trust me bro" enclave.

DRM was the original sin of computing, and nobody can convince me otherwise.


Does the secure enclave need to be built into the main CPU though? A key store on a USB stick or on a TPM will never allow your keys to be exfiltrated, yet it's not part of the CPU, and it's even removable.


Such devices are called FIDO keys. But they work only if the service you're accessing also supports it. I don't even know whether there is consumer hardware that supports an external TPM for boot image verification and hard drive decryption.

A plain USB stick is not a secure place for a keystore as a compromised kernel could trivially copy it and send it somewhere else for cracking.


> Such devices are called FIDO keys. But they work only if the service you're accessing also supports it.

That’s not quite true. The (web) service I’m accessing doesn’t communicate with my FIDO keys — there’s my browser in the middle. The service has no way to know whether my browser is talking with a hardware token or emulating one, and it is not privy to the details of how my browser communicates with my token.

If my browser supports FIDO on the network end, and my hardware token on the other end, it works. Now I’m guessing right now only relatively mainstream stuff like Yubikeys are supported out of the box, but support for say, the TKey (https://tillitis.se/products/tkey/), is likely only a browser extension away.


> The service has no way to know whether my browser is talking with a hardware token or emulating one, and it is not privy to the details of how my browser communicates with my token.

The service doesn't necessarily have to know that for the scheme to work. If the user is fine with the browser keeping the keys, then so be it. Browsers have been featuring password managers for a while now, and people happily use them because they have a convenient user experience.

However, if the user wants to use a hardware token, all the browser has to do is be the middle man between service and token. The actual protocol is MITM-proof. Unless you assume your browser is compromised and will screw with your data and your account as soon as you log in. But that's a problem different from user authentication :)

These features are actually nothing new - browsers have supported client certificates and hardware security modules for ages. The features are not in the spotlight and have a horrible user experience though.


The server can trust that once the user is registered, they’ll be able to detect any key change. After registration, MitM is as you say not possible. And I agree that if the private key came from the hardware token itself, switching back to a password manager without telling the server is impossible, because we’d have to change our private key.

First though, the user must register their key. My claim here is that without a PKI (the sister thread speaks of what I think of as attestations) the server has no way to tell where that new key comes from. Could be a hardware token, could be derived from `/dev/urandom`.

This is where it gets interesting: we could generate the user’s key outside the hardware token and copy it somewhere safe¹. The hardware token would then encrypt that key, and we’d keep that encrypted blob somewhere convenient (we don’t care if the blob leaks). Before the token does its end of the protocol, it must first decrypt the blob and extract the key (for internal use only). If we lose the token, we can switch back to a password manager (or set up a new hardware token) by retrieving the original key from its safe location. Since we didn’t change the key, the server doesn’t have to know.

¹ The definition of "somewhere safe" is left as an exercise for Bruce Schneier. Me, I’ll just wave my hands.
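The wrap-and-backup flow above can be sketched in a few lines. This is a toy model (an HMAC-derived XOR pad stands in for the AEAD a real token would use, and all the names are made up for illustration):

```python
# Toy sketch of the "token wraps an externally generated key" flow.
# NOT real crypto: a real token would use an AEAD such as AES-GCM;
# the HMAC-XOR pad here only illustrates the data flow.
import hashlib
import hmac
import os

def wrap(token_secret: bytes, nonce: bytes, key: bytes) -> bytes:
    # Derive a one-use pad from the token's internal secret and XOR it in.
    pad = hmac.new(token_secret, nonce, hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(key, pad))

unwrap = wrap  # XOR is its own inverse

token_secret = os.urandom(32)  # never leaves the hardware token
user_key = os.urandom(32)      # generated outside, backed up "somewhere safe"

nonce = os.urandom(32)
blob = wrap(token_secret, nonce, user_key)  # keep anywhere convenient

# Before each use, the token unwraps the blob internally.
assert unwrap(token_secret, nonce, blob) == user_key
```

Losing the token only loses the ability to unwrap the blob; the backed-up `user_key` itself still works, so the server never sees a key change.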


You're completely right about everything you wrote. But this is only a problem if the browser is assumed to be malicious. In this case, remote attestation can prove that we are indeed talking with a TPM.

However, if the browser is assumed to be malicious, then authenticating the TPM is pointless. As soon as the user establishes a session via that browser, the user account would be compromised.


You're correct, a malicious browser can wreak havoc in the user's account. The advantage of a hardware token is that it can limit the damage: if login and important operations require the hardware token, we can make sure that a compromised browser cannot exfiltrate the user's long term secrets, and cannot permanently hijack the account. Done well enough, the account would only temporarily be compromised (which I reckon is still bad), and the user can easily reclaim control by turning off their computer and logging in from another.

As for why I care about compromised browsers, well… I hear malware is still a thing. I'm relatively safe, but I'm one bad vulnerability or bad decision away from letting a Trojan in. So I quite like the idea of protecting my most important long term secret with something that's immune to that. Maybe I'll even get there.

As for the service, most of the time their own stakes are pretty low. They ought to offer good security options, but I'm not sure it's their place to mandate stuff like 2FA.


> The service has no way to know whether my browser is talking with a hardware token or emulating one

It does. It can request that the key do attestation, which involves providing a certificate that proves who the manufacturer is.


Such attestations are evil and I want nothing to do with them.

I mean, I’d be okay that if I’m working for some company, I have to use the company issued hardware token that can deliver a company issued attestation that the company servers can then check. In some sense, the company is the user here, and the fact that employees have no say in this matter is not a big deal.

For individual however, I believe it is important that the user be in control. If they don’t want (or can’t afford) to buy a hardware token emulation should be an option. And if they prefer the hardware token they should be able to buy it from any company.

Picture how anti-competitive it would be if, to use AWS, you had to use a security token issued by Yubico (or a list of approved companies): how does a small non-approved company like Tillitis enter the market? They have to ask every relevant cloud provider to add them to their list? This is both impractical and unfair.

An alternative that wouldn’t be anti-competitive is for AWS to mandate an Amazon-provided key to use their services. And that key must not be usable for anything else. Note, however, the e-waste and impracticality if every cloud provider did this. It’s much better to let users use one hardware token for all services.

The worst thing is, I’m pretty sure companies will try and mandate such attestations, they will say it is to "protect the user", while in fact it will be yet another tool in their lock-in toolbox. As I said, it’s evil and I want nothing to do with it.


I think the requirement here is that the owner of the user account needs to be able to register their own attestation keys. The owner of the account may be an employer or an end user.

It must not be a hardware manufacturer.


Yes, that I can support.


Note, this makes the system more secure, because the manufacturer is no longer a single point of failure, and a compromised key can be rotated by the account owner.


As long as the system is fully auditable and open source, I’d be happy. Having the keys be external is a big plus, assuming that is fully auditable as well. Having no “management engine” is a big plus too.


More generally, these are called smart cards, and can come in the form factor of a USB stick (not a mass-storage USB stick).


The parent probably means Intel ME, not TPM. The latter can be based on FLOSS.


The ME is simply a backdoor so that enterprise users can centrally manage their fleet. There's nothing really secure about it. It is probably still based on Minix and presumably full of security-relevant bugs.


We have a secure computing environment, it's called process isolation and it's provided by the kernel.


The isolation provided by the kernel is strictly more vulnerable than a TPM chip that can only be interacted with via a communication protocol. Using the TPM, the hardware bootloader can verify that it boots only authorized, untainted kernel images. Without it, a chain of exploits can be used to taint the kernel boot image and thus effectively take over the machine. Even if an exploit chain succeeds in gaining root access, it's gone the next time the machine reboots.

The crucial problem is that the IT industry is more concerned about using it to enforce DRM instead of educating users so they can use it to retain ownership of their services.
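The verified/measured-boot idea can be illustrated with a hash chain. This is a simplified model, not the exact TPM PCR algorithm (register sizes, hash choice, and measurement contents are all stand-ins):

```python
# Simplified model of a TPM-style PCR: each boot stage "extends" the
# register with a hash of the next stage, so tampering anywhere in the
# chain changes the final value.
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # New PCR value = H(old PCR || H(measurement))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = b"\x00" * 32  # PCRs start zeroed at reset
for stage in (b"firmware", b"bootloader", b"kernel image"):
    pcr = extend(pcr, stage)

tampered = b"\x00" * 32
for stage in (b"firmware", b"bootloader", b"patched kernel image"):
    tampered = extend(tampered, stage)

assert pcr != tampered  # a tainted kernel image is detectable at boot
```

Because `extend` is one-way and order-dependent, software running later cannot rewind the register to hide an earlier tampered stage.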


So what's preventing a chain of exploits from just using the TPM over the communication protocol to do whatever it needs that requires said TPM to sign/attest to something?


For the same reason a website can't ask the browser to simply install another root certificate into its truststore: a browser simply doesn't offer an interface to do that.

The TPM itself does very little. It is simply used by the UEFI to verify that the digital signature of the OS image is valid, similarly to how a browser validates a server certificate using its own truststore.

A possible attack vector to compromise that functionality would be to tamper with the UEFI. Since it is firmware, the operating system simply doesn't have the capability to do so. Even when doing firmware updates, the OS must ask the UEFI nicely to apply a new firmware image, which is similarly verified using a digital signature.

All the above assume that there are no backdoors that allow an upper layer to compromise a lower layer.


That doesn't make much sense. You don't need any private material to verify signatures when using asymmetric cryptography. How's TPM useful at all if all it does is verify signatures?


The TPM has multiple functions, and for verifying operating system images indeed no private key is required. For that to work, it is merely required that the OS doesn't have write access to the TPM.

Private key material is required for remote attestation, which makes it possible to prove certain things to an external party, for example the exact identity of the TPM. This feature is much more questionable.
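The point that verification needs no private material can be shown with textbook RSA and toy numbers (illustration only; these values are obviously far too small to be secure):

```python
# Textbook RSA with tiny numbers: signing needs the private exponent d,
# but verifying needs only the public pair (n, e).
import hashlib

p, q = 61, 53
n = p * q     # 3233, public modulus
e = 17        # public exponent
d = 2753      # private exponent: e*d == 1 (mod lcm(p-1, q-1))

msg = b"kernel image"
h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

sig = pow(h, d, n)          # done once by whoever holds d
assert pow(sig, e, n) == h  # anyone can check with (n, e) alone
```

This is exactly the asymmetry the UEFI relies on: it only needs to store the public half to verify an OS image's signature.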


If you can assume that the TPM has no vulnerability, I can assume that the kernel has no exploit. QED


> The isolation provided by the kernel is strictly more vulnerable than a TPM chip that can only be interacted with via a communication protocol.

That's a useful property, but you have to weigh it against a kernel that can be audited and patched.


Whether the kernel can be audited is completely orthogonal to verifying it at boot time, since there are still ways to retrieve it even with active hardware encryption.

The utility of patching your own kernel has to be carefully weighed against the security impact of doing so. If an attacker can take over the user account that compiles the kernel, they can make you install a compromised kernel. For example, by running a file system monitor that swaps the kernel image for a compromised one as soon as it has been built.


Specs:

Alibaba T-Head TH1520 SoC

    2GHz quad-core RISC-V 64GC Xuantie C910
        3-issue 8-execution superscalar with out-of-order issue/completion/retirement
        64KB I and 64KB D L1 per core
        1MB shared L2 cache
    4TOPS@INT8 neural processing unit (NPU) @ 1GHz
    50GFLOPS, 3Mpixel/s Imagination BXM-4-64 graphics processing unit (GPU)
    2x image signal processor (ISP)
    Video codecs
        H.265/H.264 @ 4Kp75 decode
        H.265/H.264 @ 4Kp40 encode
4GB RAM, 16GB eMMC, PHY: AP6203BM for WiFi/Bluetooth, Realtek RTL8211F-VD-CG Gigabit Ethernet, microSD, microUSB, micro HDMI.


Is this:

50GFLOPS, 3Mpixel/s Imagination BXM-4-64 graphics processing unit (GPU)

A typo on the fillrate? It has to be 3 billion pixels/s, right? At 3 million you can't fill 640x480 VGA at 10 fps, even ...
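That arithmetic checks out (a quick calculator check, nothing board-specific):

```python
# 640x480 at 10 fps already exceeds 3 Mpixel/s, so the quoted
# 3 Mpixel/s figure can't be right as stated.
vga_10fps = 640 * 480 * 10
print(vga_10fps)  # 3072000
assert vga_10fps > 3_000_000
```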


This [1] would suggest that a single BXM-4-64 integrated on an SoC can get up to about 4 gigapixels/s assuming the memory interface can manage it.

I found another article [2] that talks about both the BeagleV and another board using the same chip, the LicheePi 4A [3], which says "50.7GFLOPS, Fill 3168M pixels/s". As the designs are probably very similar, it almost certainly is a typo.

[1] https://www.anandtech.com/show/16155/imagination-announces-b...

[2] https://riscv.org/blog/2023/07/the-release-of-the-first-two-...

[3] https://sipeed.com/licheepi4a


Phew, that surely makes more sense. Great links too, thanks a lot!


The XuanTie C910 is a shockingly slow RISC-V CPU core, don't expect miracles out of this one... (performs _far_ below the advertised A72-class numbers in practice)


Sooo... How open is this board?

The core designs are XuanTie, which is open / royalty free. What about the GPU?

Are any schematics or data sheets posted for the board? Design files?


Afaik TH1520 SoC includes an Imagination BXM-4-64 gpu:

https://www.imaginationtech.com/products/gpu/img-b-series-gp...

They appear open source friendly, but both this gpu & SoC being fairly recent, we'll have to wait & see how this goes.

GPU design itself is closed, like for practically any other (modern) gpu.

No doubt this board will have many applications. With this powerful cpu+gpu, desktop replacement could be among those. But lack of regular USB connectors (or external hub required) is a big drawback.

Also I prefer RPi style gpio header. But that's just me.


> Are any schematics or data sheets posted for the board? Design files?

Agree. And some VHDL / Verilog while we're at it.


Here you go: https://github.com/T-head-Semi/openc910

(This doesn't include the draft vector extension present in the C910.)


Hmm but this is just the CPU core. Not the whole SoC or board...


$248AU per unit…. That is insanely expensive!


Who is the target audience for this?

It's progressing fast which is great but they're not yet competitive vs some random ARM board. So who is buying these in this sorta "catching up" phase?


It's sad that TI never pressed their advantage with their chip, and never offered anything to serve as an upgrade; so much so that now the BB form factor remains but the TI chips are gone...


The TI-based BeaglePlay Connect (also released a few months ago) seems like a superior buy actually.

BeagleV-Ahead is interesting, but I think most people who are doing hobby embedded projects would prefer to use the TI-based BeaglePlay.

BeaglePlay supports single-pair Ethernet for example (I forget which version though... I think the "not-CANbus replacement" one but I dunno for sure). As well as LoRA. And at $99 it's cheaper too. And they've also got BeagleConnect-Freedom boards to easily integrate that LoRA radio.


Automotive grade ethernet iirc is just using ethernet for either the same or a new fully deterministic network standard.


SPE is a newish (2018?? standardized) 2-wire power+data twisted pair for 1km wire lengths at 10 Mbit.

It's expensive to play with right now. But the specs are cool. 1km lengths is nuts, and a PoE-like feature on a singular twisted pair minimizes the wiring requirements on industrial scale plants.

Hobbyists probably should stick to PoE honestly. But playing with new protocols is still fun.


LoRa for radio, LoRA for LLM refinement.


TI took several steps back by ending the OMAP line, but they're still designing chips even though they're not quite on the bleeding edge anymore.

The PRUs are still there for now, but not very well supported. (You do get multiple Cortex-R's.) Once the RP1 is fully documented and exploited, the Pi 5 will probably become even more popular for hobbyist IO, but that's in a different market really.

The Jacinto7 line is still pretty popular in some circles and there are some boards around the series. It's a bit of a confusing line though since the same dies are offered in multiple product lines targeted for different markets.

Beaglebone AI64 (TI TDA4VM, 2xA72, 4GB, NVMe, $185) : https://www.beagleboard.org/boards/beaglebone-ai-64

TI SK-AM69 (AM69 - same die as TDA4VH, $400, 8xA72, 32GB RAM, PCI-e, NVMe): https://www.ti.com/tool/SK-AM69

If the SK-AM69 supported transformer models (i.e. llama.cpp) in its NPU, it would probably be perpetually out of stock.


It’s always great to see a new riscv offering. I’ve recently started to take an interest in hardware, I’ve always been a high level software person. I, just yesterday actually, purchased a VisionFive2 risc sbc.


The best RISC-V chips are from China based companies.


Thank you for sending me down the rabbithole of RISC :)


All was fine until I clicked the "buy" link and saw the price. Sorry, but I could get way more compute power for this price.


Is it open source RISC-V?


To me it does not matter. I am buying Teensy like boards for small scale production and said production is out of the question with this kind of price. For my own use - as already said I'll get something much more powerful for the same money.


>I am buying Teensy like boards for small scale production

This board is on a much bigger scale, both in compute power and in size. It is a SoC mounted on a SBC, not a MCU devboard.

You'd be better off looking elsewhere, like ESP32-C series or Milk-V Duo.

This list[0] might prove helpful.

0. https://codeberg.org/20-100/Awesome_RISC-V/src/branch/master...


micro-USB 3.0? Why?



