This seems extremely similar to https://github.com/jedisct1/libhydrogen -- X25519 and Gimli. (well, and the name...). It also appears to pull from its test suite and aim for compatibility in some ways.
Frank Denis also wrote libsodium, so I guess it is a pun by him? I don't think any lib{chemical element} was popularised before that, and there are a few now, but I could be wrong.
Yes, libhydrogen is a pun, as it is lighter than libsodium.
The first version was a rewrite of libsodium that was small and contained in a single C file. It also introduced XChaCha20 for the first time. Then the Gimli paper was published, and libhydrogen was rewritten to take advantage of it.
Smaller than libhydrogen, there's now charm: https://github.com/jedisct1/charm , also a pun. The plan was to eventually add bottom (platform abstraction layer), strange (asymmetric cryptography), and top (high-level APIs) to build a modular, component-based library.
libsodium itself is a fork of NaCl[0], an acronym for Networking and Cryptography Library, but also the chemical formula of table salt (sodium chloride; the Latin name for sodium is natrium, hence Na).
Wondering if the authors have heard of embedded disco[1], which uses X25519 and STROBE but with the SHA-3 permutation instead of Gimli (and additionally uses Noise for encrypting and authenticating communications).
Finally, Hamburg's "attack" will not be feasible in the foreseeable future, even with quantum computers. Even if the "attack" were extended to the full 24 rounds, it would not contradict any security claims made in the Gimli paper.
Daniel J. Bernstein (djb) is a wizard, but use Gimli at your own risk.
Judging from the statement (I haven't the cryptographic kung fu to distil the paper myself), the attack seems to be more an exploration of how vulnerable the (relatively new) approach of building everything out of a single crypto core can be when used badly, rather than something applicable to the usage that's actually specified. Or is that still concerning? (Plus it does seem worse than brute force from the numbers given, though I can't judge whether that makes it uninteresting in general, either.)
Yes, you clearly don't understand it. That's ok, but you probably shouldn't confuse the issue.
Basically the attack shows that Gimli's speed and simplicity introduce exploitable flaws and reduce its security:
> Bernstein et al. have proposed a new permutation, Gimli, which aims to provide simple and performant implementations on a wide variety of platforms. One of the tricks used to make Gimli performant is that it processes data mostly in 96-bit columns, only occasionally swapping 32-bit words between them. Here we show that this trick is dangerous by presenting a distinguisher for reduced-round Gimli.
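For context, that column structure is easy to see in the reference permutation from the Gimli paper (a lightly commented copy of the published reference code, assuming I've transcribed it correctly): the state is twelve 32-bit words viewed as 3 rows by 4 columns, the SP-box mixes each 96-bit column independently, and words only cross between columns in the occasional swap steps.

    #include <stdint.h>

    static uint32_t rotate(uint32_t x, int bits)
    {
        if (bits == 0) return x;
        return (x << bits) | (x >> (32 - bits));
    }

    void gimli(uint32_t *state) /* state[12]: 3 rows x 4 columns */
    {
        for (int round = 24; round > 0; --round) {
            /* SP-box: each 96-bit column is mixed on its own */
            for (int column = 0; column < 4; ++column) {
                uint32_t x = rotate(state[    column], 24);
                uint32_t y = rotate(state[4 + column],  9);
                uint32_t z =        state[8 + column];
                state[8 + column] = x ^ (z << 1) ^ ((y & z) << 2);
                state[4 + column] = y ^ x        ^ ((x | z) << 1);
                state[    column] = z ^ y        ^ ((x & y) << 3);
            }
            if ((round & 3) == 0) { /* small swap, every 4th round */
                uint32_t t;
                t = state[0]; state[0] = state[1]; state[1] = t;
                t = state[2]; state[2] = state[3]; state[3] = t;
            }
            if ((round & 3) == 2) { /* big swap, offset by 2 rounds */
                uint32_t t;
                t = state[0]; state[0] = state[2]; state[2] = t;
                t = state[1]; state[1] = state[3]; state[3] = t;
            }
            if ((round & 3) == 0) /* round constant */
                state[0] ^= 0x9e377900 | (uint32_t)round;
        }
    }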
Yet the attack, as clearly stated at your link, requires far more computing resources and time than a standard brute-force attack.
So I can only agree with DJB that the attack, in its present form, is completely useless.
At most, it can be argued that maybe someone will find a way to use the ideas from your link to conceive a new attack that is much more efficient.
I do not find this more convincing than the threat that someone will find an efficient attack based on completely different ideas.
Any more recent cryptographic algorithm is riskier than older algorithms, because it is less well understood.
However, Gimli is intended for slow microcontrollers, where the encrypted data cannot be very valuable; otherwise one would use a slightly more expensive CPU like a Cortex-A55 (a few dollars instead of less than a dollar for an MCU/MPU package) with standard cryptographic libraries.
So the damage done by an attacker decrypting the MCU communication cannot be great, and therefore it is an acceptable risk to use a less trusted algorithm if that greatly reduces the hardware cost.
So the attack must be useless, but if it was not it wouldn't matter because the software doesn't have to be secure to begin with. That's not a great mindset through which to view cryptography.
You are putting words in my mouth that I have not said, so I will state it more clearly:
1. That attack is useless.
2. Nevertheless, Gimli is relatively new and it is also designed for minimum cost, not for maximum security, so there is a risk that someone could discover a real attack, a risk that is greater than for older algorithms like AES or ChaCha.
3. There exists no practical 100% secure form of cryptography. Any choice of cryptographic algorithms is a compromise between the computational cost of the operations done to protect data (e.g. encryption/decryption/signing/verifying) and the computational cost for an attacker who tries to decrypt or forge the protected data.
4. The compromise must be chosen for each application depending on the implications of a successful attack. Some data is so important that it has to remain secret even 10 or 20 years in the future; other data is ephemeral, and it does not matter if an attacker succeeds in decrypting it a week later.
The correct mindset in cryptography, like in any other domain, is to choose the right tool for the job.
If you want to use a $0.50 microcontroller, then you must use simpler cryptographic algorithms that can achieve acceptable performance on such low-cost hardware. If you want to use algorithms that are harder to break, then you must accept paying $5.00 for a more powerful device (at the latter price any decent device has hardware implementations of standard algorithms like AES and SHA-256, so you would have no reason to use anything less secure).
You can perform very strong cryptography on a 70-cent microcontroller. While that is more than 50 cents, the considerations are quite simply not purely about cost (and certainly not to the extent that you quote). Given your explanation, I do not see how my original interpretation differs from what you actually meant; given that, I must question your reasoning about absolute cost-effectiveness and how much you need to sacrifice for it in practice.
I do not disagree with the premise that sometimes you must make tradeoffs based on cost, but I do disagree with the premise that a platform you yourself say is not so good should be used because doing otherwise would (potentially) cost a lot of money. If we get to chips that cost a few cents apiece and somehow still need encryption, perhaps you are correct; then we can be at peace with encryption that only defeats casual snooping. In all other cases, this seems like a poor tradeoff.
I should not be allowed into the same room as crypto development, and have certainly never tried to "attack" a crypto algorithm.
Still, reading that, against 22.5 (of 24) rounds of the Gimli computation, the attack is claimed to need 2^129 bits of memory. That is 77,371,252,455,336,267,181,195,264 TB if my math is right, which does seem to gently push that "attack" into a rather theoretical plane?
Not sure what I'm missing, from your tone I would expect a smoking hole and this doesn't seem to be that.
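(For what it's worth, the arithmetic checks out, taking 1 TB = 2^40 bytes:

    2^129 bits = 2^126 bytes
    2^126 bytes / 2^40 bytes per TB = 2^86 TB
    2^86 = 77,371,252,455,336,267,181,195,264

i.e. roughly 7.7 x 10^25 TB, far outside anything physically realizable.)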
No, it's an Ed25519 library _and_ a Gimli library. Gimli can be used to build a wide range of functions, including MACs, hashes, and stream ciphers, just like SHAKE.
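As an illustration, here is a minimal sponge-style hash over the gimli() permutation quoted earlier. This is only a sketch, not the official Gimli-Hash: the padding is simplified (outputs won't match the NIST submission), the sponge_hash name is made up, and the byte view of the state assumes a little-endian target.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    void gimli(uint32_t *state); /* the 24-round permutation */

    #define RATE 16 /* bytes absorbed/squeezed per permutation call */

    void sponge_hash(uint8_t out[32], const uint8_t *in, size_t len)
    {
        uint32_t state[12] = {0};
        uint8_t *bytes = (uint8_t *)state; /* byte view of the state */

        /* absorb: XOR input into the rate, then permute */
        while (len >= RATE) {
            for (int i = 0; i < RATE; ++i) bytes[i] ^= in[i];
            gimli(state);
            in += RATE;
            len -= RATE;
        }
        for (size_t i = 0; i < len; ++i) bytes[i] ^= in[i];
        bytes[len] ^= 0x1f;      /* simplified padding */
        bytes[RATE - 1] ^= 0x80;
        gimli(state);

        /* squeeze: read the rate, permute, read again */
        memcpy(out, bytes, RATE);
        gimli(state);
        memcpy(out + RATE, bytes, RATE);
    }

A MAC falls out the same way (absorb a key before the message), and a stream cipher by squeezing indefinitely; the capacity (the state bytes never exposed directly) is what carries the security.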
Maybe a little pedantic, but per the website liblithium doesn't implement Ed25519. It uses a signature scheme based on X25519, which is usually used for key exchange. From the perspective of interoperability, the liblithium scheme is nearly as different from Ed25519 as it is from ECDSA P-256, notwithstanding that you can technically convert Ed25519 keys to X25519 keys.
How do you convert from Ed25519 to X25519 when certain bits have been masked off the MSB range? Do you mean X25519 -> Ed25519 or am I misunderstanding?
AFAIU, you can go in both directions, though in one of the directions (I think X25519 -> Ed25519) you have to discover or choose one among equivalent keys.
I've implemented some ciphers, and many more security protocols, but I won't pretend to understand the math well enough to directly address your question. But I believe schemes sharing keys between Ed25519 and X25519 generally rest on this paper: https://eprint.iacr.org/2021/509.pdf. That paper describes the process in both directions, IIUC. See page 4,
> In this case the recipient translates the public key from edwards25519 to an X25519 public key using the map from [17]
and page 6,
> We will retrieve the two candidates for the v-coordinate from the curve equation and choose one of them uniformly at random. We then change coordinates to edwards25519 using the map in [17, 4.1]. This change of coordinates preserves addition on the curve and the basepoint of curve25519 is mapped to the basepoint on edwards25519.
Maybe there's some nuance wrt converting public vs. private key components I'm ignoring. But libsodium provides routines for converting both public and private Ed25519 keys to X25519 independently--i.e., it doesn't require the private Ed25519 key to convert the public key. Because EdDSA is more commonly and widely deployed than X25519/X448 key-exchange (or signing) schemes, Ed25519 -> X25519 is typically the direction you care about, permitting you to preserve or leverage existing public- and private-key management infrastructure.
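Concretely, with libsodium the two conversions are independent calls (a minimal sketch):

    #include <sodium.h>

    int main(void)
    {
        if (sodium_init() < 0) return 1;

        unsigned char ed_pk[crypto_sign_PUBLICKEYBYTES]; /* 32 bytes */
        unsigned char ed_sk[crypto_sign_SECRETKEYBYTES]; /* 64 bytes */
        crypto_sign_keypair(ed_pk, ed_sk);

        unsigned char x_pk[crypto_scalarmult_curve25519_BYTES];
        unsigned char x_sk[crypto_scalarmult_curve25519_BYTES];

        /* Converting the public key needs only the Ed25519 public
           key, not the secret key... */
        if (crypto_sign_ed25519_pk_to_curve25519(x_pk, ed_pk) != 0)
            return 1;

        /* ...and the secret key is converted by a separate call. */
        if (crypto_sign_ed25519_sk_to_curve25519(x_sk, ed_sk) != 0)
            return 1;

        return 0;
    }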
For another, I used them for an embedded project for 5 years and they were the most complete, competent, and up-to-date library of the open-source ones we surveyed (and had a TON of optimizations for TI, STM, Arm and Cavium MCUs).
Per my understanding, FIPS certification only guarantees that the crypto implementation does what it is supposed to do and nothing else. If the implementation has language-specific vulnerabilities (like buffer overflows or memory-management issues), FIPS is not designed to catch them.
Doesn't C have years and years of repeated security flaws due to lack of memory safety? What's the motivation to continue releasing greenfield projects in C?
Sounds like this is intended to run on targets rustc would not support, given the claim that it "does not assume 8-bit addressability".
Also, while I'm fully on board with "Rust as the default userspace systems language," the machine code I've gotten out of trying #![no_std] Rust for part of a bare-metal project was heinously bloated. This is without even using any non-libcore dependencies! Something like three-quarters of the machine instructions looked like they were related to building error messages for panics, despite the fact that there shouldn't have been any reachable panics (and the code was small and simple enough that I expected LLVM to be able to reason about this without any MIR-level optimizations).
Maybe this would be better on a project that could use rustc as its linker, but based on this, I wouldn't necessarily recommend that a bare-metal library intended to be linked to C and hand-written assembly be written in Rust versus, say, verified with Frama-C or other C-specific tools.
To be fair, the AVR and STM8 code I've gotten out of GCC and SDCC when feeding them normal (non-8-bitter-adapted) C also sucked (though not to the point of building strings), because the default promotion to int required by the standard really hurts when the minimum required 16-bit int does not fit into a register.
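To illustrate the promotion issue: even pure byte arithmetic is formally computed at (at least) 16-bit int width, so an 8-bit backend has to prove the high byte is dead before it can emit single-register code.

    #include <stdint.h>

    uint8_t add_bytes(uint8_t a, uint8_t b)
    {
        /* C's integer promotions widen a and b to int (>= 16 bits)
           before the addition; the return conversion then truncates
           the result back to 8 bits. An 8-bit compiler must optimize
           the dead high byte away, or it emits 16-bit arithmetic. */
        return (uint8_t)(a + b);
    }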
I'm latching onto a non-main argument to be pedantic, but I'd like to mention that Rust is fine for _non-embedded_ kernel development too, in my experience.
As you're likely aware, Rust for embedded sucks when there's no HAL, but should be pretty pleasant otherwise. Have you looked into the cortex-a[1] crate?
Some unnecessary instructions could also be a part of an ongoing optimization effort[2][3].
Rust is wonderful technology that will make software more secure and reliable. However, there is no sense of stability in the Rust world: APIs constantly break and most crates are pre-1.0.0.
It was originally written for a project running on Arduino Uno, where every byte of code and memory counts. Stack usage, in particular, had to be very controlled.
C and assembly were the only options. Zig didn't exist back then.
Hmmm, on the flip side, I do like it when an out-of-bounds exploit lets someone create a way for me to "root" my car's old firmware and upload community fixes. Of course it'd be better if the manufacturer let you upload new firmware, but alas.