I suspect Intel uses 32x32b multipliers instead of his theorised 16x16b ones, just that there's only one for every second lane.
It lines up more closely with VPMULLQ, and it seems odd that PMULUDQ would be one uOp vs PMULLD's two.
PMULLD is probably just doing 2x PMULUDQ and discarding the high bits.
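For reference, the well-known SSE2 emulation of PMULLD via two PMULUDQ looks something like this - a sketch of what I'm speculating the hardware effectively does internally (mullo_epi32_sse2 is just my name for it):

    #include <emmintrin.h>  /* SSE2 */

    /* Emulate SSE4.1 PMULLD (_mm_mullo_epi32) with two SSE2 PMULUDQ
       (_mm_mul_epu32), discarding the high half of each product. */
    static __m128i mullo_epi32_sse2(__m128i a, __m128i b)
    {
        /* 64-bit products of the even lanes (0 and 2) */
        __m128i even = _mm_mul_epu32(a, b);
        /* 64-bit products of the odd lanes (1 and 3) */
        __m128i odd  = _mm_mul_epu32(_mm_srli_epi64(a, 32),
                                     _mm_srli_epi64(b, 32));
        /* keep the low 32 bits of each product, then re-interleave */
        even = _mm_shuffle_epi32(even, _MM_SHUFFLE(0, 0, 2, 0));
        odd  = _mm_shuffle_epi32(odd,  _MM_SHUFFLE(0, 0, 2, 0));
        return _mm_unpacklo_epi32(even, odd);
    }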
(I tried commenting on his blog, but it's awaiting moderation - I don't know if that queue is ever checked, or if comments just sit in it forever)
Difference is a crawler paces its requests, respects robots.txt and rate limits, and doesn't typically trigger 50-100MB of disk I/O per request.
Like, I don't mind automated access to my search engine - I even offer a public API to that effect that you can in fact hook into SearXNG. What I mind is when one jabroni with a botnet decides their search traffic is more important than everyone else's and grabs all the compute for themselves via a sybil attack.
Tangentially related, but I've hit a case where GCC 9.4.0 shipped a broken arm_acle.h header [https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100985] - code that included the header would always fail to compile.
Since users could be compiling on GCC 9.4.0, and since the functionality that depends on the header isn't critical to the application, the build detects the broken header, disables that functionality, and emits a warning.
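The detection is essentially a configure-time probe along these lines - the exact mechanics depend on the build system, so treat this as a sketch:

    /* conftest.c: if this trivial program fails to compile, assume
       arm_acle.h is broken (as on GCC 9.4.0), disable the dependent
       feature and emit a warning instead of failing the build. */
    #include <arm_acle.h>

    int main(void)
    {
        return 0;
    }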
Back to xz: if security isn't your top priority, it seems reasonable to ignore the failure of an optional feature and just move along. Of course, in hindsight it's easy to point fingers, but without that, it doesn't sound completely unreasonable to me.
I don't think phishing is such an obscure scenario.
The point is also that you as an individual can make choices and assess risk. As a large service provider, you will always have people who reuse passwords, store them unencrypted, fall for phishing, etc. Some percentage of users will get their accounts compromised through bad password handling, and that costs you. By enforcing MFA you can decrease that percentage, and if you mandate yubikeys or something similar, it goes to zero.
> I don't think phishing is such an obscure scenario.
For a typical person, maybe, but for a tech-minded individual who understands security, data entropy and what /dev/random is?
And I don't see how MFA stops phishing - a phishing page can get you to enter a token just as easily as it can get you to enter a password.
I'm also looking at this from the perspective of an individual, not a service provider, so what the broader mass of users does is of little interest to me.
> That's why I qualified it with "certificate-based". The private key never leaves the device
Except that phishing doesn't require the private key - it just needs to echo back the generated token. And even if that isn't possible, what stops it from obtaining the session token that's sent back?
From my understanding, FIDO isn't MFA though (the authenticator may present its own local challenge, but I don't think the remote party can mandate it).
There's also the issue of how many sites actually support it, as well as how it handles loss of, or inability to access, the private keys. I generally see things like 'recovery keys' offered as a solution, but then you're just back to a password, with extra steps.
The phisher can just pass on whatever you sign, and capture the token the server sends back.
Sure, you can probably come up with some non-HTTPS scheme that can address this, but I don't see any site actually doing this, so you're back to the unrealistic scenario.
No, because the phisher will get a token that is designated for, say, mircos0ft.com, which microsoft.com will not accept. It is signed with the user's private key, and the attacker cannot forge a signature without it.
A password manager is also not going to fill in the password on mircos0ft.com so is perfectly safe in this scenario. You need a MitM-style attack or a full on client compromise in both cases, which are vulnerable to session cookie exfiltration or just remote control of your session no matter the authentication method.
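Roughly, the relying party's check looks like this - a sketch, where verify_signature stands in for a real crypto-library call and all the clientDataJSON parsing is elided:

    #include <stdbool.h>
    #include <string.h>

    /* Placeholder for an actual crypto-library verification call. */
    extern bool verify_signature(const unsigned char *pubkey,
                                 const unsigned char *msg, size_t msg_len,
                                 const unsigned char *sig, size_t sig_len);

    /* The browser embeds the origin it actually connected to inside
       the signed clientDataJSON, so an assertion collected on
       mircos0ft.com fails at microsoft.com: the origin doesn't match,
       and the attacker can't re-sign a corrected one without the
       private key. */
    bool check_assertion(const char *client_data_origin,
                         const unsigned char *pubkey,
                         const unsigned char *signed_data, size_t data_len,
                         const unsigned char *sig, size_t sig_len)
    {
        if (strcmp(client_data_origin, "https://microsoft.com") != 0)
            return false;  /* signed for the wrong origin */
        return verify_signature(pubkey, signed_data, data_len, sig, sig_len);
    }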
If I were trying to phish someone, I wouldn't attack the public key crypto part, so how domains come into play during authentication doesn't matter. I'd just grab the "unencrypted" session token at the end of the exchange.
Even if you somehow protected the session token (which sounds dubious), there's still plenty a phisher could do, since it has full MITM capability.
Padding is actually not really necessary in base64, as you can infer the length from the number of characters received.
Unfortunately for Z85, they made the highly questionable decision to use big-endian, which means it can't take base64's route. You could probably define an incomplete group at the end to be right-aligned or similar, but you may as well be sensible and just go little-endian.
> The four octets SHALL be treated as an unsigned 32-bit integer in network byte order (big endian). The five characters SHALL be output from most significant to least significant (big endian).
Why oh why??!
If it were little-endian, you could probably skip the "must be a multiple of 5 chars/4 bytes" requirement - not to mention that 99.9999% of processors out there run in little-endian mode.
There is nothing "envious" about network byte order.
You wouldn't be able to skip the 5-char/4-byte requirement; you'd just be able to strip 0x00 bytes from the end. That actually complicates things, since the spec would then need to say whether handling that is a requirement for a conforming parser/reader.
I don't quite understand what you're saying, but it should be possible to infer the length from the number of characters received.
Assuming n is an integer:
* 5n chars received = 4n bytes of data
* 5n+1 chars received is invalid
* 5n+2 chars received = 4n+1 bytes of data
* 5n+3 chars received = 4n+2 bytes of data
* 5n+4 chars received = 4n+3 bytes of data
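In code the inference is trivial - a sketch (z85_decoded_len is my name, not anything from the spec):

    #include <stddef.h>

    /* Infer the decoded byte count from an unpadded Z85 character
       count. Returns (size_t)-1 for the impossible 5n+1 case: a lone
       leftover char can only encode 85 values, not enough for a
       full byte. */
    static size_t z85_decoded_len(size_t nchars)
    {
        size_t rem = nchars % 5;
        if (rem == 1)
            return (size_t)-1;
        return (nchars / 5) * 4 + (rem ? rem - 1 : 0);
    }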
This is like modified Base64, which doesn't need any padding.
You never need padding as long as you know how many input characters are missing. My point is that if you encode the single-byte binary input 0x01 as "00001" (big endian) instead of "10000" (little endian), you avoid the temptation for people to trim off the zeroes (leaving "1"). This means your decode() input will always be a multiple of 5 chars by construction.
This comes down to whether there should be 5 valid encodings ("10000", "1000", "100", "10", "1") of a single 0x01 byte, or one. The variable-length encoding of integers in Protocol Buffers has the same malleability problem.
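To make the Protocol Buffers comparison concrete: a varint decoder that doesn't reject non-minimal encodings will accept several byte sequences for the same integer. A minimal sketch (no bounds checking, not a real protobuf library):

    #include <stdint.h>
    #include <stdio.h>

    /* Protobuf-style varint: 7 data bits per byte, high bit set
       means "more bytes follow". */
    static uint64_t decode_varint(const uint8_t *p)
    {
        uint64_t v = 0;
        int shift = 0;
        while (*p & 0x80) {
            v |= (uint64_t)(*p++ & 0x7F) << shift;
            shift += 7;
        }
        return v | ((uint64_t)*p << shift);
    }

    int main(void)
    {
        const uint8_t minimal[] = { 0x01 };        /* canonical encoding of 1 */
        const uint8_t padded[]  = { 0x81, 0x00 };  /* non-minimal, also 1 */
        printf("%llu == %llu\n",
               (unsigned long long)decode_varint(minimal),
               (unsigned long long)decode_varint(padded));
        return 0;
    }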
It's also not clear to me why you say a 6-char input is invalid.
In your scheme you can't tell the difference between the single-byte binary input 0x01 and the four-byte binary input 0x00,0x00,0x00,0x01.
Those are the same if you're treating the binary data as a stream of 32 bit numbers, but not if it's a stream of an arbitrary number of octets.
Your parent is suggesting that if, after chunking the input into groups of 5, your last chunk is "10", you would treat that as 0x01; "100" as 0x00,0x01; "1000" as 0x00,0x00,0x01; and only "10000" as 0x00,0x00,0x00,0x01. That's not four encodings of the same value at all.
Treating "1" (or any single leftover character) as invalid in such a scheme makes sense because a single character can only encode 85 values, from 0x00 to 0x54.
EDIT: okay, looks like the official documentation lists it, but many places I looked didn't. Maybe it wasn't being detected in cpuinfo?