Hacker News new | past | comments | ask | show | jobs | submit | mjg59's comments login

Key derivation from a PIN? Although that's an implementation detail of the key backup rather than anything inherent in the actual messaging, so who knows.

Sure we do - the client code is already out there.

Has anyone posted an analysis of its security?


The implementation seems to be libsodium sealed boxes, with the key material sequestered using the juicebox.xyz protocol. In itself this seems broadly fine, with the significant proviso, as mentioned in https://help.x.com/en/using-x/encrypted-direct-messages, that identity is not currently verified, and as a result it's trivially MITMable.

But there's something more subtle here. Juicebox means that your key material is remotely stored in encrypted form. In an ideal setup, it's split between multiple different realms operated by different people, and the key material is stored in HSMs. There's a complicated dance where you prove knowledge of the PIN without actually revealing the PIN, and then the remote realms hand over the key material and you reassemble it into your key by decrypting it with a key also derived from your PIN.
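The actual Juicebox protocol is more elaborate than this, but the core property being described - no single realm holds your key - can be sketched with a toy n-of-n XOR split (an illustration only, not the real construction):

```python
import secrets

def split_secret(secret: bytes, n_realms: int) -> list[bytes]:
    """Split `secret` into n shares, all of which are needed to
    reconstruct it. No single share reveals anything about the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n_realms - 1)]
    last = secret
    for s in shares:
        # XOR the random shares into the secret to form the final share
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the secret."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out
```

Each realm stores one share, so compromising fewer than all of them yields nothing; the real protocol additionally gates recovery on proving knowledge of the PIN without revealing it.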

If Twitter is running their own Juicebox realms then you're having to trust them. Even if the realms are implemented as HSMs, they're in a position to see the encrypted key material as it exits the HSM. And if they're not in HSMs, then the encrypted key material is just sitting there where they can see it. This doesn't intrinsically give them the key, since it still needs the PIN to decrypt it - but the key derivation function from the PIN is just 32 rounds of argon2id with 16MB of memory use, and given the PIN is limited to 4 digits, that's going to take about a second of GPU-aided brute-forcing to drop out the actual key.
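The arithmetic behind that claim is easy to sketch. Here's a minimal brute-force loop using the stdlib's scrypt as a stand-in for argon2id (parameters chosen so it also uses ~16MB of memory; the salt and function names are hypothetical illustrations, not Twitter's actual code):

```python
import hashlib

def derive_key(pin: str, salt: bytes) -> bytes:
    # Stand-in KDF: stdlib scrypt with n=2**14, r=8, which needs
    # 128*n*r = 16MiB of memory. The real scheme uses argon2id
    # (32 rounds, 16MB); the point is the same either way: a
    # memory-hard KDF can't rescue a 10,000-entry keyspace.
    return hashlib.scrypt(pin.encode(), salt=salt, n=2**14, r=8, p=1,
                          dklen=32)

def brute_force(target_key: bytes, salt: bytes):
    # Only 10,000 candidates for a 4-digit PIN - trivial even on a
    # single CPU core, and embarrassingly parallel on a GPU.
    for candidate in (f"{i:04d}" for i in range(10_000)):
        if derive_key(candidate, salt) == target_key:
            return candidate
    return None
```

Anyone holding the encrypted key material and the salt just runs this once per user; the KDF cost per guess times 10,000 is the entire security margin.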

As noted in the help doc, this isn't forward secure, so the moment they have the key they can decrypt everything. This is so far from being a meaningful e2ee platform it's ridiculous.


This is a very thorough technical analysis - thanks for sharing! It seems like even though Juicebox itself uses libsodium sealed boxes and HSMs, the security is ultimately constrained by the 32 rounds of argon2id for the PIN derivation and Twitter's ability to access the encrypted key material. Perhaps its biggest selling point is deployment flexibility rather than being a true end-to-end encrypted platform.

Thank you for the breakdown.

Since we're on the topic of having to trust X, is there any reason to believe X wouldn't insert some code into the client JS (behind some per-account flag) to exfiltrate your key or PIN, if they were ordered to do so?

I wouldn't rely on a website as a secure communication client, that seems like a job for an open-source native application. But I'm no expert.


Oh, yeah, with no infrastructure to actually attest to the website (or the app) being trustworthy you're inherently placing trust in Twitter. Use Signal.

I think Signal is as secure as is reasonably possible, but it's worth noting that even with Signal, you can't actually verify that the app you've downloaded reflects the source code. The GitHub issue about reproducible builds is closed as not planned: https://github.com/signalapp/Signal-iOS/issues/641

The Android build is reproducible, iOS is (to the best of my knowledge) hard work for a number of reasons.

Huh. I had a conversation with a Tor developer on this topic about a decade ago, when network namespaces were still kind of a new hotness - the feedback I got was that it would be an easy way for people to think they were being secure while still leaking a bunch of identifiable information, so I didn't push that any further.

I think the tor folks made a fundamental strategic error by pushing that line. Yes, people who face a serious threat need to use Tor Browser and still pay attention to other ways they might leak information. But if we'd got 'tor everywhere' it would still make mass surveillance a lot harder. For one thing, today mass surveillance can detect who is using tor; if everyone were using it, that wouldn't matter.

Strange, because torsocks and torify do the same thing, but less robustly.

When you use torsocks or torify for everything, you're going to leave your fingerprint all over tor, whereas something like Tor Browser is designed specifically not to leave any fingerprint on the web.

Using tor directly at the kernel level also means that your DNS is going to leak, your OS telemetry is going to leak, etc.

It's still a good idea, but it has to be implemented top to bottom with nothing left out in between, otherwise you're de-anonymized quickly.


It's actually pretty much the same as the project to get Windows running on Intel Macs before Apple released Boot Camp


It's easier to use in Windows, but in some cases vendors expose WMI interfaces that allow you to overwrite arbitrary RAM, so Linux doesn't give that to you, because it's kind of a huge security violation.


The ACPI code largely lives separately in Linux because it was contributed by Intel and is (as far as possible) intended to be dual-licensed GPL/BSD to ensure non-Linux OSes benefit from core improvements.


Eh, other than the ThinkVantage button not working, and various other failings. We ended up with various reverse-engineered patches to enable those again.


i have no idea whether i'm using your patches -- thanks anyway!

this has been my daily driver for years and i'm looking forward to future upgrades (there seems to be a meteor lake motherboard in the works).


The UI is a payload issue, not a Coreboot issue - various vendors ship Coreboot based firmware with a configuration interface, usually based on the Tiano payload. But for my EC issues I simply took the approach of reverse engineering the EC firmware, binary patching it, flashing that back, and getting on with life. Skill issue.


> I simply took the approach of reverse engineering the EC firmware, binary patching it, flashing that back, and getting on with life. Skill issue.

There is no simply here.

You can't list a litany of niche skills and then imply that's just life and it's everyone's fault they don't have the time and knowledge to just, you know, casually reverse engineer and patch a binary.


It was a sarcastic joke ;)


Hard to tell in writing. Still not convinced.


Yeah, no drivers required, merely the ability for userland code to smash hardware by hand. That's a terrible situation. You want to adjust screen brightness on a Thinkpad right now? Write to the appropriate control registers in the GPU to send commands over the relevant eDP sidechannel and you can do that - except any modern operating system will prevent you from doing so, because you'd be racing against anything else that's trying to use the same index/value interface, and now you tried to adjust the brightness but actually changed some other value and your screen is displaying garbage.

You know how your brightness hotkeys used to work? They generated system management interrupts, which caused the CPU to STOP EXECUTING THE OS and EXECUTE OPAQUE CODE IN A HIDDEN AREA OF RAM, and THAT FUCKING SUCKED. Now they simply tell the OS that someone hit the key, and it's up to the OS to hook that up to actually doing something as a result.

Is the user experience worse? Eh, yeah, in some ways, but if anyone cared enough we could make it better. Is the past a better place? Fuck no. In the past HP managed to screw this up such that if the hotkey interrupt ended up being serviced on CPU 1 it would restore the CPU registers on CPU 0 instead, and you'd crash immediately (Windows handled all of these on CPU 0, so it worked by accident).

Simple interfaces were fine when we didn't have pre-emptive multitasking, and didn't have SMP. Life is better now. Having mediated interfaces where we can ensure that anything accessing the same hardware is doing so in a controlled manner is a good thing. And, counter to your claim, this is a space where vendors do stuff in spite of Microsoft in ways that make it harder for Microsoft to provide a well-defined user experience - this is literally vendors trying to differentiate within the space Microsoft allows, not something that allows Microsoft more control.


> Yeah, no drivers required, merely the ability for userland code to smash hardware by hand.

Brightness control has always been in ring 0, or -1 when it was still exclusively SMM.

I don't know what you're on about. What "FUCKING SUCKED" about not having to worry about what the OS does, if it does anything at all? The OS can poke the EC if it wants to adjust the screen brightness, fan control, or whatever else. Otherwise the EC takes care of everything.

> Windows handled all of these on CPU 0 so it worked by accident

...and if it was handled by SMM, Windows doesn't need to care at all!

Then again, I'm not surprised at your misdirection and half-truths given you've been essentially shilling Treacherous Computing and Restricted Boot among other hostilities.

Edit: and as if destiny itself wanted to show why needing an OS and accompanying mess of drivers to do something simple is a bad idea, especially for Asus products in particular, this shows up right on cue: https://news.ycombinator.com/item?id=43951588


For most of the past 20 years, brightness control has been ring 0 mechanism, ring 3 policy. ACPI provides a mechanism for the OS to interface with the EC in a way that aligns with any SMM that's going on, and where we are now is just fundamentally better than the halcyon DOS days you're pining for. It also means we don't need to involve another computer in here, which is what the EC is.

> Otherwise the EC takes care of everything.

See, that's just not possible. Modern displays have the backlight integrated into them, and the control is over something that's roughly i2c over eDP. But the same i2c channel needs to be accessible to the OS, and if the OS and the EC try to access that at the same time then things will break. So you need some kind of locking, and that's utterly impossible if hitting a key just jams you into SMM and smashes some bits over there - you might do that in the period between the OS writing an index and writing a value.
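That interleaving hazard is easy to model. Here's a toy simulation (register names and layout are hypothetical, purely for illustration) of an SMM-style handler firing between the OS's index write and value write:

```python
class IndexValuePort:
    """Toy model of a shared index/value register pair: you write a
    register index to one port, then the data to another port."""
    def __init__(self):
        self.index = 0
        # Hypothetical register file: 0 = brightness, 1 = panel timing
        self.regs = {0: 0, 1: 0}

    def write_index(self, idx):
        self.index = idx

    def write_value(self, val):
        self.regs[self.index] = val

port = IndexValuePort()

# The OS wants to program register 1 (the timing register)...
port.write_index(1)
# ...but a hotkey handler fires between the OS's two writes:
port.write_index(0)
port.write_value(50)    # handler sets brightness to 50
# The OS resumes, unaware its index selection was clobbered:
port.write_value(0xFF)  # intended for register 1, lands in register 0
```

The brightness register ends up holding the OS's timing value and the timing register is never written at all. A lock around the index/value pair prevents this, and that lock is only possible when every accessor is mediated by the OS.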

> ...and if it was handled by SMM, Windows doesn't need to care at all!

The behaviour of the SMM handler is defined by the platform vendor, and if the vendor ends up only testing against one OS then they may embody assumptions about that OS. In this case they assumed that SMM could only be triggered when running on CPU 0, which was true in the case of Windows and not in the case of Linux. This isn't about Windows needing to care, it's about SMM being a mechanism for vendors to accidentally assume that what Windows does is universal.

> Then again, I'm not surprised at your misdirection and half-truths given you've been essentially shilling Treacherous Computing and Restricted Boot among other hostilities.

I am arguing that it is better that this code lives in visible space, executed in the context of the OS, and subject to security boundaries that the OS imposes. You appear to be arguing for opaque code running on a separate microcontroller, mediated by code running on the main CPU, but which stops the entire OS from running while it executes, and which is hidden from the OS. Which of these sounds better for the user?


> Modern displays have the backlight integrated into them, and the control is over something that's roughly i2c over eDP.

Maybe that's a bad idea? You've clearly outlined some problems it causes.


And in response to your edit: do you think putting this code in hardware is more or less secure than having it in the OS? If it were in hardware, all operating systems would be fucked, while having it at the OS level means that only Windows is fucked.

