
> KeePassXC is written well and exercises defensive coding sufficiently.

This might be a transcription or language problem, but: auditors really shouldn’t make normative claims like “software X is written well,” much less actually endorse the software they’re paid to review (as the audit’s summary appears to do at the end of the post). It’s a massive conflict of interest, and undermines the actual purpose of an audit: to accurately report any weaknesses found (if any!), rather than offer an opinion on the product’s value or future exploitability (including against unknown adversaries).

(This is not a dig at KeePassXC or this auditor in particular; lots of auditing shops are guilty of this.)




> auditors really shouldn’t make normative claims

Audits are, by necessity(1), value judgements (otherwise we'd call them "proofs"). This audit concerned the source code.

I very much want a source code audit to make value judgements about the audited source code. That is its entire reason for existing, after all.

(1) audits involve examination and possibly testing. Neither of these can offer guarantees beyond "what we observed, is there / happened."


I thought audits were usually of the form "I looked for vulnerabilities in this code, and found the following", which isn't really a value assessment. Yes, the point of doing that is to have a basis on which to form an opinion about "is this software safe/secure?" — and that is absolutely a value judgment, at least partially subjective — but I didn't think the audit itself tended to concern itself with that.


IMHO an audit should always report on how readable a codebase is, because readability is a prerequisite for the quality of the audit itself. E.g. when the first Kubernetes audit pointed out that the (side-)effects of certain operations are too complex to understand, that gave some insight into the likelihood of uncaught bugs and the risk of introducing new ones.


Most importantly, any audit should state its methodology and assumptions, and report findings based on those.

My big problem with KeePassXC is that the threat model is not well documented. It makes a lot of fuss about memory randomisation, but it might be much easier to extract the secret key via the browser extension while the database is unlocked. So I guess it mostly provides security for data at rest, but afaik this is not documented. A general security audit is IMHO difficult if nobody states any guarantees.


If the browser extension allowed leakage, I'd expect it to come from prepared pages and a JS exploit that can fake other origins, or from some trivial DOM information leak in the injected popups.

From each other, WebExtensions are sandboxed pretty well as far as I know.

I guess you're safe if you stick to always manually allowing each password request and, of course, practise the same amount of scrutiny regarding phishing as when entering passwords manually.

Or are you talking about the IPC between the browser and the KPXC app?


Audits are neither proofs nor value judgements (these are not the only two things in the world).

They’re closer to a procedural or logistical requirement, similar to a cross-check on an airplane. The FAA doesn’t mandate cross-checks because they prove that an airplane won’t fall out of the sky; they mandate them because they empirically reduce (but do not prevent) incidents.

For proofs, we have formal methods (which an audit can employ!). For value judgements, we have consumer groups and word of mouth.


Personally, I think I want auditors to try to assess this.

There are certain risky / iffy coding practices (manual string manipulation in C, for example) that might or might not lead to actual security issues, depending on whether you make an error. If no actual security issues are found in an audit, that's good, but I want to know about the potential for them, too.

Yes, this is difficult to evaluate in an entirely objective manner, but I'd rather they just do their best because to me that's still better than no information.


> If no actual security issues are found in an audit, that's good, but I want to know about the potential for them, too.

That’s perfectly reasonable, and independent from what I’ve said: audits regularly document potential weaknesses, especially when vulnerabilities aren’t found. What they don’t generally do is make statements to the effect of “this software is high quality” or “we approve of this software.”


> much less actually endorse the software they’re paid to review

> It’s a massive conflict of interest

They weren't paid.

There's no conflict of interest, at least not a commercial one.


The review file does

* ask for donations to the author, and

* provide contact details for the author, in case someone wants to hire them to review their software.

We can debate whether these constitute a conflict of interest.


That’s important information, and should be included in a public announcement of an audit!

Even so: pro bono audits carry reputational value, meaning that there’s no way to fully discharge the conflict of interest here. The only correct way to handle it is to refuse to endorse the software you audit; an audit that enthusiastically recommends the software it covers sets off red flags.

Edit: I misread the post, which does explicitly state that the audit was conducted for free.


It is in the first paragraph.


You’re right, sorry — I missed that. I’m going to edit my comment with an explicit correction.

The second point still stands.



