VeraCrypt to be audited (ostif.org)
212 points by kobayashi on Aug 15, 2016 | 38 comments



And the very next post on their blog says:

OSTIF, QuarksLab, and VeraCrypt E-mails are Being Intercepted

We have now had a total of four email messages disappear without a trace, stemming from multiple independent senders. Not only have the emails not arrived, but there is no trace of the emails in our “sent” folders. In the case of OSTIF, this is the Google Apps business version of Gmail where these sent emails have disappeared.

https://ostif.org/ostif-quarklab-and-veracrypt-e-mails-are-b...


Someone I knew had emails disappearing in Gmail, and at some point we were able to see it happening in real time. The person had a weak password, and Gmail showed someone logged in from a strange location. We logged everything out, changed to a new password, and it has been OK since.


Call me paranoid, but I wouldn't trust an account after somebody has broken in.

In that case I'd create a different account with a different password, possibly at a different provider, and tell everyone that my email address changed. Maybe I'd also set up forwarding from the old account to the new one, but only for a transition period of a few months.


It's Gmail: if you change the password, the other person is out; it's not a compromised local computer. You can also log out all active sessions.


That would be an incredibly inept and stupid interception operation.

OTOH, mails disappearing is not exactly uncommon.

I call bullshit.


Inept malware happens, but yeah, I'm not sure I'd expect inept malware in an email intercept operation.

I once helped someone diagnose a 'broken' WordPress installation. It had unpatched vulnerabilities and had been infected with malware that, from what I could tell, appeared to make it click on ads somewhere or other. But the only reason it was even discovered is that it also brought down their site with a syntax error in a *.php file. If the site had kept running, the owners probably never would have noticed that their WordPress installation was periodically simulating clicks on ads in the background.


How likely is it that this kind of security review will reveal something that is not already known to the three-letter agencies? I would assume they already spend quite a lot of money and time performing similar reviews to find potential weaknesses.


It could be something in the email infrastructure. That happens sometimes when it encounters email bodies or headers it doesn't expect. If anyone has their contact info, tell them to send the messages as text or files encrypted with GPG, attached to normal emails. If the problem persists, do it from fresh accounts negotiated over the phone, given that it's encrypted, authenticated comms. Otherwise, use a file drop service, IRC, something weird they probably aren't equipped for. Do it over various WiFi spots if they're blocking at the transport layer.
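
For instance, a minimal sketch of the GPG step, assuming both sides have already exchanged public keys out of band (the address and filename here are hypothetical):

  # encrypt and sign; --armor produces an ASCII file (notes.txt.asc)
  gpg --encrypt --sign --armor -r auditor@example.org notes.txt

Attach the resulting .asc file to an otherwise ordinary email; the content is then opaque even if the mail is intercepted.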


They could've used a temporary drop-mail account or something more convenient like ProtonMail to avoid that problem. I mean, what do they expect?

Of course emails are intercepted; that's the easiest thing to do with such an insecure protocol.


There are plenty of secure options for mail.

Having your mail server under your own control is step 1, which they skipped here. I tend to find it sad when tech parties, and especially security/audit parties, host their mail at Google or a similar provider instead of running their own mail server (if they can't or don't want to, I don't mind so much).

Actually, while writing this: it should be slightly embarrassing for them that, as a security auditing company, they can't even tell what actually happened with their email communication.


I think the biggest problem with hosting your own email server is getting the other mail providers (Google, Microsoft, etc.) to trust it (through reputation). It can take ages until your email stops going directly to your customers'/subscribers' spam folders, which literally means losing business opportunities.
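
For what it's worth, the usual first step is publishing SPF and DMARC policies in DNS so the big providers can authenticate your mail. A hedged sketch, shown as dig queries against a hypothetical domain with a hypothetical policy:

  dig +short TXT example.org
  "v=spf1 mx -all"
  dig +short TXT _dmarc.example.org
  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"

(DKIM signing is the other half, but the key setup depends on your mail software.)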


As a VeraCrypt user, I'm in support of this type of thing happening more for potentially critical open-source projects like this.

It'd be pretty cool if larger security consultancies with cryptographic expertise would band together and come up with a way to easily work with these types of projects; that way the projects could crowdfund exactly what they'd need to complete audits like this for themselves. I'd give a good chunk of change to projects I use if they had a crowdfunding page for their cryptographic audit efforts that never expired.


>> "I'd give a good chunk of change to projects that I used if they had a crowdfunding page for their cryptographic audit efforts that didn't expire ever."

What in USD is a good chunk of change? What projects?

I think, for example, I asked Moxie about doing something like this for Signal, and as far as I recall the response was that it's open source.


A one-time audit is great, but continuous auditing is also crucial. There are three kinds of vulnerabilities I can think of:

* software bugs

* design flaws

* backdoors

The first could be detected using fuzzers, static analysis, and proof checkers.

For the second and third, it is unclear to me how they could be found intelligently without human code review. Perhaps https://news.ycombinator.com/item?id=12236394 is relevant? Anyone?
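
For the first category, a minimal fuzzing sketch with AFL, assuming a hypothetical harness volume_parser.c that parses a container header from the file named on its command line:

  afl-gcc -o volume_parser volume_parser.c        # instrumented build
  mkdir -p corpus && cp sample_headers/* corpus/  # seed inputs
  afl-fuzz -i corpus -o findings -- ./volume_parser @@

AFL substitutes @@ with the path of each mutated input; crashing inputs land in findings/crashes for triage.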


> The first could be detected using fuzzers, static analysis, and proof checkers.

I respectfully disagree; I don't think we're truly at the point where we can rely upon these techniques over human code review.


If you are relying on anything alone, you are a fool. If you are not using the above as part of your solution, you are a fool.

Each of the above will consistently find certain types of errors. The above tools do not "let their mind wander", get bored, or do any of the other things that happen to humans.

When you as a human find an error, it is worth asking if you can modify one of the above tools to find that class of errors, so that you can be sure all cases were caught. Unfortunately, most of the time we cannot yet (generally because the number of false positives is too high; I'm hoping for research to improve this).


I agree with everything you've said. Automation and tooling where it excels, humans where necessary and for oversight.

My comment was largely around the fact that the parent comment seemed to say "software bugs have been solved by tools, but design problems need humans". I think design/backdoor/crypto bugs may depend more (or entirely) on manual audit, but it's dangerous to suggest software bugs can be fully automated away.

Perhaps the parent comment was suggesting we should look for more software solutions to backdoors/design flaws? That's an interesting thought exercise.


There's always human review; there should always be human review. The real goal is to reduce the amount of stuff we have to trust, whether human or machine. High-assurance security does it as follows:

1. Formal specs of what it's doing in terms of features and security where there's no ambiguity.

2. Formal specs of how it does that.

3. Formal, machine-checked proof that the how embodies the what.

4. Formal proof or even extraction of implementation. These techniques can go down to machine code or gates now.

5. Covert channel analysis to find any leaks in any of that.

6. Testing of every execution trace under a variety of inputs to show equivalence with formal model during success and failure!

7. Trustworthy distribution of above artifacts to both evaluators and users.

8. Optional, trustworthy checking of above artifacts on-site by users with diverse tooling.

9. On-site generation of system from distributed artifacts.

10. Proper guidance w/ automation where possible on secure initialization, configuration, maintenance, and termination.

That's the shortest summary I can give of a process that goes back to the 1980s for countering the three problems you mentioned. Many key issues have been grand-slammed out by tooling and checklists. Others are still evolving with mixed success. Clever attackers might always embed a new backdoor you don't see. Plus, specs, tests, or key tools might be wrong. Hence, human review of each of the above by many smart minds is the most important assurance activity.

EDIT: I should note that, while it looks like a waterfall process, it can and probably should be done as a mix of top-down & bottom-up development. The important thing is that you can link the various pieces together into a believable assurance argument.


You could check for design flaws using model checking and/or a rigorous formal verification process. I think that is what they meant by the term 'advanced mathematics'.


As a layman... the only thing I don't like about VeraCrypt is that it uses a different on-disk container format than the original TrueCrypt. I don't like having to remember to check "TrueCrypt compatibility mode" when unlocking a drive. I only use the TrueCrypt format because so many other tools exist (like tcplay) and because dm-crypt supports that format natively. If VeraCrypt does take over the community of those who used TrueCrypt, I hope its format gets supported upstream so we'd be at the same level of cross-platform portability. (Really I'm only talking about what I had previously on Linux, though.)
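
For what it's worth, a recent enough cryptsetup can open both formats directly on Linux; a hedged sketch (container paths hypothetical):

  cryptsetup open --type tcrypt container.tc tc_vol              # TrueCrypt format
  cryptsetup open --type tcrypt --veracrypt container.hc vc_vol  # VeraCrypt format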


The VeraCrypt developer claims to have fixed several weaknesses in the TrueCrypt codebase in this interview: https://www.youtube.com/watch?v=rgjsDS4ynq8

It will be interesting to follow the audit.


Anyone know if those fixes have flowed back to the Linux implementations of TrueCrypt (tcplay / LUKS)?


"dm-crypt" is the infrastructure in the linux kernel that deals with block device encryption.

TrueCrypt,VeraCrypt,zuluCrypt,tcplay,cryptsetup among others use this infrastructure to do user data encryption/decryption.

What these project do is parse a volume header on a volume to get crypto properties and then pass them to dm-crypt for it to do everything else.

The difference between a TrueCrypt volume,a VeraCrypt volume and a LUKS volume is in how their crypto properties are stored on the header and dm-crypt is not aware of any of these projects.

Once you know crypto properties of a volume,you can skip all these projects and go straight to dm-crypt and manually create the encryption mapper using dmsetup. All the necessary information about an open encryption mapper looks like below:

  [root@ink mtz]# dmsetup --showkeys table

  zuluCrypt-500-NAAN-luks.img-2363596225: 0 16384 crypt aes-xts-plain64 afaeef82a6a823e226b0f22289404f1eac5b262b5d1984b7de9328cb571dd3f3 0 7:0 4096 1 allow_discards

  [root@ink mtz]#
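
For illustration, recreating that same mapping by hand would look roughly like this (a sketch; the key is the throwaway one printed above, and /dev/loop0 corresponds to the 7:0 device numbers):

  dmsetup create myvolume --table "0 16384 crypt aes-xts-plain64 afaeef82a6a823e226b0f22289404f1eac5b262b5d1984b7de9328cb571dd3f3 0 /dev/loop0 4096 1 allow_discards"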


How does fixing such vulnerabilities work in practice? Is he going to patch the code in private branches so that nobody can infer the vulnerability from the commit, or will people be expected to update to the latest commit if he pushes to the public repo?


Nowadays facts don't matter. Everybody follows whatever they already believe, including myself.

If we look at the TrueCrypt audit report: https://opencryptoaudit.org/reports/TrueCrypt_Phase_II_NCC_O...

It says they found 2 high-severity issues, 1 low-severity issue, and 1 issue of undetermined severity. All in the cryptography category.

There were additional issues found by the Project Zero: http://googleprojectzero.blogspot.de/2015/10/windows-drivers...

Even when faced with this clear evidence, people consider TrueCrypt safe.

VeraCrypt is under active development, so the situation is much better, since issues can be fixed in future releases. However, people might blindly follow whatever is reported and consider VeraCrypt bulletproof, regardless of previous experience with other crypto projects.


> Even when faced with this clear evidence, people consider TrueCrypt safe.

I don't understand. If your definition of 'safe' requires that no vulnerabilities can ever be discovered in a product, you're going to have to give up and never use a computer again.

Having some high-end crypto experts and some of the best bug hunters audit your product and then fix the discovered vulnerabilities puts you at the higher end of the security spectrum.

> VeraCrypt is under active development, so the situation is much better, since issues can be fixed in future releases.

Counter-point for consideration: any non-maintenance code changes may introduce new issues that weren't part of this audit.


It is safe. It's safe in the same way that your valuables locked inside a 4-ton safe in your basement are safe. Or in the same way that being a passenger on a plane is safe. Or even in the same way that SSL is safe.

From the report you linked:

> While CS believes these calls will succeed in all normal scenarios, at least one unusual scenario would cause the calls to fail and rely on poor sources of entropy

Essentially, there are outlying circumstances in which it might be more vulnerable than usual, which is true of pretty much anything.


Good. Quarkslab is one of the most impressive reverse company I know of.


Serious question: what is a reverse company? Flat organization / no hierarchy or something? Sorry, I just don't know the term.


I meant "reverse engineering" :D


Oh, they have a Linux version, too; I did not know that. Then it makes sense to audit this software.

Does anybody have any real-life experience and performance benchmarks for VeraCrypt on Linux?

Also a feature and performance comparison to LUKS would be very interesting!

Linux Magazin authors, please consider... Thanks!


VeraCrypt on Linux uses the Linux kernel infrastructure for user data encryption/decryption, and hence its performance will be on par with LUKS, which uses the same infrastructure.

Performance of a VeraCrypt volume will degrade only if you use multiple ciphers.
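
If you want numbers for your own machine, a rough data point is the cipher benchmark built into cryptsetup; it measures the in-kernel ciphers that both tools share, not either tool's volume handling:

  cryptsetup benchmark   # prints PBKDF speed and per-cipher MiB/s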


Is VeraCrypt a legal fork? I thought the license of TrueCrypt didn't allow for forking.


TrueCrypt's license said, essentially, do whatever you want, just don't re-use the TrueCrypt name. Regardless, it's not like they could ever enforce their license, because doing so would mean losing their anonymity.


Can anyone speak to why TrueCrypt/VeraCrypt are such difficult projects but the built-in disk encryption used in Ubuntu/Mint isn't? It seems like everyone accepts that the Ubuntu/Mint FDE is fine and secure, and yet it gets no attention.


The "built-in disk encryption used in Ubuntu/Mint" is simpler for three reasons:

1. All the crypto stuff is done by the kernel and hence they are not responsible for it.

2. Most of the work that goes into setting up the kernel to manage an encrypted drive is done by a tool called cryptsetup, and they are not responsible for it.

3. The part they are responsible for lives entirely in root's space, and hence it is not susceptible to the security issues that arise when trying to cross the privileged user/unprivileged user boundary.

The reason why TrueCrypt/VeraCrypt on Windows specifically have so many problems is that they do everything themselves.

They have fewer problems on Linux because they delegate user data encryption/decryption to the kernel and cross from normal user privileges to root's privileges using sudo.
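
To make the division of labour concrete, here is a hedged sketch of what a distro installer effectively scripts through cryptsetup (device name hypothetical; luksFormat destroys existing data):

  cryptsetup luksFormat /dev/sdX2       # write LUKS header, set a passphrase
  cryptsetup open /dev/sdX2 cryptroot   # kernel dm-crypt does the actual crypto
  mkfs.ext4 /dev/mapper/cryptroot       # filesystem goes on the plaintext mapping

Everything above runs as root, which is point 3: there is no privileged/unprivileged boundary to get wrong.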


I appreciate the response. I am not sure I understand the parts about "root's space" and crossing between privileges; it seems like this isn't relevant with FDE?




