"But it still requires the customer to have faith in source code they can not themselves see." (pnathan)
"4. Low-level, simple, modular code mapping to that.
5. Source-to-object code verification or ability to generate from source on-site." (me)
Seriously, did someone hack my comment so it doesn't show those on everyone else's end, or did they hack my system so 4 and 5 are only visible to me? Shit! Here I was using OSS, reviewed, well-maintained software specifically to reduce the odds of that. I'm blaming Arclisp: it must have called a C function or something.
"You're not wrong that a correct process dramatically limits classes of issues (I've worked in a very high ceremony requirements-tracability shop)."
Well, there we go. At least you saw that and have experienced that assurance activities can increase assurance. Now we're getting somewhere.
"Again, this is about faith. I'd like to avoid having it when it comes to security."
You're probably going to have it anyway, unless you personally verified the software, libraries, compiler, linker, build system, and all the rest while producing it with a compiler you wrote from scratch. Nonetheless, open source can increase trust, but I say closed source can be more trustworthy. Not 'is' or even 'on average,' but 'can be.'
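To make that less about faith and more about checking, here's a minimal sketch, in Python, of verifying each piece of a toolchain against digests that trusted reviewers published. The manifest contents below are hypothetical placeholders; in practice you'd verify a signature on the manifest first.

    import hashlib

    def sha256_of(path, chunk_size=65536):
        # Hash the file in chunks so large binaries don't need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical manifest of reviewed components: path -> digest the reviewers published.
    reviewed = {
        "/usr/bin/cc": "placeholder-digest-1",
        "/usr/bin/ld": "placeholder-digest-2",
        "/usr/lib/libc.so.6": "placeholder-digest-3",
    }

    for path, expected in reviewed.items():
        status = "OK" if sha256_of(path) == expected else "MISMATCH"
        print(status, path)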
Here's my essay arguing that the real factors are the review itself, the trustworthiness of the reviewers, and verification that you're using what they reviewed. I'd like your thoughts on it, as I see where you're coming from and like the faith angle. Faith in the integrity of the process and of the reviewers are the two things I identified as core to security assurance, so I broke it down to give us a start on that:
https://www.schneier.com/blog/archives/2014/05/friday_squid_...
Note: I have stuff for other aspects like compilers, dev process, HW, etc. I'm just holding off to focus on the source aspect here.
I don't think most understand what (4) and (5) are, which is why you're seeing the responses you're seeing. Not being in that field, I had to re-read it a couple of times to understand what they meant.
I think OpenSSL's past disproves many of the pro-OSS claims.
As with most things, a blended approach is probably best. Defense in depth, layers of security, crunchy on the outside, still tough on the inside. If you put all your eggs in one basket, you're gonna have a bad time, unless it was a very expensive, well-engineered basket.
That could be the problem. If it was, I take back what I said in the comments to those people. I'll have to make it clearer next time. No. 4 was source code that maps directly to the specs, high-level design, requirements, whatever: modular code that clearly belongs. No. 5 means either generating the system on-site from source code or an audit trail that goes from source statements to object/assembler code so you can see they match.
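A minimal sketch of the on-site-generation half of No. 5, again in Python, assuming a reproducible build; the make invocation, paths, and file names are hypothetical stand-ins for whatever the vendor's build actually looks like:

    import hashlib
    import subprocess

    def sha256_of(path):
        # Digest of the whole file; fine for a sketch, stream it for huge binaries.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Rebuild on-site from the reviewed source tree...
    subprocess.run(["make", "-C", "reviewed-source-tree"], check=True)

    # ...then confirm the result matches the binary the vendor shipped.
    local_build = "reviewed-source-tree/build/product.bin"
    vendor_binary = "vendor-shipped/product.bin"

    if sha256_of(local_build) == sha256_of(vendor_binary):
        print("Shipped binary corresponds to the reviewed source.")
    else:
        print("MISMATCH: shipped binary was not generated from this source.")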
"I think OpenSSL's past disproves many of the pro-OSS claims."
100% agreement.
"Defense in depth, layers of security, crunchy on the outside, still tough on the inside. If you put all your eggs in one basket, you're gonna have a bad time, unless it was a very expensive, well-engineered basket.
And even then you might still have a bad time."
Decent points. Other engineers and I went back and forth on the latter point due to all the factors involved. A high assurance design usually worked pretty well. Yet it might not, so the consensus Clive Robinson and I reached was combining triple modular redundancy with voters and diverse implementation concepts. So, three different implementations of the concept that shouldn't share flaws, with at least one (preferably all three) built to high assurance. The voting logic is simple enough that it can nearly be perfected. Distributed voters exist, though.
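Here's a minimal sketch of that 2-of-3 voting logic in Python. The three implementations are trivial stand-ins; real diverse implementations would come from separate teams, languages, or toolchains so they don't share flaws.

    from collections import Counter

    def majority_vote(a, b, c):
        # 2-of-3 voter: return the value at least two implementations agree on,
        # and fail loudly rather than guess if all three disagree.
        value, votes = Counter([a, b, c]).most_common(1)[0]
        if votes < 2:
            raise RuntimeError("No majority: all three implementations disagree")
        return value

    # Stand-ins for three diverse implementations of the same function.
    def impl_one(x): return x * x
    def impl_two(x): return x ** 2
    def impl_three(x): return sum(x for _ in range(x))  # deliberately different approach

    print(majority_vote(impl_one(12), impl_two(12), impl_three(12)))  # 144, even if one is subverted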
Shit gets worse when you think of the subversion potential at EDA, mask-making, fab, and packaging companies. You have to come up with a way to trust (or not) one entity. For HW, especially at low hundreds of MHz, the diverse redundancy with voters should do the trick. Until the adversary breaks them all or the voter. Lol...
"4. Low-level, simple, modular code mapping to that.
5. Source-to-object code verification or ability to generate from source on-site." (me)
Seriously, did someone hack my comment where it doesn't show that on everyone else's end or did they hack my system where 4 and 5 are only visible to me? Shit! Here I was using OSS, reviewed, well-maintained software specifically to reduce the odds of that. I'm blaming Arclisp: must have called a C function or something.
"You're not wrong that a correct process dramatically limits classes of issues (I've worked in a very high ceremony requirements-tracability shop)."
Well, there we go. Least you saw that and have experienced that assurance activities can increase assurance. Now we're getting somewhere.
"Again, this is about faith. I'd like to avoid having it when it comes to security."
You're probably going to have it anyway unless you specifically verified the software, libraries, compiler, linker, build system, and all while producing it from a compiler you wrote from scratch. Nonetheless, open-source can increase trust but I say closed can be more trustworthy. Not is or even on average but can be.
Here's my essay that claims and supports that the real factors that are important are the review, the trustworthiness of the reviewers, and verification you're using what they reviewed. I'd like your thoughts on it as I see where you're coming from and like the faith angle. Faith in the integrity of the process and reviewers are the two things I identified as core to security assurance. So, I broke it down to give us a start on that.