Did you read my comment at all before replying? One step is the source, one step implies a reproducible build, and another specifies a stronger requirement (source-to-object verification).
And no, OSS with reproducible builds is nowhere near enough for software to be trustworthy. It's why even Orange Book had more than one sentence in its feature and assurance activities recommendations.
Let's put it to the test, though. If I'm right, the majority of OSS software will be, like proprietary software, full of easily prevented holes, undocumented or barely clear functionality, and difficulty even building it. Whereas high assurance systems would've had the opposite attributes while faring well during professional pentesting w/ source.
One of us was right for a decade straight. Maybe it's because the principles and practices I promoted... work? Evidence in the field is on my side. Neither OSS nor good builds are enough.
There's more than process going on here; there's also a policy axis in this graph.
Closed source has no obligation to reveal vulnerabilities, fix anything, or even work with customers who report vulnerabilities. ORCL will sue you [1] if you learn too much about what you bought. It's often more in their interest to fix something after public discovery for PR reasons but leave it on the todo list otherwise.
So yes, of course both closed and open source have holes. The question is, will they be found, announced, and addressed, or will they lie secret for years behind a legal wall?
That's a good point. I should factor that into the next version of my essay. It will be a natural advantage for OSS. On the proprietary end, perhaps the contract will grant disclosure of problems plus immunity from suits for the reviewer. There are still problems that can happen in that model, but it negates most of what you said.
Btw, about obligation, my essay assumes the company is trying to differentiate by taking initiative and having their product reviewed. Companies that don't shouldn't be trusted at all. End of story.
In that case, of course you get into mitigation options.
Depending on your contract, proprietary vendors offer few choices about getting a vulnerability patched, if ever. If you're in the riffraff section (i.e., most router owners, who have few options), you might wait a very long time. Here's one [1] from Netgear that languished for months. And what about the Juniper backdoor: won't fix?
With open source, you can take the code to whomever you wish, fix it in house, offer a bounty, etc etc. There are plenty of houses that give away code and sell support. If GPL'ed, this model also accelerates fixes because everyone gets immediate benefit of everyone else's fixes.
Hmm. I'm probably going to have to rewrite or supplement the essay to account for these other dimensions. So, what do you think of its primary intent: to show that belief in security or in no subversion comes down to trusting a reviewer and the methods put in, rather than source access? In general, that is, rather than for, say, a project whose source you personally would review in full for every kind of security issue.
You're not wrong that a correct process dramatically limits classes of issues (I've worked in a very high ceremony requirements-traceability shop). But it still requires the customer to have faith in source code they cannot themselves see.
As a customer, you can claim what you like about your process and your glory: until I can actually verify the code as part of the deliverable, it's just faith on my part. I'd rather use an open source firewall than a closed source firewall, regardless of the claims made by the proprietary company. Again, this is about faith. I'd like to avoid having it when it comes to security.
"But it still requires the customer to have faith in source code they can not themselves see." (pnathan)
"4. Low-level, simple, modular code mapping to that.
5. Source-to-object code verification or ability to generate from source on-site." (me)
Seriously, did someone hack my comment where it doesn't show that on everyone else's end or did they hack my system where 4 and 5 are only visible to me? Shit! Here I was using OSS, reviewed, well-maintained software specifically to reduce the odds of that. I'm blaming Arclisp: must have called a C function or something.
"You're not wrong that a correct process dramatically limits classes of issues (I've worked in a very high ceremony requirements-tracability shop)."
Well, there we go. At least you saw that and have experienced that assurance activities can increase assurance. Now we're getting somewhere.
"Again, this is about faith. I'd like to avoid having it when it comes to security."
You're probably going to have it anyway unless you specifically verified the software, libraries, compiler, linker, build system, and all while producing it from a compiler you wrote from scratch. Nonetheless, open-source can increase trust but I say closed can be more trustworthy. Not is or even on average but can be.
Here's my essay, which claims and supports that the real factors that matter are the review, the trustworthiness of the reviewers, and verification that you're using what they reviewed. I'd like your thoughts on it, as I see where you're coming from and like the faith angle. Faith in the integrity of the process and in the reviewers are the two things I identified as core to security assurance. So, I broke it down to give us a start on that.
I don't think most understand what (4) and (5) are, which is why you're seeing the responses you're seeing. Not being in that field, I had to re-read it a couple of times to understand what they meant.
I think OpenSSL's past disproves many of the pro-OSS claims.
As with most things, a blended approach is probably best. Defense in depth, layers of security, crunchy on the outside, still tough on the inside. If you put all your eggs in one basket, you're gonna have a bad time, unless it was a very expensive, well-engineered basket.
That could be the problem. If it was, I take back what I said in the comments to those people. I'll have to make it clearer next time. No. 4 was source code that maps directly to the specs, high-level design, requirements, whatever: modular code that clearly belongs. No. 5 means either generating the system on-site from source code or an audit trail that goes from source statements to object/assembler code so you can see they match.
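The on-site-generation half of No. 5 can be sketched as a reproducible-build check: rebuild the deliverable locally, then confirm the artifact is bit-identical to what the vendor shipped. This is only an illustrative Python sketch; the build command and file paths are hypothetical placeholders, and it assumes the build really is deterministic:

```python
# Hypothetical sketch of "generate from source on-site": rebuild locally,
# then verify the result is bit-identical to the vendor-supplied binary.
import hashlib
import subprocess

def sha256(path):
    """Hash a file in chunks so large binaries aren't loaded into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_reproducible(build_cmd, local_artifact, vendor_artifact):
    """Run the (assumed deterministic) build, then compare digests."""
    subprocess.run(build_cmd, check=True)  # raises if the build fails
    return sha256(local_artifact) == sha256(vendor_artifact)
```

If the digests match, you know the binary you run corresponds to the source that was reviewed; a mismatch means either a nondeterministic build or something worth investigating.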
"I think OpenSSL's past disproves many of the pro-OSS claims."
100% agreement.
"Defense in depth, layers of security, crunchy on the outside, still tough on the inside. If you put all your eggs in one basket, you're gonna have a bad time, unless it was a very expensive, well-engineered basket.
And even then you might still have a bad time."
Decent points. Other engineers and I went back and forth in discussions about the latter point due to all the factors involved. A high assurance design usually worked pretty well. Yet it might not, so the consensus Clive Robinson and I reached was combining triple modular redundancy (TMR) with voters and diverse implementation concepts. So: three different implementations that shouldn't share flaws, with at least one high assurance (preferably all three). The voting logic is simple enough that it can nearly be perfected. Distributed voters exist too, though.
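The TMR-with-voters idea can be sketched in a few lines of Python. The three implementations below are trivial stand-ins of my own invention, not real diverse codebases; the point is just to show the voter masking a single bad replica:

```python
# Sketch of triple modular redundancy (TMR): run three diverse
# implementations and trust only an answer at least two agree on.
from collections import Counter

def majority_vote(impls, data):
    """Return the result a majority of the implementations agree on.

    Raises RuntimeError when all three disagree, since then no answer
    deserves trust.
    """
    results = [impl(data) for impl in impls]
    answer, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: implementations disagree")
    return answer

# Three deliberately different stand-in implementations of one checksum.
def impl_a(data):
    return sum(data) % 256

def impl_b(data):
    total = 0
    for byte in data:              # iterative variant
        total = (total + byte) % 256
    return total

def impl_c(data):
    return sum(reversed(data)) % 256   # different traversal order
```

A single subverted replica changes only its own result, not the vote; defeating the scheme means breaking two implementations, or the voter itself, which matches the failure mode noted above.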
Shit gets worse when you think of the subversion potential at EDA, mask-making, fab, and packaging companies. You have to come up with a way to trust (or not) one entity. For HW, esp low hundreds of MHz, the diverse redundancy with voters should do the trick. Until the adversary breaks them all or the voter. Lol...