I’ve been thinking about this topic through the lens of moral philosophy lately.
A lot of the “big lists of controls” security approaches correspond to duty ethics: following and upholding rules is the path to ethical behaviour. IT applies this control, manages exceptions, tracks compliance, and enforces adherence. Why? It’s the rule.
Contrast with consequentialism (the outcome is key) or virtue ethics (exercising and aligning with virtuous characteristics), where rule following isn’t the main focus. I’ve been part of (heck, I’ve started) lots of debates about the value of some arbitrary control that seemed out of touch with reality, but framed my perspective on virtues (efficiency, convenience) or outcomes (faster launch, lower overhead). That disconnect in ethical perspectives made most of those discussions a waste of time.
A lot of security debates are specific instances of general ethical situations; threat models instead of trolley problems.
I work at medium to large government orgs as a consultant and it’s entertaining watching newcomers from small private-sector companies use - as you put it - consequentialism and virtue ethics to fight an enterprise that admits only duty ethics: checklists, approvals, and exemptions.
My current favourite is the mandatory use of Web Application Firewalls (WAFs). They’re digital snake oil sold to organisations that have had “Must use WAF” on their checklists for two decades and will never take it off the list.
Most WAFs I’ve seen or deployed are doing nothing other than burning money to heat the data centre air, because they’re generally left in “audit only mode”, sending logs to a destination accessed by no-one. This is because if a WAF enforces its rules it’ll break most web apps outright, and it’s an expensive exercise to tune them… and to maintain that tuning to avoid 403 errors after every software update or new feature. So no-one volunteers for the responsibility, which would be a virtuous ethical behaviour in an org where that’s not rewarded.
This means that recently I spun up a tiny web server that costs $200/mo with a $500/mo WAF in front of it that does nothing just so a checkbox can be ticked.
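To make the audit-vs-enforce point concrete, the behavioural difference is roughly this (my own toy Express-style middleware sketch, not any vendor's actual product; the "rule" is a stand-in):

```typescript
import express from "express";

const app = express();

// Stand-in for whatever managed rule set the WAF ships with.
const looksSuspicious = (req: express.Request): boolean =>
  /<script|union\s+select/i.test(req.originalUrl);

// What most deployments quietly stay on forever.
const AUDIT_ONLY = true;

app.use((req, res, next) => {
  if (looksSuspicious(req)) {
    // "Audit only mode": write a log nobody will read, let the request through.
    console.warn(`WAF rule matched: ${req.method} ${req.originalUrl}`);
    if (!AUDIT_ONLY) {
      // Enforcement: this is the 403 that breaks the app after every
      // release until someone re-tunes the rules.
      return res.status(403).send("Forbidden");
    }
  }
  next();
});

app.listen(8080);
```

Flip AUDIT_ONLY off and you buy yourself the tuning treadmill described above.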
Oh man, web application firewalls and especially Azure Application Gateway are the bane of my existence. Where I work they literally slap an Azure Application Gateway instance on every app service with all rules enabled (even the ones Microsoft recommends not to enable) in block mode, directly when provisioning the stuff in Azure. The app is never observed in audit mode.
The result is that random things in the application don’t work for any user, or only for some users, because some obscure rule in Azure Application Gateway triggers. The SQL injection rule in particular seems to misfire very often. A true pain to debug, then a true pain to get through the process of having the particular rule disabled.
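The misfires are easy to understand once you look at what a signature-based SQL injection rule actually does. A crude caricature (my own toy example, nothing to do with Microsoft's actual managed rules):

```typescript
// A crude caricature of a signature-based SQL injection rule:
// flag anything containing SQL-ish keywords or quote/comment sequences.
const sqliSignature = /\b(select|union|drop|insert)\b|'|--/i;

const flagged = (body: string): boolean => sqliSignature.test(body);

// Perfectly legitimate user input that a rule like this will 403:
console.log(flagged("Please select the union option from the dropdown")); // true
console.log(flagged("O'Brien"));                                          // true
console.log(flagged("See attached notes -- final version"));              // true

// Meanwhile, a parameterised query on the backend makes the whole class
// of attack a non-issue regardless of what sits in front of it.
```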
And that’s before even getting started on the monthly costs. Often Azure Application Gateway itself is more expensive than the App Service + SQL Database + Blob Storage + opt. App Insights. I really think someone in the company got offered a private island from Microsoft for putting Azure Application Gateway as a mandatory piece in the infrastructure of every app.
Yes, most of our security has been outsourced to cheap workers in developing countries like India, who are of course rated on maintaining the standard and not on thinking, understanding what you want, and putting things in context, and who probably also work 60-70 hours per week at ungodly hours, so you can hardly blame them. It is truly the process that is broken.
Well, what if they were intelligent and could actually understand the data and its schema before deciding whether to allow or reject the request... wait... that's just the application itself.
It all boils down to trust. Management don’t trust the developers to do the right thing because they outsourced development to the lowest bidder. They futilely compensate for this by spending a mere $500/mo for a WAF.
So WAF. Bad? I don’t know enough about it. If it’s just a way to inject custom rules that need to be written and maintained, the value seems low or negative. I had hoped you got a bunch of packages that protected against (or at least detected) common classes of attacks. Or at least gave you tools to react to an attack?
Just slapping WAF in front of your services without configuring and maintaining rules is bad.
Without someone dedicated to maintaining the WAF it is just a waste. Not many companies want to pay for someone to babysit a WAF, and it can be a full-time job if there are enough changes in the layers behind it.
Maybe, if the attacker didn't bother to hack into the WAF itself (generally a softer target than whatever's behind it) and if you bothered keeping or understanding the logs (extremely unlikely to be a good use of resources).
You don't need to understand the logs at the time you gather them for this, you just need to keep them long enough to cover the breach, and to be able to understand them after the fact. Hardly seems like an obvious waste to me, and well worth $500/mo.
Every corporation over a certain size has a rule that everything needs a firewall in front of it… even if that something is a cloud service that only listens on port 443.
I have friends who are very scary drivers but insist on backseat driving and telling you about best driving practices, and coworkers who insist on implementing excessive procedures at work but are constantly the ones breaking things.
I think following rules gives some people a sense of peace in a chaotic and unpredictable world. And I can't stand them.
A little of both. I understand getting a warm fuzzy feeling that you did the right things, but if you don't achieve your goal, what's the point?
But let me clarify -- OP mentioned a contrast between consequentialism and virtue ethics, and I think you can be "too much" of a consequentialist too. I wouldn't call myself a rule follower, but I also follow rules 99% of the time. It does create a sense of order and predictability, and I value that.
There is a right balance where you do follow rules but you also know when to break them. What I can't really stand are rigid people -- diehard rule followers or diehard "no one can tell me what to do." I find working with rigid people hard because you have to work around their "buttons."
It gets worse than that: it rewards people who try to break the law as much as possible without getting caught, while people who follow it are punished.
That's true of most laws, but the system punishes law breakers to make it better to follow the law overall. When the law is vague and subjective, the people who get the most reward are the ones who are willing to see how far they can push it.
The vast majority of the security "industry" is about useless compliance, rather than actual security. The chimps have put their fears into large enterprise compliance documents. This teaches the junior security people at enterprise companies that these useless fears are necessary, and they pass them along to their friends. Why? Not just because of chimps and fear, but also $$. There is a ton of money to be made off of silly chimps.
I’m an engineer who now works security. Very few of us come from an engineering background. Most lack the technical skill to do much more than apply controls and run tooling. Some try to do design work, but imagine a junior dev with 2-3 years of experience trying to write a service.
Those of us who are architects and coders don’t often get to do it anymore because we’re not working on single projects or solutions... so we become people who swoop in on a project for a month at a time to make sure there are no major smells before moving on. Our understanding of your system is shallow as a result.
> I’m an engineer who now works security. Very few of us come from an engineering background. Most lack the technical skill to do much more than apply controls and run tooling.
I think you probably hit the nail on the head there. Often the people in Infosec I work with are not interested in putting things in context or thinking through the actual impact of a control not being met. Instead, a bunch of controls are just thrown out without any regard for actual security.
Now I have to say, most of our security has been outsourced to cheap workers in developing countries like India, who are of course rated on maintaining the standard and not on thinking and understanding what you want, and who probably also work 60-70 hours per week at ungodly hours, so you can hardly blame them.
* You get a cool industry certification that you can put on your website to justify the vague "we take your security seriously" platitudes we spew.
* It lets you stop putting money and effort into security once you've renewed your certs this year.
* You don't need to hire a dedicated security person, any sysadmin can check boxes.
* You can say you followed industry best practices and "did all you could" when you get breached.
It's the answer to "how do we not care about security?" across an entire industry that stands to make billions from said lack of care. In a depressing way, the company with useless performative security certs will fare better after a breach than the one without them that actually tried.
My less cynical take about this is that if you need to actually care about security because you'll be up against sophisticated targeted attacks then you probably already know that. For everyone else there's checkboxes to stop companies from getting owned by drive-by attacks.
Compliance is everywhere, and compliance often means complying with larger industry "requirements" or with controls considered best practice.
Even if you start a business from scratch, I don't know of any company that has developed its own controls library without building on some sort of framework or baseline control set.
The frameworks and control sets that you often comply with exist and are there for a reason, but your mileage may vary if you choose to use them.
Well, one of the big problems is that businesses don't do root cause analysis on incidents to learn which controls failed, or which controls should have been in place that might have prevented the incident.
Additionally, actually testing whether the controls work. I work in testing controls, and I find that a lot of controls might be designed well but simply aren't being performed due to resource constraints.
The ironic thing about the chimp story is that probably chimps are immune to the problem and humans are the only species that would fall for it. It takes chimps a long time to learn to copy others. I doubt they could sustain a superstition like this for long even if you managed to induce it through great effort.
It's humans that copy each other without a second thought. It's a great heuristic on average. These kinds of fables are correctives against our first instinct to replicate others' behaviors, but if we actually tried to reason through everything from first principles we'd never get anything done.
Copying is the plain pieces in the lucky charms, thinking things through is the marshmallows.
I just read the book The Phoenix Project. It's over a decade old so some of the principles are obvious/quaint at this point, or perhaps not quite as applicable.
That said, one of the things that caught me off guard is the dressing down of the head of security by a member of the board. More or less, they were told what they did was clog the flow of useful work. The message conveyed is similar to this post.
> More or less, they were told what they did was clog the flow of useful work.
That sounds like a very valid complaint, too rarely heard these days.
People seem to forget that security always comes at a cost, so security decisions are always trade-offs. The only perfectly secure system is the one that does absolutely nothing at all.
Does forcing everyone's machine to run real-time scans on all file I/O improve our security more than it costs us in crippling all software devs? Maybe. Being on the receiving end of such policies, including this particular one, I sometimes doubt this question was even asked, much less that someone bothered to estimate the expected loss on both sides of the equation. Ignoring the risks doesn't make them go away, but neither do the costs go away when you pretend they don't exist.
The Phoenix Project has been very influential on me in my security career, at least partially because I share the name of the ineffectual CISO and want so desperately to avoid the link.
I think the book is still very applicable, and every security practitioner needs to be hit over the head with it (or at least The DevOps Handbook or Accelerate). Security generally is decades behind engineering operations, even though security is basically just a more paranoid lens for doing engineering ops; the ideas from Phoenix are still depressingly revolutionary in my field.
Security is always an economic activity too. The crash engineers at Ford could demand 30 mph speed governors, quarter-inch steel plates, 5-point harnesses, and helmets, but people need a car that costs less than $200k and gets more than 2 miles per gallon.
Sometimes security requirements _are_ too onerous.
I've been thinking about this a lot. First, the author should replace security with compliance. Currently they are two different things. There is a huge divide between compliance teams and developers, they speak completely different languages. I'm writing an entire series about it. I do think we can fix the problem, but it is going to be a lot more work than it was to get development and operations on the same page.
This is quite a simplification. There are a lot of useless/dubious controls out there, but the problem is rather the contradiction between security pragmatism and compliance regimes.
####
Government: I need a service.
Contractor: I can provide that.
Government: Does it comply with NIST 123.456?
Contractor: Well not completely, because control XYZ is ackshually useless and doesn't contribute--
I think it's fine to implement a useless control to get a customer.
Just don't pretend that you're doing it because it is a useful control, pretend that you're doing it because jumping through that hoop gets you that customer, and "we're a smaller fish than the government". Especially with the government (especially if it's the USA…) there are going to be utterly pointless hoops. I can pragmatically smile & jump, … but that doesn't make it useful.
Exactly. There is absolutely a threshold of money that will get me to implement FIPS. There is no threshold of money that will get me to say it's a good idea that has any value other than getting the (singular) customer that demands FIPS.
The core idea of FIPS doesn't seem terrible at first glance: a validation program to ensure known attacks are protected against.
The obvious issue is that known attacks have progressed significantly faster than FIPS has been updated, so in practice it doesn't defend against actual attackers. Compliance-based security pretty much always falls into this trap, and often is even worse because compliance with the standard is considered the maximum that can be done instead of the minimum that must be done. FIPS' fatal flaw is that in many cases it mandates a maximum security level that is now outdated.
It's a lot like building or electrical codes: if they're treated as the minimum as intended things stay safe, but if they're just barely complied with then buildings tend to fall down and/or catch fire.
I guess as a company I would agree that it's fine to implement a useless control to get a customer. As a tax-payer...not so much. We spend so much money (at least in the U.S.) on garbage.
Note, though, that "the government" (NIST to be specific) says that requiring passwords to be changed every 90 days is counterproductive and shouldn't be done, yet many corporations (including my employer) still mandate it. Corporate bureaucracy can be as backward and counterproductive as government bureaucracy.
I was kind of shocked by just how gosh-darned reasonable it is when it came out a couple of years ago. It's my absolute favorite thing to cite during audits.
"Are you requiring password resets every 90 days?"
"No. We follow the federal government's NIST SP800-63B guidelines which explicitly states that passwords should not be arbitrarily reset."
I've been pleasantly surprised that I haven't really had an auditor push back so far. I'm sure I eventually will, but it's been incredibly effective ammunition so far.
Alas, in Australia one of the more popular frameworks in gov agencies is Essential Eight, and they are a few years away from publishing an update with this radical idea.
I bumped into controls mandating security scans where the people running the scans don't need to know anything about the results. One example prevented us from serving public data using Google Web Services because the front-end was still offering 3DES among its ciphers. This raised alerts because of the possibility of a Sweet32 vulnerability, which is completely impractical to exploit with website-scale data sizes and short-lived sessions (and modern browsers generally don't opt to use 3DES). Still, it was a hard 'no', but nobody could explain the risk beyond the risk of non-compliance and the red 'severe' on the report.
We also had scans report GPL licenses in our dependencies, which for us was a total non-issue, but security dug in, not because of legal risk, but because of compliance with the scans.
"Why do we have to do X? Because we have to do X and have always had to do X" is a human problem coming from lack of expertise and lack of confidence to question authority.
Not just lack of expertise and confidence, but also lack of trust, and possibly also a real overhead of running a large org.
Like, IT sec does not trust employees. This burns an absurd amount of money day in, day out, due to broadly applied security policies that interfere with work.
Like, there's a lot of talk about how almost no one has any business having local admin rights on their work machine. You let people have it, and then someone will quickly install a malicious Outlook extension or some shit. Limits are applied, real-time scans are introduced too, and surely this inconveniences almost everyone, but maybe it's the right tradeoff for most of the org's moderately paid office workers.
But then, it's a global policy, so it also hits all the org's absurdly-highly paid tech workers, and hits them much worse than everyone else. Since IT (or people giving them orders) doesn't trust anyone, you now have all those devs eating the productivity loss, or worse, playing cat-and-mouse with corporate IT by inventing clever workarounds, some of which could actually compromise company security.
In places I've seen, by my guesstimate, that lack of trust, and of the ability to issue and monitor exceptions to security policies[0], could easily cost as much as doubling the salary of all affected tech teams.
As much as big orgs crave legibility, they sure love to inflict illegible costs on themselves (don't get me started about the general trend of phasing out specialist jobs and distributing workload equally on everyone...).
--
[0] - Real exceptions, as in "sure whatev, have local admin (you're still surveilled anyway)", instead of "spend 5 minutes filling out this form, on a page that's down half the time, to get temporary local admin for a couple of hours; no, that still doesn't mean you can add folders to the exclusion list for the real-time scanner".
Another of my favorite examples is companies going "everyone needs cyber security training" and applying a single test to their entire global staff with no "test out" option. I watched a former employer with a few hundred thousand employees in the US alone mandate a multi-hour course on the most basic things, which could have been avoided with some short knowledge surveys.
The same employer also mandated a yearly multi-hour ethics guidelines course that was 90% oriented towards corporate salespeople, and once demanded everyone take what I believe was a 16-hour training set on their particular cloud computing offerings. That one alone must have cost them millions in wasted hours.
> nobody could explain the risk beyond the risk of non-compliance and the red 'severe' on the report.
Isn't it just a burden on the security team & the organization as a whole if nothing else? If every team gets to exempt itself from a ban just because it uses the thing responsibly, then suddenly the answer to the question "are we at risk of X, which relies on banned thing Y?" can become a massive investigation you have to re-do after every event, rather than a simple "no".
I don't know the details of your situation obviously, maybe there's something silly about it, but it doesn't seem silly to me. More generally, "you can only make an exemption-free rule if 100% of its violations are dangerous" is not how the world works.
This is often the result of poor risk management or lack of risk management understanding.
Compliance assessments, at least the assessments I have worked with, take a risk-based approach and allow for risk-based decisions/exemptions.
If you have a vulnerability management process which takes what the scanning solution says at face value and therefore your process assumes ALL vulnerabilities are to be patched, then you're setting yourself up for failure.
Password resets are definitely one; every single day I still have to tell prospects and customers that I can't both comply with NIST 800-63 and periodically rotate my passwords. Other ones I often counter include other aggressive login requirements, WAFs, database isolation, weird single-tenancy or multitenancy asks, or anti-virus in places where it doesn't need to be.
I think it depends a bit on circumstance, but I think I'd start with "way too much software binds to 0.0.0.0 by default", "way too much software lacks decent authn/z out of the box, possibly has no authn/z out of the box", and "developers are too lazy to change the defaults".
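To make the 0.0.0.0 point concrete, the difference is literally one argument (a minimal Node sketch; the ports and handler are arbitrary):

```typescript
import http from "node:http";

const handler = (_req: http.IncomingMessage, res: http.ServerResponse) => {
  res.end("hello\n");
};

// The lazy default: reachable from every interface on the machine, so
// anything that can route to the host can talk to the service.
http.createServer(handler).listen(8080, "0.0.0.0");

// The saner default for anything lacking its own authn/z: only reachable
// from the machine itself (or through an explicit reverse proxy).
http.createServer(handler).listen(8081, "127.0.0.1");
```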
Do you mean "why is running a firewall on an individual host useful"? Single-application hosts are quite common, and sadly some applications do not have adequate authentication built-in.
Do you mean "why does Linux allow firewalling based on the source host"? Linux has a flexible routing policy system that can be used to implement useful controls, host is just one of the available fields, it's not meant to be used for trusting on a per-host basis.
It's a catch-all in case any single service is badly configured. This often happens while people are fiddling around trying to configure a new service, which means they are at the most vulnerable.
There's always an edge case; you've got to know the various sec controls well enough to slice toward the target risk outcome, rather than treating the target outcome as equal to a specific implementation. Security hires who are challenging employees are the latter type.
Edge case and your answer, in spirit - public-facing server, can't have a HW firewall in-line, can't do ACLs for some reason, can't have EDR on it.... at least put on a Linux host-level FW and hope for the best.
Agreed. As an ISO 27001 auditor I see a growing demand for security compliance certification / attestations (ISO 27001, SOC 2), and it's client driven 95% of the time. So, in the end, it’s often worth it to go ahead and do it.
ISO 27001 is more affordable (2k-3k for the audit, and an additional 1k-3k for an external provider to manage everything for you); SOC 2 will set you back at least 10k.
Third party cyber risk management is a hot topic in cyber security at the moment. If you want people to buy your solution, you need to be able to demonstrate you have appropriate information security controls. A good way to do that is ISO 27001, all the way up to SOC reports.
The chimps in a cage metaphor is a great introduction to a problem that exists in all software development. I call it the Walls of Assumptions.
When we write software, we answer three questions: "What?", "How?", and "Why?".
We write out the answers to "What?" and "How?" explicitly as data and source code. The last answer, alas, can never be written; at least, not explicitly. When we are good programmers, we do our best to write the answer Why implicitly. We write documentation, tutorials, examples, etc. These construct a picture whose negative space looks similar enough to live in Why's place.
No matter what, the question "Why?" is always answered. How can this be, if that answer is never written? It is encoded into the entropy of the very act of writing. When we write software, we must make decisions. There are many ways a problem could be solved: choose only one solution. A chosen solution is what I call an "Assumption". It is assumed that the solution you chose will be the best fit for your program: that it is the answer your users need, or at least that it will be good enough for them to accomplish what they want.
Inevitably, our Assumptions will be wrong. Users will bring unique problems that your Assumption isn't compatible with. While you hoped your Assumption would be a bridge, it is instead a Wall.
The Walls of Assumptions in every program define a unique maze that every software user must traverse to meet their goals. Monolithic design cultivates a walled garden, where an efficient maze may fail entirely to lead the user to their goal. Modular design cultivates an ecosystem of compatible mazes that, while less efficient, can be restructured to reach more goals.
---
The eternal hype around Natural Language Processing and Artificial Intelligence is readily explained with this metaphor. The most powerful feature of Natural Language is Ambiguity. Ambiguity allows us to encode more than one answer into data, which means we actually can write the answer to Why; we just can't read it computationally. Artificial Intelligence hinges on the ability for decision to be encoded into software. I'm not talking about logical branches here: I'm talking about the ability to fully postpone the answering of Why from time-of-writing to runtime.
---
For the last year or two, I've been chewing on a potential solution to this problem that I call the Story Empathizer. So far, the idea is too abstract; but I still think it has potential.
Security is having a bit of a heyday as everyone fights to build a moat against smart kids and AI. SOC2 and friends are a pain in the ass, but they're more of a moat than most things these days. Security theater? The answer is at least “mostly”, but it's a moat nonetheless. You can feel the power swinging back into the hands of the customer.
When all software is trivial, the salesman and the customer will reign again. Not that I’m hoping for that day, but that day may be coming.
I think the "chimps in a cage" needs some followup experiments to tell the whole story -- replacing the banana with a much higher value reward, or placing another water hose which fires if chimps stopped trying to reach the reward ;)
Most likely, useless controls exist because the company thinks they are good enough for the business and there's no incentive to improve or replace them.
The chimps story is made up. There was a study that tried to test something like that but only in one case, out of many trials, was a chimp discouraged from doing something by another chimp, due to the second chimp’s fear.
I wrote this! I'm excited to see this get attention here. I'll be responding to folks' comments where I feel like I have something to add, but please let me know if you have any questions or feedback!
There's certainly a lot of cargo cult security controls out there. One of the big issues is simply that it is very hard to change established practices. It takes a lot of effort, and senior people who are not security experts have to sign off on the "risk" of not doing what all their peers are doing.
There is one word I would change in your post title. Security has a useless controls problem, not security is a useless controls problem.
If money was no object, I would just hire continuous pen testers to test your infra, and every time they manage to do something they shouldn't be able to, fix how they did it, then repeat endlessly. I think it's analogous to immersing a tire in water and looking for bubbles to find leaks, then patching them.
> Cross-site scripting (XSS) safe front-end frameworks like React are good because they prevent XSS. XSS is bad because it allows an attacker to take over your active web session and do horrible things
What? React is not "Cross-site scripting safe"
Many security controls do require more than a 2-3 sentence explanation. Trying to condense your response in such a way strips out any sort of nuance, such as examples of how React can be susceptible to XSS. Security is a subset of engineering, and security decisions often require a trade-off. React does protect against some classes of attacks, but also exposes applications to new ones.
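For what it's worth, the distinction both sides are circling looks roughly like this (a hypothetical TSX sketch; the component and prop names are made up for illustration):

```tsx
import React from "react";

// React escapes interpolated strings by default, which is what the article
// is gesturing at: user-supplied "<img onerror=...>" renders as inert text.
function Comment({ body }: { body: string }) {
  return <p>{body}</p>;
}

// But the framework hands the hole right back if you ask: anything pushed
// through dangerouslySetInnerHTML bypasses that escaping and is XSS-prone
// again, which is one of the trade-offs alluded to above.
function UnsafeComment({ body }: { body: string }) {
  return <p dangerouslySetInnerHTML={{ __html: body }} />;
}
```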