- Scan containers and Pods for vulnerabilities or misconfigurations.
- Run containers and Pods with the least privileges possible.
- Use network separation to control the amount of damage a compromise can cause.
- Use firewalls to limit unneeded network connectivity and encryption to protect confidentiality.
- Use strong authentication and authorization to limit user and administrator access as well as to limit the attack surface.
- Use log auditing so that administrators can monitor activity and be alerted to potential malicious activity.
- Periodically review all Kubernetes settings and use vulnerability scans to help ensure risks are appropriately accounted for and security patches are applied.
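The "least privileges" item maps fairly directly onto a Pod's securityContext. A minimal sketch (the names and image are placeholders, not from the guide):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app                                # placeholder name
spec:
  securityContext:
    runAsNonRoot: true                     # refuse to start if the image runs as UID 0
    seccompProfile:
      type: RuntimeDefault                 # apply the runtime's default seccomp filter
  containers:
    - name: app
      image: registry.example.com/app:1.0  # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]                    # drop all Linux capabilities
```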
This is probably the hardest part: private networks with private domains. Who runs the private CA, updates DNS records, issues certs, revokes them, keeps the keys secure, embeds the CA chain in every container's trust store, and enforces validation?
That is a shit-ton of stuff to set up (and potentially screw up) which will take a small team probably months to complete. How many teams are actually going to do this, versus just terminating at the load balancer and everything in the cluster running plaintext?
For as fundamental and important as encryption-in-transit is, it's always baffled me that there isn't a simpler, easier way to accomplish it on private networks. Everyone knows it's important, and everyone wants to do it, but it's just such a pain in the ass and so prone to error that even some top security leaders will tell you not to bother because it's such a footgun.
We really need something to help make the process simpler, like how Let's Encrypt made public HTTPS so much easier to do for even the smallest of websites.
I would argue that if you have services then the right place to put encryption and authentication is at the service level. Building secure channels between IP addresses is all good, but do you really want to map roles/identities/privileges to specific IP addresses if those roles/identities/privileges really represent services?
What if you end up spinning more than one container for that service?
How are these containers getting the different secrets they need to identify themselves? Are you attaching IAM roles to them to get secrets from some secret store?
Said it twice before: differently complex. There are plenty of potential “solutions” to the specific scenario you’re describing, but my original comment was more “generally consider X instead of Y so you don’t have to care about Z” rather than “use X in this specific way and it will simply solve every problem with Y”.
Well, there are encrypted CNIs like Weave. I've used Calico over ZeroTier to similar effect. The network is 'encrypted' and there isn't much effort required past initial configuration.
But that's not really the issue. You still have a big plaintext network with a bunch of random stuff talking, no mutual auth and no security controls other than segmentation. That's the tricky problem that mTLS and service meshes attempt to solve.
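For reference, the "not much effort past initial configuration" point holds for Calico too: its node-to-node WireGuard encryption is (as best I recall, so treat this as a sketch) a single flag on the default FelixConfiguration:

```yaml
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  wireguardEnabled: true   # encrypt inter-node pod traffic with WireGuard
```

Applied with `calicoctl apply -f`, assuming the nodes have WireGuard kernel support.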
First, I'll respond to this w.r.t. k8s CNI specifically: all inter-node traffic is encrypted; the only plaintext is localhost. If you're worried about network snooping on localhost you've got bigger problems. As for security controls, that's what Network Policies are for.
Outside of k8s (where one has greater control over how specifically e.g. Wireguard is deployed), the same holds: there is no plaintext outside of localhost. Wireguard is mutual auth; I'm not sure why you think it isn't. Wireguard + firewall is a security control since, well, you have mutual auth, so rules can be applied per-client.
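The Network Policies mentioned here look roughly like this: a default-deny for a namespace plus an explicit allow (namespace and label names below are hypothetical):

```yaml
# Deny all ingress to pods in this namespace by default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: prod            # hypothetical namespace
spec:
  podSelector: {}            # empty selector = every pod in the namespace
  policyTypes: ["Ingress"]
---
# Then allow only the frontend to reach the API pods on port 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: api               # hypothetical labels
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - port: 8080
```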
If Operating Systems had TLS built into the TCP/IP stack exposed by the kernel/system, you would never need to shim it in anywhere. You would just make a system call and use an open file descriptor/socket. One of the many programming-in-1970s-style things we still have not fixed.
But 1) kernel hackers won't implement it, 2) app devs are too possessive of their stack/codebase to just use one standard implementation/interface, and 3) security people are too paranoid to leave something "so important" up to the OS, so they'd rather everyone implement it poorly and in fragments.
The thing is, you can use Let's Encrypt for private networks too. For example, I use a DNS challenge to get a wildcard certificate for a subdomain on my personal site, but those domains only resolve in my house. The wildcard cert isn't essential for this - you could get individual ones - but it was easier for my home lab.
I think both of those focus on ingress? I suppose you could just create your CA with cert manager and manually issue cert requests, but securing in-cluster traffic (automagically) will need some other moving piece, like a sidecar proxy that the service meshes use.
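For completeness, the "create your CA with cert manager" route is roughly this (a sketch; the secret name is a placeholder, and the referenced Secret must already hold the CA's key pair):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: internal-ca
spec:
  ca:
    secretName: internal-ca-keypair   # placeholder: Secret with the CA's tls.crt/tls.key
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-service-tls                # placeholder
  namespace: default
spec:
  secretName: my-service-tls          # where the issued cert/key land
  dnsNames:
    - my-service.default.svc.cluster.local
  issuerRef:
    name: internal-ca
    kind: ClusterIssuer
```

This automates issuance and renewal, but as noted above, getting workloads to actually *present and validate* those certs in-cluster still needs another moving piece.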
As far as a service mesh goes, check out Linkerd! I find Istio much harder to set up and manage. Linkerd is super simple and has always worked pretty much out of the box for me.
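For what it's worth, the "super simple" part is largely that, after `linkerd install`, meshing (and thereby mTLS-ing) workloads is one annotation:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myapp                      # placeholder namespace
  annotations:
    linkerd.io/inject: enabled     # sidecar-inject every pod created here
```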
One thing to keep in mind is that Linkerd is pretty much strictly k8s-only, while Istio and Consul Connect have first-class support for out-of-cluster services as well as e.g. Nomad. Relying on Linkerd digs you waaay deeper into k8s lock-in.
This may be fully acceptable for you, but should not be glossed over.
From my experience, linkerd had the most seamless deployment to get to the most feature-complete out-of-the box experience with monitoring etc. But as it goes with these things there’s a much bigger amortized cost in terms of magic to unwind if you need to integrate it.
Service mesh solutions like Istio / Consul Connect+Vault can help a lot with this.
Depending on the existing size and complexity of your stack those months can be cut down to weeks or even days.
I don't mean to trivialize the time and expertise needed to set up and manage, but if you can afford to run a microservice architecture on k8s already it's definitely not untenable.
Encryption today is pretty much a requirement for any regulated business and required practice for any sane shop, with or without Kubernetes. The only difference is that those services are communicating within the cluster's internal network and not across different machines in the server VLANs.
If anything, setting the whole thing up within the Kubernetes ecosystem can be much easier with the available operators and automation frameworks like cert-manager and/or Istio.
> That is a shit-ton of stuff to set up (and potentially screw up) which will take a small team probably months to complete.
Agree! This is why that "Kubernetes Hardening Guidance" is aimed at the NSA's National Security Systems, not at startups.
Resource needs aside, keeping basic AppSec/InfoSec hygiene is a strong recommendation. Also there are tons of startups that are trying to provide solutions/services to solve that also. A lot of times, it's worth the money.
> It includes hardening strategies to avoid common misconfigurations and guide system administrators and developers of National Security Systems on how to deploy Kubernetes...
> Purpose
> NSA and CISA developed this document in furtherance of their respective cybersecurity missions, including their responsibilities to develop and issue cybersecurity specifications and mitigations. This information may be shared broadly to reach all appropriate stakeholders.
> [6] CISA, "Analysis Report (AR21-013A): Strengthening Security Configurations to Defend Against Attackers Targeting Cloud Services." Cybersecurity and Infrastructure Security Agency, 14 January 2021. [Online]. Available: https://us-cert.cisa.gov/ncas/analysis-reports/ar21-013a [Accessed 8 July 2021].
How can k8s and zero trust co-occur?
> CISA encourages administrators and organizations review NSA’s guidance on Embracing a Zero Trust Security Model to help secure sensitive data, systems, and services.
In addition to "zero [trust]", I also looked for the term "SBOM". From p. 32 of 39:
> As updates are deployed, administrators should also keep up with removing any old components that are no longer needed from the environment. Using a managed Kubernetes service can help to automate upgrades and patches for Kubernetes, operating systems, and networking protocols. *However, administrators must still patch and upgrade their containerized applications.*
Most (but not all) overlay networks are implemented in kernel. If you compromise one node in a cluster, you can fairly trivially snoop traffic, bias other nodes to send traffic through you, or listen via various mechanisms such that you can intercept traffic flowing between workloads not actually located on the compromised node.
So always encrypt everything unless you’re in a very rare environment with central network control that cannot be compromised or intercepted from a given machine.
This would be less of a concern if the cluster's pods were Firecracker VMs, yes?
AWS EKS on Fargate has a dedicated ENI and kernel per pod; the only way to intercept the traffic is when it crosses a network, or with flow control logs. Or if somebody hacked the control plane, but that's always "Game over man, game over!"
And if you've been in that kind of rare environment, those people encrypt everything. They'd encrypt their license plate if they could. You want paranoid, look up laser microphones.
Although with many clusters, compromising a single node is likely to lead to cluster compromise as it allows for all the service account tokens assigned to workloads running on the compromised node to be used by the attacker :)
We do this right now in a totally disconnected env. We have process in place to get images and manifests into our env. All containers have to go through scanning pipelines and have to be approved through a process.
We also for any container that makes requests that does not have the mechanisms for adding certificates we have to rebuild the containers in the disconnected env to insert certificates to allow communication.
It's all handled as part of the manifests as well as we can have our clusters pulled if we are caught using non approved containers and they are all scanned when they are brought into the disconnected environment.
TLS termination is fine for most use cases (basic web services), but if you are protecting PII you need to protect against both external and internal threats.
Maybe a disgruntled sysadmin decides to capture data flowing between the load balancer and the service(s) and sell it to the highest bidder. If traffic is encrypted between the load balancer and the underlying service, that becomes much harder to do.
Isn't this exactly what HashiCorp's Consul can do? Specify services, and set up keys/certs so that all internal traffic is also 'blindly' encrypted? Endpoint services don't care or know about it because it's transparent, but over the internal network it's encrypted?
I agree there is a shit-ton of stuff to set up, but based on the recent Pipeline hack and others where the companies had to pay millions, I would expect more companies to take stuff like this seriously.
We deal with this by having multiple vulnerability scanners. Product A and Product B both scan your active environment. Product A scans Product B. Product B scans Product A. Additionally, make the vendors of those products sign NDAs so your threat actors, other than insiders, don't necessarily even know who they are. An attacker then needs to not only compromise both, but figure out who they are in the first place.
The Bootstrappable Builds community (which camlboot is part of) is working on a lot of different efforts in this area. The main one is going from a small amount of machine code to an entire Linux distro, which is in progress.
Notably I know the Rust compiler has been verified in this way (or at least certain versions of it have been verified), but it shouldn't be hard to do the same for any language with multiple independent implementations.
If your threat profile says you need to audit your vulnerability scanners, you audit your vulnerability scanners. There's not really a problem there right?
I am a mostly non-technical person, but why do we need to resort to firewalls etc. when we could employ a UNIX-like file permission system for network access? Wouldn't it be awesome if we could allow any installed software to contact ONLY whitelisted domains? Of course this excludes web browsers, but you get the idea.
How about our mainstream OSes incorporate that kind of permission system, similar to what mobile OSes already have today?
It's a fair question and certainly is possible to have firewalls on a per-server basis. We do that for incoming traffic primarily. The catch is if that server itself gets compromised then you can't count on those rules still being enforced.
Having dedicated network appliances acting as firewalls means from a security perspective you need to compromise the local machine and then also compromise a dedicated, hardened external system as well. It vastly ups the difficulty barrier.
I didn't know that; learned something today. Thank you!
Again, as a non-technical person, why software needs access to the entire internet instead of whitelisted domains specific to its requirements is beyond me, since we already know how the UNIX permission system works. Is it so hard to extend that to networks, especially since everything is a file in UNIX? Kindly pardon my ignorance :-)
You are right. Software doesn't need access to everything, and it shouldn't have it. Unfortunately, it is easier on the consumer end to leave software access somewhat "open ended". The domain for updates may change, or the software may need to connect to different plugin sources. Unnecessary restrictions on a software's ability to function would fuel support issues. So more sensitive networks will have administrators define these permissions; for a regular consumer, however, restrictive defaults wouldn't be worth the customer-service burden.
I do all those things in the pro version of my RBI (remote browser isolation) product, but I don't use k8s.
- Scan for vulns and misconfigs: I regularly update the underlying distro images, and use security scanning software to monitor dependencies, and regularly update them.
- Run with least privilege: I create a separate, temporary user account (no login, no shell) for each browser and service which has no elevated privileges, as well as run that browser and its service in a group and cgroup that restricts disk, bandwidth, CPU, and memory using block quotas, cgroups, tc, iptables and active monitoring and termination.
- Use network separation to isolate: RBI is basically a network isolation layer between the client (where the human interacts) and the server (where the browser actually runs). I also don't have any privileges (service accounts, SSH keys, trusted IPs) on any of the machines, and they're all single tenant and run inside GCE.
- Use firewalls to lock down connectivity + encryption: I use GCE firewall rules and iptables drop rules to block access to GCP metadata endpoints, as well as to other machines in the subnet. Also, every network request is encrypted (HTTP is https/TLS, WebSocket is wss/TLS, WebRTC is encrypted by default).
- Use strong auth to limit user access: For running the processes I use temporary users. For persistent browser sessions I use persistent users (either system native, or in a DB, always with bcrypt salted hashed passwords). For SaaS and resource control I use high entropy random API keys between each service layer. But I could improve my game for keeping secrets out of private git repos and separating code and config, ideally automatically. I could also improve my game to limit administrator access (right now I just have a single role, with God power, but I should create an admin role with power limited to a project, ideally even on a per-customer level).
- Use log auditing: I do this, but only manually, using various grepping and inspection of various logs, including last and lastb, as well as the service-internal logs. This is likely something I could improve as well.
- Review all k8s settings: I don't use k8s or docker, just run services in this custom sandbox on GCE instances. I see that as a way to limit both attack surface and complexity, as well as minimize some overheads for maintenance and performance. In the longer term these things are worth exploring.
Some useful guidance here, although worth noting that some of it is a bit dated (k8s security can move quickly).
Most notably from a scan through, they're mentioning PodSecurityPolicy, but that's deprecated and scheduled to be removed in 1.25.
There will be an in-tree replacement but it won't work the same way. Out of tree open source options would be things like OPA, Kyverno, jsPolicy, k-rail or Kubewarden.
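As a taste of the out-of-tree options, a Kyverno policy blocking privileged containers looks roughly like this (a sketch adapted from Kyverno's published sample policies; the `=()` anchors mean "check only if the field is present"):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
spec:
  validationFailureAction: enforce      # reject violating pods rather than just report
  rules:
    - name: privileged-containers
      match:
        resources:
          kinds: ["Pod"]
      validate:
        message: "Privileged mode is disallowed."
        pattern:
          spec:
            containers:
              - =(securityContext):
                  =(privileged): "false"
```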
We've actually already moved the official guidance from PSPs to OPA and that's what the primary DevSecOps reference implementation has used for about two months now.
"We" being the DoD, but our guidance is the NSA guidance. I'm not sure why it hasn't made it into the policy pdf, but the actual official IAC has been using OPA since April.
That's awesome. I know a lot of work is going into things like P1.
I scale some large K8s in fed (not DoD)... ATO is fun. Actually unsure how I'd position something like OPA (I actually envisioned them being key back in '17 when working in the Kessel Run realm... called them, and they hadn't been exposed to fed at the time).
Legit question / maybe dumb: where is DoD at in general perimeter security? Outside looking in, and everything before a container runs: network and OS primarily, cloud envs as well. A lot of Fed needs help here before they can comprehend even basic Kubernetes authorization. It's also generally more important (at least from a controls perspective) in non-DoD environments than something like security context in pods.
P1 has been leading the pack here. Most of the guidance mentioned in this guide has been coming from the CSO's office [0] for a while. We're using OPA extensively for not just container level policies but blocking column/cell level access in queries. We have multiple roles [1] to help Kessel Run, Space CAMP, and other software factories with this.
This is why I think big vs little government is really missing the forest for the trees in a lot of contexts (unless your overall goal is to minimize taxes and regulations at all costs). It's really a debate about the nature of bureaucracy. Process vs nimble. You can organize things to promote either, depending on your actual goals.
Unfortunately small government activists have recognized this and have enacted policies that promote incompetence as much as possible. "Good enough for government work" is a choice, not an inevitability.
I wonder if there's a third option, a decentralized government of small nodes, which can orchestrate their activity to rapidly scale in the need of large resource projects.
In-tree replacement is coming in v1.22...as in, just a few weeks away. It uses admission controllers, just like OPA/Kyverno et al, hence the current guidance to use one of those.
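Concretely, the in-tree replacement (Pod Security admission) is driven by namespace labels rather than PSP objects; a sketch, using the level names from the Pod Security Standards (namespace name is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments                                    # placeholder namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted  # block pods violating the "restricted" profile
    pod-security.kubernetes.io/warn: restricted     # also warn clients on violations
```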
I used to study and focus on security a lot more and keep up with trends. After several interviews this year I realized a lot of jobs prioritize leetcode over everything else. It's pretty annoying, and it makes me wonder: if the focus for tech workers is leetcode above all else, then no wonder so many companies have insecure apps and servers.
I applied for a job that wanted someone who has experience with SAML. I've actually written my own hobby IDP, and I can diagram the handshake off the top of my head. I've spent a lot of time learning how to write custom decorators to handle access restrictions. I failed my interview because they wanted me to leetcode some shit with 3d geometric volumes. I'm sorry but what does this have to do with SAML or security?
Wow that's dumb. I've done some reading on 3d computational geometry for hobbyist game engine reasons, and in my admittedly limited experience, very few of the algorithms involved are intuitive enough to be derivable in an interview setting.
Consider the typical company is running servers/instances that haven't been updated or rebooted in 6 months to 3 years. Never mind the multiple year old software dependencies in their apps...
Thanks! Yeah, articles like this I would have studied in greater detail in the past, but this year I realized I need to improve my leetcode/algo times. Long term I'll keep focused on security and important topics, but in the meantime... time to zig-zag a binary tree :(
At my company the head of security is also the chief programmer. Not sure if that's a good thing but he's got 30 years experience and likes to tell war stories.
What protects the companies and freelancers who write these insecure systems from liability? Is it just a blanket “we are not liable” clause in every contract?
There was a famous case about 6 years ago where Google didn't hire the author of Homebrew because he couldn't whiteboard a leetcode-type question. He posted to Twitter, then it was discussed heavily on tech sites.
In my opinion the problem has gotten worse. I spoke with a former Microsoft Product Manager early this year and he mentioned "Highly experienced engineers give the worst interviews" .. based on the current environment. He mentioned it's because the questions that come up are stuff people learned 20 years ago and never used in real-life.
I was invited by several FAANG companies this year to interview and did one interview (and turned down others) but had to cram for 2 months doing leetcode.
I realize it's a game but have mixed feelings. I think the interviews are bad if it's the only option but part of me knows the interview process is so bad it will likely discourage people from being Software Engineers and keep salaries high for many more years.
leetcode is a website that acts as a programming dojo of sorts where programmers can prove their skill in a measurable way and thus increase their odds of being hired.
Employers use it to loosely gauge a programmer's basic skill level, as well as their competency to think clearly and cleverly.
The DoD maintains its own registry of hardened container images they call the Iron Bank. I guess they can't issue guidelines to the general public that you should use these, but the DoD has to use them. Which kind of sucks, because they may be hardened, but they also break all the time because the people responsible for hardening them can't possibly understand all the myriad subtleties involved in building and deploying software packaged with dependencies in the same way the actual software vendors do. They make some serious rookie mistakes, like just straight copying executables out of a Fedora image into a UBI image, which works perfectly fine when a brand-new UBI release happens and it's on the same glibc as Fedora, then immediately stops working and all your containers break when Fedora updates.
They may suck at building containers, but this also sounds like a release management issue. Both the producers and consumers of the release need a test suite to validate the new artifacts before they can make it into a pipeline to eventually deliver to a customer use case. (But also they should 100% not be copying random binaries)
For what it's worth I've seen worse from corporations. Bad hires lead to bad systems.
I work on Platform One, and we use and deploy new versions of these containers weekly and have never had them break in that way. In the beginning, when I was on the Kubernetes team, we struggled with the containers just not working at all, but they have gotten better.
Now I work on deploying, and we run every container from IB and have few issues. If you find problems, report the images and they will fix them pretty quickly.
A lot of this applies to containers in general. Not complaining; it's well written, but I wish they would break out the non-kube container stuff into general container-sec advice for people.
This is a great point. And containers don't even really exist in the first place, so really there should be (at least one of) a family of docs about securing the various namespaces, cgroups etc in modern Linux releases, and a doc about how to secure them in combination with each other.
You don't use this guide as a bible, but take it into account and compare it with other common security advice in the field. If you get similar results, it's most likely a good list of advice.
What yields the lowest risk - spending a ton of time hardening one cluster, or building multiple clusters to reduce the blast radius of bugs and misconfigurations?
> What yields the lowest risk - spending a ton of time hardening one cluster, or building multiple clusters to reduce the blast radius of bugs and misconfigurations?
Not sure this is a valid dichotomy.
If you are spinning up multiple clusters, you are presumably doing so in an automated fashion. If so, then the effort of hardening is very similar. It doesn't really matter where you do it.
Multiple clusters may have a smaller blast radius, but will have a larger attack surface. Things may be shared between them (accounts? network tunnels? credentials to a shared service?) in which case an intrusion in one puts everyone else at risk.
> If so, then the effort of hardening is very similar. It doesn't really matter where you do it.
Nope. If the clusters are separate it limits how damaging a compromise of the cluster is. This is why cloud providers don’t stick you on the same k8s cluster as another tenant.
> Multiple clusters may have a smaller blast radius, but will have a larger attack surface. Things may be shared between them (accounts? network tunnels? credentials to a shared service?) in which case an intrusion in one puts everyone else at risk.
It’s not really clear what you’re trying to say here. If someone compromises credentials shared between all clusters that’s the same as compromising credentials used by one mega cluster.
> Nope. If the clusters are separate it limits how damaging a compromise of the cluster is.
But if the clusters are configured similarly, a flaw in one is likely present in the others. GPs point is that if you invest in hardening, you can easily apply it to multiple clusters.
> It’s not really clear what you’re trying to say here.
I assume they mean having more clusters present means there are more opportunities to be compromised (e.g. more credentials to leak, more API servers to target, possible version skew, etc.).
> But if the clusters are configured similarly, a flaw in one is likely present in the others.
That doesn’t matter. The point is that you isolate applications/tenants into different clusters. So if someone exploits their own, they haven’t gained access to some other application.
> I assume they mean having more clusters present means there are more opportunities to be compromised (e.g. more credentials to leak, more API servers to target, possible version skew, etc.).
That doesn’t even make sense though. In our strawman scenario these are cookie cutter things. Many is not more vulnerable than one in this case.
You can't skip "spending a ton of time hardening one cluster" anyways.
Having multiple clusters may help reduce the blast radius of _certain_ attacks, to some degree. However, managing multiple clusters is a lot more difficult than managing one, and you will potentially replicate bad practices, vulnerabilities to multiple places and increase maintenance burden.
The one benefit you get is protection from bugs in Kubernetes itself and a reduced blast radius. Even if you could produce a secure and H/A cluster, you still leave yourself open to Kubernetes bugs and configuration mistakes such as adding a network policy that blocks all communication across all namespaces.
Multiple clusters protects you from these types of configuration mistakes by reducing the blast radius and providing an additional landing zone to roll out changes over time.
And making it so that "many clusters" look exactly like "one cluster" is one of the goals the kcp prototype was exploring (although still early) because I hear this ALL the time:
1. 1 cluster was awesome
2. Many clusters means I rebuild the world
3. I wish there was a way to get the benefits of one cluster across multiples.
Which I believe is a solvable problem and partially what we've been poking at over at https://github.com/kcp-dev/kcp (although it's still so early that I don't want to get hopes up).
At a high level, almost anything you would want to use multiple clusters for can be done on a single cluster, using e.g. node pools, affinity, and taints to ensure that workloads only run on the machines you want them to. As a simple example, you can set up a separate node pool for production, and use node affinity and/or taints to ensure that only production workloads can run there.
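The node-pool pattern described above can be sketched like this (label and taint keys are hypothetical): taint the production pool, then give only production workloads the matching toleration and selector:

```yaml
# Production pod: may only land on prod nodes, and tolerates their taint
apiVersion: v1
kind: Pod
metadata:
  name: prod-app                          # placeholder
spec:
  nodeSelector:
    pool: production                      # hypothetical node label
  tolerations:
    - key: "dedicated"                    # hypothetical taint, e.g. set via:
      operator: "Equal"                   #   kubectl taint nodes <node> dedicated=production:NoSchedule
      value: "production"
      effect: "NoSchedule"
  containers:
    - name: app
      image: registry.example.com/app:1.0 # placeholder image
```

The taint keeps non-production pods off the pool; the nodeSelector keeps production pods on it.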
One exception, as others have mentioned, is blast radius: with a single cluster, a problem with Kubernetes itself could take down everything.
Another issue is scaling limits. We've found a few dozen ways to break a cluster by scaling along a certain axis. (Most are not related to "vanilla" Kubernetes but the backing cloud provider or specific add-on components.)
Other management tasks are easier when you have separate clusters, such as applying environment-specific OPA policies and not having to filter them based on labels or annotations you hope everyone is using correctly.
At our very large org we do both. At least two clusters per region to isolate platform changes, all hardened to the same standards using automated tooling.
"Nauarch, in ancient Greece, an admiral or supreme commander of the navy, used as an official title primarily in Sparta in the late 5th and early 4th centuries BC." - Google cites Britannica (!)
First you should configure some kind of authentication. It is fun to remember this 3-year-old Tesla example [1]: a publicly accessible Kubernetes Dashboard.
In general the NSA functions more like 2 agencies, one focused on the "red" side (hacking, breaking crypto, sigint stuff) and one focused on the "blue" side (protecting US assets from being hacked, developing better/new crypto, providing guidance on security).
Both sides are good at their jobs and for what it's worth, my understanding is that the blue side really does want to keep your shit from being hacked.
> Not sure I’ve ever read the NSA providing hardening guidance on anything before.
The NSA made SELinux, SHA-1, and SHA-256.
SHA-1 was specifically a slight change to SHA-0 that was unjustified at the time, but over the next 3-5 years attacks on SHA-0 surfaced that SHA-1 was not vulnerable to.
I used them back at Lockheed as early as ~2005? Although they were mostly around hardening BSD IIRC... (which became SELinux? I can't recall), and at the time they were really "best practices": things that you want to make sure you have done if you expect to pass any sort of audit (SOX, SAS70, etc.).
Sarcastically, we would say "they already have back doors in everything, they just don't want any other Bad Actors getting in their yard"
I keep forgetting NSA's job is to protect instead of maliciously eavesdropping on Americans. Given their prior probability of being a bad actor I'd take any security "guidance" they issue with a huge grain of salt.
We all know it's the National Insecurity Agency[0], and that the NSA hoards & stockpiles 0day. They very rarely release tools and research papers designed to strengthen our IT infra, since they sit on so much 0day. There's no balance.
I don't buy that they're 50% red team, and 50% blue team. More like 99% red team and 1% blue team.
> We all know it's the National Insecurity Agency[0], and that the NSA hoards & stockpiles 0day. They very rarely release tools and research papers designed to strengthen our IT infra, since they sit on so much 0day. There's no balance.
Well, if the NSA does have loads of 0days, then it's still better for them to give good security advice to strengthen infra, because it will limit the access adversaries have while the NSA still has all the 0days anyway.
i.e. they are advanced enough to not need to walk through an open door, so they might as well encourage others to close the doors because that will increase national security (while presumably not limiting their own access).
One of their missions is infrastructure security of nationally important assets. Usually this is military stuff. But think power grids, etc…
NSA ironically puts out some good security stuff.
Their "Manageable Network Plan" pdf is a must-read for anyone trying to wrangle a new environment, even if it isn't followed by the owners of said environment.
I get a "cannot find requested file" page when trying to get the actual file. The IAD library stopped being updated in 2018 and the link has apparently bitrotted. Cryptome still has a copy, FWIW [0].
Having said that, the last thing I tried implementing from the NSA was a simple systemd service to disable ptrace [1]. The provided service definition had at least three errors, and the instructions themselves were incomplete. Not exactly a confidence builder, but I'll take a look at this one so thank you.
Have absolutely zero background knowledge here, but just to be pedantic, your argument is structured as a logical fallacy [1].
While we could maybe estimate the relative sizes of the groups you mention and compare them to guess the strategy/policy/tactics, it's not clear that would be accurate; or maybe we could infer based on some heuristic or metric (like budget as a proxy for headcount), but even then it's not clear how certain that guess would be. So it's not obvious how "we all know" it's 99/1 vs 50/50 vs any other split.
Push come to shove, I would probably agree with your premise and conclusion, but I really have no idea, so apologies for being nitpicky; without a background on the technical details, it's likely I'm wrong.