I guess I'm taking some risks as I'm not really authorized to use social media, but at AWS there is a Sauron-like focus on not letting internal engineers view customer data. On my service we don't even have tools to do it, and the security controls are very tight and getting tighter all the time. It's a big fucking deal, if not the biggest fucking deal, besides KTLO.
Posting to get ahead of some of the comments here on security posture.
The first premise when using a vendor is trust. If you don't trust them then don't use them. So far, AWS has proven to be trustworthy in my opinion. From what little I've seen about their operations they seem to give a damn and you have to assume that iceberg goes deep.
If you don't trust them, don't use them. Provide evidence to your leadership of malicious intent and provide an alternative and they'll back you.
Executives make decisions based on a small amount of evidence and go with their gut from there. So far AWS has a good reputation but if they're not careful the mood could change quickly. I don't think something like this shows any malice and I have to just assume any cloud provider _can_ access all of my data but chooses not to in order to preserve their reputation which ultimately generates more revenue long term than any single piece of data.
This seems very binary / black-and-white to me - it’s not “you either trust them or don’t”. I may trust AWS to keep the cloud running, but I may not want to trust all their staff with access to my private data.
If a provider puts themselves in a position that they’re entirely unable to access my data, or it being extremely difficult, that would actually increase my trust in them.
If all customers’ private S3 data can become accessible to all AWS support because the wrong checkbox was selected, that definitely hints at a bigger problem and erodes my trust in that vendor.
And how they respond to this incident could restore that trust, or continue to erode it even further.
> If a provider puts themselves in a position that they’re entirely unable to access my data, or it being extremely difficult, that would actually increase my trust in them.
This is never possible if you're using KMS / server side encryption or no encryption at all. Your data on these services is always visible if they were to try to read it, whether that be forging KMS requests to decrypt data or passively snapshotting VM memory for inspection. Technically this might be solved via recent datacenter CPU security improvements but there will always be flaws and 0-days that people with money and a will can use to bypass these protections.
Use rclone with encryption for S3 and consider anything else potentially visible.
Of course, that’s why I said “extremely difficult” — it’s never completely impossible. I was thinking about Apple, who made iCloud reasonably e2e encrypted: of course, they can still push malicious updates targeting specific devices, but that needs to be a deliberate action by the organization, not a single rogue employee.
It’s not entirely unlikely that AWS was hacked / socially engineered in order to get this privilege in there, perhaps because some specific s3 buckets were being targeted.
As an organization, you need to have a high enough level of protection that these things are just not possible.
So maybe this whole idea of managed IAM policies being installed in all customers’ accounts is just a fundamentally bad idea, and should just be given on a case by case basis.
While I recognize that's the ideal goal, and most appropriate, I think there may be gaps in practice. My team just discovered yesterday that a Kinesis Firehose stream can still deliver to an S3 bucket that lacks a bucket policy and whose ACLs disable any access to it at all. That diminished my confidence a little that all teams are perfectly in compliance with the overall goal.
Kinesis Firehose uses an IAM role to deliver data, so delivery within the same account does not necessarily depend on permissions on the bucket. Removing s3:* permissions from that IAM role or adding an explicit deny statement to the bucket policy would stop the flow of data.
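To make the second option concrete, here is a minimal sketch (the role ARN and bucket name are made-up placeholders, not anything from this thread): an explicit Deny in the bucket policy overrides whatever Allow the delivery role carries.

    # Sketch: attach an explicit Deny for a (hypothetical) Firehose delivery role.
    # An explicit Deny wins over the role's own Allow, so delivery stops.
    import json
    import boto3

    deny_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/firehose-delivery-role"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-locked-down-bucket/*",
        }],
    }
    boto3.client("s3").put_bucket_policy(
        Bucket="my-locked-down-bucket", Policy=json.dumps(deny_policy)
    )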
I thought this was true at the engineering level, but "authorized" cross-customer data sharing (where there is some debate about what's customer data vs what's platform data) at the business level seems... rampant? Just curious for more perspective on posture.
When I was at Facebook (2018-2020) there were in fact few permission barriers (click through the screen warning you to follow the rules, and that's at the UI level, not to mention pulling stuff directly from Tao through the old PHP interfaces still lying around, or just building a user context from scratch), but plenty of auditing and automated detection.
Okay, opinions up front: I don't think this is worthy of "declaring a security incident." Having some experience working behind the scenes, just because this policy was changed this way doesn't mean "All AWS Support personnel had unrestricted access to your S3 objects." To me, this reads as Twitter inflammatory nonsense. Here's why:
* KMS Encrypted objects would not be accessible because the support personnel would need permission policies that grant `kms:decrypt` permissions to encrypted objects. The only way this could wind up happening is if you are granting the AWS Support principal access in the KMS Key Policy (a quick way to check for this is sketched after this list).
* Objects with a default-deny bucket policy could not have been circumvented with the support team's escalated privilege. So if you have a policy that looks something like this, that data was not exposed:
{
  "Effect": "Deny",
  "NotPrincipal": [...],
  "Action": "s3:*",
  "Resource": [...]
}
* Internal checks. AWS has a lot of protections and checks in place to prevent their support personnel from accessing metadata about S3 objects. They don't have tools to fetch the actual objects unless you're really high up the food chain. Think people with a legal or security-related reason to need to review data.
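On the first point above, here's a quick sanity check you can run yourself: a sketch (the key ID is a placeholder) that dumps the Allow statements in a CMK's key policy so you can spot any principal you didn't intend to grant, such as an AWS Support principal.

    # Sketch: list who the key policy actually allows on a customer-managed key.
    # "EXAMPLE-KEY-ID" is a placeholder; "default" is the standard policy name.
    import json
    import boto3

    kms = boto3.client("kms")
    policy = json.loads(
        kms.get_key_policy(KeyId="EXAMPLE-KEY-ID", PolicyName="default")["Policy"]
    )
    for stmt in policy["Statement"]:
        if stmt["Effect"] == "Allow":
            print(stmt.get("Principal"), "->", stmt.get("Action"))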
Nevertheless, I'll share some nuggets of wisdom I've accrued over the years, in hopes of saving y'all some time:
If you have an NDA with AWS, I'd recommend reaching out to your TAM and asking them about what the potential exposure was; and make sure to ask about the internal access control mechanisms.
But everyone who's concerned and DIDN'T set up data access logging already: 1) Consider turning that on to trace potential disclosures in the future. 2) Open up a case with Premium Support under CloudTrail, state that you have a security incident, and ask to retrieve data events for 2021-12-22 to 2021-12-23. If granted, save that sucker in S3 and query it for requests coming from `support.amazonaws.com` in Athena. [0]
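If it helps, here is roughly what that Athena query could look like via boto3. The database, table name, and output location are placeholders from the standard CloudTrail-table setup in the AWS docs, and whether support access shows up as `useridentity.invokedby = 'support.amazonaws.com'` is my assumption, so adjust the filter to whatever marker you actually see in your logs.

    # Sketch: query a CloudTrail Athena table for requests made by AWS Support
    # during the window in question. Database/table/output location are placeholders.
    import boto3

    athena = boto3.client("athena")
    athena.start_query_execution(
        QueryString="""
            SELECT eventtime, eventname, requestparameters
            FROM cloudtrail_logs
            WHERE useridentity.invokedby = 'support.amazonaws.com'
              AND eventtime >= '2021-12-22' AND eventtime < '2021-12-24'
        """,
        QueryExecutionContext={"Database": "default"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )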
"Declaring an incident" means there's something to investigate, it didn't mean anything bad has happened: it's detection of a non-conformity. The output of the incident would look similar to what you wrote.
Any time the wrong permissions are assigned and confidentiality is potentially breached, I think you have to have an incident. Arguably, in some jurisdictions it's a legal requirement, to establish that you had a near miss and not an actual breach.
Exactly. This is something we'll be reviewing in our monthly security review at work, discussing what the impact was, why we were not impacted, and any action items we want to take.
Declaring an incident doesn't mean sending out a breach report or anything particularly dramatic, though I can see how as an outsider it may sound that way.
This is partially right. The best way to think of an incident is that it has negatively impacted Confidentiality, Integrity, or Availability (the CIA triad).
An incident does mean something bad has happened, and requires action. An example of an action can be to investigate the impact, or to shut something down, or to patch something.
> * KMS Encrypted objects would not be accessible because the support personnel would need permission policies that grant `kms:decrypt` permissions to encrypted objects.
This is only true if you are using SSE-KMS *and* are creating/managing the CMKs used to encrypt objects. If you’re using SSE-S3 or SSE-KMS with the default AWS S3 key (aws/s3) there is no key policy to manage.
Of course SSE-C or 100% customer-managed crypto would be immune as well, but under different mechanics.
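For what it's worth, opting into a customer-managed key is just one extra parameter on the upload; a minimal sketch (the bucket name and key ARN are placeholders):

    # Sketch: SSE-KMS with a customer-managed key, so kms:Decrypt is governed by
    # a key policy you control rather than the default aws/s3 key.
    import boto3

    boto3.client("s3").put_object(
        Bucket="my-bucket",
        Key="sensitive/report.csv",
        Body=open("report.csv", "rb"),
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
    )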
> Objects with a default-deny bucket policy could not have been circumvented with the support team's escalated privilege.
I would wager this is done for a vanishingly small percentage of buckets used in production. Less than one percent for sure.
The general point you’re making seems to be that if you had a comprehensive, defense-in-depth security strategy for your cloud computing environment then this would have had no effect, and I do agree with that. I just think that in reality this would have provided access to wide swaths of customer data.
AWS uses envelope encryption. Data encryption keys (for example bucket-level keys) are generated and managed outside KMS for some time by some services, to minimize calls to KMS, which can be expensive.
Thus, I imagine that with AWS-managed KMS keys, in some cases permission to the KMS service is not needed to access data.
Customer-managed KMS keys (CMKs) are better in this respect: they can be controlled and audited.
> * Objects with a default-deny bucket policy could not have been circumvented with the support team's escalated privilege. So if you have a policy that looks something like this, that data was not exposed
Service accounts are not constrained by customer bucket policies. In fact, service-linked roles are not even restricted by SCPs:
"SCPs do not affect any service-linked role. Service-linked roles enable other AWS services to integrate with AWS Organizations and can't be restricted by SCPs."
Sure it is. AWS now needs to go on record as to 1) why they did this, 2) how much of this access was used, 3) who is able to use it, and 4) whether this was done at the behest of some government.
Such concerns specifically led to my decision of only uploading sensitive data to S3 with client-side encryption. Since the aws cli tool only supports server-side encryption with keys stored on Amazon servers (where the non-default managed keys cost about 1 USD per month), I decided to simply symmetrically encrypt the backup of my syncthing data volume with AES256 using gnupg and only then push it to the S3 bucket.
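For anyone who'd rather stay in Python than shell out to gnupg, a rough equivalent of that workflow (this swaps gnupg for AES-256-GCM from the `cryptography` package; the bucket name, object key, and key handling are placeholders, not my actual setup):

    # Sketch: client-side encrypt, then upload. Only ciphertext ever reaches S3.
    # BACKUP_KEY_HEX must be a 64-hex-char (32-byte) key kept outside AWS.
    import os
    import boto3
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = bytes.fromhex(os.environ["BACKUP_KEY_HEX"])
    nonce = os.urandom(12)

    with open("backup.tar", "rb") as f:
        plaintext = f.read()

    ciphertext = nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    boto3.client("s3").put_object(
        Bucket="my-backup-bucket", Key="backup.tar.enc", Body=ciphertext
    )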
this should be the default user behaviour for any cloud storage. Don't put unencrypted (company) data on a cloud infrastructure you don't have full control over.
Also reminds me of the (hyped?) "outrage" when a former Facebook developer stated that they used to have a "default password" that allowed fb devs to log into every account, and the media were like "omg they could have logged in and seen your photos". I mean... yeah, they're the developers, they could always do that even without the password.
> yeah they're the developers they could always do that even without the password
Not really. Obviously Facebook the company can always access your data. Whether or not an individual developer can do the same, which developers can do it, how they can do it, and under what level of supervision, is a design choice.
It is possible to design a system with a very high level of security, and ones with none too. As with any design consideration it has trade-offs. A super secure system might introduce dev and operational frictions which the company might deem unacceptable. But even with that consideration, the question is a lot more complicated than a simple “yeah they’re the developers”.
Sadly, "yeah, they are the developers" applies far more often than any other scenario.
Unless a business is heavily regulated and checked for compliance, the burden and friction introduced by developer access controls to the data of the software they write is anecdotally not seen as a positive investment in any company I've seen the internals of.
> Just assume every engineer has access to everything.
Wise rule to live by. I would certainly advise everyone to assume that.
On the other hand geek_at was talking about a slightly different thing.
They were talking about how the media criticised FB for having too lax controls on private information. geek_at even called it an "outrage".
We can and should absolutely ask platforms to do better while at the same time playing it safe ourselves with the data we control. There is no contradiction there.
There is another layer in which it feels we are talking past each other. You mention zero days, and yes those are a thing, and yes an insider is in an excellent position to find them and exploit them. Finding them and patching them is a good idea for sure. (For many reasons.) But the FB thing mentioned wasn't about an exploited zero day. It was a company-sanctioned system and associated work practices. We can demand that a company develop better practices (where not every engineer needs this high a level of access to do their job) without expecting them to find and patch every single vulnerability.
Not at Amazon, but I've def written internal-only-and-never-used-outside-of-team-type tools that have obvious security issues, just to let non-devs get things done.
> Whether or not an individual developer can do the same
This was early-early facebook. Like under 10 developers back in the day. But obviously today a facebook frontend developer should not and probably has no access to the database
It should be the default behavior in places where it makes sense - customer PII data, financials, etc. I'm not going through the headache of implementing encryption for otherwise public images.
Maybe AWS should have another object storage product that's specific to sensitive material. I know that would flip Corey Quinn's lid because it would be yet another AWS product (h/t to him for actually having a valuable twitter post and not just snark) but I honestly don't care - I'd rather have an extra level of confidence. For this there would be absolutely no data access.
Tangentially, I wonder if this role has been deployed to GovCloud?
What does "full control in the cloud" mean to you? It sounds like that's an oxymoron in your opinion, but correct me if I'm wrong.
I get the idea, but also realize this is fundamentally incompatible with using the range of services at AWS. Fringe-future tech aside, you need unencrypted data to process it and use it. AWS isn't just S3, it's lambda, it's hosting and data science and databases.
Having just read the tweet, as weird as it is to give AWS the benefit of the doubt, I agree with others in that thread. If you KMS encrypt your data, then the engineers will likely only have the ability to see encrypted data. My guess is that there are processes and monitoring in place to ensure this is only used as a break-glass.
Have you ever dealt with automation of AWS resources? There are definitely issues where, by putting incorrect permissions on a KMS key or an S3 bucket, not even root can get your data back. This is likely what this is for. Customers would rather have a non-removable AWS break-glass than their own root account.
When the Australian Cyber Security Centre did their security assessment of AWS, they actually recommended that Australian government agencies not do this. Their recommendation was for agencies to use KMS. They considered that if agencies managed encryption keys themselves, the risk of them misplacing their keys and losing access to their data was too high.
The code says it uses the public key for uploading/encryption, and the private key for downloading/decryption. It's useful to be able to split writing and reading with cryptographic guarantees, for security or organisational purposes. E.g. if a client only has the public key and should it be compromised, it can't read the data. Or it allows you to have less strict access controls on uploading clients.
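Not the code being discussed, but a toy sketch of that split using RSA-OAEP from the `cryptography` package (for objects of any real size you'd wrap a per-object symmetric key with the public key rather than encrypting the payload directly):

    # Sketch: the uploader holds only the public key, so a compromised uploading
    # client can write new data but never read back existing data.
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    def encrypt_for_upload(plaintext: bytes, public_key_pem: bytes) -> bytes:
        return serialization.load_pem_public_key(public_key_pem).encrypt(plaintext, OAEP)

    def decrypt_after_download(ciphertext: bytes, private_key_pem: bytes) -> bytes:
        key = serialization.load_pem_private_key(private_key_pem, password=None)
        return key.decrypt(ciphertext, OAEP)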
I wish AWS official clients did this automatically by default. Customers could opt out of encryption if it is public data. When an AWS account is created, it would also create a default CMK (specific to that AWS account) for the entire service, which customers could change if they want.
Is there a managed service that does this? A Dropbox-like UX that client-side encrypts everything and then persists it on S3 or the like. I don’t want to fiddle with it myself but I’d pay for that kind of service
All that's changed is that this role is now visible in IAM and any API calls by Amazon support tools will be logged in AWS CloudTrail - I don't think AWS has any more access now than they did before.
It's obvious to anyone who has had problems with any AWS services (lambda functions that couldn't be deleted, services not propagating or operating properly) that support has access to them via their tools.
The policy in question had “s3:GetObject” permission to “*” added for a few hours. And CloudTrail logging doesn’t capture GetObject API requests by default. This role is specifically for metadata only access. Yes, service teams have behind-the-scenes escalation tools for bugs on the backend, but the more numerous front-line support staff should only be able to view metadata. The GetObject permission would have let them view actual potentially sensitive data.
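For anyone who wants GetObject captured going forward, data-event logging can be switched on for an existing trail; a sketch (the trail name and bucket ARN are placeholders, and note that data events are billed separately):

    # Sketch: log S3 data events (including GetObject) for one bucket on an
    # existing trail.
    import boto3

    boto3.client("cloudtrail").put_event_selectors(
        TrailName="my-trail",
        EventSelectors=[{
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [{
                "Type": "AWS::S3::Object",
                "Values": ["arn:aws:s3:::my-bucket/"],
            }],
        }],
    )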
The problem is nobody who is getting upset about this had any reasonable expectations to begin with. They're just now realizing they might need to protect their data against attackers inside AWS, and now they're caught with their pants down, so cue the hand-wringing.
Yes. As I wrote in https://news.ycombinator.com/item?id=29663566, service accounts are not constrained by customer bucket policies. In fact, service-linked roles are not even restricted by SCPs:
"SCPs do not affect any service-linked role. Service-linked roles enable other AWS services to integrate with AWS Organizations and can't be restricted by SCPs."
When I worked at AWS, this was primarily used to check the permissions of an object. I know how dumb customers can be; for the most part this is used to see why a customer cannot delete a bucket or object, that sort of thing. I don't remember having the ability to see actual customer data; only metadata is accessible.
Edit: Based on what I know, I'm pretty sure support will not be able to see any of the customers' data.
Or how broken the tooling is for IAM + S3 + other services (for example Athena and Glue).
Several times I had to explain to support that we do not want s3:* anywhere in our infra, because they insisted it is the easiest solution so they do not need to waste their precious (paid by us) time figuring out which exact permission is missing, something I as a customer have no way of figuring out.
Many of us have been working on cloud infra for 10+ years and we still struggle sometimes to set up especially the newer services.
I really like how you conclude that this is somehow the customer's fault. I find it entertaining how the decent support staff of Amazon admit that the tooling is subpar, because they have a different system internally to check why S3 is throwing a 403. As a customer we do not have anything but the API.
And no, this is not because the customers are dumb. I can't wait for the moment when AWS has to actually compete with other cloud providers, because this arrogance has to go.
This. The tooling and error messaging around IAM is inconsistent and lacking. I’ve even seen AWS support be completely wrong about why IAM is denying something, so I am guessing their internal tooling isn’t much better.
I caught on to the fact that they have much more finely grained logging than the users do (e.g. underlying specific access denied errors which are covered by a generic one users get), and they sometimes report what they see there, with no consideration of the effect on the users. You can sometimes get some details on how the services work underneath.
It happened several times with Glue, mentioned by the user two replies above (usually the schema registry, which requires *s in the resource element of the policies to work).
Maybe a more constructive way to look at this would be that people simply do "dumb" things. In customer support where you only see those moments, it might not always seem that way, but dealing with people's simple mistakes is also educating them to do better next time.
People can be ignorant, lazy, not give a shit about the work they are doing, have poor learning ability and/or skills, and cross their fingers, mashing buttons, hoping everything just works, and then expect everyone else around them to help them out of their screw-ups.
If you've ever worked in CS, or known anyone that works in CS, you know that there are an absolute fucking shitload of these people. Often in roles they are unqualified for and with privileges and power no sane person would ever give them.
That's true. It's also true that AWS's IAM system is pretty complex and not incredibly well designed. AWS internally makes mistakes with it with some regularity.
I find this insulting as a customer. Is AWS usually contemptuous of its customers?
I don't think I've ever called my customer "dumb", and working as a consultant I've seen all kinds of interesting things.
People make mistakes. They're always in a hurry. They may have a hard time understanding ambiguous, complex or incomplete documentation. The interface may be confusing and lead them to bad solutions. Come on, support is there to help.
>I find this insulting as a customer. Is AWS usually contemptuous of its customers?
Oh come off it. We've all seen the idiotic things that "users" can do. Someone complains something isn't working. Then you go through the steps to see what they have done, and you think "why would you ever do that?" We've all been there, and if you haven't been there then you just haven't had much interaction with "users".
I've had many AWS support engineers (and higher engineers) look at things in our env and say "I've never seen that before" and have no clue what was happening. It's a two way street. Everybody can't know everything. And remember that many devs in the real world have much broader domains than AWS engineers - I have to know every nuance about 30 AWS services, as well as my own applications and my own domain. An AWS engineer would be limited to having a deep understanding of one or a few services, and has internal experts on individual services to reach out to when they don't have some information. But sometimes even AWS devs might not be aware of a little line in the Lambda docs like "Background processes or callbacks that were initiated by your Lambda function and did not complete when the function ended resume if Lambda reuses the execution environment. Make sure that any background processes or callbacks in your code are complete before the code exits." [1] There are gotchas like this with every service, and missing a single line within the novella of docs AWS provides for each service is not a significant failing. There are also issues and concerns that are completely undocumented and are only learned with experience.
As a developer for a SaaS, I have to spend some time on support every day, including for devs who have refused to read our documentation for a particular service we provide (and the only one these devs use). It's frustrating, I know. You should assume that the developers who are your customers are unlikely to be stupid, and are instead just not informed about something or haven't read the docs (maybe they didn't know where to look, or like many, they are too busy to justify spending a day reading the docs for lambda). Best thing to do is direct them to the relevant parts of the documentation and do your best to help those people.
I am not an expert in AWS, but I have been using it for far too many years and am intimate with a number of workarounds for common problems (fuck you, CloudFormation).
But, I have sent off helpdesk requests for things that turn out to be me being very stupid.
As a customer, I don’t take it as an insult, we all can (as per gp) be dumb on occasion, without actually being dumb in general. On the other hand, the comment did not offer much assurance with regards to the topic at hand either.
This sort of personal attack is unwarranted and extremely unfair. AWS is renowned for its byzantine, ever-changing and expanding nature, to the point where it's practically impossible to know extremely basic things such as what you are paying for and how much you are paying.
It’s their servers, they can do whatever they want with them. What are you going to do about it? They have physical access; you have an API key.
Suggestion: if you want to secure your data, don’t put it on other people’s computers, and for fuck’s sake don’t store your crypto keys on someone else’s computer.
That is a really false statement. This is why contracts, audits,... exist, and they define what each party can and can't do. When in violation this could result in huge fines, loss of business,...
You can also securely store your data on other servers by using client-side encryption.
Not every business/person has the means or knowledge to have their own datacenter.
> You can also securely store your data on other servers by using client-side encryption.
Hey but you have audits, contracts, why would you need that? You are effectively saying the same thing that parent comment is. You're just offering a more practical solution.
There are many reasons to do client-side encryption; one is that you want to store the data on multiple storage providers but with the same key.
A national law of the country explicitly tells the company to do so, or a company you are in contract with asks you to do so. The key that S3 can provide is not good enough for your internal usage,...
Stop looking at everything purely technically, because that is not how the real world works.
> When in violation this could result in huge fines, loss of business,...
The stress is on could. AWS is too big to be allowed to fail. Has Facebook seen such severe consequences because of known misconduct? And AWS is in a much more critical role for many businesses.
But it could impact them if an issue came out of it (someone could prove that AWS downloaded some of their files). AWS doesn't want to be in the news for looking at customer data, as that would impact future and current deals for hosting data on AWS.
I've worked for some big financial institutions and the longest part of the contract with AWS was all lawyers going over what is happening with the data, how AWS has access to it and especially how it doesn't have access to it.
Sorry, a small startup does not have the resources to go against behemoths like Amazon, Microsoft or Alphabet in US courts; if that is your defence it is worthless.
Amazon has many cases where they’ve been found to have violated contracts, laws, etc.
The rest of the points save for keeping the keys on your own hardware is orthogonal to whether Amazon with physical access to your data could access it.
I think we are both in agreement that in most cases the data isn’t worth accessing which is the real world protection most data on Amazon has.
A very shortsighted take. Sure, yes "they" can do whatever they want.
But even in the world you are imagining where AWS is peeking at customers' data willy-nilly, I have to imagine you don't believe that every tech support representative should have default access to every AWS customer's storage data, do you?
Even a dishonest unethical company that created backdoors for its employees would surely gate their backdoors.
This change (a mistaken one that was rolled back immediately) would have given the keys through the front door to presumably thousands of low-level employees.
BTW, AWS spends a long time talking about how verifiably they do not have access to customer data. If you're interested in crypto (otherwise not sure why you are referencing it here), this kind of thing should be right up your alley: https://www.youtube.com/watch?v=4J8REvs7zaY
AWS has regions in China, they verifiably DO have access to your data.
They also have regions in the US where they verifiably DO have access to your data.
Both points of access are verifiable by their compliance with the law in those countries ensuring that the government can access that data.
If you use their CA or EU locations it’s conceivable that they’ve developed separate software that actually protects your data but I would hazard a guess that they use the same backdoored software there once it has been sufficiently beta tested in us-east-1
I have been looking into this lately for a company that wanted to host important data on AWS S3. I couldn’t find conclusive information in public domain.
It’s hard to decipher the AWS Data Privacy policy:
In section Who Owns Customer Content, it’s implied that AWS doesn’t access customers’ data:
“As a customer, you maintain ownership of your content, and you select which AWS services can process, store, and host your content. We do not access or use your content for any purpose without your agreement. We never use customer content or derive information from it for marketing or advertising.”
Here, the statement is that they don't access a customer's data without the customer's agreement. But customers do agree to a ToS in which different statements might be included.
In another place it’s mentioned, if governments request, AWS will comply. Considering that governments obviously request access from cloud providers, and obviously require them not to disclose the backdoor, I came to the conclusion that governments have unconditional access to data held in data centers of companies such as Amazon, Google and Microsoft.
As for non-government access, there are claims that Amazon uses data or metadata to launch competing products. If I recall correctly, AWS may collect some data presumably to maintain and secure the platform and fight abuse. The counter argument is that, AWS will not risk a profitable business; but AWS is too big to be easily impacted and might act on good opportunities.
It would be great if people working in AWS or similar platforms could chime in.
Amazon's product people also don't look at third party seller statistics to decide which products to sell themselves. Until they got caught doing just that.
To assume that they don't look at data feels naive.
A German super market chain with online ambitions has a rule that nothing touching their pipeline can be hosted on AWS. Want to sell them SaaS? You can't run your nodes on AWS. I consider that to be reasonable.
Are you referring to third party seller sales on the amazon.com websites?
That's completely different and separate issue from AWS customer data (for example someone running e-commerce software on a linux VM).
3rd party seller sales pay Amazon commission on each sale - deciding which products are selling well is just a matter of Amazon doing a SQL call on their own sales database (much like a physical retailer may see what brands/products are selling well)
(S)he's just saying that Amazon's claims about their business practices can't be audited by anyone and are therefore unenforceable except by lawsuit on a timescale of years.
I guess you have inside information, but my naive assumption, reading the website docs and guessing how these things work, is that Schwarz Group created it on top of their already existing self-service infrastructure, and they "just" opened it to external customers. Or did they create it from scratch?
That's great. It is heart-warming to hear some businesses are taking this threat seriously.
We built our service in a similar vein. Nothing can reside under the authority of a non-EU entity, and nothing can be hosted on servers owned by a non-EU entity. This effectively removed AWS, Azure, GCP and Alibaba.
And still, we had plenty of choice.
We specifically picked a “boring” cloud provider: a no-frills cloud vendor whose core business is building infrastructure rather than snooping on customers.
Would having a 3rd party host the services in the EU meet your requirements? Or having data residency restrictions with strict key management, EU based support, and access transparency/approvals?
IMO Google is also taking this seriously, but I am genuinely curious if any of the above would meet your requirements.
When you think about risks, it’s productive to distinguish what could happen from what is likely to happen.
I was a stakeholder in an early cloud negotiation back in 2010-11, and one of the key issues was the support personnel. Ultimately, they have access to your data.
I think you’re a little harsh describing it as a scam, but yes, you have to be aware of who you are trusting and what the risks and failure modes are when using this type of service, including the security risks.
Of course there are also risks with self hosting, depending on your competence and the service.
I have a VM in the cloud I use for running nightly nmaps against my network range. When that breaks it’s not the end of the world; if the VM provider hikes the price, or goes bust, I can migrate it in an hour by running a script. There’s no sensitive information on there, so it makes perfect sense to run on a VM.
Would I entrust my journalist’s files when they are investigating corruption in Amazon? No.
There are three types of S3 server side encryption:
- SSE-KMS
- SSE-S3
- SSE-C
Without having an AWS support person test each type and report back, one must assume that the only bulletproof S3 encryption methods are client-side (where you handle encryption and decryption yourself and they just store the blob) and SSE-C (where AWS doesn't store your keys; you send them in every bucket API request). But even that latter method has other caveats:
- What does the S3 service log? Who can access those logs?
- Where does TLS for your S3 https request get terminated? Who can view the traffic?
I'm assuming that this isn't just a regional issue, and that any AWS Support person globally could access buckets in any region. If so, then that's a big deal. If you're in Europe and your bank or healthcare provider is an AWS customer, how much trouble could you cause them (and by extension, AWS) right now?
Furthermore, with the antiwork movement and backlash amongst employees for their treatment of warehouse workers, one cannot guarantee that an AWS worker wouldn't do something to hurt the company.
Amazon needs to head this off with a very thorough explanation of what happened and what was exposed directly and indirectly.
> SSE-C (where AWS don't store your keys, you send them in every bucket API request)
Since this is symmetrical encryption we're talking about, let's just be completely aware that the technical possibility to also store the encryption key definitely exists. It would violate the terms of service, of course.
For those who don't know how SSE-C works, it's that you send both the unencrypted data and a key in a request. AWS will encrypt the data with the key, and store it encrypted. To get your data back, you supply the same key in your subsequent request. AWS will decrypt the data using the key, and send the unencrypted data back to you.
During both those times when you gave AWS your key, you entirely trust that they will not also happen to store it for their own use.
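To make that concrete, this is roughly how SSE-C looks with boto3 (the bucket and object names are placeholders). The same key accompanies both calls, and you're trusting AWS to use it transiently and discard it.

    # Sketch: SSE-C round trip. AWS encrypts/decrypts with the key you send on
    # each request; boto3 handles base64-encoding the key and its MD5 for you.
    import os
    import boto3

    s3 = boto3.client("s3")
    sse_key = os.urandom(32)  # you are responsible for keeping this key

    s3.put_object(
        Bucket="my-bucket", Key="secret.bin", Body=b"hello",
        SSECustomerAlgorithm="AES256", SSECustomerKey=sse_key,
    )
    obj = s3.get_object(
        Bucket="my-bucket", Key="secret.bin",
        SSECustomerAlgorithm="AES256", SSECustomerKey=sse_key,
    )
    assert obj["Body"].read() == b"hello"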
> Without having an AWS support person test each type and report back, one must assume that the only bulletproof s3 encryption methods are client-side
It is normal practice to give a 3rd party access to your technical infrastructure (for example for support/maintenance purposes). I was once contracted to maintain a database for another company. You sign NDAs, you sign penalties, you sign your children to slavery and the right of first night with your wife. You know, standard business practice.
But if you care enough that you would not have contracted 3rd party access to the data, client-side is the only solution assuming the client is under your sole control.
Yes but this role did not add the necessary privileges for it to use customer KMS keys. You can’t get an S3 object that’s encrypted with a KMS key if you don’t also have permission to decrypt with that key.
Of course Amazon could just give themselves access to decrypt with your KMS keys too, but that didn’t happen here.
Objects encrypted with S3-managed encryption keys (SSE-S3) are affected, as these keys are set up with a non-configurable resource policy granting the S3 service decryption permissions.