SOC2: The screenshots will continue until security improves (fly.io)
546 points by todsacerdoti on July 7, 2022 | 244 comments



Part of taking on SOC 2 is also choosing whether you want your attitude to be "Let's do the minimum to get past the audit" or "Let's take this framework and figure out where we can learn from it."

The post mentions background checks. On the one hand, I understand there's a real issue with these. On the other, if my PAAS isn't ensuring repeat offender fraudsters don't have access to sensitive data, that's possibly an area of concern. Hopefully the things they took from the other mentioned company do increase due diligence in vetting employees who have access to sensitive/regulated information.

Use it as a framework to actually think about BCP, DRP, etc, etc, and it won't be a total waste of time.

Edit: Also adding that I bring up background checks as an example of learning from vetted practices, rather than as an attack on Fly's decisions. I respect this article, especially since it's easy for people on the internet to criticize decisions, when the reality is that security is a series of tradeoffs, and functioning as a business means having imperfect processes.


> On the one hand, I understand there's a real issue with these.

What I personally hate about these is how much information they require you to hand over to some random organisation.

Some make you take photos or videos of yourself holding your ID, and sure they say that they delete the information, but we all know that rarely happens.

I just don't trust their infosec policies, and it's only a matter of time before one of these companies gets reported as having a public S3 bucket, or employee laptop, or USB drive stuffed with video and ID scans and background check data.


Bingo. Most background check companies seem like literally the scummiest things in the world. I have no problem with a background check, but I don't trust these companies at all.


It would be a different story if people at companies of our ballpark size had stories about how background check results were informative, or mitigated some risk. But nobody has those stories.


It's funny how much fraud could have been prevented with simple background checks, though. There's an entire podcast about financial fraud called "Oh My Fraud" covering many instances where a simple background check might have prevented hundreds of thousands of dollars of fraud at charities. Ah well.

I understand the negative impact that background checks and stigma can have on people trying to move past a conviction, but we also have to balance that against the financial and information controls of organizations performing essential functions in society. Ideally speaking.


There should be a rule that all background check companies for SOC2 audits must be certified SOC2 themselves.


Very much this.


We definitely don't do the minimum! Our goal was to keep the scope manageable so the Type 2 certification goes smoothly. The side effect of this is that the things we _did_ put into scope are things we expect to do really well.

Preventing access to sensitive data is important. None of the top ten ways we try to solve that include "background checks", though.


Co-sign.

You can run a SOC2 compliance program earnestly or as a check-the-box exercise.

If you're running earnestly, I would argue that the hardest thing about a SOC2 is ensuring that you stick to your guns on approaches that work for you and not adding cruft that you don't care about. If you let the latter happen, you will invariably end up a box-checker, and being a box-checker eventually contaminates a robust engineering / security culture.

And it's hard to walk back more restrictive / cumbersome policies; if you delegate your SOC2 to a person who doesn't deeply care, they'll eventually agree to put ClamAV on all the hosts or something (to make the auditors go away) and then you're going to be stuck with that for a while.

(So you need to find someone who has enough business context and good judgement to run the process, which is super painful from an opportunity cost perspective at a startup, and hard to locate at all at a larger company.)


> If you're running earnestly, I would argue that the hardest thing about a SOC2 is ensuring that you stick to your guns on approaches that work for you and not adding cruft that you don't care about. If you let the latter happen, you will invariably end up a box-checker, and being a box-checker eventually contaminates a robust engineering / security culture.

That's spot on, not only for SOC2 but for many, if not most, relevant certifications. The most important part is "not adding cruft". Nothing sucks like being stuck in an ISO9xxx-certified process because you over-specified your processes, even though you'd have gotten the "ISO9xxx-certified" label for 10% of what you did. Suddenly you cannot react with common sense anymore, because doing so would violate contracts you made with exactly those big customers you got the certification for in the first place.


> if you delegate your SOC2 to a person who doesn't deeply care, they'll eventually agree to put ClamAV on all the hosts

Bingo. Just spell out the consequence: you can no longer optimize your compute costs by switching to a managed Kubernetes, because there is no ClamAV there.


I will have more to yell about this in 3 hours because I have a thing I want to yell about this. Free stickers if someone yells it for me before I’m done with this appointment.


> On the other, if my PAAS isn't ensuring repeat offender fraudsters don't have access to sensitive data, that's possibly an area of concern.

I don’t know that there’s much evidence that background checks work for this, though.


I mean, they're pretty effective for determining whether people have convictions (caveats apply, depending on your jurisdiction) for that kind of stuff, right?

One of the problems with not doing background checks is that, ex post facto, if you do have problems with an employee and it turns out you could have discovered them with a background check, that can figure into your liability.


+1 here.

I don’t love background checks but I have seen a situation where a person previously convicted for embezzlement was hired as a controller.

No background check was performed and the person drained the bank account ($XMM) at their new job before being caught when checks started bouncing.


The main thing that makes me uncomfortable is that they will raise all sorts of unrelated things that can bias you or others in your company against a candidate, or, even if you choose to ignore them, can be brought up against you later if it turns out you hired someone with a drug conviction or something else unrelated to their work at your company.

I don't really give a shit if someone was arrested for a drug offense (or many other offenses), and knowing that information brings up all sorts of complications, to the degree that it outweighs the value of knowing relevant stuff (largely because genuinely relevant things from a background check are rare).


What you do is decide what you care about from a conviction perspective (say, crimes of dishonesty, serious violence etc). Then you write up a policy that says these are exclusionary. Then you only let HR run background checks immediately pre-contract, and you don't let anyone outside of the immediate background check team see any of that stuff. They're empowered to give you a yes/no to hire, but that is all.

Hiring managers should not be looking at BGC material.


Are they? How much will that differ between an employee from Peru, one from Canada, and one from Romania? How comparable will the data be, how much data will you even get in the first place?

And what kind of employee will you lose if you set such restrictions?


But convictions for what?

In some countries (e.g. the US) people are convicted all the time for things they haven't done, or for things they have done but which are in the past and really irrelevant to the job.

On the other hand, people get away without convictions for a lot of kinds of white-collar fraud all the time, too.


Slightly different, but in a past life working in a field where the company had extensive access to people's homes, we definitely weeded out some people who should not be given access to a stranger's home using background checks.

Again, they're not ideal, and there's a large social concern of a permanent record like that, but you have a duty of care if your customers are trusting you.


You are arguing that background checks should be a requirement in order to have access to certain protected resources. While this is a fair argument, the counterargument is that most people in this particular software business shouldn't have access to user data/protected branches anyway. If someone needs elevated access, they would likely already have a significant pedigree at the company, and a background check may not add much value. In reality, most companies don't do background checks for security purposes; they do it to screen out candidates who aren't agreeable people, which raises ethical questions. I don't have an opinion on whether this is fair or unethical, but if security was the sole purpose, it would make more sense to background-check employees as a precondition to privileged access, not candidates as a precondition to employment.


I think they're intrusive and mostly not useful, but I can at least see the rationale behind them at the kinds of companies that hire people en masse, or for the kinds of jobs people get bonded to do.


I've always been mildly amused at the credit check.

I'm sure bad credit is the indicator they're looking for, but what does it mean to them?

A new hire makes bad decisions? A new hire could be easily bought or bribed? A new hire is broke and needs a job?


> A new hire could be easily bought or bribed?

This one. Same reason it's looked at for a security clearance.


I would think it would be fairly obvious that a candidate could be “bought or bribed,” simply from the fact they’re asking for a job in the first place. They’re willing to exchange their time for money, i.e. “be bought.”

So why do people not commit corporate espionage? Well, it might have more to do with character traits than financial stability. In fact, most spies probably have their life fairly well together, and will have perfect credit. As for any asset they might compromise, what’s the difference between someone with poor credit applying to a job because they need the money, vs. applying to a job because they need more money from your enemy bribing them? I’d argue the difference comes down to character.

So for that reason, I’m skeptical of the effectiveness of a credit report as a proxy for likelihood to commit corporate espionage. A good credit report doesn’t seem to offer meaningful signal in either case of a malicious attacker or a desperate contributor. A bad credit report produces as many false positives as a good one.


You're not afraid of a spy.

You're afraid of someone with 100k in gambling debts to the mob.


This is what Schneier calls a "movie-plot threat". Instead of imagining a complicated narrative that connects a poor credit score with episodes of control fraud, improve internal controls so that individual contributors don't have the ability to steal from customers. This would be safer, and more considerate of your colleagues.


Which isn't all that likely to show up on either a credit report or a background check.


It will. You pay the guy who will break your legs first. Thus you will be delinquent on other stuff.


But unfortunately not "A new hire needs money so give them a good offer".


It’s a proxy for having your shit together.

The bankrupt dude with a felony DWI is a liability for anything involving cash, driving or public contact.


I'm not sure I follow what you're trying to say here, so don't take any of this personally, because maybe I'm misreading you.

Telling any startup engineering team that they should maximize their SOC2 Type I audit is borderline malpractice. Every engineering decision you make to support a Type I is something you'll need to live with in your Type II. That might just be an irritating own-goal if it's you doing the Type II, and you have to waste a couple hours on the phone explaining to your auditors why you've decided to remove ClamAV from all your servers. But it could actually be destructive if it's someone other than you running the Type II, who gets cornered by your original Type I claims into supporting that bad decision.

Telling any startup engineering team that they should build a security practice informed by a SOC2 audit --- that they should take the COSO framework and figure out what they can learn from it --- might also approach malpractice. There are probably reasonable corpsec process controls you could build based on the AICPA Security TSCs. But the TSCs are not based on modern software engineering best practices and they aren't informed by modern software security. They are heavily influenced by the security concerns of medium-to-large sized enterprises with sprawling legacy IT footprints, and you can easily lose security by rolling out controls that aren't relevant to your environment but require you to add attack surface and operational overhead to mitigate vulnerabilities that you don't have because you don't share printers or run Windows Server 2008.

SOC2 is not security. Security is its own thing. Good compliance work is a byproduct of sound security engineering. It does not work the other way around.

Naturally, it's easy and gratifying to point out that a SOC2 report doesn't make a company secure. I don't think we could have come up with a clearer way to have said that, or, for that matter, to explain that we did our best to minimize the impact SOC2 had on our engineering practices.

As for background checks: once again, we can't background check a decent-sized fraction of our team, because they're in jurisdictions that (very reasonably) forbid employers from running intrusive background checks. We considered just background checking the unfortunate US team members we have that can be made subject to them. I had a fairly long conversation in a Slack channel full of secops people from about a dozen security companies, and none of them told me a single story about how background checks (which are, as it turns out, performative, superficial, and error-prone) were a win for them. I did get stories about how they were problematic: for instance, I did not make up the thing about high school transcripts.

So, long story short: because of our workforce, our platform needs to be resilient against bad hires. You don't get that from background checks; you get it from security engineering, tight access control, tightly designed roles informed by those access control decisions, regularly reviewed internal audits, detailed audit logs, and sound hiring practices. SOC2 covers most of this stuff only superficially. Security engineering is the real work, at least for technology startups.

Later

This is way angrier than I want this to come across. I promise, it's not personal, and if I'm caricaturing any point you made, I apologize in advance. I had a 6-hour tattoo session that ended with an hour of ditch work and I am in a fucking _mood_.

Also, since I'm predictably being pulled in the direction of repeatedly dunking on SOC2 in this thread, I want to say very clearly that I had a fantastic experience with our auditors, who were more clueful than any other auditor I've worked with. Our auditors are great. Don't flunk us!


Thanks for the detailed response. I definitely don't take it angrily. One additional point of context is that I managed a Series B SaaS company's first SOC 2 (and its renewal) in the recent past, so I definitely understand what you're saying, and I think it lines up with the point I was trying to make.

My main point is you can either treat the SOC 2 as an adversary to overcome, or actually try and leverage it to be better. No matter what it's going to suck and be annoying, though Vanta/Drata can help. But one can leverage it to be a better company.

A less controversial example than background checks is infra cost monitoring. Since a lot of SOC 2 is focused on business continuity in addition to security, one of the things required is that you're actually paying attention to your costs. A lot of cash-flush, VC-backed startups don't. So, once SOC 2 hits, the company that's treating it adversarially will just rubber-stamp some quarterly meeting where they "look" at infra costs. Or, you can actually take that moment to level up the company to have a macro review of the cost of goods sold, and ensure the business is on a healthy path.

Again, not a comment on your article, but one of the big takeaways for me from running a security program for years was a general anxiety around being transparent about the program externally, because there's a certain type of "security" person who gets off on picking apart policies without understanding the tradeoffs we carefully made to keep the company safe while letting it function smoothly.

Coming out with an article like this is a great thing to do, when a lot of content out there is just "we got our SOC 2, and now we're perfect."

I won't comment on the background check thing again, not least because I don't want to argue more for something I don't like but think may be a necessary evil.


Thanks! Looking at my comment the next day, I think I came across super grumpy, and I appreciate you reading past that.


> you can easily lose security by rolling out controls that aren't relevant to your environment but require you to add attack surface and operational overhead to mitigate vulnerabilities that you don't have because you don't share printers or run Windows Server 2008.

Can you talk about that more concretely? I'm sure there are many cases where compliance requires you to consider a risk that's not relevant, but it's hard to imagine that thinking about each risk and taking appropriate action (which, sure, in many cases will just be documenting why it doesn't apply) would damage one's security.

At the end of the day any list of potential risks is going to be imperfect, and some will be more imperfect than others, but I'd still think that the vast majority of the time you'll get a better outcome by engaging seriously with a given list of risk factors than by treating it as an exercise in doing the minimum. If you were trying to do security from the ground up without any regard to compliance you'd ultimately end up doing something quite similar - coming up with a big set of risks and then figuring out what you're doing about each of them - and sure, you could probably start with a better-targeted one than what SOC2 has, but that sounds like a matter of degree rather than being so different that you can't get any shared use out of doing those things together.


I'm sure they'll add their own responses, but I've seen some tremendous issues caused by legacies like this. I spent a solid week, including a case opened with Microsoft, attempting to determine why Outlook's "report as phishing" button didn't work. That definitely harmed security: people stopped reporting phishing. The root cause was a Windows XP-era Internet Explorer hardening policy that served no purpose on a modern desktop. In 2019, Microsoft removed a recommendation for an old font-related security setting[0].

At a management level, a wide variety of modern best practices are already the default from Windows 2019 or so. The cognitive effort of looking at 50 security settings and convincing yourself they are all reasonable and won't break things is substantially lower than with the 400 or so we used to have. It's one thing to inherit this sort of legacy, but it's a much worse thing to be implementing all these policies in a greenfield 2022 environment because they are all on some checklist.

[0] https://techcommunity.microsoft.com/t5/microsoft-security-ba...


So combine that with "post-facto" code reviews on a weekly cadence; there is potentially a 7-day window during which a bad-faith employee could act unrestrained?

Certainly this is giving me pause on using your platform for anything other than hobby projects.


No, that is not a good summary.

Were you already using the platform? What for?


We are a fintech startup. SOC2 compliant platforms are table stakes.


I agree. You're a fintech startup deployed on Fly.io right now?

The best way to get detailed information about how our security practice works at Fly.io is to ask us about it directly. We're trying to be up-front about how weak SOC2, for everything else it might be good for, is with respect to security. Unfortunately, in the process of speaking plainly about SOC2, we have apparently sent the message that we think most of security is performative, which is not remotely true; the point is that we don't think SOC2 is an especially meaningful representation of the work.


I personally don't consider SOC2 or similar certifications a good framework because of their checklist nature. A lot of those items end up being orthogonal or sometimes even detrimental to actual security.

Using your example of background checks, it's probably more valuable to have proper ACLs and audit trails internally than to do background checks, which are a really low signal compared to the level of hassle.


It's fine for what it claims to be. It's an actual audit, in the accounting sense, not a detailed investigation of your security engineering practice. People are very hung up on this, and I get the urge to jump into SOC2 conversations to point out that SOC2 isn't a passing grade on security engineering. But your SOC2 auditors are up-front about what they're doing. There are management practices that can be verified by retrospective paperwork audits: from a random sample of the people you off-boarded in the last 12 months, did you reliably terminate access within N hours of severing their employment? SOC2 is fine for that. Do you have a security policy that puts employees on notice of their personal obligations with respect to data security, and did a random sample of your employees sign it? SOC2 FTW.
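
To make the off-boarding example concrete, this is the shape of check the auditor samples for. A minimal sketch, assuming a hypothetical CSV export of offboarding dates and access-revocation timestamps; the column names and the 24-hour threshold are mine, not anything from the TSCs:

  # Sketch: flag anyone whose access wasn't revoked within N hours of
  # offboarding. CSV columns and the 24-hour policy are hypothetical.
  import csv
  from datetime import datetime, timedelta

  MAX_DELAY = timedelta(hours=24)

  def late_revocations(path):
      late = []
      with open(path, newline="") as f:
          for row in csv.DictReader(f):
              ended = datetime.fromisoformat(row["employment_ended"])
              revoked = datetime.fromisoformat(row["access_revoked"])
              if revoked - ended > MAX_DELAY:
                  late.append((row["employee"], revoked - ended))
      return late

  for name, delay in late_revocations("offboarding_log.csv"):
      print(f"{name}: access revoked {delay} after offboarding")

The audit doesn't care how you produce this; it cares that a sample of the rows holds up against the evidence.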

There's real value in being forced through this stuff, because these kinds of management processes are a real weak point at a lot of shops with otherwise strong security engineering. I'm glad that our policies and processes are clarified, and that there's an external process that keeps us honest and forces us to do the routine scheduled meetings, rather than keeping stuff in our heads. We started doing SOC2 prep work a year ago, and even before the audit, we were better than we were before we started.

But it is what it is. The thing that drives me nuts is when people suggest that good teams will maximize SOC2 so their security engineering can be informed by it. Yikes. No.


> Using your example of background checks, it's probably more valuable to have proper ACLs and audit trails internally than to do background checks, which are a really low signal compared to the level of hassle

I agree with your idea, but background checks are a poor example. They're negligible cost, always outsourced, and trivial to perform. They're worth doing if only to validate that your candidate said the same things as the background check says (if they say they're not a felon and they are, that's a red flag -- if they admit to it and explain why, you're not being lied to). In contrast, you actually need to spend time working on audit trails and stuff. One is hiring a vendor and checking a box; the other is probably engineering work.


What's actual security? Looking for zero days? Malware research? Continuous red team?

I think at the end of the day, SOC 2 aims to instill a basic level of organizational security so the company doesn't shoot itself in the foot. If a company can't genuinely follow a basic set of SOC 2 controls, can I trust them to do actual security?

Also, badly written checklists might be bad, but not all checklists are bad. Pilots use them. Doctors use them. Mechanics use them. In fact, most fields that involve critical life-or-death operations use them. Why? Because humans have limited memory and tend to miss critical tasks all the time.


Genuine question: is compliance consulting exclusively the province of Big Four-type firms, or can it be done at (traditional, not scale-up) startup scale? I'd buy your pre-money common stock off your comment alone, and the same applies to numerous others I keep reading here.


A little delayed, but there are plenty of companies doing compliance consulting. Eden Data is a small shop I worked with briefly. If you want to talk more, my email's in my bio.


Glad to see this at the top. While a lot of folks will groan when it comes to the minutiae necessary for a compliance audit, there can be real value in them if you take them seriously. I think of it like this:

1. There are a set of things you need to do for "real" security

2. There are a set of boxes you need to check to pass a compliance audit

I think SOC 2 is pretty reasonable in that, if you're taking it with the right mindset, there is a large, large amount of overlap between #1 and #2.


There is a medium-low amount of overlap between #1 and #2, leaning more towards low. There's not no overlap. It's a weak positive indicator.


This content marketing is so good I don’t even know if I’d call it marketing: it is legit educational with a weak (in a good way) “throw some money at us” CTA. Bravo.


Which is my favorite kind of marketing. Others in this group for me are the Backblaze Drive Stats blogs and any guide that DigitalOcean writes.


Agree, this is so well written.


It was written by the person with (by far) the most HN karma, so they definitely know their audience.


Thomas is also an exceptionally good writer. He was honed in the forges of newsgroups and HN comments.

He definitely knows the audience, but writing something like this is a special skill.


Skill, for sure. But it’s also art.


The CTA is mostly a joke; the trick is just having fleshed out opinions and sharing them. It’s easy to replicate!


> This is the only issue we ended up having to seriously back-and-forth with our auditors about. We held the line on refusing to do background checks, and ultimately got out of it after tracking down another company our auditors had worked with, finding out what they did instead of background checks, stealing their process, and telling our auditors “do what you did for them”. This worked fine.

What process did that company use instead of background checks, that you ended up doing as well?


In Ireland there is no legal mechanism to do a background check unless you work with children or are in law enforcement. Even collecting and recording public information on individuals can be problematic with the data commission. Employee reference checks are acceptable for auditors in that case.


Roughly (but not exactly) the same in Germany.


My company did adopt background checks, as part of our SOC-2 requirements and because my company works with health insurers (which generally impose this requirement via contract, regardless of SOC-2).

Like many people here, I didn't like the requirement. That being said

1) It's possible to configure background checks so you don't receive irrelevant information (e.g., if DUIs aren't relevant, then configure the check so you don't receive information about DUIs). In most cases, you'll just want to receive information about financial and privacy related offenses.

2) What you do with the information is up to you (unless your customers enforce certain actions). In general, the SOC-2 auditors will want to see a plan by which you acknowledge and manage the risk, which doesn't necessarily mean you can't hire the person.


IMHO _recent_ DUIs are more relevant than a lot of not-at-all-recent, much more severe things.

A DUI is a sign of gross recklessness and apathy about the well-being of others. Sure, I won't blame a young adult for making this mistake, and there are situations where it's understandable (e.g. some kind of emergency making you drive under the influence even though you're generally against it).

But I would still prefer to work with someone who committed robbery in their youth due to poverty (but not in the last 20 years) over someone in their 40s who frequently drives under the influence of alcohol.

Anyway, even if I had a company and it for whatever reason did background checks, I wouldn't want to know the outcome as long as whoever is responsible for it, following some strict guidelines, didn't judge it to be a problem (and no, if it's not a car company the check wouldn't include DUIs; and generally I don't like background checks).


Appendix:

I just realized I had forgotten that in the US you often don't have the option of skipping the car and taking, e.g., public transportation instead. This makes things more complicated, but it doesn't really change how I feel about it.


If it helps, in the US you can get a taxi/uber/lyft to take you home from the bar. Some bars even offer free rides or can help arrange one if you need it. Calling a friend or relative to pick you up is also an option.

It's true that having shit public transportation and everything so far away that you need to drive complicates things, but there are always options. In Japan the public transportation is great, but the trains stop running long before the alcohol stops being served and it's not uncommon for drunk people to wait until morning even if it means waiting/sleeping outside all night. No reason folks here can't do the same.


> If it helps, in the US you can get a taxi/uber/lyft to take you home from the bar. Some bars even offer free rides or can help arrange one if you need it.

I was thinking more about people with an alcohol addiction still getting to/from their job on a daily basis than about people going home from partying.


Bars could have hives of Puke-n-Nap pods that they can just hose out in the morning after everyone leaves.


I certainly don't mean to endorse DUIs! And if a company has the viewpoint that a DUI indicates that a person shouldn't be employed in a specific role, then background checks are a good way to achieve that.

My perception is that some people who don't want to do background checks feel that way because they don't want to know embarrassing details about their employees and colleagues that aren't relevant to work. And the good news is that employers can generally set up background check reporting to simply not report issues that employers don't think are relevant. And that makes it easier to offer background checks, and easier to meet SOC-2 audit requirements.


In fact, what I think you'll find in a lot of SOC2 background check regimes is that the results are pretty much just automatically filed away without any careful review. As long as you did the check, you'll be fine with the auditors. We could have just done that with our US employees; we were fine, in the audit, with not doing them for people in Europe. But that's stupid, and we're not going to do stupid stuff for SOC2.


The only reason I didn’t write it out, besides it being boring, is we stole it from someone else who might rather share it themselves.


Ironically, I dabble in the security automation space, and I'd say this is the real "social ill" of all regulatory cultures. It's not the automation that's important; it's sharing and reusing an agreed-upon understanding of requirements and best practices (what and why we automate, and the real goals, not just cargo-culting or copying). Most hoard this unintentionally, and others (auditors, specialist consultants) do so intentionally, believing it's their market differentiator. This post is good, but it still falls short by not sharing outright and only dropping hints, which is the default I see most of the time.

Most higher-level attempts to meaningfully share and reduce toil and wasted effort aren't incentivized in risk/governance/oversight culture, so we all get to lose.


I'd really like to have something to point to, when the issue of forcing-background-checks thing comes up again for our SOC2 certification.

In one case, I know a company is using reference checks to comply with the "background check" requirement.


Another issue I have here is that I want to be a little bit cagey about what our specific controls are, not because they're sensitive to us, but because there's a limit to how much we're supposed to talk publicly about the specific results of the audit (it's a Type I, people who know SOC2 know that means there are no unhappy surprises in it) --- the audit results are confidential, as a term of our engagement with the auditor.

(This is why there's a SOC3.)

Long story short: it's not complicated, and if you're currently doing a SOC2, like right now (or in the future) and you have reached the point where you're trying to get out of background checking everyone, shoot me a line and I'll tell you what we did and what we said (I may performatively NDA you in the process, because I like our auditors and don't want to irritate them).


Invite them to share?


So, two reasons?


Oh HN I can’t quit you.


As pedantic as this comment thread was, I laughed.


One reason, besides being boring.


Careful, now. “Getting SOC2-certified” isn't the same as “doing the engineering work to get SOC2-certified”. *Do the engineering now. As early as you can. The work, and particularly its up-front costs, scale with the size of your team.*

Emphasis mine.

I've uttered this phrase or something like it so many times that I want to get this line fire etched onto a large, thick and sturdy block of wood and go back in time to several jobs where I and my cohorts got stuck with shoring up SOC2 tasks and smack my former execs with it.

Half joking but the pain and trauma from past SOC2 audits due to exactly this is real.


I work somewhere going through a similar process for much the same reason.

And to be frank, I'm wondering if how it ends is in me looking for a new job.

I know there are good reasons for the business to go through this process, and maybe it really is important for the business's future, but as an individual it sucks: my main motivator, achieving good outcomes for users, has been derailed by a large amount of procedure instigated under the belief that some auditor I've never met will be impressed.

Now with all this new procedure piled on, all of which is very new and thus immature and untried, I feel my energy to innovate, build new things and just generally drive change sapping away.

So the cost to your company isn't just the direct hours and the auditor's fees! There's also a harder-to-quantify cost, as everything and everyone bends to make every team work in the way some external entity thinks it should. This loss of autonomy and loss of spirit is, I think, potentially far, far more expensive than the direct costs.

Of course, maybe we're just going about the whole process poorly. I like to think if you were to do it well maybe it's not so bad. But, to do that I think you'd need abnormally talented staff involved, who are well versed in the topic, enthusiastic and empathetic. Most people involved in these sorts of processes in my experience aren't like that, and I can't say I blame them.


For what it's worth, as someone who helped quarterback the SOC2 process, it's a place that's ripe for personal innovation. We had automated scripts do our quarterly screenshots, learned and made full use of AWS Config to check and enforce compliance, worked hard to automate patching, and started tracking all audit tasks using Jira Service Desk. We ended up using about two engineer-days a quarter on audit tasks. There's no escaping the annual review, but if you spend some time streamlining things for yourself, we found it goes pretty smoothly.
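
If it helps anyone, the AWS Config piece can be as simple as something like this rough sketch (not our actual script; the rule names are just examples of whatever rules you have set up, and pagination is omitted):

  # Rough sketch: dump noncompliant resources for a couple of Config
  # rules so the output can be filed as quarterly evidence instead of
  # hand-taken screenshots. Rule names are examples, not prescriptive.
  import boto3

  config = boto3.client("config")

  for rule in ["encrypted-volumes", "iam-user-mfa-enabled"]:
      resp = config.get_compliance_details_by_config_rule(
          ConfigRuleName=rule, ComplianceTypes=["NON_COMPLIANT"])
      for result in resp["EvaluationResults"]:
          q = result["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
          print(rule, q["ResourceType"], q["ResourceId"])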


There are tons of people automating it already (Vanta, Secureframe, etc.)


> For what it's worth, as someone who helped quarterback the SOC2 process, it's a place that's ripe for personal innovation

I wonder if the parent comment was talking more about how something like committing code, which used to be done on one's own, now becomes a process that can be delayed purely for the sake of compliance.

Something that was purely self contained and blocked by nothing is now blocked by at least someone else reviewing what you've done. But it's not just reviewing the code. To get something reviewable also means writing a detailed description of the work, documenting any and all processes involved and then maybe answering follow up questions if they come up. If you happen to change something then it needs to be re-tested, etc..

This is how something goes from being done and self tested in 2 hours to taking 3+ days. It can be a motivation killer at a personal level and also delay goals, features and everything basically at the company level.

I'm all for documenting work and following best practices like code reviews, well written tickets, etc.. but there are certain things where sometimes having all of that isn't an option because you don't have a team built around what you're doing. For example I'm in a platform / SRE / "devops engineer" type of role at a place and I end up shipping quite a lot of infrastructure code to production without review. I use my best judgment here. If it's something I can test in a test environment and I have high confidence I'll do it. If there's high stakes to the change I'll ask someone on the dev team to screenshare with me while I explain everything since most of them aren't deep in the woods with Terraform, Kubernetes, etc., but a 2nd set of eyes is still very helpful and valuable.

The problem there is that not every change needs that level of review. It becomes extremely wasteful. I mean, if I add a Kubernetes annotation to a deployment, that's something I can do on my own in 10 seconds and push to production. But if it needs a review, then it needs a Jira ticket, documenting why, making a PR, doing the work, finding a reviewer, going over what a deployment is in Kubernetes with a developer who has no Kubernetes knowledge, then going over what an annotation is, then finishing up by explaining what Kustomize does and how it works along with Argo CD, because the state of the pod after Argo CD deploys it is the verification step to make sure it works. That has to be done over a screenshare, because an application developer won't have these tools installed or know how to use them.

This ends up taking potentially longer than 1 day all-in. Maybe 15 minutes to do the ticket + PR but it could take 4-5 hours before someone is ready to review it, then another 30min for the screenshare, suddenly it's end of day. But for SOC 2 compliance I'm pretty sure you also need to deploy to a pre-prod environment as part of the workflow so that means the next day you spin up a temporary Kubernetes cluster in a proper test environment on the cloud, deploy a test application to it, applying your PR, verify it, kill your test cluster and then deploy it to prod which also involves someone else to approve the PR being merged to your infra's main branch.

Now imagine this type of scenario coming up at least 5 or 10 times a week. You would be in a constant state of being blocked and delayed. Without SOC 2 compliance I would have just pushed the annotation to production (main branch) directly in 10 seconds and moved on to the next thing.
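
Just to show how small the change itself is, here's roughly the same thing done imperatively with the Kubernetes Python client (the names are made up, and this deliberately sidesteps the Kustomize/Argo CD flow, which is exactly the kind of shortcut the process is designed to forbid):

  # Hypothetical 10-second change: add one annotation to a deployment.
  # Done imperatively here for illustration; in the GitOps flow this is
  # a one-line Kustomize patch instead.
  from kubernetes import client, config

  config.load_kube_config()
  apps = client.AppsV1Api()

  apps.patch_namespaced_deployment(
      name="myapp",                      # made-up deployment name
      namespace="default",
      body={"metadata": {"annotations": {"example.com/owner": "platform"}}},
  )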


You're absolutely right.

Most organizations do a completely hopeless job of implementing compliance sensibly. Those responsible for compliance tend to choose a solution that's scalable for them with zero regard for the inefficiencies they're introducing elsewhere in the organization.

Solving compliance sensibly provides organizations with a substantial, long-term competitive advantage. Nobody working in compliance seems to care.


Two factor authentication is often implemented by SMS. For example, Microsoft’s single sign on is protected this way.

Bring-your-own-device is often the only way employees can receive text messages. Obviously most people have phones but there are legitimate cases where employees don’t or can’t use their device. For example, they may have lost or permanently broken their phone and are saving up for a replacement. They might be an immigrant worker with a phone contract in their origin country that limits their ability to receive texts abroad. They might be a member of a union that would no longer represent them because they waived their employment rights by providing their own equipment.

Is the point of SOC2 to show that you tried to get BYOD employees to do 2FA, or do you have to show that you implemented it without requiring personal devices? How do you do it without BYOD? Have all the SOC2 / ISO27001 certified businesses bitten the bullet and bought phones for all their employees?

I would concede that phones and SMS are a crappy way of doing this. App-based 2FA is a better tool but still relies on your employees having two devices. I miss the days of RSA tokens.
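
(To be concrete about what app-based 2FA usually is under the hood: it's TOTP, a shared secret plus a clock, so it can live on a phone, a hardware token, or even, defeating the point, the workstation itself. A toy sketch with the pyotp library; the secret here is generated on the spot purely for illustration:)

  # Toy TOTP sketch with pyotp: the "authenticator app" is essentially this.
  # Secret is generated here purely for illustration.
  import pyotp

  secret = pyotp.random_base32()   # provisioned once per user in reality
  totp = pyotp.TOTP(secret)

  code = totp.now()                # what the app would display
  print("code:", code)
  print("verifies:", totp.verify(code))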

(This article is a good read, in general. I’ve lost count of the number of times someone has arbitrarily said “you can’t do that because we will fail our audit” without actually being able to give any details. This article is the first time I’m reading “you should do this” as opposed to “you shouldn’t do that”.)


Microsoft (Azure AD to be more precise) single sign on can be configured in a few ways and SMS can be disabled.

If you really care about users not needing BYOD, you can restrict 2FA to hardware keys.

That said, I think the overall sentiment of your post still stands, as most orgs just push the device issue onto the user (either they need a phone for SMS, push notifications, or OTP).


Webauthn tokens such as yubikeys? Seems like what half the tech industry is doing.


WebAuthn is a good way to get rid of passwords as an authentication factor. Everyone re-uses passwords, and they can be easily phished.


> Obviously most people have phones but there are legitimate cases where employees don’t or can’t use their device.

There do exist quite some privacy-conscious people who for this reason don't have a mobile phone and very actively don't want one.


Those tend to be the kind of people that will easily take the Yubikey you ask them to carry around. The difficult people for MFA are the ones who won't own a phone because "I hate technology". Those can be a real roadblock for MFA rollouts.


You don't need a phone for 2FA. There are software versions that sit on the machine, or you can use a hardware token.


> To get the merit badge, we also had to document an approval process that ensured changes that hit production were reviewed by another developer.

> This isn’t something we were doing prior to SOC2. We have components that are effectively teams-of-one; getting reviews prior to merging changes for those components would be a drag.

This mindset kind of blows my mind. As a customer, it makes me want to ask all my vendors for SOC2 now.

As a comparison, we have a small in-house team that develops some nonprofit websites; no PR goes to production without being reviewed by another developer.

I’m honestly flabbergasted at such a seemingly blasé attitude about insider threats. It doesn’t have to be a top-secret Chinese operation to subvert your company. It can be as simple as a developer you know well who is nursing some simmering resentment over a perceived slight.


None of the companies you trust are secure from insider threats by dint of PR review --- a thing we do, but override for urgent changes in oddball services sometimes, like most companies I've worked with over the past 10 years.


I’m confused, in the post it seemed like you said you don’t do PR reviews, but here it seems like you’re saying you do, except for emergencies.


Read more carefully. Or, stay flabbergasted! Flabbergasting can be very satisfying.


I feel like you’re being unnecessarily confrontational.

You wrote “changes that hit production were reviewed by another developer. This isn’t something we were doing…”

I cited PR as an example of a change and you’re acting like I’m trying to argue that PR reviews alone stop insider threats. (I’m not.)

There are lots of ways to change production. To me it looks like you said effectively one person could do so without being checked by another person. To me that sounds like a hole for problems to drive through. And that SOC2 was the impetus for you to tighten it.

If I’m wrong, you can take it as feedback on your writing, I guess. You know what you wanted to write; I’m telling you what I read.

Or you can just treat me like I’m an idiot if you want. I don’t think I’m an idiot, and I’m not the only person you’re being combative with.


This is true. It was a bit uncalled for.


The commenter above constructed a supercilious put-down based on an uncharitable reading of the post, where the most direct and technically accurate response would have required me to cut against the point the post makes, which is that in a showdown between SOC2 and a mature, well-thought-out dev process, the dev process can and should win.

I'm comfortable with what I brought to the thread in response. If you detected contempt, well, you're not wrong. But it's for the comment, not the commenter, who I assume was just having a bad day, like I (in a way) was.


You obviously wanted a more charitable reading from me; please return the favor. I did not set out in bad faith to make you look bad. I read something surprising and took it seriously—dumb me, I guess.


Miss me with this stuff. You chose the words "honestly flabbergasted", not me.

Part of the ethos of "we're not doing stupid stuff for SOC2" and "we're going to be direct about how this stuff works so other people can hopefully benefit" is assuming that we're all reasonable adults who want to understand how SOC2 works, and aren't looking to score dunks off unintended subtext.


Flabbergasted was how I honestly felt. But you’ve more than adequately clarified where I was wrong.


>Consumers, meanwhile, split down the middle between cynics who’re certain it’s worthless and true-believers who think it sets the standard for how security should work.

There are many dependencies in the software supply chain that are maintained by a single person (open source or not), so it seems silly to assume malicious intent by default for employees of a software vendor with a good reputation.

There are bad actors that no SOC2 control would catch, and there are good actors (the default) whose behavior no SOC2 control affects.

SOC2 is never part of my decision making; rather, I carefully study the company and product offerings to decide if it's the right fit.

(This is coming from someone who goes thru an annual SOC2 audit)

https://xkcd.com/2347/


It's frustrating because the supercilious "what, you don't do code review!" comments the post attracted put me in the position of having to explain that we do in fact do code review, like every other mature dev shop, but that cuts against the point the post is making, which is that SOC2's understanding of code review is black-and-white and complicated dev projects have occasionally complicated dev processes --- and, importantly, your dev process can and should win the argument with the SOC2 auditor.

The third party dependency point is the best rebuttal I think you could come up with. It's exactly right: in the SOC2 view of how code works, you can't commit a 3rd party dependency without every line of its code being reviewed and approved. Nobody does that. It was my job for 15 years to do that for other people, and nobody came close to 100% coverage. Or 50%. SOC2 demands that you pretend you're achieving that. That's stupid. We're not doing stupid stuff for SOC2.

I reserve the right to be unproductive and standoffish with people going out of their way to misconstrue what this post is saying. I'm grateful that other people can contribute the productivity instead. Thank you!


Absolutely wonderful write up. And filed away for when I need to point my cofounders to something on why we should put it off as long as possible.

It's crazy how different the approach to SOC2 here is, having someone in the company running it vs having a "security/compliance" consultant taking point. We wound up with basically everything in "What We Didn’t Let SOC2 Make Us Do", despite protests by most of the engineering team and lots of documentation. It was very much a case of badly aligned incentives. "Person who sells services suggests buying more services from him".

One of the very annoying additional requirements was all the additional process around Jira (which is already annoying enough to use). Such as "Engineering cannot make this kind of card, it must be made by a product manager" and "Engineering cannot move the card into the final state, that must also be a product manager". Which might be fine for a larger company, but we had one PM and 3.5 devs.


Get Vanta. It really simplifies the process.


In what way? When you used Vanta, did you install Vanta software on your fleet? What engineering tasks did Vanta get you to undertake?


Great writeup. Dude is totally right about the importance of compliance and its implications.

There are other ones as well.

  * ISO 13485 for medical software
  * ISO 2700x for process, security, and change management
  * ISO 55000 for asset management
Your eyes might glaze over, but they can really help a startup graduate from "move fast and break things, cowboy shit" to a mature and respectable engineering organization.

Congrats on the SOC2 and finding a good spin on the story. It also checks an important number of boxes you might need to get those juicy Enterprise and Government deals.


Pedant mode on (this is standards after all) - ISO 13485 is medical devices in general. You'll end up tacking on something like IEC 62304 for medical software and software in medical devices.


I think one of the things you are missing out on big time with your existing SOC 2 report is the design of controls that are specific to your business and show your customers the steps that you take to protect their data.

It sounds like you are doing amazing things from a security perspective, but those are not included in your report, which means your customers are missing out on them. As you mentioned, sure, you can put it in a white paper, but who is to say that you are actually doing it? That's the point of the SOC 2 report.

My hunch on this comes from the fact that you mentioned that SOC 2 comes from answering a giant spreadsheet from your auditors. It doesn't really, but that is how a lot of auditing firms get their clients to become compliant with SOC 2. They need something, and a spreadsheet is a good place to start.

You're right, some businesses won't need AV or background checks, but in your case, if it makes sense to have a control that says "The company only leverages memory-safe languages", you should be able to add it, and even explain it further in your Section 3. The key is to design controls that address the SOC 2 criteria but are also 100% reflective of how you are operating your business. It sounds to me like you got to SOC 2, but that you could get a lot more out of it. This is unfortunately super common.

Source: I help SaaS companies with SOC 2 and I have been for 7 years now. Currently helping companies in unusual spaces such as crypto exchanges and making sure that all the amazing work they put in security is reflected in their report. I'd love to chat more in depth if you are interested, I am very passionate about this and I've worked with a lot of companies you probably use one way or another in your stack, or that are in a similar space as yours. Don't hesitate to reach out and good luck!


There's no formal audit I'm aware of that I would trust to capture and provide real assurance of the claims we'll make in our security practice writeup. I think it's more honest to just say directly that you're going to have to take us at our word on some of this stuff, that we're doing what we say we do. SOC2 does a great job of assuring that you don't have to trust us that we're terminating access when employees leave, and that we're conducting frequent access review meetings. It can't --- cannot --- do a good job of ensuring that we build secure software.


This is correct! Ultimately everything comes down to trust, there's only so much verification available. I've encountered companies who have SOC 2 while they blatantly do not adhere to their policies consistently. All SOC 2 demonstrates is that you wrote some words down and an auditor couldn't catch you in a lie after a few spot checks. That's it!

Even security questionnaires are practically unenforceable. If a vendor lies your only practical recourse is to avoid them.

I would much rather companies like Fly spend their time building and writing about real problems, including security, than figuring out how to abuse a SOC 2 report to demonstrate they're smarter than the average bear.


I hear you, and I think it lines up pretty well with my point below: everyone is going to have a different opinion about what they require to create that trust. To some, it might be a SOC 2 report, and to others it's having an understanding of the technical work that is being done behind the scenes through whitepapers, meeting in person at conferences, etc.

It is unfortunate that the SOC 2 process has become so mainstream because to your point (and I agree) there are a lot of weak audits. However, I feel like if you are putting in the effort of taking extra steps to be a better company and treat your customer data better, it is worth putting those controls in the SOC 2 report so that readers can know about it. Especially if you work with a recognized auditing firm. It doesn't mean that it is absolutely fault-proof, but it helps create trust, which is what it's all about.

On that note about trust, it can also go either way, as you've mentioned some SOC 2 reports will do the opposite of creating trust and will only result in more doubt and questions.


I don't deny that there are certainly companies that act in bad faith (say one thing in their SOC 2, but do another), but I don't consider it to be a fault of the SOC 2 process. Just bad companies. I wouldn't be surprised said companies would take shortcuts in other places aside from SOC 2.

I don't understand why taking the time to do SOC 2 right will take time away from the "real problems." Perhaps things like asset/vendor management, access control, and maintaining an efficient security organization aren't real problems for any organization. I'm reminded of that Futurama quote: "When you do things right, people won't be sure you've done anything at all." Unfortunately, just as you've encountered companies that lie on their SOC 2, I've encountered companies that have strong security engineering practices but fail at basic organizational security.


Thanks for taking the time to respond.

Ultimately companies become compliant with SOC 2 for one reason: Sales. It's the one department that wants it and the one department that plays no role in it.

One of the advantages of SOC 2, in my opinion, is that it helps companies tell a narrative to their clients about how they operate. I understand your perspective of directly telling your customers how you are operating, and this might work for a lot of them, but some others will want to see it in the report. It also works to your advantage to send only one document at first rather than too many, because too many can open up too many questions, which will also slow down the sales process.

At the end of the day, since you have gone through the process once and will have to keep doing it for a while with the Type II, I'd suggest keeping in touch with your sales team. Ask them what kind of questions they constantly get during calls with prospects. A lot of the time it is very valuable to add controls around those questions. Anything you can do to reduce the procurement time leads to more sales, more business, and less engineering time spent answering security questionnaires that have become unbearable.

Unfortunately, there isn't a single standard that answers all questions that your sales team can provide, which is why I think SOC 2's flexibility is pretty awesome at helping close deals. The framework doesn't tell you what to do, but rather asks you how you address risks, which gives you a ton of flexibility.

Some companies can get away with very little and still flourish, others will need much stronger controls to satisfy their customers, and that's what it comes down to.

I've never considered SOC 2 as a way to ensure anything, but rather to educate the reader about how you are operating and have an accredited firm validate that you are indeed doing those things (to an extent with population & samples).

There are many ways to describe the steps that you take to ensure that you are building secure software, and I think you should get the credit for it by having those important controls in your report, helping you stand out from the competition.


I do not in fact think we would benefit by having everything we want to say about our security mediated through AICPA auditors, even the excellent ones we work with. We'll write our own security documentation for prospects that care about how our security engineering works as well as whether we're SOC2-certified.

Bear in mind you're talking to someone who had a non-ironic debate about whether we oughtn't just do repeated Type I audits, year after year.


You sound super interesting and I’m sure plenty of people would be interested in chatting (including me!), so please think about putting your email in your profile!


I have personal experience working with pimartin. If you're looking for a reference, they really know what they're talking about for SOC 2. They helped me get SOC 2 at the company I co-founded, ProcedureFlow where I'm VP of Engineering.

My main concern was this: we are a growing company and I didn't want to bolt on "some corporate SOC 2 thing" just to make us seem more secure. Honestly, my attitude was similar to the OP.

Sales were getting blocked and delayed by lack of SOC 2 but also having to fill out security questionnaires for every customer. I found pimartin and they really showed us how SOC 2 is customizable and isn't black and white like most people think. SOC 2 is not prescriptive about how you do things. He also helped us find an auditor that understands our business and made the process very easy for us.

When our prospective customers now do their IT/Security reviews, we pass with flying colors because of the changes that have been made to our organization and the big attitude shift we had about it. SOC 2 is not a burden in our company.

Happy to talk more about our experience with pimartin and doing SOC 2 "right"!


Hey thanks! You bet, I just added it to my profile, thanks for making the suggestion. Feel free to email me, always happy to chat.


The ssh transaction/repl log thing is frightening to me. I get it, but when I've needed to get creds for a user to log into a service, one of my go-tos has been to pull shell history on all the hosts they log into since they've almost definitely typed their password prematurely at some point. So, logging all that sounds like a really fun way of making a password store!


The trick here is not having any passwords that anyone ever types.


Right - for example: openssh supports GSSAPI, which you can make work (e.g.) with your Active Directory or other Kerberos implementation.


Also set your shell config so they don't save history persistently.

Anything that you use history for should be set as an alias or shell function or script.


The concern being raised here is that transcript-level SSH audit logs are the equivalent of permanent shell histories for everyone, and they are. But if you're giving team members a reason to ever type a password into an SSH session, you've got a bigger gap to close. We already have to do secrets management at scale, because it's a feature we provide to our customers, and so we already have a process for loading secrets into environments for host work.


I'd be more worried (but not terrified) that the session transcripts can teach an attacker a fair bit about how systems work, should an attacker get access to those. Of course only a small subset of attackers is going to care...


> SOC2 is the best-known infosec certification, and the only one routinely demanded by customers.

In the US... and I think that SOC2 is not an "infosec" certification per se but more of a generic policies and procedures certification. When I successfully took a startup to SOC2 certification, we were doing PCI DSS Level 1 compliance at the same time. SOC2 was very light on InfoSec per se (it was more "bureaucracy"), whereas PCI leaned heavily into InfoSec itself.

Of course, that was for an American company; the rest of the world uses ISO 27001.


SOC2 is absolutely an infosec certification; it's just one that's premised on the idea that the service organization should have its own mechanisms for achieving security goals and then demonstrate compliance with them.

This is different from PCI-DSS which is single-domain-specific and highly prescriptive at a technical level as a result.


> We held the line on refusing to do background checks, and ultimately got out of it after tracking down another company our auditors had worked with, finding out what they did instead of background checks, stealing their process, and telling our auditors “do what you did for them”. This worked fine.

Wonder why the author is coy about the specifics of this when they weren't earlier on a different topic...



> SOC2 reports: it's viral, like the GPL. The DRL strongly suggests you have a policy to “collect SOC2 reports from all your vendors”.

This kind of thing should be frowned upon, especially if it involves things one has to pay for. Requiring someone to get certified, and then requiring that everyone they speak to is also certified, is just a racket for the certification provider.

Even if elements of the certification are good, the viral nature is not. A certification should survive based on its merits, not based on who manages to use the most strongarm tactics to ensure an industry requires it of all participants.


I have seen that SSO requirement creep into more and more audits. It reminds me of the requirement to change passwords every month, which was everywhere but now suddenly is nowhere to be found.

It's weird. It's very convenient for the user, but it's on the opposite side of the convenience/security divide. Hijack one session and you can re-use it for other purposes. Of course it's preferable to not let systems access credentials, but one would have thought that we had yubikeys or cards and whatnot now.


SSO is important because it's the best mechanism to show that employee lifecycle events are being appropriately mapped into access controls.

You only activate the SSO account when the employee onboards. You deactivate the account when the employee off-boards. You can suspend the account if the employee is on an extended leave-of-absence.

You can do this in one place, as a single source of truth, rather than having to demonstrate that all the different identity systems in which Bob is represented have deactivated him when you have to let Bob go.

Ideally this happens automatically due to an integration between your HR system and your identity platform.

The alternative is that you have some kind of runbook that tells you exactly who to speak to about removing accounts from what systems (etc) which rapidly becomes unmanageable as scope of applications and number of employees increases.

Even for small firms, you can run into problems when the person you need to let go is the person that knows how to run the process.
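
To make the "single source of truth" point concrete, here is a minimal sketch in Python. The IdentityProvider class and its methods are purely hypothetical stand-ins, not any particular vendor's SDK:

  # Hypothetical sketch: offboarding as one action against the IdP, not a
  # runbook of per-application account removals. The IdentityProvider class
  # stands in for whatever admin SDK your SSO vendor actually exposes.
  import datetime

  class IdentityProvider:
      """Placeholder for your SSO provider's admin API client."""
      def suspend_user(self, email: str) -> None: ...
      def revoke_sessions(self, email: str) -> None: ...

  def offboard(idp: IdentityProvider, email: str, audit_log: list) -> None:
      # Suspending the SSO account cuts off every SAML/OIDC-federated app at
      # once; downstream systems never see a valid assertion for Bob again.
      idp.suspend_user(email)
      idp.revoke_sessions(email)
      audit_log.append({
          "event": "offboard",
          "user": email,
          "at": datetime.datetime.utcnow().isoformat(),
      })

With something like this wired to the HR termination event, the evidence for the auditor becomes the IdP's own event log rather than screenshots from a dozen systems.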


You seem to be describing centralized authentication, like the LDAP/NIS/YP family.

That's distinct from SSO. SSO is like Kerberos, where you authenticate with a ticket, that for the lifetime of the ticket will not require the user to authenticate again.

In my experience SSO systems are not a security benefit. They risk worsening the impact of attacks. Even with mutual authentication there are still issues with ticket lifetime, extension, and replay. Especially when people do whatever they feel necessary to run cron jobs, CI/CD and other systems. They do bring convenience benefits, which are a good thing in themselves, and may have secondary security benefits, but I can't shake the feeling that the arguments are similar to the password change policies of yesterday.

For some time it felt like there was a huge push towards things like Yubikeys and other modern smart cards that focus on making authentication painless enough to lessen the need for SSO in modern organizations. I would have expected to see more of that, not less.


Still - do you use SSO for your password manager?

Edit: For your work password manager.


I don’t use a password manager at work.

But for certain operations (e.g. privileged access stuff), we do have step-up to a second authentication factor in addition to SSO.


FYI, the reason the pw change requirement went away is because NIST published an updated set of guidelines that explicitly disrecommend it: https://pages.nist.gov/800-63-FAQ/#q-b05

On the vendor / policy side, many/most of these questions trickle down from NIST or similar institutional guidance. The auditors pick up on that and on practices from comparable companies they've audited, which can be helpful when your industry is moving towards sanity and painful when there's a meme that makes no sense in your context.

(If you spend significant time dealing with customer compliance issues, I would definitely vote that it's worth being familiar with the relevant subset of NIST pubs.)


> Hijack one session and re-use it for other purposes

If the attacker has the ability to hijack one session they could hijack all the other n sessions from the same place anyways. So while SSO does centralize your identity, it also makes it safer because you can uniformly handle provisioning/deprovisioning, password resets, 2FA and all other policies in one place for your entire organization instead of a hundred separate ones.


SOC2 - the thing your CTO tells you is important but does everything in their power to ignore.


> but does everything in their power to ignore.

aka "make the devops team deal with it even though we're paying double their salary to a 'Security & Compliance Director' who hasn't renewed any of the certs they used to get this job since 1997 and hasn't the foggiest clue how the SIEM works when the auditor shows up"


I'm proud to say that I cost our SRE team less than a day's work from the start of the audit engagement to the end of it. If I'd thought to brag about that in the post, I would have. I cost our bizops person a lot of time though, which I feel bad about.


I feel this in my veins.


This matches our experience well. Surprisingly intensive in some unexpected ways, and surprisingly silly and incomplete in others. I get that the Fortune 500s want it, but having now done one I don’t quite know why they value it beyond a checklist item in their own compliance.


You know how you get software engineering candidates that look awesome on paper, but can't code FizzBuzz?

Same reason.


This is a great way to put it. People here are bashing SOC2 because it doesn't go in-depth enough, it's just checking for the basics, it doesn't actually stop hackers from accessing insecure AWS buckets or ransomware attacks, etc etc, and they're absolutely right.

But it's meant to be a minimum. It verifies that there isn't one copy of the source code on a dev's laptop. It verifies that a dev who gets fired won't be able to log into the production server and delete all data in retaliation. It verifies that an intern isn't able to completely destroy the business by accidentally deleting the production database (because you have routinely tested backups and a documented RTO/RPO, of course). Being able to demonstrate this level of minimum competency is extremely valuable when you're in the B2B world and trying to sell your product to a larger client.

The paperwork is a hassle, but if your company is following best practices for development and operations, there shouldn't be much of a step change in what you're actually doing on a day-to-day basis.


I worked for a SOC2 and ISO27001 cert outfit.

Real security starts with a good security culture. Audit and compliance assures that audit and compliance passes, not that you have a good security culture.

While I respect that the two are independent, being audited doesn’t mean your customers aren’t at risk or that you care about them.


Did you get any other message from this post? I'd like to think I just wrote the most SOC2-cynical "we just got SOC2" post that has ever been written, but if you've got a better example, I'd dearly love to read it.


I would love to write it but I would probably be sued instantly. That's all I'm saying and that says a lot :)


  I’m underselling SOC2. It assures some things pentests don't:
  * consistent policies for who in the company gets access to what
  ...
That's a start. Of course, having policies raises a new question -- how well are they practiced? To what degree are they:

a. discoverable via internal tools?

b. descriptive and understandable (the policies use terminology that maps to the business processes clearly)?

c. measured?

d. internalized by employees as part of the culture?

e. enforced (as desired)?

f. externally reported (e.g. to downstream customers)?

g. reviewed when appropriate?

h. adjusted or removed as needed?

There are some spectacular failure modes of policies in the real world. Here are five attributes that fit together nicely into a mosaic of dysfunction:

* Everyone has to take a sleep-inducing thirty minute training.

* Policy compliance is "tracked" with a spreadsheet on an ad-hoc basis.

* Everyone in the organization has to "check off" that they comply, starting at the bottom and working up the chain to El Jefe.

* If something hits the fan and you need someone to blame, follow the paper trail and blame the people who incorrectly attested to policy compliance.

* Virtually anyone could fail compliance if you haul out the microscope. This is a feature, not a bug. Now you have a convenient and official way to get rid of people you don't like for arbitrary reasons.

This is stylized, yes, but not too far off the mark at some places.


Many of those are real concerns.

One of the main points of real audits like SOC2 Type II is validating enforcement of the controls outlined as policies. So a policy that only employees with job role Z can sign onto System Y would be checked by summarizing the login audit logs for Y and verifying that every name on the list maps back to someone with role Z. Other policies might require verification by reviewing all occurrences, or an auditor-selected subset of them. Some details of some policies cannot be fully verified. As long as that risk is known and documented, it is not necessarily a problem.
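
As a rough illustration (not any auditor's actual tooling; the CSV columns and file layout are made-up assumptions), the evidence request for that kind of control boils down to a cross-check like this:

  # Sketch: every login to System Y maps back to someone whose HR-recorded
  # role permits it. Field names and file formats here are illustrative.
  import csv

  def access_violations(login_log_path, roster_path, allowed_role="Z"):
      with open(roster_path) as f:
          roles = {row["email"]: row["role"] for row in csv.DictReader(f)}
      violations = []
      with open(login_log_path) as f:
          for row in csv.DictReader(f):
              if roles.get(row["email"]) != allowed_role:
                  violations.append((row["timestamp"], row["email"],
                                     roles.get(row["email"])))
      # An empty list is the evidence that the control held over the period.
      return violations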

SOC2 audit reports provide a high level version of reporting to downstream customers, without necessarily revealing the full details of the policy. (Sufficiently important customers could always insist on seeing the actual policy documents, if the reports don't satisfy them).

But some of your remaining considerations are somewhat outside the scope of SOC2. And they can be tricky problems.


The policy section and example policy had me cringing hard, not because of the "simplistic" tone (well, a little) but more because of the blithe ignorance of why policies are written with "whereas" and "designee" and such. It's all well and good to have a quirky fun simple summary of a policy, or a simple straightforward policy that is super vague to cover every base, but don't discount the legal jargon.

I don't know if the example given is real or not, but something as simple as saying "Slack or E-mail" might make sense if those are literally the only two methods of electronic communication used, but when shit hits the fan that language won't cover it if someone sent an SMS. And let's face it: shit will hit the fan.

This doesn't mean any given policy has to be unreadable (and in fact that might be detrimental) but neither can it be so jocular that it is ignored or unenforceable. If fly.io has received ISO certifications with those examples in actual usage, I'd be skeptical about who issued those certs; ISO doesn't certify directly, relying on the free market reputation of external companies/consultants to be truthful about compliance. Of course, ISO compliance isn't legally enforceable other than as a checkbox for some other procurement or investigative body, so maybe a dice roll on whether anyone checks is worth the cost.

Just my two cents, take it for what it's worth in 2022 USD.


As I am now an authority on the authoring of security policies I can reliably inform you: you are criticizing an information security policy for not being a data classification policy. The data classification policy spells out exactly what kinds of information are suitable for exactly which modes of transmission and storage.

SOC2 demands both an information security policy and a data classification policy. And a retention policy. And an access review policy. And an incident response policy. And a BC/DR policy. And a change management policy. And a vulnerability management policy. And a vendor management policy. These are different policies. Some of them have broad audiences, like the data classification policy, which is incorporated by reference in the information security policy. Some of them have narrower audiences, like the vulnerability management policy.

Hopefully that resolves your blitheness concern.


Yes, thank you, that soothes me somewhat. I am not overly familiar with SOC2 specifically, so I read it as a generality. I was kinda harsh, but mainly because I really don't want somebody unfamiliar to think policy writing in general is a waste of time and that they should globally adopt the same sort of language as in the example provided. To be fair, I think the world of what fly.io has been doing. Just not a fan of that particular section.


Congrats to the team at Fly.io. I can only imagine how lucky they felt to have Thomas. I'm sure having that level of expertise and experience in house made the process relatively smooth.

> You’re going to write a bunch of policies. It’s up to you how seriously you’re going to take this.

I think this is one of the most important points in the article. SOC2 is going to hold you to these policies so they better make sense for your business, the nature of the product, and how your company works. Creating well written policies is an art though.


Going through SOC2 as well, I have a similar experience :)

> I was surprised by how important this was to our auditors. If they had one clearly articulable concern about what might go wrong with our dev process, it was that some developer on our team might “go rogue” and install malware in our hypervisor build.

> ... our auditors cared a lot about unsupervised PRs hitting production.

It's indeed a growing concern, and (shameless plug) the main reason we founded arnica.io (helping organizations automate that process without hurting development velocity).


The blog post in general was good/informative, but I gotta say, this quote does reduce my confidence in Fly quite a lot:

> To get the merit badge, we also had to document an approval process that ensured changes that hit production were reviewed by another developer.

> This isn’t something we were doing prior to SOC2. We have components that are effectively teams-of-one; getting reviews prior to merging changes for those components would be a drag. But our auditors cared a lot about unsupervised PRs hitting production.

> We asked peers who had done their own SOC2 and stole their answer: post-facto reviews. We do regular reviews on large components, like the Rust fly-proxy that powers our Anycast network and the Go flyd that drives Fly machines. But smaller projects like our private DNS server, and out-of-process changes like urgent bug fixes, can get merged unreviewed (by a subset of authorized developers). We run a Github bot that flags these PRs automatically and hold a weekly meeting where we review the PRs.

Letting code go straight to prod without a review is just IMO a really bad practice. It sounds like they've improved significantly here, but still have plenty of gaps where people can ship to prod with no code review until it's been running in prod for up to a week. This isn't just about stopping bad actors, it's 99% about preventing all sorts of bugs/mistakes/bad ideas from hitting prod. Obviously automated tests are the main line of defence there, but code reviews are very important too. I'm kind of shocked that they want to skip out on this. I personally wouldn't want to rely on a core piece of infrastructure from a team practicing this level of cowboy coding.
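
(For what it's worth, a "flag unreviewed merged PRs" job like the one quoted above is straightforward to approximate against the GitHub REST API. This is a hedged sketch, not Fly's actual bot; the repo name, token handling, and hand-off to a weekly-review agenda are all assumptions:)

  # Sketch: list merged PRs that never received an approving review, so they
  # can be queued for post-facto review. Repo and token are placeholders.
  import os
  import requests

  API = "https://api.github.com"
  REPO = "example-org/private-dns"  # hypothetical repository
  HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

  def unreviewed_merged_prs():
      prs = requests.get(f"{API}/repos/{REPO}/pulls",
                         params={"state": "closed", "per_page": 100},
                         headers=HEADERS).json()
      flagged = []
      for pr in prs:
          if not pr.get("merged_at"):
              continue  # closed without merging; not interesting
          reviews = requests.get(
              f"{API}/repos/{REPO}/pulls/{pr['number']}/reviews",
              headers=HEADERS).json()
          if not any(r["state"] == "APPROVED" for r in reviews):
              flagged.append(pr["html_url"])
      return flagged  # e.g. post these to the weekly review agenda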


In this context, prod means all the things you'd expect, plus: internal admin app, blog, static marketing site, old Rails app no one has touched in 2 years that one customer still needs, bash scripts to diagnose host issues, etc. There's a reasonable scope for "PR reviews are good", and it does not extend across everything SOC2 covers.

That's because SOC2 is only concerned about vectors for exploiting code, and gives very few shits about how well the platform actually works. The policy had to cover the full scope, though.

This is the difference between a "policy" and a "practice". We've long been doing code reviews on critical code, even last year when there were only 7 people here. And we've long had a release process meant to minimize the risk of bugs harming users.


We don't let most code go straight to prod without review:

* Post-facto review is a norm only on a couple oddball projects

* Out-of-process PR merges are a privilege for only a subset of developers

* We do the same in-depth code review most teams I've worked with in the last 10 years do. If you're rolling out a feature, teammates are reviewing your code.

Take a typical, clueful engineering team and SOC2 it sometime, and you'll see the difference between "not allowing cowboy coding culture" and "satisfying the letter of the SOC2 code review controls".


I will need something like this in 12 months. Can you add me to your CRM to reach out in a year please? Find me on linkedin, thank you!


Have a look at https://www.vanta.com/ when you actually get involved in the SOC2 dance. A couple of years ago I took a startup through the SOC2 and PCI L1 compliance process "manually". At the same time Vanta was kind of starting.

I decided not to use them because I (foolishly, in retrospect) wanted to "learn" the SOC2 and PCI cert process by walking through it (kind of how you do derivatives, integrals, and numerical methods in school by hand so that you "grok" them).

Since then, I've heard good things about Vanta from a couple of friend CTOs that adopted them. If I had to go through SOC2, PCI or ISO27001 (I did that at yet another previous startup) I would definitely go with them.


We similarly had good results with Vanta (and their recommendation of a Vanta-friendly firm).


You can be SOC2 compliant and let your employees store passwords in plain text on Slack, Confluence, etc. and those passwords can be things like "password" and be shared with your partners. No 2FA / SSO by the way. And that's just the tip of the iceberg.


I don't understand why people assume SOC 2 can cover every single possible scenario. Especially scenarios that have nothing to do with actual SOC 2 controls, but the result of lax security culture or bad actors.

You can pass a driving test and get a driving license, but you can still drive 90 miles per hour on the freeway and run red lights. Is it the fault of the DMV? The fault of the person who administered the driving test? Well, since people are getting away with bad things, why don't we remove the driving test and abolish the DMV.

Also, who is intentionally letting your employees store passwords in plaintext?


This is true. But if you have diverse applications and vendors, SOC2'ing that practice will be a pain in the ass, because you're going to have to document how you manage credentials, access, and onboarding/offboarding for each of them. SSO is good on the merits, but it also simplifies clearing SOC2 audits.


That's not really true. I mean, if any auditor knew you were doing those things, you would fail the audit (or, rather, should fail it, if your auditor knows what they're doing).

It's not like it's impossible to hide those things, but that's a whole other issue.


>Bottom-line: SOC2 is a weak positive indicator of security maturity, in the same ballpark of significance as a penetration test report (but less significant than multiple pentest reports).

Having gone through SOC2 a few times, it's more about opening doors to enterprise customers. The audits are very "grey" and subjective, depending on your compliance auditor. Also, you get the freedom to say "we're working on this," which allows you to pass certain controls.

One final note: watching our CISO go through this I realize it's utterly the most boring, soul crushing job I have ever seen. It's non-stop clerical paperwork that nobody will ever read but everybody demands to cover their ass. Pay your CISOs.


I think one thing that doesn't get covered enough is SOC 2's value in providing additional data for vendor security reviews. That poor CISO who has to work on SOC 2 is probably tasked with reviewing new vendors on a regular basis as well. Sure, there are security white papers and pentests (which can come from dubious sources), but a SOC 2 report at least serves as a fairly independent assessment of a company's security maturity. Most people don't fully understand the number of vendors required for a company to operate (take every department you can think of and assume each will have at least 3-5 vendors per quarter).


> try to find simple access control models

Can I finally make a fly.io API key that isn't all-powerful? All I see is "Create" and "Revoke" buttons. There's not much access control visible, to this user.

I'd love to delegate ability to scale a specific app up/down to some automation, but currently all it seems I can do is let that automation become me.

(For the peanut gallery: https://fly.io/blog/api-tokens-a-tedious-survey/)


It's coming, very soon! We did Fly Machines first. We're doing Macaroons. The backend code is mostly there, but the API integration is a work in progress. This is at the top of our punch list right now.


First, my company uses a tool called Vanta. Really simplifies the SOC2 process for a small to medium sized business. They have managed to automate large portions of the process. I think I created only 5 or so screenshots for the whole SOC2 audit.

Second, the process is making your business more secure. There are so many things that people skip without the reminders of the SOC2 audit. Also, if someone has a SOC2, it tells me they are doing the standard stuff I expect.


Everyone uses Vanta, at least among YC companies (Vanta is a YC company). There are things I like about Vanta, and you can't argue with the track record. I have one big concern with it, which is that in working with other companies that used Vanta, my experience was that it pulled those teams towards an expansive take on what SOC2 is, and induced them to do extra engineering work. Alarm bells should go off in your head when you do engineering work specifically for SOC2, because SOC2 has weird ideas of how serverside engineering should work, and you probably don't want to adopt them.

I know Vanta (and the other tools like them) are customizable, and you don't have to do everything they suggest. If you know that going in, and you're careful about minimizing your Type I, they work great and you can't argue with the track record.

The thing with me is: you really can't flunk a Type I. If you're serious about getting certified, you will get certified. So you should be much more worried about SOC2 dragging you into extra work or bad engineering decisions than you should be about whether you'll succeed at getting a Type I. So anything that creates the perception that "extra stuff" has to get done for SOC2 is something I'm automatically wary of.


Agreed. We did a process on Drata vs Vanta and ended up choosing Drata, but they're similar in terms of product/pricing. Drata has saved us tons of time.


> First, We needed a formal org chart posted where employees could find it.

Isn't it bad infosec to publish this information? Also, probably bad for employee retention. Don't want the headhunters to get a hold of this.


It’s where employees can find it, not necessarily having it public (which probably is a bad idea).


> SOC2 is the best-known infosec certification, and the only one routinely demanded by customers

Maybe in the US. For the rest of the world, ISO27001 is arguably better known.


SOC2 is also one of the weakest.

>Developed by the American Institute of CPAs

I don't know when CPAs became infosec experts.

>Each company designs its own controls to comply with its Trust Services Criteria.

Because it depends on self-assertion, SOC2 is generally a weak organizational certification.


They're not infosec experts, and don't claim to be.


SOC2 signals much higher maturity than ISO27001, also in Europe.


> finding out what they did instead of background checks, stealing their process, and telling our auditors “do what you did for them”.

And what might that be?


Was ssh really so broken that you need Teleport? No.

Things that were once capable of being automated across fleets of cattle VMs with ssh + keys now are not, since Teleport was shoved into my face.

Oh, you type teleport to get somewhere, and then your SSO appears in a web page/browser! Because what all IT at scale needs is a human constantly manually authenticating with a web browser every 2-4 hours.

I will say that at least teleport is a "solution" rather than an imposition by some powerpoint security group run by people graduating from Florida Gulf Coast College running through checklists. Meanwhile the core change-my-company-password webpage won't run in chrome because the SSL algorithms are so out of date. But somewhere... the company is in compliance.

I like that the article is self-aware how shitty the security industry is, mostly a bunch of bullshit they sell the execs so that the execs can say "we did what was best practice".

Meanwhile, Okta SSO got breached by some teenagers bribing people with bitcoin, and everyone just ate their public "it wasn't that bad" and moved on with no real accountability to what was pushed out in the press release. I do appreciate those evil little shits pulling down the pants of Okta.

China must laugh at the security employed by US companies. I guess the real challenge isn't getting in, it's just not getting caught.

The problem with security audit log software is that it implies you have centralized systems with universal access. Uhoh, your security audit logs are now a ransomware vector, and as someone else pointed out, a place you mistakenly write your password or worse.

The problem with organizational behavior audits is that there's no way your org can resist being bribed, because your org is cheap, mistreats its workers, and abuses them. Teenagers with bitcoin will lulz you, background checks or no.

These auditors are consultant scum. The good news is that with all things like Rational Rose, Xtreme agile, etc, they too shall pass.

Especially with teenagers with bitcoins embarrassing them on the regular.

Sorry for the rambling. There are no easy answers, and fuck all the compliance and "enterprisey solutions" for saying there is.


I'm trying hard to figure out what any of this has to do with SOC 2.

Perhaps consider the phrase "Don't let perfect be the enemy of good". Okta had a shitty breach, but does that mean dropping SSO completely? What better alternatives are out there?

If you believe in "easy answers", then you are buying into the marketing and sales pitch. There is actual meaningful work being done by many that isn't easy and isn't appreciated.


I don't have an opinion about Okta. We don't use Okta. I trust Google's security engineering more than I trust my own; in fact: the entire industry implicitly does.

Teleport isn't a hosted solution, or, at least, if it is, we're not using it. We're using an open-source codebase that gives us mandatory, phishing-proof MFA authentication for SSH sessions, access control tied to our source of truth about roles and access, and transcript-level audit logs.

I'd use something like Teleport even if SOC2 didn't exist. SOC2 does exist, so believe that I'm going to apply Teleport features that we already use for security-engineering reasons to every DRL item I can.


"There are no easy answers, and fuck all the compliance and "enterprisey solutions" for saying there is."

https://www.rsync.net/resources/regulatory/pci.html


A tqbf blog with a link to a blogger agreeing with a Matasano blog about not liking agents? I smell a reference loop!


Every article about SOC2 should be illustrated with a Hieronymus Bosch-like hellscape.


I cheated and stole the art for our podcast as a placeholder. We have something better coming for this post. :)


Wait, what screenshots?


Like the sibling, I've been through a SOC2 process, and there were screenshots, lots of screenshots, to try to "prove" things to the auditor.

(I say "prove", as sometimes the question is malformed, and so you're trying to make the best of it as you can.)


In my experience, massive reams of screenshots for evidence of process / tooling / etc. are the norm with auditors. I have in the past been asked to provide screenshots of source code even.


Yes. Screenshots with the desktop clock included, because that way you can be sure of exactly when the screenshot was taken.


Yes. Screenshots of resolved security tickets, screenshots of 2FA being required in the Google Apps console, etc.


Looks like more reasonable compliance than I expected. Good work.


SOC2 is a deal breaker. It is already a pain to work for large American companies. You are forced to not be honest in the audits. The audits are complete bs, by the way. Some external offshored company that wants screenshots of some specifics that they somehow found important. It is more Kafka than actual value. Miss those days, not…

It is better how it works in the EU. Track money and stock activity + GDPR. SOC does not prevent Enrons. I mean, even if there was a complete ledger of everything ever done in a company you could still not prevent Enrons. The fraud part should have been detected anyway. And would have been in the EU, I would say.

Edit: D'oh! I mixed it up with SOX that is mandatory.


>SOC does not prevent Enrons.

SOC does not prevent Enrons; SOX does ... that's a different audit, and a different process, though.

>The fraud part should have been detected anyway. And would have in EU, I would say.

cough ... Wirecard ...


D'oh! You are right of course. I mixed it up with SOX that is mandatory.


> You are forced to not be honest in the audits.

Not sure I agree. The question is, what are you willing to sacrifice in order to be honest.


When an audit imposes a dumb requirement that nobody will benefit from, and it's easier to change jobs than to do the dumb thing, you have the theoretical option to be honest and do it.

But if it's easier to change jobs than to be honest - is the option to be honest one that will be taken by any rational person?


Actually, the easiest thing is to find a better auditor. A SOC audit isn't like an IRS audit, you actually pay them to come in and audit. Not all are created equal and sometimes you get what you pay for.


> But if it's easier to change jobs than to be honest - is the option to be honest one that will be taken by any rational person?

Yes. But I don't think this is a good forum for arguing differences in values.


> We moved everything we could to our Google SSO.

I realize this is for employees and it is hard to escape their gravitational pull, but I hope no customer data is going to Google unless asked for.


This is for employees only. We don't collect more than necessary for customers, and if we did we sure as hell wouldn't be sending it to Google.


Thanks, and right... I don't expect maliciousness on fly's part. But you'd be surprised (or not) how many goog products phone home with anything they can find. In fact it might be called a "business model" of some sort.


Google's SSO can't really phone home. You're either using SAML or OAuth; in either scenario, the information flow is Google --> the app you're SSOing into; name, email, and user group information.

If you're SSOing into, say, AWS, Google doesn't get any access or private info out of AWS in the flow.
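
To make that concrete: in an OIDC flow the relying app ends up holding a signed ID token from Google, and the claims inside it are the entire information flow. A minimal sketch of peeking at those claims (signature verification is omitted and would be mandatory in a real app; the listed claims are just the typical ones):

  # Sketch: decode the payload of a Google-issued ID token to see what the
  # app actually receives. Nothing in this flow queries back into Google.
  import base64
  import json

  def claims_from_id_token(id_token: str) -> dict:
      payload_b64 = id_token.split(".")[1]
      payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
      return json.loads(base64.urlsafe_b64decode(payload_b64))

  # Typical claims: {"iss": "https://accounts.google.com", "email": "...",
  #                  "name": "...", "hd": "your-company.com", ...}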


You get that the norm in hosting providers is hosting applications themselves on Google and Amazon, right?


I don’t trust anything with google on it, is that clear enough?


Accidentally hit enter the moment before HN went down and didn't get to edit it. :-D

But yes, reading about Google on your network gave me the heebie-jeebies. That the 'normal' thing is to host them on google is not really pertinent. If I was interested in that, I'd have done it. But I'm looking at other parties on purpose.

ceejayoz seems to think it is not technically an issue, google is not on your network. That sounds plausible to me, assuming it is accurate.


Well, our root of authentication trust is Google, so if "no Google involved" is a criteria for you, we're not a good place to host stuff for you.


I was waiting for ISO 27001 :(


thank you for writing this. this is one of the truest accounts of SOC2 ever!


SOC 2 is a policy audit.


I love it when nothing is documented and every process exists in someone's head.


[flagged]


> The whole industry is a fucking racket.

That seems plausible, but can you flesh out the argument?


Not OP, but from my experience with SOC2 the auditing and compliance can be a complete joke. There are auditors who're entirely satisfied so long as an engineer dismisses all the dependabot alerts in Github (even if they're all dismissed with the "don't have bandwidth to fix this" option).

> SOC2 is a weak positive indicator of security maturity [quoted from TFA]

I'd argue it's no indicator at all.


It’s an indicator that you paid your protection money to the industry audit mafia. Racket, like I said.


>SOC2 is the best-known infosec certification

LOL. I stopped reading right there.


but it is? Which one is the best-known in your opinion then?


If I do SOC2, then I have to spend a lot of money.

If I don't, then my customers will forgive me in a few weeks and life goes on.


If you're in B2B, plenty of larger companies will disqualify for not having SOC2/ISO27001.

Also, it can help get you out of repeat security assessment questionnaires, so it can actually give you time back, depending on how many of those you have to field.


This. You'll lose more money in lost clients than SOC2 will cost you. It is only really expensive the first time you do it - after that if you just follow your own procedures the annual audits are pretty easy. And yes, being able to just reply to those security questionnaires (do you have armed guards in your data center?) with "see SOC2 report" is gold.

Of course, if you are in an industry where clients don't ask for SOC2, don't do SOC2.


> If you're in B2B, plenty of larger companies will disqualify for not having SOC2/ISO27001.

And it's a good question whether you want such larger clients at all. At one of the previous places where I worked, we used to put deliberately bad answers (the worst that our public version of the security policy would allow, not the actual practices) in security forms in order to get rid of too-demanding clients.


> we used to put deliberately bad answers

That seems like quite a waste of time. Nobody forces you to take on a customer, so if you don't want them just say no and move on, instead of spending a lot of everyone's time to go through the motions hoping for the deal to break.


A lot of enterprise-scale companies won't even consider your SAAS if you don't have SOC/ISO, but many can certainly make it without those companies as customers.


If you're trying to be a vendor for a medium or larger company, SOC2 is usually one of the bright-line requirements.

... Which is not a good thing, because (as noted already in this thread) SOC2 doesn't actually make you secure. Nor does not having certification make you insecure. But, when used as a shorthand, it leads companies to engaging in compliance theater to get certified, spending a bunch of money without actually making their data noticeably more secure.


Having policies, records, procedures and documents for everything might also make due diligence easier in case you want to sell the company at some point. Makes it look a bit less like a messy one man show too.


> If I don't, then my customers will forgive me in a few weeks and life goes on.

If you don't, you are missing out on a lot of customers who would have given you 10000x the one-time cost of SOC2.


For a lot of shops, yes. Don’t SOC2 if that’s you!


Okay cool; but where is the report?

Surely you didn't write all of this just to make me sign an NDA to see your report...


You can’t publish them; it’s an auditor restriction. You do a separate, expensive report if you want it public. Trust me, you’re not missing much.


This is a restriction of SOC2 itself. SOC2 produces a "restricted use" report, meant for the client company who purchased it, and for limited access by third parties that do business with the client company.

AIUI, the intent is to ensure that the people reading the SOC2 report don't read into it more than what the auditor intended. This, according to the auditors, is fraught with difficulty, because you need a degree of familiarity with COSO principles and general SOC-ese to understand precisely what the auditor is making strong claims about, and what is _not_ being claimed.

In practice, the way this is accomplished is that the SOC2 report includes standard verbiage saying that the report is only for the client company, and for specific third-parties who must have "sufficient understanding" to parse the report correctly.

The client company then implements some process, with the guideline of "do something that won't piss off your auditor". For small companies, by and large the implementation is "you have to be an existing or prospective customer, email to ask for the report, and sign an NDA." The very act of requesting a SOC2 report implies that you know how to read a SOC2 correctly.

Some larger companies set up portals where you can help yourself to their various reports, sometimes without NDA - but in that case you have to click through a pinky-promise to not redistribute, and the report is watermarked with your identity to deter distribution. A very few publish the report at a hidden URL and hand out the URL to anyone who emails in to ask for it, though I personally think that's walking a bit too close to the edge of what they agreed to with auditors.

Companies with deeper pockets end up paying an auditor extra for a SOC3 report, which is "SOC2, but abridged and with unrestricted distribution rights". I believe the theory (besides giving more money to the auditor) is that the SOC3 removes all the information that might be misinterpreted, boiling it down to barely more than "I, a trustworthy auditor, confirm that the company is doing all the right things." You don't get much detail, but as long as you're willing to transitively trust the auditor (who are themselves scrutinized by their regulatory bodies), that gives you a "compliance is yes" document that you can publish far and wide.

If you want an example of what a SOC3 says, Github has one: https://github.githubassets.com/images/modules/site/security...


Wonder how this will mesh with Vanta’s new Trust Report feature.

All it currently requires is an agreement and your email address.

The Trust Report doesn’t show all the details unless you configure it to show those details.


Trust Report seems to be irrelevant to this (from what I can tell from the brochure without being a vanta customer), because it's a way for a company to publish claims about itself. Crucially, nowhere does it say that an independent auditor verified those claims.

SOC2 broadly contains:

  - A description of what the company claims to do

  - A statement that the description is complete and accurate

  - Auditor's testing procedure for verifying the company's claims

  - The results of the testing

  - The auditor's overall conclusion as to whether the company meets the bar for SOC2.

Trust Report seems to only cover the first point.


Hi! Christina from Vanta here.

The Trust Reports contain programmatically-validated information (basically: Vanta's code says the control was in place continuously.)

There's (obviously) pros and cons of trusting a software provider (like Vanta) to validate technical configuration compared to trusting a human auditor to do the same.

Our bet with Trust Reports is that for some cases, having software do the checking and validation continuously is better than having a human auditor do it once a year.


> Companies with deeper pockets end up paying an auditor extra for a SOC3 report

They usually don't. SOC3s are generally useless.


That's a neat document. Is there something like this also for ISO 27001?


I've never met a company that gave SOC 2 out without an NDA. In fact, I'd consider it a negative indicator of maturity to not require it.


I've met a few where it's not explicitly required on download (e.g. GitHub has their SOC2 available for download on the enterprise admin page, but by the time you're using GHEC you've signed a few pieces of paper), but agreed that most companies aren't giving them away for free.


Those systems tend to stamp your account information into the PDF, as well.


Yeah I was trying to make a joke that didn't land I guess.


> Everybody would be better off if they stopped believing what they believe about SOC2, and started believing what I believe about SOC2.

Since the author is a member of the set "everybody", we have a paradox. :)

More seriously, it would not be hard to adjust the language just a little bit by saying, e.g. "Most people would be better off...". Alternatively, the author could adopt a common style used in business communication where the author creates a label for the group that would benefit from the SOC2 knowledge. Perhaps call the combination of "cynics", "customers", and "true believers" the "unwise trifecta" or something. (I admit I don't have a catchy term in mind yet.)


The best thing about written English is how malleable it is. Everybody doesn't have to mean everybody.


My current view is that ambiguous language only benefits poets and disingenuous scoundrels (politicians, sleazy marketers, etc.)

That said, I'm open to being talked out of this viewpoint. Being cynical makes me unhappy.


Thomas is definitely a disingenuous scoundrel.

In all seriousness, I laugh at absurd absolutes. If you show me a sentence with "Most people" and the same sentence with "Everybody", I'm going to smile at the second.

But I don't think that means it's ambiguous. "I literally died" makes me laugh but there's no deception.


This is why we can't have nice things.

e.g. remember when "synergy" actually meant something?


I literally remember that.


> Alternatively, the author could adopt a common style used in business communication where the author creates a label...

Yeah no offense but this style of writing makes me go look for something else to read, whereas tptacek's style made me keep reading the (long) post..


If we're being sufficiently pedantic about this, for any person A and belief B, it is perfectly non-paradoxical for A to stop believing B and then start believing B.


If we're being sufficiently pedantic, then after this process person A would be right where they started and would not be "better off", making the original statement false.


It would be easy to adjust the language. I'd just have to give up on people reading any of it.


... because there are no possible improvements, of course, to perfection.

P.S. False dichotomy alert



