You don't need "safe harbor" to test software you install on your own machine (which is what CrowdStrike is), and if you're testing someone else's server, you'd better have permission already.
The problem is with the publishing part. It's pretty unclear - to me at least - what the legal status of publishing 0days is around the world. In the USA, I'd expect it to be protected by free speech, but even then I wouldn't be 100% sure.
Publishing vulnerabilities in the US is protected speech. You get in trouble for disclosing vulnerabilities in four ways, ordered from most to least common:
1. You tested someone else's servers, and not software running on your own computer, and you didn't get permission or adhere to the rules of engagement the target established. Now you're not a researcher, you're an intruder, subject to the CFAA. There's a bright line in US law between your computer and someone else's computer.
2. You tested software running on your own computer, but you acquired that software by agreeing to a contract prohibiting research, reverse engineering, or disclosure (i.e., any NDA). Now you've violated a contract, and can be sued civilly for doing so. This comes up a fair bit when testing stuff on behalf of big enterprises, where all the software acquisitions come with big, enforceable contracts. I've had to cave a bunch of times on disclosures because of this; most memorably, I got locked in a vendor's suite at Black Hat an hour before my talk redacting things, because that vendor had a binding contract with my consulting client.
3. You were wrong about the vulnerability, or it could plausibly be argued that you were wrong, and you made a big stink about it. You're still a researcher, but you've also possibly defamed the target, which is a tort that you can be sued for.
4. You disclosed a vulnerability that you'd previously leaked, or provided exploit tooling regarding, to a criminal enterprise. Now you're not a researcher, you're an accomplice to the criminal enterprise. This has come up with people writing exploits for carding rings --- they weren't (or couldn't be proved to be) carders themselves, but they explicitly and knowingly enabled the carding.
As you can see, disclosing vulnerabilities isn't the least scary thing you can do with speech in the US, but it's not that much more scary than, say, leaving a nasty Yelp review for a dentist's office (something that also gets people sued). Just (a) don't test servers and (b) don't give secret bugs to organized criminals.
Excellent breakdown! The reason I was thinking of safe harbor is that most bug bounties tend to explicitly grant permission to folks participating in the program. It’s usually walled off by some scoping criteria, but it’s part of the deal.
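For what it's worth, that scoping part is mechanical enough to sanity-check in code. Here's a minimal sketch, assuming a made-up scope list with wildcard entries; everything in it (the hostnames, the exclusion rule) is hypothetical, and a real program's written terms are what actually grant permission:

    from fnmatch import fnmatch

    # Hypothetical scope for illustration; real programs publish their own.
    IN_SCOPE = ["*.example.com", "api.example.net"]
    OUT_OF_SCOPE = ["legacy.example.com"]  # exclusions override wildcards

    def is_in_scope(host: str) -> bool:
        """True only if host matches the scope and hits no exclusion."""
        if any(fnmatch(host, pat) for pat in OUT_OF_SCOPE):
            return False
        return any(fnmatch(host, pat) for pat in IN_SCOPE)

    assert is_in_scope("api.example.net")
    assert not is_in_scope("legacy.example.com")  # carved out
    assert not is_in_scope("example.org")         # no permission granted

Treat a check like this as a convenience, not legal cover; the program's safe harbor only applies to targets its terms actually name.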
The thing that seems a little iffy for me with CrowdStrike is that it’s an agent that calls back to services. It seems plausible that I could unintentionally break something in their environment while testing their software.
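One way to get a feel for that exposure before testing anything is to just look at what the agent is talking to. A minimal sketch, assuming psutil is installed and the agent runs as a process named "falcon-sensor" (an assumption here, used purely for illustration); on most systems you'd need elevated privileges to see another process's sockets:

    import psutil

    AGENT_NAME = "falcon-sensor"  # assumed agent process name

    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == AGENT_NAME:
            try:
                conns = proc.connections(kind="inet")
            except psutil.AccessDenied:
                print(f"pid={proc.pid}: need elevated privileges")
                continue
            for conn in conns:
                if conn.raddr:  # only connections with a remote endpoint
                    print(f"pid={proc.pid} -> {conn.raddr.ip}:{conn.raddr.port}")

Anything that shows up as a remote endpoint there is someone else's computer, which is exactly the bright line from point 1 above.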
I like how you wrapped it up though and totally agree.