
If you’re doing research of any significance in today’s world and don’t have an active security program looking for harmful actions by foreign intelligence, your organization opens itself up to all sorts of nasty liabilities. You don’t even have to have an electronic intrusion: the PRC’s government also pays people off, as the case of this former Cleveland Clinic researcher shows: https://www.cleveland.com/crime/2020/05/former-cleveland-cli...



I'm not sure I agree that it's the responsibility of the people doing research to protect against foreign nation state attacks (whether cyber or legacy intelligence).

1st: most people outside of government don't know how much they are expected/"required" to do to protect their work against foreign nation states. Except for heavily regulated sectors (government, military, heavy industry, banking, core telecom, and more recently elections), very few companies will actually get help from 3-letter-agencies to actively protect against foreign nation state attacks.

2nd: many people expect that the {NSA, Cyber Command, et al} are actively defending all US organizations. I don't see evidence of this (although if there was evidence, I probably wouldn't see it anyway).

3rd: In a national emergency (which the COVID response was declared to be), there are limits to the liabilities that would otherwise be enforceable in court. There are frequently (if not always) legal escape clauses, like force majeure and acts of god, which would likely alleviate liability for fallout from acts of war or a severe pandemic, so it's not clear that those "nasty liabilities" could be enforced. There are currently 2 important cyberinsurance cases[1] winding their way through the courts which may effectively decide whether cyberinsurance is a viable product. Violations of HIPAA are possible, but similarly may not amount to much in terms of prosecution because of the pandemic.

In reality, it's damn near impossible to protect against a motivated, targeted nation-state attack (especially one with the resources of the PRC). If liability incentives require that all projects, large and small, be able to withstand nation-state attacks, then all project resources go to cybersecurity and none into research -- your productivity is now zero.

It's important to remember that it's the FBI's job to do counter-intel. If a medical research group is defrauded by PRC spies and you blame the researchers for not being able to spot a non-trivial espionage attempt, you are just victim blaming. I work as a product developer in cybersecurity, and I doubt I could identify most spycraft if it happened right in front of me.

[1] https://www.cpomagazine.com/cyber-security/aig-case-highligh...


> many people expect that the {NSA, Cyber Command, et al} are actively defending all US organizations. I don't see evidence of this (although if there was evidence, I probably wouldn't see it anyway).

If you keep your confidential research results on an unpatched server with weak passwords and exposed to the internet, what is the NSA supposed to do about that?

About the best thing they could do is scan for and find the vulnerability before the attackers do and notify you about it, which in general they don't. And it still wouldn't solve most of the problem: there would be objections if they did more than a cursory scan, which means they won't find most problems, while the attackers are under no such limitations.

> there are limits to the liabilities which would otherwise be enforceable in court

I don't think this is the kind of liability they're talking about. If your confidential research falls into the hands of economic spies, the problem isn't so much that someone is going to sue you as that your research and any relevant patents have now lost their economic value because a knockoff product will beat you to market.

> cyberinsurance

This is liable to be more of a grant hog than liability would be. Not only do you have to pay the premiums -- which would be high unless researchers adopt good security practices, and having the insurance gives them an incentive to do the opposite -- but you then also have the insurance company imposing some kind of bureaucratic best-practices procedures, which gives you even more compliance costs than liability would. The insurance company has misaligned incentives with respect to the level of compliance burden to impose, since it doesn't pay any of that cost but gets all the benefits.

The reality is, the researchers are the ones operating the systems their research is on. They're the ones who have to secure them. And they already largely have the right incentives to want to do that, but they also have a poor understanding of the necessity of it and the process for doing it.

What would help here are the things that would help in general. Fund vulnerability research in free software so that the software people are using (because it's what they can afford) is secure by default, and easy enough to use that people don't commonly make mistakes, and well-documented. Things like that. Make it easier to do the right thing so more people do.


> 1st: most people outside of government don't know how much they are expected/"required" to do to protect their work against foreign nation states.

This is very true, sadly. It ought not to be, but the level of practical cyber ability seems sorely lacking. I see lots of "governance"-style cyber, but not a lot of "deep technical expertise being allowed to develop defences".

University research lab type environments deserve a special call-out, though, for being near impossible to defend. Most of the time these are "defended" by pooled central IT staff with no specific awareness of the significance of the systems or the threats they face. University networks are also notoriously open, and even lab environments are often connected directly to the internet or the campus network (air-gapped machines, with separate computers for internet access, are less convenient, and someone would have to pay for them, and nobody wants to). Let's not even go into the various shadow-IT remote access systems in use, which circumvent the institutional firewall so researchers can get work done from home in the evenings...

University lab environments are an incredibly tough target to secure. And the researchers will find ever more ingenious workarounds to security measures that they find getting in the way of their work.

> Except for heavily regulated sectors (government, military, heavy industry, banking, core telecom, and more recently elections) very few companies will actually get help from 3-letter-agencies to actively protect against foreign nation state attacks.

Even some of these sectors sorely lack cyber capability, at least in some very developed and otherwise capable countries. There is still a very real barrier between 3-letter agencies and the industries you mentioned that need this help. Information sharing is often too little, too late, or not specific enough to be acted on.

That said, I do think cybersecurity needs to be a bigger priority in all sectors, but nobody wants to pay for it, and as long as there's no routine cost to the business, I don't see that changing. Not while traditional "value for money" metrics are used to measure and compare options -- it's very hard for those reviewing tenders or proposals to differentiate between good security and some "military grade, unbreakable, quantum sprinkles" snake-oil security that has SQL injections everywhere.
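The SQL injection jab is worth making concrete, because the gap between snake oil and basic competence is often one line of code. A minimal sketch in Python with sqlite3 (the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "nobody' OR '1'='1"  # classic injection payload

# Vulnerable: string interpolation lets the payload rewrite the query,
# so the OR clause matches every row and leaks all secrets.
leaked = conn.execute(
    f"SELECT secret FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a parameterized query treats the payload as literal data,
# so it matches nothing (no user is literally named that).
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(leaked), len(safe))  # 1 0
```

Any "military grade" product that fails this test has bigger problems than its marketing.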


You conflate so many facets into a hopeless image. Yes, all research facilities are potential targets. Yes, it's wise to assume that no single one can realistically deflect a full frontal attack from a state actor. But every possibility doesn't happen all at once, because resources are limited. It's like saying you can't defend a country because each of your soldiers is mortal.

A large part of cybersecurity is removing the low-hanging fruit (e.g. GitLab's recent phishing test). The current stakes will probably draw an unprecedented level of attention toward research facilities, where people weren't concerned about all this stuff and, it's safe to assume, aren't experts in the matter. So there's probably a lot that can be done to strengthen the security landscape, making life difficult for attackers and generally consuming their attention and resources, resulting in a net positive. Even if any single facility could still succumb, maybe fewer will.


They don't even have to pay.

Chinese citizens are forced by law to spy when asked.

https://www.canada.ca/en/security-intelligence-service/corpo...


Let's not pretend that you have to be a Chinese citizen, or even Chinese, in order to spy for China. Or for any other country, for that matter.

https://www.cnn.com/2020/01/28/politics/harvard-professor-ch...


It is a good Bayesian prior to note that most Chinese spies are Chinese.


True, although having a good Bayesian prior working in your favour can also make you more effective and valuable as a spy.


What kind of liabilities? That looks like a case against an individual.

Are you talking human counter-intelligence as well as IT security?


Imagine a state actor hitting the contract research organization in charge of the last phase of a clinical trial for a blood pressure medication and changing data. Due to the nature of double-blind trials, these modifications can be really hard to catch and could lead to a lot of human suffering.


If they target a CRO, the sponsor still has the original data from the trial sites. I can say that, at least at the company I work for (one of the 10 largest pharmaceutical companies), it would be almost impossible for this not to be caught.


Even the crappy little cowboy CRO I worked for had a fleet of CRAs go out and manually verify documents against the EDC. It's required by law. I think FDA audits also repeat that process with a random sampling for some studies, though I couldn't swear to that.

I get the point the parent comment was trying to make, but yeah, bad example.
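Part of that cross-checking can also be done mechanically: if both the site and the EDC copy of a record are fingerprinted, a silent modification on either side stands out. A hypothetical sketch (field names and values invented, not any real EDC's API):

```python
import hashlib
import json

def fingerprint(record: dict) -> str:
    """Stable SHA-256 over a record's key-sorted JSON form."""
    blob = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# Hypothetical site source data vs. the CRO's EDC copy of the same visit
site_record = {"subject": "S-1042", "visit": 3, "systolic_bp": 142}
edc_record = {"subject": "S-1042", "visit": 3, "systolic_bp": 128}

if fingerprint(site_record) != fingerprint(edc_record):
    print("mismatch: escalate to manual source data verification")
```

This only flags divergence between copies; it can't say which side was tampered with, which is why the manual CRA verification against source documents still matters.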


The harmful action being getting info that will be published in Cell or Nature six months early?


So two questions (not to you, but the general audience):

1. Are you OK with some entity stealing your research and publishing it under their name, in a venue of their choosing, before you publish it?

2. Are you okay if the entity stealing the research is not similarly liberal with their own research on the same subject?


If the research were published on an ongoing basis, like open source software, it would not be a problem.

The problem only exists because people want to 1) hold data hostage for money and power, and 2) only publish successes.


> if the research was published ongoing

Are you aware that there are protocols in medical research against this?

Publishing research on an ongoing basis will taint the results (placebo effects, etc.).

There are blackout periods etc. just to make sure that the analysis is valid and not tainted.

Also a lot of research has lag time. Experiments can take a lot of time.

There is also a hidden assumption that China is hacking for the greater good.


You're talking about human medical trials. That's a very low percentage of research.


Even if software is open source, you can't steal it and claim you wrote it yourself.



