
> Is there a valid defense for a platform whose security relies on the unanimous cooperation of a widely-scattered developer base?

The defense is staged deployment plus attentive users. How well it works obviously depends on how blunt the malicious code is.

If I may assume easily noticed effects of the malicious code: a dev at our place - we use Java with Maven - would update the library and his workstation would get owned. That could have an impact, but if we notice, we'd wipe that workstation, re-image it from backup and get in contact with Sonatype to kill that version. The version would never touch staging, the last step before prod.
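A minimal sketch of the pinning side of that, as a pom.xml fragment (the coordinates and version here are invented): with an exact version pinned, a freshly published malicious release never flows in implicitly, so picking it up stays a deliberate, reviewable change.

    <!-- pom.xml fragment; groupId/artifactId/version are hypothetical -->
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>some-library</artifactId>
      <!-- exact pin; no ranges like [2.4,) that would auto-pull a new release -->
      <version>2.4.1</version>
    </dependency>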

If we don't notice on the workstation, there's a good chance we - or our IDS - would notice trouble on our testing or staging servers, since staging in particular is similar to prod and subject to load tests that mirror prod load. Once we're there, it's back to bug reports against the library and contact with Sonatype to handle that version.

If we can't notice the malicious code at all, due to really, really smart activation mechanisms... well, then we're in NSA conspiracy land again.




> If we can't notice the malicious code at all, due to really, really smart activation mechanisms... well, then we're in NSA conspiracy land again.

What about really dumb activation methods? E.g., a condition that only triggers the malicious behavior several months after the date the package was subverted. You don’t have to be the NSA to write that.
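For illustration, that kind of trigger really is just a date comparison; a minimal Java sketch (the class name and date are invented):

    import java.time.LocalDate;

    class Payload {
        // Hypothetical date baked in at publish time: stay dormant for a
        // few months so reviews, CI runs, and sandboxes only ever see
        // benign behavior.
        static final LocalDate TRIGGER = LocalDate.of(2019, 6, 1);

        static void run() {
            if (LocalDate.now().isBefore(TRIGGER)) {
                return; // behave normally until the date passes
            }
            // ... malicious behavior would start here ...
        }
    }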

What’s scary here is that there are simpleminded attacks that, AFAIK, we don’t know how to defend against.


Mh, I have a rather aggressive stance on these kinds of incidents, no matter whether they are availability- or security-related. You can fish for them and you can test for them, and there are still entire classes of malicious code you cannot find: whatever you do, Turing-complete code can circumvent it. There's a lot of interesting reading material in the malware-analysis space on sandbox detection, for example.
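To make the sandbox-detection point concrete, a hedged Java sketch of the kind of environment check malware uses to stay quiet under analysis (the thresholds and hostname heuristic are invented):

    import java.lang.management.ManagementFactory;

    class EnvironmentCheck {
        // Crude heuristics: analysis sandboxes often have few cores, a
        // freshly started JVM, or telltale hostnames.
        static boolean looksLikeSandbox() {
            int cores = Runtime.getRuntime().availableProcessors();
            long jvmUptimeMs = ManagementFactory.getRuntimeMXBean().getUptime();
            String host = System.getenv().getOrDefault("HOSTNAME", "");
            return cores < 2
                    || jvmUptimeMs < 5 * 60 * 1000 // JVM up < 5 minutes
                    || host.contains("sandbox");
        }
    }

Code that only misbehaves when looksLikeSandbox() returns false will pass every dynamic-analysis run inside such an environment.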

So stop worrying. Try to catch as much as feasible before prod, then focus on detecting, alerting on, and ending the actual incident. If code causes an incident, it's probably measurable and detectable. And even then you won't be able to catch everything: as long as a server has behavior observable from the internet, it could be exfiltrating data.
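As a sketch of what "measurable and detectable" can mean in practice, assuming a hypothetical per-minute outbound-bytes metric (the window size and threshold factor are invented; a real setup would live in the IDS/metrics stack):

    import java.util.ArrayDeque;
    import java.util.Deque;

    class EgressMonitor {
        static final int WINDOW = 60;      // keep the last 60 samples
        static final double FACTOR = 5.0;  // 5x baseline raises an alert
        final Deque<Long> samples = new ArrayDeque<>();

        // Returns true if this minute's outbound traffic is far above
        // the recent average, i.e. worth alerting on.
        boolean record(long bytesSentThisMinute) {
            double baseline = samples.stream()
                    .mapToLong(Long::longValue)
                    .average()
                    .orElse(bytesSentThisMinute);
            samples.addLast(bytesSentThisMinute);
            if (samples.size() > WINDOW) samples.removeFirst();
            return bytesSentThisMinute > baseline * FACTOR;
        }
    }

A slow leak that stays below that threshold slips straight past it, which is exactly the residual risk the last sentence describes.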


What if it encrypts user data?


You have your tested backups, yeah?


Tested restores with at most 59 minutes of data loss for prod clusters, completed within 90 minutes of the order, with 30-ish minutes of downtime. We could even inspect binlogs afterwards for a full restore, on a per-request basis for our big customers.

Cryptolocker on prod is not my primary issue.


Sounds like good hygiene, though it seems burdensome if everyone must do it or seriously risk infection. Ideally there would be at least minimal sanity checks and a formal process before a package can be claimed by someone else.



