I asked myself this same question when my work was focussed on attack research, and the result was that I stopped working on it. Finding some clever new attack is great, but no one really cares after it is patched. So I focussed on building tools as well, automated reverse engineering and static analysis tools. Tools are getting better, but even then it's not clear that any of the current tooling approaches will have significant lasting payoffs.
For all the talk of security being a rapidly changing field, it's only true in the same sense that JavaScript frameworks are rapidly changing: lots of churn but not much progress.
"So I focussed on building tools as well, automated reverse engineering and static analysis tools."
Smart pivot. I mentioned that in my own comment. The tool building, if done right, can give you knowledge or code that can be reapplied to all kinds of use cases. Especially in static analysis, since there are a few techniques and constructions that keep showing up in tools whether it's systems, web, whatever. If you ever went for formal verification, the stuff you came up with might again be reusable on new problems.
Much higher payoff on tool building than just hunting and patching vulnerabilities.
What do you mean "no one really cares after it is patched"?
At that point, you get a chance to find a new attack. People really care. The government adds a new task order to your contract and then you get working on the problem.
No one really cares about what you have come up with before except as proof of your abilities.
You can't keep building on top of your work as you can in other parts of CS. Personally, this led to a desire to keep bugs private, since then at least you could privately chain things together in interesting ways.
It can be a very lucrative career, and it can be a lot of fun if you enjoy turning puzzles over in your mind (though this really applies more to tricky exploit dev), but I did not find it satisfying in the long term.
I transitioned from writing trading software to writing software to secure networks (with a brief startup co-founding stint). Here's my biased, subjective, and anecdotal opinion.
Security is just another knowledge domain. More than any other domain, security has a clear divide between understanding the math behind cryptography (primes, elliptic curves...), understanding how to implement correct and high-performance code (SSE4, AVX, SIMD...), and understanding how to use crypto implementations at a high level to create secure protocols (Diffie-Hellman, TLS...). On top of that, there's a new category emerging in the field, what I would call embedded/root-of-trust security, in the form of SGX and TPMs.
If you stay with the mind-set of "use TLS and PGP" that's very commendable but probably not very challenging. Going all the way up and down the crypto stack is extremely challenging. Squeezing every last cycle out of CPUs is very lasting (we'll always need crypto) and very transferable (mobile devices, clients, servers, embedded, companies of any size, companies of any domain...). It's also political and philosophical at times. Basically it's a hacker's dream to work in security.
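To make the top of that stack concrete, here's a minimal key-agreement sketch using X25519 from the Python `cryptography` package (the package choice, the HKDF parameters, and the names are illustrative assumptions, not something prescribed by the comment above):

```python
# Minimal Diffie-Hellman-style key agreement sketch using X25519.
# Assumes the third-party `cryptography` package; parameters are illustrative,
# not a vetted protocol design.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each party generates an ephemeral key pair.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# They exchange public keys and each computes the same shared secret.
alice_shared = alice_priv.exchange(bob_priv.public_key())
bob_shared = bob_priv.exchange(alice_priv.public_key())
assert alice_shared == bob_shared

# Derive a symmetric key from the raw shared secret rather than using it directly.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"demo handshake").derive(alice_shared)
print(key.hex())
```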
Very few people working in security do cryptography. In fact, although the author of this post is a trained cryptographer, virtually none of his professional work is cryptographic (he's a reverser).
I think one of the good things about security is that it exposes you to so much domain knowledge (like cryptographic algorithms and bare-metal programming).
But executing well in security (and particularly software security) is more about a specific kind of problem-solving mindset. The unique difference between security and other subfields of CS is that security provides you with an adversary; it's intrinsically competitive in a way that other kinds of computer science aren't.
I've been wondering why I worked in it for a different reason: the overall lack of demand for the real thing and the apathy of users. The industry faces a constant, uphill battle that gets steeper every year due to that apathy. The intellectual challenge is great in terms of having to learn every part of the stack down to the EM effects of transistors, then trying to outsmart a world full of brains with a design. You get moments you feel proud of.
Overall, though, it looks like it might have been more satisfying if I had stayed on the automatic programming track, doing things like program/hardware synthesis, static analysis, functional-language compilation, and so on, with a steady stream of useful inventions plus more users happy to adopt them. Must be nice for those fields. ;)
> ...might have been more satisfying if I stayed on the automatic programming track.
Hah, my job is sort of in that area now, and it is satisfying. Even if happy users are still a bit harder to come by than one would like... (Working at a large company helps: we can just hire people to use our stuff, instead of having to market it.)
I particularly like how you can fruitfully mix PL with pretty much any other field and get something interesting. The intersection with security has always seemed especially productive: you get a lot of bang for your buck with static analysis and automation.
You also have the advantage that "secure" is at least a partially objective criterion, sort of like performance. I personally care most about making systems that are more expressive but it's much harder to make a case for that than for "more secure".
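As a toy illustration of that bang-for-the-buck (my own sketch, not any tool mentioned in the thread): even a few lines over Python's `ast` module can flag a whole class of risky calls across a codebase.

```python
# Toy static-analysis pass: flag direct calls to eval/exec in Python source.
# A deliberately tiny sketch of the idea, not a real analysis tool.
import ast
import sys

SUSPICIOUS = {"eval", "exec"}

def find_suspicious_calls(source, filename="<string>"):
    """Yield (line, name) for every direct call to a suspicious builtin."""
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in SUSPICIOUS):
            yield node.lineno, node.func.id

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path) as f:
            for line, name in find_suspicious_calls(f.read(), path):
                print(f"{path}:{line}: call to {name}()")
```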
That's a very insightful take. When you say "apathy of users" are you speaking of "higher level" users like CIOs and network admins or just the end users / average users in the network or both?
It may be a truism or cliché, but it seems correct from my perspective as a "devops type": security is a game stacked in favor of attackers due to the asymmetry between attack and defense.
"When you say "apathy of users" are you speaking of "higher level" users like CIOs and network admins or just the end users / average users in the network or both?"
All of them. There were secure computers as far back as Burroughs, which was successful.
The drivers of most of the computing industry were more features and speed at the lowest cost. Quality or security meant less of that, so the tradeoff began there. By about the '80s, we mostly knew how to secure computers from specs down to the CPU. Market demand was so non-existent that almost nobody was producing them. Per one of INFOSEC's inventors, Roger Schell, there was some demand from CIOs (?!), but they believed the IT industry intentionally left bugs in products to sell them fixes & would never sell them bulletproof stuff. So, maybe they weren't dumb. ;) By the '90s, the DOD's Computer Security Initiative, which promised to buy only secure products for certain uses, plus the TCSEC criteria for building them, led to a number being on the market. Almost nobody in the DOD or the regular market bought them. Rinse and repeat, with only a few niches doing higher assurance at high unit prices due to low volume: safety-critical like aerospace, a few companies selling to the military, the TEMPEST industry, HSM vendors, and some smartcards. Smartcards being the exception to low volume & high price. Even most on that list have been lowering the amount of assurance & adding risky features due to market demand.
Now, let's look at the user side. The vast majority of the time, when users get to choose between a secure/private or insecure/surveillance-oriented product, they will choose the latter. Just look at who's dominating in messaging, storage, thin clients, remote access, calendars, document formats, etc. We almost got an exception in browsers with Chrome, based on the OP Web Browser, which was secure, but they watered down the security because they knew the demand side wanted blazing fast over kind-of-fast but secure. There are now privacy-oriented, easy-to-use apps for various things on smartphones for $1-10, and desktop stuff that's free or even $5 a month for critical functions and still easy to use. Almost nobody uses these, even when they get a lot of press.
So, it seems demand for actual security is almost non-existent even when it gets the main function done, performs acceptably, is plug-and-play, and is inexpensive to free. I mean, if they don't care at that point, what can you do? I think it's a fatal flaw of human nature in the way people's minds make tradeoffs. Now, I encourage people to instead get into established companies in senior roles or do startups where the product is good, people use the heck out of it, it stays competitive, and you just bake security into it. Or just avoid INFOSEC altogether for something not set up to fail. :)
"Perhaps automation will change that? :)"
I had some hopes for that. There were tools like the 001 Toolkit by Hamilton, various 4GLs, logic programming, and so on that basically let you specify the problem while the tool did everything else. One could do something like that which embeds good security into it; the Opa language does that for web apps, as an example. Additionally, the automated analysis tools for code with or without annotations are getting really good. They're at a point where the user does almost no work. They also get almost no adoption, even by companies that "care about quality." ;) Anyway, machine learning that studied tons of annotated codebases to learn what annotations go with what code patterns might result in tools that correctly annotate new code & just run in the background of the build process. There's potential there, but manual effort will still be required due to false positives/negatives and new domain logic.
The problem with selling to consumers is simple: it's hard to achieve credibility. On the one hand, anti-virus software, which we all bought, sucked. So we don't have much trust in security vendors.
On the other, hearing about the crazy stuff hackers do, like Stuxnet, or the more trivial daily privacy news, makes people think the task is impossible. And even for what is possible, it's hard to quantify the benefits.
Combine that with network effects, the marketing power of free apps and default apps, and the complexity of it all, and it's natural that most users don't have the time/money to spend on this.
> 1. Original thinkers
> 2. Tolerance of non-conformism and diverse educational backgrounds
> 3. Intellectual honesty
This also sounds a lot like the overall SRE org at Google. I've never worked with a more no-nonsense, brutally honest group who are open to everybody's contributions.
Incidentally, this isn't the first time I've seen this connection between security and (Google) SRE. This USENIX talk covers the intersection between "privacy engineering" and SRE: https://www.youtube.com/watch?v=Jx2y2yi0rZc
Except that Google is fairly infamous for having educational background as one of its primary hiring criteria. There are theories that close-knit groups tend to overestimate their own diversity since they are more sensitive to differences among themselves, e.g. https://en.wikipedia.org/wiki/Out-group_homogeneity
Like in many cases, perception lags reality. Google was already starting to broaden out past Ivy League candidates when I was hired (2008), and it's been almost 8 years since then. Even if they wanted to, they couldn't source enough candidates from just the top schools to fill the open positions; they're at 66,000 employees now.
SRE actually seemed one of the more educationally diverse organizations while I was there - it's a field where skills are easily measured and passion for the job you do tends to result in better performance than educational background. Search Quality was still overweight Stanford/Berkeley/MIT/IIT/etc at the time, as was Research and X. Chrome and Apps had a wide variety of backgrounds, including a number of really good people without college degrees. Ironically, the departments that seemed to have the biggest Ivy-League bias were AdSales and HR; when there are few objective ways of judging candidates, more weight gets placed on educational pedigree.
In my experience being hired at Google without a degree, and later talking to folks involved in hiring, education is not a huge factor in most cases these days. If you're at the start of your career and don't have any significant experience then it can make a big difference, but otherwise it's usually more of a supporting factor.
My point wasn't necessarily that you can't get hired at Google, but that it's fairly well documented that Google, during the first decade or so of its existence, explicitly favored candidates from elite schools. That culture isn't something you just change overnight. I put no judgement on that fact, just that it's unlikely to produce relatively more diversity, non-conformity, or original thinking compared to other hiring methodologies.
Edit: From Eric Schmidt himself:
> A few of the rules we had were: we didn’t want to hire your friends, we didn’t want to hire people from “lesser universities,” we only wanted to hire people with very high GPA’s. The constant problem was somebody was a good employee, had someone they had worked with, who was very loyal, but not from a great university, did not have a high GPA. We would not hire these people. It was controversial. We relaxed it a bit now, but the fact of the matter is it got us to the point of where we are today. They got us these intelligent generalists from top Universities.
> "Even though the actress Gwyneth Paltrow had created a best-selling cookbook and popular lifestyle blog, Mayer, who habitually asked deputies where they attended college, balked at hiring her as a contributing editor for Yahoo Food. According to one executive, Mayer disapproved of the fact that Paltrow did not graduate college."
The fact of the matter is it got us to the point of where we are today.
And -- how can you possibly know that, Eric? Are you saying because B follows A, that A caused B to happen? Maybe the early Google succeeded despite the now-discredited filtering processes it used in its early years.
According to one executive, Mayer disapproved of the fact that Paltrow did not graduate college.
But this quote is just exquisite. Presumably she filters picks for her bookshelf (including, most especially, cooking and lifestyle books) based on a hard floor for the authors' educational backgrounds, also. And of course the movies she watches, with respect to the pedigrees of their actors, likewise.
I described what I do for a living to him (reading code for subtle mistakes), and he said "that sounds like one of the worst imaginable jobs ever". He is a builder.
I find great interest in finding bugs. Writing code is actually an easier job: it is well defined with inputs, outputs and your keyboard in between.
Human error, in contrast, could be anything. Someone made a typo. Someone used a compiler that had a bug. Or the CPU had a bug. Or they didn't ship the product right. Or they wrote the password on a postit. Human error is unbounded, which is what makes it so intellectually challenging and interesting to find.
> Predictions about what is "lasting" are very difficult to make :-)
It's kind of sad when you realize that a lot of the code you write will eventually be rewritten or lapse out of use, or be replaced by something better.
It's very hard to find truly "lasting" work in this world. Maybe research?
>It's kind of sad when you realize that a lot of the code you write will eventually be rewritten or lapse out of use
It's funny, I actually get sadder when I find out code I wrote a long time ago is still in use. The deployment tool I wrote for reddit ended up lasting six or seven years. Some of the stuff I wrote at eBay is still in use today.
I don't understand why that's sad at all. There's something to be said for code that works well enough that it's needed for a long period of time without being removed or replaced.
We don't pine for bridges that need to be torn down every few years.
If my code shows how less competent I was in the past, then I will definitely want to rewrite it. Unfortunately, I've known some people who can't touch code other people have written (or have touched), even if that "other person" was themselves 6 months ago, without doing a large unnecessary rewrite, invoking a crazy combination of lost features and freshly reinvented wheels.
> the code you write will eventually be rewritten or lapse out of use, or be replaced by something better.
Yep. Try using the websites you've built as a resume.
It's even worse when all people look at is the UX; if you're not a UX person, how do you even show how good your code is, how maintainable it is, etc.?
I realised this to my detriment when my last employment came to a natural conclusion. I have no degree and suddenly no portfolio of active production code - all I had was one reference from a guy I built a website for, which now doesn't look very good since the new guy butchered it.
I invested in becoming a certified AutoCAD operator instead and never looked back.
Code itself tells a story. If the message is clear it can be built upon later by making it better based on new learnings, or new product needs. If it is a messy piece of crap it is better to be forgotten.
From the point of view of a frontend dev, things aren't much different:
Security guy: you accumulate lots of experience over the years (where most errors happen, common culprits), but you _know_ the actual application of these ideas will always be different over time.
Frontend guy: you gain lots of experience over time about common architectures, common best practices, common tooling, and you _hope_ that the knowledge you accumulated will stay relevant over the following years, but in reality, except for some basic concepts, everything changes. :)
I really liked his post. He pointed out some deep issues like "intellectual honesty" which I've been concerned about too (yeah, even in math), but I don't think the security sector is immune to these.
Your title insinuates that security is not lasting... Computer security has been around for decades, pretty much since the inception of the internet, and it is not going to go away. So what do you mean, security is not lasting?
The lifetime of exploits/defenses (in an ever-evolving threat landscape) is likely to be shorter than the lifetime of widely adopted application software.
It's true the landscape changes rapidly, in terms of specific bugs found in specific pieces of code, but that is the same reason security never stagnates and will always be needed.
He's not addressing the question of whether security will always be needed. He's addressing the fact that the work product of a software developer working seriously in security will have a half-life of just a few years, while the work product of a generalist developer might live for decades.
Dullien is a software developer; he's the author of BinDiff, the first popular binary diffing tool, and of a suite of other reversing tools like it (he's been at Google for several years now).
It seems like the author approached crypto (and security) from the perspective of breaking things. Sure, that's a fundamental part of both fields, but the harder (and, IMO, more fun) part is building secure systems.
That's the entire reason you have proofs of security in crypto: to show that breaks are unlikely. I don't quite see why crypto should be "blindly result-oriented", or whether that's even a good thing; when public-key crypto was invented it was wildly impractical.
The vast majority of crypto is usually not about breaks.
There's no discipline in computer science that is more about breaks than cryptography. The whole field is refined down to the tension between performance and vulnerabilities. What you can do in a cryptosystem is defined by breaks; progress in the field is punctuated by breaks.
For what it's worth: Thomas Dullien is a cryptographer by training; he was one of Dobbertin's doctoral students.
Perhaps you refer to a specific part of applied crypto; most crypto, from what I've seen, is not focused on breaks of the cryptanalytic kind (as in "oh, let's try to break SHA256!").
Note that I'm speaking from an academic perspective.
No, I'm referring to theoretical cryptography as well.
One example: the CAESAR competition that is trying to select a portfolio of next-generation AEADs. Ciphers and constructions are entered, and submissions are sorted by the success of attacks.
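For anyone unfamiliar with the term, an AEAD encrypts and authenticates in a single construction; here's a minimal usage sketch with ChaCha20-Poly1305 from the Python `cryptography` package (used purely as an illustrative stand-in; it is not a CAESAR entry):

```python
# Minimal AEAD sketch: encrypt and authenticate in one call.
# ChaCha20-Poly1305 stands in for "an AEAD" here; it is not a CAESAR candidate.
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.exceptions import InvalidTag

key = ChaCha20Poly1305.generate_key()
aead = ChaCha20Poly1305(key)

nonce = os.urandom(12)                      # must never repeat under the same key
aad = b"header: visible but authenticated"  # associated data, not encrypted
ciphertext = aead.encrypt(nonce, b"secret payload", aad)

# Decryption verifies the authentication tag; any tampering raises InvalidTag.
assert aead.decrypt(nonce, ciphertext, aad) == b"secret payload"
try:
    aead.decrypt(nonce, ciphertext, b"tampered header")
except InvalidTag:
    print("tampering detected")
```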
Another example: databases and format-preserving encryption. It might look like Order Revealing Encryption is an exercise in building things, but it's also a response to vulnerabilities in Order Preserving Encryption. One team develops something like cryptdb; another tears up cryptdb and uses its findings to design Cipherbase (or whatever).
Unlike in generalist software development, most new things in crypto are responses to breaks, often by the authors of the breaks.
I can agree with the assessment of encrypted DBs and SSE schemes, but I don't feel like that's the case for a huge portion of theoretical-ish crypto. Sure, I'm not saying that people don't break things; I just feel like that's not a primary focus of most academic cryptographers, and taking a look at the papers in the last few CRYPTOs and EUROCRYPTs only reinforces this feeling.
One exception to that is perhaps multilinear maps, for which candidates are proposed and broken all the time, but once an assumption becomes battle-tested, I feel the focus shifts to building things out of that assumption.
I'm mostly with tptacek and ciphercoder on this, but they're not quite right. I go through tons of papers on INFOSEC research in a year. Many of the ones I see at most points of the year are introducing crypto schemes or breaking them. I'll add that there are a few other categories that require little breaker ability:
1. Verification of previously described properties of algorithms or protocols. This is basically math where key constraints could be done checklist style.
2. Building tools to support cryptography research or development. Cryptol is a great example of this.
3. Speed improvements on crypto algorithms that preserve mathematical properties. Analyzing against common flaws usually covers problems you could have here.
4. Usable crypto. There's some breaking that goes into this, but it's mostly presentation stuff. Can often use battle-tested tools that come with tips for proper usage.
These come to mind as not directly related to inventing or breaking new schemes as far as the crypto itself goes. I wouldn't need someone with the security mindset to do these things so much as give them some training on what to do and not to do. I think you were referring to No. 1 on my list.
ePrint is not necessarily the best indicator for such things, since essentially anybody can post there, but even so, from some random sampling it seems like a clear majority of papers are constructive and not about attacks.
Let's try this, instead: can you name some important fundamental theoretical work in cryptography that isn't defined by attacks? Obviously, that's a subjective judgement. Whatever you name though, we can just look and see how much of it is defined by attacker capabilities.
The very foundational paper for zero knowledge did not have much to do with attacks. Similarly for many many subsequent works.
Similarly for MPC. Most work in MPC is about improving efficiency or proving security against stronger classes of adversaries; it is not about finding breaks of these protocols.
As I said earlier, the majority of crypto research is in the following scenario: pinning down a battle-tested assumption, then using that assumption in a provably secure manner to construct new systems. Zero knowledge, MPC, ORAM, etc. all fall within this paradigm.
Constructing new assumptions is obviously a task where attacks are necessary, but new assumptions come along every 5-7 years; the intervening period is dominated by this black box use of assumptions.
EDIT: see this [1] list of the most-cited security and crypto papers; very few attack papers.
> The very foundational paper for zero knowledge did not have much to do with attacks. Similarly for many many subsequent works.
Dave designs a ZKP protocol for a specific application.
Jane finds a side-channel in the protocol that leaks some data about the secret being verified and publishes it.
Is it still a zero knowledge proof?
> Similarly for MPC. Most work in MPC is about improving efficiency or proving security against stronger classes of adversaries; it is not in finding breaks of these protocols.
How do you prove security against stronger classes of adversaries without finding breaks in existing protocols, or at least trying to do so?
The property of being zero knowledge is entirely separate from the properties of the implementation; side channels were simply absent from the original model. Either way, for every zero-knowledge protocol, side channels are relevant only on the prover's side; verification cannot allow anyone to learn any information about the witness underlying the proof (otherwise the proof is not zero knowledge).
Sure, attacks can motivate some of the theory that is created, but most theory work only very loosely uses these attacks as motivation. A number of strengthenings of the attack model are simply questions of curiosity: if we strengthen the threat model in this way, what guarantees do we have? Very rarely is it motivated by an actual attack.
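For concreteness, the protocol-level object being argued about looks something like this toy Schnorr-style proof of knowledge of a discrete log (my own illustrative sketch with deliberately tiny parameters; nothing this small is secure, and real systems use large groups and non-interactive variants):

```python
# Toy Schnorr-style zero-knowledge identification with tiny parameters.
# Illustrative only: the ZK property lives in this protocol math, separate
# from whatever side channels a concrete implementation might add.
import secrets

p, q, g = 23, 11, 2                 # g generates a subgroup of prime order q mod p

x = secrets.randbelow(q - 1) + 1    # prover's secret
y = pow(g, x, p)                    # public key: y = g^x mod p

# One round of the interactive protocol.
r = secrets.randbelow(q)            # prover: random nonce
t = pow(g, r, p)                    # prover -> verifier: commitment
c = secrets.randbelow(q)            # verifier -> prover: random challenge
s = (r + c * x) % q                 # prover -> verifier: response

# Verifier accepts iff g^s == t * y^c (mod p), learning nothing about x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("accepted")
```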
At this point, I'm just going to say I take your point but stand by my own. Cryptography is unique among the disciplines we deal with in information security in being motivated entirely by attacks.
In software engineering, we're solving problems we've known how to solve since the 1970s. The challenge is solving them as programming environments and user interfaces get more flexible and performant, and as the amount of code we have to secure increases at a rate that outpaces our ability to verify it.
That's not the case with cryptography. In fact, if you include constructions along with underlying algorithms, it's difficult (I can't do it off the top of my head) to think of anything we use today that was secure in the 1970s.
Most cryptography we use today wasn't widely available as recently as the 1990s --- not because nobody had invented the underlying basic concepts, but because the constructions that animated those concepts were terribly insecure.
But that's what I'm saying; there is lots of cryptography that is entirely unmotivated by attacks, like:
Work on determining the size of complexity classes like CZK and SZK;
Determining lower bounds of round complexity of MPC in various models;
Determining the kinds of assumptions different primitives can be based upon;
Determining the complexity classes supported by different delegation of computation schemes, and the assumptions these can be based on;
Determining whether you can obtain particular primitives out of others, such as OWPs from OWFs in a non blackbox manner, seeing what iO can be used for, etc;
Improving asymptotic efficiency of various primitives, like NIZKs (see SNARKs).
I think the difference in our viewpoints comes from the communities we're in; I'm in the theoretical, complexity-theory-rooted side of things, whereas you appear to be on the more applied side of crypto, given your emphasis on things that are used in practice.
That's really my answer. Maybe I'm setting myself up for career failure, but I just enjoy having some Russian hacker lob a new puzzle at me every couple of weeks.
I definitely understand the sentiment. In high school and university, I did my share of "informal penetration testing", and always loved it. But when I went to look for a job, I came to the "building vs breaking" crossroads, and went with building. Fast forward a few years, and I found a job building security tools. "Por que no los dos?"
Uhm, no. If an AI writes software that contains mistakes, the errors would still be as important as they are now with human-written software. When a programmer makes a mistake, we usually don't think about intentions, but that's because we usually assume the mistake is unintentional. With AI we can ask ourselves whether the mistake is intentional or not, but the mistake would still be important.
I think it's like garbage collectors: they don't build anything, they keep the rats in check. We need people in maintenance jobs: repainting the road marks when they wear out, or finding the issues with the new version of a product.
I think it's a bit different than that. In security you are looking for weaknesses that other humans could take advantage of. This adversarial aspect is generally not present in maintenance.
I think the answer to the question went a bit off-tangent. IMO, what the original questioner had in mind was, "Why do you (or some people) take less challenging and/or mundane computer jobs over something that seems more "exciting" or something that will make a lasting impact?" And "Security" in this question was just an example of a mundane job that most people have in mind.
Sure, security can be/and is very challenging, exciting, and lasting, but that's beside the point of the question.
The question was not whether security is challenging to the brains or "lasting", but more like "why do some programmers settle for less challenging jobs when they can do a lot more with their brains?"
I don't think security is any less challenging, more mundane, or smaller impact than other stuff. I personally find it more exciting and enjoyable. If I enjoy something, I am more likely to be more productive and "do more with my brain" than if I don't enjoy something.