How we protect our most sensitive secrets from the most determined attackers (monzo.com)
280 points by intunderflow on Nov 18, 2021 | 187 comments



What does "the most determined attackers" mean in this context?

Are we talking the real most determined attackers like the US or China who can easily deploy $10B (i.e. a team of ~3,000 people fulltime for 3 years) for a strategic advantage?

Or $1B (i.e. ~300 people for 3 years) which is within reach of every government, international criminal organization, and thousands of multinationals?

Or $100M (i.e. ~30 people for 3 years) which is now just a line item for those organizations?

Or $10M (i.e. ~3 people for 3 years) which is 10x better than the "gold standard" used by every other bank in the world, but still profitable for small criminal outfits doing ransomware attacks?

Are there any legally binding marketing statements made by any associated security executive on where they sit in these four orders of magnitude of "most determined attackers"?

It is frankly absurd that we continue to allow statements like "secure", "reasonably secure", "most determined", or whatever without expecting some degree of quantification within 4(!) orders of magnitude. If you cannot quantify or demarcate an accurate lower bound within 4 orders of magnitude, you probably have no idea what you are doing.


If you have >$100M and >3 years to burn, WTF are you doing robbing a bank?


You're probably not robbing it.

You're probably in charge of a government-backed digital currency and out to make alternatives look unreliable.


Or you're after the data. Think about how much information there is in having everybody's financial records.

Industrial espionage. Blackmail. Unmasking informants or undercover law enforcement.

It's kind of a disaster that they even collect this stuff.


That's why I try to use cash whenever I can. The less data I generate, the better.


Probably for blackmail of some sort.


When it comes to a bank (especially one mostly centered on the UK; Monzo still has very limited international capability), I don't think state-sponsored attackers are a major problem. Those rarely need the money, and as I mentioned previously Monzo doesn't routinely do international/SWIFT transfers (if they can at all), so getting the money out would be tricky.


They’re still a target if an individual they’re targeting (to harass, find or discredit) banks there.

They’re still a target for someone wanting to cause disruption or engage in hybrid warfare - imagine a simultaneous hack of all the neobanks, possibly followed by a few of the traditional banks a week later for maximum impact?

They’re still a target for someone wanting to pivot into other organisations they might be connected to, such as card networks or regulators.


It's not much fun when your risk model actually, realistically includes nation-state-level hacking attempts to disrupt or pervert your services.


> ... I don't think state-sponsored attackers are a major problem ...

The small and new nature of Monzo makes it an attractive target.

An attacker may well use a slightly smaller, slightly less established business as a pivot point into other, more established businesses. Since this is a 'bank' it will have some connection with other institutions. It does not need to be SWIFT or similar to be exploited.

> ... don't think state-sponsored attackers are a major problem. Those rarely need the money ...

I disagree with this too. State-sponsored APTs may very well pursue financial gain, both to enrich the attacking country and to throw off investigations (e.g. DPRK, Iran).

Just my opinion.


North Korea would like a word with you.


In a world where a bank robbery is called “identity theft,” why would they care? All they need is security theater.



> $10M (i.e. ~3 people for 3 years)

Where can I apply?


I don’t think every blog post or press release needs to be explicit and quantify to that level. It distracts from the main message to go down some rabbit hole of how that $12.67B over 3.67 years was calculated.


If you're interested in how other folks handle super important private keys, I highly suggest reading the IANA documentation on the DNSSEC Root Zone and how they manage that:

https://www.iana.org/dnssec/archive

https://www.iana.org/dnssec/ceremonies

https://www.youtube.com/channel/UChND9hEeJQjtLDFZ-m8U47A



CACert had detailed documentation on their ceremonies. They were radically transparent about the procedures around their roots. I was sad when Mozilla pushed them out of the Root Programme.

I'm not blaming Moz; CACert were breaking new ground in trying to build a transparent process for a member-run CA. I believe they failed to find a suitable auditor that didn't cost a mint and could work within the CACert transparency constraints.

Then LetsEncrypt came along... and ate their breakfast.


The banking establishment should take note. They often over-complicate things and rely on "security through obscurity". Monzo opening up their security architecture and even some of their source code puts them to shame. I really like the air-gapped notebook, the custom OS on CD-R, and their ceremonies. It looks like they got very competent people to think it through properly.

Source: formerly at one of the largest banks.


I hope air gapping will stay a niche. If it goes mainstream there will be more attacks on it...


There are some interesting attacks based on working out the sound of each key on a keyboard and then guessing keystrokes - https://github.com/ggerganov/kbd-audio


“..it cannot connect to any wireless network, so we don’t have to worry about it doing so maliciously. We’ve also taken other measures to frustrate attackers, for example we have physically removed the hard drive so there is no way to persist data on the laptop itself”

Hopefully by physically removing where possible? Also consider microphone and speakers if you’ve not already..


Can a white noise machine help there? https://ggerganov.github.io/jekyll/update/2018/11/30/keytap-... says the method is sensitive to noise.


Blasting some Meshuggah should do it


Meshuggah is a fixed pattern that can be subtracted. Ideally the noise should be unpredictable.


True, I guess they'd need to perform it live - as part of the security ceremony


Preferably in ceremonial cloaks


Airgapped machines ought to be housed in an airgapped room, equivalent to a SCIF.


Funny you mention this, because the SCIF tech specs from DNI.gov are cited a lot in our physical security envelope docs. Interesting but very terse read; I wouldn't read the whole thing for fun: https://www.dni.gov/files/Governance/IC-Tech-Specs-for-Const...

I'd skip to Chapter 2: Risk Management and Chapter 3: Fixed Facility SCIF Construction


You could let the system ask for the characters of the password in random order. It would be a bit of a hassle, but it gives some added protection for largish passwords.
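A minimal sketch of the idea (pure Python; note the inherent trade-off that the verifier has to store the password recoverably, not as a one-way hash):

    import getpass
    import secrets

    def partial_password_challenge(password: str, k: int = 3) -> bool:
        """Ask for k characters of the password at randomly chosen positions."""
        positions = sorted(secrets.SystemRandom().sample(range(len(password)), k))
        for pos in positions:
            answer = getpass.getpass(f"Character {pos + 1} of your password: ")
            if answer != password[pos]:
                return False
        return True

An observer of one session learns only k characters, and the next session asks for different positions.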


> We have between 6 and 12 Monzonauts who are keyholders, a certain number of them are necessary to unlock the Hardware Security Module (this is known as the quorum, I’m going to keep the exact number secret ).

I mean it's seven, it's literally written in the attached PDF describing the ceremony procedure.


Whoa, look at Hackerman over here! The Governor of Missouri would like to have a word with you.


or is it twelve??? I edited this document before publishing it after all :P

for all you know, I rolled two dice to decide how many keyholders I'd put in each part (which I would totally never do, would I?)


What happens if during the ceremony, someone is able to destroy all the cards at once (e.g. the building is attacked)?


I find it sort of ironic how the operating system they use, COEN[1], instructs people to turn off SELinux to build the image correctly...

[1] https://github.com/iana-org/coen


To be fair, that's for building the image. The image itself is Debian, which is irrelevant from an SELinux point of view. But I agree, you want your build system to be secure also. Not sure why they give this advice.


Seems like there'd be less need for SELinux on a network like theirs:

"Our entire system is air-gapped, which means it is physically isolated from the outside world and has no way to connect to the internet."


If your system is important enough to be air gapped, then it's important enough to have tight security on those air gapped computers. Air gapping is just one layer of security, it's not impossible to bridge the gap.


What tangible benefits would SELinux bring to an airgapped network?


Why would an airgapped system face any different security threats just because the intruder came in on a USB stick, phone, malicious application, compromised hardware, etc?


Great writeup intunderflow. The question I have, though, is how this doesn't just "push" the security problem one level down.

Point being that yes, you need to ensure that your root certs are super secure and that you have a full documented, videotaped chain-of-custody. But then at some point you need to use that root cert to sign some other cert that DOES live in your infrastructure and is used to sign requests. How do you control your systems at the point where you need to use your cert (or a cert chain) to sign requests that by their nature must be connected to the Internet?


This will probably come in a future blog post (and also I don't lead this part of secret management so I don't have all the context), but our intermediates mostly live in nicely locked-down machines in AWS (avoiding exact product names because I don't want to tread on other people's toes)

There's definitely a much bigger risk with an intermediate on the internet; however, we at least have the mitigation of being able to revoke it if it goes wonky. The main priorities I see in this space are: (1) minimising the chance of a compromise as much as possible (very clear and well-defined communication channels, so that as few things as possible are open to even being poked at for vulns over the network) and (2) being ready to react if there is a compromise (revoke the cert and recover)


If the intermediate is compromised you'll need to do a new key ceremony with the root to sign a new CRL, correct? And I'm assuming every component is configured to explicitly check CRLs and fail if those are unavailable for whatever reason, right? How does it ensure that the CRL it's getting is the latest CRL including the now-revoked certificate, and not an earlier one that's being replayed?
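To make the replay concern concrete, this is roughly the check I'd expect on the relying-party side (a sketch; signature_valid, crl_number and next_update would come from the parsed CRL - the CRL Number extension is required to be monotonically increasing - and highest_seen has to be persisted across fetches):

    from datetime import datetime, timezone

    def accept_crl(signature_valid: bool, crl_number: int,
                   next_update: datetime, highest_seen: int) -> bool:
        """Fail-closed CRL acceptance with basic replay protection."""
        now = datetime.now(timezone.utc)
        if not signature_valid:
            return False   # must verify against the issuing CA's key
        if next_update <= now:
            return False   # stale CRL: fail closed rather than limp on
        if crl_number < highest_seen:
            return False   # older than one already seen: likely a replay
        return True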


PKI revocation as a technology is definitely not the greatest thing in the world; the best protocol at the moment is OCSP with a specifically configured Validation Authority. But even then, quite a lot of OCSP implementations in modern software are configured to continue if they can't make a connection.

Fortunately, because the usage of our CAs is tightly scoped to Monzo and our partners, we can reach out and explicitly ban a certificate from our machines (and tell partners to do the same) without too much trouble (as compared to public CAs, which have no chance of being able to do this), in addition to following normal PKI revocation procedures.


> If the intermediate is compromised you'll need to do a new key ceremony with the root to sign a new CRL, correct?

You can use Validation Authorities to avoid needing the full on root that can issue new certs just to revoke an existing one. https://en.wikipedia.org/wiki/Validation_authority


Thanks very much for the response and the writeup, really appreciate it!


Well, in that scenario, creating intermediate certificates offline, deploying them where needed, securing them with less effort than the root but more effort than the rest, and issuing the actual end-entity certificates via the intermediates should be a reasonable approach.


I've worked at 3 major banks in my country. All of this seems so overthought and overengineered, and it doesn't really answer any real fraud risk. To me, personally, it looks more like a marketing move for geeks than a real necessity.


You should consider the possibility that your perceptions are the result of your not understanding the problem that these procedures are designed to solve rather than the people who instituted them being idiots.


What OP is trying to say is that a disproportionate amount of effort is spent on a rather narrow attack vector and it’s still not 100% secure anyway. The people creating these procedures had goals but good UX was not one of them and definitely not the highest priority.


Yes, I get that. But a compromised root key is essentially synonymous with the end of your entire business (to say nothing of destroying the finances of many if not all of your customers) so the effort being expended doesn't seem disproportionate to me.

If you feel differently, by all means go do business with a bank that doesn't waste all this effort.


Most banks use HSMs to store secrets. This isn’t novel… if you buy an HSM, the manuals describe a less dramatic version of this process. Cloud providers offer them as a service as well.

The inclusion of NSA stuff got my eyes rolling. If you’re worried about NSA implants, you probably should be thinking about the provenance of the HSM, smart cards, etc.


>> But a compromised root key is essentially synonymous with the end of your entire business

No, it is not. That is the point. There are many levels of protection, many checks and validations, and even if you pass all of them, a lot of things can still be reversed pretty easily. Banks are not cryptocurrency clowns with a private key obsession.


> destroying the finances of many if not all of your customers

Only above the non-trivial limit guaranteed for UK regulated banks.


It wouldn't destroy all their customers' finances; they would just back out the transactions, restore their database and get a new root key.


I don't think you fully appreciate the damage that someone with the root key could do. It isn't limited to initiating fraudulent transactions (which, BTW, would be indistinguishable from legitimate ones, so you'd have to sort out the mess manually).


Have you ever worked at a bank? I mean, I believe you know something about something, but from what you say, it seems like you have never had a word with a bank's InfoSec.

Root access to servers, secret keys or passwords is not enough on its own. There are many systems validating each other. Even if you got access to one system, even if you controlled it for some time, you could not really permanently transfer any significant amount of money anywhere. Fraud detection and AML systems will trigger an alert. Behavioral analysis systems, which are a must-have in the industry, will trigger an alert, to say nothing of many other systems I really do not want to discuss in public.


No, I have never worked at a bank. But I once had half a million dollars go missing because of a transcription error on a wire transfer. It took two weeks to locate the money and get it back to me. And that was without a malicious actor involved, just a stupid clerical error. So I'm pretty confident that if I had a bank's root key I could wreak havoc. That havoc may not take the form of draining everyone's account, but it would be havoc nonetheless.


Oh, I get it. I've represented hacked companies and sat down with FBI agents in tedious but educational sessions trying to find hackers. It's a real headache and ultimately fruitless.

The problem with hacking a bank is that all they can really do is try to screw up the data or lock it down. Even people who work at the bank can't figure out how to move money around, much less where to wire it to profit anonymously.


I never called anyone an idiot, ok? :-) One can be very smart, and still not effective.


"Overthought and overengineered" describes Monzo very well. Their business model isn't really banking but coming up with ways to extract money from investors, so this kind of overengineering is a strong business move for them.


I have my gripes with Monzo's business model and over-engineering (though that applies to most VC-backed startups nowadays), but security is quite important and this seems like a sane strategy for managing root certificates.


ditto

it's someone who found the video of the DNSSEC key signing ceremony and decided to implement it at their company


> (sort of like a private password)

I think this is an unfortunate framing. The property of public key cryptography is that, unlike secrets such as passwords, only one party has the private key. This instantly eliminates an important class of potential problems.

And unlike with something sophisticated like an asymmetric PAKE, I think it is relatively practical to explain this benefit to end users: Monzo's counter-parties can't lose a private key they don't have, and can't produce "Monzo" signatures themselves. If there are signatures that shouldn't exist, Monzo knows the problem necessarily must be with their systems.

British banks rely far too much on trust. A symptom of this is that periodically a big merchant (e.g. a supermarket) will accidentally run a transaction feed twice (e.g. all card transactions at Tesco on Thursday happen twice). The bank could insist (as it does for its own personal account holders) that these transactions have unique IDs authenticated by the card, which would mean the duplicates get rejected; but it trusts these huge merchant customers, so all the transactions they say occurred are just assumed to be fine, and the result is that the banks eat a bunch of customer anger and costs. It was remarkable to me that this happened in 1995. It's outrageous that it still happens today, but it does.
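The fix is boring engineering: treat the card-authenticated transaction ID as an idempotency key and reject duplicates at the database layer. A toy sketch (names invented):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE tx (tx_id TEXT PRIMARY KEY, amount_pence INTEGER)")

    def post_transaction(tx_id: str, amount_pence: int) -> bool:
        """Apply a card transaction at most once, keyed on its unique ID."""
        cur = db.execute(
            "INSERT OR IGNORE INTO tx (tx_id, amount_pence) VALUES (?, ?)",
            (tx_id, amount_pence),
        )
        return cur.rowcount == 1   # 0 means a duplicate feed row, silently dropped

    assert post_transaction("tesco-2021-11-18-000042", 1299)
    assert not post_transaction("tesco-2021-11-18-000042", 1299)  # feed ran twice

Run the whole Thursday feed through twice and the second pass is a no-op.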


From having worked in UK FS in the past this reminds me of a more modern version of how PKI key signing ceremonies used to happen :)

It's an interesting contrast with how carefully root CA keys are handled in this kind of setup, compared to things like Kubernetes clusters, where you'll typically get 3+ root CA keys in the clear on the API server disks. I've seen them committed to configmaps (e.g. in RKE's default setup), and even checked into GitHub repos...


Why would you put a private key in a configmap? K8s has an API for signing user CSRs with the master CAs.


For the case of RKE, which I mentioned, I think they do it to have the cluster config available, but no idea why they use a configmap rather than a secret (https://github.com/rancher/rke/issues/1024 is the relevant issue)


Simple question: what would be the mitigation against any security threats in the randomly bought off-the-shelf products? E.g. in case there is a general hardware exploit that isn't targeted at Monzo explicitly, but since you've bought that item (e.g. a router) there is a possible attack vector.


A few things:

- Since all the components we purchase are kept air-gapped, you'd need to already be on a machine in the air-gapped system (which isn't assembled and powered on except when we need to deal with key material) to exploit a vulnerability

- We're keeping what we trust straight out of a store as minimal as possible. For example, if we purchase a laptop we gut it of most of its components before it goes near our key material; laptops don't need batteries (or even CMOS batteries) to run basic live systems, so they're going out.

- Compromising one part of the system won't let you sneak private key material out on its own, by design. The part we've tried to make the hardest is exfiltration: the only material that leaves the air-gapped system leaves either as QR codes on a screen or on CD-Rs that we keep for years in a safe in their own tamper-evident bags. This is all part of an effort to make sneaking private material out (even if you had full control of the system) as difficult as possible to do without being detected.


Have you gone fully paranoid, and is your air-gapped system in a Faraday cage inside an anechoic chamber? Things like bus radio [1], coil whine and power fluctuations can be used (it has been shown) to exfiltrate data.

[1] https://github.com/fulldecent/system-bus-radio


Having worked on systems in the basements of TEMPEST-checked buildings, I can say that the attacker first needs to defeat the physical security measures and manage to get close to the devices near the servers storing, processing and transmitting the classified information. After that first breach, via personnel or a specific device such as a USB dongle used by a user, the software needs to infect the air-gapped computers and find its way to the servers. Afterwards, the attacker needs to be close to the facility to listen for the radio waves that managed to get through the fortified walls and Faraday cages. And if this scenario happens, there is a much bigger security problem than a system of malware and an AM radio.


I was more thinking along the lines of an insider close to the device :)


We haven't gone as far as a Faraday cage inside an anechoic chamber because we think the risk of these attacks is (luckily) not big enough (yet) to justify them. Eventually though? Maybe.

There's also projects to guess keystrokes based on sound which don't even require you to be on the host device https://news.ycombinator.com/item?id=29266783


In this case the private key is in the HSM, and the laptop has no access to any interesting information worth exfiltrating. The air gap is to prevent intrusion.


How do you secure physical access? Tamper evident bags and stuff are nice, but they'll only allow you to discover a breach after the fact, potentially days/weeks after the damage is done (said damage can be sneaky and just use the keys to extract sensitive data or introduce further malicious software into the infrastructure, which may remain there even after the keys themselves are rotated).

Totally understandable if you can't/don't want to answer obviously.


Everything lives in safes and those safes are covered by cameras, if you try to drill the safes you'll be found (or heard) a long time before you manage to break in


Unless someone figures out how to compromise the cameras and microphones. (Maybe I've watched too many Hollywood heist movies?)


I believe there is redundancy, both in the cameras being in multiple places and in the delivery mechanisms (local copy + remote stream etc). I'm inclined to believe that there are more mitigations that haven't been described in the post, which is plausible.


Hacking the closed-circuit cameras probably only works in a James Bond movie, since all you have to do is keep that circuit air-gapped to prevent it. I imagine that physical access to the cameras' control system is also guarded.


Beware that normal CD-Rs use an organic dye subject to rotting. Use either something like M-DISC or normal BD-R discs (HTL, with an inorganic phase-change recording layer).

I also suggest using at least dvdisaster (I suggest its RS03 codec in augmented-image mode) or equivalent (if you come across a proper competitor to dvdisaster RS03, please let me know).
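To see the underlying idea, here's a toy demonstration with the third-party reedsolo package (an assumption on my part - dvdisaster uses its own codecs and disc-level layout, this just shows parity data surviving corruption):

    # pip install reedsolo  -- purely illustrative, not what dvdisaster ships
    from reedsolo import RSCodec

    rsc = RSCodec(32)              # 32 parity bytes per 255-byte codeword,
                                   # correcting up to 16 corrupted bytes in each
    data = b"key ceremony artefacts ... " * 10
    encoded = bytearray(rsc.encode(data))

    for i in range(12):            # simulate disc rot: flip 12 bytes
        encoded[i * 7] ^= 0xFF

    # recent reedsolo versions return (message, message+ecc, errata positions)
    recovered = bytes(rsc.decode(bytes(encoded))[0])
    assert recovered == data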


I've heard about the rotting; we've got a plan in about a year or two to do a key ceremony to move stuff onto more permanent hardware we can retain for at least 20 years (our roots last 15)

What that hardware will be is still an open question, but probably BD-R HTL.

Also worth noting (I think it's in the script too from memory) we make at least 3 identical copies of each CD in case of CD failure.


Yeah, from my understanding the rot is literally a fungus/bacteria colony that eats the dye. If you store the 3 copies separately in sealed containers, regular (no worse than yearly) inspections should probably suffice. The damage is visible; if anything looks off, put it back and immediately organize a preemptive data rescue. It can be obvious discoloration (likely with a gradient) or discrete specks; the latter can also happen to BD-R HTL, where they are "just" delamination. That is thankfully a fairly slow progressive effect, from (AFAIK) the acrylic glue slowly hydrolyzing, and it can be counteracted by cold, constant-temperature, constant-low-humidity storage (an office cabinet with AC is better than a finished basement without AC).


> Each keyholder has a smart card which has been sealed in its own tamper evident bag.

presumably this is kept in a locked cabinet, in a locked basement room, behind a sign saying "BEWARE OF THE LEOPARD"


"For equipment that is easier to tamper with or where tampering could become a larger problem, we make unannounced visits at random to retail stores and purchase off-the-shelf products."

Admit it...y'all just want a good reason to go to MicroCenter, right?


As an ex-InfoSec engineer for 12 years, my only concern is they seem to have no process to recover from a defective or destroyed HSM. But other than that it's quite good (though I have only skimmed the blog post.)


We have multiple identical backup HSMs in multiple different sites. Any one of these + a keyholder quorum can recover the system. The entire safe room burning down is part of our possible disasters list.


Do you perform the same ceremonies for each backup system like described in the post?


Great!


At this point, I'd just use pencil and paper.


(Disclaimer: I'm the author) the thought of using pencil and paper to do elliptic curves bends my mind


It can be done by hand just like RSA.


I'm not paid enough to generate primes for RSA by hand :P



They're all too small :(


This gives me RSI


You can also use a deck of cards: https://en.wikipedia.org/wiki/Solitaire_(cipher)


Although Solitaire is an impressive literary device, since you could in fact build this and it is in fact good enough to serve the purpose it serves in the novel...

It's a symmetric algorithm, so this doesn't solve any problems Monzo actually has, whereas the elliptic curve public key crypto they use does

It's known to be cryptographically weak. If you write small messages by hand, the volume of ciphertext available to an adversary, even one with more technology, is far too small for the known attacks, so you're safe. But if you used this industrially it would get broken at scale.

As a comparison, RC4 has a biased keystream just like Solitaire, and it has been successfully attacked in practice since, unlike Solitaire, it was actually in use on the public Internet. The attack consumes a very large number of messages, transmitted over HTTPS in the course of hours, and so would not be at all practical for a system used with a few hand-ciphered messages.
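The bias is easy to demonstrate yourself. The classic Mantin-Shamir result is that RC4's second keystream byte is 0 with probability about 2/256 rather than 1/256:

    import os

    def rc4_keystream(key: bytes, n: int) -> bytes:
        S = list(range(256))                 # key-scheduling algorithm (KSA)
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        i = j = 0                            # pseudo-random generation (PRGA)
        out = bytearray()
        for _ in range(n):
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return bytes(out)

    trials = 100_000
    zeros = sum(rc4_keystream(os.urandom(16), 2)[1] == 0 for _ in range(trials))
    print(f"P(2nd byte == 0) = {zeros / trials:.4f}; unbiased would be {1/256:.4f}")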


intunderflow - What process did you have to go through to get the OK to write and publish this article? I'm very impressed that you've published this.


Kickoff was 18th of October after I indicated to someone internally that I wanted to write something for the blog about this program (not gonna name them in case they don't want to be named :P)

First draft (very rough) was 22nd October

Second draft (toned around a bit) was 5th November

Security review + a proper edit happened while I was on holiday in the week of 8th-12th November (lots of comments left for me). Basically nothing got removed - I think one image and about 2-4 lines of text, from memory

I came back to a bunch of comments (nearly all on the Monzo Tone of Voice https://monzo.com/tone-of-voice/), went through those, proof-read and then got a green light from:

- Engineering

- Marketing + Press

- Some other folks in Security (because of the content)

And then we moved this over from Google Docs where I was drafting it into our Blog system (Contentful), had a quick skim to make sure it read properly and then published :)


Off topic, but I love the suggestion in "tone of voice" on how to test whether a sentence is written in passive or active voice. Just add "... by monkeys" to the end of the sentence. If the sentence still makes sense, it's passive, so rewrite it.

e.g.

A decision has been made to close your account …by monkeys

vs

We decided to close your account ... by monkeys


I'm just wondering - is it safe to have it written in Google Docs, with all the previous versions remaining there?


Kudos to you and Monzo. It’s great you wanted to write and publish this, and it says a lot about Monzo that they let you do it.


There's a much more straightforward way to achieve multi-party security: require M-of-N signatures throughout the entire PKI. No single root CA, but M out of N root CAs required to produce a trusted signature for any given message or intermediate CA.

HSMs should only sign messages if a minimum number of authorized users request the message to be signed (by its hash), and downstream systems should only trust messages signed by a quorum of HSMs (at least 2).

For banking as an example, each bank should require some minimum number of distinct authorized signatures on each transfer request from another bank.

Individuals can have a personal public/private key, but HSM actions should also require multi-signature requests from both the user key (stored on FIDO2 or smart card) and a device signature from a list of devices the user is authorized to use, which reduces the risk of rogue devices tricking users into signing requests.

Go back to using bog-standard laptops everywhere, because no single device is in the critical path of security. If a single HSM loses its keys, have it generate new ones, add the new keys to the set of N possible keys allowed for root signatures, and remove the old ones. Only if too many HSMs need their keys restored would administrative users have to resort to SSSS to recover all HSM keypairs at once, and even then the procedure could be done one HSM at a time, with potentially different sets of users responsible for recovering each HSM. There never needs to be a single set of people who can restore the entire PKI; instead have K groups (for K HSMs) of N people, of whom M from each group are required to restore an individual HSM, where only L HSMs are required to operate the PKI. There can even be some intersection between the groups of users per HSM, so long as the intersection between any two HSMs is < M.
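For anyone who hasn't seen SSSS before, the core of it fits in a screenful. A toy sketch over a small prime field (real deployments use vetted implementations, with integrity checks on the shares):

    import secrets

    PRIME = 2**127 - 1  # field modulus; the secret must be smaller than this

    def split(secret: int, n: int, m: int):
        """Split secret into n shares; any m of them recover it."""
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(m - 1)]
        def f(x):
            acc = 0
            for c in reversed(coeffs):   # Horner evaluation of the polynomial
                acc = (acc * x + c) % PRIME
            return acc
        return [(x, f(x)) for x in range(1, n + 1)]

    def recover(shares):
        """Lagrange interpolation at x = 0 (pow(d, -1, p) needs Python 3.8+)."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    shares = split(123456789, n=5, m=3)
    assert recover(shares[:3]) == 123456789
    assert recover(shares[2:]) == 123456789

Any 3 of the 5 shares reconstruct the secret; any 2 reveal nothing about it.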


Yet it seems like their app is closed-source and distributed via the Android Play Store (which seems to be the main way to interact with the bank?).

I wish it was available as a reproducible build on F-Droid. That's how critical apps ought to be distributed.

This is a common trope in security-oriented businesses: more security is a good thing, but the consumer often isn't afforded the same attention.


It's also worth noting (although there is nothing special about the security procedure described in OP's article; I'm in an industry that's been doing this for some time) that Monzo doesn't stop you from running their app on a rooted Android phone, unlike all the other major banks, which block rooted devices in the misguided belief that preventing it is somehow more secure (and which have seriously sub-par app experiences compared to Monzo).


We have an open API https://docs.monzo.com


You want this private (banking) company’s app to be open source? Lol


Why not, really? I wouldn't expect it to be - just because it's so not the norm - but why not?

If you're talking about security, surely a closed-source app is just obscurity; there should be (and it sounds like there are) better precautions in place, such that it doesn't add anything.

If it's ripping off features.. meh, is it really development time in frontend logic/UI that keeps Monzo ahead of any competitors it's ahead of? (I'm saying that vaguely and unassertively because users seem to get very tribal and argumentative when it comes to 'challenger banks'!)

If it's wanting to hide the API for whatever other 'it's proprietary' reason, well Monzo has a ('beta' and 'developer only' for years) public API anyway. And 'open' banking of course, so you can use TrueLayer or whatever anyway.


Is most of their added value in the client app? I certainly doubt it. The app is probably a front-end that queries various API endpoints.

Who could use the source code? Their competitors? Maybe, but there isn't that much competition in that space, and actually developing the app is certainly not the most difficult part. Auditing it probably is, and using a competitor's open source app as a base is probably a no-go for a lot of established players. For what purpose would they copy a functioning app?

One thing I can see as an argument against would be raising the bar for would-be copycats and phishing attempts. Not that it raises it too much either.

Or is it to limit documentation of their API? I know security through obscurity is thought of as a good additional measure in traditional banking, but that doesn't seem to be the case here. And reverse-engineering an app wouldn't be that complicated.

I would actually love an open source, cross-bank app. Not sure why it couldn't happen. I'm not sure either what the value proposition is for banks to try and limit that kind of development. Control the user's interactions with the bank as much as possible?

It's like these banking apps requiring "SafetyNet" or similar: they trust Google or the manufacturer more than me, while I am the actual client!


They talk a lot about the air-gapped computer and provable compilation of an OS, but they just gloss over "we use HSMs" as if those were obviously secure. We have many examples of HSMs being compromised or broken, and they don't even acknowledge that.


This looks to me like an open invitation to be attacked. Banks don't usually describe openly the technologies or methodologies they use, for good reasons, and Monzo has been quite open about these matters for a long time (their use of Go, for instance). They're doing pretty well though, and since traditional banks are running quite old technologies, Monzo is probably in a far less vulnerable position than traditional banking.


None of what is described here is revolutionary or some novel form of security when dealing with PKI and root certs. Not sure how it opens up attacks or becomes an invitation to be attacked.

>banks don't usually describe openly the technologies or methodologies they use, for good reasons

I don't think it's "for good reasons". I think it's for "don't want to be found to still be using XP" reasons.


I agree that's a significant part of it (most banks don't actually do a very good job of security). The other part is that most banks' leadership are finance people, while Monzo's leadership is largely technical (and consequently more open to discussing technology).


Not sure what bank you're referring to, but most finance companies I've dealt with in the past (in the UK) have extremely good technical people.

Most of the stuff is security using multiple layers - which can be a pain to modify.

Any technical debt is evaluated as a risk and signed off by the business


Having good people is necessary but not sufficient for good security. In many cases, management isn't even aiming for compliance, much less actual security. This is based on multiple close friends and family in the banking and bank-auditing world in the US, as well as my own experience working on financial software (including with the relevant regulatory and compliance bodies and the many banks who were our customers); I can't speak to the UK specifically.


From my past UK financial security experience - security personnel can highlight and raise risks with any decision made by the business or other techies.

The higher the risk, the more hoops you need to jump through for security sign-off (to the point where they will not sign it off), and the higher the risk, the higher up the management food chain you need to go to get it signed off.

Obviously if the execs are not bothered about security then anything can happen.

But if something goes wrong, a big finger will be pointing at the person who signed it off - which seems in itself a massive deterrent.


Plot twist: their procedures are actually completely different, and this blog is part of a larger scheme to confuse potential attackers.


A lot of us InfoSec folks consider it very good practice to be 100% open about how security technologies are deployed. Ever heard of security by obscurity? It confers a false (dangerous!) sense of security, and prevents your security stack from being audited/reviewed even by casual comments, as is literally happening right now on HN.


are you saying that it's best for companies to openly publish all their (network) security?


Publish procedures?

Yes. Generally speaking, an open and auditable system is more robust and secure than a closed and non-auditable system.

There are obviously limits (e.g. publishing what specific VLAN numbering scheme you use is obviously not helpful to anyone and just provides information that wasn't known).

But yes, it is best practice to publish and accept feedback on generalized procedures.

You should sign up for the MDSP dev-security-policy mailing list and see how the (open) conversations have continued to improve security for all.


Fully audited, absolutely yes; feedback comes internally, with many (100's of) technical/network architects poking and prodding.

I fail to see any positives in openly publishing anything unless you provide an extremely detailed view - I see only negatives.


This isn't an invitation to attack us, except as part of our HackerOne program (please don't make my life more difficult). But to steal a line from someone else at Monzo: Kerckhoffs's principle probably has some crossover relevance to this stuff.

Can't really comment on why other banks don't talk about this so you'll have to draw your own conclusions on that :P


Bragging about security definitely increases the likelihood that some random guy will say "oh yeah, we'll see about that" in response to a marketing campaign centered around security.


If Some Random Guy can just happen to compromise the processes described here after reading a few hundred word blog post, they should probably just go after every browser root cert program which use almost identical procedures and have also described them openly.

Why go after Monzo when you can go after XYZ root cert trusted by every device out there?


Because one of them initiated a marketing campaign about their security to get some attention, and that attention might lead someone to find a different vulnerability the company didn't know about.


Publicly describing your security procedures is common (and encouraged) practice in infosec. See Cloudflare, Mozilla, Google, etc., who have all publicly described various security procedures (including key signing and key management).

If you could actually do anything based on the knowledge here, you would go after a browser root cert because your hack would be exponentially more effective.


I wouldn't do anything, but I also wouldn't do a security driven marketing campaign. If someone thinks their root cert is safe it doesn't mean there isn't some other way to get access to user credentials that could allow compromise through some other avenue.

edit since I can't reply down level further: Yes I would be very careful about security related marketing and really consider if its necessary at all.


>it doesn't mean there isn't some other way to get access to user credentials

You can always be hacked in some other way, so I guess we should never write anything about security ever?

>edit since I can't reply down level further: Yes I would be very careful about security related marketing and really consider if its necessary at all.

Interestingly, the security people at Google, Cloudflare, Microsoft, and just about every other major tech company (and security company, certificate authority, etc.) agree that openly talking about security best practices is.. well.. a best practice. And that keeping security practices secret (obscure, you could say) benefits no one.

Not sure why you have to shoe-horn marketing in every comment, literally anything a company posts is arguably considered marketing, what's your point?


I hope this post doesn't come across as a brag as it's not meant to be, being arrogant about security only ends one way after all...

I published this because I want to be open about how we do these things, because I think we safely can be and it shows that we care and don't just pay lip-service to security

If you'd like to show us up for our security though, please do peek at our HackerOne program. I'm a program admin on it and I'd love to read more interesting reports https://hackerone.com/monzo <3 (I know it doesn't have a paid bounty yet, I'm advocating for it)


It comes off as marketing based on security, which is no different than bragging, in my opinion. You have a program for dealing with security vulnerabilities - clearly this blog post was to tell people about how good your security is. It's marketing.


> COTTONMOUTH-1, a USB cable manufactured by the NSA that looks like a normal USB cable

It is a shame the article doesn't bother to cite or link to its source.


That's probably a case of assuming that it's well known, the ANT Catalog leak was a pretty big thing around the time of the Snowden leaks [1]. If you're deep in a topic it's easy to lose sight of what's common knowledge and what isn't.

Of course by now you can just buy similar devices on the free market, what used to be expensive high-tech in 2008 is now possible with commodity hardware.

1: https://en.wikipedia.org/wiki/NSA_ANT_catalog


Yes, and when you take the time to write an article to promote your company, you probably have the time and inclination to cite your reference.

In this case, the author posted here on Hacker News, which provides a wonderful opportunity to accept feedback. And it looks like the author added some text giving more context explaining where it came from:

"COTTONMOUTH-1, a USB cable manufactured by the NSA that looks like a normal USB cable, but has a hidden implant that allows an attacker to inspect data on the cable wirelessly, modify data as it travels over the cable and install malicious software on connected computers - US National Security Agency, Advanced Network Technology (ANT) Division"

It is nice to see authors being responsive to comments here.



A belated comment for the author:

The flow of the text is broken by the Cottonmouth cable image and its caption. I'm not suggesting removing the image; it is interesting and serves as a good example.[1] I'm suggesting another round of editing to improve the reading flow.

Additionally, since so many people scan text rapidly, some visual changes are needed too. The caption looks too much like a follow-up paragraph. To fix, consider:

A. Include the caption in the bounding box used for the image (which has white space on both the left and the right).

B. Use a different font, font size, or style for the caption.

C. Use a different background color for the (image + caption).

D. Separate the (image + caption) from the surrounding text with vertical lines.

I'm inclined to think that in this case, A with B would be sufficient and would work well. You could also pair that with C and/or D, being mindful that the latter two could be overdone.

[1] Why stop at one example? I think a carousel of examples would be even more interesting!


Probably because it was top secret.


Yes, it was classified.

But that isn't the point. You can cite just about anything -- ranging from phone conversations to in-person interviews, from cookbooks to journal articles. Yes, even documents from WikiLeaks or other data dumps. It isn't hard.

Citing your sources shows that you are careful and you respect your readers. Make it easier for them to follow the trails where they lead.

It also saves time, energy, and electrons. Write out the reference once, help N people later.


On a similar note, if I’m not mistaken, Monzo stores customer passwords (?)/keys used to access their bank information from other bank accounts (through monzo plus?).

I wonder if Monzo has done a similar deep dive into how they structure their architecture/security practices to protect against the threat of this information+encryption keys being stolen.


I believe that is through the Open Banking API/standard[0]. Anecdotally, whenever I've connected bank accounts, it's been limited to three months until having to re-authenticate and get new keys.

[0] https://www.openbanking.org.uk/what-is-open-banking/


I'd be interested to hear the other side of this story, i.e. how frustrating and slow it must be for IT/devs to implement any change within that fortress?

Any Devops on HN to tell us about the last time they implemented a change on a Monzo API ?


This is only for managing the most important secrets of all, which are kept offline in safes most of the time; development work just happens on regular Macbooks. I'm currently sat in my bedroom writing this, and I run this program / authored this blog post.

These sorts of precautions are pretty common at large companies, there's some good coverage of them generally here: https://en.wikipedia.org/wiki/Offline_root_certificate_autho...


It sounds like this is just for them to mint Root Certificates. I don't see how this elaborate ceremony would impact other aspects of development. It perhaps speaks to a very security conscious development culture - and that can impact agility. But when you "move money" - that's a decent tradeoff.


Mint intermediate and root certs, more likely.


Mint roots, sign intermediate CSRs
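For anyone curious, the mechanical shape of those two operations looks roughly like this (a sketch with pyca/cryptography; names and lifetimes are invented, and in the real ceremony the root key lives in the HSM rather than in process memory):

    from datetime import datetime, timedelta
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    def name(cn: str) -> x509.Name:
        return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])

    now = datetime.utcnow()

    # 1. Mint a self-signed root
    root_key = ec.generate_private_key(ec.SECP384R1())
    root_cert = (
        x509.CertificateBuilder()
        .subject_name(name("Example Root CA"))
        .issuer_name(name("Example Root CA"))
        .public_key(root_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=365 * 15))
        .add_extension(x509.BasicConstraints(ca=True, path_length=1), critical=True)
        .sign(root_key, hashes.SHA384())
    )

    # 2. Sign an intermediate CSR (normally carried into the room on a CD)
    inter_key = ec.generate_private_key(ec.SECP384R1())
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(name("Example Intermediate CA"))
        .sign(inter_key, hashes.SHA384())
    )
    inter_cert = (
        x509.CertificateBuilder()
        .subject_name(csr.subject)
        .issuer_name(root_cert.subject)
        .public_key(csr.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=365 * 5))
        .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
        .sign(root_key, hashes.SHA384())
    )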


Monzo engineer here. 99.99% of changes don't require key ceremonies. When they do, we can usually book them with a few days notice. It's normally for things like onboarding big new systems and gets planned into a project from the beginning.

The people who implemented this process are the same type of "IT/devs" or "Devops" you mention. There isn't a whole lot of throwing things over the wall to Ops, or for that matter putting up arduous processes against "normal" engineers.


I work on a system that is air-gapped, where we also maintain state and run ceremonies several times a week. It's a very different paradigm from the one we're used to as developers, where everything is always connected and online, but we found a way to smooth the process from a developer's point of view. However, sometimes doing something as simple as collecting logs can be a challenge.


> Monzonauts: you can see the code in the Coen repository https://github.com/monzo/coen

404 is it a private repo?


Yeah this is confusing but that repository is basically just for Monzo employees, sorry :( https://news.ycombinator.com/item?id=29266757

We're going to remove that link


That is probably the reason it is targeted at Monzonauts (Monzo employees)


Do you test this system, like drills with planted failures?


Do you mean testing if someone can break in or testing that our system can handle disaster recovery?


Ability to interrupt when someone notices a hole in a bag, a non-working camera, a missing witness, administrator's dismissal.


These are covered by the exceptions procedure as described in the script. Just to clarify, the Ceremony Administrator is a per-ceremony role rather than a permanent occupation, so dismissal would just involve replacement with another team member.

Exceptions are evaluated, documented and acted upon from the moment they are discovered. The Ceremony Administrator decides what to do about them after consulting the Internal Witness (but the CA gets to decide). At the end of every ceremony (with exceptions or not) we ask every participant to sign a sheet saying they're happy we effectively haven't breached security (last page of here: https://monzo.com/static/docs/redacted-root-certificate-auth... ). If they don't sign, that would kick off a large discussion about whether we're happy to continue using these roots or should replace them.

From memory we've had about 2 exceptions, both very minor and both resolved (as ever, when you get into the room stuff comes up that you couldn't have planned for; if I recall, one was something silly like a typo in a command documented in the script, so we just ran the command without the typo, etc).


Yes, the system is assumed to work correctly; that's sensible. But does it? What happens when some component starts to work in an unintended way? If the participants don't have experience with failures, how will they behave?

I mean the case when the administrator denies an interruption. Rotation of administrators and signatures is nice, but there's still one person who is the most authoritative administrator. The bug to address here is conformity, see https://en.wikipedia.org/wiki/Asch_conformity_experiments


But does Monzo (or any other bank at this point) have a sustainable business model? Finance is getting commoditised, so there's very little money to be made from savings accounts, and Monzo has been loss-making since the beginning. It's great that they have a slightly more modern software stack, but it might not be enough to sustain a bank.


Interchange. While this article is more US specific and Monzo is obviously a UK bank, take a look at https://bam.kalzumeus.com/archive/debit-cards-are-hidden-fin...


The best way to hide anything is to make the adversary think that it does not exist.


That sounds like obscurity.


The PKI infrastructure for a bank is on a laptop?


The keys are on a HSM, but something has to nicely ask the HSM to do things and an air-gapped laptop feels like a pretty sensible choice? (Especially given the precautions we've talked about)


Just a side note, but I have to give a thumbs up to this site being the first one I’ve seen since cookie popup shenanigans started that allows me to decline tracking cookies and dismiss the popup in a single click.


They're just complying with the regulation. Any consent flow that does not allow you to easily decline does not actually comply with the regulation (at which point you can just not bother with a consent flow at all - you're still in breach, but at least you're not destroying the user experience).


> but at least you're not destroying the user experience

That is exactly why many do it. The chance of getting fined is low, practically non-existent in most cases, and making the user experience difficult increases the chance of people just saying “fuck it” and agreeing¹. Those high up the data collection chain also hope that annoying enough people with the bad UX will make them campaign to have the regulations rescinded² in their territory.

---

[1] losing people like me who say “fuck it” and move on elsewhere, is seen as a small price to pay

[2] even though the regulation isn't at fault, the deliberately bad UX is


Yes. I was amazed by this. No ‘more options’ thingy with weird sizing on phones, no scrolling to ‘save cookie changes’, etc. This is how companies should treat customers if their business is not harvesting their data.


It isn't the case with this site, but I generally mistrust “decline all” buttons without further investigation because on some sites it is only equivalent to unticking “consent” while leaving a pile of “legitimate interest” options enabled.


For what it's worth, the two top GDPR consent WordPress plugins can be configured like that (it's actually their default, if memory serves).

It's entirely up to the people authoring the website.


In France, it's the law. Refusing cookies should be as easy as accepting them ( https://www.cnil.fr/fr/refuser-les-cookies-doit-etre-aussi-s... )


And yet, in the majority of cases, the flow is not as easy. Yet nothing is done.

Jaywalking is illegal in many places too. It's an enforcement problem, not a legislative one.


No cookie popup without JS. Props.


So your entire system relies on a single consumer grade laptop without any redundancy or ECC. Nice.


I think either the blog post is worded wrong (sorry) or you've misread, if you think that. I talked a bit about the laptop precautions here:

https://news.ycombinator.com/item?id=29266521

And why it's just a laptop here:

https://news.ycombinator.com/item?id=29267685

To be super clear, the actual private keys only ever exist on HSMs


It would be interesting to hear how efficient their whole IT/dev organisation can be, given that the slightest change would apparently take several days, a trip to a CD-R shelf, etc., just to be allowed to start up a laptop and get to work?


This is only for secrets that are incredibly important, to the point that their undetected loss could cause massive problems. I run these key ceremonies, and I'm typing this from my bed on my work-issued Macbook :)


Hey glc, FYI the GitHub repo [0] with the OS code from Coen is not available; the URL is probably broken.

[0] https://github.com/monzo/coen


I should have linked to an internal page that redirected there so people didn't get a misleading 404, but the internal coen repo is only open to Monzo employees, not the public, because it contains some HSM-specific stuff :( sorry - the public coen (which we're a fork of) is open though, here: https://github.com/iana-org/coen


> We’ve also taken other measures to frustrate attackers, for example we have physically removed the hard drive so there is no way to persist data on the laptop itself

This sentence made me believe in the incompetence of the authors.


Well, you have to start somewhere. You will never get perfect security; however, persisting malicious code on disk (and getting the machine to boot from it) is easy and standardized. Doing the same in firmware (or non-volatile storage in other parts of the machine) is going to be very manufacturer-specific and will require knowledge of the exact hardware, versions, etc., and because of the air gap you can't even tell whether your attempt worked (and of course, if your malware needs to exfiltrate data, it still needs to break out of the air gap).

TLDR: it's not bulletproof but increases the effort required from an attacker significantly.


This is covered more here https://news.ycombinator.com/item?id=29266521

Sorry for not being up to your standards I guess :(


competence/incompetence is not a compliment or slur.

I am an incompetent welder, for example.

My point is that if you are removing the hard disk to prevent state storage, that is ineffective, and if you think that it is effective (which you stated), you are objectively factually incorrect.


Why do you think this shows incompetence?


There are about 10 other ways of persisting secret information on a system other than the hard disk. The claim they make is amateur hour.

There are better ways of doing this. There are embedded systems that can run general-purpose OSes and cannot store permanent state, such as the Raspberry Pi (<=3), which are much better suited to such things.


I expected that Qubes OS would be used somewhere, but it's not. By the way, it's sometimes more secure than air gapping: https://invisiblethingslab.com/resources/2014/Software_compa....


Qubes is awesome. I think the main thing it protects against, though, is an application you don't trust too much escalating its privileges (for example, a website sandbox-escapes into the browser and then tries to go up further). Since we're on an air-gapped, immutable system, all the tools we have on our CD have some implicit trust, so if you manage to get software that isn't playing nice onto the OS, you've probably done it through hijacking the OS CD anyway (I covered a similar-ish question here: https://community.monzo.com/t/how-we-protect-our-most-sensit... )

We also don't run that many complex commands on the actual OS; it's basically one of three things 99% of the time:

- Mount this CD containing some CSRs or etc

- Ask the HSM to please sign this

- Burn a CD with these files and then report the hash / PGP word list of the content on the CD
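To give a flavour of that last verification step, it's conceptually just this (a sketch; the real PGP biometric word list is two fixed tables of 256 words, replaced by stand-ins here):

    import hashlib

    # Stand-ins, NOT the real PGP word list (one "even" and one "odd" table)
    EVEN_WORDS = [f"even{i:03d}" for i in range(256)]
    ODD_WORDS = [f"odd{i:03d}" for i in range(256)]

    def pgp_words(digest: bytes) -> str:
        # Alternating tables makes a dropped or swapped byte audible when
        # two people read the words back to each other across the room.
        return " ".join(
            (EVEN_WORDS if i % 2 == 0 else ODD_WORDS)[b]
            for i, b in enumerate(digest)
        )

    cd_contents = b"...bytes read back from the burned CD..."
    digest = hashlib.sha256(cd_contents).digest()
    print(digest.hex())
    print(pgp_words(digest))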


QubesOS is security by compartmentalisation and allows you to segregate and air gap hardware virtually (which requires VT-d/IOMMU and trusting there's no Xen guest escape or hypervisor exploit). It's a neat project, though IMHO it would be weaker for the threat model you describe in your blog post.



Does Coen boot from CD and switch to RAM? Or do you have two disc drives, to mount a second disc?


Coen boots off a CD and mounts itself into RAM as and when parts of it are loaded, but you still need the CD in the computer for it to load stuff not yet in RAM, so we have a second CD reader + writer for the material.

You can tell if you're using a part of the OS that hasn't been used in a session yet because you can hear the CD drive with the OS CD in it physically spin up :)



