Our systems are as shitty as in any other industry: security problems exist, and social engineering is a problem. However, there is one simple rule that lets us worry less about possible hacks, and when this rule is broken for some reason, shit usually hits the fan (as in the case of the Bangladesh heist).
The rule is: it should always be possible to manually reverse a transaction.
Ethereum got it right. Welcome to the real finance.
Ethereum did not reverse the transaction because the transaction was fraudulent. If that was the case, thousands or millions of crypto transactions would probably need reversing.
Ethereum reversed the transaction because of a TBTF situation.
The DAO (a Layer 2 system built on Ethereum) accumulated too large a share of the market cap of the underlying Ethereum network, making it a threat to that network. THEN the DAO failed, threatening the entire underlying Layer 1 network (the blockchain and its ecosystem of miners).
Were it not for the possibility of existential threat (TBTF) then no fork would have been possible, regardless of how much fraud it enabled or how many bugs were found.
The problem is that the crypto community is choosing to not see this for what it is: the inherent systemic vulnerability of a L2 system running on an L1 blockchain. If another DAO - or Lightning Network - or any "L2" system running on top of a crypto garners enough capital from the underlying network, then the L2 system is a weapon that can be used to damage the L1 network.
Crypto enthusiasts should take this as a grim warning: L2 applications can hijack the L1 network and damage it. For this reason, we should be skeptical of any L2 network with excess capitalization.
We should be even more skeptical of any group of devs trying to foist an L2 solution on the community as a one-size-fits-all solution (to distributed apps, to transaction scaling, etc).
Can't tell you how many offices and server closets I walked into and had to ask the question "Hey, what's this machine do?" or "Hey, what's this laptop plugged in for?"
Also lots of "This network is wired in a way that totally doesn't match the spec/use case/documentation."
The construction and cleaning contractors we had were given the same full access to our most secure rooms as anyone in the IT department.
The potential for insider and socially-engineered grift is huge.
Hah, they put the servers here in a closet only accessible via the single use restroom. I don't know who designed this building but they didn't understand layout well.
> Ethereum got it right. Welcome to the real finance.
Well, the Ethereum Classic fork seems to still be alive, and many exchanges are considering adding it. So it was reversible, but the work and the after-effects were pretty big. Reversibility in older systems is much easier.
> The rule is: it always should be possible to manually reverse the transaction.
In a financial transaction there are two parties, and both must agree to a reversal - especially if you want to reverse thousands of transactions - unless there are special circumstances. But in no way will it always be possible to manually reverse a transaction.
There are lots of parties to any financial transaction. Transfer agents, exchanges, clearing houses, regulators, central banks, et cetera. Various combinations of the above, along with banks and the "buyer" and the "seller", can reverse transactions.
Financial systems have been hacked since the beginning and now the hacking is getting worse.
Regulations will make it progressively harder for newcomers to compete in the marketplace, executive pay will keep rising, corporations will get bigger. Owners of capital won't be able to trust outsiders so they will only recruit their family and close friends into well-paid executive positions. Advertising will completely dominate consumer behaviour; small businesses will disappear; once virtual reality takes over, corporations will make sure that customers forget that these small companies even exist.
Machines will replace people in the workplace. The rich won't care, because by that point, thanks to social tensions and advertising, rich people will have grown to literally hate the poor and want them dead.
Humor aside, the real crimes are the systemic ones from the top, like the Libor clowns, which will erase public trust at some point and cause the public to go back to keeping money in mattresses.
Fortunately there are some small wins, like these little fish going to jail: http://www.bbc.com/news/business-36737666 but we need to keep it up and restore accountability.
Yes, I think 'honesty', 'trust' and 'reputation' used to be very important in the old days but not so much now - The system is extremely generous when it comes to giving people 'second chances'.
In our semi-globalized world, you can take HUGE risks and make a COMPLETE MESS of your reputation and credit rating in your home country, but then you can move to a different country (and often, you can also bring your dirty money along with you) and start fresh as though nothing ever happened. Unless you're a major public figure, no one will know what a crook you are.
A partially-globalized world is a perfect world for crooks. I think we should either give up on globalization completely or go full steam ahead and merge into a 'world government' under which everyone can be held accountable for their past behaviour without any loopholes.
We don't need a global government as much as a global reputation system, and we're slowly moving in that direction. It's especially true of the United States and Europe--if you screw up big time there, it's easy to find out, and it's not like authoritarian regimes outside the west have any incentive to protect the reputations of western companies via censorship.
The ending is more likely to be: the poor rise up in an attempt to overthrow the system - in response, robots/drones (owned by and working for the interests of the rich) will automatically kill all poor people.
Then the robots will use the bodies of poor people as fertilizer for crops.
The rich won't even know that there was an uprising at all (the robots won't report it), because the robots' only duties will be to increase the comfort and happiness of their masters - no need to encumber their masters with 'operational details' related to the 'management' of 'wildlife' and agriculture.
It has already happened many times in history. Remember kings and royal families? This will end like it always does: people will rise in a new revolution, after which a new system will emerge from the dust.
This new system has been vetted by whom how far in advance? If it's like the shenanigans of the Fed and too big to fail banks during 2008, I don't see how people trust the new system. Why have only one new system? Part of blockchain based solutions is exactly because of the reduction in trust in traditional finance methods.
And how can it be shown that bad actors manipulating the old system haven't somehow seeded the new system for their future advantage (either to sabotage it, or more likely to just disappear with their gains from damaging the old system).
The article talks about all these sophisticated attacks that could hurt the financial system but I think they're not seeing the elephant in the room: What if someone just ran "rm -rf /" on thousands of servers at just one of the "too big to fail" banks?
How long would it take to recover from that? Would it even be recoverable?
I don't think any of the big banks employ enough IT people to recover tens of thousands of servers in time to save the world's economies from a major collapse. Recovering from such a situation would take months. Not only that but let's assume they hire temporary contractors to do the work... Can you even vet that many people that quickly? You'll be handing these folks some of the world's most sensitive information.
The Fed needs "IT stress testing" in addition to their balance sheet stress tests. They need to ask these sorts of questions: "If half your servers were deleted how long would it take to recover from backups?" "Demonstrate a backup recovery on the following ten randomly-selected servers right now. You have 8 hours. Good luck!"
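That sort of drill is easy to script. Here's a minimal sketch, assuming a hypothetical `restore_server()` hook (stubbed out as an instant no-op; a real one would trigger an actual restore from backup and block until it completes):

```python
import random
import time

RESTORE_SLA = 8 * 3600  # the "you have 8 hours" window, in seconds

def restore_server(hostname):
    """Hypothetical hook: trigger a full restore from backup for one
    server and block until it completes. Stubbed as an instant no-op."""
    return True

def run_dr_drill(inventory, sample_size=10, seed=None):
    """The auditor (not the bank) picks servers at random and times
    each restore against the SLA."""
    rng = random.Random(seed)
    results = {}
    for host in rng.sample(inventory, min(sample_size, len(inventory))):
        start = time.monotonic()
        ok = restore_server(host)
        elapsed = time.monotonic() - start
        results[host] = (ok and elapsed <= RESTORE_SLA, elapsed)
    return results

results = run_dr_drill([f"srv{i:03d}" for i in range(500)], seed=42)
print(sum(1 for passed, _ in results.values() if passed))  # servers within SLA
```

In a real drill, the interesting output is of course the distribution of elapsed times, not just the pass/fail bit.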
Having worked at 2 of these "too big to fail" banks, I can say disaster recovery testing is par for the course now. We would do DR testing every quarter on both the primary and secondary data centers. Common simulation scenarios were network/disk failures, in which case the secondary data center would take over operations. We would also time how long it took for applications to come back up, with high-priority apps (trading, payments, etc.) being given extreme importance.
We did this too. We tested our DR plan at least once every year (this included busing people to a different location, timed responses etc.), and we were considered a non essential part of the bank. We also had to make sure that all our servers had dual power supplies, and were attached to diesel generators in case of a power outage. They were much more on top of it than any other place I worked.
There was a talk last year at DefCon that went over mainframe vulnerabilities, and it was like a playground. The biggest barrier to entry is getting a mainframe to experiment with.
This reminds me of an article on Wired[1] where someone hacked into hosts that were hooked into legacy X.25 networks. And just like mainframes and COBOL, there are not many engineers who understand X.25 networks today, and hence those systems had a very relaxed security posture.
Once you are there, it is usually an open playground. These systems, however, are usually not on open networks and only pre-whitelisted connections are allowed.
> The biggest barrier to entry is getting a mainframe to experiment with.
Is there a barrier to emulating these systems? It's probably difficult (illegal?) to emulate the various Tandem operating systems on virtual hardware, but I'm sure there's a way.
I'd be shocked if there wasn't a turnkey AS/400 VM available.
Transactions should be logged to something that can't be tampered with. Then you can go back and recover state by re-running transactions. Sarbanes-Oxley tries to require this, and it produced a boom in write-once media, such as DVD writers. There are also DAT tapes and even disks with controllers which supposedly enforce write-once.
It's not clear how good compliance with this actually is.
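Mechanically, recovering state by re-running transactions is just a fold over the log. A toy sketch (the transaction shape and amounts are made up for illustration):

```python
def apply_tx(balance, tx):
    """One step of the fold: current state is nothing more than the
    result of replaying every logged event in order."""
    if tx["type"] == "deposit":
        return balance + tx["amount"]
    if tx["type"] == "withdraw":
        return balance - tx["amount"]
    raise ValueError(f"unknown transaction type: {tx['type']}")

def replay(log, upto=None):
    """Rebuild account state from the immutable log; `upto` recovers
    the state as of any earlier point in time."""
    balance = 0
    for tx in (log if upto is None else log[:upto]):
        balance = apply_tx(balance, tx)
    return balance

log = [{"type": "deposit", "amount": 100},
       {"type": "withdraw", "amount": 30},
       {"type": "deposit", "amount": 5}]
print(replay(log))           # 75
print(replay(log, upto=2))   # 70
```

The whole scheme only works as an audit mechanism if the log itself can't be rewritten, which is where the write-once media comes in.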
Event Sourcing describes everything about such a system, except for a mechanism to ensure non-tampering. That's where other technologies such as regularly published hash values come into play - which may be printed in newspapers, cryptographically signed by multiple trusted organizations, agreed on through some blockchain technology, or a combination of all.
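A minimal sketch of such a hash chain: each entry commits to the hash of the previous one, so editing any historical entry invalidates every hash after it. The head hash is the value you'd periodically publish or have signed.

```python
import hashlib
import json

def append_event(log, event):
    """Append an event, chaining it to the previous entry's hash."""
    prev = log[-1]["hash"] if log else "genesis"
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev, "hash": digest})
    return digest  # the value to publish / get signed periodically

def verify(log):
    """Recompute the whole chain; True iff nothing was tampered with."""
    prev = "genesis"
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        prev = hashlib.sha256((prev + payload).encode()).hexdigest()
        if prev != entry["hash"]:
            return False
    return True

log = []
append_event(log, {"tx": 1, "amount": 100})
append_event(log, {"tx": 2, "amount": -40})
print(verify(log))                    # True
log[0]["event"]["amount"] = 10 ** 6   # rewrite history...
print(verify(log))                    # False: the chain no longer checks out
```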
The technical terms are "audit logs" written to "append-only" storage on "write-once, read-many" media. What we've used for a decade or so. I know Fowler is popular but unrelated to the basics of this discussion.
The linked Fowler article is from 2005, which is also more than a decade old.
> Fowler is [...] unrelated to the basics of this discussion
While Event Sourcing doesn't describe audit logs and storage systems, it does describe how to build a whole working system based solely on that, with real-world experience warnings about pitfalls and so on.
It could be useful. What I'm saying is that IT security, which existed before 2005 & governs this topic, calls it those things. By giving the standard terms, I give you or other readers a chance to Google for them to find solutions to the problem. Whereas, people Googling non-security terms and topics will probably miss key info from a security vantage even if there's some good tips out there. Best to not change terms in a risk-related field but the link itself is fine.
There's this tricky edge between "compliant" and "correct". I could build "a box" that takes files and only lets you add new versions, but ultimately it can't stop me from removing its drives and manually editing them.
In other words, the compliance level is actually pretty good, but the solutions themselves are weak and not hardened against intent (just normal operating use).
Recovering internal state is not a big issue for financial institutions, everybody who's serious can do that, the question is about the time needed to do it at scale.
However, recovering internal state is not really a solution to the main problem - you can't reverse any external transactions that were done based on the temporarily wrong state. If an ATM or a teller handed out cash, you can't simply cancel that. If you (as a bank) sent a payment instruction to another bank or some payment network, you're often not going to get that money back, especially if it was fraud instead of some mistake.
> If an ATM or a teller handed out cash, you can't simply cancel that.
I wonder if it's possible to specifically cancel the serial numbers of the cash that was fraudulently withdrawn. In other words, make it no longer legal tender. Of course, the problem is how the acceptor of the cash knows that it is fraudulent, but perhaps a public list of serial numbers known to be fraudulent would be a start?
The cash is still legal tender, but it can be used to catch the culprits. IIRC currently around half of cash travels just for a single transaction (i.e. ATM->person->shop->bank); and ATMs would know which particular serial numbers were handed out; so for a large case you can find out where the particular banknotes were spent and that can be combined with camera recordings to find the involved people.
But in an automatic way, no, a public list of known fraudulent serial numbers would not be sufficient in the current legal environment; it would be impractical to mandate everyone to check such a list at every transaction for every single banknote.
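For what it's worth, the lookup itself would be the easy part - a constant-time set membership test (the serial numbers below are invented). The hard part is exactly what's described above: distributing the list and mandating the check at every transaction.

```python
# Hypothetical blocklist a cash-accepting device might consult.
FRAUDULENT_SERIALS = {"AB1234567C", "AB1234568C"}  # published after a heist

def accept_note(serial):
    """O(1) check of one banknote against the published blocklist."""
    return serial not in FRAUDULENT_SERIALS

print(accept_note("ZZ0000001A"))  # True: not flagged
print(accept_note("AB1234567C"))  # False: flagged as fraudulently withdrawn
```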
I would be hard pressed to think of things that "can't be tampered with." When you make something difficult to tamper with, you usually just increase the potential payout of tampering.
There's an essential problem: all security systems can be compromised. When a security system seems more safe than the others, more people will use that system, thereby increasing the incentive for adversaries to try to break it (and almost always they will succeed).
> When you make something difficult to tamper with, you usually just increase the potential payout of tampering.
The real goal is to make the cost of tampering exceed the payout. The standard mechanisms for increasing the cost are computational power needed (cryptography) and ability to participate in mainstream society (legislation).
"When you make something difficult to tamper with, you usually just increase the potential payout of tampering."
You actually decrease it. The payout of tampering is associated with the assets the security scheme protects. On that point, vendors should sell their solutions as reducing, but not eliminating, risk - plus encourage monitoring and audits.
As someone on the outside, I would imagine the OCC, DTCC or some other clearing house getting hacked as creating far more havoc? Any reason this wouldn't be the case?
> But if this happens to many banks concurrently, and nobody understands why, would central banks be able to save the situation?
Why not? If we restrict ourselves to sovereign, first world central banks (i.e. the Fed, the BoE, the ECB, et cetera), a bank holiday (to reconcile records) followed by a liquidity injection would not only be expected, it would have direct precedent from FDR's bank holiday on 6 March 1933.
I think the underlying issue here is that responsibility for preventing these attacks is misplaced. In short: we need to blame the victim.
There's a popular notion to "not blame the victim" when it comes to many other crimes (e.g. rape). Because long-standing cultural traditions or implicit biases can be involved, it's not always glaringly obvious that somebody is blaming the victim. As such, it's become a kind of litmus test: you look for warning signs that some policy or statement is (even unintentionally) laying the blame on the victim. And that habit is easily transferred to new misconduct, such as hacking.
I think we need to step back and understand why we should not blame the victim for other crimes: I'd argue it is not because they're already suffering, but because the best way to prevent the problem is by focusing on (A) those we can influence and (B) those best placed to prevent the problem. Often that's not the victim - but sometimes it is. In many sex crimes, the perpetrator is socially well-respected and in a position of power, so that's (typically) a man who can be influenced (satisfies A), and since he's using his position of power he apparently has it, so he's well placed to prevent the problem (satisfies B).
So for example, we might prosecute people breaking into cars. But most people realize that we're not likely to catch enough thieves to really reduce theft to a minimum, so we also invest in locks, and we teach people not to leave valuables in sight. This is a form of blaming the victim - justifiably so, since I'd argue that if you leave your car unlocked and/or leave valuables in sight you're not just hurting yourself, you're hurting others too: you're making theft easy, and by making it worthwhile, you may encourage thieves to try more frequently - also against other targets.
For another example, consider vaccination. Failure to vaccinate may cause the victim to become sick (or their kids to become sick). But here too, it's not just themselves they hurt; they hurt others by propagating dangerous diseases. Beyond a certain level, they'll contribute to epidemics that can hurt even the vaccinated, since no vaccine is perfect.
From the perspective of preventing harm, protecting yourself from malicious hacks is more like a cross between theft prevention and vaccines, and not like preventing rape by a powerful individual. Most hacks are trivially easy to prevent (it often takes numerous bugs, mis-designs, and some social engineering to gain access) if there were systematic effort to prevent hacks, so the victim is in a place to prevent the harm from occurring. And since it takes considerable organization to run most vulnerable services in the first place, the victim is also one we can influence.
Focusing on the hackers isn't just futile, it's actively harmful in several ways. Not only is it obvious that many hackers cannot currently be found, many are beyond the reach of law enforcement by virtue of living elsewhere (or at least, acting through not-entirely-cooperative countries). So it's immediately apparent that it's never going to suffice to focus on the perpetrators (they don't satisfy A: we can't influence them).

But also, by focusing on the hackers, we help keep vulnerabilities secret. Would you rather be hacked by a script kiddie or by an unscrupulous competitor or hostile country? Hiding vulnerabilities rather than fixing them means that the vast majority of hackers that can never be caught simply have more targets.

Additionally, by focusing on hackers, we draw attention away from those that can bear the responsibility to prevent hacks: the victims. And that lets them get off the hook too easily. Most companies suffer rather few consequences for running infrastructure that is, in essence, a public menace. And much like the theft and vaccination examples, they hurt others by remaining vulnerable. When an organization is hacked, it affects not just it, but many others. In a data leak, most harm is usually suffered by those whose data is leaked, not by the company holding the data. And when financial infrastructure is hacked, it's not just the organization running it that is harmed, but in particular those that rely on it.
In short: we need a sea-change. To really address the risks posed by hacking, we need to stop focusing on hackers, and instead blame the victim. By failing to defend themselves they are hurting themselves and others, and focusing on hackers is never going to work anyway (and indeed makes it less likely that vulnerabilities are discovered by non-malicious or mildly malicious actors rather than those really motivated and out to get you).
You're preaching to the choir here. Find a way to make your point more concise and convince the popular media and your non-technical friends, and then we're on to something.
Yep this is what I'm referring to. And an 8 MB block size still won't resolve the problem.
It's really tiring to see bitcoin promoted as some sort of second-coming-of-Christ when all evidence points to it being nothing more than an interesting experiment.
If the current transaction rate is enough for the practical purpose of actually using Bitcoin for commercial transactions (I do use it, and it works), then everything's fine as of now.
When it won't be enough, the Lightning Network will be adopted, as there will be a need for it. Premature scaling is the root of all evil. :)
You have to do everything right, 100% of the time. The bad guys only need a single lucky break. Even the exchanges, whose multi-million dollar businesses depend on it and who have full time devs working on it, managed to get compromised quite a bit.
With the banking system, you generally have some recourse - they're legally responsible for refunding you in some cases, they're able to cancel or claw back transactions in some cases, etc.
TL;DR: It's nice not to be totally fucked for a small mistake.
If I lose my wallet with my 2FA bank access token _and_ my password together, will the bank be responsible? No.
The problem with Bitcoin is that you are your own bank. Implementing personal bank-level security is hard. There is no tech support to call and no authority except your own.
And if you are not confident enough to set up your own security, you can store your money in an external wallet, like Coinbase. They will be your bank. Just like IRL!
Except it's not, as I'm just initiating a normal online payment through my online banking, and the recipient receives it (usually) close to instantly.
Giving cash to a friend still requires me to withdraw the cash from an ATM, pass it to the friend, who then needs to deposit it using a deposit ATM or visit his branch.
One involves a few clicks on your online banking website, the other requires a much longer process. The backend company isn't really relevant, what is relevant is that 'Faster Payments' has been used as a marketing term for 'near instant bank transfers' that transparently use FPSL as a backend.
Technically, Faster Payments can handle payments of up to £250,000 (and there are plans to raise that to £1m) but the individual banks place their own limits on the maximum amount they'll accept: http://www.fasterpayments.org.uk/about-us/transaction-limits