It seems common knowledge these days among the slightly but not too technically inclined that any new major project should use a LAMP stack as its base.
People see Facebook, Amazon, and many others running PHP & MySQL on Linux at scale and know it works reliably. While it may not have the backing of Cisco or Oracle, it sits pretty close to the top of the 'no one ever got fired for choosing X' scale: if your investors, CEO, board, or auditors ask why you chose PHP & MySQL, you can point to every other major company using these building blocks reliably.
In summary, PHP & MySQL have become the modern equivalent of a "safe" choice to build your stack on. It's not necessarily a bad choice either: you get access to a large community of skilled people who can write PHP and SQL, and while everyone likes to hate on PHP, it isn't about to up and disappear any time in the next decade either (unlike COBOL).
I'd say the reliability the bank is looking for is much higher than what's acceptable for web companies (i.e. web companies are fine with eventual consistency, which is obviously unacceptable in a banking system outside of trivial non-core features).
I'd also suggest that it's not just scale; the kind of reliability Facebook needs is fundamentally different than what a bank needs. Broadly speaking, Facebook needs the site to keep working as well as possible even if some subservice fails, and a bank needs a subservice not to fail. I'm summarizing here and I know it; clearly neither of them is actually on the absolute extreme end, as Facebook needs authentication to work and a bank may not care if the interest rate display widget on their customer banking app fails to load a couple of times. But I'd still suggest there's enough difference between the requirements to be a fundamentally different domain.
Even in "the cloud" things differ between services. A social media app has very different reliability requirements than a backup cloud.
Well, actually, there are many subservices in a bank that can go down without major impact. The two major banks I use have weekly planned outages of features like old statement retrieval, person-to-person payments, ACH transfers, etc. Basically everything in the web interface could experience outages without any major crisis.
As long as ATM requests always work, nobody really seems to care.
"Broadly speaking, Facebook needs the site to keep working as well as possible even if some subservice fails, and a bank needs a subservice not to fail."
One of the reasons many of them stick with mainframes, AS/400's, and NonStop systems for backends. ;)
Why would eventual consistency be unacceptable in a banking system? In my experience people interact with social media on far shorter time scales than their banks.
When they post a new Instagram photo, they expect that their friends will see it basically instantaneously.
In comparison, when people use their debit card at CVS, they're not expecting anyone to log into their bank account seconds later and see the charge show up.
I would think correctness is more important than speed in a retail consumer bank.
Or do I misunderstand what you mean by eventual consistency?
If your data is only eventually consistent, then DB node A can still have your bank balance at $x for some time while it is already $0 on node B. Then, if some operation (say, a withdrawal) checks the balance with node A, you have a problem.
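To make the stale-read hazard concrete, here's a toy simulation (illustrative only, not any real replication protocol): node B applies a withdrawal immediately, node A only sees it after a replication delay, and a read against A in that window still returns the old balance.

```python
import time

class Node:
    """A toy replica that applies replicated updates after a delay."""
    def __init__(self, balance):
        self.balance = balance
        self.pending = []  # list of (apply_at, new_balance)

    def replicate(self, new_balance, delay):
        # The update arrives, but only becomes visible after `delay` seconds.
        self.pending.append((time.monotonic() + delay, new_balance))

    def read(self):
        now = time.monotonic()
        for entry in list(self.pending):
            apply_at, new_balance = entry
            if apply_at <= now:
                self.balance = new_balance
                self.pending.remove(entry)
        return self.balance

node_a = Node(balance=100)
node_b = Node(balance=100)

# A $100 withdrawal lands on node B; replication to A lags by 50ms.
node_b.balance = 0
node_a.replicate(0, delay=0.05)

stale = node_a.read()   # read before replication catches up: still 100
time.sleep(0.06)
fresh = node_a.read()   # after the delay, A converges to 0
```

A second withdrawal issued during that 50ms window and checked against node A would be approved against money that's already gone.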
Yes, this is true of eventually consistent systems. The question is a) what does "eventually" mean (replication takes seconds, minutes, or hours?), b) what time delta do you expect for most transaction requests, and c) what is the risk of being temporarily wrong?
Seems to me that a bank could answer these questions as well as any other business, and build a system that works within the answers.
You're actually somewhat right! ATMs are (sometimes) an example of eventual consistency. If an ATM is offline, it'll often allow you to make a withdrawal anyway and report back once it's on the network again. That could mean an overdraft for you. The caveat here is that these are often low-traffic ATMs on the periphery; ones in the city are usually making calls home to check balances.
However, the buck (no pun intended) has to stop somewhere. Overdraft limits have to be consistently applied. Even that is somewhat up in the air. Take this with a grain of salt as it's secondhand information, but my wife works in fraud prevention at a smaller credit union. She says that transactions are collected throughout the day and overdrafts are only applied at the end of the day, so that bills can drain your account beyond its capacity and then payroll can land, with overdraft fees applied only if you're still in the red afterwards. In some sense, that's "eventual consistency" on the scale of 24 hours.
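The end-of-day posting behavior described above can be sketched in a few lines (a hypothetical illustration of that credit union's policy, with a made-up fee amount): intraday dips below zero are ignored, and a fee is charged only if the closing balance is negative.

```python
def end_of_day_overdraft(opening_balance, transactions, fee=35):
    """Apply the whole day's transactions as a batch, then decide on a fee.

    Negative intraday balances never trigger a fee -- only the final
    (closing) balance matters, so bills can drain the account before
    payroll lands without penalizing the customer.
    """
    closing = opening_balance + sum(transactions)
    return closing - (fee if closing < 0 else 0)

# Bills hit first, then payroll lands: closing balance is positive, no fee.
print(end_of_day_overdraft(100, [-250, +500]))   # 350

# Payroll never lands: closing balance is negative, fee applies.
print(end_of_day_overdraft(100, [-250]))         # -185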
The most important thing in banks is that at the end of the day, the balance sheet, well, balances. And they limit their liability by preventing too much overdraft and applying daily limits to ATM withdrawals. I'd posit that general eventual consistency fits that pretty well, as long as "eventual" isn't "hours" for the most part.
A little more on eventual consistency in general, as I understand it: eventually consistent systems come in many forms. In a leader/follower setup (think MySQL w/ async replication), "important" calls are usually made to the leader in a consistent fashion, and changes are asynchronously replicated to the followers for general read fanout. There are a lot of different kinds of systems with different guarantees. In a dynamo-style system, writes/reads are usually done against a quorum of replicas (e.g. 2 of 3), and only if the reads from the two replicas disagree are the values on all three replicas "repaired" via last-write-wins. Facebook has a model they call causal consistency[1] which models causal relationships (e.g. B depends on A, therefore B isn't visible until A is also replicated).
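A minimal sketch of the dynamo-style quorum read with last-write-wins repair (toy code, not any real client library): each replica holds a (timestamp, value) pair, a read samples a quorum, and on disagreement the newest write wins and is pushed back to the replicas read.

```python
def quorum_read(replicas, quorum=2):
    """Read from `quorum` replicas; repair on disagreement via last-write-wins.

    Each replica is a (timestamp, value) tuple. Real systems use vector
    clocks or per-cell timestamps; plain tuples keep the sketch small.
    """
    sampled = replicas[:quorum]
    if len({value for _, value in sampled}) > 1:
        # Replicas disagree: the write with the newest timestamp wins,
        # and every replica is overwritten with it ("read repair").
        winner = max(sampled)            # tuples compare timestamp first
        for i in range(len(replicas)):
            replicas[i] = winner
        return winner[1]
    return sampled[0][1]

# Replica 0 is lagging with a stale value; the quorum read detects the
# disagreement, returns the newer value, and repairs the stale replica.
replicas = [(1, "old"), (2, "new"), (2, "new")]
print(quorum_read(replicas))   # "new"
print(replicas[0])             # (2, "new") -- repaired
```

Note the trade-off: last-write-wins is simple but silently drops the losing write, which is exactly why it's fine for a social feed and scary for a ledger.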
You can consider any system with a queue or log in it that doesn't provide some token to check for operation completion to be eventual. For example, imagine you fronted DB writes with Kafka. Lag between writing to Kafka and commit into the DB may only be 100ms, but that's "eventual". However, if you provided back a "FYI, your write is offset 1234 on partition 5", you could use that as a part of a read pipeline that checked that the DB writer was beyond offset 1234 on partition 5 before allowing the read to proceed. That'd be consistent.
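The offset-token idea above can be sketched like this (the function name and the `committed_offsets` lookup are hypothetical stand-ins, not a real Kafka client API): the writer hands back `(partition, offset)` for the queued write, and the read path blocks until the DB-side consumer has committed past that offset.

```python
import time

def wait_for_offset(committed_offsets, partition, offset,
                    timeout=5.0, poll=0.05):
    """Block until the DB writer's committed offset covers our write.

    `committed_offsets` stands in for however you'd observe the consumer's
    progress (e.g. a table the consumer updates as it commits batches).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if committed_offsets.get(partition, -1) >= offset:
            return True   # our write is in the DB; safe to read
        time.sleep(poll)
    raise TimeoutError("write not yet applied; retry or serve stale data")

# The writer reported our write landed at offset 1234 on partition 5,
# and the DB writer has already committed through offset 1240 there.
committed = {5: 1240}
print(wait_for_offset(committed, partition=5, offset=1234))   # True
```

Gating reads this way buys read-your-writes consistency at the cost of added read latency whenever the consumer lags.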
That part is surprisingly easy if you architect it right. The core abstraction most banks use is your "available balance" and the fact that they can reconcile on a longer time period than seconds.
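A minimal sketch of that "available balance" abstraction (field and method names are illustrative, not any bank's actual schema): authorizations place holds against available funds immediately, while the ledger balance itself is only moved when transactions settle in batch.

```python
class Account:
    """Toy model: instant holds against a slowly reconciled ledger."""

    def __init__(self, ledger_balance):
        self.ledger = ledger_balance   # reconciled in batch, later
        self.holds = []                # pending authorization amounts

    @property
    def available(self):
        # What the customer can actually spend right now.
        return self.ledger - sum(self.holds)

    def authorize(self, amount):
        """Approve against *available* funds; settlement happens later."""
        if amount > self.available:
            return False
        self.holds.append(amount)
        return True

    def settle(self, amount):
        """Batch posting: convert a hold into a real ledger movement."""
        self.holds.remove(amount)
        self.ledger -= amount

acct = Account(100)
print(acct.authorize(80))   # True: available drops to 20
print(acct.authorize(50))   # False: blocked, though the ledger still says 100
acct.settle(80)
print(acct.ledger)          # 20
```

The hold check gives you an immediate, safe answer at authorization time, so the ledger is free to reconcile on a much longer cycle, which is the architecture point above.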