For everyone complaining about "how could they not have failover" let me ask you this: Would you take a job with Wells Fargo to fix their infrastructure?
And if you did take such a job, how long do you think it would take you to get the budget and approvals from all the auditors necessary to fix everything?
How much would they have to pay you to take that job, knowing how frustrating it would be to get anything done?
And now you see why banks have such terrible IT.
(One of my mentors actually works at BofA, and says he only does it because he gets to work 6 hours a day, gets a VP title and a ton of money, but nothing ever gets done)
This gives a good idea of a very real phenomenon that plays a huge but largely hidden role in the failure, success, and form that technology takes at companies.
It's not just infrastructure. Quality, innovation, speed, efficiency, etc. To whatever degree you believe the old 10x programmer meme, believe this: change that constant of 10 to some other value for many employees at once across a company, and the impact is staggering.
Good people don't think they are too good for these types of companies. They think the companies are too constraining and place a limit on their ability to realize dreams or their potential to perform.
They prefer not to work with less passionate peers. Not because such people are lesser human beings to be avoided; rather, passionate peers give them another edge. Constant, motivating cross-reinforcement during discussion doesn't just inspire, it creates action.
Finally, most of these companies don't have tech as their core business. That correlates very directly with the fact that the most senior and influential people in the company will care less about you, be less willing to partner on ideas, etc. It's not personal. Banks think bank stuff is most important and anything else is less important, even if it's a critical operational component of the business.
If you're the best engineer at a company whose bottom line is not direct sales of engineered products, you are a middle-tier or bottom-tier engineer on the open market.
You're the best engineer willing to work in a black box, a niche, in isolation from your peers with constant pushback.
Well, I bet there are continents of people who would love a corporate IT job at WF. You make it seem like it's equivalent to mining coal in the 1920s.
It's not like people blame the staff working on wiring the network hardware here. For any company in that position I'd assume they realize their dependence on that part of the infrastructure for their core business. I don't really see the relevance of how long compliance processes take, that's part of the industry and WF didn't discover the internet last week. Salaries in finance reflect that frustration, who cares.
If the answer to that challenge was one of "too hard", "it works for now" or "too expensive" that's still a strategic failure that impacts their core business now.
It's because management doesn't want to touch the topic. No CIO wants to risk breaking something so they all pray to God nothing breaks on their watch and they can get their retirement package or move on to the next job and it's someone else's problem.
This is why sometimes absurd amounts of money are paid to keep the old system going as it is. It's "safer" than to move to a new one with the associated risks and teething issues.
You can sympathize, mock or rationalize through the corporate-political logic that inevitably leads here... the interesting points remain the same: most tech operations suck.
This is not really related to costs. It's related to culture. Costs and quality of software, including infrastructure, range by orders of magnitude per theoretical quantum of software.
> And if you did take such a job, how long do you think it would take you to get the budget and approvals from all the auditors necessary to fix everything?
The only people who think auditors are bad are the ones trying to pull something an auditor would catch.
I don't think that's a reasonable conclusion. I had to work on software in a heavily audited (finance) environment and there were a lot of bureaucratic obstacles related to that, but the code was fine and unaffected by the whole process. I would have been happier not having to think about audit requirements and just focusing on making working software.
Incredible. They had all their mission-critical infrastructure in a single data center. How many billions of dollars do they make per year? And they can't afford even a tiny bit of redundancy?
If I were a customer, I'd use this as a sign that this company is not technically competent enough to manage my money.
“Wells Fargo customers entrusted their bank with their livelihood, their dreams, and their savings for the future,” said Attorney General Becerra. “Instead of safeguarding its customers, Wells Fargo exploited them, signing them up for products - from bank accounts to insurance - that they never wanted. This is an incredible breach of trust that threatens not only the customers who depended on Wells Fargo, but confidence in our banking system. As our investigation found, Wells Fargo’s conduct was unlawful and disgraceful.”
Unfortunately they're the holders of my mortgage (it was sold twice and they ended up with it). So I don't even have a choice of being their customer for the next ~25 years.
Real talk... they own my mortgage, I can't just 'forget about it' if they forget about it. I want to refinance it. If they lose all data on my debt like that ending to Fight Club... is the debt just... gone?
Nope. Liens are recorded with the county/state as part of property law, as well as any other encumbrances. So that would actually probably make your situation worse because the lien holder would have no clue about the debt and claim on the property and could be hell for you to remove even if you kept paying and eventually paid everything off. Could potentially be a nightmarish scenario and an exercise in pure bureaucracy. Probably some Brazil or Catch-22 style shit.
This is why I like the Canadian system of mortgages.
I have a 30 year mortgage, but every 5 years I have to "renew" it. At that time, I have to renegotiate the rate for the next 5 years. As part of this negotiation, I can just switch banks if I want. Or to a private lender. Or to anyone really.
It seems weird to me that you are beholden to an entity you've never signed any contract with.
This is not always a good thing. Inability to lock in a reasonable fixed rate for a 15, 20 or 25 year term in Canada meant that in the early 1980s, for example, when benchmark interest rates were 15%+, people whose 5 year terms came up for renewal had no choice but to renew at 17% interest rates. Ask an older person from AB who lived through the first oil economy bust in Edmonton or Calgary.
Is there something to prevent the rate from skyrocketing when you renew the mortgage? Doesn't this have all the issues people occasionally have with ARMs?
That assumes there is some sane upper bound on rates, but they can fluctuate quite a lot depending on how the economy is doing (7% doesn't seem like too much of a stretch). I agree that you should not buy a house near your carrying capacity, but I also posit that if you start out 7% lower, the jump will hurt your standard of living more than you expect.
There isn't really an upper bound, but the prime rate doesn't move much. As another comment said, competition between different lenders keeps the rates in check.
Typically the Bank of Canada sets their prime rate, some time later the big banks set their own prime rates based on that, then the mortgage rates are set based on that. The Bank of Canada prime rate only moves by .25% or .5% at a time.
If you have a variable rate mortgage and the rates change, they will be immediately reflected in your mortgage. This isn't as bad as it sounds - your payment will stay the same, the rate change just affects how much goes to interest vs principal. The mortgage documents will include the 'trigger rate', which is how high interest rates need to get before your payments no longer cover the interest. This is the point where you're in trouble.
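That arithmetic is simple enough to sketch (a toy illustration; the numbers and variable names are made up, not from any bank's documents):

```python
# Toy illustration of a variable-rate mortgage "trigger rate": the rate
# at which a fixed monthly payment covers only interest, so nothing goes
# to principal. All numbers here are hypothetical.

def monthly_interest(balance: float, annual_rate: float) -> float:
    """Interest accrued in one month at the given annual rate."""
    return balance * annual_rate / 12

def trigger_rate(balance: float, payment: float) -> float:
    """Annual rate at which the payment no longer covers the interest."""
    return payment * 12 / balance

balance = 400_000.00  # outstanding principal
payment = 1_900.00    # fixed monthly payment on a variable-rate mortgage

print(f"trigger rate: {trigger_rate(balance, payment):.2%}")  # 5.70%
for rate in (0.03, 0.05, 0.06):
    interest = monthly_interest(balance, rate)
    print(f"at {rate:.0%}: interest ${interest:,.2f}, "
          f"principal ${payment - interest:,.2f}")
# At 6% the principal portion goes negative: the trigger rate was hit.
```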
For some variable rate loans, like an auto loan, an increasing rate just means that the term of the loan gets longer or shorter.
As always, ask questions. The bank, in Canada at least, doesn't really want you to default on the loan. Ask about the trigger rate, ask what happens if it gets hit, ask what happens if rates go up but don't hit the trigger rate, ask about lump sum payments.
The UK typically uses the same system of fixed rates for (usually) 2-10 years.
Story time: several years ago I took out a 10 year fixed rate of 2.99%. My thinking was that since the base rate couldn't really go down any further, I was locking in a good rate.
As it's turned out, so far I could have had a series of 2 year fixed at around 2%, so this was potentially the wrong move, although the maximum downside was limited.
My parents on the other hand took out a 12.99% fix in the early 90's, which turned out to be incredibly unlucky given the unprecedented low inflation of the nineties and noughties.
Assuming Canada is like the UK, competition in the mortgage market means a bank hiking the rates will lose your business. Of course you can be unlucky if interest rates are unusually high at the time your fixed rate deal is up for renewal.
Can't. It's a fixed 30 year mortgage at a 3.375% rate. Refinancing it will increase that by at least a point. Plus, I originally did get the loan through a local credit union, who then sold it to a consolidator who sold it to a big national bank (which happened to be Wells Fargo). Local credit unions generally don't hold onto mortgages. They don't have the economies of scale to service them efficiently.
My 'local' credit union does. They don't sell mortgages/loans to anyone and keep them 100% in house. It's why even after moving to Germany I keep that bank for US assets. They just don't seem as criminally motivated as other banks.
It doesn't work that way. Mortgages get resold between banks all the time. You can walk into your local bank and get a house loan, and five months from now you'll get a business envelope from Wells Fargo informing you that you should send your checks to them now.
I would expect if WF decided your first loan met their purchasing criteria, your refinance would get the same treatment.
I moved loans to First Tech Federal Credit Union, which used to (and probably still does) state in the loan contract that they will service the loan for its life. Very happy with them so far.
That's good language to look for. I didn't check closely enough and my mortgage documents stated that the loan issuer (my bank) would service the loan, but without any guarantee of how long. They sold the loan shortly after issuing the mortgage.
As edoceo says below, I recommend refinancing with a credit union of your choice, preferably one that will pledge in writing to service the loan for its life.
I just started a new job, so I'm just going to set up an account at my local CU and have my direct deposits go there. Guess I'll have to close my WF accounts once they get this mess sorted out!
I mean, truthfully, when do you get to test your redundancy against a true disaster? It was a mess. WF is 20 companies rolled into one, so the fact that the disparate systems from 10 different banks work at all is kind of a miracle.
I can't recall who it was, but there was a big outage due to a data center running a monthly generator test and having a major customer go dark. The data center had 7 generators for 4 server rooms: 1 per room, a backup for each pair of rooms, and a backup for the backups. The primary and then both backups failed, so out went the lights.
You're pretty much damned if you do and damned if you don't. If you touch things that are working you could break them. If you don't touch things you never know what'll happen and you get fewer opportunities to learn. Move your servers around geographically and you might improve the odds that anything is working by reducing the odds that everything is working.
I don't think we're quite to a place yet where having servers down can be characterized as a non-event. Even if the customer can't see a behavioral difference, business units still tend to get quite anxious, and sometimes their theatrics put the whole process in jeopardy (not unlike trying to rescue a drowning man). It just hasn't been normalized yet.
Look at Netflix with chaos monkey and simian army. Netflix routinely does catastrophic destructive testing on their production systems, sometimes evacuating an AWS region entirely. To them, a server going down is a non-event, because they designed their systems with the premise that servers go down.
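The core idea fits in a few lines. A minimal sketch (this is a toy, not Netflix's actual Chaos Monkey; the Instance class and terminate call are stand-ins, not a real cloud API):

```python
# Toy chaos-monkey sketch: on a schedule, randomly terminate one healthy
# instance so teams are forced to build services that tolerate the loss.
import random

class Instance:
    """Stand-in for a cloud VM/container; not a real provider API."""
    def __init__(self, name: str):
        self.name = name
        self.running = True

    def terminate(self) -> None:
        self.running = False
        print(f"chaos: terminated {self.name}")

def unleash(fleet: list[Instance], probability: float = 0.5) -> None:
    """With some probability, kill one random running instance."""
    victims = [i for i in fleet if i.running]
    if victims and random.random() < probability:
        random.choice(victims).terminate()

fleet = [Instance(f"api-{n}") for n in range(5)]
unleash(fleet)  # run periodically; the service must survive every run
```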
Netflix is a great example of how to create robust systems. Keep in mind though, they have a very different risk profile than a large bank. No one is going to lose their life's savings if Netflix's entire infrastructure crumbles. This might happen in a bank with a single server outage. Don't get me wrong, if I start a bank today, I use the chaos monkey strategy. But if I take over a bank infrastructure, with COBOL still representing a huge percentage of mission critical code that everyone is scared to touch... no chaos monkey. I might deliberately turn off a server for an hour to see what chaos ensues, but it'll be after 3 months of analysis, longer if I begin to suspect the system does more than anyone can remember.
There is a different level of criticality for "my post didn't go through, hit refresh" and "my transaction didn't go through - the restaurant said my card isn't working."
Would you honestly want to go to a bank and say "if we unplug this, we can find out what fails."
Facebook handles money too, though. Also I think the parent was making the point that Facebook builds software with resiliency in mind so when a failure does happen, the software deals with it gracefully.
They can have (and did have, at least the long time ago when I still used it) weird cache persistency errors and "please just refresh to fix that" type of workflows if you have bad luck. That sort of behaviour is simply not acceptable for a bank.
There is this company that had a grid power issue: batteries kicked in, batteries started running low, diesel generator showtime... the diesel generator doesn't kick in.
People scrambled to shut down before the batteries ran out.
So the ITIL stuff happens and it's time to test it. Guess what? The diesel again doesn't kick in (don't recall the consequences though).
Bloomberg does it once a year. That said, I have yet to encounter a company more obsessed with business continuity. I don't doubt that their failover systems and testing of them are well beyond the typical.
I think Bloomberg can stand to be down for 8 hours to simulate a disaster. Banks with legacy systems and people constantly dependent on them to conduct business can't risk an actual incident happening because they were testing what would happen if an incident happened.
Netflix designed their stuff from the ground up to fail over. Large monolith corporations who've inherited systems from other companies they've bought or merged with have challenges you won't see many places that have benefited from the 30 years of lessons that were taught at these companies.
> I think Bloomberg can stand to be down for 8 hours to simulate a disaster. Banks with legacy systems and people constantly dependent on them to conduct business can't risk an actual incident happening because they were testing what would happen if an incident happened.
No, it can't. Any loss of customer-facing functionality is a critical outage ("World Problem" in company terminology). There are a relatively small number of customers, but the terminal is critical to the operations of those who buy it. The terminal going down for eight hours would be a world-wide headline in the financial press.
A Tier 1 test that simulates loss of a datacenter takes a cluster in one DC virtually offline. This puts an entire subset of services offline in that DC entirely. The test is coordinated with the teams who own the services to ensure their services fail over correctly. Any service disruption during the failover is a test failure. If it passes, the customers don't even know it happened. The goal is to be able to lose an entire DC and have the terminal customers not realize it until they hear about it on the news.
Chaos engineering and AWS weren't real things when they started building the company. And the system they have now doesn't much resemble what it once was.
Truth of the matter is they invested more in their infrastructure, but that's because their business plan required them to grow on the back of technological advances. Banks, it seems, do not. Or maybe they do, and some of these start-up banks will usurp them.
Standard good practice should be to have redundancy in place and test it at a regular interval. It should be part of periodic maintenance - fail over to the backup so updates/upgrades can be applied to production, and fail back to production once done.
But I'm guessing Wells Fargo just doesn't have a reason to care.
Every major financial regulator has business continuity and disaster recovery requirements but the standards are woefully outdated. A plan that gets the bank back to full functionality within 24 hours would be acceptable to most regulators and even considered speedy to some.
Tangentially related, I highly recommend the movie "Out of the Clear Blue Sky." Cantor Fitzgerald was a bond trading firm at the top of one of the twin towers and lost every employee who was in the office on 9/11. Incredibly, despite losing the majority of their employees and despite losing almost all of their trading infrastructure, they managed to resume operations in time for the bond market's reopening 48 hours later.
What kind of redundancy are we talking about here?
You can't really roll back say 10 minutes of transactions, so are you maintaining 2 parallel systems? How do you keep them perfectly in sync?
This isn't my area of expertise by a long shot, but it occurs to me this is probably hard, especially when your codebase started in the 60s, and has been accreting ever since.
You have a primary and a backup with a synchronous commit protocol. When a commit request is made on the primary, the primary writes to its transaction log and the backup’s log. If the backup does not acknowledge, the commit fails.
The backup doesn’t need to be in the same exact state as the primary (it’s not meant to service requests), it just needs to have a persistent log of what changes were applied so that it can roll forward when needed.
Most relational DBs do something like this for their DR product offering. Oracle has Active Data Guard. DB2 has HADR.
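A minimal sketch of that synchronous-commit scheme, with in-memory stand-ins for the logs (pure illustration; products like Data Guard and HADR do this at the redo-log level with far more machinery):

```python
# Sketch of synchronous log shipping: the primary acknowledges a commit
# only after the backup durably appends the same log record. The classes
# are illustrative stand-ins, not any vendor's API.

class Backup:
    def __init__(self):
        self.log: list[str] = []

    def append(self, record: str) -> bool:
        self.log.append(record)  # a real backup would fsync before acking
        return True              # acknowledgement to the primary

class Primary:
    def __init__(self, backup: Backup):
        self.log: list[str] = []
        self.backup = backup

    def commit(self, record: str) -> None:
        self.log.append(record)             # write-ahead log on the primary
        if not self.backup.append(record):  # block until the backup acks
            raise RuntimeError("commit failed: no acknowledgement from backup")

primary = Primary(Backup())
primary.commit("txn 42: debit A $100.00, credit B $100.00")
# On failover the backup replays its log to roll forward; no acknowledged
# commit can be lost, because every ack implied a persisted log record.
```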
Okay, so say you switch banks. How do you have ANY idea if your new bank is better or worse? Are you going to ask customer support? They barely know how to access your account let alone the technical layout of their data centers.
I know that both Switzerland and Singapore have at least "soft" guidelines (no direct retaliation if you don't adhere to them, but frown upon if you don't) about the HW (including buildings & workplaces) & SW and personnel setup required to remediate a potential catastrophic failure of the data centers and/or key-employee's office building.
I think that in Switzerland all major banks test their disaster-readiness (by switching everything to their secondary datacenters & working locations) of all critical applications/software-layers and employees at least once every 3 years - reaction/recovery times depend on the criticality of the service provided by the person/application.
Everyone assumes that the banking industry and the financial industry in general somehow have a magic touch when it comes to technology. All those digital balance sheets, they're pristine. All those networks, they're unpenetrated. All those databases, they're filled with the purest divine truth.
And if anyone ever figures out that isn't the way it is, and that the numbers are not representative of anything of substance? If nations refuse to honor the claimed 'transfers' done through these rickety electronic systems? It would make for an interesting few days.
I was shocked to learn the Fortune-500 bank I was working for had only a single data centre. Should they ever go down, much of the world would be disrupted. I think this is not unique.
Wells is weird though. When I started banking with them in 2009, they were ahead of the curve in online infrastructure - it was much easier to access them online than any of the banks I was used to back in Massachusetts. They also had a reputation for being both honest and conservative - they were the only AAA-rated bank in 2007, they were #1 in green rankings, and Warren Buffett had invested in them because of their sound balance sheet.
And then starting around 2010 but rapidly accelerating around 2014, everything about them went to shit.
The best explanation I can think of is that John Stumpf is a slash & burn sociopath, juicing the numbers so he can get his 473x-the-median-worker paycheck while ruining the company. He wouldn't be alone in the financial world, but it's a shame that a 150+ year old institution can so rapidly go down the toilet.
In 2009/2010 my wife accidentally made a charge to PayPal that was linked with our Wells Fargo checking account. Since we weren’t planning on that charge happening from that account, there was an NSF fee and Wells rejected the charge. Understandable, right?
Except then PayPal made a few more attempts for god knows why and each time Wells Fargo kicked an NSF fee our way.
Now, PayPal shouldn’t have repeatedly attempted a rejected charge. But, Wells Fargo shouldn’t have allowed those attempts. They just couldn’t help themselves to that $35 NSF fee though.
We fought it to no avail. With all the NSF fees and interest (and fees they added to fees while we fought it), what started as a $300 transaction ultimately cost us over $1200.
Wells Fargo is now and was in 2009/2010 a criminal enterprise.
Wells was one of the earlier companies to have the CTO report directly to the CEO.
All the crazy sales numbers and bogus account shenanigans were going on back in 2003/2004 when I worked there. I ratted out more than one professional banker to branch managers and up over that crap. A fun one was the home equity lines people would open without customer knowledge and link up to overdraft protection. The customer would never owe, nor know, anything until one day an overdraft hit their equity line, and then they got notified of late payments.. I don't miss working for a Bank.
I have banked with WF since about 2007 for my student and auto loans. Their online portal has always been worse than whatever my local credit union had.
The WF business is clearly set up to confuse and exploit consumers. My credit union websites have always helped me do what I want and need with my money. This includes the tiny local credit union in Idaho.
> And they can't afford even a tiny bit of redundancy?
Big companies tend to defer risk. Managers and project leads want to start new projects rather than upgrade existing infrastructure. Combine these forces and sometimes you get a catastrophe.
Uh, that may well be true, but their behavior in recent years says stagecoach... wild, wild west... robbers, and thus their logo is now a punchline/comedy routine.
The next worst thing for electronics after fire - water! Or are there non liquid fire suppression systems such as pulling all the oxygen out of a room (if there aren't any people in it)?
> are there non liquid fire suppression systems such as pulling all the oxygen out of a room
Yes. One place I worked (not a tech company, but with tons of electronics), when the fire alarms went off we had xx seconds (I don't remember the number) to get out of the building before something called Inergen was vented into the room to somehow suck all of the oxygen out, and if we were still inside we'd be dead.
It must be pretty serious stuff, because we'd have evacuation drills twice a year.
The resultant 12% oxygen is 60% of normal atmospheric air. It's still breathable, if you're an average healthy person with healthy cardiac and pulmonary function, sitting at rest - but it's not healthy, it's enough that you're going to start seeing systemic responses. The reduction in partial pressure of oxygen being sucked into the lungs will also cause an increase in the partial pressure of carbon dioxide and inert gases.
I think the concern likely comes from:
- Folks that are of poor cardiac function are going to be evacuating, meaning increased cardiac demand under stress, while being somewhat oxygen-starved. This could tip some folks into an acute episode that otherwise wouldn't.
- Folks that are of poor oxygen function, who are borderline hypoxemic to begin with. Think folks with chronic obstructive pulmonary disorder: about 5% of your employees aged 55-65 will have it.
You won't suddenly kill a building full of people. I'm guessing the evacuation rush is to make sure they're not liable for unnecessarily sending a couple to the hospital.
Even if you were healthy you’d be in bad shape. I know from flying that your mind goes quickly when oxygen levels go down. You feel drunk and get a bit euphoric and refuse to acknowledge you’re in any trouble at all. Low oxygen, especially if the environment is dangerous or requires a clear mind and/or quick action, is a killer.
Possibly more importantly, the "venting" is not what you'd call a gentle breeze. "Explosive atmosphere replacement" is a better description. Solid things will go airborne at speed.
Huh, so I was curious and looked up demonstration videos. Some of them looked similar to an indoor hurricane and some of them looked like a gentle breeze. The classroom-style (small room with a few Inergen bottles) demonstrations had the biggest effect and the actual datacenter tests just looked like a small breeze. Maybe in an actual datacenter the Inergen:open space ratio is substantially smaller than demonstrations in a small room?
Also: Folks that are already suffering from smoke inhalation due to a fire. Unlikely since the halon systems are pretty good at reacting, but still possible.
>The resultant 12% oxygen is 60% of normal atmospheric air.
To put that in perspective, that's like being sent to the top of Pikes Peak (4.3km / 14,000') in seconds. Pilots flying that high in unpressurized aircraft are required to have oxygen masks. Most people will develop altitude sickness when rapidly subjected to that.
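The equivalence checks out with a back-of-envelope partial-pressure calculation (pressure figures here are rounded approximations):

```python
# Back-of-envelope check: 12% O2 at sea level vs. 21% O2 at ~14,000 ft.
# Barometric pressure at that altitude is roughly 60% of sea level.
SEA_LEVEL_KPA = 101.3
ALTITUDE_KPA = 0.60 * SEA_LEVEL_KPA  # ~60.8 kPa near 14,000 ft / 4.3 km

p_o2_suppressed = 0.12 * SEA_LEVEL_KPA  # after inert-gas discharge
p_o2_altitude = 0.21 * ALTITUDE_KPA     # ordinary air at altitude

print(f"12% O2 at sea level: {p_o2_suppressed:.1f} kPa")  # ~12.2 kPa
print(f"21% O2 at 14,000 ft: {p_o2_altitude:.1f} kPa")    # ~12.8 kPa
# Nearly the same oxygen partial pressure, hence the comparison.
```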
When you consider the potential for stress or panic in this kind of scenario, hypoxia emerges as a very real threat even for the young and healthy.
It's definitely not good to breathe in though, so you can still see why, for liability reasons, the company would want all employees to evacuate if it's going to be deployed, and leave it to firefighters with SCBA equipment.
Still a good reason for everyone to GTFO. Not worth breaking your neck over, but it's not good to put humans under the stress of a low-oxygen environment when they could be evacuated.
That's an important caveat, given how the danger was apparently greatly exaggerated.
If you've been told that the system will
> somehow suck all of the oxygen out, and if we were still inside we'd be dead
then what are you going to do in a real fire situation, when you're not in imminent danger but your escape route is blocked by flames? Better to brave the fire (or jump out a window) than submit to death by suffocation...
Yes, water isn't typically used in datacenters; halon gas can be used, for example.
But the noise of the nozzles releasing the gas can actually damage disk drives.
> We think water is superior to using the firefighting chemical compound Halon, because water is less damaging to technology and Halon can destroy circuit cards.
Did not know Halon could destroy circuit cards. Apparently it also damages the ozone layer.
> VAXen, my children, just don't belong some places.
Suffice to say... yes. There are fire suppression systems that will pull oxygen out of the room. People are advised to leave the room before the fire suppression system takes effect.
Credit unions and small banks outsource almost all of their core systems to companies like Fiserv. Literally thousands of banks and CUs all ride their infrastructure.
I ran a website for a local CU for about 10 years. It was pretty fun doing things for their website that made their Fiserv web tools look like they were from 1970.
There was no interaction with user or account data on my end, just CMS-building, letting them go crazy with coupon promotions or pet shelter PSA's or warnings about ATMs being down, etc. They'd pitch me on a new project, I'd send them a timeline, and we'd add new features or tools to the website. I even hired a local college student to create some super-basic but useful financial tools for the site.
I loved how small-town-feel the whole thing was. At the beginning of the relationship, they gave me a list of broad requirements like "SAS-70", I found a DC to match, I sent them a contract, and I couldn't believe I actually had a banking institution as a client, 4 years out of school and a brand-new business owner.
Eventually they merged with another CU and went away, but apart from the very occasional "server down" notifications while I was on vacation, those are some really fun memories.
I'm overwhelmed by the number of credit union options. Are there significant differences between the options? Anything in particular I should look out for? In general, I'm just looking for something to do day to day banking and potentially get a mortgage in the near term.
a nice problem to have... =) nearly all credit unions have modern features like credit cards (with apple/android pay support), remote check deposit, and are part of a national ATM network, so you don't have to worry about needing (expensive) branches everywhere to do your banking.
when i chose a credit union, i optimized for two things:
- is there a branch close to work or home? this is mainly to develop a personal relationship in case i ever needed a loan. i'd deposit checks and withdraw cash in person occasionally.
- good rates on interest bearing accounts (at least 1%, but often >2% apr)
if you work for a larger institution, disney for example, joining their credit union is convenient.
Is there a bank in the US similar to Monzo or Revolut? I really miss these having moved from the UK to the US and I haven't been able to find a good replacement.
i don't know enough about those banks to compare, but https://empower.me is the best of the new breed of online banks i've found - 2% apy for checking (with some reasonable conditions)!
Regional bank checking in. We've been told to reroute certain workloads through other means until 6am central Friday. Regardless of that, the story coming out of this over the next few days is really going to help Business Continuity departments at other institutions plan for this scenario. My teams are running mock scenarios this afternoon based on the auto power shutdown causing a data center to be unable to fail over. Might end up being something else entirely, but it's still a good scenario to investigate.
I get paid by direct deposit into my WF acct between midnight tonight and 6am tomorrow. I suppose I can eventually rely on my employer keeping records and replaying everything.
If ACH areas are involved, hopefully they already had those queued a day early per normal routines and you'll be fine. There are exceptions allowing companies to provide ACH files late. That's the exception and not the rule.
For those who have worked in bank IT, I have some questions.
So these days there's very little physical money. Most of my "wealth" is just entries in various digital ledgers. My bank says I have $XXX and my brokerage says I have $YYY and my retirement account says I have $ZZZ.
Let's go with the bank account case. Is it possible that a catastrophic accident or attack could wipe my balance down to $0 with no way to recover? What if a data center was nuked? What if two data centers were nuked?
How much redundancy is in the system? Are there third-party agencies that track private bank ledgers? How hard is it to take them out too?
Ever since I read "The End of Alchemy" (a great book, btw) this thought has haunted me.
Not very possible. There are typically extensive business continuity plans with multiple geolocation backups. The auditors should be making sure every financial is doing their due diligence in this area. I can tell you that at the credit union I work at, we test our disaster recovery plans frequently, including remote working, backup office space, alternate data centers and cold storage data backups. We also structure our operations to run live from multiple locations. We might be better than most, but in a real disaster most financials would be able to recover financial information with a minimal amount of data loss.
If you are talking about actual nuking then the story might be different. Not all backups can survive an EMP. I expect the biggest problem would be getting people to care about bringing the system back up. I think food and shelter would be of primary concern.
I’d love someone who knows the details to explain how that claim process would work, because while the FDIC would insure in the event a bank goes under, in that scenario there are usually records.
What would happen if there were no records? Surely there’d be lots of people making significant deposits in between snapshots. If you’re “paperless” I’m not even sure how you’d reconstruct your balance.
Would certainly be time consuming, and in this case time would certainly be money.
You presume the live system is the only record - it's not. There are backups of those, then historical records to reconstruct what might have been there.
Bear in mind the live records are only one part of what a bank keeps.
I can't even imagine the hellish process that you would have to go through to prove that you actually owned the money you did to receive the FDIC or SIPC payout.
I'm genuinely curious what the process would look like, if it even applies. Where would you even start if the bank just suddenly says "you have $0"?
Yeah, my bank is the same. I could download the statements; not sure who really does that except the extremely paranoid.
Civil claims are based on a preponderance of the evidence. If I go into court with a piece of paper from the bank saying that I have $5,000, and the bank has nothing to say that I do or don't have any money there, I have more evidence on my side than they do on theirs.
If, because of compliance, evil corp stores said buzzword stack exclusively on that large rack that's currently on fire, nothing is gained. Not every problem is a nail.
Neither, I was trying to suggest that offering blockchain and related technologies as a solution to every problem in existence is kind of an annoying trend. I don't really see the outage discussed here as a technical problem. Quite literally any failover mechanism would have been preferable here.
Please be advised that the Wells Fargo Gateway is currently unavailable.
We're experiencing system issues due to a power shutdown at one of our facilities, initiated after smoke was detected following routine maintenance. We're working to restore services as soon as possible.
We apologize for the inconvenience as we continue to work on a resolution.
We will update you no later than 3:00 pm, Eastern Time.
Wow, functionality is still significantly degraded over 24 hours later. I just tried to log in to check on an account balance, and the site was extremely slow. It showed my balance, but failed every time to load the transaction history with "Error in external system" or something like that. The sign in page still shows: "Alert: Some customers may be experiencing issues accessing online and mobile banking. We apologize for any inconvenience."
Recall Stuxnet; it's possible this was an attack, some sort of malicious mod to firmware, but... Hanlon's razor.
from the reddit:
"throwawayfordays75 1399 points 8 hours ago*2
Throwaway since I have first hand knowledge. Fire suppression went off in one of their main Data Centres from some utility work this morning. No power to any of the network or compute equipment and some failovers did not work as expected. "
At this point I'm wondering what "utility work" was happening.
"everything minus core network gear was manually being unplugged from any PDUs to help the control the initial power-on."
Can't we... Why don't we have rack hardware that can handle this situation? I thought some HDD RAID solutions had circuitry to keep them from browning themselves out while spinning up the disks. I guess I'm surprised this isn't a solved problem at the rack level now.
Or have we been so focused on never cold booting a rack of servers that we haven't spent any effort on foolproofing of cold booting a rack of servers?
[Edit: answering my own question] apparently these exist and are called Managed PDUs. Can we deduce WF doesn't have them?
Some servers "stage" power on to disks. as spinning up disks is often the largest source of inrush current for e.g. a 10 disk server might only apply power to bays 2 at a time with a delay between each, this doesn't solve the inrush current problem of switching on a rack full or servers together (where managed PDUs come in)
It would seem to be a possibility.
I'm also thinking of a 10-finger power-up sequence, as in:
insert power cord 1 into power socket 1, wait till audible beep, insert power cord 2 into power socket 2, wait till audible beep, etc. :-D
Speaking generally: Stripe provides an abstraction layer, such that there may not be a 1:1 mapping between features of our platform that you use and particular providers. We also have technical and operational teams, such that when an underlying financial rail has a problem we deal with it so customers do not have to.
Knowing how most of the banking system works, they are probably just doing all the accounting on their end and waiting to reconcile with Wells Fargo when they come back online.
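If so, that's the classic queue-and-reconcile pattern. A toy sketch of it (generic illustration only; no knowledge of Stripe's internals is implied):

```python
# Toy queue-and-reconcile sketch: record charges locally while the partner
# bank is unreachable, then settle the queue once it comes back online.
from dataclasses import dataclass, field

@dataclass
class PendingLedger:
    entries: list[tuple[str, int]] = field(default_factory=list)

    def record(self, txn_id: str, amount_cents: int) -> None:
        """Accept the charge immediately; settlement happens later."""
        self.entries.append((txn_id, amount_cents))

    def reconcile(self, settle) -> None:
        """Bank is back: push every queued entry, then clear the queue."""
        for txn_id, amount_cents in self.entries:
            settle(txn_id, amount_cents)
        self.entries.clear()

ledger = PendingLedger()
ledger.record("ch_001", 2_500)  # bank offline: charge recorded locally
ledger.record("ch_002", 990)
ledger.reconcile(lambda t, a: print(f"settled {t}: ${a / 100:.2f}"))
```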
I signed up for Stripe in 2011 or 2012. About six months after I signed up an account rep from Wells Fargo called me asking about my merchant account. Stripe never made it quite clear what was going on but apparently in the early days they opened accounts at Wells Fargo on your behalf without telling you.
Speaking as a law aficionado, that is one great Terms of Service document! Kudos to whichever legal team wrote it! I'm not saying that it completely anticipates all potential customer-company interactions that could occur (no legal document could), but it seems to go very far, and seems to show a good understanding of the many potential issues that can arise. My future company might use this document (and others) for inspiration on how to craft its own Terms of Service documents. Thanks for the link!
Still do:
"The Payment Method Acquirer for Visa and Mastercard Transactions is Wells Fargo Bank, N.A, and you may not submit Visa and Mastercard Charges without first agreeing to the Wells Fargo Financial Services Terms."
Zero problems with Stripe, still processing payments without any issue. Just took a look at their status page too, in case something got posted, but no such notice exists.
"Wells Fargo as their bank" is a weird statement. Wells Fargo is a HUGE company with various banking services. This isn't as simple as "WF data center had an outage so WF as a company is completely down"...that's not how it works.
I got an email today from WF titled “New year. New look. Continued commitment.” Maybe they are flipping the switch on their “New year. New look...”; which sounds like a new redesign and they are having downtime while switching over.
Well, for a second I thought they had closed business operations completely, keeping in mind how embroiled they were in graft cases in the 2008 recession.
Wells Fargo is nothing more than a federally regulated crime organization, supported by the federal reserve and the American tax payer, but I digress.
This thread has primarily focused on redundancy and software architecture. That could be the case, but there is no better way to fight a proxy war than via hacking of banks. It's a domino effect... pull your money out of the bank, the bank calls loans, then there is a credit crunch, investors lose confidence in the stock, value is lost, etc... the enemy has secretly inflicted civilian problems across the economy. Disrupting the banks and the flow of money can lead to a revolution... or take your mind off of America's enemy, or divert energy elsewhere.
Bullish on bitcoin b/c of its highly resilient characteristics. Might not process as many transactions per second, and it might pollute more than the average bank, but you sure as hell don't need to worry about a natural disaster or freak incident single-handedly taking out the entire network.
Wells Fargo customers must be terrified that their money is going to be gone when the service finally comes back. This downtime is obviously to prevent a bank rush while the leadership leaves the country with millions in customer funds.
No. It just affords you the possibility of choosing a different set of risks, that's all. Please don't change my words to fit your idea of a cryptocurrency shill.
You need bitcoin exchanges for any kind of practical transaction. Not only that, people who don't keep their money on exchanges lose their coins all the time. It's just the worst store of value. It would be more practical to transact in wheat like the Sumerians did. Or just dollars.
This stuff just keeps on happening, but crypto folks always make some excuse as to why it's not actually a systemic problem.
> You need bitcoin exchanges for any kind of practical transaction. Not only that, people who don't keep their money on exchanges lose their coins all the time.
OK, what's your point? Nobody claimed it was perfect, just that it happens to be immune to the particular failure mode which happened here. Can you accept that that is true?
> This stuff just keeps on happening, but crypto folks always make some excuse as to why it's not actually a systemic problem.
What "stuff"? People losing coins? People lose cash all the time, does that make it a "systemic problem" with cash? They are just different technologies with different trade-offs.
It's understandable that you hate the cryptocurrency community but that doesn't mean you need to deny every possible advantage of cryptocurrencies.
"it happens to be immune to the particular failure mode which happened here"
What exactly do you mean by "immune to the particular failure mode"? Is bitcoin immune to server or data center outages? Perhaps if you very narrowly limit which ones you are talking about to the bitcoin network per se.
It's not immune; no system is absolutely immune to failure. But it is highly resilient because of the wide distribution of its nodes and the fact that each node carries a full copy of the ledger.
Cue infinite stream of downvotes. News flash: you don't need an exchange to store or transact bitcoin!!! And people using exchanges should understand the risks associated with using a centralized service.
What do you mean by "need"? If you don't have an exchange you can trust, doesn't that make things worse, because you have to determine whether you can trust your counterparties and you have to secure your assets with your own resources?
Matt Levine wrote the other day:
"...people think they don’t trust the traditional financial system, but in fact its most basic functionality—not forgetting where it put your money—is so reliable that people just keep assuming without evidence that the crypto system must work the same way.
Surely a big crypto exchange must have, like, systems, and lawyers, and more than one employee, so you can trust it, right? Wrong! If I ever buy Bitcoins I will keep them at Gemini, the Winklevoss twins’ exchange, just because I am absolutely sure there are two of them."
It's essentially no different than carrying a credit card in your wallet... The bitcoin network IS the bank; your key is your card. There's no need to hoard anything under said "digital bed". It's frightening to see so many ignorant comments.