The solution is to stop requiring strong ID for every dumb transaction in the universe.
I got carded for a chest x-ray two days ago. There’s no universe in which this makes sense. The nail salon asks for your phone number even if you’re a walk-in.
Americans are so used to this that they even have a hand gesture for “papers please”.
For centuries we could transact without ID; why now?
For medical procedures, I think it makes sense to check ID - just for the potential testimonial "audit trail" in case of actual or purported malpractice.
"Are you CERTAIN that the X-ray we are looking it is of the plaintiff? How would you know if someone else showed up and claimed to be them? Is your memory completely infallible?" etc... It might not be of any real significance but poor process can make one side look like they don't have their sh*t together.
Every other industrialized country that I’ve ever been to has taken me at my word when I provided my name for medical procedures. It works fine and always has.
The generation of kids trained in schools to be used to omnipresent surveillance and data harvesting is now in the workforce, and they see no issue with this because they've grown up with it. I see kids on the internet now, under 20, who believe that not volunteering your name, age, location, and private health information is "secretive", "deceptive", and cause for suspicion. If you don't stamp these things in your bio, you've obviously got something to hide.
I remember when we were all laughing about that "elf on the shelf" toy, and how hard it was pushed to get kids used to "santa" always watching their every single move even if nobody is in the room - that there's nothing they can ever do to avoid his judgement at his own discretion, and they should just accept it, because everyone is fine with it and you're crazy and silly and paranoid and clearly hiding something "naughty" if you resist. Obviously I can't attest to a direct connection, but it's funny how that worked out.
The new news is that they (finally) started formally notifying end-users i.e. insured patients. Five months after notifying regulators [0]. (Although they would have found out informally if they'd had any claims or interactions with providers).
Under the current definition of "notification", the true extent of the harm (to the insured patients, not to the company or its stock) only becomes apparent when individual patients go check their SSNs, logins, claims, and credit reports for identity theft.
One of their providers, Option Care Health, Inc., filed an SEC 8-K on 03/14/2024 that merely says the following - it doesn't tell patients which of their details (or how many millions) might have been compromised, or when exactly, or how to go find out:
> Option Care Health, Inc... was informed that Change Healthcare, a subsidiary of UnitedHealth Group, experienced
> an incident in which a cybersecurity threat actor gained access to some of its information technology systems. The Company utilizes several applications provided by Change Healthcare for, among other things, verification of benefits for existing and prospective patients, submission of claims to health plans for processing and payment, and application of cash remittances.
> At the time of the system disruption on February 21, 2024, Option Care Health disconnected its operations from Change Healthcare applications and has since been working continuously to find alternative processes to help maintain patient care and its overall operations.
You don't need a "nation state" to infiltrate anymore, not when companies make it this easy to get access to unsecured production assets.
Companies do not care about your private data. A data breach is now considered priced in. It’s a footnote in the quarterly reports.
Maybe it lasts half a news cycle, until the media brings up presidential candidate age/competence again.
What we need is a legal framework to decide how much these companies will get fined. We need billions of dollars in fines to bring them to their knees and get their shit in order. We also need execs to get jail time for their negligence.
CHC was hacked by a hacker collective called AlphV. United Healthcare (CHC’s parent company) paid a ransom of $28m iirc, which AlphV allegedly rug pulled.
So the hackers that did the hack itself didn’t get paid and retained the data. Wild ride.
Nation-states would definitely be interested in medical claims data at the scale of CHC. One of the payers is called CHAMPUS, which covers active and retired service members and their families. This is indicated on the claims. So leaking these claims gives access to where service members live and who their family members are.
Since military bases in the US tend to have specific units and specialties, you can get a good approximation for how the military is allocating its strength. Plus you know who to blackmail.
I would guess it's mostly phishing. Even if people are educated about it, it only takes one mistake of opening an attachment or entering a password in a reasonably good fake form.
The people who leaked a bunch of governmental memos about the Republican party a day ago were all furries who objected to the party's far-right hatred of trans people.
Maybe, some highly educated people are just fed up and calling their senators doesn't seem to get anything done?
Even assuming that were true, I would disagree with attributing most of a successful breach to some wider wave of attacks. The breach at Change Healthcare succeeded because their software development practice is garbage.
They have difficulty hiring talent because their talent acquisition process is broken, directed by guys in Nashville who have no clue how to handle developers on the West Coast. There's a big dependence on manual QA from overseas contractor teams with no automation, and no transfer of information to the developers on that side when the code turns into spaghetti. Code review is weak and mostly for show. Single-account passwords are used for SFTP and other outdated protocols. All logic goes into SQL stored procedures when it's completely unnecessary, because someone decided all business logic should live there (job security?), and none of the database developers can wrangle it anymore. All software planning and business meetings happen as Waterfall with elaborate UML, but pretend to be Agile, so obviously there is ritualistic Scrum, even though it doesn't fit the process that actually happens day-to-day.
When it comes to software, Change Healthcare cares about optics and most processes are for show, not actual effect, and especially when it comes to security.
It is new news - that they (finally) started notifying customers, five months after notifying regulators and the stock market (in February/March, in fact, not June).
The true extent of the harm (to customers, not to the company or its stock) only becomes apparent when individual customers check their SSNs, logins, claims, and credit reports for identity theft. That's not the way it should be, but it is the way it currently is in the US.
It is recent news. The breach was back in February. Insurers started notifying THEIR customers that they were in that breach because they irresponsibly fed everyone's private data to that random third party, which has clearly committed malpractice in handling highly sensitive patient information like diagnoses and test results.
This is just an insanely vague and worthless notice. And it is infuriating that healthcare customers are made vulnerable by the vendors used by insurers like Cigna or whoever.
Change is so disingenuous that they admit the following was stolen:
> Health information (such as medical record numbers, providers, diagnoses, medicines, test results, images, care and treatment)
But they claim with a straight face that no "full medical histories" were compromised. What sort of two-faced word game are they playing? Those ARE full medical histories.
And of course, due to vague partnerships between them and other hospitals, like the University of Washington hospitals, people who were never their customers were also affected.
This has to stop and it has to happen through regulation, fines, jail time, all retroactively applied. All these companies underfund security because posting a notice and offering credit monitoring is all it takes to move on.
I figure it's the same reason behind similar issues across different industries right now, as well as why there are so few jobs in general:
a race to the bottom in terms of minuscule budgets, overtaxing employees with job creep, a flippant attitude toward preventative measures in favor of saving/making money you can report to shareholders today, etc. Too many people I know, myself included, have realized that post-pandemic jobs make you do the work of 3 people, and you get "sorry, we just don't have the time or money to pay you properly" if you protest. I wouldn't be surprised if the security team for this massive institution is like, 5 guys in a room who work 15 hours a day and subsist off of energy drinks, catered sandwiches, hustlemaxxing youtubers, and ketamine.
I wonder what the tipping point is going to be? Will there ever be one, or will people just keep getting squeezed until they burn out and die, only to be replaced?
Because they cost time and money to implement, and there is no incentive to avoid data breaches. This should be up to legislation: if we as a society actually care about privacy, we need to give data privacy laws some teeth and start really enforcing them.
I’m sorry, but sometimes it’s not money - it’s employees who are malevolently careless. Yes, you can spend huge sums of money locking down their computers so much that it would take 5 or 10 people to do the job of one because they only have a text editor, but we should also get it into law that installing a remote desktop tool incurs liability on the employee’s side. I won’t say it gets the company off the hook, but some employees are actively malevolent.
Without needing the Fight Club scene, "because the current [US] regulatory penalties are tiny, even when hundreds of millions of people's data is compromised. And there are rarely criminal charges against the executives of the companies who leaked the data". Until Congress legislates any solution.
If a CTO risks prison and having a criminal record because someone made a mistake, not too many people are going to want to be CTOs. Or you'll have to pay them a lot more.
a) "someone made a mistake" is not a good-faith characterization of "your software does not allow customers to mandate MFA organization-wide and audit that, you never fix that even though you market the capability, you're fully aware many of your customers are still only using 1FA and you continue to allow them to do that for months(/years?) even as you become aware other customers' credentials are being stolen by infostealers, (possibly in some cases from the same contractor laptop working for multiple customers, or at least on the same network/ at the same IT company)". Was it negligence? gross negligence? by which parties? I'm sure that will be argued for years (look how long the 9/11 insurance lawsuits took). But "someone [one single person] made a mistake [one mistake]" it ain't.
b) It's unclear whether you're talking about the Snowflake CTO(/CEO/COO/CIO/CMO/General Counsel) or their customers' executives; where did anyone say it was the Snowflake CTO's sole responsibility, or the sole responsibility of any single executive? There will presumably be Congressional hearings as well as an SEC inquiry, truckloads of civil suits, plus tech journalist coverage. Their customers' cyberinsurance might well decline to pay out - more lawsuits. I wouldn't jump to conclusions until those facts are in. But in the meantime the stock market will likely deliver a financial verdict much sooner, and Snowflake might have to change executives, or get acquired, or worse.
c) But the general proposition that management isn't a consequence-free country-club environment seems fairly self-evident.
d) Not too many people should want to be CEOs or COOs or CTOs of a large company (or be considered qualified or competent to), if they might be held responsible for negligence or criminal wrongdoing. Boeing and SVB both spring to mind, and we don't have the facts on those either. Monsanto/Roundup, 3M/PFAS, Sackler/opioids also.
e) But executives being held [civilly or even criminally] responsible in extreme cases is not an existential problem like you're suggesting, because the market will figure out how much to compensate them. If a good CTO by their actions avoids $10m losses or reputational damage or lost customers every year, you could still pay them a lot while saving money, right? The case has been made that huge executive golden parachutes are a terrible practice, and that higher executive base compensation is better.
You wouldn't dispute that Sarbanes-Oxley was on balance a good thing, would you? CEOs and CFOs know that if they sign off on outright fraud, they could go to jail. Actual Sarbanes-Oxley prosecutions are very rare, but that's because it's having a deterrent effect.
Authentication, 2FA by SMS (going to a personal cellphone on a monthly contract), SIM-stealing, auditing whether MFA is in fact happening organization-wide etc. all seem to be in the news constantly. If Congress wants to get in a moral panic about TikTok, maybe they could spare a session or two for this.