Article is interesting on the whole (I have no experience with "professional" work, and would love suggestions on how to become more familiar with it), but I latched onto this nugget:
> Our vision at Meanwhile is to build the world's largest life insurer as measured by customer count, annual premiums sold, and total assets under management. We aim to serve a billion people, using digital money to reach policyholders and automation/AI to serve them profitably. We plan to do with 100 people what Allianz and others do with 100,000.
Completely separate from the potential ethical issues and economic implications of putting 100k people out of a job, I see one very concrete moral problem:
that the only way to provide dispute resolution and customer service to 1B people with only 100 employees is by depriving them of any chance to interact with a human, and forcing all interaction with the company to go through AI.
That, to me, is deeply disturbing, and very very difficult to justify.
>> that the only way to provide dispute resolution and customer service to 1B people with only 100 employees is by depriving them of any chance to interact with a human.
Real world evidence supporting your argument:
UnitedHealth Group is currently embroiled in a class action lawsuit pertaining to its use of AI to auto-deny health care claims and procedures:
The plaintiffs are members who were denied benefit coverage. They claim in the lawsuit that the use of AI to evaluate claims for post-acute care resulted in denials, which in turn led to worsening health for the patients and in some cases resulted in death.
They said the AI program developed by UnitedHealth subsidiary naviHealth, nH Predict, would sometimes supersede physician judgement, and has a 90% error rate, meaning nine of 10 appealed denials were ultimately reversed.
> 90% error rate, meaning nine of 10 appealed denials were ultimately reversed.
This is a fantastic illustration of selection bias. It stands to reason that truly unjustified denials (the hidden variable here) would be appealed at a higher rate, and therefore the true rate is something less than 90%.
That's not to say UHG are without blame, I just thought this was really interesting.
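To make that selection-bias mechanism concrete, here is a minimal simulation sketch (Python, with entirely made-up rates; none of these numbers are UHG's). It shows how, if unjustified denials are appealed much more often than justified ones, the reversal rate among appeals can sit near 90% even when the true false-denial rate is far lower.

```python
import random

random.seed(0)

# Hypothetical rates, purely for illustration; these are not UHG's numbers.
N_DENIALS = 100_000
P_WRONG_DENIAL = 0.25        # true share of denials that are unjustified
P_APPEAL_IF_WRONG = 0.60     # wrongly denied patients appeal often
P_APPEAL_IF_RIGHT = 0.02     # correctly denied patients rarely bother
# Simplification: appeals of wrong denials always succeed, others never do.

appeals = reversals = 0
for _ in range(N_DENIALS):
    wrong = random.random() < P_WRONG_DENIAL
    if random.random() < (P_APPEAL_IF_WRONG if wrong else P_APPEAL_IF_RIGHT):
        appeals += 1
        reversals += wrong

print(f"true false-denial rate:       {P_WRONG_DENIAL:.0%}")
print(f"reversal rate among appeals:  {reversals / appeals:.0%}")
```

With these illustrative rates, roughly nine out of ten appeals get reversed even though only a quarter of all denials were actually wrong.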
Your scientific take is useful in the case where selection bias is unavoidable and needs to be corrected for.
This case is not like that; if the insurance company wants to dispute the 90% false denial rate, it would be trivial for them to take a random sample of _all_ cases, go through the appeal process for those, and publish the resulting number without selection bias.
As long as that doesn't happen, the most logical conclusion for us outside observers is: the number is probably not so much lower than 90% that it makes a difference.
The insurance company may well have already done that; this claim is being put forward by someone who is suing them and looking for reasons that the AI bot is bad. The article is silent on what the company's response to the accusation was, and, realistically, we'd expect the appealed denials to have a very high rate of error whether determined by bots or humans. Few people indeed are going to waste time arguing a hopeless case against an insurance company - this is classic selection bias.
What do you think the claim approval rate is? Less than 10%?
It stands to reason that the overwhelming majority of cases where the claim was approved were approved correctly. Unless that rate is well under 15%, it’s impossible to have the claimed “90% error rate”.
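As a hedged sketch of that arithmetic (the decision mix below is hypothetical): if "error rate" covered all decisions and approvals are essentially always correct, errors can only come from the denial pile, so the approval rate caps the overall error rate.

```python
# Hypothetical decision mix; illustrative numbers only.
total_decisions = 1_000
approval_rate = 0.15                  # suppose 15% of claims get approved
approvals = int(total_decisions * approval_rate)
denials = total_decisions - approvals

# Most favourable case for a high overall error rate:
# every denial is wrong, every approval is right.
max_overall_error_rate = denials / total_decisions
print(max_overall_error_rate)         # 0.85, already below the claimed 90%
```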
It's clear from the quoted paragraph that by "error rate" they actually meant "false denial rate". Those are also the words I used in the comment you are replying to.
Did you comment because you take issue with misuse of the term error rate, or because you think that correct approvals make up for incorrect denials, and that therefore overall error rate is a useful metric?
It's not necessarily even selection bias. It might just be how they roll.
I'd say half of all municipal permitting offices and 90% of welfare/disability offices operate on the principle of "deny them all and see which ones come back around". Wouldn't surprise me at all if the behavior spread to healthcare.
Right, but in this case the critical service isn't providing "health" for users, it's extracting profit from them (from the transactions) for the shareholders. THAT'S the critical service this company cybernetically fulfills.
Seems to me that the use of AI is irrelevant[1], and the real problem is the absurd error rate.
[1] In the sense of "it doesn't matter if it caused the problem", rather than "it probably didn't have any effect". Because after all, "to err is human, but to really foul things up takes a computer".
AI adjudication of healthcare is fine, but there need to be extremely steep consequences for false negatives and a truly independent board of medical experts to appeal to. If a large panel agrees the denial was wrong, a penalty of 10-100x the cost of the procedure would be assessed, depending on the consequence of the denial.
No one is going to accept a claim rejection from AI. Everyone will want to dispute, which will have to go to a human to review. At the end of the day I don’t see how 100 people is realistic.
This reaction is primarily an emotional one. Why is a human rejecting a claim better than an AI rejecting a claim? Presumably the AI will one day -- if not today -- be more accurate in following decisioning logic than humans, who will continue to make human errors.
The AI won't reject a claim because that's easier than doing the paperwork to approve the claim and it's 4:30 on a Friday.
It also won't approve a claim where, despite the "magic words" missing from the form, it's clear from the situation that you'll get approved regardless, and making you file an appeal they then have to review is just a waste of the company's labor.
If that were true then they would also dispute every first-line human review. I don't think the average first-line human customer service rep is any better than AI even today.
Can we imagine a world where the claims are adjudicated by a disinterested party (as far as possible)? I don't want the insurance company to decide a contractual issue; that's ridiculous. At the moment they're kept honest by the law and by public opinion (which varies by country), but the principal-agent problem is too big to ignore.
I don't think there's an ethical responsibility to worry about your competitor's labor. That would lead to stagnation and its own sort of ethical issues.
I don't think it's as easy as hand-waving it away as "your competitor's labor". Your competitor's labor is your community; it's people. I believe we all have an ethical responsibility to that.
For the points you brought up, why is stagnation for the purposes of upholding an ethical position a bad thing?
And yes, by definition, worrying about ethical responsibility would lead to ethical issues. That's the whole point.
So should we all be farming and collecting berries? Most advancements since then have put people out of jobs at "competitors" that didn't adapt. Still, the unemployment rate isn't 99.9%, even though we've displaced whole industries many times over the centuries. Obviously people move to better jobs and find other things to do. There's nothing particularly good about sitting at a computer denying people insurance all day, so why not have a computer do it?
If it is a choice between progress unfettered by concern for your "competitor's labor" or farming berries, I choose berries.
However, I believe there's a middle ground and endeavor to find it. Based on your response it doesn't appear as though you believe a middle ground exists.
Choosing berries (i.e. not progressing in order to "protect jobs" - no jobs are protected, and we have close to full employment worldwide) is choosing avoidable deaths. The child mortality rate in a "choose berries" world is just one example, and it's why I'm triggered by those who hold that position.
And you get nothing in return for protecting those jobs, as I said, the world is "employed" and we've killed many industries already over the centuries. You're protecting nothing.
You got that right, and yes, I was putting that issue aside, although my counterpoint to the GGP argument would be "the ethical issues aren't from the competitor's perspective; they're from the perspective of the whole workforce, industry, and/or economy as a whole".
The impact on the workforce, industry, and/or economy as a whole is a second-order effect of the real ethical issue: providing a worse service so cheaply it's almost free that the market won't bear a significantly better service provided by humans. As I see it, the ethical concerns are not about specific people being out of a job, but about setting an expectation that it's not worth providing real, useful service (using actual people), because doing so would cost more than phoning it in with AI.
I've had travel insurance from time to time and the consensus in online forums seems to be Allianz. But, in spite of anecdotal stories, relatively few people have any real world experience with the claims process. So it's really hard to tell what the true story is especially given that different people have different tolerances for out of pocket costs--especially below extreme amounts related to evacuation and the like.
The whole ugly turn of AI hypemen claiming it's somehow morally okay for everyone to lose their jobs all at once makes me think the Luddites were right all along.
My knee-jerk reaction is to think that the prospect of an insurance company handing support over to machines is a terrible development.
But it was already the case that they just arbitrarily do WTF-ever they want, that, outside a small set of actions "bots" can perhaps handle fine, they aren't going to do anything for you, and that the only way to get actual support for a real problem involves something being sent from a .gov email address or on frightening letterhead.
So... not really any different? You already basically have to threaten them (well, have someone scarier than you threaten them) to get any real support, this wouldn't be different.
My last few interactions with an insurance company were moderately annoying but far from terrible - I would absolutely loathe having those replaced by a machine, given the terrible quality of every AI "assistant" I've ever used.
Similarly, I was just forced to talk to an insurance company and the only way I got any response was by talking to a human. The more robotic they are, instead of working around known issues, the more likely we are to get to a satisfactory solution (e.g. don't overcharge me and then do nothing about it).
Right. I wouldn't say that my interactions with those people were great, but they weren't nearly as bad as any of the automated systems that I've used.
Also, I think you may have made a typo that negated the meaning of some of your comment (but I believe I can understand what you meant anyway).
While a human interaction can be awful, there's a special hellishness that is trying to negotiate with a robot to get something related to your healthcare taken care of.
It seems apparent to me that there needs to be some way to arbitrate claims outside the insurance company itself. I'm... not sure that there is. But if there were, and there existed some sort of sanction or incentive for the insurer to get it right the first time... I'm confident that AI insurance companies could streamline the process. But you need this incentive mechanism, else it's a recipe for dystopia. (The deeper thought is that you would shift a lot of work to the arbiter, but I won't touch that for now.)
This comment is at the heart of many of the challenges tech companies face - they can scale the serving of content - but struggle to scale the content moderation and/or dispute resolution.
It's a common problem with automation - the focus is often on accelerating the 'happy' path, only to realise dealing with the exceptions is where the real challenges lie.
One tried and trusted way around that is to cherry-pick customers as part of your strategy. You sell insurance to people who will never claim (and hence never dispute), and shun those likely to.
However, such market segmentation results in no insurance for the people who would need it, and the people who don't need it wondering why they're buying it - ie optimal efficiency for an insurance company is to simply offer no value at all.
ie you could argue the whole value proposition of an insurance company is to pool risk, not segment it, and critically to provide fair arbitration (protecting the majority of the pool from those who would commit insurance fraud, while still paying out).
Buying 'peace of mind' requires a belief in a fair dealing insurer - that's the key scale challenge - not pricing or sales.
I don't see it as inherently a problem; AI can (theoretically) be a lot more fair in dealing with claims, and responds a lot sooner.
That said I suspect the founder is seriously overestimating the number of highly intelligent, competent people he can hire, and underestimating how much bureaucratic nonsense comes with insurance, but that's a problem he'll run into later down the road. Sometimes you have to hire three people with mediocre salaries because the sort of highly motivated competent person you want can't be found for the role.
> AI can (theoretically) be a lot more fair in dealing with claims
Respectfully, no it can't. From a Western perspective, specifically American, and from an average middle-class person's perspective, specifically American, it only appears to be fair.
However, LLMs are a codification of internet and written content, largely by English speakers for English speakers. There are <400m people in the US and ~8b in the world. The bias tilt is insane. At the margins, weird things happen that you would be otherwise oblivious to unless you yourself come from the margins.
I don't even think most Americans (except those trying to do the automating) would consider it to be fair.
AI is bias automation, and reflects the data it's trained on. The vast majority of training data is biased, even against different slices of Americans. The resulting AI will be biased.
On the other hand, once a claim is mishandled by AI, one can use the normal legal process to discover the juiced prompt and all the paper trail that comes with implementing it.
> LLMs are a codification of internet and written content
Only true for pre-trained foundational models without any domain-specific augmentations. A good AI tool in this space would be fine-tuned or have other mechanisms that overshadow the pre-training from internet content.
Why wouldn't they be? LLMs need a lot of content for training, and there's multiple orders of magnitude less to train on if you limited it to insurance-specific content, so you'd probably get a really crappy LLM. And training from scratch is really expensive anyway.
At best they'll be using fine tuned enterprise OpenAI / Anthropic models, more likely a regular model with a custom prompt.
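For a sense of what "a regular model with a custom prompt" might look like, here is a minimal, hypothetical sketch using the OpenAI Python client (the prompt, fields, and workflow are invented for illustration; nothing here describes Meanwhile's or any insurer's actual system). The domain knowledge sits in the prompt and the supplied policy text rather than in fine-tuned weights, and the structured output leaves a reviewable reason on record.

```python
# Hypothetical sketch of "a regular model with a custom prompt" for claims triage.
# The prompt, fields, and policy excerpt are invented; a real deployment would
# need human review, audit logging, and regulatory sign-off.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a life-insurance claims triage assistant. Given a policy excerpt "
    "and claim details, recommend APPROVE, DENY, or ESCALATE_TO_HUMAN, and cite "
    "the policy clause that supports the recommendation. Respond in JSON with "
    "keys: recommendation, cited_clause, reason."
)

def triage_claim(policy_excerpt: str, claim_details: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # ask for machine-readable output
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Policy:\n{policy_excerpt}\n\nClaim:\n{claim_details}"},
        ],
    )
    return json.loads(response.choices[0].message.content)

# The recommendation and its reason would be logged for audit and, in any sane
# setup, routed to a human before a denial is ever sent to a policyholder.
```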
And whichever method you use, you’re still accountable to regulators, courts, the letter of your contract, and the consequences of your reputation in a competitive market.
United Healthcare was in the news last year because they had an AI claims "approval" process with a 90% error rate, all in favor of the insurance company.
It's easy to describe a business process with written down rules, and those are easy to find in legal discovery. It's much easier to obfuscate with an AI model, because "nobody knows what it's actually doing - it's AI!".
It was not a 90% error rate (or at least that’s not a claim I read). It was that 90% of appeals of those decisions were decided (at least partially) in favor of the appeal. That could be 1000 decisions, 10 appeals, and 9 reversals.
I am personally 7 for 8 in lifetime wins in my city's parking ticket appeals process. That doesn't mean that I think that 7 out of 8 tickets my city issues are incorrect.
> It's much easier to obfuscate with an AI model, because "nobody knows what it's actually doing - it's AI!".
Do you have actual knowledge of this? If not, the most obvious counterpoint is that the AI will need to give the reason or reasons for denial, and record them for audit. Just like a human or a rules-based system.
This is life insurance specifically. It's not very hard to prove someone is dead, is there really much room for argument over paying out the policy benefit?
If the plan is to just pay out after confirming the person is dead, what’s the AI doing? It could be replaced by a “upload your death certificate here” box.
Most life insurance policies have exclusions. For example, they won't pay out if you commit suicide. So the circumstances of the death must be assessed against the insurance policy before payout.
It is kind of weird. Why does a life insurer have 100,000 employees? I'm really only familiar with term life. All the "customer service" is pre-purchase. Once you buy it, you forget it other than making the annual payment. There's nothing to manage, no real customer service required until and unless you die.
I suppose whole life where there is a cash value and investments being managed might have a more ongoing service need, but I'm not familiar with that.
This doesn’t establish any sort of mathematical bounds, but it gives an idea of the size of the problem. I suspect 100k employees is an over-estimate just because a lot of people are uninsured…
I work in the industry at an insurtech startup; we are a life insurance carrier (wysh.com - our flagship product is a b2b micro life insurance benefit, but we built that on top of a term life carrier and also sell d2c term life).
Allianz has ~150k employees but certainly they don't all work on the term life business in the USA, they do all kinds of other insurance stuff all over the world and have hundreds of different products.
For term life specifically, there still are some pretty significant back office teams that a customer probably never interacts with directly, though. A few that come to mind:
- underwriters: you won't be able to make a decision for all of your applicants based on the info they provide you and the info you can pull from automated sources, so some number of humans are on the phone with your applicants asking clarifying questions, doing additional research, and making risk decisions. They're also routinely doing retrospective analysis that looks back on claims paid out to make sure the claims are reasonable and there's not some sort of gap in the underwriting approach that's leaving unknown risk on the table, and audits of automated underwriting decisions to make sure the rules engines are correctly categorizing risks
- actuaries: every company has varying risk tolerance for both the policies they issue and the cash they hold/invest. These people are advising on how to take risks and working with underwriters and finance people to try and figure out the financial impact of various underwriting decisions: can a product remain viable if it is purchased by a heavier balance of smokers vs nonsmokers, etc
- accountants and finance: it's a capital-intensive business that requires large cash reserves and a sane investment strategy for that cash, often subject to tests by regulators or industry associations and all sorts of lengthy audits
- compliance: in the US, life insurance is individually regulated by each state. Many states join the ICC Compact and agree to all follow the same rules and have a single set of regulatory filings, but you still have plenty of other states to do filings with, analyze changing requirements from, maintain relationships with regulators, respond to regulatory complaints or investigations, etc
- industry reporting: most insurance carriers participate in information-sharing programs like the MIB (Medical Information Bureau) and these memberships come with various reporting and code-back obligations. The goal is to prevent you from getting declined at one life insurer because you say you have some sort of uninsurable illness and then turning around and lying about not having that illness to another life insurer the next day. These sort of conflicting answers get flagged for manual review, someone will need to talk to the applicant and figure out why they gave conflicting info to multiple insurers and what the truth really is.
- claims and fraud investigations: many, many people lie to try and get insurance they aren't qualified for or to take out insurance on someone they aren't supposed to. Claims investigations start by asking "is the insured really dead" but then try to answer the questions like "did the insured know this policy was taken out on them", "were the responses the insured gave during underwriting truthful", etc. These investigations are extremely time consuming and often involve combing public records, calling doctors, interviewing family, and more. You'd probably be shocked how common it is for former-spouses to try and take out insurance policies without the other knowing during divorces. Some level of this investigation is happening in the first couple of years a policy is in force, too, as insurers can rescind the policy and refund the premiums if they determine it was obtained under false pretenses
- reinsurance: even the biggest insurers typically pool and share some amount of risk so that a bad claims year can't take down an entire carrier. reinsurance treaties are complex things to negotiate and maintain, and have lots of reporting obligations and collaboration between the reinsurer and the actuaries to validate the risks are what everyone thinks they are
The customer-facing part of a term life company is really just the tip of the iceberg. Small companies are certainly better at doing this with tech than bigger incumbents (that's a big part of the reason we exist at Wysh), and a narrow product focus really helps, but there are still some pretty significant levels of human expertise involved to keep it all running.
The important detail there is doing it without the knowledge of the (former) spouse.
You need both an insurable interest and consent of the insured in order to buy an insurance policy on someone else’s life.
Couples separating and holding policies on each other is pretty common and carriers have some specific rules to follow to make sure there’s appropriate mutual consent for policy changes etc
Of course! One can die by suicide or as a result of drug abuse, preexisting conditions, and all that. Otherwise somebody discovering or suspecting they have an incurable disease would be able to get a policy after the fact.
> that the only way to provide dispute resolution and customer service to 1B people with only 100 employees is by depriving them of any chance to interact with a human, and forcing all interaction with the company to go through AI. That, to me, is deeply disturbing, and very very difficult to justify.
I don't know. Given the human beings I've interacted with in customer support, and the number of times I've had to escalate because they were quite simply "intelligence-challenged" and couldn't even understand my issues, I'm not sure this is a bad thing.
In my limited experience with AI agents, they've been far more helpful and far faster, they actually seem to understand the issue immediately, and then either give me the solution (i.e. the obscure fact I needed in a support PDF that no regular rep would probably ever have known) or escalate me immediately to the actual right person who can help.
And regular humans will stonewall you anyway, if that's corporate policy. And then you go to the courts.
While I get the vibes, and have had experience of human customer support being very weird on a few occasions, replacing mediocre humans with mediocre AI isn't a win for customers getting actual solutions.
And right now, the LLMs aren't really that smart; they're making up for low intelligence by being superhumanly fast and able to hold a lot of context at once. While this is better than every response coming from a randomly selected customer support agent (as I've experienced), one who doesn't even bother reading their own previous replies when the randomiser puts the same person in the chain more than once, it's still not great.
LLM customer support can seem like a customer win to start with, when the AI is friendlier etc., but either the AI is just being more polite about the fixed corporate policy, or the LLM is making stuff up when it talks to you.
I think there's an interesting implication here: that the actually good (for the customer) support experience is a real human who has access to a RAG where they can look up company documents/policies/procedures, but still be able to use their human brain to make judgement calls (and, of course, they have to be willing to, y'know, read the notes left by the previous rep).
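A minimal sketch of that "human rep with a RAG" idea, assuming nothing about any particular vendor: plain TF-IDF retrieval over a handful of invented policy snippets, with the human still making the judgement call.

```python
# Minimal retrieval sketch over internal policy docs for a human support rep.
# The snippets are invented; a real system would index actual policy manuals
# and keep the human in charge of the final decision.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

policy_docs = [
    "Claims for lost luggage must be filed within 21 days of the travel date.",
    "Pre-existing conditions are excluded unless a waiver was purchased at signup.",
    "Premium refunds are prorated when a policy is cancelled mid-term.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(policy_docs)

def lookup(question: str, top_k: int = 2) -> list[str]:
    """Return the top_k policy snippets most similar to the rep's question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix)[0]
    ranked = sorted(range(len(policy_docs)), key=lambda i: scores[i], reverse=True)
    return [policy_docs[i] for i in ranked[:top_k]]

print(lookup("customer cancelled halfway through the policy, what do they get back?"))
```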
Nothing new or revolutionary, just the usual race to the cost bottom with corresponding quality bottom.
The author ignores the fact that in any normal market there are variously priced insurance options, yet somehow not all people flock to the cheapest one; quite the contrary (at least where I live). Higher fees can mean, for example, a less stressful life when dealing with the insurer.
Ethical issues of putting people out of a job? Please. This mindset has to be called out because it directly causes suffering by creating a societal permission structure for politicians to protect interest groups with protectionist trade policy and internal pork-barrel spending.
Economic productivity putting people out of jobs is both good and necessary and it is unethical to work against it.
I think the commenter was definitely somewhat glib in their statement, but I don't think the case is as clear cut as you think.
The way I've come to think of the current moment in history is that capitalism allocates resources via markets and we use this system because in many situations its highly efficient. But governments allocate resources democratically exactly because we do not always want to allocate resources efficiently with respect to making money.
Whether it "makes sense" or not, most people believe there is more to life than the efficient allocation of resources and thus it might be a reasonable opinion that making 100,000 people suddenly unemployed is bad. I doubt seriously that the OP believes having 100,000 people working indefinitely when the labor can be done more efficiently by machines is good. I think most reasonable people want to see the transition handled more smoothly than a pure market capitalism would do it.
One might argue that the government allocating some resources is more efficient than the market doing so purely because specific outcomes are desired that the invisible hand is not motivated or incentivized to provide. If the goal is to keep people healthy, efficiency is based on how successful that is, not on the monetary cost. Few people seem to understand it this way, though.
In most cases government employees simply aren't prescient enough to allocate resources efficiently. Like in theory maybe central planning could be more efficient if everything worked correctly, but in practice it never works efficiently at scale. Much of the resources simply end up wasted.
If one looks to "government employees", as individuals, then yes, they aren't prescient enough to allocate resources efficiently. But comparing the free market to government employees is not an apples to apples comparison, because individuals don't allocate resources efficiently either in a free market; the "market" as a whole is what optimizes for efficiency.
And I think there is a distinction in different kinds of efficiency that can be optimized for, not just monetary cost. If we desire clean, paved, safe roads, that can be used by all equally for efficient movement of goods, because we recognize that as a prereq for a strong economy, we can not rely on the free market to deliver that, much less optimize for it. It can be more efficient, in terms of actually delivering the desired goal vs not delivering it at all (or delivering a grossly bastardized version of it) to pool our resources and explicitly work towards making something available rather than hoping that the free market will deliver it.
The free market did not deliver on reducing congestion in New York (in fact, one might say that over the decades, the free market is what made it worse), but the congestion pricing program has, and has resulted in a bunch of valuable/desirable knock-on effects.
I do not think that a centrally planned economy is workable; but collectively being deliberate about building the things we need/want, and taking a longer view, can result in significant efficiencies.
The free market ends up simply wasting resources in its drive to discover where efficiencies lie and how to take advantage of them.
I'm not sure this is the way to think about it. It obviously matters if money is being wasted, but the question is to what end is the money utilized?
In capitalism, roughly speaking, the purpose of spending money is to generate a return on investment and the market does a reasonable job of doing that and a reasonable knock-on effect is rising standards of living, etc.
But in health care, for example, we might decide that a return on investment isn't the point, but the efficient allocation of resources still matters, to the end of making people healthier. It's more that free markets really struggle to optimize for efficiency that isn't directed towards ROI. I think it's a genuinely moral, philosophical question - if you really prioritize freedom over all else, then markets are sort of the best you can do and you just give up on collective or philosophically motivated goals. But, genuinely, and I think the current political moment underscores this basic fact, people care about plenty of other things besides freedom and even besides democracy.
> it directly causes suffering via creating a societal permission structure for politicians to protect interest groups with protectionist trade policy and internal pork barreling policy
What part of that is suffering, if it enables 100k constituents to put food on the table?
We could employ 100k people to dig holes and then fill them back in; should we?
We shouldn't employ people in economically un-viable ways just because they need income. We can just give them money directly, or redirect them to other work, or a combination of the two.
> We could employ 100k people to dig holes and then fill them back in; should we?
If that is what's necessary to provide a social safety net, then maybe so. See the works progress administration for an example of this.
> We can just give them money directly, or redirect them to other work
Ideally yes, but that isn't happening, hence the first option.
We may be straying here, though: this discussion didn't start out with someone saying what someone else should or shouldn't do. We were discussing the ethical and economic consequences of an idea.
The problem is that it's a misallocation of human capital which slows progress for all of society. We should be providing social safety nets for people, not fake jobs.
> We should be providing social safety nets for people, not fake jobs.
I agree with you (except in classifying the genuine effort of my fellow people to be "fake jobs" just because a computer can do some of the work) and believe making a resilient, trustworthy, proven system for the former is a prerequisite to withdrawing the latter, to avoid suffering.
Unfortunately for us, the barrier to the former is ideological in nature and imposed by the elite few in power now, before any matters of capital allocation (human or financial) come into play.
Nobody has classified genuine effort as fake. But what good is genuine effort when it can be done much more easily without it? There's no shame whatsoever in this. At least, I don't think we should add any to the situation.
> Nobody has classified genuine effort as fake. But what good is genuine effort when it can be done much more easily without it?
This was previously stated: the good being done is 100,000 people can feed their families. What good is going without that? You'll enrich some private equity dudes and make a lot more people unemployed and a lot more families unhappy.
No, but claims processing is already highly automated across much of the insurance industry and the level of automation will only increase in the future.
There's a huge assumption in your comment -- that having 100,000 employees necessarily guarantees (or even makes likely) that you will have some human to help you.
More likely, those 100,000 humans are mostly working on sales and marketing, and the few allocated to support are all incentivized to avoid you, and to send you canned answers. A reasonably decent AI would be better at customer support than most companies give, since it'll have the same rules and policies to operate with, but will most likely be able to speak and write coherently in the language I speak.
There's a huge assumption in your comment -- that you know how insurance works. "Most" probably aren't working in sales and marketing; I'd heavily dispute anything above 50% and I feel like 33% might be pushing it? I don't want to get overconfident here, but this claim feels off-base.
Insurance isn't like a widget. People have actual legal rights that insurers must service. This involves processing clerks, adjusters, examiners, underwriters, etc. Which then requires actual humans, because AI with the pinpoint accuracy needed for these legally binding, high-stakes decisions aren't here yet.
E.g., issuing and continuing disability policies: Sifting through medical records, calling and emailing claimants and external doctors, constant follow-ups about their life and status. Sure, automate parts of it, but what happens when your AI:
a. incorrectly approves someone, then you need to kick them off the policy later?
b. incorrectly denies someone initial or continuing coverage?
Both scenarios almost guarantee legal action—multiple appeals, attorneys getting involved—especially when it's a denial of ongoing benefits.
And that's just scratching the surface. I get that many companies are bloated, and nobody loves insurance companies. No doubt, smarter regulations could probably trim headcount. But the idea that you could insure a billion people with just 100, or even 1000 (10x!), employees is just silly.
> There's a huge assumption in your comment -- that having 100,000 employees necessarily guarantees (or even makes likely) that you will have some human to help you.
That's not an assumption.
I know that I, and many others, have been able to get a human on the phone every time we needed one. Regardless of the number of those humans actually working claims, in the current system, it is "enough".
I also know that it's impossible to give that level of service when you have 1 employee for every 10 million customers.
That's really all that you need in order to make the judgement that you're not going to get a human.
Side-note: I did a quick search and found that Allstate has 23k reps that actually handle claims and 55k employees total, so almost half of their workforce does claims and disputes. They also have 10% market share of the US's ~340 million people, so that's, at most, about 1,500 customers per rep. That's much better odds than 1 rep for every 10 million.
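Putting that side-note's numbers next to the 100-employee plan (same figures as above, nothing new assumed):

```python
# Rough back-of-the-envelope, using the figures quoted in the comment above.
allstate_claims_reps = 23_000
us_population = 340_000_000
allstate_market_share = 0.10

allstate_customers = us_population * allstate_market_share     # ~34 million
print(allstate_customers / allstate_claims_reps)               # ~1,478 customers per claims rep

meanwhile_customers = 1_000_000_000
meanwhile_employees = 100
print(meanwhile_customers / meanwhile_employees)               # 10,000,000 customers per employee
```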
> A reasonably decent AI
And there's the problem - that AI doesn't exist. You're speculating about a scenario that simply hasn't been realized in the real world, and every single person that I've talked to who has interacted with an AI-based "support representative" has had a bad experience.
> the only way to provide dispute resolution and customer service to 1B people is by depriving them of any chance to interact with a human, and forcing all interaction with the company to go through AI.
The Catholic church has 1B "customers" and seems to be doing ok with human-to-human interaction, without the need (or desire) for AI. They do so via ~500K priests and another 4M lay ministers.
I didn't read "the only way..." as having the condition of 100 or less employees. In fact the 100 employees is mentioned in an earlier sentence that explicitly says they are using AI to accomplish such a low employee count. The comment I was replying to seems to imply AI was the only way to serve 1B people, without regard for the number of employees.
It absolutely does not imply that. You read it wrong. It's very clear from the context that I'm talking about serving 1B people with only 100 employees.