> The article never says how much they detected. I can only assume it’s because it’s a nothing amount. If it was significant they would have been saying how much. It’s hard to take the article seriously as a result.
Did we read the same article? There's a table with the amounts of different metals, with the amounts found in each of the different samples.
I'm not sure I follow. Traditional computing does let us make this distinction, and lets us control the cases where we don't want it; when software doesn't implement such rules appropriately, we consider it a security vulnerability.
We're just treating LLMs and agents differently because we're focused on making them powerful, and there is basically no way to make the distinction inside an LLM. That doesn't change the fact that we wouldn't have this problem with a traditional approach. (For contrast, a sketch of the traditional separation follows.)
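Here's the traditional version of that control/data separation as a minimal Python sketch, using the stdlib sqlite3 module; the table and the malicious input are made up purely for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE docs (id INTEGER, title TEXT)")

    user_input = "x'; DROP TABLE docs; --"

    # Vulnerable: user data is spliced into the control channel (the SQL
    # text itself), so the input above could be executed as SQL:
    #   query = f"SELECT * FROM docs WHERE title = '{user_input}'"

    # Safe: the driver keeps the query (control) and the value (data) in
    # separate channels; user_input can never be reinterpreted as SQL.
    rows = conn.execute(
        "SELECT * FROM docs WHERE title = ?", (user_input,)
    ).fetchall()

An LLM has no equivalent separation: instructions and data arrive in the same token stream.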
I think it would be possible to use a model analogous to prepared SQL statements, with a list of bound parameters.
Doing so would mean giving up some of the natural-language interface aspect of LLMs in security-critical contexts, of course, but in most cases that would only be visible to developers building on top of the model, not to end users, since end-user input would become one or more of the bound parameters.
E.g. the LLM is trained to handle a set of instructions like:
---
Parse the user's message into a list of topics and optionally a list of document types. Store the topics in string array %TOPICS%. If a list of document types is specified, store that list in string array %DOCTYPES%.
Reset all context.
Search for all documents that seem to contain topics like the ones in %TOPICS%. If %DOCTYPES% is populated, restrict the search to those document types.
---
Like a prepared statement, the values would never be inlined; the variables would always be pointers to isolated data.
Obviously there are some hard problems I'm glossing over (see the sketch below), but addressing them should be able to draw on a wealth of work that's already been done on input validation in general and on RAG-type LLM approaches specifically, right?
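To make that concrete, here's a minimal runnable sketch of the shape this could take. Everything here is an assumption for illustration: the extraction step is stubbed with a trivial keyword split so the example runs, but in practice it would be a constrained LLM call (JSON-schema or function-calling output), and the document store is invented:

    from dataclasses import dataclass, field

    @dataclass
    class BoundParams:
        """The 'prepared statement' slots: typed holders for untrusted input."""
        topics: list[str] = field(default_factory=list)
        doctypes: list[str] = field(default_factory=list)

    KNOWN_DOCTYPES = {"pdf", "memo", "spreadsheet"}

    def extract_params(user_message: str) -> BoundParams:
        # Step 1: fill the bound parameters. In practice this would be a
        # constrained LLM call; stubbed here so the sketch is runnable.
        words = user_message.lower().split()
        return BoundParams(
            topics=[w for w in words if w not in KNOWN_DOCTYPES],
            doctypes=[w for w in words if w in KNOWN_DOCTYPES],
        )

    DOCS = [{"title": "rates memo", "type": "memo", "topics": ["rates"]}]

    def search_documents(params: BoundParams) -> list[dict]:
        # Step 2: the search runs in ordinary code. The raw user text never
        # reaches this layer; only the typed parameter values do, which is
        # the analogue of bound values never being inlined into the query.
        hits = [d for d in DOCS if set(d["topics"]) & set(params.topics)]
        if params.doctypes:
            hits = [d for d in hits if d["type"] in params.doctypes]
        return hits

    print(search_documents(extract_params("find the rates memo")))

The key property is that search_documents never sees the user's text, only validated values, so an injection attempt in the message can at worst skew which topics get extracted, not what the search layer does.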
I think the research Anthropic released recently showed that language is handled independently of the "concepts" it conveys: first you get the concepts, then the translation into language.
> Fed started raising rates in Apr 2022, at which point leaders started freaking out because they know what higher rates mean, and by Jun 2022 the Fed was raising them in 0.75% increments, which was unheard of in modern economics.
You're basically making the case that it happened fast and went up high, but everyone who paid attention to interest rates understood it was only a matter of time until they had to at least revert to pre-COVID levels (whether you think that's 1.5% or 2.3% or something else, depending on how you measure), and that obviously there would be real layoffs afterward.
The excuse really amounts to "it turned out more extreme than we thought", but was the behavior responsible even assuming non-extreme rate changes?
I'm sure this will be written up somewhere as an example of Google doing a good job at customer relations, despite the disaster it is for said customers.
That's nominal profit margins. It doesn't take into account the way that different parts of these conglomerates move money from one pocket to the other (e.g. between insurance and healthcare providers), which means it doesn't show up in the profits of the first pocket. This is why the fact that healthcare platforms are vertical monopolies (not just horizontal monopolies) matters to this conversation.
(Separately, profit-capping rules mean that once a monopoly is cemented, once a company has moved as much as it can from one pocket to the other, there's an internal incentive to spend money on bureaucracy.)
Because the regulations encourage vertical monopolies. Replacing that with a state-mandated vertical monopoly unconstrained by market forces isn't going to help. In fact it'll probably make the situation worse. There is no reason for there to be monopolies in healthcare, and if they are emerging, that strongly suggests misregulation. Giving the regulators more power in that sort of situation is the opposite of helping.
So far it's beating your argument pretty handily. I'm not sure why you think it's worth writing that without including one; my advice would be that if you're going to post a comment disagreeing with someone, you should include some actual arguments or evidence. It helps keep the threads from rambling on.
>That's nominal profit margins. It doesn't take into account the way that different parts of these conglomerates move money from one pocket to the other (e.g. between insurance and healthcare providers), which means it doesn't show up in the profits of the first pocket.
This is nonsense. UNH’s profit margin is all net income divided by all revenue.
Same with Elevance, CVS, Cigna, Humana, Centene, and Molina. There is a reason all these businesses aren’t at the top of the market cap rankings. Not even in the top 100.
UNH is up there due to sheer size and the fact that they sell high-profit-margin software and healthcare. Otherwise, you will not get rich starting a managed care organization. Even Warren Buffett, Jeff Bezos, and Jamie Dimon ran away with their tails between their legs.