I wonder if the GP is referring to the CLOUD Act. It is true that US companies cannot be compliant with both the GDPR and the CLOUD Act, but that doesn't weaken the case for European tech sovereignty.
We ran into exactly this doing financial document analysis. It's so quick to do an LLM-based "put this document into this schema" proof-of-concept.
Then you run it on 100,000 real documents.
And then you find there really are so, so many exceptions and special cases. So begins the journey of constructing layers of heuristics and codified special cases needed to turn ~80% raw accuracy into something asymptotically close to 100%.
That's the moat. At least where high accuracy is the key requirement.
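The shape of it ends up being roughly the sketch below (a loose illustration, not anyone's actual pipeline; Invoice, extract_with_llm and the specific rules are made-up placeholders): one LLM pass into a schema, then the accumulating layers of validation and special-case fixes, with anything still wrong routed to review.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Invoice:                      # the "schema" the LLM fills in
        total: Optional[float]
        currency: Optional[str]

    def extract_with_llm(text: str) -> Invoice:
        # Stub standing in for the actual "put this document into this schema"
        # LLM call; in reality you'd parse the model's JSON into the dataclass.
        return Invoice(total=-1234.5, currency=None)

    def apply_heuristics(doc: Invoice, text: str) -> Invoice:
        # The layers that accumulate after the first 100,000 real documents.
        if doc.currency is None and "EUR" in text:
            doc.currency = "EUR"
        if doc.total is not None and doc.total < 0:
            doc.total = abs(doc.total)   # e.g. credit notes extracted as negatives
        return doc

    def extract(text: str) -> Invoice:
        doc = apply_heuristics(extract_with_llm(text), text)
        if doc.total is None or doc.currency is None:
            raise ValueError("needs manual review")   # the asymptotic last mile
        return doc

    print(extract("Rechnung ... EUR 1.234,50"))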
If a compressor could compress every input of length N bits into fewer than N bits, it would map the 2^N possible inputs onto at most 2^N - 1 shorter outputs, so at least 2 inputs would have the same output and could not both be decompressed correctly. Thus there cannot exist a universal compressor.
Modify as desired for fractional bits. The essential argument is the same.
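The counting itself, as a toy check (N kept small just to enumerate):

    # 2^N inputs of length N, but only 2^N - 1 bit strings strictly shorter.
    N = 8
    inputs = 2 ** N                                   # 256
    shorter_outputs = sum(2 ** k for k in range(N))   # 1 + 2 + ... + 128 = 255
    assert shorter_outputs == 2 ** N - 1
    # 256 inputs mapped into 255 possible shorter outputs: two must collide,
    # and a collision can't be decompressed back to both originals.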
No, the subreddit has applied custom CSS to do that. It's the mildly infuriating subreddit. There's also an image of a hair visible on widescreen monitors, to make you think there's a hair on your display.
> The Outlook is Superficially Stable, defined here as “By outward appearances stable unless, you know, things happen. Then we’ll downgrade after the shit hits the fan.”
Why do you think the current government would be the slightest bit interested in solutions to housing, inflation or healthcare if Epstein wasn't an issue?
If you are transferring a conversation trace from another model, ... to bypass strict validation in these specific scenarios, populate the field with this specific dummy string:
"thoughtSignature": "context_engineering_is_the_way_to_go"
It's an artifact of the problem that they don't show you the reasoning output but need it for further messages, so they save each API conversation on their side and give you a reference number.

It sucks from a GDPR compliance perspective as well as in terms of transparent pricing: you have no way to control reasoning trace length (which is billed at the much higher output rate) other than switching between low/high, and if the model decides to think longer, "low" could result in more tokens used than "high" for a prompt where the model decides not to think that much (see the toy cost sketch at the end of this comment). "Thinking budgets" are now "legacy", so while you can constrain output length you cannot constrain cost.

You also cannot optimize your prompts. If some red herring makes the LLM get hung up on something irrelevant, only to realize this in later thinking steps, it will happen with EVERY SINGLE prompt if it's caused by something in your system prompt. Finding what makes the model go astray can be rather difficult with 15k-token system prompts or a multitude of MCP tools; you're basically blinded while trying to optimize a black box. You can try different variations of different parts of your system prompt or tool descriptions, but fewer thinking tokens does not mean better if those reasoning steps were actually beneficial (if only in edge cases). This would be immediately apparent upon inspection, but it's hard or impossible to detect without access to the full chain of thought.

For the uninitiated, the reasons OpenAI started replacing the CoT with summaries were (a) to prevent rapid distillation, which they suspected DeepSeek of having used for R1, and (b) to prevent embarrassment if app users see the CoT and find parts of it objectionable, irrelevant, or absurd (reasoning steps that make sense for an LLM do not necessarily look like human reasoning). That's a tradeoff that works well for end users but is terrible for developers.

Since open-weights LLMs necessarily output their full reasoning traces, the potential to optimize prompts for specific tasks is much greater, and for certain applications that will certainly outweigh the performance delta to Google/OpenAI.
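To make the pricing point concrete (illustrative only; the per-token rates are invented, the shape of the calculation is the point): hidden reasoning tokens are billed at the output rate, so the same prompt can cost more on "low" than on "high" depending purely on how long the model chooses to think.

    IN_RATE, OUT_RATE = 2.00 / 1e6, 10.00 / 1e6   # $/token, made-up example rates

    def cost(input_tok, output_tok, reasoning_tok):
        # Reasoning tokens are invisible to you but billed like output tokens.
        return input_tok * IN_RATE + (output_tok + reasoning_tok) * OUT_RATE

    # Same 15k-token prompt, same short answer; only the hidden thinking differs.
    print(cost(15_000, 400, 6_000))   # effort "low", model rambles  -> 0.094
    print(cost(15_000, 400, 1_500))   # effort "high", model doesn't -> 0.049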
For instance, one of the GDPR's six lawful bases for processing personal data is that the processing is necessary to comply with a legal obligation.
If you're going to make strong claims like that, the onus really is on you to give specific examples.