I was kind of hoping that there was some little-known x84 standard that never saw the light of day, but instead all I found was classic French racing cars.
>The person openly praising Hitler did not get banned at all.
Content moderation at Meta is a joke now. I reported an account multiple times for hate speech. The account's photos consisted entirely of racist caricatures of black people. Like absolutely vile, hateful shit.
Each time, I received a notification along the lines of: "We found that the account in question did not violate our community standards. Therefore, we did not take any action. Thanks for the report."
Existing off-the-shelf IR tools are mid, more recent research is often not productionized, and there are a lot of assumptions that hold in agentic contexts (at least in the coding realm, which is the one that matters) that you can take advantage of to push performance.
That plus babysitting Claude Code's context is annoying as hell.
>That plus babysitting Claude Code's context is annoying as hell.
It's crazy to me that—last I checked—its context strategy was basically tool use of ls and cat. Despite the breathtaking amount of engineering resources major AI companies have, they're eschewing dense RAG setups for dirt simple tool calls.
To their credit, it was good enough to fuel Claude Code's spectacular success, and it's fine for most use cases, but it really sucks not having proper RAG when you need it.
On the bright side, now that MCP has taken off I imagine one can just provide their preferred RAG setup as a tool call.
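For what it's worth, the plumbing for that is small these days. A minimal sketch in Python, assuming the official MCP Python SDK (the mcp package); search_index() is a hypothetical stand-in for whatever vector store you already run:

    # Minimal sketch: expose your own RAG retrieval as an MCP tool.
    # Assumes the official MCP Python SDK; search_index() is hypothetical.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("code-rag")

    def search_index(query: str, k: int) -> list[str]:
        """Hypothetical: embed the query and return the top-k code chunks
        from your vector store of choice (FAISS, pgvector, etc.)."""
        raise NotImplementedError

    @mcp.tool()
    def search_codebase(query: str, k: int = 5) -> str:
        """Semantic search over the indexed codebase; returns the top-k chunks."""
        return "\n---\n".join(search_index(query, k))

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default; point Claude Code's MCP config at it

Once it's registered, the agent can call search_codebase like any other tool, no changes to the harness required.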
You can, but my tool actually handles the raw chat context. So you can have millions of tokens in context, and the actual message that gets produced for the LLM is an optimized distillate, re-ordered to take LLM memory patterns into account. RAG tools are mostly optimized for QA anyhow, which has dubious carryover to coding tasks.
It's done immediately before the LLM call, transforming the message history for the API call.
This does reduce the context cache hit rate a bit, but the repacking is cache-aware, so I try to avoid repacking the early parts if I can help it. The tradeoff is 100% worth it, though.
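To make that concrete, here's a toy sketch of the repacking idea in Python. score(), distill(), and tokens() are hypothetical stand-ins, not my actual implementation; the point is freezing an early prefix to keep cache hits and distilling/reordering only the tail:

    # Toy sketch of cache-aware repacking. Keep a frozen prefix so the
    # provider's prompt cache still hits; distill and reorder only the tail.
    CACHED_PREFIX_LEN = 20   # messages never touched, to preserve cache hits
    BUDGET_TOKENS = 60_000   # target size for the repacked tail

    def score(msg: dict) -> float:
        """Hypothetical relevance of a message to the current task
        (real code might use embedding similarity)."""
        return float(len(msg.get("content", "")))

    def distill(msg: dict) -> dict:
        """Hypothetical compression: summarize or truncate a message."""
        return {**msg, "content": msg.get("content", "")[:2000]}

    def tokens(msg: dict) -> int:
        """Rough token count; real code would use the provider's tokenizer."""
        return len(msg.get("content", "")) // 4

    def repack(messages: list[dict]) -> list[dict]:
        prefix, tail = messages[:CACHED_PREFIX_LEN], messages[CACHED_PREFIX_LEN:]
        # Greedily keep the most relevant tail messages, compressed, under budget.
        kept, used = [], 0
        for msg in sorted(tail, key=score, reverse=True):
            small = distill(msg)
            cost = tokens(small)
            if used + cost > BUDGET_TOKENS:
                continue
            kept.append(small)
            used += cost
        # Order so the most relevant material lands last, right before the
        # model's reply - where "lost in the middle" effects hurt least.
        kept.sort(key=score)
        return prefix + kept

The real version obviously does smarter scoring and summarization, but the prefix/tail split is what keeps the cache hit rate tolerable.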
Sam Altman actually said this on a podcast with Theo Von recently.
“So if you go talk to ChatGPT about your most sensitive stuff and then there’s like a lawsuit or whatever, we could be required to produce that,” Sam told Theo.
He even asked Theo about his own ChatGPT usage, and Theo admitted he doesn’t use it much because of privacy concerns.
Sam’s response:
“I think it makes sense... to really want the privacy clarity before you use [ChatGPT] a lot — like the legal clarity.”
I doubt that's it. Artificial demand just adds noise to the signal; it doesn't eliminate it. It seems more likely that they've just decided that knowing the Pentagon is working on something, without additional details, isn't a very useful signal for adversaries.
I was about to make this same comment - this data might've been more useful for, say, the Soviets, back when they were the only major threat the US was actively dealing with. If they spotted a ton of pizzas being ordered to the Pentagon, they could be fairly sure it was something relevant to them.
One of the examples was the night before Desert Storm kicked off in '91. For those who weren't around: there was a huge build-up operation called Desert Shield, which only became Desert Storm once the shooting started. It's not like that was a secret, and the Iraqis could have watched this data and not been surprised when the bombs started falling immediately after the pizza surge.
If you were bin Laden, you might not have been caught unawares when helicopters were about to crash in your garden. It's not like you didn't know they were looking for you.
I can't think of an adversary of the Pentagon, or of the other agencies this applies to, who wouldn't already know they were the adversary. This might be more relevant than you'd think.
Plus, when it's time to go Mutually Assured Destruction, i.e. against those capable of attacking the US mainland, I'm going to assume they'd be doing all of this from whatever the successor to Mt Weather is.
>Per Bloomberg, Zuckerberg has been personally leading recruiting for the team of about 50 people — and has even rearranged Meta's offices so the new hires sit near him.
> Meta CEO Mark Zuckerberg is personally creating a new "superintelligence" team dedicated to building the world's most advanced AI platform, and splashing out nine-figure packages to hire top talent, the New York Times and Bloomberg reported Tuesday.
Says the person going out of their way to attack another person over a single-character asterisk substitution.
Seems fairly understandable to not want to piss off rabid lawyers, however remote the chances of angering them may be.