This reminds me of a post earlier this year, “Looksmapping” [0], where the author ranked restaurants by the attractiveness of the reviewers according to their profile photos.
These NK operations are after salary and intelligence. The salary ensures the operation is self-sustaining even while it doesn’t yield actionable intelligence. They can keep growing their army of staffers until they get a hit on a prime target. And since nobody at the company meets the actual worker, they can rotate “employees” as needed – when a big score is in sight, they can plug their best hackers into the operation to “close the deal” (stealing cryptocurrency, trade secrets, or other actionable intel).
Not even the quoted passage from the book makes the claim in the title. It basically says Palantir had a contract with the IDF during the same time the IDF executed the pager attack. There is zero substantiation of the claim that Palantir assisted with the attack itself. It’s mostly a breathless description of Palantir’s standard operating practice — namely, sending “forward deployed engineers” (consultants) to customers — garnished with some emotional (but clever) wordplay like “Operation Grim Beeper.”
The most infuriating “feature” of autocorrect is that it includes all your contact names in your dictionary, with no way to opt out aside from disabling autocorrect entirely. This can lead to some awkward texts when an innocent typo (or even a correctly spelled technical term) turns into the name of someone who should not be in your phone…
I wonder if this is related to the fact that every Apple app shows up as “recently accessing” contacts in App Privacy Report. And I don’t mean only photos (face recognition), but: Safari, Camera, Shortcuts, Mail, Health… why? I’ve never even configured a Mailbox. Why are these apps all accessing my Contacts?
This drives me nuts because I put things like "(Alexander)" after someone's name to indicate who I met them through, who they're friends of, where I met them, etc.
Then whenever I dictate "Alexander", it shows up as "(Alexander)", parentheses and all. Drives me mad.
I'm astonished people on this site use autocorrect at all. IMO it's a mind-bogglingly insane antifeature, even more insane than that weird "replace arithmetic expressions with their result" thing Apple once did.
It's worse because it changes text you intentionally entered. To be sure: I'm not talking about next-word suggestions, only about it changing words after you've already written them.
If someone is asking a technical question along the lines of “how does this work” or “can I do this,” then I’d expect them to Google it first. Nowadays I’d also expect them to ask ChatGPT. So I’d appreciate their preamble explaining that they already did that, and giving me the chance to say “yep, ChatGPT is basically right, but there’s some nuance about X, Y, and Z…”
Calling these letters toothless is missing the point. Ofcom doesn’t expect 4chan to comply. They are creating a paper trail to justify the next step of forcing UK ISPs to block the content at a network level.
Some of the best software engineers I know are ex-physics PhDs… it’s one of those “can’t fake it” skillsets that also happens to have high transferability to ML/AI fields. On the other hand, I snuck through the CS major without ever multiplying a matrix.
Yeah… one of them addresses a market populated by hundreds of thousands of developers with extensive professional experience in the framework, and the other addresses a niche of Python developers who refused to learn JavaScript until somebody hid it from them and called it hypermedia.
Hundreds of thousands used to use PHP too :) Most developers (roughly 97.56%) are terrible/incompetent, so going with the herd should tell you you’re on the wrong train :)
Thousands of developers still use PHP… and even more users do: WordPress (43% of the web), Facebook (billions of users), Wikipedia (billions of users)… all PHP.
htmx is a toy, mildly amusing to play with, built on an insecure foundation that bypasses basic browser security controls and hands a blob of JavaScript to a bunch of backend developers who can’t be bothered to learn it because they think they know better…
No serious project uses htmx and none ever will, because it becomes an unmaintainable mess by the third developer and second year of development.
“No serious project uses [insert any framework/language/…] and none ever will, because it becomes an unmaintainable mess by the third developer and second year of development” if the team is incompetent.
JS has the fastest, most robust and widely deployed sandboxing engines (V8, followed closely by JavaScriptCore which is what Bun uses). It also has TypeScript which pairs well with agentic coding loops, and compiles to the aforementioned JavaScript which can run pretty much anywhere.
Note that "sandboxing" in this case is strictly runtime sandboxing - it's basically like having a separate process per event loop (as if you ran separate Node processes). It does not sandbox the machine context in which it runs (i.e. it's not VM-level containment).
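Roughly speaking, it's the same flavor of isolation you get from node:worker_threads: each worker has its own JS heap, globals, and event loop, but everything still shares the host process. A quick sketch of my own (nothing Bun-specific, and the worker body is just an illustration):

    import { Worker } from "node:worker_threads";

    // Each Worker gets its own heap, globals, and event loop, so state can't
    // leak between them - but they share the host process, so this isolates
    // JavaScript state, not the machine.
    const worker = new Worker(
      `const { parentPort } = require("node:worker_threads");
       globalThis.secret = "only visible inside this isolate";
       parentPort.postMessage("hello from a separate event loop");`,
      { eval: true }
    );
    worker.on("message", (msg) => console.log(msg));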
When you say runtime sandboxing, are you referring to JavaScript agents? I haven't worked all that much with JavaScript execution environments outside the browser, so I'm not sure what sandboxing mechanics are available.
Bun claims this feature is for running untrusted code (https://bun.com/reference/node/vm), while Node says "The node:vm module is not a security mechanism. Do not use it to run untrusted code." I'm not sure whom to believe.
It's interesting to see the difference in how both treat the module. It feels similar to a realm, which makes me lean, by default, toward not trusting it for untrusted code execution.
It looks like Bun also supports ShadowRealms, which from my understanding were intended more for sandboxing (although I have no idea how resources are shared between a host environment and a ShadowRealm, and how that might differ from the node:vm module).
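For reference, the proposed API surface is tiny. A rough sketch assuming the TC39 ShadowRealm shape (TypeScript's lib doesn't ship the type yet, so it's declared by hand here):

    // Assumed TC39 ShadowRealm shape; not in TypeScript's lib.d.ts yet.
    declare const ShadowRealm: new () => { evaluate(src: string): unknown };

    const realm = new ShadowRealm();

    // The realm gets its own fresh globals; assignments inside don't leak out.
    realm.evaluate("globalThis.leaked = 'nope'");
    console.log("leaked" in globalThis); // false

    // Only primitives and callable wrappers can cross the boundary.
    const add = realm.evaluate("(a, b) => a + b") as (a: number, b: number) => number;
    console.log(add(2, 3)); // 5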
The reference docs are auto-generated from Node’s TypeScript types. node:vm is better than using the same global object to run untrusted code, but it’s not really a sandbox.
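A quick sketch of both halves of that; the last call is the well-known constructor-chain escape that comes up whenever node:vm security is discussed:

    import { createContext, runInContext, runInNewContext } from "node:vm";

    // Separate globals: the evaluated code only sees what we put in the context.
    const sandbox = { result: 0 };
    createContext(sandbox);
    runInContext("result = 6 * 7", sandbox);
    console.log(sandbox.result); // 42

    // But not a security boundary: the classic escape reaches the host's
    // `process` object through the sandbox's constructor chain.
    const hostProcess = runInNewContext(
      'this.constructor.constructor("return process")()'
    );
    console.log(hostProcess === process); // true on stock Node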
> It also has TypeScript which pairs well with agentic coding loops, (...)
I've heard that TypeScript is pretty rough on agentic coding loops because the idiomatic static type assertion code ends up requiring huge amounts of context to handle in a meaningful way. Is there any truth to it?
> Not sure where you heard this but general sentiment is the opposite.
My personal experience and anecdotal evidence are in line with this hypothesis. Using the likes of Microsoft's own Copilot on small, simple greenfield TypeScript 5 projects gives surprisingly poor results the minute you start leaning heavily on type safety and idiomatic techniques such as branded types.
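For anyone unfamiliar with the idiom, a minimal branded-type example (names are made up, not from any real project):

    // A branded type: a nominal wrapper over a structural string type.
    type UserId = string & { readonly __brand: "UserId" };

    const asUserId = (raw: string): UserId => raw as UserId;

    function loadUser(id: UserId): void {
      // ...fetch the user...
    }

    loadUser(asUserId("u_123")); // ok
    // loadUser("u_123");        // compile error: plain strings are rejected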
> There was recently a conference which was themed around the idea that typescript monorepos are the best way to build with AI
It's especially tricky since monorepos are an obvious antipattern to begin with. They're a de-separation of concerns: an encouragement to blur the unit boundaries, not write docs, create unstable APIs (updating all usages at once when they change), and generally to let complexity spread unchecked.
Hate to say it, but this sounds like a skill issue. The reason TypeScript monorepos are gaining popularity for building with AI is how powerful TS's inference system is. If you're writing lots of types, you're doing it wrong.
You declare your schema with a good TS ORM, then use something like tRPC to get type inference from your schemas in your route handlers and your front end (see the sketch below).
You get an enforced single source of truth that keeps the AI on track with a very small amount of code compared to something like Java.
This really only applies to full-stack SaaS apps, though.
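A minimal sketch of that schema-to-client inference pattern, assuming tRPC and Zod (the ORM layer is elided and the names are made up):

    import { initTRPC } from "@trpc/server";
    import { z } from "zod";

    const t = initTRPC.create();

    export const appRouter = t.router({
      userById: t.procedure
        .input(z.object({ id: z.string() }))
        .query(({ input }) => {
          // Whatever is returned here becomes the client's inferred return type.
          return { id: input.id, name: "Ada" };
        }),
    });

    // The front end imports only this type, so the router stays the single
    // source of truth - no hand-written or generated API client to drift.
    export type AppRouter = typeof appRouter;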
> It also has TypeScript which pairs well with agentic coding loops
The language syntax has nothing to do with it pairing well with agentic coding loops.
Given how close TypeScript and C# are syntactically, C#'s speed advantage over JS (among many other things) should have made C# the main language for building agents. It is not, and that's because the early SDKs were JS and Python.
TypeScript is probably a good LLM language in general because it has:
- static types
- tons and tons of training data
Kind of a tangent, but I used to think static types were a must-have for LLM-generated code. The most magical and impressively awesome thing I’ve seen for LLM code generation, though, is “Calva Backseat Driver”, a VS Code extension that lets Copilot evaluate Clojure expressions and generally do REPL stuff.
It can write MUCH cleaner and more capable code, using all sorts of libraries it’s unfamiliar with, because it can mess around and try stuff just like a human would. It’s mind-blowingly cool!!
> C#'s speed advantage over JS among many other things would make C# the main language
Nobody cares about this; JS is plenty fast for LLM needs. If maximum performance were necessary, you'd be better off using Go, thanks to its fast compiler and better performance.
And that was my point. The choice of JS/TS for LLM stuff was made for us by the initial wave of SDK availability. Nothing to do with language merits.
This has always been the case. The Java and C# ecosystems prioritise stability and scale. They wait for ideas to prove themselves in other languages like Erlang, Python, Go, Scala, and so on, and then adopt the successful ones. Last-mover advantage. That said, there are some caveats. Java is still missing value types, while C# has moved quickly with async/await rather than adopting models like goroutines or virtual threads, which can sometimes limit concurrency ergonomics for the developer.
[0] https://news.ycombinator.com/item?id=44461015