In the same way that crypto folks speedran "why we have finance regulations and standards", LLM folks are now speedrunning "how to build software paradigms".
The concept they're trying to accomplish (exposing possibly-remote functions to a caller in an introspectable manner) has plenty of existing examples in DLLs, gRPC, SOAP, IDL, DCOM, etc., but they don't seem to have learned from any of them, let alone be aware that they exist.
Give it more than a couple of months, though, and I think we'll see it mature some more. We just got their auth patterns onto existing rails and concepts; now we just have to eat the rest of the camel.
I don't disagree with the article, but after working in big tech, two HN startups, a couple of unicorns, and others, on two continents, I don't really find this too actionable.
In the last ten years (and even in the 20-person HN startups), the day-to-day work of engineers has become so incredibly specialised and divorced from the needs of decision-makers and customers that there is almost nothing I can do to influence whether someone views me as doing my job or not, mainly because of the Product Managers who insert themselves between engineers and the rest of the company.
I'm always interested in delivering value, but the fight necessary to actually do so has become stressful. It's no longer a collaboration; all my contributions must be filtered through the ego of the person speaking to decision-makers.
In fact, the only time I was actually satisfied with my work in the last 5 years (as opposed to my paycheck) was when I was acting as interim Product Manager for 9 months. Unsurprisingly, my team and I managed to deliver three projects that other teams had tried and failed at several times.
Most of that was accomplished by communicating with stakeholders and actually figuring out what they needed, rather than endlessly "trying to put my own spin" on it.
So yeah, I'm gonna keep delivering whatever is asked, getting the blame for bugs and not getting the credit for features. At least the pay is alright. I'm constantly searching for the place where I can actually fully contribute, though.
That is a very obvious thing for them to say though, regardless of what they truly believe, because (a) it legitimizes removing the cap, making fundraising easier, and (b) it averts antitrust suspicions.
The MAGA people attacking Hollywood is such a big mistake. Yes, I know the average celebrity doesn't share their views, but that doesn't change the fact that Hollywood projects American culture around the world in a way that the government could never do itself. Movies and music made here are so often cited as the reason why young people around the world idealize America and want to emulate us (buy our clothes, speak English, etc.).
It’s the cultural equivalent of being the world’s reserve currency, it’s a massive free advantage in almost any situation. Stupid stupid stupid to threaten it.
When I was young and easily swayed, I took life advice from a well-known Dutch comedian (Youp van 't Hek) who loved to mock tourists taking those cringe “holding up the Leaning Tower of Pisa” photos. The message was clear: tourist photos were tacky, and besides, you could always find a better photo in the gift shop anyway.
So for years, I smugly avoided taking photos—too cool for clichés. It only hit me much later that I wasn’t missing out on better shots of monuments… I was missing pictures of the people I was with. Family and friends looking younger, sometimes happier, and—how shall I put it—sometimes still alive.
Here's my personal submission for "UI problem that has existed for years on touch interfaces, plus a possible solution, but at this point I'm just shouting into the void":
In short, an interface should not be interactable until a few milliseconds after it has finished (re)rendering, and especially not while it is still reflowing or repopulating itself in realtime, or still sliding into view.
Most frustratingly, this happens when I accidentally fat-finger a notification that literally just slid down from the top as I went to tap a UI element in that vicinity, which then causes me to also lose the notification (since iOS doesn't have a "recently dismissed notifications" UI).
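A minimal, framework-agnostic sketch of the idea in Python; the `TapGuard` name and the 300 ms settle window are my own assumptions, not anything from a real toolkit:

```python
import time

class TapGuard:
    """Ignore taps that land too soon after a layout change.

    `settle_ms` is a hypothetical tuning knob; 300 ms is an
    assumed default, not a measured value.
    """

    def __init__(self, settle_ms: int = 300):
        self.settle_ms = settle_ms
        self._last_layout_change = 0.0

    def notify_layout_changed(self) -> None:
        # Call this whenever an element appears, moves, or resizes
        # (e.g. a notification banner sliding into view).
        self._last_layout_change = time.monotonic()

    def should_accept_tap(self) -> bool:
        # Gate input on layout stability instead of accepting it
        # the instant the pixels change under the user's finger.
        elapsed_ms = (time.monotonic() - self._last_layout_change) * 1000
        return elapsed_ms >= self.settle_ms
```

The real work would be wiring `notify_layout_changed` into the platform's animation and layout callbacks; the point is just that taps are rejected until the UI has been stable for a moment.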
As someone who experienced this firsthand growing up, I consider how someone feels about free school lunch a basic test of their humanity.
If you think kids should go hungry or be embarrassed at school because of their parents' finances… we can't be friends, nor acquaintances. IMHO, you are subhuman at that point and not worth my time.
My dad believed that because he paid taxes he shouldn't have to pay the school to feed me. I begged, borrowed, and stole spare change to pay. He'd chip in once in a while, but once you are so far in debt they won't feed you anymore (at least they didn't at the time). I remember going to the lost and found every day to check the pockets of the clothes in there. I learned how to pick the locks on the gym lockers and would steal money from other kids' pockets. I sometimes left school so I could go steal lunch from a grocery store nearby. I got caught once, but after the lady knew what was up, she was conveniently always looking away from me if I came in at midday. From the bottom of my heart I hope she receives every possible blessing in this life.
No child should have to do that. Ever! Happy to pay taxes to and live in a state that has solved this problem!
I have been in and out of the academic world my entire career. I have worked as a programmer/engineer for two universities and a national lab, and worked at a startup founded by some professors. There is huge uncertainty among the people I have worked with; nobody seems to be sure what is going to happen, but it feels like it won't be good. Hiring freezes, international graduate students receiving emails to self-deport, and at my last institute many people's funding no longer supports travel to attend conferences (a key part of science!).
One of the interesting aspects of science funding that I think a lot of people don't consider is strategic investment. At one point I was paid from a government grant to do high-power laser research. Of course there were goals for the grant, but the grant was specifically funded so that the US didn't lose the knowledge of HOW to build lasers. The optics field, for example, is small, and there are not that many professors. It is an old field; most of the real research happens in private industry. But what happens if a company goes out of business? If we don't have public institutions with the knowledge to train new generations, then information can and will be lost.
I handle reports for a one million dollar bug bounty program.
AI spam is bad. We've also never had a valid report generated by an LLM (that we could tell).
People using them will take any explanation of why a bug report is not valid, any questions, or any requests for clarification and run them back through the same confused LLM. The second pass generates even deeper nonsense.
It's making responding with anything but "closed as spam" not worth the time.
I believe that one day there will be great code-examining security tools. But people believe in their hearts that that day is today, and that they are riding the backs of fire-breathing hack dragons. It's the people that concern me. They cannot tell the difference between truth and garbage.
If students went to college only to learn, colleges wouldn't bother giving diplomas.
Compare: My piano teacher doesn't give diplomas because none of her students would care, her students actually want to learn. When my piano teacher cancels class, I am disappointed because I wanted to learn. My piano teacher doesn't need to threaten me with bad grades to get me to practice outside of class (analogous to homework), because I actually want to learn.
There are many college students for whom none of these tests would pass. They would not attend if there were no diploma, they're relieved when their professors cancel class, and they need to be bullied into studying outside of class.
What made us think these students were ever interested in learning in the first place? Instead, it seems more likely that they just want a degree because they believe that a degree will give them an advantage in the job market. Many people will never use the information that they supposedly learn in college, and they're aware of this when they enroll.
Personally, the fact that they can now get a degree with even less wasted effort than before doesn't bother me one bit. People who want to learn still have every opportunity to.
Feedback 1: The README really needs more details. What does it do/not do? Don't assume people have used Cursor. If it is a Cursor alternative, does it support all of Cursor's features?
As a non-Cursor user who does AI programming, there is nothing there to make me want to try it out.
Feedback 2: I feel any new agentic AI tool for programming should have a comparison against Aider[1], which for me is the tool to benchmark against. Can you give a compelling reason to use this over Aider? Don't just say "VSCode" - I'm sure there are extensions for VSCode that work with Aider.
As an example of the questions I have:
- Does it have something like Aider's repomap (or better)?
I've transcended the vanilla/framework arguments in favor of "do we even need a website for this?".
I've discovered that when you start getting really cynical about the actual need for a web application - especially in B2B SaaS - you may be surprised at how far you can take the business without touching a browser.
The vast majority of the hours I've spent building web sites & applications have been devoted to administrative-style UI/UX, wherein we are ultimately giving the admin a way to mutate fields in a database somewhere so that the application behaves to the customer's expectations. In many situations, it is clearly 100x faster/easier/less bullshit to send the business a template of the configuration (Excel files) and then load+merge their results directly into the same SQL tables.
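For concreteness, here's a minimal sketch of that load+merge step in Python with pandas and SQLAlchemy. The file, table, and column names are all hypothetical, and it assumes a Postgres unique constraint on (customer_id, setting_key):

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://app:secret@localhost/appdb")

# Load the filled-in template the customer sent back.
config = pd.read_excel("customer_config.xlsx", sheet_name="Settings")

# Basic sanity checks before touching production tables.
required = {"customer_id", "setting_key", "setting_value"}
missing = required - set(config.columns)
if missing:
    raise ValueError(f"Template is missing columns: {missing}")

# Stage the rows, then merge so a re-sent template upserts cleanly.
config.to_sql("app_config_staging", engine, if_exists="replace", index=False)
with engine.begin() as conn:
    conn.exec_driver_sql("""
        INSERT INTO app_config (customer_id, setting_key, setting_value)
        SELECT customer_id, setting_key, setting_value
        FROM app_config_staging
        ON CONFLICT (customer_id, setting_key)
        DO UPDATE SET setting_value = EXCLUDED.setting_value
    """)
```

That's the whole "admin UI": a spreadsheet, an email, and thirty lines of glue.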
The web provides one type of UI/UX. It isn't the only way for users to interact with your product or business. Email and flat files are far more flexible than any web solution.
IMO any system where taking a dependency is "easy" and there is no penalty for size or cost is going to eventually lead to a dependency problem. That's essentially where we are today both in language repositories for OSS languages and private monorepos.
This is partly due to how we've distributed software over the last 40 years. In the 80s, the idea of a library of functionality was something you paid for, and you painstakingly included parts of it into your size-constrained environment (fit it on a floppy). You probably picked apart that library and pulled out the bits you needed, integrating them into your builds to be as small as possible.
Today we pile libraries on top of libraries on top of libraries. It's super easy to say `import foolib`, then call `foolib.do_thing()` and just start running. Who knows or cares what all of 'foolib' contains?
At each level a caller might need 5% of the functionality of any given dependency. The deeper the dependency tree gets the more waste piles on. Eventually you end up in a world where your simple binary is 500 MiB of code you never actually call, but all you did was take that one dependency to format a number.
In some cases the languages make this worse. Go and Rust, for example, encourage everything in a single package/module to go in the same file. Adding optional functionality can get ugly when it would require creating new modules, but if you only want to use a tiny part of the module, what do you do?
The only real solution I can think of to deal with this long term is ultra-fine-grained symbols and dependencies. Every function, type, and other top-level language construct needs to declare the set of things it needs to run (other functions, symbols, types, etc). When you depend on that one symbol it can construct, on demand, the exact graph of symbols it needs and dump the rest for any given library. You end up with the minimal set of code for the functionality you need.
It's a terrible idea and I'd hate it, but how else do you address the current setup of effectively building the whole universe of code branching from your dependencies and then dragging it around like a boat anchor of dead code?
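To make the idea concrete, here's a toy sketch in Python of what symbol-level pruning could look like; the symbol names and the declared-dependency table are entirely made up:

```python
# Each top-level symbol declares what it directly needs; the build
# keeps only the transitive closure of what the caller actually uses.
DECLARED_DEPS = {
    "foolib.do_thing": {"foolib.parse", "foolib.Config"},
    "foolib.parse": {"foolib.tokenize"},
    "foolib.tokenize": set(),
    "foolib.Config": set(),
    "foolib.giant_unused_feature": {"foolib.heavy_helper"},
    "foolib.heavy_helper": set(),
}

def minimal_symbol_set(roots: set[str]) -> set[str]:
    """Walk the declared-dependency graph from the symbols we call."""
    needed, stack = set(), list(roots)
    while stack:
        sym = stack.pop()
        if sym not in needed:
            needed.add(sym)
            stack.extend(DECLARED_DEPS.get(sym, set()))
    return needed

# Depending on do_thing pulls in 4 symbols; the unused feature and
# its heavy helper never make it into the build.
print(sorted(minimal_symbol_set({"foolib.do_thing"})))
```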
China has the upper hand and the US is being run by morons. With the US cutting itself off from the largest supplier of electronics while simultaneously destroying the basic research infrastructure that keeps America on the bleeding edge, you can expect a brain drain towards Europe and Asia. The damage from this administration will last generations.
This list is very telling. Instead of a healthy marketplace of companies competing to sell their software and services, we end up with one monopolist who gives away mediocre products and in return taxes everything you buy (in the form of ad spending), and then annoys you with the same ads. How is this a desirable outcome?
> Windsurf began in 2021 as Exafunction, founded by MIT graduates Varun Mohan and Douglas Chen. The company initially focused on GPU optimization before pivoting to AI-assisted coding tools, launching Codeium, which later evolved into Windsurf.
> Series B (January 2024): $65 million at a $500 million valuation.
> Series C (September 2024): $150 million, led by General Catalyst, at a $1.3 billion valuation.
> May 2025: $3 billion acquisition by OpenAI
I wonder how much of the value is really from the model versus the tooling around it. They all use the same models (mostly Claude; others have been horrible and buggy in my experience). Even Copilot's agent mode now uses Claude. The editor has its own model (?) that applies the edits, since LLMs often return snippets. They work well enough in Cursor. And then you have the auto-complete, which I think is their own model as well.
But for me the main value is the agent mode, and 95% of that value is the underlying model. The other stuff could be more or less a VS Code plugin. The other benefit is the fixed pricing. I have no idea how much 500 calls would cost if I were to use the API directly, but I expect they're probably losing money.
Yes. 100%. Before energy star, refrigerators were made with heating coils glued to the outer panels because it was cheaper to warm the outside of the fridge to avoid condensation than it was to install adequate insulation inside the fridge. The operating cost of those lightly insulated fridges was much higher, but the parts cost was a few dollars lower. Energy star and those yellow power consumption stickers changed that.
> In our own day, the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice and labor.
I don't think he's suggesting that AI is inherently bad, but that (like any tool) it can be abused by those with wealth and power in a way that violates human dignity.
In fact, one of the problems the previous Pope Leo warned about in "Rerum Novarum" was not just the intentional abuse of power through technological advances but the unintentional negative consequences of treating industry as a good in itself, rather than a domain that is in service to human interests.
For those who are interested in how this social teaching informed economic systems, check out the concept of distributism, popularized by Hilaire Belloc and G.K. Chesterton.
It's worth a read, even if it's not obvious what it's about from that title.
"To get the actual data, you need to go through a website maintained by the US Trade Commission. This website has good and bad aspects. On the one hand, it’s slow and clunky and confusing and often randomly fails to deliver any results. On the other hand, when you re-submit, it clears your query and then blocks you for submitting too many requests, which is nice."
I've seen a lot of high level engineers at Google leave over the past couple of years. There's vastly more pressure from management and much less trust. And a bunch of L7+ folks have been expected to shift to working on AI stuff to have "enough impact." The increased pressure has created a lot of turf wars among these folks, as it isn't enough to be a trusted steward but now you need your name at the top of the relevant docs (and not the names of your peers).
Prior to 2023 I pretty much only ever saw the L7s and L8s that I work with leave Google because there was an exciting new opportunity or because they were retiring. Now most of the people I see leave at this level are leaving because they are fed up with Google. It's a mess.
> the documentation is poorly written (all LLM vendors seem to have an internal competition in writing confusing documentation).
This is almost certainly because they're all using LLMs to write the documentation, which is still a very bad idea. The MCP spec [0] has LLM fingerprints all over it.
In fact, misusing LLMs to build a spec is much worse than misusing them to avoid writing good docs because when it comes to specifications and RFCs the process of writing the spec is half the point. You're not just trying to get a reasonable output document at the end (which they didn't get anyway—just try reading it!), you're trying to figure out all the ways your current thinking is flawed, inadequate, and incomplete. You're reading it critically and identifying edge cases and massaging the spec until it answers every question that the humans designing the spec and the community surrounding it have.
Which means in the end the biggest tell that the MCP spec is the product of LLMs isn't that it's somewhat incoherent or that it's composed entirely of bullet lists or that it has that uniquely bland style: it's that it shows every sign of having had very little human thought put into it relative to what we'd expect from a major specification.
The solution proposed by Kagi—separate the search index from the rest of Google—seems to make the most sense. Kagi explains it more here: https://blog.kagi.com/dawn-new-era-search
He's been on a bit of a book tour recently, and his name kept dimly ringing a bell for me every time I saw him pop up. Then one day it hit me: Rogoff is the economist who was found to have made a serious mistake in his paper about the effect of debt levels on GDP growth a decade and a half ago. The paper argued that the higher the level of debt, the more GDP growth slowed down and reversed. This paper was used to support a lot of austerity policies in response to the GFC in the years following 2008. Some grad students at the time looked into it, though, and found that there were lots of serious mistakes in the paper.
Leaving a comment just in case others are experiencing that same misconnect. As far as the article goes, we'll see! I'm inclined to think it's true that the US is retreating from the world stage and the dollar will follow, but whether that happens now, later, or never, I couldn't say. Interesting times!
Either Klarna is really good at pulling strings to get media coverage, or mainstream media does no fact checking itself. About a year ago, the company was everywhere in the media when its CEO announced that it had created an AI bot doing the equivalent work of 700 full-time customer service folks.
I did what seemingly no other publication reporting on it did: signed up for Klarna, bought one item and used this bot.
I was... not impressed?
Klarna's "AI bot" felt like the "L1 support flow" that every other company already has in-place: without AI! Think like when you have a problem with your UberEats order and 80% of cases are resolved without a human interaction (e.g. when an item is missing for your item.)
I walked through the bot's capabilities [1] and my conclusion was that pretty much every other company had done this before (automating the obvious support cases). The real question should have been: why did Klarna not do it before? And when it did, why did it build a wonky AI bot instead of the more intuitive workflows other companies built?
My sense is that Klarna really wants to be seen as an "AI-first tech company" when it goes public, and not a "buy now, pay later loan company", because AI companies have higher valuations even at the same revenue. But at its core, Klarna is a finance or ecommerce company with not much to do with AI (even if it uses AI tools to make its business more efficient, regardless of whether it could use non-AI tools to get the same thing done).
Neither? I'm surprised nobody has said it yet. I turned off AI autocomplete, and sometimes use the chat to debug or generate simple code but only when I prompt it to. Continuous autocomplete is just annoying and slows me down.
I interned at zed during the summer of 2022, when the editor was pre-alpha. Nathan, Max, and Antonio are great guys and build software with care. I'm happy to see the editor receive the success it deserves, because the team has poured so much world-class engineering work into it.
I worked with Antonio on prototyping the extensions system[0]. In other words, Antonio got to stress test the pair programming collaboration tech while I ran around in a little corner of the zed codebase and asked a billion questions. While working on zed, Antonio taught me how to talk about code and make changes purposefully. I learned that the best solution is the one that shows the reader how it was derived. It was a great summer, as far as summers go!
I'm glad the editor is open source and that people are willing to pay for well-engineered AI integrations; I think originally, before AI had taken off, the business model for zed was something along the lines of a per-seat model for teams that used collaborative features. I still use zed daily and I hope the team can keep working on it for a long time.
[0]: Extensions were originally written in Lua, which didn't have the properties we wanted, so we moved to Wasm, which is fast + sandboxed + cross-language. After I left, it looks like Max and Marshall picked up the work and moved from the original serde+bincode ABI to Wasm interface types, which makes me happy: https://zed.dev/blog/zed-decoded-extensions. I have a blog post draft about the early history of Zed and how extensions with direct access to GPUI and CRDTs could turn Zed from a collaborative code editor into a full-blown collaborative application platform. The post needs a lot of work (and I should probably reach out to the team) before I publish it. And I have finals next week. Sigh. Some day!