It’s especially bad when they conquer the whole market. That’s why investors favor growth and adoption, even at a loss, until a company has won the market and can turn up the monetization dial.
All the streaming services are enshittifying, even the smaller ones. And smaller webshops are enshittifying the same way Amazon does. As Cory Doctorow described, there are a few big webshops in the Netherlands, like bol.com and coolblue.com, and they now also allow third-party sellers, often from China. The webshops absolve themselves of all responsibility, but they do take a cut of every transaction.
I wholeheartedly agree. Bluesky had an interesting idea for identification: you verify your account with a domain you control by adding a DNS TXT record at _atproto.
The problem is that a domain is only rented, and thus so is the username. My DNS provider, Porkbun, offered a 5-year deal, but I would pay for much longer if I could.
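For reference, the verification record looks something like this as a zone-file entry. The domain and DID value here are placeholders, not a real account:

```
; Illustrative DNS zone snippet for Bluesky/atproto handle verification.
; The TXT record lives at the _atproto subdomain of the handle,
; and its value is "did=" followed by the account's DID.
_atproto.example.com.  300  IN  TXT  "did=did:plc:abc123xyz"
```

Once the record resolves, Bluesky can confirm that whoever controls the account also controls the domain, and the handle becomes @example.com.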
Zed is fantastic. I've been making the leap from neovim to zed lately, and it's been a great experience. Everything feels snappy, and I love how well they've integrated Vim bindings. Their agent mode is nice as well.
It's clearly an underdog to VSCode, so the extension ecosystem isn't quite there yet... but for a lot of the things I've used it for, it's sufficient. The debugger has been the big missing feature for me and I'm really glad they've built it out now - awesome work.
I love how fast Windsurf and Cursor are with the "tab-tab-tab" code auto-completion, where nearly everything suggested is spot-on and the suggestions keep on rolling, almost automating the entire task of refactoring for you. This form of autocomplete works really well with TypeScript and other scripting languages.
IntelliJ / RustRover never got anywhere close to the level of behavior you can get in Cursor and Windsurf, neither with JetBrains' own models nor with Copilot. I chalked it up to an IDE/model/language mismatch: that Rust just wasn't amenable to this.
A few questions:
1) Are we at that magical point yet where tab-tab-tab autocompletes everything fluently with Rust? (And does this work in Zed?)
2) How does Zed compare to Cursor and Windsurf? How does it compare to RustRover, and in particular, JetBrains' command of the Rust AST?
The LSP support for Rust has trailed JetBrains' own Rust plugin, which has long since morphed into the language-specific IDE, RustRover.
RustRover has the best AST suggestions and refactoring support out there. It works in gigantic workspaces, across build scripts, proc macros, and dynamic dispatch.
The problem with RustRover has been the lackluster AI support. I've been finding AI autocomplete generally much more useful than AST understanding, though having both would be killer.
I know they’re actively working on this. They released a few updates to the AI extension to make it modular, so you can pick your own model, for example. Soon it will let you wire up your own agents, but if I recall correctly, the reason it’s a bit slower there is a lack of uniform interfaces.
It's fantastic for Rust; it's my main IDE, in which I've written e.g. voltlane.net. Fantastic software, and the LLM integration is everything you need, IMO (in a good way).
Every time I've encountered a vim emulator, I've found it's just close enough that my fingers do the wrong thing often enough to be frustrating. Almost to the point where I'd prefer a non-vimmy editor, since at least then my fingers always do the wrong thing.
To me it has been the best "vim" that is not a real Vim. Way way better than the vscode plugin. I have used Vim and later Neovim since 2008 or so. Zed is the first non-vim I am truly happy with.
Do you have any metrics on which parts of the whole compiler, std, package manager, etc. take the longest to compile?
How much does comptime slowness affect the total build time?
Well, one interesting number is what happens when you limit the compiler to this feature set:
* Compilation front-end (tokenizing/parsing, IR lowering, semantic analysis)
* Our own ("self-hosted") x86_64 code generator
* Our own ("self-hosted") ELF linker
...so, that's not including the LLVM backend and LLD linker integration, the package manager, `zig build`, etc. Building this subset of the compiler (on the branch which the 15 second figure is from) takes around 9 seconds. So, 6 seconds quicker.
This is essentially a somewhat-educated guess, so it could be EXTREMELY wrong, but of those 6 seconds, I would imagine that around 1-2 are spent on all the other codegen backends and linkers (they aren't too complex, and most of them are fairly incomplete), and probably a good 3 seconds or so come from package management, since that pulls in HTTP, TLS, zip+tar, etc. TLS in particular brings in some of our std.crypto code, which sometimes sucks up more compile time than it really should. The remaining few seconds can be attributed to an "everything else" catch-all.
Amusingly, when I did some slightly more in-depth analysis of compiler performance some time ago, I discovered that most of the compiler's time -- at least during semantic analysis -- is spent analyzing different calls to formatted printing (since they're effectively "templated at compile time" in Zig, so the compiler needs to do a non-trivial amount of work for every different-looking call to something like `std.log.info`). That's not actually hugely unreasonable IMO, because formatted printing is a super common operation, but it's an example of an area we could improve on (both in the compiler itself, and in the standard library by simplifying and speeding up `std.fmt`). This is one example of a case where `comptime` execution is a big contributor to compile times.
However, aside from that one caveat of `std.fmt`, I would say that `comptime` slowness isn't a huge deal for many projects. Really, it depends how much they use `comptime`. You can definitely feel the limited speed of `comptime` execution if you use it heavily (e.g. try to parse a big file at `comptime`). However, most codebases are more restrained in their use of `comptime`; it's like a spice: a bit is lovely, but you don't want to overdo it! As with any kind of metaprogramming, overuse of `comptime` can lead to horribly unreadable code, and many major Zig projects have a pretty tasteful approach to using `comptime` in the right places. So for something like the Zig compiler, the speed of `comptime` execution honestly doesn't factor in that much (aside from that `std.fmt` caveat discussed above).

`comptime` is very closely tied into general semantic analysis (things like type checking) in Zig's design, so we can't really draw any kind of clear line. But on the PR I'm taking these measurements against, the threading actually means that even if semantic analysis (i.e. `comptime` execution plus more stuff) were instantaneous, we wouldn't see a ridiculous performance boost, since semantic analysis now runs in parallel with code generation and linking, and those three phases are faiiirly balanced right now in terms of speed.
In general (note that I am biased, since I'm a major contributor to the project!), I find that the Zig compiler is honestly a fair bit faster than people give it credit for. Like, it might sound pretty awful that (even after these improvements), building a "Hello World" takes (for me) around 0.3s -- but unlike C (where libc is precompiled and just needs to be linked, so the C compiler literally has to handle only the `main` you wrote), the Zig compiler is actually freshly building standard library code to handle, for instance, debug info parsing and stack unwinding in the case of a panic (code which is actually sorta complicated!). Right now, you're essentially getting a clean build of these core standard library components every time you build your Zig project (this will be improved upon in the future with incremental compilation). We're still planning to make some huge improvements to compilation speed across the board of course -- as Andrew says, we're really only just getting started with the x86_64 backend -- but I think we've already got something pretty decently fast.
Do any of these algorithms use vector embeddings to determine the semantic similarity between cards? Seems like that might be a useful parameter for tuning the algorithm, since you’re likely to forget something similar on a topic you’ve forgotten things about.
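A minimal sketch of how that could work, assuming you already have an embedding vector per card (the card names, vectors, threshold, and the `related_lapse_boost` heuristic here are all illustrative, not part of any real scheduler):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def related_lapse_boost(lapsed_card, card, embeddings, threshold=0.8):
    """Illustrative heuristic: if a semantically similar card was just
    forgotten, return a signal the scheduler could use to nudge this
    card's predicted recall probability down; 0.0 means no effect."""
    sim = cosine_similarity(embeddings[lapsed_card], embeddings[card])
    return sim if sim >= threshold else 0.0

# Toy 3-dimensional "embeddings" for three cards (real ones would
# come from an embedding model and have hundreds of dimensions):
embeddings = {
    "krebs_cycle": [0.9, 0.1, 0.2],
    "glycolysis":  [0.8, 0.2, 0.3],
    "ww2_dates":   [0.1, 0.9, 0.1],
}

# Forgetting "krebs_cycle" flags the topically similar "glycolysis"
# as at risk, while the unrelated "ww2_dates" is unaffected:
print(related_lapse_boost("krebs_cycle", "glycolysis", embeddings))
print(related_lapse_boost("krebs_cycle", "ww2_dates", embeddings))
```

The interesting tuning question is how strongly a lapse on one card should shorten the intervals of its near neighbors, since overdoing it would flood reviews every time you miss a card in a dense topic cluster.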