If you look at how macros are mostly used, though, a lot of those uses could be replaced directly with reflection. Most derive macros, for example, aren't really interested in the syntax of the type they're deriving for; they're interested in its shape, and the syntax is being used as a proxy for that. Similarly, a lot of macros get used to express relationships between types that cannot be expressed at the type system level, and are therefore expressed at a syntactic level - stuff like "this trait is implemented for all tuples based on a simple pattern".
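For a concrete example of that tuple pattern, here's a rough sketch (the trait and impls are invented for illustration, not taken from any particular crate):

```rust
// A toy trait we'd like to implement for tuples of every arity.
trait Describe {
    fn describe() -> String;
}

impl Describe for i32 {
    fn describe() -> String {
        "i32".to_string()
    }
}

// The type system can't abstract over tuple arity, so a macro stamps out
// one impl per arity from a single syntactic pattern.
macro_rules! impl_describe_for_tuples {
    ($($name:ident),+) => {
        impl<$($name: Describe),+> Describe for ($($name,)+) {
            fn describe() -> String {
                let parts = vec![$($name::describe()),+];
                format!("({})", parts.join(", "))
            }
        }
    };
}

impl_describe_for_tuples!(A);
impl_describe_for_tuples!(A, B);
impl_describe_for_tuples!(A, B, C);
// ...and so on, up to whatever arity the author decides to support.

fn main() {
    // Prints "(i32, i32)".
    println!("{}", <(i32, i32) as Describe>::describe());
}
```

The macro isn't doing anything clever with the syntax here; it's just about the only way in today's Rust to say "implement this for every tuple whose elements all implement `Describe`".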
There are also proc macros just for creating DSLs, but Rust is already mostly expressive enough that you don't really need this. There are some exceptions, like sqlx, that really do embed a full, existing DSL, but these are much rarer and - I suspect - more of a novelty than a deeply foundational feature of Rust.
But the point is that if you've got reflection (and an expressive base language, and a powerful enough type system, etc), you probably don't need macros. They're a heavy mallet when you almost always need a more precise tool. And the result of using macros is almost always worse than using that more precise tool - it will be harder to debug, it will play worse with tools like LSPs, it will be more complicated to read and write, it will be slower, etc.
I think macros are a necessary evil in Rust, and I use them myself when writing Rust, but I think it's absolutely fair to judge macros harshly for being a worse form of many other language features.
No disagreement on your point, but this is a different argument from claiming that macros are an ugly hack to work around the lack of reflection.
Because Rust lacks reflection, macros are used to provide some kind of ad-hoc reflection support - that much we agree on... but macros are also used to provide a lot of language extensions other than reflection support. Macros in general exist to give users some ability to introduce new language features and fill in missing gaps, and yes, reflection is one of those gaps. Variadics are another gap, some error handling techniques are yet another, as are domain-specific languages like compile-time regex! and SQL query macros.
But the point is that almost all of the common places where macros are used in everyday Rust could be replaced by reflection. There are exceptions like some of the ones you mention, but those are clever hacks rather than materially useful features. Yes, you can write inline SQL and get it type checked, but you can also use a query builder or included strings and get much the same effect in a far less magical and brittle package.
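To illustrate the "included strings" route, here's a minimal sketch; `run_query` is a stand-in for whatever database client you'd actually use, and the query itself is invented:

```rust
// The query is just a plain string constant (or include_str!("find_user.sql")
// if you'd rather keep it in its own file). There's no macro expansion to
// debug, and the SQL is easy to find, lint, and test on its own.
const FIND_USER: &str = "SELECT id, name FROM users WHERE id = ?1";

// Stand-in for a real database call - swap in your client of choice.
fn run_query(sql: &str, params: &[&str]) {
    println!("executing {sql:?} with params {params:?}");
}

fn main() {
    run_query(FIND_USER, &["42"]);
}
```

You lose the compile-time checking against a live schema, but you keep plain, greppable SQL and an ordinary function call.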
Macros in Rust are primarily a tool to handle missing reflection capabilities; the fact that they enable other things as well is basically just a side effect of that.
Note that a 4-day work week here typically implies 4×8 hours, i.e. a reduction in hours for the same pay. This is how the trial worked in the UK, and it's what most of the groups I've seen are campaigning for.
The goal is not just to rearrange work hours around the week, but to reduce work hours overall.
I find the Krausest benchmarks[0] to be useful for these sorts of comparisons. There are always flaws in benchmarks, and this one particularly is limited to the performance for DOM manipulation of a relatively simple web application (the minimal VanillaJS implementation is about 50 lines of code). That said, Krausest and the others who work on it do a good job of ensuring the different apps are well-optimised but still idiomatic, and it works well as a test of what the smallest meaningful app might look like for a given framework.
I typically compare Vanilla, Angular, SolidJS, Svelte, Vue Vapor, Vue, and React Hooks, to get a good spread of the major JS frameworks right now. Performance-wise, there are definitely differences, but tbh they're all much of a muchness. React famously does poorly on "swap rows", but also there's plenty of debate about how useful "swap rows" actually is as a benchmark.
But if you scroll further down, you get to the memory allocation and size/FCP sections, and those demonstrate what a behemoth React is in practice: 5-10× larger than SolidJS or Svelte (compressed), roughly 5× longer FCP times, and significantly higher runtime memory use than any other option.
Across most of the benchmarks there, React is consistently more similar to a full Angular application than to one of the more lightweight (but equally capable) frameworks in that list. And I'm not even comparing against microframeworks like Mithril, or against just writing the whole thing in plain JS. Given that the point of this article is shaving moments off your FCP by delaying rendering, surely it makes sense to look at one of the most significant contributors to FCP, namely bundle size?
> sending an empty skeleton that then gets hydrated with user data over APIs is fastest for a user's perception of loading speed
This is often repeated, but my own experience is the opposite: when I see a bunch of skeleton loaders on a page, I generally expect to be in for a bad experience, because the site is probably going to be slow and janky and cause problems. And the more of the site is being skeleton-loaded, the more my spirits worsen.
My guess is that FCP has become a victim of Goodhart's Law: more sites are trying to optimise FCP (which means that _something_ needs to be on the screen ASAP, even if it's useless) without optimising the actual user experience. That means delaying rendering more and adding more round-trips so that content can be loaded later on rather than up-front. The result is sites with worse experiences (more loading, more complexity), even though the metric says the experience should be improving.
It also breaks a bunch of optimizations that browsers have implemented over the years. Compare how back/forward history buttons work on reddit vs server side rendered pages.
It is possible to get those features back, in fairness... but it often requires more work than if you'd just let the browser handle things properly in the first place.
Seems like 95% of businesses are not willing to pay the web dev who created the problem in the first place to also fix the problem, and instead want more features released last week.
The number of websites needlessly forced into being SPAs, without working navigation like the back and forward buttons, is appalling.
I think it's more that the bounce rate is improving. People may recall a worse experience later, but more of them will stick around for that experience if they see something happen sooner.
If you have this issue, I can highly recommend working in another country where all your colleagues are using a different keyboard layout to you entirely. This is particularly bad for programming, because while standard layouts are mostly _fairly_ consistent with the letters, the symbols can end up anywhere. Sure, this means you still won't be able to find anything and look like an idiot, but now you can blame their keyboards for being weird rather than your own muscle memory!
This is a philosophical point about what's considered solvable. For example, in the sudoku community, there's this idea of bifurcation, which is when you get to a point in a puzzle where there are two options to take, and you step through each option manually until you figure out which one is correct, and backtrack if necessary.
You can do an entire puzzle this way (just keep on trying options and seeing if they work), but this is generally not on. If you build a puzzle that can only be solved this way, then you've built a bad puzzle, at least by the standards of the sudoku solving community. On the other hand, most complex or variant sudoku puzzles will have moments where there are two or three possibilities for a cell, and you need to look at the immediate effects of those possibilities to figure out which one is correct. So clearly some amount of bifurcation and backtracking is fine.
Fwiw, in the nonograms I've done in various apps, there's almost never a need to guess between different possibilities. I don't know if that's because the puzzle format itself is fairly constrained, or if the apps typically try to exclude these cases. But in practice, it's almost always possible to see a next step without needing to try things out and guess.
A good sample needs to be random and representative of the population as a whole, otherwise you introduce sampling bias. Imagine trying to do a survey of what people's favourite fast food restaurants are, but doing it inside a McDonald's: it doesn't matter how large your sample is, it's going to be heavily biased. This is why survey companies spend a lot of effort trying to find random, representative samples of the population, and often weight their samples so that they match the target population even more closely.
If we treat elections like a survey, then they have a massive inherent bias to the sampling method: the people who will get "surveyed" are the ones who are engaged enough to get registered, and then willing to go to a physical polling station and vote. This will naturally bias towards certain types of people.
In practice, we don't treat elections like a survey. If we did, we'd spend a lot of time afterwards weighting the results to figure out what the entire country really thought. But that has its own flaws, and ultimately voting is a civic exercise. You can do it, you can avoid it: that choice is yours, and ultimately part of your vote. In a way, you could argue that the sample size for an election is 100% of the population, where "for whatever reason, I didn't cast a vote" is a valid box to check on this survey.
That said, the whole "samples can be biased" thing is very much relevant for elections because many political groups have an incentive to add additional bias to the samples. That could be as simple as organising pick-ups to allow their voters to get to the polls, or teaching people how to register to vote if they're eligible, but it could also involve making it significantly harder or slower for certain groups (or certain regions) to register or vote.
A 100% sample is unattainable, not just practically but fundamentally. Even if you made voting mandatory and ensured collection of every single vote, there would always be people who fudge their vote because they're not interested in the process. I'd argue that any election is only representative of the people engaged with the process, and that fundamentally cannot change. Within that subset, you shouldn't need 100% sampling for high confidence.
I agree that random distribution is key to this, but I don't see how that could change with messaging that everyone must vote, versus saying to just vote if you're interested.
I mean that an election is (theoretically) a 100% sample because every eligible voter has the ability to interact with the voting process at the level that they choose. So the decision for some people to invalidate their vote, or to vote tactically, or not to vote at all, or whatever else: that's part of the act of taking part in an election. In that sense, you can't not take part in an election if you're eligible to vote.
This is important, because normally, once you take a sample, you need to analyse that sample to ensure that it is representative, and potentially weight different responses if you want to make it more representative. For example, if you got a sample that was 75% women, you might weight the male responses more strongly to match the roughly 50/50 split between men and women in the general population. But in an election, we don't do this, because the assumption is that if you spoil your ballot or don't take part, that is part of your choice as a citizen.
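As a toy illustration of that weighting step (numbers invented):

```rust
fn main() {
    // Suppose the sample came back 75% women / 25% men, but the target
    // population is roughly 50/50. Each response gets weighted by
    // (population share) / (sample share).
    let (sample_women, sample_men) = (0.75_f64, 0.25_f64);
    let (pop_women, pop_men) = (0.50_f64, 0.50_f64);

    let weight_women = pop_women / sample_women; // ~0.67
    let weight_men = pop_men / sample_men;       // 2.0

    println!("weight per woman's response: {weight_women:.2}");
    println!("weight per man's response:   {weight_men:.2}");
}
```

Elections skip that correction entirely: the raw count of whoever turned up is the result.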
I think we're saying the same sort of thing in different ways: you can either see it as "the sample of an election is every citizen, regardless of whether they voted" or as "the population of an election is everyone who voted", and in either case the sample is the same as the population, so we can assume that it is representative of the population.
This has the advantage that you don't need an extra framework/dependency to handle DI, and it means that dependencies are usually much easier to trace (because you've literally got all the code in your project, no metaprogramming or reflection required). There are limits to this style of DI, but in practice I've not reached those limits yet, and I suspect if you do reach those limits, your DI is just too complicated in the first place.
I think most people using these frameworks are aware that DI is just automated instantiation. If your program only has a limited number of ways of composing instantiations, a framework may not be useful to you: the amount of ceremony it removes may not be worth the overhead.
This conversation repeats itself ad infinitum around DI, ORMs, caching, security, logging, validation, etc, etc, etc... no, you don't need a framework. You can write your own. There are three common outcomes of this:
* Your framework gets so complicated that someone rips it out and replaces it with one of the standard frameworks.
* Your framework gets so complicated that it turns into a popular project in its own right.
* Your app dies and your custom framework dies with it.
I'm not suggesting a custom framework here, I'm suggesting no DI framework at all. No reflection, no extra configuration, no nothing, just composing classes manually using the normal structure of the language.
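Something like this minimal sketch (all names invented for illustration):

```rust
// "DI" here is just constructor arguments, plus one place (main) that wires
// the concrete types together by hand.

trait EmailSender {
    fn send(&self, to: &str, body: &str);
}

struct SmtpSender;

impl EmailSender for SmtpSender {
    fn send(&self, to: &str, body: &str) {
        println!("sending to {to}: {body}");
    }
}

struct SignupService {
    emails: Box<dyn EmailSender>,
}

impl SignupService {
    // Injection is just a constructor parameter.
    fn new(emails: Box<dyn EmailSender>) -> Self {
        Self { emails }
    }

    fn register(&self, address: &str) {
        self.emails.send(address, "welcome!");
    }
}

fn main() {
    // The composition root: swap SmtpSender for a test double in tests.
    let service = SignupService::new(Box::new(SmtpSender));
    service.register("user@example.com");
}
```

Every dependency is visible in a constructor signature, and "go to definition" takes you straight to the concrete wiring.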
At some point this stops working, I agree; this isn't necessarily an infinitely scalable solution. At that point, switching to a standard framework (your first outcome) is usually a fairly simple endeavour, because you're already doing DI; you just need to wire it together differently. But I've been surprised at the number of cases where going without a framework altogether has been completely sufficient, and has stayed successful for far longer than I'd originally have expected.
There's a `catch` with effects as well, though: the effect handler. And it works very similarly to `catch` in that it's not local to the function, but happens somewhere in the calling code. So if you're looking at a function and you want to know how that function's exceptions get handled, you need to look at the calling code.
Just to clarify because the grammar is a bit ambiguous in your comment: which case do you see as the common case? I suspect in most cases, people don't handle errors where they are thrown, but rather a couple of layers up from that point.
> And this cannot be done statically (i.e. your IDE can't jump to the definition), because my_function might be called from any number of places, each with a different handler.
I believe this can be done statically (that's one of the key points of algebraic effects). It would work essentially the same as "jump to caller", where your IDE would give you a selection of options, and you can find which caller/handler is the one you're interested in.
This suggests a plausible new IDE feature: list the callers of a function, but only those with a handler for a certain effect (maybe when the context-sensitive command is invoked on the name of one of the effects rather than on the name of the function).
What do you mean by "only those"? Each caller of an effectful function will have to handle the effect somehow, possibly by propagating it further up the chain. Is that what you meant, an IDE feature to jump to the ((grand-)grand-)parent callers defining the effect handler?