The excuse of parsing performance is responsible for so much weird syntax in web standards today. The standards are literally being dragged around by browser vendors' priorities.
I wish they would do it the other way around for once: come up with the cleanest, easiest-for-humans syntax possible, and tell browser vendors to fix their damn engines. Perhaps this will lead to the development of a new CSS engine that's actually optimized for the latest goodies.
Instead, for the last decade or so, we've only been getting bits and pieces of half-assed imitations of Sass. Better than nothing, of course, but still seriously lacking.
Sass parses exactly those nesting rules, so clearly it's possible for a browser to do it as well. It's just a question of the performance hit at runtime.
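Something like this, roughly how Sass has handled nesting for years (illustrative SCSS, not an example from the article):

    // SCSS: the nested selector needs no leading symbol
    .card {
      color: black;
      .title {            // compiles to: .card .title
        font-weight: bold;
      }
    }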
In the end, having a fast runtime is likely more important than perfect syntax (we can still pre-compile from a nicer syntax), but the actual numbers and real-world use cases matter a lot.
Are these worst-case performance scenarios actually realistic or common? I didn't see anybody speak to that, but I also haven't followed this issue in years. In a lot of web standards discussions I've seen contributors pose theoretical worst cases as something that should meaningfully shape the design of a feature, when those worst cases are extremely rare and unrealistic in practice.
The fact that the article raises the possibility of this limitation being removed in the near future means that it's not a law of physics, only a matter of finding a better algorithm.
Even if it were a law of physics, engineers can often circumvent laws of physics using smart caching and other tricks.
We've been waiting for this feature for decades already. It would have been better if they'd just invested a couple more years in coming up with a better parsing algorithm before standardizing a halfway implementation.
> come up with the cleanest, easiest-for-humans syntax possible, and tell browser vendors to fix their damn engines
W3C has zero leverage over the browser vendors. If they tried to do this the end result would be the same thing that happened to XHTML: the browser vendors would tell them to pound sand and announce that the de facto CSS standard would now be maintained by the WHATWG.
> come up with the cleanest, easiest-for-humans syntax possible
I'm definitely on the opposite end of this argument. I really think browsers should be focusing on efficient and theoretically rock-solid foundations. The fact that web standards are slow to evolve and to be adopted is a feature, not a bug. If you go back to a create-react-app project from two years ago, you probably can't get it running today. I would like a web where a hand-made site from a decade ago still works.
The global web standard should be reliable and efficient. Syntactic sugar should be left to the ecosystems and only adopted when it doesn't come with performance tradeoffs.
I agree it's a big mistake not to make the nesting rules compatible with Sass/Less.
> Because of limitations in browser parsing engines, you must make sure the nested selector (.bar in the above example) always starts with a symbol.
How hard can it be? I know that's always easy to say from the outside, but in this case I don't understand why they refuse to implement it. Just labelling it a performance problem doesn't say much, because why should it impact performance to check whether another rule with an opening "{" appears after it?
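To make it concrete, as I understand the restriction it means you have to write something like this (my own illustration, not the article's example):

    /* allowed: the nested selector starts with a symbol */
    .foo {
      & article .bar { color: red; }
    }

    /* not allowed: the nested selector starts with an identifier */
    .foo {
      article .bar { color: red; }
    }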
Imagine a CPU that is fast but has no concept of a stack, so no recursion/call stack for you. You can simulate one through registers, but only as far as register pressure allows in a finite register file. Allowing an unbounded call stack requires changing the class of the CPU to one that has a stack (and is slower, though the analogy is incomplete here).
In the same way, an &-less grammar changes the class of parser required to handle it. You can't simply adjust the existing one. In the end it all boils down to the amount of dynamic allocation and backtracking required to parse the same megabyte of CSS.
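A concrete case of the ambiguity, as my own illustration (not taken from the spec):

    .foo {
      color: red;                  /* declaration: ident ":" value ";" */
      input:focus { outline: 0; }  /* nested rule that also begins ident ":" ... */
    }

When the parser reaches "input" it can't tell whether it's reading a declaration or a nested rule until it hits a ";" or "{", which may be arbitrarily far away. Requiring a leading symbol, e.g. "& input:focus", lets it decide on the very first token.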
Edit: nothing substantial was said about actual numbers, at least from what I've read. It's a common-sense claim that, it seems, nobody has tested (in this particular case).