Rust is to Servo what C is to UNIX. Even though they live their own lives now, the ties are strong.
I think programming languages must be developed together with their application. For Rust it's Servo, Go is for Google, Lua is for scripting video games, Swift is for making iOS apps, and of course C is for UNIX.
Lua wasn't made for scripting video games but as a scripting language for a mining company, if I remember correctly from this excellent paper[1]. I can highly recommend all the other ACM HOPL papers too for those interested in the history of programming languages. The papers don't seem to be open access, but sci-hub might help with that if you're so inclined.
I once met Waldemar Celes, one of the co-creators of Lua. It was meant for use within a tool called Geresim for visualizing large 3D models of oil/gas reservoirs - primarily for Petrobras, the Brazilian national oil company.
Products too! Technology projects without an end-user often become vaporware because it's impossible to make design decisions without constraints. A real end-user adds constraints. Constraints enable good decision-making. It's much easier to answer a specific question ("is this a good decision for a browser rendering engine") than an abstract question ("is this a good decision for software").
I don't use Rust, but I appreciate how it solves clear problems. I'm really impressed by the way key issues like packaging & distribution, documentation, and coding style/safety are addressed at the language level. I bet those features emerged from the need to address issues with the open source development model. Without Servo, maybe those features don't make it.
In lines of code, the open source stuff (barring AOSP and Chromium) is a drop in the bucket compared to internal projects.
Also the easiest way for a Googler to open source a hobby project is to open source it under Google's repos - otherwise you need copyright assignments and stuff that's harder to get approved. So it doesn't represent what most Googlers work on.
Google uses many languages besides Go, and I doubt Go is used much outside of cloud infrastructure stuff. E.g. Fuchsia/Android/Chrome have added a lot of Rust code recently, but not Go. In Fuchsia, Rust has already surpassed C++ in the amount of code. Google was always a heavy user of C++ and Java, and it looks like their modern counterparts are Rust and Kotlin.
Not sure why you're downvoted. DevOps is definitely the place where I see Go used most often. Go has a fast compiler that gives it a scripting-like feel, very low memory and disk footprint overhead, and it is easy to learn and easy to deploy, which makes it a great language for devops - all those tiny programs that orchestrate components of a bigger system. In that setting the simplicity of the language is an advantage, because people become productive very quickly, and its weak expressiveness isn't a problem because you're not building anything large.
However, I would not recommend it for anything that's concurrent, needs high performance and resides on the critical path (critical not in terms of performance but in terms of reliability) or is large / to be maintained by many teams. Just too easy to run into problems.
Not a great batting average - it seems like sites need to be purpose-built for this, or use few or no modern JavaScript libraries, for it to render accurately. Servo has been in this state for the last 5 or 6 years, since I first discovered it.
Yeah. With each Andreas Kling browser hacking video, Ladybird gets either faster or less buggy. It's still unstable and a bit slower than Servo, but geez, it renders Discord! (Kinda. They're working on it.)
> Igalia is a private, worker-owned, employee-run cooperative model consultancy focused on open source software. Based in A Coruña, Galicia (Spain), Igalia is known for its contributions and commitments to both open-source and open standards. Igalia's primary focus is on open source solutions for a large set of hardware and software platforms centering on browsers, graphics, multimedia, compilers, device drivers, virtualization, and embedded Linux.
Without downplaying all the effort being put into this, I think we're just digging a deeper hole.
Websites do more or less the same thing they did 15 years ago but they are now 20 times more complex to develop and maintain. A good amount of developers would rather donate a kidney than write CSS. Accessibility is a hack. Performance is getting worse despite us having better than ever hardware. We're spending large amounts of time reinventing the wheel. JS is a horrible, bare bones language which is why you can't get anything done without 100 packages.
It's time to move away from HTML/CSS/JS. They worked great for as long as they did but instead of further contributing to the mess that they've become, we should be looking into alternatives.
The second paragraph is simply wrong throughout. You should actually learn the web stack and look at how things work in 2023 instead of reading other people's rants or secondhand opinions to form your own opinions. CSS is not hard. Talking about performance is meaningless without benchmarks. No, you can definitely build a decent small website with a few or even no package dependencies. Maybe do a reality check first.
> It's time to move away from HTML/CSS/JS
Just meaningless empty talk over and over on HN. People who don't understand web stacks ranting about the web without knowing the complexity and nuances. What alternatives do you have? Are they going to provide as many features and as much customization as the status quo? Will this new thing achieve at least 80% of development speed? Many companies adopted the web stack for their applications because of the flexibility and time to deliver. They make products instead of being purists.
> You should actually learn the web stack and look at how things work in 2023 instead of reading other people's rants or secondhand opinions to form your own opinions.
I didn't base my response on anybody else's words. I've been around the web since the good old days where jQuery was the norm and everything had a phpBB forum. It's coming from my own experience and observations.
> Will this new thing achieve at least 80% of development speed?
You're so concerned with development speed, yet you're rewriting the same thing in a new framework every 2 years.
I've been around the web since document.write() and <font color=red> were cutting edge.
You're wrong.
There is no way, shape, or form where 2008 web technology is comparable to today. (IE6!!!) Not in styling. Not in consistency. Not in performance. Not in accessibility. Not in security. And certainly not in management of complex sites.
A cursory look at caniuse.com should disabuse you of any notion of stagnation or lack of capability.
Folks rewrite "every two years" because it gets better so quickly. Svelte/Solid/Qwik are clearly steps forward from React. Were you advocating for sticking with older stuff simply because you don't like change? Or did you think C, Unix, et al technologies were born fully formed as they exist today with no evolutionary steps (and missteps) in between?
I think the charitable interpretation is there are a whole class of web sites/applications (news sites, e-commerce, etc.) that haven't appreciably changed in the last 15 years. Yeah, there's been some incremental improvement, but the core experience is the same. However, the versions today are generally resource hogs. It's not rare for me to leave a browser tab open and find it grows to > 2GB RAM. I had one hit 28 GB and it would have kept going if I hadn't killed it. One thing I miss on the M1 is the fan kicking on so I know when a browser tab has run away.
I think the OP has a point. We've been building a massively complex ecosystem on a very shaky foundation. The web has indeed advanced, but most of the time it feels like duct taping something onto this monster of a creation. Between backwards compatibility and competing browser vendor concerns, it's hard to push meaningful change through. So many security (especially supply chain) and dependency issues in JS would go away if there were a reasonable standard library. But, there isn't, so we're doomed to a large graph of inter-related dependencies that break regularly. The constant churn in frameworks is mostly related to novel ways to work around limitations in the web platform and bridging behavioral differences across browsers.
It's more than just churn in frameworks though. It's depressing how many person-years have been spent on build systems and bundlers or CJS vs ES6. Participating in the open source ecosystem means needing to be familiar with all of it, so it's not enough to pick one and run with it.
Prior to flexbox, people struggled to center content with CSS; it was bad for layout. There's a ton of old CSS out there, much of it accomplishing the same thing in different ways, often with vendor-specific options. You still run into this if you have to style HTML email. Given the difficulty of mapping CSS back to its usages, it's incredibly challenging to refactor. It requires familiarity with the entire product and a lot of guesswork about intent, since comments are rarely present.
Moreover, there have been massive changes in "best practices". React and Tailwind violate the previous principles of unobtrusive JS and semantic naming that were the prevailing practices not too long ago. My cynical take is a lot of that is driven by consultants looking to stand out as thought leaders. Regardless, it adds to the complexity of what you need to know if you do have to work with older (often other people's) code.
I'm fairly confident that if we had today's use cases in mind when designing the foundational web technologies we'd have a very different platform to work with. It almost certainly would be more efficient, less error-prone, and likely more secure. It'd be nice if we could take a step back and revisit things. A clean break would be painful, but could work. Developers are already building web apps that only work on Chrome, so much as it pains me to say, having to get all the vendors on board isn't entirely necessary.
The point you appear to be making is the same point made at every step in computing history. It's an old and predictable refrain going back to assembly programmers complaining about the laziness and bloat of C development.
Yes, bad software can suck up all your RAM and CPU time. This was a concern on the family's XT clone with 1MB of RAM. The only reason the bad software back then didn't suck down 32GB of RAM was because they hit the system's limit far earlier than that.
That said, there have always been efforts to reduce the overhead of individual apps. Docker for all its bloat was a dramatic memory and CPU boon over full virtualization. New languages like Go and Rust have much better memory profiles than Java and C# before them. Qwik, Svelte, and Solid are all worlds better in terms of efficiency than React and Angular.
There are clear improvements and also clear highlights of waste within our industry as there has always been. But if you can only see the waste and bloat, you are sadly missing large parts of the whole picture. The truth lies somewhere as a mix of the two extremes, and the overall capabilities of the current landscape are unambiguously more advanced than they were 5, 10, and 15 years ago.
I think your summary is a bit too reductive. I laid out a fairly comprehensive list of issues that are unique to the JavaScript ecosystem and you highlight one point I made about wasting resources. I don't see as many parallels with the past as you. Certainly software has bloated. But, few application platforms are as difficult to optimize for as the web. Working with HTML fragments, the shadow DOM, reducing unnecessary renders, figuring out the most efficient way to find nodes, etc. are things you don't have to deal with in most GUI toolkits. The closest thing I've seen is avoiding a full window repaint by only painting a region that's changed.
To tackle that complexity we keep creating new frameworks, but those often bring their own bloat. So, we create new frameworks. Any time a new framework comes out, there's a light-weight variant that follows (e.g., React and Preact). There's potentially money to be made by building an influential library or framework and I think that has had an adverse impact as well. The entire JS ecosystem is built around a commercial package management system. That's virtually unheard of. I guess Maven Central gets kinda close?
My contention is that if we took a step back, re-evaluated the platform, and added a real standard library, we could cut a lot of this out. You could take everything that's good and make it better.
> My contention is that if we took a step back, re-evaluated the platform, and added a real standard library, we could cut a lot of this out. You could take everything that's good and make it better.
Many have tried over the years, including in the early years before we had a neutron star's worth of code built on top of it over the decades that followed.
To be pedantic, browser APIs are already a standard library. The behavior of the HTML parser is 100% defined even in the case of imperfect (to say the least) input. Things like classlist, dataset, history, tab index, event handling... they're all there. If you don't LIKE the standard library or you think it is insufficiently elegant, that's your prerogative, but it's a standard library (emphasis on published standard) with multiple implementations.
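To make that concrete, here is a rough sketch that leans only on those built-ins; the element IDs ("toggle", "menu") are invented for the example:

    // TypeScript sketch using only standard DOM APIs; the IDs are hypothetical.
    const toggleBtn = document.getElementById('toggle');
    const menu = document.getElementById('menu');

    toggleBtn?.addEventListener('click', () => {
      if (!menu) return;
      menu.classList.toggle('open');                        // classList: class manipulation built in
      menu.dataset.state = menu.classList.contains('open')  // dataset: data-* attributes as a plain object
        ? 'open'
        : 'closed';
      history.pushState({ menu: menu.dataset.state }, '', '#menu'); // history: change the URL, no reload
    });

No framework, no build step, and it behaves the same in every engine that implements the standard.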
As a counterpoint, Qt, GTK+, Tk, Swing, Godot, Kivy, FLTK, etc. are not standards nor are they faster for development, more secure, or easier to start working with. Sure, after you get past the (much larger) barriers to entry and actually get something working on one platform like Ubuntu Desktop, you have to refactor/rewrite for MacOS, Windows, and... other flavors of Linux. And no cheating! A Qt app running on MacOS needs to look like a Mac app, not a Linux app that happens to run on MacOS. Every UI toolkit out there with a "better" API just has one implementation (aka not a standard) and does not handle all of the use cases the web has grown into. And the toolkits that approach all of the browser's use cases inevitably don't have cleaner APIs.
This problem is deceptively hard and hasn't been solved over and over again as folks stepped back.
As for fragments, shadow DOM, unnecessary renders, efficient ways to find nodes, etc., I strongly encourage you to try Svelte, especially as a PWA. You might be pleasantly surprised how many of these issues are addressed and surprisingly cleanly.
> To be pedantic, browser APIs are already a standard library. The behavior of the HTML parser is 100% defined even in the case of imperfect (to say the least) input. Things like classlist, dataset, history, tab index, event handling... they're all there. If you don't LIKE the standard library or you think it is insufficiently elegant, that's your prerogative, but it's a standard library (emphasis on published standard) with multiple implementations.
By standard library, I mean the comprehensive libraries that are shipped out of the box with modern languages. Hundreds of module dependencies (including transitive ones) are added to modern JS applications in order to perform operations that you get out of the box in C++, Java, C#, Go, Rust, Ruby, Python, etc. That means having to pull in 3rd party dependencies just to get a basic app running. It introduces a ton of headaches around compatibility of transitive dependencies, supply chain security concerns, upstream availability, and so on. Yes, ECMA has been progressively adding stuff, but it's a depressingly slow process and often insufficient.
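As a small, hand-rolled illustration (not any particular npm package): a helper like this has no built-in equivalent, so every project either rewrites it or pulls in another dependency, with all the transitive baggage that brings:

    // Hand-rolled debounce; absent a standard library, helpers like this usually
    // arrive as one more (transitive) npm dependency.
    function debounce<Args extends unknown[]>(fn: (...args: Args) => void, waitMs: number) {
      let timer: ReturnType<typeof setTimeout> | undefined;
      return (...args: Args) => {
        if (timer !== undefined) clearTimeout(timer);
        timer = setTimeout(() => fn(...args), waitMs);
      };
    }

    // Usage: don't fire an expensive search on every keystroke.
    const onSearch = debounce((query: string) => console.log('searching for', query), 250);
    onSearch('ser');
    onSearch('servo'); // only this call actually runs

Multiply that by dates, deep equality, argument parsing, and so on, and you get the dependency graphs we see today.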
As for the web, I'd barely consider what we have a standard any longer. The W3C was largely cut out of the process. Many "web apps" really only run on Chrome. Google implements what it's going to implement and puts out a token document as a standard. If Apple doesn't like it, they say "no" and developers have to accommodate.
> As a counterpoint, Qt, GTK+, Tk, Swing, Godot, Kivy, FLTK, etc. are not standards nor are they faster for development, more secure, or easier to start working with. Sure, after you get past the (much larger) barriers to entry and actually get something working on one platform like Ubuntu Desktop, you have to refactor/rewrite for MacOS, Windows, and... other flavors of Linux. And no cheating! A Qt app running on MacOS needs to look like a Mac app, not a Linux app that happens to run on MacOS. Every UI toolkit out there with a "better" API just has one implementation (aka not a standard) and does not handle all of the use cases the web has grown into. And the toolkits that approach all of the browser's use cases inevitably don't have cleaner APIs.
I don't understand the limitation you're imposing on look and feel. It's not like web applications look like native applications in the slightest. It's a big usability problem since you can't learn one set of widgets and consistently use them. I think that was a major step backwards with CSS. While sites can look nicer today than the stodgy look of Netscape, way too often I have to play "guess where the link is" because some intrepid designer removed the underline and coloring so the link looks like normal text. Now that we've decided scrollbars are bad, whole UI elements are completely hidden unless you stumble upon them. All of this has bled into the desktop space since we now have Electron apps and to me it's decidedly worse than what GTK apps look like on macOS.
I'm going to have to disagree on the cleanliness of the API as well. A good UI toolkit provides the ability to lay out and scale elements. They also come with a stock set of widgets. It's incomprehensible to me that we don't have a standard tree or tab widget in the browser. I've been pleased by what Shoelace is doing with web components, but I'd rather just have a standard set of elements that match what I can get on any other platform.
The layout story is a bit better with flexbox and grid layouts, but it took a long time to get there. And good luck if you need to support HTML email; you're back to using tables or a limited subset of inlined CSS. That's stuff Win32 solved and shipped with decades ago. I wasn't part of the design discussion, but I'd imagine it's because CSS wasn't designed for what it's used for today. It was solving the proliferation of display-centered tags, like <font> and <b>. Using the browser as an application platform just wasn't seriously considered until MS added XMLHttpRequest for Outlook Web Access.
Consequently, we have a plethora of ways to lay out stuff on the web. Much of that CSS doesn't actually work if you try to resize things, so we dropped fluid layouts for fixed ones. Responsive design rectifies some of that, but still often works in discrete fixed-width chunks. Moreover, many developers have no idea what the box model is, so they just throw CSS at the page until it looks roughly correct in their browser and never bother testing it anywhere else. Without the ability to use comments or variables (the latter finally being addressed as well), CSS is just a mess to work with. And since you can't be sure where a stylesheet is used, they're hard to refactor and hard to detect dead code in. None of those are problems you generally have when working with a platform designed for application development from the outset.
Just to clarify though, I'm not advocating we abandon the web platform. I'm saying its evolution has made it an at-times thorny way to develop applications. Applications have different needs than the largely text-oriented pages the foundations were designed for. It has advantages over Win32, GTK, and others, particularly in distribution. But, it also makes it much harder to do things that are straightforward in toolkits designed for GUI applications.
> As for fragments, shadow DOM, unnecessary renders, efficient ways to find nodes, etc., I strongly encourage you to try Svelte, especially as a PWA. You might be pleasantly surprised how many of these issues are addressed and surprisingly cleanly.
Much of the discussion on this article is about the huge churn in the JS world. Every year or two I'm urged to learn a new toolkit that's going to fix all the sins of the previous ones. Maybe that works great if you're maintaining one project. In practice, it means I've needed to learn React, Angular, Svelte, Backbone, ExtJS, and others. That's not time that was well spent. Some of it I still need to keep in my head to work on open source projects that can't afford a rewrite. Maybe Svelte is the endgame product and we can all rally behind that; I have my doubts.
I've tried the whole PWA thing out. It's unnecessarily complicated and stymied by iOS limitations. I got it working, but I had to work around some major libraries. Offline GraphQL cache access is an exercise in frustration. I ended up using IndexedDB and custom replication. I needed to add another auth system because JWTs aren't secure to store in browser storage. Debugging service workers wasn't fun. Debugging in a mobile browser even less so. I'd love for PWAs to supplant all these simple mobile apps. Although, the irony that mobile app development might be the great app reset I'm looking for isn't lost on me. I just wish it didn't come with a 30% tax.
By way of my background, I've worked extensively with browser rendering engines. For about five years I ran a TechStars-backed startup that detected rendering differences in pages across browsers and over time. A sort of automated visual regression testing (called web consistency testing), but it did so by analyzing the DOM and detecting the exact element that was a problem (as opposed to doing image comparisons). I became intimately familiar with the major browsers of the time and built up a comprehensive knowledge base of how they render things differently. I had to write native browser extensions to work around JS performance issues or interface with the browser in ways not exposed via the DOM. I ended up contributing pretty heavily to Selenium in the process as well. I don't mean to say I know everything, but I get the sense you think I'm arguing in the abstract and haven't actually tried working with all of this stuff.
I've also worked pretty extensively with the web, building sites and applications over the past 25 years. Initially I was excited by it. No more minimum system requirements! Now I'm dejected to see what we actually have compared to what could have been. I've been burnt out on side projects because I'm invariably fighting the dependency graph. I've been told I need to upgrade constantly for security reasons. Semantic versioning doesn't mean much when you have to constantly upgrade because very few projects maintain more than one line of development. TypeScript helps, but I've also had to debug and deal with libraries shipping incorrect TypeScript definitions, undermining the whole system. I've been working to remove dependencies, but in general you run into NIH objections.
Much of that has little to do with the browser APIs other than that the major browsers all use JS as their lingua franca. I've tried using other languages that compile to JS and I'm currently building something with WebAssembly, but so far I always get dragged back into the JS layer. That's why my critique has a lot riding on the JS layer. I think the web platform as a whole could be vastly improved by modernizing JS. It's not simply used to glue DOM interactions to event listeners. It should have a standard library.
I'm happy to keep discussing, but I think we're probably going to have to just agree to disagree. As an industry, we'll continue to incrementally improve things. We'll almost certainly make backwards steps as well. The bolted together monster is what we have and it'd be almost impossible to replace. That doesn't mean it couldn't be better with a comprehensive rethinking.
> React and Tailwind violate the previous principles of unobtrusive JS and semantic naming that were the prevailing practices not too long ago. My cynical take is a lot of that is driven by consultants looking to stand out as thought leaders.
You're missing that people are adopting this on their own because they enjoy the benefits during development.
This is also my explanation for why apps with the same complexity as 15 years ago are now slower: What Andy Giveth, Bill taketh away. Developers (or rather their management) are choosing to make "lazy" trade-offs, i.e. prioritizing programmer productivity over execution performance. Which makes economical sense.
I wish there was a way to shift the economic incentives because that would change things very quickly. Maybe some browser performance limits.
OP's right, and it's sort of immature to pretend it's all broken and say wildly false things like "you're rewriting the same thing in a new framework every 2 years." Who is? That sounds like a management problem, not a... I don't even know, language problem? Package manager problem?
> You're so concerned with development speed, yet you're rewriting the same thing in a new framework every 2 years.
I think that meme gets thrown around a lot, and it used to be true, but the framework churn of JS has largely stopped at this point. A lot of us (me included) have been just using the same stack for several years now.
Sure, React has evolved since 2015/2016 (e.g. the move to function components and hooks) but the old code still runs, with minimal to no changes. Patterns and best practices have largely been established. You can entirely avoid the bleeding edge stuff and still be productive.
Elsewhere in this thread there are comments about needing to continue pace to move beyond React. I don't think we've seen the end of the churn. I agree that React is a pretty stable base these days and you can just work with that, but it's gotten long in the tooth in places so new frameworks are still sprouting up.
Old code continuing to run isn't what people mean by churn though. jquery still runs. The underlying browser APIs are stable. But you can't say React has stable patterns and best practices when the classes->hooks thing has happened so recently.
Do you remember IE6? That's before jQuery, the time of ActiveX and Java applets.
You're right that it got better by the time of jQuery...
When I started on this (high school time for me), we were writing HTML by hand. PHP 3 was used in production; PHP 4 was brand spanking new. CGI was a thing, imagine that! C code used to dynamically generate HTML stuff, exposed on the web! (through C? gateway interface)
Regular good old HTML with no or a minimal amount of JavaScript. Absurdly fast compared to today's sites. Many orders of magnitude less power.
Faster to develop and better user experience too.
But I also think we should think about alternatives. Things like Gemini are interesting but will have a hard time going mainstream. But I do believe we need something completely new because the user hostile nature of the web today is so devastatingly toxic that I see no hope for the future.
The best thing would be to split webpages and webapps. Webpages can keep using HTML with some minimalistic, gracefully degrading javascript, while webapps need something better than hacks upon hacks upon hacks upon ... on top of technology that was meant to display static interlinked documents.
Gemini is cool but probably just repeating the pattern. If companies and users wanted that, HTML use would have gone a different way.
The truth is that both consumers and producers of even “simple”, “static” content want a lot more than these kinds of solutions offer. We have web fonts, animated gradients, SVG logos, responsive layouts, and the like because people want them. Choosing another tech that lacks them will only result in them all getting added back again.
Yeah, that's why absolutely nobody bought color TVs back when those first became available.. /s
People care about things looking nice. In particular, people who pay others to create websites also obviously want their stuff to stand out, look nicer, and "pop" more than the competition - and the things the person you replied to listed are part of that.
That's not the fault of the underlying technology, if anything it will be EASIER for predatory adTech to display such things when the entire UI is rendered in an opaque <canvas>-equivalent. How do I get uBlock to block specific elements (like a video player) when there is no longer a discrete video element to block?
I wonder if Astro will remain a simple static site generator. If it tries to add dynamism (and complexity), I think it will lose some of its luster. But if all anyone needs from it is a static site generator, it looks very clean.
A lot of the functionality and minimalism leads me to believe it was inspired by Svelte and other recent web tools. That said, the complete lack of JavaScript in the output seems to me like a stylistic—even ideological—choice rather than an established use case. Still, it's easier to make a simple solution elegant than a complex one.
Astro fits into its intentionally small niche very well.
> I do believe we need something completely new because the user hostile nature of the web today is so devastatingly toxic that I see no hope for the future.
Capitalism ate the HTML web and it's naive to expect that it wouldn't eat the Gemini web given there's a profit to be made. The thing that makes an alternative technology non-hostile is simply that it's too unpopular to make a profit from making it suck.
The tone of this post really rubbed me the wrong way. I know nobody reads the guidelines, and that this post violates them even more, but I really like the civilised interaction norm here, and when otherwise earnest and substantive replies come in a form that appears to me combative and unempathetic, it's quite disappointing.
Gemini for pages and if you really need an app, wasm and webgl/webgpu. HTML/CSS/JS can definitely be replaced by better, already existing things, but browsers will still need to understand those for many years going forward, even if everyone suddenly stopped developing with them.
Webapps don't really have semantics. Accessibility is the toolkit's job, leaving that to the web platform invariably generates half baked results in an app.
CSS has become much easier. CSS grids and flexbox have removed all the stupid layout hacks from long ago. No more need for HTML tables to get two elements to align. The CSS spec has also improved a lot.
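For example (the class name is invented for the sketch), the two-elements-aligned layout that used to need a nested table is now a few declarations; the CSS is attached from script here only so the snippet stands alone:

    // Rough sketch: flexbox replacing the old "table for alignment" trick.
    const css = `
      .toolbar {
        display: flex;
        justify-content: space-between; /* children pushed to opposite ends */
        align-items: center;            /* vertical centering, no vertical-align hacks */
        gap: 0.5rem;
      }`;
    document.head.append(Object.assign(document.createElement('style'), { textContent: css }));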
JS has also improved massively; though it can't rid itself of its original flaws, it has solved a lot of problems in fifteen years.
Accessibility has become trivial. You really don't need all that much knowledge about screen readers anymore. If you follow the standard even just a little, screen readers will already Just Work.
HTTP/3, WebP, Brotli, and a whole range of technologies have made some of the worst pain points of web development go away for free. No more messing around with connection pipelining, ordering resources to work around the browser load order, minifying text resources manually, you just let the tech stack do its thing and it'll work fine.
The problem isn't the web stack, it's the framework of the week throwing out the last five years of development as "bloat", redefining how things Should Be, and then growing to a form where it's considered "bloat" and replaced again.
I've started my web dev career with PHP on the server. It's really not that bad. The web works fine.
Things that work fine are boring, though. Writing HTML and wiring basic actions to it is tedious work when there are all of these interesting frameworks and methodologies to try out.
Or, in some cases, people skip the basics and start with learning React or another heavy Javascript framework. Who needs to know the difference between a <nav> and a <div class="nav"> when you only have two weeks to get through the bootcamp?
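And the difference is not just pedantry. Here is a made-up sketch of the two, written as template strings since a lot of markup gets produced that way anyway; the semantic version hands screen readers and reader modes a navigation landmark and real links for free, the div version hands them nothing:

    // Hypothetical markup comparison; neither snippet is from a real site.
    const semantic = `
      <nav>
        <ul>
          <li><a href="/docs">Docs</a></li>
          <li><a href="/blog">Blog</a></li>
        </ul>
      </nav>`;

    const divSoup = `
      <div class="nav">
        <div class="nav-item" onclick="location.href='/docs'">Docs</div>
        <div class="nav-item" onclick="location.href='/blog'">Blog</div>
      </div>`;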
It's really not all that difficult to make websites. Web apps are even easier because you can demand Javascript and all of its tooling to be present.
We've tried replacing the web with apps on phones. As it turns out, that's just as hard, often even harder. The actual problems that make web development hard just aren't easy problems to solve. Throwing complex layers of other people's code over them sometimes helps, but in most cases anything that promises to make web development easy just moves the complexity some place else.
Look elsewhere - every other UI framework we’ve tried before has been worse in terms of compatibility, functionality, flexibility, and available prebuilt tooling.
In the late 90's we had native apps with UIs more complex than Figma that ran fine with 1/100 of the CPU, RAM and storage we have now. The online rush buried a whole class of development tools that had to be painfully reinvented over the next 20 years and is still bogged down by the incidental complexity of using the web as an application platform.
One example of this, I think, is that modern UIs are seemingly no longer able to display large lists. The native win32 API had virtualized list views 27 years ago, since Windows 95 OSR2 [1][2].
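The technique itself isn't exotic; here is a rough sketch of the same idea in the browser (the container ID, row height, and data are all invented, and a real implementation would handle variable heights, keyboard focus, and so on):

    // Minimal list virtualization sketch: only the rows currently in view exist
    // in the DOM, the same idea as Win32's owner-data list views.
    // "#viewport" is assumed to be a fixed-height element with overflow: auto.
    const ROW_HEIGHT = 24;
    const rows = Array.from({ length: 100_000 }, (_, i) => `Row ${i}`);

    const viewport = document.getElementById('viewport') as HTMLElement;
    const spacer = document.createElement('div');        // gives the scrollbar its full length
    spacer.style.position = 'relative';
    spacer.style.height = `${rows.length * ROW_HEIGHT}px`;
    viewport.append(spacer);

    function render(): void {
      const first = Math.floor(viewport.scrollTop / ROW_HEIGHT);
      const count = Math.ceil(viewport.clientHeight / ROW_HEIGHT) + 1;
      spacer.innerHTML = rows
        .slice(first, first + count)
        .map((text, i) =>
          `<div style="position:absolute; top:${(first + i) * ROW_HEIGHT}px; height:${ROW_HEIGHT}px">${text}</div>`)
        .join('');
    }

    viewport.addEventListener('scroll', render);
    render();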
IIRC they were:
* Often Windows-only.
* Infrequently updated, with bugs bad enough to crash the app, sometimes the system
* Dependent on the user to install new versions
* Able to access everything on your computer
* Non-collaborative
* Dependent on you for data backups
These are non-trivial things.
Now that I’m thinking about it, I still don’t have convenient sandboxing of desktop apps.
We have online package managers, garbage collected languages, containers, unit testing, memory protection, remote backups, cross-platform UI toolkits - none of the issues you mention required serving everything through a browser window set in motion by a subpar language written over a weekend with cosmetic considerations as its first design principle.
BTW you can have easy app sandboxing today using flatpak, works like a charm on Fedora.
Is the Flatpak sandboxing actually secure, though? Or does it work like a charm because most of the security enforcement is disabled in practice?
Allegedly [1] a lot of popular packages use "--filesystem=host", which completely defeats the security of sandbox by granting access to the user's home directory (i.e, allows arbitrary code execution through modification of configuration files).
I think I would rather trust the browser's sandbox, where sandboxing has been in place from the start and applications are designed for it.
It's true many flatpaks are still leaky to match usability expectations. We don't know how to devise proper information partitioning schemes, or aren't willing to accept the ones we come up with.
Ultimately it comes down to who you trust the most. Do you trust your cloud provider to not look at your data and sell you off or do you prefer trusting your local application to not fuck around where it shouldn't?
Nothing you say is wrong per se. But what GP is saying, I believe, is that most websites should not need these features to provide value to users. And indeed most websites useful to me are still mildly interactive documents. The problem is that web browsers, standards, and ecosystems inflated to cater to these few webapps needing advanced control over the machine. In a sense the web is just the new Java: a new environment said to be "cross-platform" where it is in fact just a new platform that got its VM ubiquitous.
There is nothing wrong with the web-as-VM, but it has eaten the web-as-interactive-documents. And now, to have a lightweight web experience not focused on webapps, I am stuck with the heavyweight runtimes, with even more stuff on top just to disable features, lock down invasive websites that grabbed the newly available features to implement invasive anti-features, etc.
The beauty of the Web stack is that all of what you are describing is optional.
You can write simple, accessible, performant websites, which use JS as a bonus and have all the basic features without. And as a bonus, it works across all the browsers, not just Chrome/Firefox. As a bonus, it works for all the accessibility scenarios, not just the standard ones in the test suites. And as a huge bonus, it's much easier to maintain and less fragile.
I'm personally on the extreme end, what with trying to support not just today's browsers but also the retro mainstream such as Netscape and IE, but you don't have to go that far to have an enjoyable experience with this platform.
> The beauty of the Web stack is that all of what you are describing is optional.
The problem is not that you cannot write lightweight, accessible and beautiful websites; you can. The problem is that this is now all part of the "default web browser", and so many people are taking advantage of its widespread availability to use it without a good reason to. Most commonly fingerprinting, tracking, following corporate dev fads, etc. And because of that it is now borderline impossible to have a decent experience using a lightweight browser, because so many websites make crucial use of stuff they really should not. It's possible, but it takes time and know-how; personally I don't find it fun enough to invest more time than crafting my uBlock whitelists. And this is already more than most people will do (even people who would be able to).
I don't have a strategy for fixing the entire ecosystem, but for my little corner of the world, I guess I've constructed a mental venn diagram where I only browse at the overlap of "content I'm interested in" and "sites which are accessible to me", and leave the rest of it alone. There is more than enough for me to browse in that space, especially combined with my own websites, that I rarely even think about the horrors you mention.
One of the coolest things about it is that I have noticed over time that obnoxious frontend correlates very strongly with crappy content, so the average quality of what I read and watch has improved drastically. I try to practice a "mental diet", and it has helped tremendously with that.
In some ways, it's not unlike IRL, where there are places I would rather not be, and they have certain tells, and I'm OK with them being there, I just don't go inside if I can help it, and I leave as quickly as possible if I do.
I think the Web is still very young, and now that we can have ML-assisted markup generation, accessibility will be coming around, just like wheelchair ramps became the norm. Pretty soon, we won't be dismissing a 0.01% browser as not worth supporting, because it will be so much easier to just tell the server, "please remove JS and all but the minimal markup from your pages".
If you want to see a PoC of what this may look like, here is a quick demo video. The NoJS bit is at about 2:30. https://vimeo.com/828698165
While I agree with websites being 20 times more complex, I don't agree with:
> A good amount of developers would rather donate a kidney than write CSS.
If I was doing web development, I would most definitely use modern CSS to make responsive websites. I would prefer it and use it in almost all cases over using JS.
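Something like this is what I mean (class names invented, and the stylesheet is attached from script only so the snippet is self-contained): the layout reflows by itself, with no resize listeners or JS breakpoint logic at all.

    // Responsive behaviour expressed entirely in CSS; the script only adopts
    // the (hypothetical) stylesheet so the example stands alone.
    const sheet = new CSSStyleSheet();
    sheet.replaceSync(`
      .cards { display: grid; grid-template-columns: 1fr; gap: 1rem; }
      @media (min-width: 40rem) {
        .cards { grid-template-columns: repeat(3, 1fr); } /* three columns on wider screens */
      }
    `);
    document.adoptedStyleSheets = [...document.adoptedStyleSheets, sheet];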
> Accessibility is a hack.
If one uses HTML semantically, and does not resort to hacks, it actually offers a lot of accessibility. More than most "modern" JS-only websites.
> Performance is getting worse despite us having better than ever hardware. We're spending large amounts of time reinventing the wheel. JS is a horrible, bare bones language which is why you can't get anything done without 100 packages.
Yes. Most of that performance loss is due to JS bloat (need to download that framework first), ads, and unwanted tracking.
JS itself has actually improved, at least the APIs for events, DOM access, AJAX, and probably more. What has not improved is the mindset and development practices of developers and companies. Probably 90% of the websites using big frameworks would not need any of it and could live on server side rendered templates like we had more than a decade ago already. Sprinkle in some interactive components only on pages that need them and most of a website's pages would not be affected by that at all. Done.
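To sketch what that sprinkling can look like (the form id and the server returning an HTML fragment are assumptions on my part): the page stays server-rendered and works without JS, and a few lines upgrade it in place.

    // Progressive-enhancement sketch: the form works without JS (normal POST,
    // server re-renders the page); this script just upgrades it in place.
    const form = document.querySelector<HTMLFormElement>('#comment-form');

    if (form) {
      form.addEventListener('submit', async (event) => {
        event.preventDefault();
        const response = await fetch(form.action, { method: 'POST', body: new FormData(form) });
        const fragment = await response.text();   // server-rendered HTML, not JSON for a client framework
        form.insertAdjacentHTML('beforebegin', fragment);
        form.reset();
      });
    }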
Sometimes we should take a step back and really ask ourselves what kind of website we are building. What is the character of that website? Is it merely a website that shows some information about a company? It can probably live on server side rendered templates. A blog? Same. Not every website needs to be a "web app". Most of them actually do not.
> It's time to move away from HTML/CSS/JS. They worked great for as long as they did but instead of further contributing to the mess that they've become, we should be looking into alternatives.
I don't agree. It is time to embrace standard and good usage of HTML and CSS and eschew heavy JS frameworks and JS on the server as much as possible. I would not put HTML/CSS and JS into one basket here to throw out things. HTML and CSS have made great progress. JS too, but as I said, the mindset and practices of the ecosystem and people are not there.
Agreed - a load of old crap that for the most part could have been achieved with a better implementation of HTML frames. Same sucky problems, like publishing barriers, devs holding sites to ransom, etc. etc.
A good open source app for a shop, a church, or a school would be a better focus for a million monkeys on MacBooks.
> Websites do more or less the same thing they did 15 years ago
As a tech enthusiast, I’ve paid close attention to what tasks I can achieve on my PC (and eventually smartphone) over the past 15 years. In my experience, this is simply not true.
What has changed significantly on the website side?
My bank website did a full re-design to be modern looking, but it's slower, no longer supports multiple tabs, and links don't work well, especially if you go through the sign in flow. The ones that didn't redesign into a JS heavy spa are still faster and links tend to be more reliable for e.g. bookmarking. They don't look great, but they tend to be more functional.
I can buy something, pay using a number of different standardised mechanisms (debit card details, Apple/Google Pay, other web services like PayPal) and have it quoted for shipping from almost anywhere in the world to my house with just a few presses in a mobile browser or on my laptop. There are a huge number of moving parts involved in providing that experience and it is fairly consistent between large tech players and small.
It might not be flashy stuff like a nice replacement for JavaScript to make developers' lives easier, but a lot of my story above depends on significant advancement in web standards plus industry experience and consolidation. The web (as well as walled gardens like Apple and Google native) has been critical in enabling that. It's the bedrock for the entire digital economy, including many transactions that drive the physical economy. 15 years ago you couldn't even trust that the average user's browser had a decent fetch API.
edit: I do agree that things have gotten slower and everything is terrible.
It has been educational to watch the SerenityOS / Ladybird team implement browser features. As they make real websites work, you can see how the technologies for even simple brochure sites have changed. For example, logos may be SVG instead of simple images which makes sense given the range of screen sizes and resolutions that need to be supported.
You can also see that CSS continues to grow but in pragmatic ways that actually reduce the amount of JavaScript to do the kinds of things that users expect these days.
On the performance and capability front, WASM is a game changer.
Regardless of what JS framework is in vogue, I do not think it is fair to say that we are doing the same thing on the web as we always have or even to say that the base technologies are getting more complex for no reason. At least, it is no more fair than any other programming domain. I mean, we can look at Excel and VisiCalc and say that we are creating the same apps that we always have with more complexity and bloat. I mean, people do say that but there is an awful lot left out of that analysis. Even more so if I compare VisiCalc and Excel via Office365 in a browser.
Regular people are routinely accomplishing far more with their computers than they have in the past. More and more of that is being done in web browsers.
> Regardless of what JS framework is in vogue, I do not think it is fair to say that we are doing the same thing on the web as we always have or even to say that the base technologies are getting more complex for no reason.
Certainly. The things you can do on the web have vastly increased, but the things we actually are doing are mostly the same. Read some text, make an account, login, submit a form, make a payment, upload a file. That's what the vast majority of the web has been and still is, yet the path you take to get there is many times more complex and the user experience hasn't proportionally gotten that much better. Arguably, it's gotten worse.
Well, I think you can break it into two parts. I see the underlying tech getting better, making it easier, and making it faster. Look at the HTML dialog element as an example.
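A minimal sketch of what I mean, with made-up element IDs; the browser supplies the backdrop, Esc-to-close, and focus containment that used to require a modal library:

    // Native <dialog> element; "settings" and "open-settings" are hypothetical IDs.
    const dialog = document.getElementById('settings') as HTMLDialogElement;
    const openButton = document.getElementById('open-settings');

    openButton?.addEventListener('click', () => dialog.showModal());
    dialog.addEventListener('close', () => console.log('closed with', dialog.returnValue));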
So, “the path you take to get there” CAN be much improved as a developer.
Now, as a user, you may find that sites that could be simpler are much heavier and complex than you want them to be. My question is why? My argument is that both users and producers of these sites WANT them to be more complex. Which means blaming the technology is misplaced.
Certainly we CAN still make the sites that ran in Netscape 4. Why don’t we?
It always boggles my mind how people on HN keep claiming that websites are slow because of JS when every 10th headline is about raytracing in WebGPU at 60fps or running LLM inference locally in the browser or similar.
Let me open your mind up to the possibility that no language can prevent unperformant applications being written in it and that the vast majority of web apps don't treat performance as their primary concern as long as it's good enough for their particular definition of "good enough".
If you are so motivated, creating high-performance web apps is easy. The tooling to make improvements is among the best of any programming language.
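For instance, the measurement hooks ship with the platform; in this sketch renderBigTable is just a stand-in for whatever hot path you suspect, and DevTools will pick the marks up in its performance timeline:

    // Built-in profiling hooks, no extra tooling required.
    function renderBigTable(): number {
      let total = 0;
      for (let i = 0; i < 100_000; i++) total += Math.sqrt(i); // pretend work
      return total;
    }

    performance.mark('render-start');
    renderBigTable();
    performance.mark('render-end');
    performance.measure('render', 'render-start', 'render-end');
    console.table(performance.getEntriesByName('render'));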
Those examples are both using...the GPU. No wonder you can make websites fast if you offload everything to a separate processing unit. Which unironically is one reason why people are pushing for WASM and GPU rendering for canvas as an application platform.
Indeed, this is one reason why Ian Hickson (who helped create the HTML5 spec) is advocating for moving towards an application platform based on WASM instead. He has some good comments on this thread.
>In theory, HTML is device independent. In practice, we can't even get web apps to work on desktop and mobile, to the point where even sites like Wikipedia, the simplest and most obvious place where HTML should Just Work across devices, has a whole fricking separate subdomain for mobile.
Does he actually know why the cases are separate? Asserting this without any historical analysis is meaningless. For all we know, Wikipedia split desktop and mobile domains before responsive tooling in the browser got good and they just never changed it cause, like most businesses, they have better things to work on as the current solution still works.
> Without downplaying all the effort being put into this
> Websites do more or less the same thing they did 15 years ago
That is a huge downplay if anything :) Websites certainly do the same things they've always done; the difference nowadays is that there are web apps too, interactive applications, which existed 15 years ago, sure, but they didn't have the same scale as today.
That said, most web apps today should just be websites, though there are some use cases for real web apps too, although they are few.
Once you start to understand there is a difference between websites and web apps, things start to make sense. There is still a huge misuse of making things into web apps, but at least you start to understand why the ecosystem moves in a direction you seemingly don't grasp.
> It's time to move away from HTML/CSS/JS. They worked great for as long as they did but instead of further contributing to the mess that they've become, we should be looking into alternatives.
There are huge efforts toward this already, which you also seem to have missed. The whole WASM effort is about being able to write code for browsers in any language you want, and it's already usable today. It's missing some vital things like DOM manipulation to be 100x more useful, but again, it's still useful today. Lots of games are written in Rust and deployed as WASM, for example, runnable in the browser.
I actually think you have a point. Sure, it's missing nuance and there are certainly things 2000s tech will not do properly, but to be honest I'm not quite a hundred percent sure what the past two decades brought other than funky frameworks and libraries. Useful, but funky. I know for example CSS animations and layout possibilities are off the charts nowadays, but they do not strike me as fundamental improvements, although the QoL is certainly appreciated.
What happened over the last two decades is browsers getting their act together, though; that made a huge difference.
I’m a webdev. I work with React and Angular. Again, missing nuance but I’m not completely sure we are on the right track at all. Not sure what we should do.
I made a comparison between crystal meth and WordPress before. I'm not sure getting us all on the web (crystal meth) train is the way to go. Sure, it's standard and lots of people like it. It's cheap... but yeah. The alternative might be painful (aka not do it).
>"JS is a horrible, bare bones language which is why you can't get anything done without 100 packages."
BS. I use JS for front ends without any frameworks, just some domain-specific libs. Works fine and I "get the shit done". Sure, I prefer compiled, typed languages like C++, but I'll code in anything if it gets the project completed faster given whatever constraints.
What do you mean? HTML is just a hierarchical document format, and CSS is just a logical document styling language. Those two are about as simple as they can be.
Accessibility is a very hard and complex thing where half of its outspoken "promoters" fight against every gain because they have money invested on the status-quo. That's why it doesn't improve fast.
Most developers avoid CSS exactly like most developers avoid SQL: to avoid logical programming. That doesn't mean that logical programming isn't the best known paradigm for data querying and document styling; it just means that most developers are bad and will avoid learning something hard as much as they can. Honestly, I have no idea how to fix this, but conceding our tools to the preferences of the worst of us isn't going to lead to anything good.
JS indeed can be improved. But guess what, many people are working hard to replace it.
HTML/CSS are fine, as is javascript, but yeah I don't understand why web pages get bigger and bigger and bigger and slower. Just because we have more bandwidth, I don't know why we have to chew through with ever more complicated webpages. I get stuff like slack or gmail, but showing a company webpage? Why does that take 10 seconds to download and render?
Because users want that? As a result, sites that deliver that deliver more value to their owners than sites that don’t?
I do not subscribe to the view that it is all just collectively worse because we are cooperatively moving things away from our preferences.
I use ancient hardware. I wish the web was much lighter. That said, I do not see the fact that it (and everything in computing) keeps becoming more and more resource intensive as evidence of some kind of incompetence or collective failure.
It just means I represent too little demand to dictate the supply.
Depends. What browsers do I have to support? The number of kidneys I donate is directly proportional to how much I have to target IE6, at which point I donate both and would rather die.
Could Servo be used to build desktop webview type apps that can leverage all the html/js/css libraries that are available?
The main complaint people seem to have about this approach is the large size of the executable; I don't know if Servo can make a difference in this respect.
I really like Sciter for webview-type apps. It's really lightweight, integrates well with native code, and brings some native-app concepts to JS (like the communication between multiple app windows). Sure, it's not open source, but you can have the source code if you pay for it; I can respect that.
My main issue is that it's lacking some of the DOM API. It's complete enough that you wouldn't really notice when writing code yourself, but good luck finding a charting library that runs without modifications.
Sciter is very interesting. It uses a custom rendering engine with a custom fork of QuickJS. Extremely lightweight and fast. Quite a nice project honestly.
I was thinking about building a desktop app framework using servo, but skipping any html/css parsers and without any JS engine. It would use rust and some dsl/macros that compile to servo function calls that build the UI. I'm not sure how much smaller it would be, but I imagine not having node would be a good start
I'd been wanting to see this, preferably with JS being optional, and just allowing direct DOM access.
I initially thought this was what Azul was, but it only uses Servo's WebRender compositor and rolls its own CSS parser, DOM, and layout engine, so it doesn't benefit from most of the work done on Servo and supports fewer CSS features.
I think it used to be a goal to be embeddable in the way you describe. Or at least better than gecko and at least as good as chromium of the time. I’m not sure if that goal changed. I think it probably doesn’t yet support enough features to meet your requirements.
If you use Electron, your app is bundled with Chromium + Node.js and runs two different processes. Servo is like a minimal/limited engine, and you don't need to write your app logic in Node.js.
I think it would be great if the homepage had a prominent link to the documentation, as it wasn't obvious to me where to find it. I followed the link to GitHub, which in the readme has a howto for building the project, and there is a docs folder with a few documents, mostly dealing with how to get started contributing.
In the end, I think Servo is better served by being in the Linux Foundation rather than under Mozilla, as Mozilla seems to stray further and further from the path they were walking a decade ago, sadly.
Servo seemed like a great project for Firefox. Both in the near term, by merging parts of Servo like the CSS engine, and in the long term, by promising a much better rendering engine than anything else on the market.
But sadly good for Firefox doesn't mean good for Mozilla. Making a great browser isn't necessarily their highest priority.
Well... look at Thunderbird - Mozilla basically let it go as well, and it made a spectacular comeback with huge funding from users / the community... as if users do like to contribute directly to the software's development and not to the Mozilla org with its weird goals... food for thought...
Here is a good article on how important Servo was to the early development of Rust and the feedback loop between the two [1].
I find it interesting to see how Rust chose run-time over compile-time. Some of this is completely understandable (eg the whole borrow-checking mechanism that is so valuable). Generally though, I've found you want to favor compile-time, particularly incremental compile-time. It's better to get something to work first and then optimize performance later than the other way around.
The main competitor of Rust for writing browsers is C++, and its compilers don't have in-unit incremental compilation. So that's why Rust didn't start out with incremental compilation, but it was added later on, so now you do have it (although it could be improved in many ways).
> It's better to get something to work first and then optimize performance later than the other way around.
I see it repeated a lot without any justification. And I observe that in practice the reverse statement is true.
If you make something work first without looking at its performance from the very first phase of development then you will soon find yourself in a place where the only way to improve performance is doing a full rewrite.
It is way easier to add more features to a program that has a good performance-oriented design than to improve the performance of a program that has accumulated thousands of features. And programs like web browsers, compilers and database engines in particular don't have the mythical "95% of CPU power is taken by 5% of code" profile. Their profiles are flat, and when you improve perf by 5% you're already blogging about it.
> It's better to get something to work first and then optimize performance later than the other way around.
As far as I can gather, that is one of the reasons Rust is slow to compile. Fast compilers were usually built with performance in mind from the very start, while rustc was first made to work and then optimized.
> It's better to get something to work first and then optimize performance later than the other way around.
That’s what they did by using LLVM instead of making their own compiler backend. The compiler could apparently (according to what has been said) have been faster, earlier, if they didn’t have to go through LLVM.
But yeah, you’re right about their priorities: minimizing runtime has always been a high priority while designing the language, while designing for fast compiles has not. At least according to my view from the peanut gallery
I don’t know anything about compilers, but it would have been interesting to see if a language like Rust would look different today in this regard if they focused on incremental compilation (instead of batch) from day one.
There's a balance between the two, and that balance shifts based on product/project requirements. I can't imagine anyone writing a slow browser engine and being like "yea but we will make it fast later!" Screen tearing, 5-second plain-text page loads, etc. don't make for compelling demos. Similarly, for an OS or a database you can't have it fall below some reasonable performance threshold even in prototyping. Otherwise you aren't even solving the problem: the apparent remaining 10% of the work basically demands a complete rewrite plus many new feature additions, i.e. you've done nothing.
> I can't imagine anyone writing a slow browser engine and being like "yea but we will make it fast later!"
Funny, that’s the exact opinion of the people building Ladybird for SerenityOS. They are of course doing it for fun, and it doesn’t take five seconds to show text.
OTOH, dev machines are usually beefier and fewer in number than prod machines.
So if you can give yourself head room to make prod _really_ fast, in a few years the dev systems may double in power anyway.
Incremental compiles are a common source of bugs, right? I find it's easy to do something from scratch (like imgui repainting the whole window every frame) and hard to optimize it not to (like damage tracking in GUIs).
That happens with some frequency when building Gecko. You can see documented instances of it in the CLOBBER file, which lets the person building optionally have the build system force a clean rebuild.
Having the ability to turn off expensive compiler options is nice. For Solvespace (C++), turning on LTO makes compilation about 6x longer for maybe a 15 percent runtime performance gain. I do development without LTO and let releases get LTO. The borrow checker in Rust might be too important to turn off, even sometimes.
> The borrow checker in Rust might be too important to turn off, even sometimes.
Variations of this point come up often, namely that the main reason Rust compile times are slow is type checking or borrow checking, but that’s not true: while those operations are expensive, it’s monomorphization (and LLVM) that are the main drivers of compile time currently.
Building a compiler on top of LLVM is unfortunately going to be sluggish for feedback loops, regardless of your type system.
Also, unlike LTO, borrow checking is not just an optimization, turning it off would be like allowing you to add arrays and booleans together: meaningless and wrong.
Anyway, you wouldn’t save more than ~10% of compile time.
Apart from the rant above, I do agree that having the ability to tune optimization levels for development vs prod is good, I would like it if it were easier to do stuff like PGO or other stupidly expensive optimizations for releases.
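To make the monomorphization point a bit more concrete, here is a minimal Rust sketch (the function and values are purely illustrative): one generic function in the source becomes one compiled copy per concrete type it's used with, and every one of those copies is pushed through LLVM, which is where a large share of the build time tends to go.

```rust
use std::fmt::Display;

// One generic function in the source...
fn describe<T: Display>(value: T) -> String {
    format!("value = {value}")
}

fn main() {
    // ...but three separate copies in the compiled output: rustc
    // monomorphizes describe::<i32>, describe::<f64>, and describe::<&str>
    // independently, and LLVM then optimizes each copy on its own.
    println!("{}", describe(42_i32));
    println!("{}", describe(3.14_f64));
    println!("{}", describe("hello"));
}
```

Switching to a trait object (`&dyn Display`) would avoid the duplicated codegen at some run-time cost, which is the usual trade-off people reach for when monomorphization gets expensive.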
GPUs probably won't be a good fit for the browser, except for purely graphical tasks, like drawing images, for which they are already used.
GPUs are good for massively parallel tasks, for example, shading every individual pixel on a 3D rendered scene, which is what they are designed for, bruteforcing hash functions, as it is done in cryptocurrency mining, or multiplying large matrices, as it is done in deep learning.
But most of what a browser does is not massively parallel. JS is mostly single-threaded, and layout, HTML parsing, or managing network connections may benefit from a bit of parallelism, but we are not talking about thousands of simultaneous operations with no interdependence. And for that, GPUs suck: they don't have the caching, branch prediction, synchronization, etc. abilities of a CPU, and attempting to do that kind of work on a GPU will only bog it down while its overpowered math units sit idle.
Is this now intended to be a full featured browser? That'd be great.
Last I checked, this was going to be a sandbox for Rust ideas with no intention of Gecko being dropped by Firefox.
It would be cool if it could at least be a "secure" window with heavy restrictions on add-ons etc. Not full featured, maybe not even much hardware acceleration. Just a single window that is super hardened.
The paint phase of rendering can easily happen on the GPU, and more and more stuff will move to the GPU as time goes by. The style and layout phases… unlikely.
Think of a GPU this way: If you could reasonably launch a thousand threads on the CPU and get a benefit from it—i.e., essentially, you have tens of thousands of largely independent work chunks (or more)—your problem might be well-suited to a GPU. A large web page contains a couple thousand DOM elements, and many of them depend on each other wrt. style, so it's not a good fit. (Stylo is parallelized, so it is capable of using a multicore CPU, but there's no way it can get a reasonable gain from 1000 cores.) Layout is also nontrivial to parallelize.
However, paint is a different beast. In many cases, you can do per-pixel parallel work. So various forms of font work, line/shape rendering, compositing/blending, scrolling, video decoding… that's where the GPU is useful.
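As a rough illustration of why paint maps so well onto wide parallelism, here is a minimal sketch assuming the rayon crate (not how Servo/WebRender actually implements compositing): a source-over alpha blend where every pixel can be processed independently.

```rust
use rayon::prelude::*;

/// "Source over" blend of one premultiplied RGBA pixel (8-bit fixed point).
fn blend_pixel(src: [u8; 4], dst: [u8; 4]) -> [u8; 4] {
    let src_a = src[3] as u32;
    let mut out = [0u8; 4];
    for c in 0..4 {
        // out = src + dst * (1 - src_alpha)
        out[c] = (src[c] as u32 + dst[c] as u32 * (255 - src_a) / 255).min(255) as u8;
    }
    out
}

/// Composite `src` over `dst` in place. Every pixel is independent, so the
/// work splits cleanly across however many cores you have.
fn composite(dst: &mut [[u8; 4]], src: &[[u8; 4]]) {
    dst.par_iter_mut()
        .zip(src.par_iter())
        .for_each(|(d, s)| *d = blend_pixel(*s, *d));
}
```

Swap the CPU thread pool for GPU lanes and the structure is unchanged, which is exactly what makes paint a good fit while style and layout, with their interdependent nodes, are not.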
Who knows? A rendering engine will still be needed though so another prediction could be that Servo supports such GPU-based rendering.
Maybe a better prediction would be for the CPU/GPU/MPU/APU/xPU distinction to become less relevant since processors will gain those capabilities? A future where processors come in chiplet-based packages with a few generic cores and some more tailored to massively parallel matrix processing maybe?
How much cheaper can they get? Most laptops have CPUs/boards with integrated GPUs, and desktop GPUs can be bought for very cheap, as long as you don't buy the latest greatest edition of everything.
It's hard to lose on this one: even an underpowered, integrated GPU will probably do a better job than an average CPU. Desktops have figured this out a while ago, both OSX and Windows have been rendering on the GPU for a very long time and it's not necessarily to enable any extra graphical effects, but just to make the interaction faster, smoother, and more energy-efficient.
Servo will forever be a testament to Mozilla's absolute failure as an organization. Finally there was a renewed interest in Firefox as a technology and their dipshit CEO fired them all "because covid" while taking their largest ever bonus.
I hope that it can become something more than that, but I'll always remember it.
Technology doesn't matter if it doesn't convert users. Perhaps the improvements were too small given the level of investment necessary and given the bleak outlook, they decided they couldn't keep it up. I am happier knowing Mozilla will live to see another day than see it die off.
As far as pay goes, the Mozilla board ultimately is in charge of setting that. I'm not sure it makes sense to blame the CEO for taking the compensation they were promised. The fact it is correlated with layoffs just speaks to the large incentives those in charge throw at the CEO to perform layoffs when necessary.
The technology is what matters to users. Mozilla only matters in that they develop and publish the software that users care about.
If the software is forked and development is continued as an open source project under the auspices of another organization, that works just as well for users. Conversely, if the software dies but the Mozilla Corporation survives as the world's leading manufacturer of paperclips, that's of no value to the software's users.
The Mozilla board is living proof that the following keywords in absolutely any leadership position are a death sentence: Stanford, Harvard, McKinsey. The Mozilla board are sycophants that prop each other up in the various boards they are each a part of, to leech off as much as they can before jumping ship to another.
> Technology doesn't matter if it doesn't convert users
Browsers have primarily won on the merits of their implementation. It is how Chrome was primarily marketed, for example.
> Perhaps the improvements were too small given the level of investment necessary and given the bleak outlook, they decided they couldn't keep it up.
That doesn't seem likely. The project had significant potential for core technology metrics - performance, security, stability.
> I'm not sure it makes sense to blame the CEO for taking the compensation they were promised.
The CEO is the one who asked for the increase in pay. This is publicly documented. In fact, she gets paid more than the sum total of all Mozilla donations, meaning that corporate sponsorship pays for the browser and user donations pay for her salary.
The CEO is also in a perfectly fine position to reduce pay in a time of uncertainty. Any board would approve that (for reference, I was a CEO of a small [8 figures] tech company).
The CEO's pay had been rising significantly well before the layoffs; it had nothing to do with the board pushing for layoffs.
I think you underestimate how much power a CEO has, the board is not in charge of all decision making.
Who cares if their CEO did not invent JavaScript, she donated to politically correct causes, shook the right hands, and virtue signaled at the right time. As far as Mozilla is concerned, they got the leader they deserved.
It's not so much this Woke Right view that everyone deserves 0 consequences in every environment, and lefties have 0 consequences, but the inverse.
Bad idea to fight gay marriage when you run a company, not very respectful or inclusive and causes a lot of headaches for board, HR, and legal.
Nobody really cares about the new CEO as long as they don't do that. The CEO that replaced Brendan wasn't a she, you're trying to connect events 6 years after Brendan to Brendan.
Just think: if every developer got together and contributed a single line of code to a new internet stack/browser/protocol project, we could overtake the corporate poison that's currently spreading within the web space.
If there were such a project where every developer contributed one single line, the result would inevitably be a horrible, unmaintainable mess of a codebase. Sorry, this is the stupidest thing I have seen this morning.
The dream that everyone could get together and make an internet utopia without corporate input is a pleasing one. That was the picture I was trying to paint.
The internet is so bogged down by corporate walled gardens that nothing is individual anymore. Not to say individual sites don't exist, but they have a very minute presence.
Neither Layout 2013 nor Layout 2020 fully supports widely used features like flexbox and grid, or has a reliable float implementation. This report proposes focusing on improving Layout 2020, but until the project benefits from the renewed energy and focus, it's not ready for projects that need these features.
I always thought servers could pre-lay out the HTML to determine Xs, Ys, and widths for different browser viewport sizes, and that this metadata would speed up layout because you'd have a good candidate layout immediately.
When it comes to changing the DOM and reflow, I wonder if you could reflow only the affected rows/chunks of the screen and recomposite bitmaps to move elements down the page.
Some web pages are really pathological with relayout costs
I’ve been out of the browser rendering engine game for a long time.
But even back in the day, the various ways that a DOM element’s physical layout could escape its parent’s bounding box, the multiple stages of coordinate evaluation, the zillion edge cases: it’s hard.
And that’s assuming that all inputs are perfectly formed. By the time you’ve cranked in enough Poe’s Law to be more than a tech demo, and enough performance to interest anyone?
It’s one of the nastiest practical problems you’ll encounter in a long career, and the cross product of the kooky standard and the bugs you have to honor on the real web (a back-compat story worse than Win32) means that there’s no One Elegant Answer.
It’s a fucking mess and my hat is tipped to the hard-ass veterans who can even make a credible attempt.
I’d rather be on the hook for a modern RDBMS than a modern browser.
> I’d rather be on the hook for a modern RDBMS than a modern browser.
I've worked on both a RDBMS and a browser, and yes, I'm inclined to agree. In particular, it is largely acceptable for an RDBMS to pick a niche where you perform well, and do worse in others—and you're (silently) allowed to support only a given subset and have your own incompatibilities. Nobody would ever use a browser that performs well on NYT but could hardly run YouTube (or the other way around).
Interestingly, there are very small teams who have written usable instances of both.
I haven't been part of a small team building either, but the obvious small team doing RDBMSes is SQLite. Three people building the world's most popular software, and possibly the world's most robust database. They found a niche and implemented it perfectly.
For browsers, the most prominent example is probably LibWeb/Ladybird, part of SerenityOS (but with a Qt port, allowing it to run on other operating systems). I've never tried it myself, but supposedly it's complete enough to run large web applications fairly acceptably, which is amazing for something developed largely by basically two people AIUI.
I've been watching the SerenityOS videos, and the Ladybird videos especially show a decent chunk of the spec implemented. One man (okay, one man with a history in WebKit development) and a bunch of volunteers have built up a browser that could almost have been competitive if it had existed back in the day. Of course the people working on the project are incredibly talented, but I'd argue that if one full-time employee developing both an operating system and a browser can get this much done with a community, the problem isn't as hard as many people deem it to be.
There's a lot that can go wrong, but the spec describes much more than I remember reading back in the day. Layout algorithms and such are all spelled out. There's no arcane knowledge in finding out what the right size of an <img> is, there are rules that will tell you exactly what you want to know if you follow the spec.
Not everything has been documented as well as it should be, but the vagueness and complexity of the web stack that devs often like to lament really isn't that bad. It's a lot of reading, and boring reading at that, but the problems all seem relatively straightforward to me.
Even quirks mode has been largely documented. A spec for things that don't follow the spec!
I know things were very hard back when Internet Explorer was still a major browser because there were no rules. These days, you can either find your answer in the spec, or the publicly available source code for your competition.
With the amount of features being crammed into database systems these days, I'm not sure what I would prefer to maintain. A modern RDBMS sure seems less complex than a browser, particularly because there's no user facing UI, but the problems with an RDBMS are a lot more about (inventing) complex algorithms than many of the modern browser challenges.
> Is compositing different bitmaps an embarrassingly parallel problem?
If you can make an initial approximate guess of the area for each section, you can split the page into regions with lower and upper size constraints that can be laid out in parallel. A global branch-and-bound algorithm can coordinate those processes, tightening the upper constraints of one region when another connected region exceeds its minimum bounds.
Indeed there still may be degenerate cases where multiple regions need to be recalculated several times, but overall the default case should be able to finish without much conflict.
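Here is a minimal sketch of the scheme being described, assuming the rayon crate; the types, the height budget, and the "shrink upper bounds proportionally and retry" reconciliation rule are all invented for illustration and are not how any real browser lays out pages.

```rust
use rayon::prelude::*;

// Hypothetical per-region constraints: each region may end up anywhere
// between its minimum and maximum height.
#[derive(Clone, Copy)]
struct Constraints {
    min_height: f32,
    max_height: f32,
}

#[derive(Clone, Copy)]
struct RegionResult {
    used_height: f32,
}

// Stand-in for real layout work: clamp a fake "natural" height into the
// allowed range for this region.
fn layout_region(id: usize, c: Constraints) -> RegionResult {
    let natural = 100.0 + id as f32 * 10.0;
    RegionResult {
        used_height: natural.clamp(c.min_height, c.max_height),
    }
}

// Lay out all regions in parallel, then let a global coordinator reconcile
// the results against an overall height budget and retry if needed.
fn layout_page(mut constraints: Vec<Constraints>, budget: f32) -> Vec<RegionResult> {
    let run = |cs: &[Constraints]| -> Vec<RegionResult> {
        cs.par_iter()
            .enumerate()
            .map(|(i, &c)| layout_region(i, c))
            .collect()
    };

    for _round in 0..8 {
        let results = run(&constraints);
        let total: f32 = results.iter().map(|r| r.used_height).sum();
        if total <= budget {
            return results; // everything fits, no conflict
        }
        // Conflict: shrink every region's upper bound proportionally and retry.
        let scale = budget / total;
        for c in &mut constraints {
            c.max_height = (c.max_height * scale).max(c.min_height);
        }
    }
    // Give up after a few rounds and accept an over-budget result.
    run(&constraints)
}
```

The bounded retry loop stands in for the degenerate cases mentioned above, where regions have to be recalculated a few times before the constraints settle.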
> Some web pages are really pathological with relayout costs
I'm always wondering how much these awesome pathological sites contribute to CO2 emissions, along with ads, autoplay videos, huge and slow JS libraries, animations, and other needless and outright unwanted content. Even more so on mobile. I'd say it's a lot, and it could be cut immediately with very positive impact and no real loss.
CSS Containment <https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Contain...> defines ways of optimising a lot of calculation and recalculation, roughly by forbidding various of the harder or slower cases. (It’s approximately just an optimisation hint, though it does change behaviour if you try to break out of the containment you declare. It’s conceivable that in the future browsers could determine whether these optimisations can safely be done without the need for the hint, though that could only help relayout, never initial layout.) Some forms of containment that you can declare definitely allow for parallel layout to work without it being speculative.
> I always thought servers could prelayout the HTML to determine X and Ys and widths for different browser viewport sizes and this metadata would speed up layout because you have a good candidate for layout immediately.
They tried this with <img srcset>, where it can have a lot more value since you can fetch a lower-quality version of the image if you don’t need more pixels. In practice, effective use is somewhere between uncommon and extremely rare, depending on how you want to classify things. Some reconceptualisation of that that integrated with container queries <https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_contain...> might have more potential. But all up, things like media queries, viewport units and calc() make any kind of “serialise the layout” concept extraordinarily difficult to do to any useful degree, and for almost no benefit.
I was about to write that it's a cautionary tale for the "let's procrastinate on writing X by going meta and bikeshedding on tools and languages" camp, but pavlov beat me to it, interpreting what he wrote as sarcasm.
> Cautionary tale for the ‘rewrite it in Rust’ camp
Mozilla famously made multiple attempts to update Firefox's rendering engine to take advantage of multiple CPU cores, all of which had to be abandoned, before they switched over to Rust and started to see some success.
>Parallelism is a known hard problem, and the CSS engine is very complex. It’s also sitting between the two other most complex parts of the rendering engine — the DOM and layout. So it would be easy to introduce a bug, and parallelism can result in bugs that are very hard to track down, called data races.
There is already a link to an interview with Josh Matthews, who led Servo development, where he makes the case that moving from C++ to Rust is the factor that finally allowed the effort to succeed after three previous failed attempts.
That's just not true. Over 10 years ago I switched to Chrome because it was much faster, but it's been nearly 5 years since I switched back, because Firefox was blowing it away.
I encourage people to listen to Josh Matthews interview [0]. He talks about how, early on, Servo was a "guiding light for Rust".
0: https://podcasts.google.com/feed/aHR0cHM6Ly9ydXN0YWNlYW4tc3R...