It's always dangerous to predict the future, but one possible future for the web feels like it's starting to come into focus: JavaScript as the default language everywhere (frontend, backend, edge), plus the ability to compile lots and lots of other languages to WebAssembly.
A runtime that works everywhere (or, more accurately, a handful of partially compatible runtimes) will encourage moving processing around the network relatively seamlessly. Code will run sometimes on the backend, sometimes in the browser and in apps, and sometimes on internet edge nodes. This feels like the future we've been aiming for since we first added Java applet support to the browser.
There's a huge amount of toolchain and runtime work still to do to get there, of course. But it's a pretty cool future. Even (maybe especially) to language snobs like me.
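To make the "one runtime everywhere" idea concrete, here's a rough TypeScript sketch. Everything in it is hypothetical: assume some hash.wasm module with no imports, compiled from Rust, C, or anything else with a wasm target. The loading code is nearly identical on a Node server and in a browser or edge runtime:

    // Server side (Node): read the same compiled module off disk.
    import { readFile } from "node:fs/promises";

    async function loadOnServer(): Promise<WebAssembly.Instance> {
      const bytes = await readFile("./hash.wasm"); // hypothetical module
      const { instance } = await WebAssembly.instantiate(bytes);
      return instance;
    }

    // Browser / edge side: stream the same bytes over HTTP instead.
    async function loadInBrowserOrEdge(): Promise<WebAssembly.Instance> {
      const { instance } = await WebAssembly.instantiateStreaming(
        fetch("/hash.wasm")
      );
      return instance;
    }

    // Wherever it runs, the module's exports are called the same way:
    //   const digest = (instance.exports.hash as CallableFunction)(input);

The point isn't this particular loading API; it's that the same .wasm bytes are portable across all three environments.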
The article describes the four eras of "how do I web server" as: physical servers -> infrastructure as a service -> platform as a service -> function as a service. Just to make that a little more anecdotal ...
I built my first websites in 1994. There really wasn't a commercial web yet. To build a website in 1994 you probably either stuck some files in a special place under your home directory[0] (if you were at a university), or you downloaded and installed Apache to some machine of your own. Maybe an old machine you left running under your desk.
Then in 1995 Netscape went public and kicked off the "dot com" era, and Bill Gates wrote the famous "Internet Tidal Wave" memo. [1]
All of a sudden you could get paid to build websites! Big ones (or so we thought, back then). You probably needed redundant bandwidth, and a way to expand server capacity, and 24/7 power guarantees. There wasn't really a way to get that stuff except to buy hardware yourself and put it in a data center. You paid somebody with a fancy building a few thousand dollars a month (at a minimum) for a cabinet, a committed "95th percentile" amount of bandwidth, a power connection rated to a certain number of amps, and a network drop. Then you bought a lot of machines from Dell or Sun (depending on your budget), spent a lot of time setting them up, and drove with them in the trunk of your car to the data center.
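For anyone who never got one of those bills, 95th-percentile ("burstable") billing worked roughly like this sketch (numbers hypothetical): the provider samples your throughput every 5 minutes, discards the top 5% of samples for the month, and bills at the highest remaining rate.

    // Rough sketch of 95th-percentile bandwidth billing.
    function billableMbps(fiveMinuteSamplesMbps: number[]): number {
      const sorted = [...fiveMinuteSamplesMbps].sort((a, b) => a - b);
      const index = Math.ceil(sorted.length * 0.95) - 1; // drop the top 5%
      return sorted[index];
    }

    // A 30-day month has ~8640 five-minute samples, so roughly the worst
    // 432 of them (about 36 hours of spikes) are free; sustained traffic
    // above your commit is what you pay overage on.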
Then the first generation of "infrastructure as a service" (Rackspace) made the logistics a lot easier, and the costs a little better. Then AWS made the costs a lot better. And now "platform as a service" offerings like Elastic Beanstalk have made the logistics easier still.
But, arguably, the as-a-service evolutions haven't made it possible to actually do new things I couldn't do on my own hardware. Faster, easier, cheaper, yes, definitely. And those are really important!
But wasm-at-the-edge feels new and, maybe, transformative.
I really like this paragraph about "the future" at the end of the Cloudflare announcement about supporting WebAssembly [2]:
"We're excited by the possibilities that WebAssembly opens up. Perhaps, by integrating with Cloudflare Spectrum, we could allow existing C/C++ server code to handle arbitrary TCP and UDP protocols on the edge, like a sort of massively-distributed inetd. Perhaps game servers could reduce latency by running on Cloudflare, as close to the player as possible. Maybe, with the help of some GPUs and OpenGL bindings, you could do 3D rendering and real-time streaming directly from the edge."
You don't need WASM to deploy compiled binaries to a web server, so I don't understand the excitement. It may be convenient for the provider (Cloudflare), but not necessarily for the customer. Performance will be better without WASM, and it's an additional step in the build pipeline compared with Cloudflare supporting any x86 binary.
WASM's purpose is to compile languages to run in a web browser. A Cloudflare "worker" is a web server that runs at a data center close to the user.
It's useful to be able to write code that runs on your server, and in the browser, and in an "edge" thingy too (for some definition of "edge"). x86 binaries aren't going to be able to do that.
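A hedged sketch of what that looks like, using the service-worker-style fetch handler that Workers exposed. Here "rewrite" is a hypothetical pure function (e.g. an export from a wasm module) that you could run unchanged on a server, in a browser service worker, or at the edge:

    // Hypothetical: a pure transform, e.g. an export from a wasm module.
    declare function rewrite(body: string): string;

    async function handle(request: Request): Promise<Response> {
      const upstream = await fetch(request);       // proxy to the origin
      const body = rewrite(await upstream.text()); // transform near the user
      return new Response(body, {
        status: upstream.status,
        headers: upstream.headers,
      });
    }

    // Same handler shape whether this is registered in a browser
    // service worker or in an edge runtime.
    addEventListener("fetch", (event: any) => {
      event.respondWith(handle(event.request));
    });

An x86 binary can do the server half of that, but not the browser half; that's the portability argument.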
[0] https://httpd.apache.org/docs/2.4/howto/public_html.html
[1] https://www.wired.com/2010/05/0526bill-gates-internet-memo/
[2] https://blog.cloudflare.com/webassembly-on-cloudflare-worker...