I'd be surprised if StackExchange was in the top 100 largest .NET web applications (I guess it depends how you define size, but I've seen apps with much larger code bases, bigger infrastructure, and bigger data sets, and I live in a relatively small country).
I think it's likely one of the top 100 by traffic, but (not to belittle the achievements of the Stack Exchange guys) I'm sure there are at least 100 bigger sites out there by code/infrastructure.
Flight booking systems come to mind. I don't know how many web applications Microsoft would have individually (I imagine they use .NET...) and whether you want to count those (microsoft.com, bing.com, accounts.live.com, Azure...). Not to mention in-house web applications.
According to the fairly suspect Alexa, they are in the top 100 worldwide (regardless of technology). So by traffic, I guess yes, they are in the top 100.
Clearly you can't forget it. Some teams can succeed in a wide variety of languages, some organisations can fail in any language.
Given the number of teams that have succeeded with .NET, I'd say that organisational failings are to blame for this one. Have you already forgotten the reputation of the consultants hired to implement it?
Hopefully I'll get around to benchmarking the results on Roslyn, although the results are a little bit biased as we have some incredible perf people working on the product.
With all due respect, given that you work at Microsoft ... But this article specifically talks about web applications being a primary use case for the new JIT (RyuJIT).
>>Quote: "But “server code” today includes web apps that have to start fast. The 64-bit JIT currently in .NET isn’t always fast to compile your code, meaning you have to rely on other technologies such as NGen or background JIT to achieve fast program startup."
Don't read too much into the specific example of "web apps". The point here is that the classic server app of 10 years ago was often a long-running service that processed large batches of work.
Think, for example, about gene sequencing: JIT compile the app once, chew on data for hours. No one cares too much about the startup time because the app will be running more than long enough to amortize the cost of a thoroughly optimizing compilation.
The classic server app of today is often a web app that doesn't do a ton of work relative to its startup time. Web apps often start up, do a bit of work, then shut down. Waiting a long time for the JIT compiler degrades each launch significantly.
As others point out, there are solutions to making web apps faster. But rest assured, RyuJIT helps all kinds of apps: server, client, web, computational.
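For context, the "background JIT" mentioned in the quoted paragraph above is exposed through System.Runtime.ProfileOptimization (available since .NET Framework 4.5). A minimal sketch of how an app opts in; the profile directory and profile name are placeholders, not anything prescribed:

    using System.Runtime;

    class Program
    {
        static void Main()
        {
            // Background (multicore) JIT: the runtime records which methods get
            // JIT-compiled during this run, then pre-compiles them on background
            // threads at the next startup. Directory and file name are placeholders.
            ProfileOptimization.SetProfileRoot(@"C:\ProfileCache");
            ProfileOptimization.StartProfile("Startup.profile");

            // ... normal application startup continues here ...
        }
    }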
I'm sure it will help with web apps for which JIT time is a significant performance drag. But even then, my knee-jerk response was to think that faster JIT compile times shouldn't really impact the life of anyone who knows about ngen.exe.
Reading the article, it sounded to me like Microsoft's primary motivation was cutting back on their development costs by shrinking the codebase. Faster compilation was just a nice side effect that also happened to make for a sexier selling point.
NGen doesn't help too many folks with ASP.NET, since it just doesn't work. And for a class of x64 applications, NGen is a terrible solution because it takes so long to precompile the entire application.
I'm going to put together a more detailed article for the Codegen blog regarding motivation, history, all that fun stuff...
But I think there is confusion (or at least I am confused) about what precisely RyuJIT provides. I know it provides quicker compilation times, which would affect the start-up time for web-apps. But does it also provide quicker-executing code? That is, does it do a better job optimizing web-apps' code once compiled?
Yes, a quick-starting web-app is awesome for when you need to add or replace nodes in your cluster. But I can usually suffer some warm-up time, even in that scenario.
My opinion is that web-apps are disturbingly often CPU bound and what I want most of all is faster web applications. Not in start-up time (that's icing on the cake, really) but in bottom-line request-processing throughput and latency.
The post's author commented: "Currently, we generate code comparable to JIT64. Sometimes we're a little better, sometimes we're a little worse. (I'll post some samples on the codegen blog as I get time) But we're just getting started, honestly. I expect that we'll be generating better quality code in almost every situation before we release a full, supported JIT (and I don't think we're that far away today)." (That said, CPU perf of the JITted code isn't everything.)
My next big perf task is to get the Roslyn self-build as a test for RyuJIT. It's a pretty interesting combination of JIT compiler throughput and generated code quality.
True, but don't most large web-based applications have some sort of "warm-up" process/procedure to compile and populate the most-used caches and pages? (Something like the sketch below.)
A shorter ramp-up time is always welcome, but JIT is not the only cost. Depending on your application, JIT might not even be the largest portion of your startup time.
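For what it's worth, the warm-up step doesn't have to be fancy; a small program that requests the hottest pages once after deployment covers a lot of it. A rough sketch, where the URLs are placeholders for whatever your application's hot paths actually are:

    using System;
    using System.Net;

    class WarmUp
    {
        static void Main()
        {
            // Placeholder URLs: substitute your application's hottest pages/endpoints.
            var urls = new[]
            {
                "http://myapp.example.com/",
                "http://myapp.example.com/search?q=warmup",
                "http://myapp.example.com/questions/recent",
            };

            using (var client = new WebClient())
            {
                foreach (var url in urls)
                {
                    // The response body is discarded; the point is forcing JIT
                    // compilation of the hot code paths and populating caches.
                    client.DownloadString(url);
                    Console.WriteLine("warmed: " + url);
                }
            }
        }
    }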
Tiny? Let's agree with that.
But StackExchange is one of the most open "platforms" of .NET, and that's why it's popular (at least for me, and I'm a .NET developer).
The StackExchange network must be one of the largest .NET web applications.
EDIT: typo