Windows 10 and .NET Native (anandtech.com)
158 points by WhitneyLand on Oct 3, 2015 | hide | past | favorite | 107 comments



I don't see why anybody should be surprised about the swing back to Ahead-of-Time compiled native code. I'm actually surprised it has taken as long as it has.

Although interpretation and Just-in-Time compilation were not new ideas at the time, Java first brought them to a wide audience around 20 years ago with its use of the JVM.

Anyone who remembers those times, and even many years beyond then, will remember how slow JVM-based applications were compared to native apps.

Even today, there's still a noticeable difference between JVM-based software and native software, although the massive increase in computing power has rendered the differences easier to overlook in many cases.

Since then we've seen similar techniques used by a variety of other platforms.

Despite many years of effort, research, and investment, we've yet to see Just-in-Time compilation rival Ahead-of-Time compilation in terms of performance.

While Just-in-Time's supporters will claim that it allows for better optimization or better portability, or they can show some contrived benchmarks where it has a slight edge, we haven't seen such techniques stand up in the real world.

Ahead-of-Time C and C++ compilers have consistently compiled software that's very fast, lean, and the best performer in real-world scenarios.

For years, we've had people pointing out this very obvious reality regarding how Ahead-of-Time compilation proves to be superior to Just-in-Time compilation in realistic scenarios. Yet we've seen their claims very forcefully denied by the Just-in-Time advocates.

Maybe after 20 years, it has finally become clear that the Just-in-Time supporters were wrong, and that Ahead-of-Time compilation is the better approach.

With battery-powered devices more prevalent than ever, the need for high-performing and efficient binaries is greater than it has ever been. Returning to Ahead-of-Time compilation is just what we need given these circumstances. It's just a real shame it has taken so long for this important reality to be recognized.


There have been AOT compilers from third-party JVM vendors almost since the beginning, used mostly by companies that didn't want to expose their bytecode.

However, Sun decided to bet the farm on JIT and was against having AOT compilation in the reference JDK, meaning anyone who wanted an AOT toolchain for Java had to go shopping.

Now, it appears Oracle is of a different opinion.

"Java Goes AOT" at JVM Language Summit 2015

https://www.youtube.com/watch?v=Xybzyv8qbOc&list=PLX8CzqL3Ar...

Being used to memory-safe languages with AOT compilation like Turbo Pascal, Modula-2, Modula-3, Oberon, Delphi, and Ada, among many others, I also would have preferred that Java and .NET had offered the same type of toolchain from the beginning.

Now, almost 20 years later, we need to teach younger generations that those default implementations were just that: implementations. They have gotten used to the (wrong) idea that memory-safe languages imply a VM.


Delphi and Pascal are memory safe? That's news to me. I created my fair share of buffer overruns and memory corruption in both of them (they were the first languages I programmed in).


Compared to C they are.

You were surely using unsafe constructs explicitly, like taking the address of a variable with Addr or @, or using pointer-manipulation functions.

With C they just happen, given the lack of bounds checking, implicit conversions, automatic decay of arrays into pointers (especially bad as arguments), null-terminated strings, and behaviour that is undefined or varies across compilers.


Alas, the Swiss version of Algol lost out to the New Jersey version :-(

Bounds checking and non-nullable references are for wussies, after all. That, and a worse-is-better C compiler is easier to implement and thus more "available". (like Joel Spolsky said "Go ugly early")


Joel didn't say that. You're probably thinking of this piece by Steve Yegge: http://steve-yegge.blogspot.com/2007/02/next-big-language.ht...

But I wouldn't exactly characterize that as advice Steve Yegge wants people to follow, either.

> Back when I was in the Navy, just out of boot camp, an otherwise entirely forgettable petty officer first class instructor of ours offered us, unasked and free of charge, his sage advice on how to pick up women at a bar: "Go ugly early." With this declaration, he had made it clear that he and I thought rather differently about certain things in life.

> If you want to spare yourself a lot of angst in deciding which programming language to use, then I recommend this simple rule: Go ugly early. C++ will go out with you in a heartbeat.

> For my part, I want to encourage people to make their own languages


Oops. Right. "Worse/ugly is better" was meant as bitter sarcasm, though.


Haha wussy or not, damn did I get a lot of bullet proof shit done with M2. Now I have to work with the modern version of PL1. The more keywords the better the language, right?


Not sure how to interpret the "more keywords" comment. Words do have the advantage that they force people to put a space around them (usually), though, rather than jamming a bunch of operators together, "C/Unix Hoax" style.

  for(;P("\n"),R--;P("|"))for(e=C;e--;P("_"+(*u++/8)%2))P("| "+(*u/4)%2);
http://www.gnu.org/fun/jokes/unix-hoax.html


> Being used to memory-safe languages with AOT compilation like Turbo Pascal, Modula-2, Modula-3, Oberon, Delphi, and Ada, among many others, I also would have preferred that Java and .NET had offered the same type of toolchain from the beginning.

Do those languages offer consistent, defined behaviour on all platforms (e.g. for integer overflow)? Particularly in the context of multithreading: do they have a cross-platform memory model? That, to me, was the big advantage of the JVM.


They had their own set of issues, but nothing close to the anything-goes undefined behaviour across C compilers.

That was actually a reason why many of us embraced Java.

C compilers were between K&R and C89.

C++ compilers were even worse, with the standard still a work in progress.

These other languages suffered from not being adopted by OS vendors, and as you remember, back then we used to pay for compilers. So not being among the official OS compilers was a barrier to adoption.

Java, being C++-like, having more consistent behaviour, and being free (as in beer), won the hearts of many.


But those have almost nothing to do with using a VM. You can achieve them with your compiler by emitting overflow checks and fences where appropriate to match the defined model.


Eiffel is probably another "other" Pascal/Modula descendant.

Java beans uber class invariants :-(


Whenever I read posts like this I feel like two technologies are being portrayed as combatants in an epic war over the years, with engineers taking sides and firing shots at each other.

In reality I think technologies are just tools which have their appropriate and inappropriate uses. Both AOT and JIT have their strengths and weaknesses, and engineers make their choices based on those tradeoffs. With the linked article, it sounds like the startup time tradeoff is being used as the main motivation for making a change, probably because mobile users are more sensitive to this. That doesn't mean JIT fundamentally sucks - it's worked pretty well on the web with Javascript, for example.


This is a kind of wishy washy liberal attitude that is found all over. An attempt to say both sides of an argument have a point. Which is always true, otherwise there wouldn't be an argument. The part I take umbrage at is that it implies both sides have an equally valid argument. That is rarely true. Here JIT has been claiming superiority for twenty years. It has clearly failed. It's slower to start, slower to run, and uses more memory. That will very probably never change. It still has some advantages in some areas and some niches, but AOT has won. The war, if you really must use that term, is over. We see this in the resurgence of interest in C++ and in the move in mobile ecosystems to AOT.


You're wrong. It is a battle. Look at how quickly the swords come out when anyone brings up C# and Java or Python and Ruby or Backbone and Angular. Any discussion involving developers and the tech they use quickly devolves into all out verbal conflict. People LOVE to argue about this stuff because it gives them something to do besides actually doing anything productive.

It's all rather sickening, really.


Well, technologies don't battle (except in the marketplace, which often shows the success of both), but some engineers battle over them. Meanwhile, wiser engineers are calmly choosing their technologies based on their suitability to the task at hand, instead of starting arguments over an all-or-nothing mentality.


There's the rub, I think.

Historically the big managed platforms have offered some real advantages. JIT compilation has been a part of that, since it helps enable some of the reflective features that have in turn supported useful tools for increasing developer productivity, such as aspect-oriented programming, object-relational mapping, and mocking.

These gains may have come at the cost of raw execution times, but most of the core consumers of the technology measure their success in terms of dollars, and are in the privileged position of being able to offset the performance losses by upgrading their servers. For them this is a perfectly sensible decision. Developers are much more expensive than computers, so they were simply being pound wise and penny foolish.

In other environments the calculus has been different. Console games have always had constrained resources, a homogeneous target environment, and high performance demands. Correspondingly, AOT has always been how things are done in that domain.

Sometimes it's even changed over time. Bytecode made a lot of sense for mobile development a decade ago, back when there were relatively many competing hardware platforms and people interacted with their phones differently. Nowadays there are fewer, more homogeneous platforms, and people do more task switching, so the cost of AOT compilation has gone down and the potential benefits have gone up. Hence this shift in .NET, which perhaps seems even more sudden since .NET itself was primarily targeted at enterprise development until relatively recently.


Console games use interpreted (not even JITed) Lua quite a lot for the areas where iteration time is of most importance. As the parent said, wiser engineers just choose the best tech for the task.


Java: the expressiveness of an Algol subset with the performance of a Lisp. :-)

C#/.NET being a slightly improved Java and all.

I love interpreters with automatic memory management. I love assembler (sort of). It's painful when the wrong one gets put in the wrong place, though.


With your comment, I would think the correct course of action would be not commenting at all in the discussion. But, I guess you don't have anything 'productive' to do.


Being able to have an objective discussion of the appropriate uses of one technology over another is very productive. I, in no way, said anything contrary to that. But meaningful discussions are rare because they devolve so quickly.


How do you dare living your life doing something entertaining rather than doing something productive? To the gas chamber with you!


HAHA! I see what you did there! But you missed my point.


What you are saying is correct: a technology should be just a technology. But in the real world I don't think so; companies tend to sell their technology as some kind of magical power, and rivals want to show that other companies' technology is just crap.

What you are saying is mostly true in academia or in an ideal world, but not in reality.


See also CISC vs RISC.


I mean, it's almost a moot point. Each year the number of native apps I run goes down by a couple, and the number of web apps I use goes up by a few.

Native apps I use right now?

Chrome - Already native compiled

IntelliJ - Full JIT mode, but due to the fact I start it once and run it for a month, not a huge deal.

Command line unixy stuff - Already native compiled

(On gaming computer) Games - Already native compiled

I mean, desktop apps DO exist, just not that many that I use day to day... from what I have seen of the Windows app store, I can find 20 colors of calculator apps, but nothing that impressive.


For me it goes the opposite direction.

On mobile, only apps. The Web is for reading stuff.

On the desktop the same thing, except for all those forums and travel booking websites that don't offer me a choice.

Messaging, email, productivity software, work-related stuff: all native software.

Exception being when some customers do require web projects.


> all those forums and travel booking websites that don't offer me a choice.

So you would prefer to have the friction of an app store, or even just a platform-specific native executable, for every little service? And of course there's also the friction on the developer side, especially when submitting to app stores which act as gatekeepers.

Sure, native apps have better performance and often a better user experience all around, especially mobile apps. But your comment about forums and travel booking websites has got me thinking that we should really try to solve the performance and UX issues of the web platform rather than retreat to native apps.

Edit: Yes, I've taken both sides on the web versus native debate. I recently posted that JavaScript-based mobile web apps are second-class compared to AOT-compiled native mobile apps. I attribute my latest flip-flop to the fact that just last night I went through the tedious process of submitting a new app to the iOS App Store -- an app that could probably have been a web app. So now I want the web to win.


> So you would prefer to have the friction of an app store, or even just a platform-specific native executable

What friction?

    apt-get install $PACKAGE_NAME
Native apps will always be far superior to the web, because the web browser (at least, any that claims to care about security) is required to maintain a separation between the "page" and the browser that frames the page, or you trivially enable "phishing" and other UI-impersonation attacks. Native has far more flexibility.

Personally, I absolutely hate the jitter every time Javascript (or most other VMs) decides to GC for 50ms or more, and will go out of my way to avoid it.

> app stores which act as gatekeepers

So stop supporting those gatekeepers, both as a developer and as a user. As long as people keep giving them money and market share, they will continue to gain power as gatekeepers.

> try to solve the performance and UX issues of the web platform

That would require defining a new platform that isn't the web. The web is a document-based design, and while a lot of clever tricks have been found that let people pretend it's something else, all of the core components of the web (HTML, etc) are designed for documents. This is good - we need a way to freely exchange documents.

If you want a way to develop applications that can be distributed as easily as documents are on the web, it would probably be a good idea to create some VM-based portable application format that was actually free of stuff like HTML/CSS. An HTML replacement that worked more like a GUI toolkit would be much nicer for app development.


> because the web browser (at least, any that claims to care about security) is required to maintain a separation between the "page" and the browser that frames the page, or you trivially enable "phishing" and other UI-impersonation attacks. Native has far more flexibility.

I'm not going to deny that native apps have more power to interact with hardware resources—though I would question whether that's a good thing for most applications I use—but this isn't a good place to draw a distinction. Window managers have to maintain the same security guarantees (look at the trickiness behind the impersonation protection in Windows Vista's UAC for example), and that's all the browser is in this context: a compositing window manager.

> If you want a way to develop applications that can be distributed as easily as documents are on the web, it would probably be a good idea to create some VM-based portable application format that was actually free of stuff like HTML/CSS. An HTML replacement that worked more like a GUI toolkit would be much nicer for app development.

Besides the fact that this is unrealistic, I don't see how this would end up significantly better than the modern Web platform. JS is already faster than Objective-C in method dispatch. Use flexbox and the "document-oriented" layout goes away. Use simple graphics and your painting will be quick.

Honestly, speaking as someone who's measured this stuff a lot, the main thing that's slow in the Web that native doesn't have is style recalculation and layout. This is not insignificant, but I think any portable application delivery platform needs both of these. Native often gets by with hardcoded style and pixel layouts, but that obviously doesn't work on anything that has to run on arbitrarily many devices. Add in restyling and you pay a performance penalty, but redesigning the system from scratch isn't going to eliminate that penalty.

It's a common meme that "just reinvent the Web around applications and it'll be faster", but I have yet to see specific suggestions that would both preserve portability and aren't implementable in the existing Web platform. That's not to say browsers of today are optimally architected, of course.


> Window managers have to maintain the same security guarantees (look at the trickiness behind the impersonation protection in Windows Vista's UAC for example), and that's all the browser is in this context: a compositing window manager

Browsers supply a lot more than window management: menus, key-event handling, find, scrollbars, etc. You don't have the ability to hook into these with enough flexibility, which is why Gmail has 1998-era pagination (which is still better than Yahoo Mail's janktastic infinite scroll).

The web simply doesn't let you show lots of email in a table like Mail.app, or even Google's native gmail app.

> It's a common meme that "just reinvent the Web around applications and it'll be faster"

Faster isn't really the point. Gmail is definitely not slow. Where it (and all web apps) fall short is in their lame UI, attempting to recreate desktop metaphors under the limitations of the web. Examples are its mystery meat keyboard shortcuts, almost-but-not-really context menus, its fixed-size non-draggable "windows" (chat, compose, etc).


> What friction?

> apt-get install $PACKAGE_NAME

Desktop Linux is a lost cause, except for hackers' workstations. For any application targeting the general population, these days we should probably assume a mobile device, which means either the app stores or the web (or both, if you can manage that).


"it would probably be a good idea to create some VM-based portable application format that was actually free of stuff like HTML/CSS. An HTML replacement that worked more like a GUI toolkit would be much nicer for app development."

People are working on it. Have a look at flutter.io or Qt and QML


Actually this was one of the goals of XHTML, but then the HTML 5 movement killed it.

Had it worked out, we could have gotten something like XAML, instead of the current Frankenstein model filled with workarounds and still falling short of a native experience.


> What friction?

    apt-get install $PACKAGE_NAME
This friction.

Both on the side of the user who would need to go through an install process that is often an order of magnitude longer than loading a web page, and on the side of the developer who needs to go through the tedious process of developing and packaging their apps for every single native platform they want to support.


Everything in life worth having requires some effort.


I'd prefer this in many cases as well. I can use web versions of Evernote, Pocket, OneDrive, Email, but I would much rather use their platform specific native executables. It's a better, faster experience that integrates better with the device I'm using. I could watch Netflix in my web browser, but the Windows 10 app is faster & smoother. It even irritates me when the Kindle Android App drops me out to a web browser to browse the store & buy more books, instead of just letting me do it all from within the app.


> So you would prefer to have the friction of an app store, or even just a platform-specific native executable, for every little service?

Yes, every time I have the option to do so.


Why exactly do you prefer a native app whenever you have the option?


Execution speed and integration with OS features.


If I have an executable that is running on my system, I know that it will continue to run tomorrow. If I am using a web service, I don't know whether it will continue to exist. If it is running locally, I can manage my own backups, my own settings.


Huh? Do you really think that a closed app store is the only way of distributing applications?

Did you sleep through last 30 years of computing?


> Did you sleep through last 30 years of computing?

No, but mobile devices are overtaking PCs for a lot of computing activities, and on the popular mobile platforms, the only choices are a closed app store or the web.


Web Assembly is going to add an interesting hybrid: the actual native compilation will happen on the client but the app developer will have the ability to put significant AOT resources into optimizing the byte-code sent over the network.


I gather that you don't run many apps on a mobile device. The shift toward mobile apps seems to be what's motivating the current trend toward AOT compilation.


Running Chrome on Android right now. One of the 3 or 4 mobile apps I use.


The JVM was very slow in the beginning because it was just an interpreter and had no JIT compilation. There was also the issue with 80-bit floats on x86 making float computations slow, as the result of each operation needed to be stored back to memory to truncate it to 32/64 bits.

The speed difference since those problems were overcome has been relatively small. The biggest remaining performance problem is garbage collection, which you can't overcome just by implementing ahead-of-time compilation. Another issue is the lack of SIMD support in JIT VMs.


.NET has SIMD support as of version 4.5.2.


> While Just-in-Time's supporters will claim that it allows for better optimization or better portability, or they can show some contrived benchmarks where it has a slight edge, we haven't seen such techniques stand up in the real world.

Other than JavaScript, which you're executing right now to post this comment?

Web app usage is huge, and there isn't going to be any shift to native code there anytime soon. NaCl (not PNaCl) has been dead as a proposal for Web content for a long time. That's because JITs actually do allow for better portability.

Edit: OK, HN doesn't actually use JS, so mea culpa on that rhetorical flourish. Still, the point stands that you're using the Web a lot, and that's a testament to the success of JITs.


Just because they took Usenet away from us, native guys.

I have yet to use any online forum that is as comfortable to use.

> Web app usage is huge, and there isn't going to be any shift to native code there anytime soon.

I am betting on mobile to turn the tide.


I didn't even think about Usenet when responding to your comment about forums on the other thread. Still, users of modern web forums would probably view Usenet as archaic. The web platform provides a lot more room for experimentation, because a hundred thousand forums can run a hundred different forum packages with a thousand plugins. The user doesn't have to install an app, because any client-side code is automatically downloaded and run in a sandbox. Even without JavaScript, one can do a lot with HTML forms. If we returned to a standardized application-level protocol (in this case, something like NNTP) accessed by native apps, I think the result would be stagnation like what happened with Usenet. And how many apps for specific forums would a user be willing to install?


Yeah, Usenet isn't exactly the example I would use as a paragon of usability. Even in 1996, I remember it was hard to set up. It's actually a classic example of how the Web won: the frictionless nature of typing the URL of your forum over NNTP configuration dominated all other considerations, even if the Web forum ran a bit slower when you got there.


Nobody took usenet away, people just stopped using it because it became so terrible.


> Edit: OK, HN doesn't actually use JS, so mea culpa on that rhetorical flourish.

Hey now, I'm probably using JS to post this comment because apparently Firefox is made with it!


does HN even use that much JavaScript?


No, AFAICS there's no JS at all in either this page or the "Add Comment" page. One HTML file, one CSS file and a couple of tiny GIFs; that's it.

EDIT: oops, tell a lie, there's a tiny inline snippet to do voting.


That simplicity (both the lack of client-side dynamics and the lack of unnecessary images) renders nice and fast, too. Too many sites waste time with "shiny" javascript tricks, way too many images and other bloat.


The issue is not AOT vs JIT, but one of language semantics and levels of abstraction.

You could prove this point in a trivial way by JIT compiling C with LLVM. The binary would have an increased startup cost, but would otherwise run identically to an AOT-compiled C program. It might even run faster, since the LLVM JIT could run with the equivalent of -march=native to an AOT compiler. (In an AOT environment, you can only do this if you can guarantee that the binary will run only on the machine it was compiled on, or otherwise know the exact specs of the target machine.)

A much more interesting example is Terra[1], a language with C-like semantics embedded inside Lua. The Terra-Lua combination allows you to use Lua as a sort of C++ templates replacement, giving you much better metaprogramming than traditional C/C++. This allows you to do cool things like implement a version of matrix-multiply which is specialized to the target hardware, allowing you to match or almost match the performance of ATLAS and Intel MKL. Mind you: these are not your usual C/C++ programs, they are mostly written in manually hand-tuned (or auto-tuned) assembly.

What we're really looking at here is a difference in abstraction levels. It's not that the JVM JIT is incapable of getting C-like performance. Obviously it can, in certain cases. The issue is that the Java semantics prevent the JVM from getting C-like performance in a predictable and dependable way from idiomatic Java programs. This should not really be a surprise, given how far Java is from C.

What we really need to do is take a step back and reconsider our assumptions. Our modern programming languages lock us into certain assumptions about language design and semantics that can be blinding at times. Consider that even C doesn't get optimal performance in all cases---this is why Fortran still exists. What would it take to get to programming language with nice semantics and truly optimal performance? I don't have a closed form answer, but I'm confident that the solution will require revisiting some of the assumptions baked into not just our compiler infrastructures, but also our languages.

[1]: http://terralang.org/


Java was interpreted initially. The JIT compiler came later (around 2000, IIRC).


> Maybe after 20 years, it has finally become clear that the Just-in-Time supporters were wrong, and that Ahead-of-Time compilation is the better approach.

I think your argument is only correct for one subset of languages - C, C++, Haskell - languages like that. For languages like JavaScript, Ruby, Python, JIT compilation is proving to be massively more successful, and it's really easy to understand. As soon as you have features like eval, AOT compilation is never going to be effective.


Agree with you but I have to call something out about JS. The thing I most hate about JavaScript is how slow JIT is and I think we have wasted tons of industry resources trying to write better engines instead of pushing for lower level access to the client metal from the web server.


> The thing I most hate about JavaScript is how slow JIT is and I think we have wasted tons of industry resources trying to write better engines instead of pushing for lower level access to the client metal from the web server.

Like Web Assembly?


>> Even today, there's still a noticeable difference between JVM-based software and native software, although the massive increase in computing power has rendered the differences easier to overlook in many cases.

Measurable I'd agree with; noticeable, no, not really, unless you mean startup time. Otherwise the JVM is very fast once it's started; that isn't disputable.


Unmanaged native code definitely has an edge, even in I/O-heavy applications like databases.

http://www.scylladb.com/technology/cassandra-vs-scylla-bench...

Not to mention low-memory environments like mobile.


A lot of change in computing seems to be moving the location of a cache to be closer or further from the user in a cyclical manner. Maybe the same observation can be applied to source code?


> Despite many years of effort, research, and investment, we've yet to see Just-in-Time compilation rival Ahead-of-Time compilation in terms of performance.

I see this repeated ad nauseam, but it's false.

What other platforms are failing to rival is C/C++ in terms of performance, not AOT compilation per se. The reason C/C++ is still winning on performance has little to do with being compiled ahead of time and much more to do with manual memory management and the absence of a garbage collector. C++ compilers are also very mature, having themselves been the subject of research. Compare Java's performance to Go, or to Objective-C for that matter, and the situation changes completely.


You are talking about major advancements in compiler research. But is there any statistically significant performance difference between code compiled with the latest GCC and with OpenWatcom (which has gone largely unmaintained for decades)?


> they [desktop apps] really want to move to the new app platform anyway

1. Stop anthropomorphizing desktop apps, they don't like it.

2. Can't tell if the author means "desktop devs want to" (false IMO), "desktop devs ought to" (also false IMO) or "MS wants desktop devs to" (undeniably true).

3. It depresses me that just as MS is genuinely 'getting' open source, they're completely missing the appeal and value of an open platform. One step forward, two steps back.

EDIT: typo


I think the referent for "they" here is "Microsoft", not "desktop apps".


I don't see how MS is getting open source. What makes people say that? Is it the 14 years it took for them to open-source .NET Core, an event that is happening at a point in time when .NET is most irrelevant?

What really pisses me off, however, is that it's 2015 and Windows 10 and Outlook still don't support CardDAV and CalDAV. They'll get their hand forced by market pressure of course, and as usual they'll end up supporting the minimum they can get away with, which means they'll provide special hooks for Google and probably iCloud. All of this in the context of Edge, yet another buggy and proprietary incarnation of Internet Explorer, in a world in which Chromium and Firefox exist. And of course, we mustn't forget the patent racketeering they are doing at the expense of the open-source projects that compete with them.

But oh boy did they change. Their PR department is genius though.


> "MS wants desktop devs to" (undeniably true)

What makes you say this, the publishing of a third app model for Windows, messaging, some combination of the two, something else?


Both, plus the fact that significant new tech infrastructure projects like the Windows Runtime are (mostly) restricted to Store apps.

I'm not anti- app stores or declarative UI in general, and I think things like standardized sandboxing would be great for many desktop apps too. But a platform that places you at the whim of a single distribution path, and cripples portability to boot, isn't worth it.


It's frustrating that .NET Native compilation is so tied to the Roslyn toolchain that it can't be used for other languages like F#, even though they're using the same IR.


Yeah, this seems to me like a repudiation of the whole idea of .NET as a multi-language system. They may add support for F# later, but by then the damage will have been done, because the message is clear: "use anything other than C# or VB at your own peril".

Currently F# is completely shut out - it can't even be used to make libraries to be included in these kinds of apps! That makes it more dangerous career-wise to use F# going forward, lest your team get wedged into a position of having to rewrite one of your libraries because of your choice of using your "pet" language. Not good for the career.

The whole point of bytecode is to make things like this language independent. Big failure of vision to let this group do such a hack, IMO.


Yep, it's incredibly disappointing. The CLR model is rich enough that MS Office can be compiled entirely to MSIL and run fine. The F# folks had to drag Microsoft kicking and screaming into shipping generics - else the CLR and C# would have probably only gotten the lame Java-esque erasure model.
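For what it's worth, here's a minimal Java sketch of what that erasure model means in practice (nothing here is .NET-specific; it just shows why reified generics were worth fighting for):

```java
// In Java, generic type arguments are discarded at compile time
// ("erasure"), so every instantiation of a generic class shares a
// single runtime class. The CLR instead reifies generics at runtime,
// so List<string> and List<int> are genuinely distinct types.
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        // Same runtime class: the type parameters were erased.
        System.out.println(strings.getClass() == ints.getClass()); // true
    }
}
```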

Yet despite being shown up, utterly, entirely, they just sorta stumble forward. They realise there's no serious competitor in the language space - Java's remained terrible. There's no threat to C#, and it's a lot easier to maintain a compiler than a platform. The runtime hasn't seen any change since .NET 2 introduced generics (with driving work from MSR). Since then, they've tweaked perf a bit, but haven't extended the model at all.

Until there's a real threat, they continue down this easier path. Swift shows up and demonstrates tuples and matching won't kill average devs, ok C# will probably add those. And honestly they can probably get fairly far just reacting here and there.

It is sad though to see the grand idea of the CLR fall this far.


This is most likely just an artifact of F# producing unusual IL that's harder to handle. In my experience, it's less 'we only support C#' and more 'compilers other than C#/VB.net produce very strange IL that is hard to handle well' - which is true. Given the small # of users that rely on those weird compilers, it's hard to justify supporting them first, especially if they aren't performance-sensitive.


I emailed the .NET Native team asking them about this. They confirmed that they reverse out IL to higher-level constructs. So they aren't really working at the IL level, they're working on C# represented as IL.

This isn't entirely uncommon. A lot of code that deals with Expression<T> does so very poorly and won't work on many valid trees because they weren't able to get C# to emit certain expressions.



> At this time, it is not available for desktop apps, although that is certainly something that could come with a future update. It’s not surprising though, since they really want to move to the new app platform anyway.

Thanks, but no thanks. I'm not investing time and effort into a toolchain that ultimately targets only a single platform, namely Windows 10, because they don't want me to be able to just xcopy a .exe without touching their store.


How do I verify the app I downloaded was made by the developer? I suppose on phones there's no way to do that now anyway or at least I don't know how on iOS.

My point being if ms is compiling then how do I sign the executable as proof it's from me?


Good question. I assume you upload a cert? My question: what happens when I side load on my phone with a different compiler but ship it to MS who uses a different version and the app runs differently?


Probably MS will re-sign it with their certificate, like what Apple did in their app store.


You know, this is all very well, but the Windows 10 app market is pretty much non-existent. You know who really uses .NET on Windows? LOB applications, front ends to complex systems. The kind of thing you don't see in the app store.

And that market is better served with, you know, a compiler, that runs on the command line, on a machine under a developer's control.


The Windows Store was traditionally just scams with blatant trademark infringement. This has severely damaged their rep. A quick search now shows a lot of it is cleared up. Unfortunately, that just leaves the shovelware junk.

Here is an interesting point: Right now, the #3 top free app is "Freeflix/Free Movies Unlimited"[1] and #4 is a "free Mp3 downloader".

The publisher for the free movies is "Wamba Dev" which results in no related hits on Google outside of Windows Store. Contact is a hotmail account, and the privacy policy is on "dopeware.com".

So Microsoft isn't even able to get real publishers in top positions on its own Store. Pretty damn sad.

Also the Metro runtime is diseased and broken. Opening the Store app right now to do these searches took about 15 seconds. On a decently high-end ThinkPad. It's embarrassingly bad. Even the Metro calc takes 2-3 seconds to open, ffs.

1: https://www.microsoft.com/en-nz/store/apps/freeflix-free-mov...


One consequence of these 'app store compiled' models is that the compilation units you upload will have to be licensed to the app store in such a way that the derived native apps are legitimately licensed. That probably rules out, for example, even including LGPL libraries in your app, since the resulting application will statically compile that library code in with .NET components.


LGPL doesn't forbid static linking, you'll just have to provide linkable object files (or source code) separately to enable re-linking. (https://www.gnu.org/copyleft/lesser.html#section4)


I'd love to have the JVM equivalent of this for my company's main desktop product, written in Java. Due to being Java (and having an embedded JRE) the app has slow startup, due to the JVM having to be started up. The installer is 90 MB on OS X, 55 MB on Windows, much of which is the embedded JRE. So our fortnightly updates are much bigger than they need to be.

We tried an AOT compiler called JET. It was fine, albeit expensive. But it has an extremely long build time, not great for our continuous build system. Its supported Java version also lags considerably behind the current Oracle Java release.


Oracle is planning to finally add it to the standard JDK around Java 10 timeframe, but it might be a premium feature similar to the mission control licensing.

https://www.youtube.com/watch?v=Xybzyv8qbOc&list=PLX8CzqL3Ar...


Check out Avian (http://oss.readytalk.com/avian/). You can use it with the OpenJDK class library. You can even run your application plus the class library through ProGuard, then AOT-compile the whole thing to generate a boot image for fast startup.


As I understand it, a big part of the slow load time for Java apps is sucking everything on the classpath into RAM, even though most apps will use a fraction of the classes that are loaded.

Java 9's module system should make that a lot less of a problem by giving the JVM guidance on what it should load and skip.

I am not a Java expert, though, so I may have misunderstood what's going on.


The JVM class loader by default reads a class on first reference to it in the code it's executing. Never refer to it, never gets loaded.

Startup on the JVM is more affected by the JIT warming up, and the heap getting into a "stable" configuration.
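That lazy behavior is easy to demonstrate (class and method names here are made up for illustration):

```java
// Demonstrates on-demand class loading: Heavy's static initializer
// runs only when the class is first referenced, not at JVM startup.
public class LazyLoadDemo {
    static boolean heavyLoaded = false;

    static class Heavy {
        static { heavyLoaded = true; } // runs on first reference to Heavy
        static int answer() { return 42; }
    }

    public static boolean isHeavyLoaded() { return heavyLoaded; }

    // First call triggers loading + initialization of Heavy.
    public static int touchHeavy() { return Heavy.answer(); }

    public static void main(String[] args) {
        System.out.println("before: " + isHeavyLoaded()); // false
        touchHeavy();
        System.out.println("after: " + isHeavyLoaded());  // true
    }
}
```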


Thanks for the correction.


Every poor-performance problem I've ever encountered has been due to poor design choices, a poor choice of algorithm or, most often, badly implemented algorithms.

Once past the JIT compilation delay, the execution speed is not significantly different between AOT and JIT code on mature platforms such as .NET and Java.

There are issues in CPU and RAM resource limited systems such as mobiles and embedded systems. And also web servers - where small improvements in scalability can dramatically affect business viability and profitability.


IIRC the .NET Native literature says the main benefits are reducing RAM by a fair amount. Which makes sense - no runtime to load type info and no JIT to run. AFAIK, they aren't claiming the compiled code is _that_ much better than what the JIT does.

Of course it's probable that reducing RAM footprint significantly might improve runtimes if it makes better use of cache.


I just wrote 18,000 programming language statements in Visual Basic .NET, in files with 80,000 lines including comments, blank lines, etc., compiled much of it with a command line script with line

    programPath = dq || SystemRoot || ,
    '\Microsoft.NET\Framework\v4.0.30319\vbc.exe' || dq
and give the rest to the Microsoft Internet Information Server (IIS) to compile and run as Web pages, and it all works, but, still, I can't make any sense out of the OP at all.

E.g., my command line compiles result in a file with extension EXE. What's in that? Sure, I have the Microsoft .NET Framework 4.0 installed, and the VBC.EXE invoked as above is, of course, one of the files in the installation of that .NET Framework.

When from a command line I just run that script that runs that VBC.EXE, I get nice output showing the command line options for the program -- simple, direct, explicit, terrific.

So, how do I connect what that .NET Framework 4.0 VBC.EXE does with the discussion in the OP?

BTW: Why use Visual Basic .NET instead of C#? C# borrows too much of the old C syntax, and Visual Basic has syntax more traditional and more like that of Pascal, PL/I, Fortran, etc., and I find that more traditional syntax easier to write and read and less error prone. But, likely and apparently, the difference between C# and Visual Basic .NET is mostly just syntactic sugar anyway.

By the way, where does the famous CLR -- common language runtime -- enter this picture?

Thanks.


What?


What part of my question do you not understand?


What is the question?


The question was:

> So, how do I connect what that .NET Framework 4.0 VBC.EXE does with the discussion in the OP?


It doesn't at the moment. .NET Native only applies to Windows Store (metro) apps.

But in theory, it would work like a super-powered ngen.exe does (look up ngen if you don't know). But instead of still requiring a runtime installed, .NET Native will also link in the required bits of the runtime statically (the GC for instance).

This model is old and has been discussed quite a bit. Mono has supported it for a long time. MS has been ideologically opposed to it for over a decade and only just recently reversed their opinion, probably due to limited resources on low-end devices that MS has now realised are important.


The VB compiler compiles your code, not into native machine code, but into an intermediate representation targeting the virtual machine defined by the Common Language Runtime. This intermediate representation is known as CIL (Common Intermediate Language), and it is the bytecode that the .Net CLR then JITs at runtime.

This article is about compiling .Net code direct to machine code and skipping the whole CIL/CLR/JIT chain.


Thanks for the progress.

I've used virtual machines for decades, e.g., IBM's VM with the interactive CMS (Conversational Monitor System). In what sense is the Microsoft CIL/CLR a virtual machine? E.g., does it have any security properties, e.g., restrictions via some attribute control lists?

The CLI is a virtual machine only in the sense that the byte codes are not actual machine instructions for any real processor core but are just some intermediate code instructions for an imaginary machine that does not really exist and, in that sense, is virtual?

So, on Windows, say 7, 8.1, 10, Windows Server of some year, etc., suppose I take a file A.VB I've typed in with Visual Basic .NET source code, give that file A.VB to the .NET program VBC.EXE, which is in the collection of .NET files, and get out from running VBC.EXE file A.EXE. Now A.EXE has CIL byte code?

Why byte code? Or, in what sense can each code be only one byte long?

Does this byte code mean that it would be the same for running on 32 bit Intel x86 processors, 64 bit Intel processors, ARM processors, etc.?

The CLR part is mostly run time, that is, code my program A.VB and A.EXE needs to run, that is, in old terms, a library to be linked in via a linkage editor? Then with .NET on Windows, the JIT work also plays the role of a linkage editor?

One old trick was, don't even bring in the code to be linked and, really, don't even link to that code and, instead, as the program runs and such code gets called, that is, an attempt is made to use it, in case that happens (which maybe often it won't), there is a software fault, interrupt, or some such and the JIT code, still in Windows and still available to run, only then, after the interrupt, actually gets and links the code that is needed but so far was missing?

Another old trick was, really, never bring that library code into the user's program; do link to it, but have the code be part of the user's address space, shared with all the user address spaces or even in another security ring -- maybe Microsoft is also doing some such trick? Did I just outline the Windows Global Assembly Cache, e.g., mostly from DLLs instead of EXEs?

Okay, I can see the point, as in the OP, of the effort for native. Indeed, I've suspected that on Windows frequently used programs are kept in a cache somewhere and, likely, ready to read into an address space or part of a process as fast as possible, maybe even with the usual address relocation work already done. So, this looks like an under the covers version of native for Windows?

Ah, I just looked up Microsoft's ngen, and maybe what I just described was ngen?

I've seen no very clear documentation of these issues. I've been guessing at what happens, and that's not so good.

Thanks for the tutorial.


Some of your questions are because there is a lot of overloaded terminology. Virtual Machine, for example, has two meanings:

1) A system virtual machine provides a complete system platform which supports the execution of a complete operating system (OS).[1] These usually emulate an existing architecture, and are built with the purpose of either providing a platform to run programs where the real hardware is not available for use (for example, executing on otherwise obsolete platforms), or of having multiple instances of virtual machines leading to more efficient use of computing resources, both in terms of energy consumption and cost effectiveness (known as hardware virtualization, the key to a cloud computing environment), or both.

2) A process virtual machine (also, language virtual machine) is designed to run a single program, which means that it supports a single process. Such virtual machines are usually closely suited to one or more programming languages and built with the purpose of providing program portability and flexibility (amongst other things). An essential characteristic of a virtual machine is that the software running inside is limited to the resources and abstractions provided by the virtual machine—it cannot break out of its virtual environment.

The CLI is the second type of virtual machine and something like VirtualBox, VMware, and IBM mainframe VM's are the first kind.

> Why byte code? Or, in what sense can each code be only one byte long?

Again, it's not "byte code" it's "bytecode" which is defined here: https://en.wikipedia.org/wiki/Bytecode

"Bytecode, also known as p-code (portable code), is a form of instruction set designed for efficient execution by a software interpreter"


Thanks.

Okay, I know some about the details of IBM's VM and a little about VMware.

I have wondered: The way VM has worked on the IBM mainframe instruction set has been partly due to an accident of the design of that instruction set and, then, some extensions for VM. Really, that way, it's tough for the running program to know whether it is running in a virtual machine or not. So, in particular, the program running on VM can be using privileged instructions and not know that it doesn't really have access to the real hardware.

So I've wondered if the Intel x86 instruction set also has this accident and, thus, can run operating systems that use privileged instructions but not know they are running on a VM. Or maybe the ability to run on a VM was from some extensions to the Intel instruction set. Do you know?

Does the Microsoft CLI/CLR software have some security features beyond just any native program in, say, Fortran or assembler running in an address space, process, or whatever Microsoft calls where a program runs?

I'm beginning to get the prerequisites for reading the OP, that is, understanding what the long standard alternative to native code has been.

Heck, I can understand native code -- at one time I entered some simple programs via the computer console sense switches. And I printed out the object listing from a Fortran compiler and went over the machine language instructions one by one. I discovered that even for some simple code and a good Fortran compiler, assembler could be faster by a factor of several.

Just read the Wikipedia article. Nice and easy.

Thanks.


> So I've wondered if the Intel x86 instruction set also has this accident and, thus, can run operating systems that use privileged instructions but not know they are running on a VM. Or maybe the ability to run on a VM was from some extensions to the Intel instruction set. Do you know?

No, originally x86 was very difficult to emulate in a virtual machine. User-mode can be run directly on the processor but, for kernel-level code, binary translation was necessary to dynamically re-write the code containing privileged instructions. However, these days nearly all modern x86 processors now have extra instructions specifically for virtualization.

> Does the Microsoft CLI/CLR software have some security features beyond just any native program

There are a bunch of sandboxing features available in the CLR but most applications run with full trust and can do anything a native program can do.


Super! Thanks, I needed that!



