Oracle went the wrong direction with Unsafe. Unsafe memory access is a feature of .NET and the CLR, not a hidden API where you get reprimanded by the community for using it. Not surprisingly, C# has succeeded where Java has failed in being cross-platform, used for instance in game development with the Unity engine or multi-platform mobile development via Xamarin. And yet, no one in the .NET community is complaining that their code is segfaulting all the time.
The JDK API developers should not be granted a special API just because they are deemed more trustworthy than the rest of us "average" developers. Unsafe should be made public, not removed and replaced.
Unity "proving" C#'s success in games is about as reasonable as saying that Android games in Java prove Java's suitability for games. It's an accident of history that those languages were tied to particular projects that were very successful. Unity could have used Java for scripting (in fact, it does now for Android) and Android could have been delivered in C#. Claiming that these products could only have been delivered in their target languages is just ridiculous.
Honestly, this is a poor argument. While both Tim Sweeney and John Carmack are legends of the gaming industry, their opinions should still be taken with a pinch of salt.
Here's why: both Unreal Engine 3 and Unreal Engine 4 are giant OOP behemoths that power sluggish games and sluggish editors. They consume a ridiculous amount of memory, especially in large projects. Even in the slides you linked, they admitted that there is no obvious way to optimize the engine or scale it over multiple cores, because its performance suffers from death by a thousand cuts. Recently Epic spent a huge amount of time trying to optimize their garbage collector; arguably, without a GC those man-hours could have been better spent elsewhere. And then there's the new network replication blueprint, which is designed to overcome OOP performance issues. And Tim Sweeney swears by OOP. All of this while the industry is moving more and more towards Data Oriented design. Regarding John's tweet, an interesting question is whether he could have built the successful Doom games with BASIC. Most likely not.
Well, I would gladly like to know which other game engines, built with beautiful Data Oriented design, free of OOP cruft and GC as you put it, are able to beat Unreal at this.
"A Star Wars UE4 Real-Time Ray Tracing Cinematic Demo"
After all, it should be relatively easy to challenge such a giant, bloated, sluggish behemoth of an OOP engine.
As for BASIC, probably, if John had made use of a proper BASIC compiler for MS-DOS like PowerBasic, née TurboBasic, with the right set of compiler flags.
Naturally it would also have had its share of inline Assembly, just like Doom, as every owner of the first edition of "Zen of Assembly Language" knows that C and C++ compilers for MS-DOS weren't the speed demons developers nowadays take them for.
You should spend some time learning how AMOS used to be loved on the Amiga Demoscene back in the day.
Not surprised to see such things created with Unreal, because they have loads of money and manpower to throw at problems. But then again, the two examples you show do not really represent anything fairly: the ray-tracing demo uses super expensive Nvidia hardware that not many people can touch, and the second demo requires specific mocap hardware and has only one character active in the whole scene. Data Oriented design is still relatively new; give it some time. But I would say that if Unreal 4 had been written with Data Oriented design in the first place, it would have been much better, at least performance-wise. Even a mixture of OOP and DOD would have worked; no one says you cannot do that. It was a lost opportunity. Working with it right now is a pain, really. After all, if the engine's performance is already so great, why did they need to spend so much time fixing performance problems in their latest releases?
If you want to see what Data Oriented design can do with regards to games, you can check out Unity's latest showing of their beta ECS and Job systems, and their custom C# compiler called Burst. Essentially they sidestep the garbage collector and compile away many of C#'s safety features, like bounds checks, under specific conditions, while processing entities in straight array-iteration fashion. All in the name of performance, undoing the damage caused in the past. I guess you know that newer generations of CPUs require effective use of the cache, as well as multithreading, to extract their full power. At least Unity Technologies should be applauded for trying to do that, and for trying to bring it to the masses.
I'm pretty sure that many game engines do use Data Oriented design to some extent, especially those that used to run on the PS3. Just because they are not available off-the-shelf, with no source code to look at, doesn't mean they don't exist. Off the top of my head there's Insomniac Games, an arguably successful studio which is also a strong advocate of DOD.
All this is to say that there are times when even the legends hold questionable beliefs, which we should take with a grain of salt.
Yes, I have seen all GDC and Unite 2018 presentations about HPC# and what is currently available.
You should find some submissions of my own there.
HPC#'s job is only to replace those parts currently written in C++; everything else is written in plain C#, with GC, as Unity always was. In fact they have migrated to .NET 4.6, with 4.7 on the roadmap.
Just like Unreal uses C++ with a GC, though naturally on performance-critical paths they resort to other techniques.
Which isn't much different from the old days: avoid malloc() and new on the performance-critical paths.
Just because there is a GC available doesn't mean one has to use it for every single allocation.
Unity's ECS is implemented in an OOP language, C#, and anyone who has spent some time in the CS literature about component systems knows that they are just another variation on how to do OOP.
One of the best sources uses Component Pascal, Java and C++ (COM) to describe them (1997), with the 2nd edition (2002) updated to include C# as well.
Indeed, you can always say to avoid the GC and avoid allocations on the hot path, but in practice people write sloppy code all the time and then get tripped up by the GC; I'm guilty of it myself on numerous occasions in the past. Maybe, just maybe, without the GC in the first place, people would be forced to write better code? The same goes for OOP. In particular, inheritance is severely abused in too many code bases, which then causes all sorts of problems. IDK, maybe because I hold an extreme view about those things, that they are better off being left out completely? ¯\_(ツ)_/¯
It's not a poor argument at all. The claim was: "And the notion that a garbage collector should ever be part of a game engine (or anywhere near it) is frankly baffling to me.
.. unless you're making some turn based thing." But UE4 uses GC. One of the premium game engines uses the very thing that the post claims should never be used in a game engine. I have some sympathy for the idea that GC should at least be tightly controlled and that no-GC has its place. But the claim that GC should never be part of a game engine is simply contradicted by reality.
Well, I wasn't arguing with that claim, but rather with pjmpl's references to what Tim and John said, which I believe should be taken with a grain of salt.
With regards to GC, do check out Unity3D's new ECS/Job system/Burst compiler. Essentially it sidesteps the garbage collector and C#'s safety checks, while trying to work with how the new generation of CPUs is designed to perform.
They aren't sidestepping the garbage collector or the C# safety checks for the whole engine, because HPC# is only used in a very focused area; everything else is plain old C# as always.
I can also state that in the old days C wasn't considered worthwhile as a language for writing game engines, because proper game engines were written mostly in Assembly, with C, Pascal or Basic as their scripting layer, if at all.
I'm a Unity developer. I'm well aware of what the ECS/Jobs/Burst architecture brings. I still disagree with your assertion. Note the fact that this data oriented pivot occurs NOW with Unity. It's managed to do pretty well up to this point despite NOT having that feature. Using GC didn't stop it becoming wildly successful.
Nowhere did I say that the GC hindered Unity's boom in any way. In fact I do agree that the GC, together with C#, allowed Unity to be wildly successful. But it did so by allowing too many people to write sloppy code that has performance problems, code that would leak memory all over the place without the GC. It is sort of eroding the art of programming.
I'm not sure if you were making a point about Java... but: you can effectively run Java without GC. If collection pauses are an issue, you can preallocate everything and ensure no significant GC at runtime.
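A minimal sketch of the preallocation idea, assuming a hypothetical `BulletPool` class (the names are mine, not from any real engine): all objects are allocated once up front, and the steady-state loop only recycles them, so the collector has nothing new to trace.

```java
// Sketch: preallocate a fixed pool of reusable objects so the steady-state
// game loop performs no allocation and therefore triggers no significant GC.
import java.util.ArrayDeque;

public class BulletPool {
    // A mutable "bullet" reused across frames instead of allocated per shot.
    static final class Bullet {
        double x, y, vx, vy;
        void reset(double x, double y, double vx, double vy) {
            this.x = x; this.y = y; this.vx = vx; this.vy = vy;
        }
    }

    private final ArrayDeque<Bullet> free = new ArrayDeque<>();

    BulletPool(int capacity) {
        // All allocation happens here, at startup.
        for (int i = 0; i < capacity; i++) free.push(new Bullet());
    }

    Bullet acquire(double x, double y, double vx, double vy) {
        Bullet b = free.pop();   // no allocation on the hot path
        b.reset(x, y, vx, vy);
        return b;
    }

    void release(Bullet b) { free.push(b); }

    public static void main(String[] args) {
        BulletPool pool = new BulletPool(1024);
        Bullet b = pool.acquire(0, 0, 1, 1);
        b.x += b.vx;             // simulate one frame of movement
        pool.release(b);
        System.out.println(b.x); // 1.0
    }
}
```

The trade-off is fixed capacity and manual acquire/release discipline, which is exactly the "tightly controlled" style discussed above.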
Microsoft tried to make this happen with .NET compact framework on Windows Phone for years and failed. So, no, Android could not have been delivered in C#.
Also, the .NET Compact Framework is not the same runtime as the .NET Framework, .NET MDIL (WP 8.x) or .NET Native (WP 10/UWP).
Actually, thanks to being AOT compiled to native code, budget WP 8.x and WP 10 devices used to be faster than their Android counterparts in the same price range.
Java has surely not failed in being cross platform, not if we are comparing it with .NET.
.NET Core still doesn't run in many environments where I can gladly put Java. The computing world is much bigger than just macOS, Windows and GNU/Linux.
And then there is the issue that even Swing/JavaFX/SWT are better supported than whatever .NET Core UI framework one can think of.
Xamarin is nice and way better than React Native, Cordova or Qt, and yet you aren't going to find much love for it on online forums.
In game development, C# got lucky with Unity, because finally there was a company firm enough to hold on to managed languages in game development no matter what, and they managed to turn out one of the best engines out there for indies.
Had Unity decided to go the way of Unreal with UnrealScript, the story would be quite different.
I still remember the Java Gaming initiative at JavaOne, Java3D and the JOGL project, but like many other Sun initiatives, they let it die after a couple of years. Maybe here the story would have been different if they had actually been committed to it.
Thing is, UNIX companies never understood desktop and gaming cultures, so here .NET notably has an advantage by having been part of a desktop OS SDK since its inception, from a company that actually understands desktop and gaming cultures.
Stack allocation is the one 'killer feature' C# has had over Java since forever. Really critical for games, e.g. 3D math. Still no 'value objects' in Java; I don't know when or if they're planned nowadays, I stopped following that long ago.
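For context, the common Java workaround for the missing value types is to flatten vectors into primitive arrays instead of allocating one object per vector. A hedged sketch, with a hypothetical `Vec3Array` class of my own invention:

```java
// Sketch: without value types, Java 3D-math code often stores vectors in a
// flat primitive array: no per-vector object headers, no GC pressure,
// contiguous cache-friendly layout.
public class Vec3Array {
    // Layout: [x0, y0, z0, x1, y1, z1, ...]
    final float[] data;

    Vec3Array(int count) { data = new float[count * 3]; }

    void set(int i, float x, float y, float z) {
        data[i * 3] = x; data[i * 3 + 1] = y; data[i * 3 + 2] = z;
    }

    // Add vector j into vector i in place: no temporary objects allocated.
    void addInPlace(int i, int j) {
        data[i * 3]     += data[j * 3];
        data[i * 3 + 1] += data[j * 3 + 1];
        data[i * 3 + 2] += data[j * 3 + 2];
    }

    public static void main(String[] args) {
        Vec3Array v = new Vec3Array(2);
        v.set(0, 1f, 2f, 3f);
        v.set(1, 10f, 20f, 30f);
        v.addInPlace(0, 1);
        System.out.println(v.data[0]); // 11.0
    }
}
```

It works, but it trades away the readability that a C# `struct Vec3` keeps for free, which is the point of the complaint above.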
> And then there is the issue that even Swing/JavaFX/SWT are better supported that whatever .NET Core UI framework one can think of.
Try to use GUI designers and commercial component libraries for Swing/JavaFX/SWT, and then try to use GUI designers for XNA/FNA, Qt and Gtk+ while using only C# code.
I am speaking about production quality GUI tooling with commercial support, not a legacy framework abandoned by Microsoft due to their internal politics, or some bindings written over a couple of weekends that still rely mostly on C++.
>Not surprisingly, c# has succeeded where Java has failed in being cross platform, used for instance in game development with the unity engine
C# succeeded in game development? That's its claim to fame? Being an optional scripting language used by Unity? The real work is done by the graphics engine that's written in C++.
>or multi platform mobile development via xamarin
Last time I checked, the percentage of apps on the App Store and Google Play that were written with Xamarin tools was so low that it barely even registered.
No and no. I'm no Unity expert, but I think it is based on Mono, an open-source implementation of the CLR and the C# compiler. So it doesn't "steal" the syntax; it uses a compiler made for the language. To put it simply, Mono was (cf. infra) the OpenJDK of the .NET world.
Xamarin is a startup founded and run by people working on Mono.
In recent years, Microsoft has open-sourced its new C# compiler (Roslyn) and bought Xamarin, which has blurred the lines between the implementations.
Kind of. Mono was never up to speed with what the .NET Runtime is capable of, and before Xamarin was acquired by Microsoft, Unity didn't want to renew their licenses, so C# on Unity was stuck in the .NET 3.5 world.
They also started their own native code compiler for C#, called IL2CPP because it compiles MSIL via a C++ compiler.
Recently they migrated to .NET 4.6, although the latest version is .NET 4.7.2, due to some Roslyn integration issues.
Additionally they started another project to compile a C# subset (HPC#), with a new compiler named Burst, as a means to start porting some of their C++ modules into C#.
Right, and they are just one example of unsafe APIs moved into the public JDK.
Oracle took a fairly measured approach here: they saw what was used and made public APIs that are either clones of the private ones or that better capture common usage.
That it took so long isn't great, but they are certainly going the right way.
I think OP means how, in C#, you have the unsafe keyword, which restricts unsafe usage to particular code blocks and keeps any unsafe operation contained.
> Please don't insinuate that someone hasn't read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."
Check out Project Panama. It's trying to bridge the gap between Java and C# when it comes to native interop, which is the main use case for 'unsafe' features.
> I’ll leave you with a question to ponder, would Java have been as successful as it has been had sun.misc.Unsafe not been hidden in the library code but still accessible?
Yes, and it would have been as successful if Unsafe had never been added. The language choice rarely hinged on this; it was just used to improve performance and could have been done natively if it had to be.
Hardly. I spent a lot of time in spaces where Java and other languages would be considered. If there was ever any hint of performance being a bottleneck, we chose C++ over it. Even more so because Unsafe doesn't exist on some variants (Android).
What Unsafe gets you is good cache usage. JNI and other approaches negate that benefit (and also disable inlining, among other things).
We just have different experiences, as in a lot of shops I was in, performance was not the primary factor in language choice, but it could be its primary detractor of course if poor. Rarely did we ever find the bottleneck of the language the issue, and we dropped into JNI as needed (or more often reworked away from our original assumptions). I would say the majority of Java's success came from these kinds of shops, and the ones where performance made so much of a difference you would abandon the language altogether, I still doubt unsafe saved that abandonment from happening.
This is not to say Unsafe was not appreciated by lots of places, esp. wrt high-performance libs, embedded dev, HFT shops, etc. But I don't believe that Unsafe really affected the success/popularity of the language; heck, it's not even the most important part of the performance story, not to mention all the other non-performance-related aspects that really did affect its success.
Not denying that Java is/was popular, just that its popularity was curbed by the lack of value types, direct memory allocation (Unsafe) and a few other performance-sensitive features.
The reason you see Unity dominate the game space is that its native interop and cache usage story is much clearer.
FWIW, JNI is not a silver bullet for performance problems by any means. The actual overhead of the JNI call either into or out of the VM is non-trivial and usually evicts whatever cache line you were touching. For things that are performance critical, you'd usually see a drop of 10-50x between mixed JNI and code that was more cache aware but did the same work (in C++/C/C#).
The irony here is that although C# has had direct memory and structs available for ages, Unity never took advantage of them until just now: the in-development ECS / C# Jobs / Burst compiler architecture utilises packed structs and manual memory allocation heavily. Why I say this is ironic is that Unity/C# have succeeded despite not using these features that you claim curb Java's popularity in the same space.
I tend to agree with you. Unsafe was great and useful, but Java was rarely used for the kind of things that would have been impossible without it. I think the real impact on its share was minimal, tbh.
I'm glad that it is moving into a fully supported state, though.
Java was rarely used for those kinds of things before, but now it is used widely for them.
There's a new crop of Java databases, ranging from Apache Cassandra to in-memory data grids such as Apache Ignite or Apache Geode. And they benefit tremendously from access to Unsafe.
Yes, it's probably 1% of code ever written in Java by line count, but with the potential of being 20-50% by CPU and RAM usage. It is a sine qua non, a vital piece of infrastructure.
I don't think so. Do you mean by something like JNI? The use-cases of Unsafe that I see most often would not have been tractable using JNI. They rely on intrinsification.
> it was just used to improve performance
But to the point where Java is not suitable at all without that performance; so it's not a nice-to-have for some people, it's essential.
The use cases I saw were memory allocation, though I admit I'm not too familiar with its common uses. The other uses I saw were just perf improvements over existing synchronization/reflection approaches.
While I agree that performance is often essential, I don't think the gains from unsafe measurably affected the language's success. Even without, the performance was still a draw compared to many other high level options. If anything, I do think it kept a bunch of libs from dipping into JNI.
I see many non-trivial Java projects using Unsafe. At this point, why not make Unsafe a feature and let programmers use memory outside the purview of the garbage collector?
- You cannot get the native address of arrays in order to pass them to native code.
- Arrays are subject to being moved by the GC, so even if you could get the address they could move.
- Arrays are limited to around 16 GB, and that's if you're packing data into longs.
- Accessing an array involves a bounds check, which may float out of a loop, but may well not.
- Arrays cannot be allocated with memory that is anything except mapped private.
I use Unsafe to put memory into a location where I can get a stable native address in order to make native calls using it. Using an array would not allow that.
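A hedged sketch of that pattern (the `OffHeap` class name is mine; `sun.misc.Unsafe` is still reachable via reflection on current JDKs through the `jdk.unsupported` module, though it may emit warnings):

```java
// Sketch: sun.misc.Unsafe gives a stable off-heap address that the GC will
// never move, unlike a Java array -- suitable for handing to native code.
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class OffHeap {
    static Unsafe unsafe() throws ReflectiveOperationException {
        // The singleton is private; reflection is the usual back door.
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        return (Unsafe) f.get(null);
    }

    public static void main(String[] args) throws Exception {
        Unsafe u = unsafe();
        long addr = u.allocateMemory(8); // off-heap: stable address, invisible to GC
        try {
            u.putLong(addr, 42L);        // this address could be passed to native code
            System.out.println(u.getLong(addr)); // 42
        } finally {
            u.freeMemory(addr);          // manual lifetime management is the price
        }
    }
}
```

The `try/finally` is the catch: nothing reclaims that memory for you, which is exactly the trade-off the article warns about.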
It's not a JIT problem, it's a VM problem. Graal uses the same VM with a different (or additional) JIT and inherits the same (intentional) 2 GB limitations.
While J9 can give you larger arrays, AFAIK you're not guaranteed to get a single, contiguous memory region. You won't notice in Java, but JNI criticals may return copies.
I'm not sure what you're getting at. Why would you want to? The issue was having a big chunk of native memory that you can pack data into on the java side and read contiguously without barriers on the native side and that's what a native allocated byte buffer would get you. Or am I missing something? Not sure what varhandles adds to the situation? Sorry if I'm being dense :o)
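For reference, a minimal sketch of the direct-buffer approach being discussed (`PackedNativeBuffer` is a name I made up): the buffer's backing memory is allocated outside the Java heap, so it's contiguous, never moved by the GC, and reachable from native code via JNI's `GetDirectBufferAddress`.

```java
// Sketch: a direct ByteBuffer is a "big chunk of native memory you can pack
// data into on the Java side" and read contiguously on the native side.
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PackedNativeBuffer {
    public static void main(String[] args) {
        // Off-heap allocation; nativeOrder() avoids byte-swapping on access.
        ByteBuffer buf = ByteBuffer.allocateDirect(1024)
                                   .order(ByteOrder.nativeOrder());
        buf.putLong(0, 123L);   // pack fields at fixed offsets, Java side
        buf.putDouble(8, 2.5);
        System.out.println(buf.getLong(0) + " " + buf.getDouble(8));
    }
}
```

Absolute-index `put`/`get` keeps the layout explicit, which is what lets the native side read the same offsets without any marshalling.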
> There are many ways you can use these methods that will return a result that is meaningless (if you’re lucky) or causes the JVM to stop abruptly (if you’re not).
In my world I prefer a hard crash to meaningless results, which I would then have a hard time tracking down...