Xenko Game Engine 2.0 released (xenko.com)
74 points by bluesilver07 on April 25, 2017 | 75 comments



Congratulations on the work achieved, especially on adopting higher-level languages.

The growing adoption of C# among game engines brings back memories of when game engines finally started to allow C++ in their source code.


I have memories of people trying Java for the same purpose. I believe C# will ultimately meet the same fate. Here is why I believe that:

Switching from C/C++ to a GC language is nothing like switching from Assembler to C/C++. We did not write Assembler because we liked it. We wrote it because we targeted 8/16-bit archs, which did not fit the C/C++ memory model very well. Because of this, Assembler had a huge speed and size advantage. As soon as flat memory archs became viable, almost everybody dropped Assembler without regret. We lost a bit of performance (which could have been easily regained by rewriting inner loops in Assembler) in exchange for reusable, portable code, a type system and terse program texts. These allowed us to write much more complex games by engaging bigger teams who could sensibly collaborate on a much bigger project in C/C++.

What is the transition to GC from C++? You sacrifice a lot of performance for freedom from managing memory. I don't know about other people, but having shipped a few AAA games, my only concern was fitting shit into available memory (which GC does not help, to say the least). You allocate memory on load. If you load multiple levels, you nuke the previous level's memory. If you stream, you do the same, but your "levels" are now "segments" in a pool. Of course, if you want to go full-GC and allocate your structures byte by byte, uint32_t by uint32_t - be my guest; not sure how you plan to compete, but, hey, load times (same as frame rate) are not that important, as they say on the internet :) Nevertheless, as soon as 32-bit targets become completely obsolete, people will find out that you can pre-allocate everything off-line in 64-bit, so all you will have to do at run-time is map/unmap pages (which GC won't help with either). In conclusion: one dropped performance on the floor (due to cache issues, SIMD issues, off-line vs JIT quality issues etc) and picked up the solution to a problem one should not have had in the first place.
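
For what it's worth, here is a minimal sketch of that load-time scheme expressed in C# (the class and names are hypothetical, purely to illustrate the allocate-on-load / nuke-on-unload pattern):

  using System;

  // Hypothetical level arena: one big allocation up front, bump-allocate during
  // level load, reset the whole thing when the level (or stream segment) goes away.
  public sealed class LevelArena
  {
      private readonly byte[] _buffer;  // pre-allocated once, never resized
      private int _offset;

      public LevelArena(int capacityBytes) { _buffer = new byte[capacityBytes]; }

      // Hands out a slice of the arena; no per-object allocation at runtime.
      public ArraySegment<byte> Allocate(int sizeBytes)
      {
          if (_offset + sizeBytes > _buffer.Length)
              throw new OutOfMemoryException("level arena exhausted");
          var slice = new ArraySegment<byte>(_buffer, _offset, sizeBytes);
          _offset += sizeBytes;
          return slice;
      }

      // "Nuke the previous level's memory": just rewind the cursor.
      public void Reset() { _offset = 0; }
  }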


C# is, among GC languages, particularly bloated with regard to memory. There are huge amounts of runtime type information (including stringified versions of everything), and you have close to no control over memory management without hooking into ugly APIs. I wish that Swift or Rust had been usable early enough to get swept up in the hype train so that we could avoid this crap. Also, it would mean I didn't have to use OOP to write games.


C# specifies the language, not the runtime. Apart from .NET and Mono, you can compile C# to C++ (which is what Unity does under the hood, btw).

> Also, it would mean I didn't have to use OOP to write games.

There are many instances where other programming paradigms are more usable - but in my experience, gameplay logic, specifically, maps very well onto classic OOP concepts.


If gameplay mapped well onto OOP, then the big game companies would use it, and not data-driven programming (specifically, entity-component systems). I think that there are a couple of things that require OOP-style dynamic dispatch, but they don't need to be pervasive; even a function pointer stored as a struct member is more than enough to provide all the benefits of OOP without having to take on all the drawbacks.
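
Roughly what that looks like in C# terms, with a delegate field playing the role of the function pointer (the types below are made up for illustration):

  // Hypothetical entity type: plain data plus one delegate slot.
  public delegate void UpdateFn(ref Enemy e, float dt);

  public struct Enemy
  {
      public float X, Y;
      public UpdateFn Update;  // the "function pointer stored as a struct member"
  }

  public static class Behaviours
  {
      public static void Patrol(ref Enemy e, float dt) { e.X += 1.0f * dt; }
      public static void Chase(ref Enemy e, float dt)  { e.Y += 2.0f * dt; }
  }

  // Swapping behaviour is just assigning a different delegate, no class
  // hierarchy or virtual methods required:
  //   enemy.Update = Behaviours.Chase;
  //   enemy.Update(ref enemy, dt);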


You should take a look at what is going on in the latest .NET Core CLR and C# compiler around 'ref-like' types and Span<T>. Basically, a way to let you write 'safe' C# code that interacts with memory that is allocated in different ways (for example, with controlled allocation outside of the GC).

Since C# is a multiparadigm language with strong functional as well as OO fundamentals, I suspect it will make it possible for libraries and frameworks to be built which enable allocation free code in C# gameloops. It's not just game engines which have these requirements - the same functionality is also critical for high performance servers, and machine learning applications, so this is a language feature with a lot of demand.
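
A rough sketch of the kind of thing that enables (assuming a runtime with Span<T> and a project compiled with unsafe allowed; the class and names here are made up, not Xenko or Unity API):

  using System;
  using System.Runtime.InteropServices;

  static class NativeSpanSketch
  {
      static unsafe void Main()
      {
          const int count = 1024;
          // Off-GC allocation: the collector never scans or moves this block.
          IntPtr raw = Marshal.AllocHGlobal(count * sizeof(float));
          try
          {
              // Span<float> is a ref-like type: bounds-checked access to the
              // unmanaged buffer from ordinary C# code.
              var floats = new Span<float>((void*)raw, count);
              floats.Clear();
              floats[0] = 1.0f;
          }
          finally
          {
              Marshal.FreeHGlobal(raw);
          }
      }
  }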


You're forgetting that this transition takes place in game engines: so the critical path is still mostly C++, while C# is responsible for game logic and most of it is run on callbacks, not in the update loop.

Same goes for memory: textures and other assets are managed through the engine; C# only contains game logic data and handles to the assets.


I was once working on a solo indie game using C# with OpenGL bindings. If you create it along the normal program lines of object instantiate/use/destroy (and repeat), then you're soon going to run into serious issues.

At the time I had many issues with the GC, Large Object Heap and fragmentation (although this was still with a much older .NET version).

For a game project, both for speed and memory, you want as much as possible to be pre-allocated (and all at once). During run-time you want as few objects as possible being instantiated; that way you save cycles on both the allocations and, eventually, on the GC. Of course it's not always that easy.
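
The usual shape of that in C# is something like the following (a bare-bones pool sketch; the names are made up):

  using System.Collections.Generic;

  // Everything is allocated up front; gameplay code only rents and returns.
  public sealed class Pool<T> where T : class, new()
  {
      private readonly Stack<T> _free = new Stack<T>();

      public Pool(int capacity)
      {
          for (int i = 0; i < capacity; i++)  // all allocation happens here, at load time
              _free.Push(new T());
      }

      public T Rent() { return _free.Pop(); }           // throws if the pool was under-sized
      public void Return(T item) { _free.Push(item); }
  }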

After that it depends on your targets, you'll never fit a Unity C# game as tightly into any target as a native app, but it's a trade-off that's worth making in many cases.


Java only failed because Sun didn't care enough about pushing it to game developers after their initial presentations. Actually, they didn't care about any scenario that wasn't JEE, in terms of putting resources into it.

Switching from Assembly to C, Pascal, Modula-2 or QuickBasic was mostly an amateur thing until SDKs started to be based on C.

Same with C++, we were the outliers until console vendors started pushing C++ onto devs. See Mike Acton's talks about how he enjoyed having been forced to move from C to C++.

The thing with GC is not freedom from managing memory; you still need to take care of how memory is managed. What one gets is type safety and a uniform way to manage memory across libraries.

You know exactly who is responsible for releasing it; there are no double deallocations, releasing invalid addresses, or having to deal with library-specific smart pointers (CPtr, QSmartPtr, unique_ptr, my_in-house_ptr, ...).

> due to cache issues, SIMD issues, off-line vs JIT quality issues etc

Having a GC doesn't preclude support for cache handling or SIMD.

Many GC-based languages have such support, and actually C# 7 with Microsoft's toolchain (not Unity's) does offer such features, even if they can still be improved.

As for JIT quality, it is no different from any other compiler; otherwise, as I said, why not discuss C performance using an MS-DOS compiler's benchmarks?


One problem I have with memory-managed languages for writing high-performance code is that you need to give up the simplicity of not having to care about memory management. You need to know exactly how the garbage collector in this specific language (and current version of the language) works, and how to build your code around that. The resulting 'mental burden' is the same as or even bigger than in C or C++, and the resulting code often looks worse than C++ code which does the same. An extreme example of this is asm.js: much higher performance than typical JS since it only uses numbers, not JS objects (and thus basically removes the need for garbage collection), but so different from manually written JS that it is basically a different language.

As for smart pointers in C++: they are only useful in a code architecture that essentially emulates C# or Java (all objects as single entities on the heap), and using C or C++ in such a "memory-managed style" really doesn't make much sense since it will be slower than a GC. The point about manual memory management is to reduce dynamic allocations as much as possible and have all your data either on the stack or in long-lived, big memory chunks. If you need to allocate and deallocate all the time, or track the ownership and lifetime for every little memory chunk, you're not using C/C++ to its advantage.

PS: C# for tools is great though (not so much because of the language, but because of the standard framework, which is much nicer than the C++ stdlib).

[edit: typos]


Just like you need to know how malloc()/new/STL allocators for your specific C or C++ compiler are implemented.

And most implementations actually suck at multi-threaded code, to the point there are companies selling better implementations of them.

> The point about manual memory management is to reduce dynamic allocations as much as possible and have all your data either on the stack or in long-lived, big memory chunks.

Also possible in GC-enabled languages, granted not available in all of them.

As a former C++ dev, I can guarantee you that unless you write 100% of the code yourself, there will be leaks, double frees, delete calls instead of delete[] ones, releasing of unallocated memory, and a total lack of control over third-party leaks.


The point I'm trying to make is that you lose the advantage of a garbage collected language once performance becomes important, so there's no point in using such a language for high-performance code in the first place.

Integrating third party dependencies into C++ is its real achilles heel, I agree. I only accept dependencies that come with source and either don't allocate dynamic memory at all, or allow full control over allocation by providing custom hooks. I also learned over time that typical C code is usually much easier to integrate than typical C++ code (which very often is an over-engineered template mess).

The way to deal effectively with memory management bugs is to allocate as little as possible. There are good analyzer and runtime tools (for instance the new runtime memory debuggers in Xcode and Visual Studio, the static analyzers, the clang address sanitizer in Xcode, etc.); these provide a much better view of what's going on under the hood than most managed-language tools. And those tools make it quite trivial to catch most memory-related bugs. If you're doing hundreds of thousands of allocations it becomes much harder though, because then the problems will be lost in all the noise.

It is a valid argument though to use higher-level languages in high-level gameplay code, but this should only do scripting-style stuff, gluing subsystems and game objects together. Up in this area I don't want to care about ownership and lifetimes. It's important though to find the point where the high-level code would be better moved into a low-level, central system.


>Same with C++, we were the outliers until console vendors started pushing C++ onto devs. See Mike Acton's talks about how he enjoyed having been forced to move from C to C++.

The first (major) console C++ SDK library is GNM/GNMX for PS4. Are you saying C++ games have been outliers till 201x? I guess this depends on how you define outliers then. Other than the guy streaming his game development on YouTube (Code Hero?) I don't know of any significant titles in C past 2000. I have not seen all the games, but I shipped a few myself and have friends working on many more, and nobody I know was writing pure C. Think of all the vtbl-> you'd have to write if you are targeting Windows/Xbox, just for the pleasure of naming your source file .c instead of .cpp!

>What one gets is type safety

Thanks, we've got this in C++ already.

>You know exactly who is responsible for releasing it, there are no double deallocations, releasing invalid addresses, or having to deal with library specific smart pointers (CPtr, QSmartPtr, unique_ptr, my_in-house_ptr, ...).

Never been a problem for me. The only game I touched that had smart pointers had them because it used NetImmerse, and it got cancelled anyway.

>Having a GC doesn't preclude support for cache handling or SIMD.

I don't know what cache handling is, to be honest, but what I meant is that, since GC moves stuff around in memory, it's likely to create cache hazards. And since it probably uses a single pool, you cannot change cache policies on a per-entity basis.

>As for JIT quality, it is no different from any other compiler; otherwise, as I said, why not discuss C performance using an MS-DOS compiler's benchmarks?

Really? You believe JIT will be on-par with a profile assisted, bruteforcing compiler, which can spend a week optimizing a single function?


> The first (major) console C++ SDK library is GNM/GNMX for PS4.

The major C++ SDK was the DirectX SDK for the Sega Saturn.

> Are you saying C++ games have been outliers till 201x?

Yes, the code was basically C compiled with C++ compiler.

> >What one gets is type safety

> Thanks, we've got this in C++ already.

Not when what most write is actually C compiled with C++ compiler.

> I don't know what cache handling is, to be honest, but what I meant is that, since GC moves stuff around in memory

It's the ability to write cache-friendly code.

If you don't want the GC to move memory around then don't allocate it via the GC.

Many GC-enabled languages also allow for global statics and stack allocation. Even C# has some support here, even if it isn't comparable to what Modula-3 or D allow for.

There is also the possibility of just allocating it off the GC heap.
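
A small sketch of the first two options in C# (global statics and stack allocation; assumes compiling with unsafe enabled, and the names are made up). The off-GC-heap case would typically go through something like Marshal.AllocHGlobal instead:

  static class OffGcHeapSketch
  {
      // Global static: allocated once at startup and rooted for the lifetime
      // of the program, so it is never collected.
      static readonly int[] LookupTable = new int[256];

      static unsafe int SumScratch()
      {
          // Stack allocation: lives in the current frame, freed on return,
          // never touches the GC heap at all.
          int* scratch = stackalloc int[16];
          int sum = 0;
          for (int i = 0; i < 16; i++) { scratch[i] = i; sum += scratch[i]; }
          return sum;
      }
  }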

> Really? You believe JIT will be on-par with a profile assisted, bruteforcing compiler, which can spend a week optimizing a single function?

No, but JITs can also make use of PGO just like AOT compilers. IBM J9 JIT and .NET RyuJIT have such support.

Also, many GC-enabled languages, including C#, do have AOT compilers to native code as well; it is not as if JIT is the only viable approach.


>The major C++ SDK was the DirectX SDK for the Sega Saturn.

a) there was no DirectX SDK for Sega Saturn. b) DirectX SDK (including one for Sega Dreamcast) is not C++. It has C++ bindings but is usable from C.

>Yes, the code was basically C compiled with C++ compiler.

You mean if I had taken that code and compiled it with a C compiler it would have worked? You realize even the DirectX C++ wrapper is not C to begin with, right? You know classes, overloads, namespaces are not C?

>Not when what most write is actually C compiled with C++ compiler.

Could you explain exactly how this works? The C++ compiler uses ML to recognize that the code is not exactly following Alexandrescu's book and turns off typechecks? I seriously just don't understand what you mean here. I usually take "C compiled with C++ compiler", "C with classes" etc as "not enough GoF patterns for my taste" but you are making some other claim here it seems.

>If you don't want the GC to move memory around then don't allocate it via the GC.

So, why do you want GC in the first place? For types, which somehow disappeared from C++?

>No, but JITs can also make use of PGO just like AOT compilers.

How exactly does it work? The code stops executing for a week, the profiler gets run under user credentials and then the JIT finally decides?


> there was no DirectX SDK for Sega Saturn.

Broken memories, I didn't bother to search for it (Saturn vs Dreamcast).

> You mean if I had taken that code and compiled it with a C compiler it would have worked?

Of course it would, that was one of the design goals of C++.

C90 is mostly a C++98 subset, except for stronger type conversion rules (no implicit void* conversions), precedence order for operator ?: and typedef/struct namespaces.

> You realize even the DirectX C++ wrapper is not C to begin with, right? You know classes, overloads, namespaces are not C?

Yes, but COM is also callable from C by design. Also I have seen many codebases that have restricted C++ code to calling DX APIs, with everything else being compilable by a C compiler as well.

> Could you explain exactly how this works? ...

1 - Rename .c translation units to .cpp, .cxx, .C

2 - Invoke C++ compiler on them

3 - Fix compiler errors related to semantic differences in C subset of C++

4 - Forbid use of any C++ specific feature beyond those required to use the OS SDK.

> So, why do you want GC in the first place? For types, which somehow disappeared from C++?

Productivity.

> How exactly does it work? The code stops executing for a week, the profiler gets run under user credentials and then the JIT finally decides?

PGO data generated by the JIT compiler gets updated after each application execution and is used as input for optimization selection by a multi-stage JIT, just like in an AOT compiler.

Feel free to read the Android 7 ART source code to learn about a possible implementation.


>Of course it would, that was one of the design goals of C++.
>
>C90 is mostly a C++98 subset, except for stronger type conversion rules (no implicit void* conversions), precedence order for operator ?: and typedef/struct namespaces.

I think either I don't understand something you are trying to say or you are confused. C being a subset of C++ (one of the design goals) does not imply C++ is a subset of C. C++ code in general cannot be compiled with a C compiler without rewriting.

>Yes, but COM is also callable from C by design. Also I have seen many codebases that have restricted C++ code to calling DX APIs, with everything else being compilable by a C compiler as well.

You have blah->Foo(bar); in C++. In C it won't compile by any design. That code has to be rewritten as blah->vtbl->Foo(blah,bar). It is not C code as you claimed. It won't compile with C compiler. It's C++.

>> Could you explain exactly how this works? ...
>
>1 - Rename .c translation units to .cpp, .cxx, .C

I asked how type checking disappears in this process, not how you compile C with C++....

>PGO data generated by the JIT compiler gets updated after each application execution and is used as input for optimization selection by a multi-stage JIT, just like in an AOT compiler.

How can a JIT compiler obtain the PGO data in the first place? Is it running under profiler all the time? You realize that you've just refuted your claim about performance not being affected, right?


You are the one not understanding what it means to pick C code and compile it with a C++ compiler, minus the semantic differences.

Should I enumerate all of them to make you happy?

> You have blah->Foo(bar); in C++. In C it won't compile by any design. That code has to be rewritten as blah->vtbl->Foo(blah,bar). It is not C code as you claimed. It won't compile with C compiler. It's C++.

Nothing prevents you from writing COM calls in C++ code just like in C, blah->vtbl->Foo(blah,bar). The code won't stop compiling.

> I asked how type checking disappears in this process, not how you compile C with C++....

The whole point was about writing C-like code with a C++ compiler.

> How can a JIT compiler obtain the PGO data in the first place? Is it running under profiler all the time? You realize that you've just refuted your claim about performance not being affected, right?

By using a multi-stage JIT compiler with different levels of optimization and making use of multi-cores.

99% of the applications are never able to saturate all cores to the point it matters to the overall performance.

I have always been on the Pascal and C++ side against C since the early 90's on BBS and USENET.

So this type of disbelief in better tooling is not new to me.

CryEngine, Unreal (C++ with GC), Unity, MonoGame, with their separation between lower-level C++ and higher-level languages, and the way those engines are being adopted by Sony, Nintendo, Microsoft, Amazon and Google, speak for themselves.

Just like C and Pascal overtook Assembly, C++ overtook C, something else will overtake C++.


Ok, you completely lost me on the point you are trying to make wrt "C compiled with C++" however I wholeheartedly agree with your penultimate paragraph.

Sony and Nintendo do not use these engines. MS might be making the new Gears with UE, though; the rest of their first party don't. Amazon and Google have one successful game between themselves (and that's a mobile game). All these engines bring to the industry are indies and mobile games. UE actually lost quite a lot of the AAA they had in UE3 times. Epic's and Crytek's own games don't do that well (Crytek is not even making payroll). So yeah, looks like C++ games do not have much to worry about.


> Ok, you completely lost me on the point you are trying to make wrt "C compiled with C++" however I wholeheartedly agree with your penultimate paragraph.

Use the platform C++ compiler, but write only code in the language subset common with C, code that could even be compiled by a plain C90 compiler.

The only exception being wrapping C++ SDK APIs in C-like wrappers.

A pattern I have seen too often.

Dismissing indies is the wrong attitude; they were the first ones to adopt C and Pascal, then C++, Objective-C, and nowadays C#, Swift and Java.

Sony does use C# a lot on their internal tools.

Nintendo uses Unity for prototyping.

Microsoft and Google are using Unity and Unreal a lot for their VR work.

All of them invited MonoGame developers and supported them porting MonoGame into their platforms (PS4, Switch, UWP Xbox).

C++ might not go away; people still write games in C and Assembly, but it will become that very thin bottom layer in the engine architecture diagram.

Just give it time; eventually even all the major C compilers became written in C++.


>A pattern I have seen too often.

And I have never seen it. I have seen some C games (like Quake) but never worked on one. And I've been working in games since the 90s.

>Sony does use C# a lot on their internal tools.

And now we are in the territory of "console games are written on PC!!!1! So there!". Sony also uses Perl and Python on their servers, and their movie studios use a lazily evaluated shader language. Does that mean games are switching to any of these soon, to you?

>Nintendo uses Unity for prototyping.

And probably play Pokemon Go, also written in Unity! I see the pattern now... it's all clear.

>Microsoft and Google are using Unity and Unreal a lot for their VR work.

Listen. Unity has its uses. If anything it keeps indies off the streets and provides artists with a way to earn a quick buck. I am not against Unity. It's just silly to imagine it has any chance of competing with the traditional game industry tools. Saying so-and-so uses Unity for something that is not a game does not make the point I imagined you were arguing here, which is C# displacing C++ in games. Elon Musk likely uses Unity, so there. Can't beat the old Musk.

But it still does not matter at all to the people who have been writing games in C++.


> Not when what most write is actually C compiled with C++ compiler.

Is this moving the goal posts a little bit? C++ now doesn't necessarily look anything like C++ written 20 years ago. Way back, people were writing C+, or C-with-classes, but that's what C++ was, in that era.


Not at all; just because C++11, C++14 and C++17 offer lots of improvements over C++98 doesn't mean people use them.

I can assure you that at enterprise level, every time I have to integrate C++ code with our Java or .NET stacks, it looks exactly like C++ written 20 years ago, even if it was actually written last week.

Just because C++ has been vastly improved doesn't mean everyone using it is adopting the new features; some people would rather stay on Python 2 forever.


> Really? You believe JIT will be on-par with a profile assisted, bruteforcing compiler, which can spend a week optimizing a single function?

I was in game development looong ago (C++ / assembler, lots of 3D software rendering). What kind of compiler is that? Could you give some pointers?


All modern compilers do this (MSVC, clang, I believe gcc too). Search for "PGO" (Profile Guided Optimization). Also, not just for C++: the PS3 shader compiler did brute-force instruction scheduling optimization (using shaderperf instead of a profiler).


damn time to refresh my knowledge :-) That's what happens when you do CRUD too long :-)

Last time I optimized code the hard way was using VTune to channel the right operation in the right pipeline.


In UE4 everything is garbage collected. Lots of high-performance AAA games use it.


Maybe Nim would be a good GC-included C++ replacement then? Its GC can reliably be constrained to a maximum run time of 2ms, and it has strong macro support.


IMO, 2ms is too much for strong macro support. Even if GC had zero run-time cost I'd still rather use C++, since 99.9% of my code is creating memory structures of a particular layout (I write graphics). To write the same code in a GC language I'd have to map memory to a file and then use the strong macro system to emulate native struct types from C++ :)


This is Nim vs C#, not Nim vs C++. If you're writing hot loops and graphics code a GC is utterly anathema to your goals. I think given 5/10 years Rust will be the language of choice here, but the ecosystem is nowhere near good enough yet.


As of now I don't see any problems with C++. I doubt Rust is going to offer something so much better that people will abandon the tools they already have for these benefits. Though, who knows? I never touched Rust.


It just needs an OS company to pick it up and push it down devs' throats, like happened before with C and C++.

If you want the goodies on Windows you need C++; if you want the goodies on iOS and OS X you needed Objective-C, and now Swift as well; the goodies on Android require Java; the goodies on consoles moved from C to C++.

Same here, if tomorrow one of those companies decides their SDK should be Language X, anyone that wants a piece of the pie will learn Language X.


This could come from an IoT company that does things well. A company providing unsupervised devices would benefit from an OS with a strong approach to avoiding security bugs.


Unity games are often reported to have performance issues, allegedly due to the underlying GC (or lack of understanding of it). I'd rather hope for the adoption of a less messy C++ alternative that is suitable for quasi-realtime applications. So far Jai seems to be the most serious competitor in this regard.


Because Unity uses a Jurassic Mono implementation with a stone age GC and JIT implementation.

By the same metrics, we should evaluate performance of C and C++ code by using benchmarks with MS-DOS C and C++ compilers, which should be about the same age.

Don't mix languages with implementation when talking about performance.


Amen brother. I think most perf problems in Unity games would go away if Unity updated their GC. It's definitely our #1 source of frame rate hiccups. Plus, it makes you write weird crappy code just to avoid GC allocations. Ugg =(
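
A typical example of the kind of workaround being described (a sketch in plain C#; nothing here is a specific Unity API beyond the Update() naming convention):

  using System.Collections.Generic;

  public class EnemyScanner  // would derive from MonoBehaviour in a real Unity project
  {
      // Allocated once, reused every frame instead of "new List<int>()" inside Update().
      private readonly List<int> _hits = new List<int>(64);

      public void Update()
      {
          _hits.Clear();  // reuse the buffer; no per-frame garbage
          // ... fill _hits with plain for loops; LINQ and foreach over interface-typed
          // collections are avoided here because they can allocate every frame ...
      }
  }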


Are there plans to replace Mono with .NET Core?


No, Unity guys are doing their own thing with IL2CPP.

Basically they generate C++ from MSIL as a means to generate native code, instead of building their own compiler backend.

They plan however, to eventually update their Mono runtime.

https://forum.unity3d.com/threads/future-plans-for-the-mono-...


> So far Jai seems to be the most serious competitor in this regard.

1. Jai is not even released.

2. Jai is a programming language, not a game engine.

Unity is slow because they are using an outdated C# implementation (some old Mono).


1. If you follow its development you know the features and the design philosophy. What's your point?

2. I picked Unity because it's the most popular engine with an underlying GC.

Regardless of their GC implementation, in games you don't want opaque runtime behaviour, which a GC always introduces. This leads to weird workarounds in your code that nobody understands instead of having clear and explicit memory management.


My point is that until a language is proven, as in used on a large scale, it is just a nice idea. Let's wait until Jai is released and used by different teams for different tasks before we proclaim its superiority.


I didn't claim superiority, I just said that it's the most serious competitor, a point I still stand by. The original post compared the adoption of C# to the adoption of C++, the now de-facto standard for games. C# does not have what it takes to become the de-facto standard for games.


Sure it does.

C++ became a de-facto standard for games, because console SDKs moved from C to C++, pushed by the companies selling them.

The same companies that are now adopting Unity and have already toyed with the idea of having a C# SDK.

If, for the sake of example, the PS5 SDK were C#-based, devs that wanted any money from PS5 games would adopt it, regardless of their feelings regarding C++ vs C#.


>If, for the sake of example

This is a terrible example and you have no idea what you're talking about.


As someone who once upon a time was at the SCEE Soho office, maybe I do know one or two things.


Not to be too pedantic but I'm pretty sure he's never actually referred to it as "Jai" and everyone's only been inferring that because of the source file extensions.


> Unity games are often reported to have performance issues, allegedly due to the underlying GC (or lack of understanding of it).

Is there a way of running C# with reference counting?


At that point you've created a different language with different semantics.


I think "you can write C#, but you can only have your references form a DAG" would be acceptable in a lot of contexts, like game programming. (Existing libraries would still require some form of tracing, but I think there are ways that the tracing could be handled by another thread.)


I instinctively get that thought whenever I work with C#. It's really nice but would it kill them to at least offer simplistic memory management semantics?


People think RC as a GC algorithm is simpler, but it isn't.

Naive implementations are slower than tracing GC, while high performance ones are as complex as tracing GC ones, while having the burden of forcing the developers to explicitly deal with cycles.


> high performance ones are as complex as tracing GC ones, while having the burden of forcing the developers to explicitly deal with cycles.

I bet there are plenty of game developers who would be only too glad to arrange their references as a DAG, if they could have guaranteed low latency.


Maybe, but if they don't take care, some stack overflow surprises might happen: the fun of cascade deletions.


How does this compare to Unity?


Frankly any comparison is going to be superficial unless the comparison is done by a person who has actually shipped a non-trivial game on both engines.

So many of the problems you run into making a game on Unity, Unreal, etc come down to the out-of-the-ordinary requirements of your game and the inevitable peculiarities of the engine itself. Part of becoming an "expert" in building games on a third party engine is knowing where the pain points lie, where you should absolutely not fight The Way It's Done, and what bugs remain unfixed for years on end (Unity asset bundle system, looking at you).

Edit: My point being that unless you know the needs of your particular game and have familiarity with a candidate engine, it will be very difficult to determine if your choice is "best" or not.


Reminds me of all the "Let's make a Battlezone-like shooter" gamedevs who dropped by the Spring Engine dev site. And yes, you are perfectly capable of doing that. And no, it won't work well, for the reason that an RTS engine has a built-in lower physics simulation frame update and is allowed to react "slower" to commands than a shooter. So, if you want that shooter to happen, an engine rewrite it is.


As someone who's been through the pain of making UE3 fit a game it wasn't built for, that's one of the best nuanced answers I've seen on the subject.


They'll be competing similarly to the Shiva engine; it will be a mindshare battle, as they aren't the first Unity competitor.

Unity is the best known; jobs and markets have sprung up around it, a massive asset store now exists, and it also has many integrations, such as APIs and tools, that take years to acquire.

Unreal and Epic have Tencent money and are on the way.

The OSS side of this engine is pretty interesting. Will be fun to have another competitor in this area. Xenko looks like a direct Unity competitor, even down to the pricing and tiers. The big challenge is the community, asset stores and launched titles under the engine. Branding is important as well; to me Shiva and Xenko are forgettable brands, while Unity and Unreal are set in minds.


If only I had time to learn another game engine. But I'd rather spend it making games with Unreal or Unity, and I doubt Xenko has something important they don't.



Looks like they're on Gitlab

https://git.xenko.com/xenko/Xenko-Runtime


Maybe the developers just haven't merged back into 'master' in a while:

https://github.com/SiliconStudio/xenko/network


I think they are just closing the source as they make it commercial...

Edit: Oh, nevermind - on the home page they explicitly say that they're open source.


And by "open source" they mean a proprietary product[0] with "personal" (stripped down and highly restrictive license) and "pro" versions[1], naturally.

The old version (which is on github) is GPLv3 apparently.

[0] http://xenko.com/legal/eula/

[1] https://store-dev.xenko.com/get-xenko


They say that on the front page, but apparently they are not; from the FAQ: "Currently the source code of the Game Studio is available only to Xenko Pro Plus and Custom subscribers on a corporate basis. If you would like to subscribe to Xenko Pro Plus or Custom, please contact us." and "Under the terms of the Xenko end user license agreement, you can share modified versions of Xenko in source code or binary form only to users who have the same Xenko license as you or higher. Additionally, you must include the Xenko end user license agreement at the root of the shared files."


No, not actually completely open source. In particular, you don't get editor source unless you license the Pro or Custom versions.

Unreal may not be open source, but having source access at all license levels continues to be incredibly attractive.


Always tell me what the product is in the first sentence or two.


Front page:

> Next-Level C# Game Engine

> Xenko is an open-source C# game engine designed for the future of gaming. It comes with a full toolchain and is especially well suited to create realistic games but allows you much more!


But not on this page, which happens to be the first page I've ever read on this project.

The Economist is a good example here. In every article, every time a new name or topic is introduced, it will provide a few words describing the person/thing. An example taken from the first sentence of the first article I opened: "Much of the language used by Mike Pence, America’s vice-president, [...]"


Professional journalists write like this, because journalism is intended to be consumed "statelessly"—i.e. with no assumption of previous knowledge.

Writers for "progress announcements" blogs like this one don't tend to write in this style, because these writers know that the only interest anyone would have in such a blog is if they already knew what the subject was and then wanted to subscribe for updates.

Which is to say, deep-linking to a progress-announcement blog from another website is almost always useless, and the webmasters of such blogs would actually do well to disable it entirely (e.g. with a robots.txt policy + a server-side hotlink-detection redirection rule) and just suggest that people interested in sharing the announcement should instead write a few lines of "actual journalism" on their own blog about the release, and then share that.

People who like linking directly to primary sources would hate this, but sometimes primary sources are not in the easily-consumed essay or encyclopedia-article styles that much of the modern web has become. Sometimes a primary source is just a commit log, or a diary, where you need to "go back" to get any context. The primary source gets to do what it likes; it's not beholden to the Internet. If someone wants a good summary to share, they're beholden to write one.


It is not actually open source, though- see https://news.ycombinator.com/item?id=14199899


The beta version is GPL3; the new version 2.0 apparently is not.


Cool to see Silicon Studio came out of SGI Japan.


ohh you can use C# 7 features, gotta play around with it now


windows.. meh



