The Matrix demo is close, but the facial expressions are still a bit off, like the person doing the mocap is wearing a mask even when they're not. It'll be quite interesting when they finally get those issues resolved, and when developers also try to make more unreal (pun intended) looking effects that interact with realistic-looking settings or characters. Having the freedom to make something that disjointed in a 3D renderer, like surrealists used to do with oil paints and the like, is something underutilized in cinema and games imo.
Horizon Zero Dawn has the best facial expressions I've seen yet. First I noticed the lip syncing was very good. Then I noticed it also seemed to be doing procedural micro-expressions as well as small eye saccades. Paired with subtle, expressive voice acting, it's remarkable.
Are you being serious? Don't get me wrong, HZD is an amazing game, but man the faces are absolute robots. That's not the team's fault though - any open world game with lots of dialogue will pale in comparison to hand animated faces.
> any open world game with lots of dialogue will pale in comparison to hand animated faces.
Pretty sure this won't age well - or that it's already aged out. There's only so much animators can do. In the film industry, algorithms often augment an animator's work; frequently they do the bulk of it, and animators fine-tune to get the exact effect envisioned.
Also, artistic animations can look great, but they can also be very inconsistent. Even more so when you need multiple people to work on thousands of lines of dialogue. It also requires an enormous budget - not all studios will have Gollum-like funding for every single NPC.
That said, I agree eye movement is a great way to convey feeling, and Horizon Zero Dawn did it well (and its DLC even better). Yakuza 0 is another example of doing a good job; the exaggerated animation style shows in the characters' eye movements too. A great sample is this sequence which, very conveniently, features a blind character who lacks eye expression: https://www.youtube.com/watch?v=jfVqfelfYHo
I mean HZD. Haven’t gotten to HFW yet. I think the animation below the neck is subpar, but not the faces. On a hunch I checked Last of Us 2. To the best of my observation, the eyelids, circular muscles, and nostrils don’t micro-twitch, and the eyes don’t regularly saccade. That’s what I’m talking about.
Need to go back and try to push on with Forbidden West, but man... it was brutal even right after finishing ZD; not sure how approachable it will be after a long break.
Forbidden West is the sequel that came out in February. From "it was brutal even after just finishing ZD", it sounds like you're mixing it up with The Frozen Wilds DLC that came out for Zero Dawn.
Forbidden West has some of the best mocap with voice acting I've seen in a while, since the 2018 God of War. It's one of those things that's part art and part technology, but making better tools will get this over the hurdle, I think.
Do companies use machine learning for face animation at this point? E.g. capture a bunch of facial mocap data and phoneme streams and train a transformer model on how the face would move given a new stream? I guess it's probably easier just to translate phonemes to facial movements with much the same outcome?
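The "just translate phonemes" baseline is essentially a viseme lookup table plus cross-fading. A toy sketch of that idea (phoneme names are ARPABET; the blend-shape names and weights are made up for illustration):

```cpp
#include <map>
#include <string>

// Hypothetical viseme: a few face blend-shape weights driven per phoneme.
struct Viseme { float JawOpen, LipPucker, LipStretch; };

// Tiny phoneme-to-viseme table; a real one covers the full phoneme set.
const std::map<std::string, Viseme> kPhonemeToViseme = {
    {"AA", {0.8f, 0.1f, 0.2f}},  // open vowel: jaw drops
    {"OW", {0.5f, 0.9f, 0.0f}},  // rounded vowel: lips pucker
    {"IY", {0.2f, 0.0f, 0.8f}},  // spread vowel: lips stretch
    {"M",  {0.0f, 0.3f, 0.1f}},  // bilabial: mouth closes
};

// Cross-fade from the outgoing viseme to the incoming one as t goes 0 -> 1.
Viseme Blend(const Viseme& a, const Viseme& b, float t) {
    return { a.JawOpen    + (b.JawOpen    - a.JawOpen)    * t,
             a.LipPucker  + (b.LipPucker  - a.LipPucker)  * t,
             a.LipStretch + (b.LipStretch - a.LipStretch) * t };
}
```

A learned model would replace the table (and pick up coarticulation the table can't), but the output it drives, blend-shape weights over time, is the same.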
Agreed. Not sure how much can be blamed on the poorly written dialogue. In contrast, Witcher 3, for example, was so engagingly written that I never minded the procedurally animated dialogue scenes.
For me, it was definitely a visual thing. As in, they look plastic in stills too. There's just this very unrealistic uncanny-valley thing going on when characters smile that few games have been able to get around so far (but I haven't really played any latest-gen stuff, so maybe it's improving).
Young Aloy was worse than adult Aloy [1], but it's like that for adult Aloy too, and for most other games, although I found HZD worse than most in that respect. I haven't played Forbidden West yet, so I'm curious to see how much they've improved there.
The demo was great, but it really fell apart any time the mocap models moved. The facial animation is great, but the body movement is still stilted and fake.
The next step change will have to be some form of advanced kinematic rigging that brings motion granularity akin to what Nanite does for graphical fidelity.
UE5 is a step in that direction, but there is still lots of work to be done.
These tools are still going in the direction of making a person sit in a chair all day.
What will really be a game changer is AI that can generate the bulk structure, having been trained on the best of the best hand-built models, so we can iterate on the emotional details.
I’m working on the “bulk structure” part, training models to generate random game worlds that roughly adhere to the rules, look, and feel of everyone's favorites right now (while avoiding copyright issues).
After that my goal is empowering consumers directly to nudge the styles in their preferred direction.
I’m mostly motivated by the MBA-ification of everything. My goal now is to just have AI produce new content for me, even if such a thing puts game developers out of a job. I'm starting to experiment with cartoons as well. Optimizing for myself, like we all do.
Recently Unity released a video with quite impressive facial animations. Unlike Unreal's, though, this one isn't something you can actually try yourself yet, unfortunately.
Facial detail may be great, but it’s amazing to me that they would put so much effort into making an animated character look life-like, but neglect to have it breathe.
There is zero chest wall movement. If you have a friend who works in medicine, especially emergency med or intensive care, send them this link and ask them how natural it looks. I bet most of them will immediately notice the same issue.
Yeah that one is surprisingly good, even if the eyes move in a way that feels like the character can't see. I'm not sure why I feel that way, but something seems empty.
> First off, there’s Lumen—a fully dynamic global illumination solution that enables you to create believable scenes where indirect lighting adapts on the fly to changes to direct lighting or geometry—for example, changing the sun’s angle with the time of day, turning on a flashlight, or opening an exterior door. With Lumen, you no longer have to author lightmap UVs, wait for lightmaps to bake, or place reflection captures; you can simply create and edit lights inside the Unreal Editor and see the same final lighting your players will see when the game or experience is run on the target platform.
Please correct me if I'm wrong, but isn't this basically a software-based implementation of NVIDIA's hardware-based RTX?
As mentioned by others, not really. Lumen can use either software or hardware raytracing as one of its components, but it uses them for different purposes. (One of the downsides of hardware raytracing is that you're limited to the very specific kinds of acceleration data structures that are supported by the GPU.)
RTX is basically a marketing term for Nvidia's hardware that can accelerate ray tracing. Previously you had to interact with it through proprietary Nvidia APIs, but now there are DXR and Vulkan Ray Tracing, which let you interact with other hardware or software implementations of these standards. AMD has hardware-accelerated ray tracing now (but it's quite slow tbh), and Intel just released their GPUs with hardware RT. Lumen is a GI algorithm that builds on RT APIs. You can learn about its inner workings here: https://www.youtube.com/watch?v=2GYXuM10riw
> Previously you had to interact with it through proprietary Nvidia APIs
That was never the case; OptiX was never intended for use in games. VK_NV_ray_tracing and DXR 1.0 were there pretty much since launch (the former) or quite soon afterwards (the latter).
I was talking about VK_NV_ray_tracing precisely, which is a proprietary Vulkan extension. And I thought there was a similar situation with DXR (at first a proprietary Nvidia extension, then a common standard), but I was mistaken.
> I was talking about VK_NV_ray_tracing precisely, which is proprietary Vulkan extension
Khronos does not promote an extension to KHR until multiple vendors ship support for it.
VK_KHR_ray_tracing is a direct continuation of VK_NV_ray_tracing, with very light code changes needed (and nothing prevents a vendor from shipping support for both).
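For anyone wiring this up, an app can simply probe at runtime for whichever extension the driver exposes. A minimal sketch using the stock Vulkan enumeration call (the KHR extension ultimately shipped as VK_KHR_ray_tracing_pipeline):

```cpp
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

// Returns true if the device exposes ray tracing, preferring the
// cross-vendor KHR extension and falling back to the older NV one.
// Real code would go on to query the corresponding feature structs.
bool HasRayTracing(VkPhysicalDevice device) {
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(device, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(device, nullptr, &count, exts.data());
    for (const VkExtensionProperties& e : exts) {
        if (std::strcmp(e.extensionName, VK_KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME) == 0 ||
            std::strcmp(e.extensionName, VK_NV_RAY_TRACING_EXTENSION_NAME) == 0)
            return true;
    }
    return false;
}
```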
RTX provides hardware primitives from which one could build portions of a global illumination solution. Lumen is such a full-featured, turn-key solution which I believe can run in "software" (as in not using dedicated ray tracing hardware functions) or can take advantage of hardware raytracing, including RTX, for some portion of its calculations.
Unreal was already the best engine purely on ease of use and on support from third-party programs and the community; none of those things can be said about CryEngine.
"Lumen implements efficient Software Ray Tracing, allowing for global illumination and reflections to run on a wide range of video cards, while supporting Hardware Ray Tracing for high-end visuals."
Apologies again, I forget how nit-picky this place is. What I MEAN is that it is only useful with hardware acceleration. I am sure a software renderer exists, but I somehow doubt its ability to ray-trace in realtime.
And to everyone else, yeah, it does run on next-gen platforms no problem; those platforms also happen to have hardware-accelerated ray tracing.
I'm telling you, I have a 1060 and it looks very, very good. You can read in their documentation how they do it, but it basically doesn't need the RT accelerators because they precompute the geometry with SDFs and some other things I don't fully understand. The difference between RT hardware and software is purely performance, and performance is good with just software.
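The core trick behind ray tracing an SDF in "software" is sphere tracing: step along the ray by the distance to the nearest surface, which is always a safe step. A toy sketch, with a single analytic sphere standing in for the precomputed per-mesh distance fields Lumen actually uses:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
Vec3  add(Vec3 a, Vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3  mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float len(Vec3 a)          { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

// Signed distance from p to the scene: here, a unit sphere at the origin.
float SceneSDF(Vec3 p) { return len(p) - 1.0f; }

// March along the ray; each step advances by the SDF value, which can
// never overshoot because no surface is closer than that distance.
bool TraceSDF(Vec3 origin, Vec3 dir, float maxDist, float& hitT) {
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxDist; ++i) {
        float d = SceneSDF(add(origin, mul(dir, t)));
        if (d < 1e-3f) { hitT = t; return true; }  // close enough: hit
        t += d;
    }
    return false;  // escaped the scene or ran out of iterations
}
```

No RT cores anywhere in that loop, which is why it runs on a 1060; hardware RT just swaps in something faster.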
Can someone that has experience with Unreal Engine give some sort of approximation of what is required to sort of achieve minimum competency as a developer in it? Like, with web stuff, you can get up and running fairly quickly and slowly scaffold your way to more intricate projects... can the same be done with something like this or is the initial learning curve a bit steeper?
Yes, Unreal is quite easy to get started in. You get scaffolding immediately when you start a new project; it literally has template options for different game types. It's very friendly to artists, as nearly all the art tools are GUI-driven. Blueprints are okay; I didn't love them for everything, but for a lot of things they were fine. Code is harder: you have to read their implementations if you want to do anything sufficiently complex, and there's not a lot of comprehensive technical material on the internals. Using Unreal Engine as a programmer feels like using "enterprise" software in terms of scale and bloat, except that it was built by seriously competent engineers.
Easy to get into, but way tougher to finish a product. You are stuck using forums and dealing with out-of-date and incomplete C++ documentation, which is a non-starter for anybody without game dev experience. You really can't rely on Blueprints alone, and it feels like way more work than it should be for simple things you could just write in C# in Unity.
Unity isn't exactly perfect either; there's just confusion about which version to start out on, but the one with the most tutorials and the largest user base seems to be the answer.
Absolutely correct that UE C++ is daunting. You just have way too much responsibility, and you absolutely need experience with C++. It also takes more developers, who are harder to find compared to Unity devs.
Unreal Engine really isn't it for indie or small studios. It just takes so much longer to make something in it, and you almost certainly end up working with C++ to fix performance issues, do debugging, etc.
For large studios, especially film studios using it to create 3D environments? It's perfect, and those are UE's target market, since they're guaranteed to have revenue that can pay Epic (its license takes a percentage of revenue generated). For small studios and indie devs, the risk is far greater.
These asymmetric financial incentives mean indie and small studios are always sidelined, since they don't pay the bills. That's where Unity really shines.
One of the things I saw when playing with it is that they emphasize a search- and IntelliSense-reliant style of programming with autocomplete, i.e., make a guess and let IntelliSense tell you the object/type/whatever you're playing with. So you don't technically have to read their implementations if you know how to leverage IntelliSense properly. In fact, Microsoft specifically sped up IntelliSense for Unreal Engine (but the changes benefit all IntelliSense users): https://devblogs.microsoft.com/cppblog/18x-faster-intellisen...
> Is there any good engine made for programmers? A kind of "bootstrap" of video games?
There are so many starter kits out there, for Unity at least, and even for older platforms such as Ogre3D and SDL, that I think the highest barrier to entry for any programmer is the fun idea.
I saw the launch trailer back when early access came out last year. I had been playing VR pretty often but had never even considered anything in 3d dev because it seemed too hard.
After seeing how far Unreal had come and downloading it to test it out, I finished up the consulting project I was on and started looking into making VR games.
Initially I messed around with some other projects too but for maybe the last 5 or 6 months I've been working on pretty much just this.
It was a lot to learn, but once you understand the core concepts it does not seem that different to me than normal software development.
I'm currently learning how to use C++ instead of Blueprints. Each thing I learn makes the engine seem easier and more like any other piece of software. There is some cruft, but less than you might expect.
I watched a bunch of courses and videos, but looking back I think the best place to start would have been the docs, for both Unreal and Blender. They took me a while to work through and I had to stop and google terms constantly, but they were really helpful for understanding some of the basic mental models and terminology in the space.
There is a learning curve but it's not too bad. It took me about 6 months to go from zero experience to winning prizes in their 'make something unreal' contest for unreal engine 3. If you have any development experience it should go pretty fast.
It is a lot because the platform is now so expansive. I mainly use it for my AI/DL projects and it took me about 6 months to get up to speed. There are many Youtube tutorials. It helps if you know how to use Blender and other DCC tools such as Houdini.
I would highly recommend starting with Unity instead. Unreal is more powerful but it also requires a much steeper learning curve. And 99.99% of games don’t have the budget to take advantage of the extra power.
From an art-workflow perspective nothing even comes close. Just put down your models in the scene, place the lights, and you're done. Removing all the additional scene setup and art passes really sets artists free to actually work on the art instead of optimization and fake lighting: no baking lightmaps, no baking LODs, no hours spent placing 'fake lights' to simulate GI.
I just created a new scene in Unity 2020.3.5 with terrain going out to 6000 and a simple camera fly script to fly around, and it all looks good to me; no shaking.
Using 32-bit floating point you'll have 0.1 accuracy up to 100000 at least.
Edit - I just realized 10 cm accuracy is not great (and that I'm talking to myself), so I tested with 0.001 and that's stable up to 9000.
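If anyone wants to sanity-check the spacing math, the gap between adjacent 32-bit floats at a given coordinate is easy to print with std::nextafter. A quick standalone sketch:

```cpp
#include <cmath>
#include <cstdio>

// Print the spacing between adjacent 32-bit floats at several magnitudes;
// this is the smallest position change a float coordinate can represent.
int main() {
    for (float x : {100.0f, 1000.0f, 9000.0f, 100000.0f}) {
        float gap = std::nextafter(x, INFINITY) - x;
        std::printf("at %8.0f: spacing = %g\n", x, gap);
    }
    return 0;
}
```

At 9000 the spacing is about 0.00098, just under the 0.001 threshold above, which lines up with the observed cutoff.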
I've been working with Unity with the intention of making a large open world, and so this was one of the things I deliberately experimented with and tried to quantify. I found I had to be several orders of magnitude farther from the origin to notice stuff like that, and frankly I was a little disappointed that it was so hard to reproduce. Is there a sample project or description you could share that would help people observe that? I would not be surprised if certain circumstances other than purely distance make it worse. For example, when I traveled large distances solely on the world X axis, I saw my meshes quiver only when the camera was also aligned on/orthogonal to the X axis (don't remember which one exactly.)
I may be wrong, but I think that is talking about creating games that run on Apple Silicon, not building the editor/engine itself for Apple Silicon (i.e. the dev setup). Regarding the Xcode GPU plugin, that is cool; I didn't see that, thanks!
- Epic Launcher is using Rosetta
- Unreal Editor is using Rosetta (first launch took an hour or two, maxing out one core only)
- ShaderCompileWorker is using Rosetta
- Packaged project for macOS is using Rosetta
Yeah, at least it’s all working via Rosetta. When I first got my 13in MBP at launch, I couldn’t get it to compile at all.
The editor runs pretty well, actually. I did an Unreal 5 build and it was a bit laggy, but if you turn off Lumen and TSR for the viewport renderer, it's a lot better. It's definitely usable under Rosetta, but native would surely be a lot better.
Our venture-backed startup is actively working on integrating web support back into Unreal Engine, as we see a huge opportunity for an easy cross-platform export target for developers.
We're working on a WebGPU backend right now for both UE4 and UE5, and have already upgraded UE4 to support WebGL 2.0 from 4.24-4.27, as support for HTML5 was removed back in the 4.23 release.
Another major innovation: we've brought a much improved compression format (Basis) into our pipeline, and we've also created asynchronous asset fetching that only grabs the data a user needs to see at any given moment, streaming the rest of the assets in the background as necessary. This dramatically reduces load times, which was one of the biggest complaints about Unreal on the web previously.
Conjecture:
1) Unreal is overkill for the web, and I feel UE5 even more so. You can make a 500 MB web game, but I won't play it.
2) The web makes no money, except in China, but point 1 still applies.
As mentioned, official support was dropped and the community supported extension hasn't been updated in ~2 years. If you need to target WebGL, then Unity is a better bet.
If Epic were public, I’d park all of my retirement funds in there. I’ve watched this UE5 development and actively developed in it as a non-gaming dev! They’re thinking 5-10 years ahead.
In 10 years, I think we'll all be using 3D engines like this to develop everything, the same way 3D cards went from being for gaming/animation only to being everywhere. The tooling has demanded so much, and Unreal has pretty much nailed it. If they can make their engine more generalized, it's going to become a killer app.
You should note that I'm an amateur who toys with different game engines from time to time, but I don't think any of their competitors is even close to being able to catch up with UE, even given 10 years (not talking about the current state of UE, but about the state of UE 10 years from now).
Unity does have some nice ECS things going, but overall their tech isn't good (a prime reason being that they don't make games, they make an engine).
The main branch of CryEngine is dead as an engine that anyone aside from Crytek uses, but Amazon invests a lot into their fork (though it lacks vision TBH).
Godot is good for 2d, but they aren't really going for that kind of experience.
Post-NeRF world: Google Maps and Waymo would've mapped the entire world, every city, road, terrain, and crowd you can think of, available in any range of artistic tones, lighting, and weather effects.
It would be just a matter of paying Google a license fee to use their environment in a film or games.
"Hey Google, generate a GTA V clone with cel-shading located in Liberia with Final Fantasy 7 characters but not enough to infringe on US copyright laws."
"Sure, here is the link you can share or the downloadable executable to submit to Steam"
"Good, you go ahead and submit that to Steam and let me know when it hits 10,000 downloads"
At the same time, the investment in Epic is only a very tiny fraction of Tencent's market cap. So it doesn't make sense to invest in Tencent if you're interested in Epic.
This sort of engine opens a vast new vista of computing possibilities; apart from games and films, the opportunities for such 3D environments in many businesses are large (maybe huge!)
So as someone who has never touched game dev, what is the learning curve like - exponential? What is "basic competence" once past "hello world"?
A lot of it is learning the engine api and how different parts interact. The basic competence level requires a pretty large body of knowledge. Can you, for example,
Extend a character class, give it a skeleton, an animation blueprint with some animation states that read the character state, hook it up to player inputs, make it do some sounds, and add a bit of flair to some actions it can take?
Can you build a basic ai behavior tree? Run some line traces to interact with colliders so your guy can interact with an object?
Shoot a gun and spawn some bullets that can damage another unit?
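For a concrete taste of the C++ side of "hook it up to player inputs": extending the character class looks roughly like this. A minimal sketch using the legacy input bindings; the "MoveForward" axis name is assumed to be defined in Project Settings > Input:

```cpp
// MyCharacter.h
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Character.h"
#include "MyCharacter.generated.h"

UCLASS()
class AMyCharacter : public ACharacter
{
    GENERATED_BODY()

public:
    virtual void SetupPlayerInputComponent(class UInputComponent* PlayerInputComponent) override;

private:
    void MoveForward(float Value);
};

// MyCharacter.cpp
#include "MyCharacter.h"
#include "Components/InputComponent.h"

void AMyCharacter::SetupPlayerInputComponent(UInputComponent* PlayerInputComponent)
{
    Super::SetupPlayerInputComponent(PlayerInputComponent);
    // Bind the "MoveForward" axis mapping to our handler.
    PlayerInputComponent->BindAxis("MoveForward", this, &AMyCharacter::MoveForward);
}

void AMyCharacter::MoveForward(float Value)
{
    if (Controller && Value != 0.0f)
    {
        // Push movement input along the actor's facing direction.
        AddMovementInput(GetActorForwardVector(), Value);
    }
}
```

That's the easy part; the skeleton, the animation blueprint, and the state plumbing on top are where the hours go.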
You absolutely need those basic skills to be in a position to produce anything in Unreal Engine, but it's the sheer amount of work that comes after, to produce anything of value, where the challenge is in my opinion. It almost certainly involves dedicated asset-generation capacity to generate enough novelty, and/or a polished, smooth mechanic that can handle all the edge cases thrown at it. And there's a reason the UE tools out of the box are so limited and frustrating to use: it's nearly impossible to work on anything without opening your wallet for productivity tools sold on its marketplace, and you'll still struggle with the workload. Especially since it requires you to become proficient at C++; Blueprint's value drops off outside animation rigging and VFX, both of which Unity's third-party marketplace tools have covered, rendering UE's initial user-friendliness nil. You're stuck with an enterprise tool marketed at and suited for large studios, when you thought you could make indie games faster than in Unity.
My project is about 5% C++ base classes and GAS setup and 95% Blueprint. I have no clue what you're talking about with productivity tools, but that sounds like nonsense. Unity doesn't even have working networked multiplayer built into it, last I checked.
Unity requires asset generation as well. Because... how could it not? That's not something an engine can do, short of bundling a 3D modeling tool into the editor for no reason when Blender is already great.
As someone who blindly learned C# and C++ for Unity and Unreal, I would say they're barely a lift compared to learning the engine API of either.
You keep making strange arguments in other comments about how Unity is more friendly to tiny studios for financial alignment reasons which is very odd to me. Unreal is a royalty on revenue. Unity is a cost per seat. The latter is obviously more costly to small studios with no initial revenue.
I jumped into game development earlier this year because I too feel that there are plenty of business use cases for a game engine to power business intelligence applications; especially in VR/AR. It was not only my first time doing 3D, but also my first time using a real IDE. Results have varied.
The first thing I'll tell you is that the tooling for building enterprise apps is woefully inadequate out of the box. The things that come easiest to you, specifically CRUD app building, are much more difficult in a game engine. There are no native UI elements, no theming in the way we've come to expect, and the workflows are all completely different. I was used to building database connected backends, and switching to a state machine model really threw me for a loop. If you have experience building native apps, you're probably going to have a much better time than I did.
As far as learning the interface, that's not so bad. Follow some of the tutorials and you'll be able to get up and running in no time. The rabbit hole for me was understanding WHY things were doing what they were doing, as opposed to figuring out how to do them. Except in those times when I COULDN'T do something natively that I expected to be able to do... which happens a lot.
My recommendation is to set up the game engine of your choice, learn how to build to multiple targets, and then craft something you've already done elsewhere to see how you like it. Once you get that going, you should have a cursory understanding that will allow you to flex some creative muscle.
It depends on what your game's core mechanics are and the myriad edge cases that arise as a result of those decisions. It can very well become exponential, and it's a big reason why indie games are simple; anything more complicated, emulating a studio production, would quickly balloon into an incredible amount of work, and the biggest risk is not knowing if it will amount to anything.
I learned the hard way just how difficult game development is and how much we take the human cost for granted. You really, really must love games to be involved with them. You cannot do it based on ROI calculations; it requires passion and patience.
UT4 is still a free game with some community content.
With no scripting support, adding content like vehicles is pretty much impossible; Blueprints are not sufficient for such a task. A few people implemented them, but that requires C++, and the game doesn't support downloadable C++ mods. The C++ mods would have needed to be incorporated into the native side of the game, but that never came. There is enough content there collectively for a new game, but it would need Epic to lead the way. It's a bit too late now for that.
I play UT4 and UT2004 and UT2004 has a bigger community today, 16 years later, than UT4 does.
Gotcha. I'm going to a LAN party this weekend and was wondering if it would be worth it for the gang to install... otherwise we'll end up playing a few hours of UT2k4 ONS-RedPlanet with the extended vehicles mod.
There is an Unreal-like game called Lyra included. It's pretty barebones as it's intended for developers rather than players. The gameplay loop is actually pretty tight and rewarding.
The matrix demo project requires 100GB of disk space!
While uninstalling games to free up space to download the project, I noticed that there was no correlation between a game's disk space usage and the amount of fun it provided.
With cloud gaming gaining in popularity at the same time asset detail/sizes are ballooning in size, it certainly feels like we're now ~1-2 generations away from games/experiences being so large most people won't bother installing them locally.
Sounds like a smaller drop than I would have expected tbh. Valley was pretty short. Is a large game of that detail really possible to ship with sane size requirements?
Yes, that's essentially what it is. It runs a preprocessing step on meshes to generate a hierarchical LOD structure, and then at runtime it uses a custom software rasterizer running on the GPU(!) to dynamically render whatever level of detail is appropriate. The term "virtualized" in the description indicates that chunks of geometry are fetched on demand (analogously to virtual memory), rather than loading the entire fully-detailed mesh up front.
The biggest limitation is that Nanite currently only supports rigid, non-deformable meshes, so it's mainly intended for scenery rather than characters.
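The cluster-selection test at the heart of that hierarchical LOD scheme fits in a few lines. A hypothetical sketch (not Epic's actual code): draw a cluster's level of detail only once its simplification error would project to less than about a pixel:

```cpp
#include <cmath>

// Hypothetical Nanite-style cluster: a chunk of triangles plus the
// world-space error its simplification introduced.
struct Cluster {
    float GeometricError;  // world-space error of this LOD level
};

// True if this cluster is detailed enough to draw at this distance:
// its error, projected onto the screen, stays under one pixel.
bool ErrorUnderOnePixel(const Cluster& c, float distToCamera,
                        float screenHeightPx, float verticalFovRad)
{
    // World units covered by one pixel at the cluster's distance.
    float worldPerPixel =
        2.0f * distToCamera * std::tan(verticalFovRad * 0.5f) / screenHeightPx;
    return c.GeometricError < worldPerPixel;
}
```

Walk the hierarchy from coarse to fine, stop at the first level that passes, and the rendered mesh stays near one triangle per pixel no matter how dense the source asset is.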
Note that you can use Nanite for almost all models in your scene, and if at some point a model needs to be deformed you can swap in a traditional mesh in its place to handle the deformations. That's how cars in the Matrix Awakens demo work; you can check that out if you enable the Nanite view and try to crash some cars.
Is Nanite "just" software? Was there something that prevented it being invented 10, 20 years ago? Is it that its choices of detail would not be as good as what artists had to manually choose? Or does it rely on modern hardware, and if so, what specifically?
We got much more programmable GPUs over the years, but the first consoles with GPUs that could conceivably run Nanite only came out in 2013. And Nanite really loves streaming lots of small data packages from the drive, for which an SSD is really instrumental. That's why Nanite wasn't feasible before (you don't make fundamental AAA engine tech that runs only on PC if you are not Chris Roberts).
Nanite is basically a really complex compute shader reimplementing a rasterizer, along with generating LODs only for what is being seen, at the appropriate level, no matter the zoom (instead of a fixed number of LODs).
So, Compute Shaders are relatively recent, anything DirectX 9 era and before is right out. Then, you still need a solid enough GPU to run that at interactive framerates, and fast storage to stream all that data in and out. And finally, there was simply no or very little research on the subject.
To be honest, I'm not sure. I watched the Nanite tech talk (https://www.youtube.com/watch?v=TMorJX3Nj6U) because I find graphics technology fascinating, but I don't have any experience actually using Unreal Engine.
My understanding is that the Nanite renderer completely bypasses the normal shader pipeline, and therefore deformation using vertex shaders isn't supported. This doesn't mean you can't have deformable meshes in your scene -- they just have to be rendered in a separate pass that doesn't use Nanite.
This is how I understand it as well. Currently, you can think about it like those old cartoons where parts of the background that were to be interacted with were a slightly different color to help the animators know what they had to work with.
Since we are so early in the stages of Nanite tech, I expect UE6 will tout some sort of deferred and active Nanite rendering to allow for handoff for deformation.
- Seam staggering between levels for smooth swaps
- Current hardware rasterizers chug on small polys, so they made a GPU software rasterizer (!)
- Actually it's a hybrid rasterizer, and gnarly features like derivatives are supported on both sides
- The GPU scheduler chugged, so they abused Z-test wave repacking
- Streaming compression
Not only is it automatic LOD, it also avoids multiple draw calls per pixel. If your geometry is so complex that multiple polygons would render to a single pixel (say, a very complex fractal viewed from far away), Nanite manages to crunch all those polys into one. Effectively, it makes rendering massive scenes take a much more constant number of draw calls.
I'm confused. I downloaded UE5, created a starter FPP project, built it, and ran it standalone on DX12. I switched to fullscreen and showed the fps counter (120 fps on a 120 Hz OLED TV), but the "game" doesn't feel like 120 fps; it feels like at most 60. Before I turned off motion blur it was really smudgy, and after I turned it off it's really choppy.
Can be a few things, but a lack of mouse input smoothing can be a culprit. If the viewport angle is "locked" to the mouse input ticks, it can display at effectively 60 Hz, where you'll get two identical frames per mouse movement. So 120 Hz of graphics, but 60 Hz of content.
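A minimal sketch of what such smoothing looks like (names hypothetical): raw mouse ticks only move a target angle, and the displayed angle eases toward it every rendered frame, so each frame gets a distinct camera pose even when tick and refresh rates don't line up:

```cpp
#include <cmath>

struct Camera { float Yaw = 0.0f, TargetYaw = 0.0f; };

// Called whenever the OS delivers a raw mouse delta (at any rate).
void OnRawMouseDelta(Camera& cam, float dx, float sensitivity) {
    cam.TargetYaw += dx * sensitivity;  // ticks only move the target
}

// Called once per rendered frame (e.g. 120 Hz).
void OnFrame(Camera& cam, float dt) {
    const float tau = 0.03f;            // smoothing time constant (assumed)
    float a = 1.0f - std::exp(-dt / tau);
    cam.Yaw += (cam.TargetYaw - cam.Yaw) * a;  // ease toward the target
}
```

The trade-off is a few milliseconds of added input latency, which is why engines usually expose smoothing as an option rather than forcing it on.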
- easy scripting to make live services easier to iterate
- easy assets cooking, so you don't have to spend too much time to make different kind of assets for different platforms
- collaboration, so your team can work on the same project with ease
- then all the other systems audio/input/networking/terrain stuff etc
Fortnite is available on every platform in the market (minus the web); their ability to add content and update it quickly, across all of their supported platforms at the same time, is a great showcase.
Just like Genshin Impact is a great showcase for Unity
While more of a tech demo than a game, there was "The Matrix Awakens: An Unreal Engine 5 Experience"[1] that came out in mid December. It's free on the latest gen consoles.
Because that was an alpha version of the engine, and it is far easier to ship a demo to three SKUs than to limitless permutations of PCs (and you shouldn't forget that most PCs won't be able to run this with adequate performance). The Matrix demo is available as a UE5 project right now, and you can download and build it (it weighs 90 gigs, though).
Brilliant! I've been waiting for them to release this together with the Matrix sample. It looks like they have done so but have stripped a lot of the Matrix-specific assets.
The Quixel integration is going to help devs quickly build out gorgeous concepts. It will also usher in a new era of asset flippers building garbage that also looks beautiful.
Now to download and wait out the hours-long build.
Now building module 524 of 3998...
Inevitably, the Windows build is a download via a provided launcher, while the Linux build means getting the sources from GitHub, searching forums for directions, and compiling for hours. Supposedly Nanite will work on Linux in the release version; it didn't in the pre-release.
Pretty much. It's not for everyone, but here's the nice thing about Unreal: if you pay for a plugin using PayPal and it turns out it's not as described, you can get your money back.
Good luck with that ever happening with Unity. I was promised a refund by Unity support for a totally non-responsive and outdated plugin that was on sale. They delayed and delayed until I could no longer get a refund.
Having said that, I do not recommend Unreal Engine for solo indie development, but Unity also feels weird, with confusion around which version you need to start with, DOTS, etc.
Unity seems like the one for indie developers, but it's increasingly a cluttered landscape with a lot of shoddy plugins.
I found Unity to be easier to start with but led to a lot of garbage.
Unreal has a steeper initial learning curve, but the system itself is more pleasant and the built-in components are much more helpful. I switched to Unreal after a couple of years of Unity and strongly encourage others to do the same.
Not very moddable at all.
I tried to go down that rabbit hole once, but I didn't find any guides on how to do it.
The best stuff I could find was for hacking it, not modding it.
Most modding communities tell you to stay away from it.