Much of what makes this appealing has nothing to do with the render engine. What I saw was some really excellent character animation / motion capture. The rendering itself wasn't particularly jaw-dropping. And that's not a critique of Unity. Rather it's a critique of their chosen subject. Metal, walls, and artificial objects in general are all very easy to render convincingly. Show me some trees, grass, translucency, or volumetrics and then I'll be impressed.
Even then, I'm unconvinced that the ability to do realistic graphics in real time will actually manifest itself in videogames/VR experiences we can interact with. So much of what makes this look good is the excellent art direction, and as you stated, excellent textures and animations of an artificial thing. But animation students have been producing similar quality visuals for their show-reels for ages. All that has changed is the ability to render faster, but the creation of this content is still being done by the same processes.
For videogames/VR to break the photorealistic barrier, there needs to be some order-of-magnitude reduction in art development costs for these experiences to be affordable. Not all videogames can be Star Wars Battlefront, where probably $150M ($330M overall cost to develop and market [1]) was spent making the best damn textures videogames have ever seen, but produced a simple, limited game.
Photorealism is a dead end for videogames unless art costs come down.
I'm curious how you manage noise in low-light situations. The one video I found had significant compression artifacts so I wasn't able to tell what the actual engine looks like.
It's pretty but the demo doesn't have a hint of motion in it, and such organic objects as there are (plant leaves) are very plasticky. The HDR bloom is lovely, but presumably being faked just as it would be with any other render pipeline.
> But animation students have been producing similar quality visuals for their show-reels for ages. All that has changed is the ability to render faster, but the creation of this content is still being done by the same processes.
Have you seen modern texturing pipelines? It's most certainly not the same process as 10 years ago. Have a look at this procedural wood flooring generator, built in Substance Designer's node-based texture workflow: https://www.youtube.com/watch?v=Zc5Pdcbjr0U
Same for animation. Doing it by hand won't get you the quality of results that you need for photorealism, so we're bringing in new processes that require a bunch of equipment, a bunch of time, and probably multiple crew members. Used to be you'd make a walk cycle by hand, now it's mocap. With the same old processes, there's no way Star Wars: Battlefront would have been able to make its content in the volume it needed, even with a large art budget.
Now that we know how to create these assets on at least a somewhat feasible budget (compared to armies of artists manually doing 8K textures in Photoshop), the next step will be taking this high-end work intensive stuff and bringing it down to the point where we can crank out similar quality results with lower time investment. It'll take a couple more years, but there's definitely a demand for it.
Yes, the tools have gotten much much better, that is true. But it still costs an incredible amount of time to generate the art assets, it's a huge part of the budget.
I have not. That's super neat and I'm glad people are making progress! Is there anything happening on the animation side which will push past mocapped movements?
A couple of really cool things about the Substance texturing approach:
Feed it a 3D model and generate maps of its inside corners and outside edges, which can be used to mask layers for dirt accumulation and edge wear https://www.youtube.com/watch?v=zTYia53801k
No reference handy, but you can expose parameters like "How much rust" or "color of scattered accent bricks" and make the textures be configurable in-engine. Really streamlines the process if the person building out a level in the game can adjust those sorts of features without going back to the texture artist every time.
EDIT: Here's a cool one http://polycount.com/discussion/comment/2384431/#Comment_238... You can feed it an arbitrary mask to control the color pattern. Note how it's not just painting the color over the bricks, it actually makes brick seams following the edges. And if I had to guess, the dirt/sand accumulation is exposed as a configurable parameter.
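The "exposed parameters" idea is easy to sketch. This is a toy illustration only, not Substance's actual internals; the `rust_amount` parameter and the noise-threshold approach are made up for the example:

```python
import random

def rust_mask(width, height, rust_amount, seed=0):
    """Toy mask generator: per-pixel value noise thresholded by a single
    artist-facing 'rust_amount' parameter in [0, 1]."""
    rng = random.Random(seed)
    noise = [[rng.random() for _ in range(width)] for _ in range(height)]
    # A pixel "rusts" when its noise value falls below the exposed parameter,
    # so rust_amount=0 gives a clean surface and 1.0 gives full coverage.
    return [[1.0 if noise[y][x] < rust_amount else 0.0
             for x in range(width)]
            for y in range(height)]

mask = rust_mask(64, 64, rust_amount=0.3)
coverage = sum(map(sum, mask)) / (64 * 64)
print(f"rust coverage: {coverage:.2f}")  # ~0.30 for rust_amount=0.3
```

The point of the node-based workflow is exactly this shape: the expensive authoring happens once, and the level designer only ever touches the one exposed knob.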
That sounds like the step forward these things need. VR to really get a good look at your stuff, and the ability to mocap while sitting at your PC. Cool stuff.
Both the topic ('more spent on marketing than production') and the first post ('numbers can't be trusted because of creative accounting practices') are relevant here. I wouldn't be surprised if this applied to game studios, too.
Anyway, you're essentially stating two things. One is that art students have been able to produce this level of quality for their show-reels for ages. If students can do it, it can't be all that expensive. So why did we not see it in games before, but only in show-reels? Not because these students or studios couldn't translate a show-reel into an interactive show-reel (i.e., a game). More likely because there was no market for it: nobody had the money for the hardware to run these things in real time. Now we're seeing this thing render in real time on consumer hardware that might in 5 years be affordable to mainstream audiences.
Seems to me the biggest barrier was that there was no market because there was no cheap hardware that could run high-quality graphics in real time. Of course the cost of art is a factor, a big one, but I don't think it's been the primary limiting factor.
Lastly, the cost of art at any given quality level has come down in a big way; it's just offset by increasing demands. Try to buy some assets of 2011-level quality: they're cheap, while 5 years ago you'd have had to hire a big team to deliver that. Once you approach realism, quality improvements diminish and you get a build-up of cheap assets (textures, template models, etc.) that can be tweaked, reused, and bought cheaply. Assets from 5-10 years ago are cheap commodities already; tomorrow's assets are expensive, but at some point there's a limit to quality improvements and, just like every other industry (e.g. smartphones in 2016), you see commoditisation and costs come down.
A two minute film is not a videogame. A show reel requires one good animation to work in one specific situation. A videogame needs an animation that works generally. Also, making a good animation is hard, and artists deserve to be paid. Think of movies, where there are still movies today which have unrealistic CG, despite computation being far from a bottleneck for a feature film (Legolas on the elephant in LOTR: Two Towers) - it's not the technical challenges, it's the artistic challenges.
Right now, the vast majority of games have characters with fixed walk cycles that are used no matter the terrain. Realistic CG needs a walk cycle that captures the subtle changes in gait that correspond to a given surface. I know people have been doing research on adaptive walk cycles, but AFAIK it has yet to hit production games.
For generating art, there is hope, as procedurally generated games look fantastic (No Man's Sky), but have yet to expand beyond sci-fi games, or into games with story and specific art styles.
Maybe reusing assets is the way forward, but I'm skeptical. Reusing the visuals just means more army guys fighting in sandy deserts crouching behind crates. Maybe the Storm Trooper models for battlefront really are as good as they get. But even then, I think aesthetically realism can lead to a dead end. The most visually impressive game I've played, other than Star Wars Battlefront is The Witness, which is simple in the CG sense, but has some really tremendous visual aesthetic moments that are something I've never seen done in a game. For me, the stylized look and the artistic/game opportunities that enabled were far more exciting than Battlefront's perfect rocks.
> still movies today which have unrealistic CG, despite computation being far from a bottleneck for a feature film (Legolas on the elephant in LOTR: Two Towers)
One point to note - LOTR Two Towers was released in 2002. 14 years ago.
I'm claiming that at this level of hardware power, the hardware isn't the limiting factor in realism; the animations are.
Realistic visuals need realistic walking. But real walking isn't a perfectly stable cycle (it's disturbed by head/hand motions, surface variations, etc.), so there can be no realistic walk cycle. AFAIK current videogame animation all revolves around walk cycles. Realism at this level of detail needs some new form of animation, not just the love and care of a person that is so present in this demo.
Regarding walk cycles on terrain, inverse kinematics is seeing good use here. For modern games, the technology that solves this issue is readily available and generally robust.
I wouldn't describe the usage as trivial, but it's in line with other toolsets that deal with different issues.
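For the curious, the core of the foot-placement trick is analytic two-bone IK, which is just the law of cosines. A minimal planar sketch (hip at the origin, segment lengths l1 and l2; real engines add joint limits, pole vectors, and a third dimension):

```python
import math

def two_bone_ik(tx, ty, l1, l2):
    """Return (hip, knee) angles in radians placing the foot of a planar
    two-segment leg at (tx, ty), with the hip fixed at the origin."""
    d = min(math.hypot(tx, ty), l1 + l2)  # clamp unreachable targets
    clamp = lambda c: max(-1.0, min(1.0, c))
    # Law of cosines: how far the knee must bend away from straight.
    knee = math.acos(clamp((d * d - l1 * l1 - l2 * l2) / (2 * l1 * l2)))
    # Aim the thigh at the target, then rotate back by the knee's offset.
    hip = math.atan2(ty, tx) - math.acos(clamp((l1 * l1 + d * d - l2 * l2) / (2 * l1 * d)))
    return hip, knee

hip, knee = two_bone_ik(1.0, 1.0, l1=1.0, l2=1.0)
print(round(math.degrees(knee)))  # 90: the knee bends to reach a close target
```

The runtime then just raycasts the ground under each foot and feeds the hit point in as the IK target, which is why uneven terrain no longer breaks a canned walk cycle.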
>If students can do it, it can't be all that expensive.
That's not really true at all. It comes down to man-hours and how much time it takes the artist to do the work. A student isn't incurring that cost when they do work for themselves, but a company would be.
A valid point, but even realistic metal isn't necessarily easy. The volumetrically diffuse reflections (that actually get sharper as geometry approaches them) you see of the robot prisoner's arms on some of the walls haven't been possible in real time until only very recently, and are clearly a new feature Unity is showing off here.
What's most interesting to me, though, is the Unity logo at the end, which I assume must be showing off engine capabilities as well. It makes heavy use of sub-surface scattering, which is of critical importance to lifelike skin and faces.
I'd much prefer if more people tried to improve animation, because state of the art motion captured movements still look like crap in modern AAA games. Graphics is good enough for now.
Matrix Reloaded was 13 years ago. That's a bit unfair. You can still tell, I think, especially if you know where to look, but I don't think most people would notice, in Fast and the Furious 7, for example.
Terminator 2 used far, far more practical effects than one is likely to assume these days. The iconic shot of the T-1000 blown in half and sewing itself back up? Practical effect. Really amazing stuff overall. https://www.youtube.com/watch?v=EYQMfT6nsQs
Honestly the most impressive part to me was being able to convey a story of "human somehow put into a machine" pretty much only through physical acting. That's not something you see every day in video games.
Absolutely this. It fully conveyed real emotion with a sense of panic.. then seeing the different behaviors in the crowd too. The guards. The whole thing tells a story. I hope it's significantly more than just a tech demo. I'm hooked.
And I suppose it's worth pointing out that creating all the tooling to put this together is just as impressive as a lot of the more obvious aspects. It's a giant content creation pipeline coming together. I expect these demos from Epic, not Unity. They're stepping up.
You may want to play SOMA then. The whole game deals with this very subject and its implications. I can highly recommend it for the excellent storytelling, dense atmosphere and brilliant sound design.
That was what mainly put me off, it makes no sense. First the robot starts off breathing like he'd been under water for 3 minutes and came up for oxygen, what? A robot running on oxygen? Then he breathes and moans, what? Vocal cords on a moaning robot? Stumbling because his robot muscles haven't been used in a long time? It made no sense, they took a human's motion, behavior, appearance and sound, and then just exchanged flesh for metal, which makes no sense to me. And then you say that it's only the acting that made him human, come on, everything except his skin made him appear human!
I mean, anyone can come up with some convoluted ideas that explain why the above does make sense... like the robot breathes because there's an organic brain in there that needs oxygen, they moan because it's still a human brain and the machine is sending his brain an overload of sensory data that is hard to deal with, the robot stumbles because his brain is new to interfacing with its machine parts, etc. But I personally didn't like it and kinda roll my eyes when they go overboard with the anthropomorphism. Still, it was hella cool; hope the full version will explain away my doubts neatly :)
I assumed it was all in his 'mind'. These didn't act like AI robots; it seemed clear from some of the exposition that there's a mind in there that thinks it is human. Notice the different choices different robots took - one ripped the covering off its arm, and in the big crowd you can see different robots reacting differently - one pushes another out of the way, for instance. I'm pretty sure there are reasons for how they acted.
Put me in a vat, I'm not going to make moaning or breathing sounds in my mind. And it seems like a pretty silly engineer who'd simulate such a thing. That's really my point. If it was a cartoon I couldn't care less, but I like my sci-fi to approach some level of realism. (And I'm really not the type of guy who complains about sound in space; it's about immersion for me, and a breathing and moaning robot kills it for me.)
Amputees still 'feel' the missing limb months and years later, so I find it plausible you would still 'feel' reactions in your body even if you no longer had one. I didn't take the sounds to be literal, just an artistic technique.
To be clear though, I think your opinion on this is every bit as valid as mine, I just really enjoy discussing film.
Because they replicated a function of humans that they wanted intact, like speaking, which requires breath. Nice demo; as always, story is king. It gripped me :)
I want to watch this movie/or someone play this game for a few hours. I can assume that Adam is a human who was put into a robot for some reason. There is some potential there, like Chappie re-imagined.
It looks like they are/were all prisoners based on the word 'Felon' on his name plate as well as the orange pants. Definitely will be an interesting story (maybe repurposing felons as robots to test stuff).
If you haven't already, you should watch/play SOMA. It is a masterful exploration of the concept of the ego and self, and the consequences of attempting to shift that into a robot (robots).
I am sure we will be able to clone/copy/emulate brains into machines someday. It will change the world in so many ways. We would be immortal, and that will change our priorities so much. People living in virtual worlds. The future would be so cool.
There have been a couple of games with atrocious performance on PS4 including Broforce and Firewatch—and Broforce is a 2D sidescroller(!)
I don’t know whether Unity is innately bad, or whether frameworks in general just tend to enable bad code.
Would love to hear more from people who know more; right now I associate Unity with people starting out in games rather than a platform people continue to use after they hone their skills.
Unity's massive problem is that it's still using an ancient version of Mono, with awful garbage collection and other performance issues.
In most other respects, it's an acceptable game engine. It's been used for a ton of big professional games you've probably played without even being aware.
Not only that, Unity games also have a very inefficient game loop, which incurs a LOT of latency on input. You don't notice it much on consoles, but on the PC, at 60 FPS, with a mouse, it's super noticeable.
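For a sense of scale, here's a back-of-envelope model of worst-case input-to-photon latency for a loop that polls input once at frame start. The 2-frame driver buffer is an assumption for illustration, not a measured Unity figure:

```python
# Worst-case input latency at 60 FPS for a naive loop that samples input
# at frame start, with the driver buffering GPU work before display.
FRAME_MS = 1000.0 / 60.0  # ~16.7 ms per frame

def worst_case_latency_ms(buffered_frames):
    wait_for_sample = FRAME_MS           # input arrives just after this frame's poll
    simulate_and_render = FRAME_MS       # one full frame to simulate and render
    queue = buffered_frames * FRAME_MS   # frames sitting in the driver's queue
    return wait_for_sample + simulate_and_render + queue

print(f"{worst_case_latency_ms(2):.1f} ms")  # 66.7 ms before the pixel moves
```

Sampling input as late as possible (right before simulation) and keeping the driver queue short are the usual fixes; the model above just shows why a sloppy loop compounds so quickly at 60 FPS.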
Honestly, you can make a horribly inefficient game using any engine. Unity just has a lower barrier to entry, as it's very hobbyist-friendly (as you rightly point out), so there's more games being made overall with it, more games made by people with little experience in performance optimization.
I don't see this as a bad thing, either. The tradeoff, from the hobbyist's point of view, is that if you decide to use Unreal Engine, for example, your cost of learning the tech is so high you're unlikely to ever finish your game.
Source: I'm a (professional) game developer and we've used Unity for multiple projects.
You can write crappy games in assembler if you like. There are plenty of horribly performing Unity games simply because it sets a lower bar for developers. Also, rather ironically, developers using the less expensive license are forced to show the Unity logo at launch which tends to associate Unity with low-budget games.
There are plenty of awesome games done in Unity which you probably didn't realize were done with Unity.
Except that was in reference to the demo they gave at GDC while using the editor, not necessarily the card used to render the demo. Someone should ask.
I went from "Whoa, this is astounding for WebGL" in the first scene to "Ok, this probably isn't actually rendering on the client" as I saw some of the diffuse reflection shaders in the escape to "I feel kind of silly now" once all the other robots appeared.
The YouTube chrome is hidden, but the video must be in 4K; it looked like native resolution on my Retina MacBook Pro.
If you like the philosophy of this, I heartily recommend you check out the first season of Ghost in the Shell: Stand Alone Complex. The second season is so-so, but the first season is amazing and jam-packed with cyberpunk and philosophical challenges like this one.
They’re also doing a (whitewashed) live-action version of some amalgamation of the movie and TV show, so might as well watch the original (1.0) movie and anime, before Hollywood ruins them for you.
Well isn't that kinda what Unity stands for, running things on multiple platforms, including in your browser? So if you want to showcase the engine, showcase the engine.
But anyway, I don't want to be overly critical; after all, this is probably just an internal demo they thought would be cool to share with the public.
I was confused for a bit, but it's a YouTube video, not running in-engine... you can understand how I'd be confused, as Unity has a web player.
Edit: Apparently they don't really have the web player anymore. Still confusing. Was actually hoping that for once I'd be proved wrong about WebGL / Emscripten.
It is still possible to export to e.g. WebGL, but the features required for something like this are not available in the browser. Also, they were running on 3x Titan X, which doesn't exactly have widespread adoption :)
Great demo by the Unity folks. Hopefully small studios can harness that power just as well as the people who make the engine. They usually know how to cheat it best.
I imagine this is a recording of the demo which was streamed in real time, as opposed to being rendered (a process which can take significant time per frame for scenes this complex).
That was my initial reaction when they said rendered in real time, but I'm guessing it meant low latency (real time) while waiting for frames to render during processing.
As opposed to a recording of a 1 minute video that took a few hours to render on industry hardware.
They ran it on a pretty beefy consumer PC that would cost thousands of dollars, definitely not feasible for ordinary gamers. Point is, it ran real-time on consumer hardware. It's sort of a showcase for what might become mainstream in the future, and what is already possible if you're the type of consumer who buys graphics cards that cost $1k for gaming.
Wait, what? It says rendered in real time. You mean there was an actor with dots on him/her and that film was playing on a monitor next to them so the director could see the cables popping off?
I find it particularly clever. What they are telling us is that this is a body for a human to occupy. The sensation and sound of breathing are simulated for him.
He's still in a bit of shock from waking up not in his own body. I guess the pulling was an instinctive reaction upon seeing it and finding a (painless) hole in the side of his face.
I thought that also, but look at his body: it's machine to the bone, so I was wondering where the human would fit. Maybe he just "thinks" that he is breathing because he was a human and has woken up like this.
What's that with the downvotes? A robot doesn't need oxygen to fuel chemical reactions.
You're referring to ray tracing or some other physically based rendering approach.
This is still polygonal rendering, but by real time they mean that it was rendered 1:1 live on a GPU.
Most game trailers actually render at some fraction of real time (12:1) (and often up-res everything) to a series of files which are turned into the actual trailer.
Also, this is running on a pretty beastly machine. So real time for them might be a 12:1 time ratio on your machine at the same quality, or much lower res and real time.
No. I meant high-fidelity rendering as was shown in the post. Of course Moore's Law will soon give us high-fidelity ray tracing in real time also. The new hardware implementations are a step in that direction.
As for "running on a pretty beastly machine", that's my point - Moore's Law says this will soon run on your phone.
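That timeline is easy to sanity-check: with a doubling every ~2 years, the years needed to close a performance gap is just a log. Both the doubling period and the 50x gap used below are loose guesses for illustration, not measurements:

```python
import math

def years_to_close_gap(performance_gap, doubling_period_years=2.0):
    """Years until a doubling-every-N-years trend closes a performance gap."""
    return doubling_period_years * math.log2(performance_gap)

# Guessing 3x Titan X is roughly 50x a 2016 phone GPU (illustrative only):
print(f"{years_to_close_gap(50):.1f} years")  # 11.3 years at a 2-year doubling
```

Whether "soon" is the right word depends entirely on those two inputs; halve the doubling period or shrink the gap and the answer changes proportionally.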