Unlimited Detail Wants To Kill 3D Cards (rockpapershotgun.com)
91 points by lobo-tuerto on March 10, 2010 | 64 comments



A big part of what video cards do is not focused on models. It's focused on lighting and shading. That's what gives games realism, something completely lacking in these demo scenes.

Make a tech demo that blows Crysis out of the water that runs in software on commodity hardware and we'll talk.

Hire some fucking artists if you need to. Saying "this is just programmer art" is a copout. It's like a slacker student who says "I could get straight As if I studied more and did my homework".

I'm not unwilling to entertain radical ideas but you need to show something more than flythroughs with lighting reminiscent of Quake II.


I agree. When I want to make a game look pretty, the things I increase are (roughly in this order): texture quality, lighting/shading quality, shadow detail, THEN model detail, and finally post-processing and special effects.

I find using high resolution textures has a much greater impact on quality than high polygon models, especially when we're already making models seem like they have higher detail using such tricks as normal mapping.

I also agree with the programmer art comment. The visuals in the video are TERRIBLE. If they want to prove that this is superior, then they need to make something that actually looks better than what they claim is inferior - in this case, that means the art must be on par with existing games.


I'm itching to find out what the modelling tools are like. At this point I doubt they would gel with many 3D artists. Programmer-designed interfaces for creative tools are generally a pretty ropey affair.

However, I'm certainly willing to attempt to do battle with the uncanny valley if they let me.


I bet they would work great. If you could tell a 3D artist that he can use as much detail as he wants for this game... well, you'll get what they show on ZBrush Central. All those models have hundreds of times more polygons than would ever be accepted in a real game.


I use ZBrush to create displacement maps using HD geometry, which doesn't use actual polygons in the classic sense. A lot of the ultra high poly models you see on the turntables and such use the same technique. You can easily make a mesh well over a billion polys that doesn't impact the system. http://www.pixologic.com/docs/index.php/HD_Geometry

I doubt they'd have anything anywhere near ZBrush's flexibility and depth.


You could totally take a displacement map and tessellate a model with it. In fact, that's what OpenGL 4 and DirectX 11 do in hardware.
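
Here's a minimal CPU-side sketch of the displacement half of that, for illustration (the tessellator's subdivision step is omitted, and the names are made up, not any real API):

    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Push each vertex out along its normal by the height sampled from a
    // displacement map (one height in [0,1] per vertex, scaled by `scale`).
    void displace(std::vector<Vec3>& positions,
                  const std::vector<Vec3>& normals,
                  const std::vector<float>& heights,
                  float scale)
    {
        for (std::size_t i = 0; i < positions.size(); ++i) {
            float h = heights[i] * scale;
            positions[i].x += normals[i].x * h;
            positions[i].y += normals[i].y * h;
            positions[i].z += normals[i].z * h;
        }
    }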


And while you're at it, get someone who can give a presentation.


The contacts page lists "Greg Douglas" as CTO - sounds like it might be this guy: http://www.doolwind.com/blog/game-developer-spotlight-greg-d...

Also, the same name appears against:

- A C++ R-Tree ( http://en.wikipedia.org/wiki/R-tree ) implementation: http://www.superliminal.com/sources/sources.htm (towards the end)

- Contributions to the GameMonkey script engine.

I think it is legit, but the other shoe will drop when we see which knob got turned to the max at the expense of the others. My initial guess covers pretty much what others have pointed out - materials, lighting, compression. The example scenes look instanced to all hell (i.e. the scene is a DAG).

Edit: Also see:

Bruce Dell's comments (and one from his dad!) in:

http://www.tkarena.com/Articles/tabid/59/ctl/ArticleView/mid...

Comments from a 'Greg' - I assume 'Greg Douglas' - who reports having seen the inner loop:

http://www.somedude.net/gamemonkey/forum/viewtopic.php?f=12&...


It sounds a little to me like the old "once the polys get small enough, everything's a particle system" approach. Sure, you save lots of GPU by essentially doing away with surfaces and textures (by making everything a floating colored dot), but you then have to contend with massive storage, manipulation, and filtering problems.

They might be at the stage where they say "we'll just make everything a point! this'll be cake! all our renderer will have to do is figure out which points to show." and not yet at the phase where they come to realize that a room full of monsters will require 100 gigs of RAM.
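
To put rough numbers on that (every figure below is an illustrative guess, nothing from the video):

    #include <cstdio>

    int main() {
        const double bytes_per_point = 16.0;  // xyz as 3 floats + RGBA color
        const double points_per_model = 1e9;  // "unlimited detail" scale
        const int models_in_room = 50;        // a room full of monsters

        double total = bytes_per_point * points_per_model * models_in_room;
        std::printf("%.0f GB, before instancing or compression\n",
                    total / 1e9);             // prints: 800 GB
        return 0;
    }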


Indeed. For all the mentions of the word "unlimited", never once was the word "storage" mentioned, or "cache" or "memory". How about the design toolchain? The only solution I can think of is to store all the assets as ... polygons. Unless there is toolchain support for a CSG/procedural approach.


Another thing I wonder about is their "search" system: what kind of indexing is required? How long does it take, and can you re-index on the fly? Their demos have a conspicuous lack of any kind of movement at all, much less dynamic geometry.

And what about shading? Shading typically requires surface normals, something that's not readily available from a mess of points in 3D.


Shading in particle systems (when it's addressed at all) usually involves making a topo-map-like structure out of the points and then creating virtual polys out of the contours. You can then apply shading to groups of points contained in the virtual polys based on those surface normals. It takes lots and lots of CPU. Decidedly not like the "it'll run on your cell phone without a GPU" hype presented here.
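
A minimal sketch of that kind of normal recovery, assuming the neighbors have already been found (say, via a k-d tree) and come consistently ordered around the point - an illustration, not anyone's actual method:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    }

    // Average the cross products of successive neighbor directions. Assumes
    // `neighbors` is wound consistently around `p`, or the terms cancel out.
    Vec3 estimate_normal(Vec3 p, const std::vector<Vec3>& neighbors)
    {
        Vec3 n{0, 0, 0};
        for (std::size_t i = 0; i + 1 < neighbors.size(); ++i) {
            Vec3 c = cross(sub(neighbors[i], p), sub(neighbors[i + 1], p));
            n.x += c.x; n.y += c.y; n.z += c.z;
        }
        float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
        if (len > 0) { n.x /= len; n.y /= len; n.z /= len; }
        return n;
    }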

My guess is that they created a small static particle system that looks like a 3D figure, "rotated" it by selectively displaying particles, and got all excited.

A classic case of why you need to be an expert in a field before trying to push its state of the art: it saves you the trouble of chasing a dead end that most people in the field already know is infeasible.


He did mention the word "compress" though. Once. That might be a key part of their technology, but to say that they glossed over it would be a gross understatement.


I could conceive of a fractal analog of the spray-paint tool.


I think he mentions around the 5-minute mark that this is not quantised (i.e. not voxel-style point clouds, which come up again in the comparison during the final minute).

Why is the move from a polygonised surface to a smoother one made with more polygons, rather than by mathematically defining curves, like polylines in 2D vector art? I know it costs processing, but surely current GPUs can manage completely smooth curves for some games (not FPSes, in other words).
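
For the record, this is all "mathematically defining curves" amounts to - a quadratic Bézier evaluated at any t is exactly smooth, with no polygon budget. A throwaway sketch:

    struct Vec2 { float x, y; };

    // B(t) = (1-t)^2*p0 + 2(1-t)t*p1 + t^2*p2, for t in [0,1].
    Vec2 quadratic_bezier(Vec2 p0, Vec2 p1, Vec2 p2, float t)
    {
        float u = 1.0f - t;
        return { u*u*p0.x + 2*u*t*p1.x + t*t*p2.x,
                 u*u*p0.y + 2*u*t*p1.y + t*t*p2.y };
    }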

What also interests me is that perfectly smooth line rendering appears actually less real than the voxel approach, and how that will affect the development of game physics.


I would imagine a fair amount needs to be dynamically generated and procedural, and perhaps with jitter etc. added to reduce the cookie-cutter effect.


If the Duke Nukem Forever team still existed I bet they'd be jumping on this technology right now.


I listened to their explanation and it sounds a bit shady. First they used that old salesman trick of using a British accent to appear smart, so that was already a red flag.

And then their explanation of the secret to their technology sounded rather ridiculous. They said that their technology was like the Google search engine, or like searching for the word "money" in an MS Word document (the latter being the lamest attempt at subliminal messaging you're likely to find outside of a political ad). Needless to say, that is a very silly explanation.

Of course a lot of graphics systems have methods where they determine what elements are supposed to be visible and render only those elements. But that is not something that will give you "unlimited detail".

So yeah .... shady.


Come on, there's a lot to be critical about in this video, but do you honestly think he faked a British accent to appear smart? There's a few million of them that talk like that, you know. And the word search for money being lame subliminal messaging? Really?


Maybe I am being unfair, but that accent seemed a bit fake.


Sounded a bit like Lloyd Grossman to me, with 'data' as 'darta'. From some digging around, I think these guys are in Australia. There are a couple of other vowel slips that would fit.


It sounded real to me (an Englishman). That didn't stop it sounding extremely annoying.


Well, then you don't know English.


I read this in a British accent.


If I understood correctly, it seems to resemble a search engine in that it takes all the dots that compose the 3D world and searches for the ones that need to be displayed to compose the 2D image on the screen at any given viewing angle?

If that is correct, then I think the search engine example is pretty good at explaining it.
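
One plausible reading, assuming the points live in a sparse octree - the video never says, so every name below is a guess for illustration: for each pixel, walk the ray's octants front-to-back and stop at the first occupied node that is roughly pixel-sized.

    #include <array>
    #include <cstdint>
    #include <memory>
    #include <optional>

    struct Node {
        std::uint32_t color = 0;                    // average color of contents
        bool occupied = false;
        std::array<std::unique_ptr<Node>, 8> kids;  // null child = empty octant

        bool leaf() const {
            for (const auto& k : kids) if (k) return false;
            return true;
        }
    };

    // `order` is the front-to-back octant visit order for this ray's
    // direction (fixed per direction-sign combination). Descend until a
    // node covers no more than about one pixel, then return its color.
    std::optional<std::uint32_t> trace(const Node& n, const int order[8],
                                       int depth, int pixel_depth)
    {
        if (!n.occupied) return std::nullopt;
        if (n.leaf() || depth == pixel_depth)
            return n.color;
        for (int i = 0; i < 8; ++i) {
            const Node* kid = n.kids[order[i]].get();
            if (kid)
                if (auto hit = trace(*kid, order, depth + 1, pixel_depth))
                    return hit;
        }
        return std::nullopt;
    }

The "search engine" flavor would then be the claim that this walk does work proportional to the pixels on screen, not to the points in the world.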


Graphics cards already have this functionality, it's nothing new.


Not arbitrarily. Make sure it's nothing new before you say it is. How complicated is the predicate they're searching?


I can say it is nothing new. I have written and seen many patents that describe systems that do that.

As far as the "predicate they are searching" goes, the video mentioned nothing about it other than that it is for things that are visible, and that is nothing new.


They could just be being glib. Clipping and LOD aren't the same thing as one another. Consider the 2-D case of sampling a line (uncountably many points) for a raster display. Reimagining our favorite line drawing algorithms as search problems requires a lot of work to figure out how to best search the space. I hope these guys are doing something like that.


The problem here is that we essentially have snake oil and vaporware. As a result, all the comments are angry, and quite empty too. It seems like their idea could be interesting, but they should really do a proper presentation and tech demo before claiming they've buried the traditional ways of rendering 3D graphics.


From a modeler's point of view, this type of technology would be perfect, essentially rendering most of the current workflow redundant, from battling with poly counts to playing tricks with displacement maps and baked occlusion. The model swapping he mentioned is pretty outdated; most engines use dynamic LOD, which automatically reduces a model's polycount depending on your view distance, also removing parts of a model that are under a certain size.
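
A minimal sketch of the distance-based part, with made-up thresholds rather than any particular engine's scheme:

    #include <cstddef>

    // Nearer camera -> lower index -> more detailed mesh.
    // `thresholds` holds n_lods-1 ascending distance cut-offs.
    std::size_t pick_lod(float view_distance,
                         const float* thresholds, std::size_t n_lods)
    {
        for (std::size_t i = 0; i + 1 < n_lods; ++i)
            if (view_distance < thresholds[i])
                return i;
        return n_lods - 1;  // coarsest mesh beyond the last cut-off
    }

    // Drop a sub-part once its projected size falls under about a pixel.
    bool cull_small_part(float part_radius, float view_distance,
                         float pixels_per_unit_at_1m)
    {
        return part_radius * pixels_per_unit_at_1m / view_distance < 1.0f;
    }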

I'm not sold yet; it's making some incredibly bold claims. I'm aware of point cloud data models from back when they were included in the 3DMark '01 benchmark (the rotating horse statue; the test was called point sprites http://www.ixbt.com/video/images/3dmark2001/gf3-sprites.jpg ), and found it an oddity at the time, not all that visually appealing and certainly not a believable modelling method. It's not raytracing, although it does bear a lot of similarities.

I'll be keeping a sceptical eye and an open mind on this one.

Anyway, here's their website http://unlimiteddetailtechnology.com/

Edit: Thinking about it, its biggest downfall will occur when physics are involved. For a static scene it's ideal, but getting the branches of the trees to blow in the wind, or achieving any kind of real-time environmental change beyond lighting, would likely cause some serious difficulties.


Isn't real time ray tracing more interesting?

http://blogs.intel.com/research/2007/10/real_time_raytracing...


Their "search" algorithm is ray tracing. It must be, to some extend. They claim they only process each pixel on the screen, so they must trace a ray and find out which voxel/point/whatever to draw. The "unlimited" part breaks down as soon as you do any sort of reflection, refraction, or diffuse lighting. You can't just trace one ray for each screen pixel; you need to handle branching and scattering and the number of rays to trace becomes exponential.



But what about model animation?

Very helpful reddit comment about this demo: http://www.reddit.com/r/gaming/comments/bbg9c/unlimited_deta...


It's probably a lot easier to render when it's the same model over and over.


Wouldn't take 100 gigs of RAM as previously stated... they could probably do the trick with RAID SSDs. I mean, people will pay $500 for a video card; recently Newegg had great 40GB SSDs available for $99. Three or four of those still cost less than a top-end video card, and the prices are only coming down. Plus, a lot of end users aren't doing much with their amazing multicore i-series processors these days... mostly going to waste in a lot of systems. This is vaporware until proven otherwise, but I look forward to finding out which one ;-D


I'll point out that there is no animation whatsoever in their demo.


I saw another video on their site where the grass seemed to move. It could of course be a side effect of the low quality video version.


Given we're on a collective patent kick at the moment, this is surely the perfect example of why we have software patents. If we assume this to be real, who here would like to have spent years working on this, only for ATI and NVIDIA to reap all the rewards?


Who wants to have spent years working on this only to find out that octrees, which are probably crucial to this technique, have already been patented several times over?

"Patent 4694404 covers the use of octrees to implement a nearer-object-first painting order. Patent 5123084 describes a similar nearest-first octree graphics method. Patent 5222201 also concerns octree graphics methods, and describes a heuristic for speeding up the conversion of objects into octree representations."

http://www.ics.uci.edu/~eppstein/gina/quadtree.html


That argument only works if you assume they can make their products without infringing on any of ATI's or NVIDIA's patents.


Or, far more likely, that ATI or NVIDIA license the IP, or buy the company to get the IP.

That doesn't happen without a patent, and that's what patents are for... To ensure that the big players can't simply steal the game-changing idea you've been working on.


ATI and Nvidia would still have to write code to make it work. That's the hard part, not coming up with the idea.


Are you serious? Figuring out the algorithm is absolutely the hard bit here. When was the last time you had trouble implementing an algorithm?

I feel like I'm feeding a troll here - I had to check your profile to be sure I wasn't. I think you're letting your dislike of patents warp your normally intelligent viewpoints.


It's said sometimes on HN that "actually implementing it is the real problem, not coming up with the idea". This is referring to startups; in this domain an idea like "let's do a site just like myspace but with feature X!" is worth nothing, but an actual product can be worth a lot.

I don't think it's trolling; he just translated this to a domain where it makes no sense, which is a good reminder that web startups are not representative of all programming/business/engineering problems. And that one must be careful not to use a phrase like a meme, without thinking about its implications.


I'm not going to comment on patents here, but you're absolutely wrong in assuming that implementing an optimized algorithm is trivial, especially in real-time graphics.

Sure, coding up B-trees from a textbook description is easy. But in a video game, the difference between 10fps and 30fps is the difference between unplayable and perfect, which means that your "by the book" implementation likely won't cut it. Video game developers spend months squeezing the last 1.1x improvements out of their inner loops, using clever bit-coding techniques, cache alignment, and often even hand-optimized assembler.


Getting the algorithm to run on an unreliable parallel computer (i.e. a video card) is pretty hard.


That may be true for your average webapp; perhaps even for a reddit or farmville, but it's not for an innovating algorithm.


Bullshit.


As impressive as this sounds, it's interesting that the graphics they display are Quake 2 generation...

Although they call it search technology, it sounds like a very efficient graphics codec - blending pixels, focusing on rendering the frame vs. the scene, etc.


"searching" a "point cloud" sounds like a winner, reminds me of Seadragon's claims about bandwidth/screen resolution, but in 3D (and complicated with a search problem with probably lots and lots of parameters)


How do you get specular lighting in a particle system? You need a surface normal, but there aren't surfaces.

Plus the unlimited thing bugs me. How does any algorithm that is O(N^0) work?


I knew something like this would come up from the day polygon-based 3D arrived. The nice graphical games we played suddenly turned into cold, rigid polygonic characters (e.g. the difference between Diablo I and Diablo II, or AOE versus Age of Kings, anyone? I can't forget the moment Griswold the Blacksmith came moving towards me as a polygon zombie in Diablo II, when he'd looked like a decent Scottish lad in Diablo I). While polygons do give that extra feeling of depth, individual objects look annoyingly geometric unless there's a swarm of very small polygons.

I think this will pick up if the right conditions get in place.


You have no idea what you're even talking about. Both Diablo 1 and 2 used pre-rendered sprites. Yes, technically they're "polygonic", but in the same sense that a Pixar movie is polygon-based: the amount of detail is only limited by the time they were willing to give their render farms. This page has a nice animated comparison between D1's and D2's sprites for Diablo himself; you'll see how small the difference actually is. http://diablo.wikia.com/wiki/Diablo If anything, the models they used for D2 were more detailed, better lit, and encoded with more colors.


I, for one, am impressed with the graphics seen here. Even if it turns out that issues such as animation and lighting dynamics are really limiting, the degree of artistic freedom this kind of technology already offers is really exciting. There will at least be a niche for these kinds of games, I think.


Sounds like a good implementation of Ray-Tracing. Lighting and animation are going to be major hurdles to overcome, followed by tools.


I agree that it sounds a fair amount like ray-tracing.

More specifically, the method of searching for color on a point-on-screen basis as opposed to a plethora of triangles overlaying each other sounds like ray-tracing in my mind. Though, even as I write this, I'm still second-guessing my position now that I've had more time to think about it.

Either way, what makes it impossible for them to use the current algorithm they're using to evaluate lighting on the point they discovered to be the one that's rendered, as well? ... <_< oh wait, that ~is~ raytracing, isn't it!

I wonder how they properly anti-alias it (and there would very likely be aliasing on those questionable/edge-of-object 'points')? Render a larger scene then scale and sharpen it, or something?

[edit: As a side note, why on earth would this affect the graphics card industry? The substantial math involved could likely be converted into matrix multiplications, and we already know graphics cards are stellar at that]


But if you listened to their 8-minute video, they said it was not ray tracing.


I think it sounds a lot like ray tracing:

"The Unlimited Detail engine works out which direction the camera is facing and then searches the data to find only the points it needs to put on the screen; it doesn't touch any unneeded points. All it wants is 1024x768 (if that is our resolution) points, one for each pixel of the screen."


Maybe just first bounce? Cast rays out from the lens (angled to handle FOV) and return the first point each ray hits.
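
A minimal sketch of that first-bounce-only idea; intersect_scene is a stub standing in for whatever spatial search the engine actually does:

    #include <cmath>
    #include <cstdint>
    #include <optional>

    struct Vec3 { float x, y, z; };

    // Stub: the engine's real point search would go here.
    std::optional<std::uint32_t> intersect_scene(Vec3 /*origin*/, Vec3 /*dir*/)
    {
        return std::nullopt;                      // empty scene in this sketch
    }

    // One ray per pixel through a virtual image plane; camera looks down -z.
    void render(std::uint32_t* frame, int w, int h, float fov_degrees)
    {
        float plane = std::tan(fov_degrees * 0.5f * 3.14159265f / 180.0f);
        float aspect = static_cast<float>(w) / h;
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                float px = (2.0f * (x + 0.5f) / w - 1.0f) * plane * aspect;
                float py = (1.0f - 2.0f * (y + 0.5f) / h) * plane;
                auto hit = intersect_scene({0, 0, 0}, {px, py, -1.0f});
                frame[y * w + x] = hit.value_or(0); // black background
            }
    }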


was coming here to say the same thing.

incidentally, nvidia is starting to get into raytracing - http://www.anandtech.com/video/showdoc.aspx?i=3721&p=6

[edit: although the comment says the video says that it's not... watching the video now. hmmm. looks like they're exploiting fractals in some way? they seem to have some scale-free way of encoding the data that they use to generate the image?]


The 8-minute demo has lighting and simple reflections.


Blast Processing!



