
It sounds a little to me like the old "once the polys get small enough, everything's a particle system" approach. Sure, you save a lot of GPU work by essentially doing away with surfaces and textures (by making everything a floating colored dot), but you then have to contend with massive storage, manipulation, and filtering problems.

They might be at the stage where they say "we'll just make everything a point! This'll be cake! All our renderer will have to do is figure out which points to show," and not yet at the phase where they come to realize that a room full of monsters will require 100 GB of RAM.
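
Back-of-the-envelope, with numbers I'm pulling out of thin air (the point density and bytes per point below are pure assumptions), the arithmetic gets scary fast:

    # Rough memory estimate for raw, unstructured point data (all numbers assumed).
    points_per_monster = 50_000_000    # assumed: very dense sampling of one character
    bytes_per_point = 16               # assumed: 3 x float32 position + 32-bit RGBA color
    monsters_in_room = 100

    total_bytes = points_per_monster * bytes_per_point * monsters_in_room
    print(f"{total_bytes / 2**30:.1f} GiB")   # ~74.5 GiB before any compression or instancing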




Indeed. For all the mentions of the word "unlimited", the words "storage", "cache", and "memory" never come up once. How about the design toolchain? The only solution I can think of is to store all the assets as ... polygons. Unless there is toolchain support for a CSG/procedural approach.


Another thing I wonder about is their "search" system: what kind of indexing is required? How long does it take to build, and can you re-index on the fly? Their demos have a conspicuous lack of any kind of movement at all, much less dynamic geometry.
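
For what it's worth, nothing in the video says what their index actually is; a sparse octree is the usual guess for point data, and rebuilding one is exactly the part that makes moving geometry painful. A toy sketch of that kind of structure (my assumption, not their system):

    # Minimal octree for point data (an assumed structure, not Euclideon's).
    # The cost of insert/rebuild is what makes dynamic geometry hard.
    class OctreeNode:
        def __init__(self, center, half_size):
            self.center = center          # (x, y, z) center of this cell
            self.half_size = half_size    # half the cell's edge length
            self.points = []              # points stored while this is a leaf
            self.children = None          # 8 children once subdivided

        def insert(self, p, max_points=8):
            if self.children is None:
                self.points.append(p)
                if len(self.points) > max_points and self.half_size > 1e-3:
                    self._subdivide(max_points)
                return
            self._child_for(p).insert(p, max_points)

        def _subdivide(self, max_points):
            h = self.half_size / 2
            cx, cy, cz = self.center
            self.children = [OctreeNode((cx + dx * h, cy + dy * h, cz + dz * h), h)
                             for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]
            old, self.points = self.points, []
            for q in old:
                self._child_for(q).insert(q, max_points)

        def _child_for(self, p):
            cx, cy, cz = self.center
            return self.children[(p[0] >= cx) * 4 + (p[1] >= cy) * 2 + (p[2] >= cz)]

    # root = OctreeNode(center=(0, 0, 0), half_size=100.0)
    # for p in scanned_points: root.insert(p)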

And what about shading? Shading typically requires surface normals, something that's not readily available from a mess of points in 3D.


Shading in particle systems (when it's addressed at all) usually involves making a topo-map-like structure out of the points and then creating virtual polys out of the contours. You can then apply shading to groups of points contained in the virtual polys based on those surface normals. It takes lots and lots of CPU. Decidedly not like the "it'll run on your cell phone without a GPU" hype presented here.
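
The other textbook route (not necessarily what I described above, just the standard one) is to estimate a normal per point from its local neighborhood: fit a plane to the k nearest neighbors and take the smallest-eigenvalue eigenvector of their covariance. Even this version burns plenty of CPU:

    import numpy as np

    def estimate_normals(points, k=16):
        """Per-point normals from PCA of the k-nearest-neighbor covariance.
        points: (n, 3) float array. Brute-force neighbor search for clarity,
        so O(n^2) and very CPU-heavy."""
        normals = np.empty_like(points)
        for i, p in enumerate(points):
            d2 = np.sum((points - p) ** 2, axis=1)        # squared distance to every point
            nbrs = points[np.argsort(d2)[:k]]             # k nearest neighbors (incl. p)
            cov = np.cov((nbrs - nbrs.mean(axis=0)).T)    # 3x3 neighborhood covariance
            eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
            normals[i] = eigvecs[:, 0]                    # smallest eigenvalue ~ plane normal
        return normals

    # normals = estimate_normals(np.random.rand(100_000, 3))  # toy point cloud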

My guess is that they created a small static particle system that looks like a 3D figure, "rotated" it by selectively displaying particles, and got all excited.

A classic case of why you need to be an expert in a field before trying to push its state of the art: it saves you the trouble of chasing a dead end that most people in the field already know is infeasible.


He did mention the word "compress" though. Once. That might be a key part of their technology, but to say that they glossed over it would be a gross understatement.


I could conceive of a fractal analog of the spray paint tool.


I think he mentions around the 5-minute mark that this is not quantised (not point clouds like voxels, which come up again in the comparison in the final minute).

Why is the move from a polygonised surface to a smoother surface made with more polygons, and not by mathematically defining curves, like polylines in 2D vector art? I know it costs processing, but surely current GPUs can manage completely smooth curves for some games (not FPSes, in other words).

What also interests me is that perfectly smooth line rendering actually appears less real than the voxel approach, and how that will affect the development of game physics.
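
On the curve question, the usual mathematical route is something like Bezier/spline evaluation on the fly instead of storing lots of tiny polygons. A toy example (the control points are made up):

    # Cubic Bezier evaluation -- closed form, works for 2D or 3D control points.
    def cubic_bezier(p0, p1, p2, p3, t):
        u = 1.0 - t
        return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                     for a, b, c, d in zip(p0, p1, p2, p3))

    # Tessellate as finely (or coarsely) as the view demands; control points are arbitrary.
    curve = [cubic_bezier((0, 0), (1, 2), (3, 2), (4, 0), i / 63) for i in range(64)]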


I would imagine a fair amount would need to be dynamically generated and procedural, perhaps with jitter etc. added to reduce the cookie-cutter effect.
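
Something roughly like this, I'd guess: one shared asset placed many times, with each instance's transform perturbed deterministically from its position so repeats don't look stamped out (the jitter amounts here are arbitrary):

    import math, random

    # One shared asset, many placements: derive a small deterministic variation per
    # instance from its position so repeated copies don't look identical.
    def place_instances(positions, jitter_pos=0.05, jitter_scale=0.1):
        instances = []
        for (x, y, z) in positions:
            rng = random.Random(hash((round(x, 3), round(y, 3), round(z, 3))))
            instances.append({
                "translate": (x + rng.uniform(-jitter_pos, jitter_pos),
                              y,
                              z + rng.uniform(-jitter_pos, jitter_pos)),
                "rotate_y": rng.uniform(0.0, 2.0 * math.pi),
                "scale": 1.0 + rng.uniform(-jitter_scale, jitter_scale),
            })
        return instances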




