It seems a bit PR-ish, but it's very impressive nonetheless. Stick it out for some vaguely technical-sounding details at the end. http://www.youtube.com/watch?v=Q-ATtrImCx4&ttl=1
Maybe it actually works as they say it does... maybe.
If it does, there are plenty more possible issues with the system. Mainly, memory usage. At one point they say (IIRC) they have 8 billion voxels in a scene. If we assume that each voxel is located by three floats, that's a whopping 96 GB of memory to keep track of each of those.

What else is conspicuous in its absence? Any sort of real lighting support. I can only speculate what performance hit the system would take trying to run dynamic lighting, but I'm sure it would not be pretty. It basically ends up raycasting, which looks nice and all, but requires all sorts of processing power.

In short, I don't think it has a future, but it's rather intriguing.
The Tari wrote:
Maybe it actually works as they say it does... maybe.
If it does, there are plenty more possible issues with the system. Mainly, memory usage. At one point they say (IIRC) they have 8 billion voxels in a scene. If we assume that each voxel is located by three floats, that's a whopping 96 GB of memory to keep track of each of those.

128 GB, actually, once you throw in 32-bit color... though you're assuming they're careless enough not to reuse static model data across object instances. I agree, though, that this is obviously a storage-bound algorithm.
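The back-of-the-envelope math both posts are doing, assuming a flat array with no instancing (the 8-billion figure is from the video; the per-point layout is just the posters' assumption):

```python
# Naive per-voxel storage estimate for an 8-billion-point scene.
# Assumes 3 x 32-bit floats for position plus 4 bytes of RGBA color
# per point, with no instancing or compression.
VOXELS = 8_000_000_000

position_bytes = 3 * 4  # x, y, z as 32-bit floats
color_bytes = 4         # 32-bit RGBA

def to_gb(n_bytes):
    return n_bytes / 1_000_000_000

print(to_gb(VOXELS * position_bytes))                  # 96.0  (positions only)
print(to_gb(VOXELS * (position_bytes + color_bytes)))  # 128.0 (with color)
```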



The Tari wrote:
It basically ends up raycasting, which looks nice and all, but requires all sorts of processing power.

Single-pass ray casting seems like it would be a lot cheaper when you're being handed the end points on a silver platter instead of calculating them from a mathematical description of a surface. But you'll notice they also have some lighting effects with the water. And pixel shaders obviously aren't going to be any harder with this system.

[edit]
Someone pointed out on Digg that storing normal data with each point makes lighting much more feasible.
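A rough sketch of why stored normals help: with a normal attached to each point, basic diffuse (Lambertian) shading is just a dot product per point, with no surface reconstruction needed. This is a generic illustration, not anything Unlimited Detail has confirmed doing:

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def lambert(normal, light_dir):
    """Diffuse intensity from a stored per-point normal: max(0, n . l)."""
    n, l = normalize(normal), normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

# A point whose stored normal faces straight up, lit from directly above:
print(lambert((0, 1, 0), (0, 1, 0)))  # 1.0
# The same point lit edge-on gets no diffuse contribution:
print(lambert((0, 1, 0), (1, 0, 0)))  # 0.0
```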
Or alternatively, they just figure out scaling for each of the thousands of copies after they turn one animal into polygons. Notice all their scenes use greatly-repeated objects.
KermMartian wrote:
Or alternatively, they just figure out scaling for each of the thousands of copies after they turn one animal into polygons.
I like that idea.
Quote:
Notice all their scenes use greatly-repeated objects.
I noticed this as well. While the environments were densely populated with plants and buildings, they all seemed generic and repeated.
Browse their website gallery. Some of the scenes definitely seem to have a higher diversity of models.
elfprince13 wrote:
Browse their website gallery. Some of the scenes definitely seem to have a higher diversity of models.
Here's a shortcut to their site:

http://unlimiteddetailtechnology.com/description.html

Some of the terms they're throwing around sound like overblown jargon to me, but we'll see.
Some major problems:

1) Memory usage is going to be enormous, not only for the huge number of points but also for their search system. I am very curious how they are organizing their data for a fast 3D search.
2) Shadows are done by duplicating the points and placing the copy underneath the object, so this will have horrendous dynamic shadows (listen at 2:55).
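Their "search algorithm" is a black box, but the textbook way to organize point data for fast 3D lookup is an octree, the same structure behind sparse-voxel-octree renderers: each lookup descends one child per level instead of scanning every point. A generic sketch (nothing here is from Euclideon):

```python
class Octree:
    """Minimal point octree: each node splits its cube into 8 children."""

    def __init__(self, center, half, depth=0, max_depth=8):
        self.center, self.half = center, half
        self.depth, self.max_depth = depth, max_depth
        self.points, self.children = [], None

    def _child_index(self, p):
        # One bit per axis: which side of the node's center the point is on.
        cx, cy, cz = self.center
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > 8 and self.depth < self.max_depth:
                self._split()
            return
        self.children[self._child_index(p)].insert(p)

    def _split(self):
        h = self.half / 2
        cx, cy, cz = self.center
        self.children = [
            Octree((cx + (h if i & 1 else -h),
                    cy + (h if i & 2 else -h),
                    cz + (h if i & 4 else -h)),
                   h, self.depth + 1, self.max_depth)
            for i in range(8)
        ]
        pts, self.points = self.points, []
        for q in pts:
            self.children[self._child_index(q)].insert(q)

    def find(self, p):
        # Walk down one child per level -- O(depth), not O(point count).
        node = self
        while node.children is not None:
            node = node.children[node._child_index(p)]
        return p in node.points

tree = Octree(center=(0.0, 0.0, 0.0), half=16.0)
pts = [(x * 0.37, x % 5 - 2.0, x % 3 - 1.0) for x in range(40)]
for p in pts:
    tree.insert(p)
print(all(tree.find(p) for p in pts))  # True
```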

It is also very misguided: the number of polygons stopped having a significant quality impact years ago, and lighting plays a much larger role. This appears to completely lack soft shadows, and the lighting looks super basic. Global illumination? Doubt it. God rays? Doubt it. Ambient occlusion? Doubt it.

Look at this sample: http://www.youtube.com/watch?v=8bRkyG3R-eI
Polygons + advanced lighting == freaking awesome, as anyone who has seen a raytraced movie could tell you.

Also, simulating shapes with points has no inherent benefit over polygons. Some things in the real world are flat (glass windows, for example); those take 2 polygons or a billion points, so polygons easily win. Tessellation, meanwhile, eliminates the need for model swaps, allows super-high polygon counts on models with no increase in CPU or memory usage, and provides seamless LOD.

It does look like they just did tons and tons of instancing. I wonder how "unlimited" it really is. DirectX could also render 8 billion of those creatures, and it would do it a hell of a lot faster thanks to instancing support.
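To put hypothetical numbers on the instancing point: instancing stores one copy of the model plus a small per-instance transform, instead of duplicating the point data for every copy. All figures below are made up for illustration:

```python
# Rough memory comparison: N unique copies of a model vs. N instances.
# Hypothetical numbers: a 100k-point model at 16 bytes/point, and one
# 4x4 float transform (64 bytes) per instance.
MODEL_POINTS = 100_000
BYTES_PER_POINT = 16   # position + color
INSTANCES = 80_000     # creatures on screen
TRANSFORM_BYTES = 64   # one 4x4 float matrix per instance

naive = INSTANCES * MODEL_POINTS * BYTES_PER_POINT
instanced = MODEL_POINTS * BYTES_PER_POINT + INSTANCES * TRANSFORM_BYTES

print(naive // 10**9)      # 128  -- GB, storing every copy outright
print(instanced // 10**6)  # 6    -- MB, one model plus transforms
```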
To cut down on memory, I suppose that for simple things like, say, a brick wall, instead of having a tiny point for each freaking grain of sand, they could merely have a "brick texture algorithm" that approximates the nuances of the bricks' grain on the fly, rather than actually storing a billion points for a brick wall. One could use this method of creating and discarding points on the fly with most generic textures. So what you eventually wind up with is a polygon-based skeleton with on-the-fly point rendering to get the extra detail. But then again, that would use a good deal of processing power if not done very efficiently.
Pseudoprogrammer wrote:
To cut down on memory, I suppose that for simple things like, say, a brick wall, instead of having a tiny point for each freaking grain of sand, they could merely have a "brick texture algorithm" that approximates the nuances of the bricks' grain on the fly, rather than actually storing a billion points for a brick wall. One could use this method of creating and discarding points on the fly with most generic textures. So what you eventually wind up with is a polygon-based skeleton with on-the-fly point rendering to get the extra detail. But then again, that would use a good deal of processing power if not done very efficiently.


Doing that on the fly is extremely expensive. I doubt it would work anyway, as they claim to be doing a 3D search: you can't search for something that isn't there. The polygon skeleton with extra detail when you get close is exactly what DX11 does with tessellation.
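For what it's worth, the scheme being described (and criticized) amounts to deterministic procedural generation: seed a PRNG from a brick's grid position and regenerate the same detail points every frame instead of ever storing them, trading memory for per-frame compute. A hypothetical sketch, with all names and constants invented here:

```python
import random

def brick_points(brick_x, brick_y, count=50):
    """Deterministically regenerate surface-detail points for one brick.

    The seed is derived from the brick's grid position (classic spatial-hash
    constants), so the same brick always yields the same 'grain' points
    without any of them being stored.
    """
    rng = random.Random(brick_x * 73856093 ^ brick_y * 19349663)
    return [(brick_x + rng.random(),   # jittered point on the brick's face
             brick_y + rng.random(),
             rng.random() * 0.01)      # tiny depth variation for grain
            for _ in range(count)]

# The same brick regenerates identical detail points every call:
print(brick_points(3, 7) == brick_points(3, 7))  # True
# A neighboring brick gets different grain:
print(brick_points(3, 7) == brick_points(4, 7))  # False
```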
OpenGL has supported Tessellation for years from what I've heard, it's not unique to DX11. I'd love to see what hardware they're running their tests on and what requirements would be needed to run something like this. From the looks of it, it could be lag's best friend. They're claiming an awful lot; we're just going to have to wait and see what new information they provide.
swivelgames wrote:
OpenGL has supported Tessellation for years from what I've heard, it's not unique to DX11.


ATI has had a vendor extension to do tessellation up to 16 steps, it is not an OpenGL standard (Nvidia didn't support it, for example).

DX11 has required tessellation up to 64 steps.
Kllrnohj wrote:
swivelgames wrote:
OpenGL has supported Tessellation for years from what I've heard, it's not unique to DX11.


ATI has had a vendor extension to do tessellation up to 16 steps, it is not an OpenGL standard (Nvidia didn't support it, for example).

DX11 has required tessellation up to 64 steps.
The point was not to say it was an OpenGL standard, but to point out that DX11 was not the only one that supported it...

On the subject of lighting, I'd be interested in seeing how they work something like that up. Who knows if they'd be able to utilize some of the advancements made for the current standard. I wouldn't really know.

None of us will really know what stage it's at until they release more information. The fact is they had a goal and they reached it, and advanced lighting was not it. If they've really found an efficient way to process a 3D world using point cloud data, then they've made an exceptional advancement, regardless of whether it has sexy lighting yet, is flawless, becomes the new standard, or turns out to have much significance in today's technological world. If advancement in other areas is necessary, they're one step closer. Remembering that this type of system was almost literally laughed at originally (and apparently still is), this advancement proves it's not at all a lost cause.

What I'm saying is, instead of bashing it by over-analyzing, wait until there is solid evidence disproving what they claim. If what they claim is true (which is a big if), then yes, they have other areas to focus on (lighting/shadows, physics, etc.). But for this type of advancement, that's expected. All they have claimed is that they possess the algorithms necessary to efficiently and quickly render a 3D world using a point cloud system. Indeed, they don't have much evidence beyond claims and videos, but we'll see in the next 16 months whether they have a feasible method given the current standard of hardware.
swivelgames wrote:
None of us will really know what stage it's at until they release more information. The fact is they had a goal and they reached it, and advanced lighting was not it. If they've really found an efficient way to process a 3D world using point cloud data, then they've made an exceptional advancement, regardless of whether it has sexy lighting yet, is flawless, becomes the new standard, or turns out to have much significance in today's technological world. If advancement in other areas is necessary, they're one step closer. Remembering that this type of system was almost literally laughed at originally (and apparently still is), this advancement proves it's not at all a lost cause.


Except you can't just bolt on lighting. But that isn't even its biggest problem: more than likely this system absolutely sucks at animation. John Carmack has already discussed a hybrid voxel+polygon approach, using voxels for the static environment and polygons for anything animated or dynamic.

Quote:
What I'm saying is, instead of bashing it by over-analyzing, wait until there is solid evidence disproving what they claim. If what they claim is true (which is a big if), then yes, they have other areas to focus on (lighting/shadows, physics, etc.). But for this type of advancement, that's expected. All they have claimed is that they possess the algorithms necessary to efficiently and quickly render a 3D world using a point cloud system. Indeed, they don't have much evidence beyond claims and videos, but we'll see in the next 16 months whether they have a feasible method given the current standard of hardware.


But it doesn't matter if they can render a static, over-instanced world with point cloud data if the resources required are ridiculous and it can't be animated. This isn't a new idea; voxels have been around for 20 years, give or take, and rendering them efficiently isn't a problem either. Heck, Crysis used voxels for its terrain.
  