Point cloud data
I just saw this video on youtube:
Unlimited Detail Technology
They are making quite nice graphics, apparently using only the CPU.
They say they are producing unlimited detail using unlimited point cloud data. I'm not sure I fully understand what that means, or how something can be unlimited inside a computer, but it looks good. And according to them it's the new poop.
So is this for real? Any of you know anything about this stuff?
The official site: http://unlimiteddetailtechnology.com/
point cloud = voxel (http://en.wikipedia.org/wiki/Voxel)
My company: http://www.kloena.com
My blog: http://www.zhieng.com
My co-working space: http://www.deskspace.info
This is rather weird IMO. Those "models" look nice, but when you take a close-up the surfaces look rugged, as if the "atoms" are too big. Wouldn't it be pointless to draw a simple cube of 8 vertices using millions of points if we don't need it to be curved? The grass surface looks like an orange peel... Everything looks very unreal even though it is detailed, and I doubt the water there is made of "atoms". I think the same scene built from triangles, with normal mapping and other shaders plus some optimization, would look even better and run at least 10x faster on their super-PC. Why aren't there many flat surfaces in the video? And how about an animated character? Each bone would be responsible for a few million points. Tada! C'mon, let's interpolate between two keyframes... Modified voxels.
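Just to put a number on that complaint, here is a minimal sketch (all names made up, nothing from the UD demo) of blending one bone's point set between two keyframes. Every point needs its own lerp every frame, so a few million points per bone means tens of millions of operations before any rendering even starts:

#include <cstddef>
#include <vector>

// Hypothetical point type; the real format used by Unlimited Detail is unknown.
struct Point3 { float x, y, z; };

// Linearly interpolate every point a bone owns between two keyframes.
// With millions of points per bone this loop runs millions of times per frame,
// before lighting or rasterisation even begin.
void blendKeyframes(const std::vector<Point3>& frameA,
                    const std::vector<Point3>& frameB,
                    float t,                          // 0..1 between the keyframes
                    std::vector<Point3>& out)
{
    out.resize(frameA.size());
    for (std::size_t i = 0; i < frameA.size(); ++i)
    {
        out[i].x = frameA[i].x + t * (frameB[i].x - frameA[i].x);
        out[i].y = frameA[i].y + t * (frameB[i].y - frameA[i].y);
        out[i].z = frameA[i].z + t * (frameB[i].z - frameA[i].z);
    }
}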
Unlimited detail? No, thanks, my PC's power is quite LIMITED! (coding on an AMD K7 at 1.14 GHz, 512 MB RAM, ATI 9550 128 MB)
"Although we walk on the ground and step in the mud... our dreams and endeavors reach the immense skies..."
I'm not sure how this will be efficient for artists doing modeling and animation with decent tools. There is always a reason why big companies reject their ideas.
My company: http://www.kloena.com
My blog: http://www.zhieng.com
My co-working space: http://www.deskspace.info
Sorry, but for me it just looks like an enormous amount of... propaganda
Okay, let's not be closed-minded. Something like this is also being investigated at my university: an analytical description of models intended to save bandwidth and processing power.
In theory, it can encode a shape's volume using formulas, so the renderer uses only the points it needs, vector intersections are faster, and you can have any degree of detail, because what is stored is not a pile of voxels or triangles but coefficients that can be stored hierarchically. They are compressed, can be transferred and transformed quickly, then decompressed and used, much like JPEG compression. Rendering is then a search for the points shown in each frame, because it uses a numerical algorithm which is, in essence, a search to fit a value to each pixel (see the rough sketch below).
Working from small amounts of information lets the rendering run faster, with lots of detail. At least, that is what is being done at the University of Granada, in Spain, together with other universities and colleges in Europe. I guess this unlimited detail technology is going to use something similar.
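For what it's worth, here is a very rough sketch of how I picture that kind of hierarchical scheme (all names are made up; this is not the Granada code and certainly not UD's): nodes carry coefficients, children refine their parent, and the renderer descends only as far as a node's screen coverage demands, so the per-pixel cost is roughly a tree search rather than a pass over every stored primitive.

// Hypothetical hierarchical node (names invented): a bounding cube plus the
// coefficients that approximate the surface inside it; children refine it.
struct Node
{
    float centre[3];
    float halfSize;
    float coeffs[4];      // e.g. a plane or quadric fit -- purely illustrative
    Node* child[8];       // octree-style refinement, null where there is none
};

// Descend only while the node still covers more than one pixel on screen.
// Rendering becomes a search for the node whose projected size matches the
// pixel, instead of a pass over every stored primitive.
const Node* refineForPixel(const Node* n, float projectedSize, float pixelSize)
{
    while (n && projectedSize > pixelSize)
    {
        const Node* next = 0;
        for (int i = 0; i < 8; ++i)
            if (n->child[i]) { next = n->child[i]; break; }  // real code would pick the child the ray hits
        if (!next)
            break;                    // leaf reached: this is the finest data stored
        n = next;
        projectedSize *= 0.5f;        // each level halves the node's on-screen footprint
    }
    return n;
}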
Tessellation does something quite similar IMO, and it is less complex. It uses the polygons and the textures to describe a mesh, which is processed on the fly through geometry shaders, and it can increase the detail on a screen-coverage basis. And it is ALREADY out.
"There is nothing truly useless, it always serves as a bad example". Arthur A. Schmitt
Well, just to clarify, voxels and points are not the same thing. As Wikipedia states, a voxel is a volumetric pixel, i.e. it has volume, it has dimensions. Points do not. That is why it is called "unlimited" detail: each object can be made up of a theoretically infinite number of points and still have all of them fit inside the geometry of the object, since points don't take up space.
I saw this the other day too. It looks pretty cool, but it doesn't seem like it will be of any practical use anytime soon. Here are my observations; I could be wrong on some of these:
First of all, notice he said he gets about 60 FPS max, and as far as I can tell they are running at 1024 x 768. Most people want to play their games at 1280 x 1024 or higher, and game performance is generally measured at 1920 x 1080 and up. Scaling that 60 FPS at 1024 x 768 by pixel count gives about 36 FPS at 1280 x 1024 and about 23 FPS at 1920 x 1080, and that is using his best numbers.
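Here is the back-of-envelope check behind those figures, assuming the renderer's cost is purely proportional to pixel count (which a "one search per pixel" approach should roughly be):

#include <cstdio>

int main()
{
    const double basePixels = 1024.0 * 768.0;  // resolution of the demo
    const double baseFps    = 60.0;            // his best quoted figure
    const double res[2][2]  = { {1280.0, 1024.0}, {1920.0, 1080.0} };

    for (int i = 0; i < 2; ++i)
    {
        // If cost scales linearly with pixel count, FPS scales with its inverse.
        const double fps = baseFps * basePixels / (res[i][0] * res[i][1]);
        std::printf("%.0f x %.0f -> about %.0f FPS\n", res[i][0], res[i][1], fps);
    }
    return 0;   // prints roughly 36 and 23 FPS, matching the estimates above
}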
Second of all, they are doing it in software now, when they could just wait and do it in a Compute Shader later. Compute Shader is one of the new shader stages added to DX11-class GPUs. It exists for general-purpose programming on the GPU, so you can take advantage of the GPU's throughput without having to go through the polygon renderer. That is especially useful for things like... search algorithms :p I know not everyone has DX11-class cards now, but give it a few years and more people will.
Third of all, if you look at the edges in the video, you can see they are getting some FOV distortion. They have got to fix that, and I figure that if you are rendering a single point at a time, fixing it is nontrivial, because you would have to run the correction per point.
Fourth, lighting and shading are, to my understanding, basically not possible. Since the world is comprised of an arbitrary cloud of points (or clouds, I guess), you can't do normal polygon-style lighting, because there is no real definition of a side or a face of an object. Instead, you have to do the calculations every frame, one point at a time. And since they are doing this in software, that is going to create a tremendous slowdown that I suspect will make it unplayable on even the beastliest machines.
Just for laughs, I ran the per-pixel demos of Irrlicht on the two software renderers on my Intel Core i7; you will not find many PCs with a more powerful processor than this (a few, though, I have the 920). The Irrlicht Software Renderer, of course, couldn't actually do the lighting or even display all the polygons properly, and I got about 110 FPS. Burning's Software Renderer managed 33 FPS and was able to do per-pixel lighting. And that was the much faster polygon rendering, with very simple geometry. I don't think they will be able to implement lighting and shading in their current model anytime soon. I guess they could take advantage of the 8 logical threads of an i7, but if you are going to force people to have an i7, why not force them to have DX11-class hardware instead? (Which, by the way, is not very expensive now that we have the 5670 and the like.) Forcing DX11-class hardware does not mean forcing DirectX, because OpenGL 4 is supposed to take advantage of the DX11-class hardware features. (Though I admit I haven't looked at exactly which ones OpenGL 4 uses yet.)
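To put something concrete behind that, here is roughly what even the cheapest shading means for a point renderer: a hypothetical Lambert (N dot L) term evaluated once per point, every frame, on the CPU. None of this is UD's actual code; it just shows there is no face or vertex to amortise the work over.

#include <cstddef>
#include <vector>

struct PointSample            // hypothetical per-point record, not UD's format
{
    float pos[3];
    float normal[3];          // would have to be stored or estimated per point
    float r, g, b;
};

// Simplest possible diffuse term, applied to every visible point each frame.
// With millions of points this alone dwarfs what a per-vertex pass over a
// triangle mesh costs in a software renderer.
void lambertLight(std::vector<PointSample>& pts, const float lightDir[3])
{
    for (std::size_t i = 0; i < pts.size(); ++i)
    {
        float ndl = pts[i].normal[0] * lightDir[0]
                  + pts[i].normal[1] * lightDir[1]
                  + pts[i].normal[2] * lightDir[2];
        if (ndl < 0.0f) ndl = 0.0f;   // clamp back-facing samples
        pts[i].r *= ndl;
        pts[i].g *= ndl;
        pts[i].b *= ndl;
    }
}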
Well, don't let the mistakes of UD put you off infinite point geometry (I don't know if that's the real term for it, but that's what I'm calling it). In fact, ever since 3D rendering started, the trend has been for polygons to get smaller and smaller, and eventually they should shrink down to single points anyway. UD is trying to beat the trend and get there sooner. You have to give them credit for trying, but they are still a ways away. Still, the fact that they can actually render infinite point geometry is impressive.
Hardware tessellation does not really relate to this. Tessellated polygons are still polygons. It is quicker to draw X points than X polygons, but you can represent an object with far fewer than X polygons, which is why polygon rendering is faster. If you used hardware tessellation to approach X polygons, it would become just as slow.
Also, point cloud rendering does not inherently make things look cartoony. If you want an example of how realistic an object that is made up of a lot of small things can be... look at real life :p When infinite point rendering is done properly, then each point can be analogous to an atom. That means that point cloud rendering is inherently more realistic than polygon rendering.
Like I said on the A7 forums: in the future it will be great, but for now it is already a RAM killer, and now think about animation on top of that. Also, if the search only covers what is visible, then there is no collision calculation for non-visible geometry, so that adds a lot more work as well. There are, of course, easy ways around the second problem (an invisible polygon collision shell), but animation would still be way more RAM-hungry.
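To put a rough number on the RAM problem (my assumptions, nothing official from UD): even a lean per-point record adds up fast, and animation means keeping extra keyframe copies resident or recomputing every point each frame.

#include <cstdio>

int main()
{
    // Assumed per-point storage: position (3 floats), normal (3 floats),
    // colour (4 bytes) -- a guess, the real UD format is not public.
    const double bytesPerPoint = 3 * 4 + 3 * 4 + 4;   // 28 bytes
    const double points        = 100.0e6;             // a modest 100 million points

    const double staticMiB   = points * bytesPerPoint / (1024.0 * 1024.0);
    const double animatedMiB = staticMiB * 2.0;       // e.g. two keyframes resident at once

    std::printf("static scene : about %.0f MiB\n", staticMiB);
    std::printf("with 2 frames: about %.0f MiB\n", animatedMiB);
    return 0;   // ~2.6 GiB static, ~5.2 GiB with two keyframes -- a "RAM killer" indeed
}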