Blooddrunk game engine+editor {Looking for potential users!}
Actually, it is not. GI techniques are those which take the flow of light into account. Ambient occlusion takes into account only the geometry and the position of the camera, and it is used to simulate the occlusion of ambient light on the geometry due to folds, cavities, and such. It may look like a contradiction, but the point is that AO doesn't take the light itself into account, and thus it is not a GI technique.
Originally, AO was used to simulate dirt on surfaces. But now it is used as a fast GI approach, and to enhance the detail of the GI solutions.
"There is nothing truly useless, it always serves as a bad example". Arthur A. Schmitt
True. I retract my previous statement. GI techniques are those which take the flow of light into account.
(Though it could be argued that AO does take the flow of light into account (though not on a global scale, yeargh), or else one would have to agree that Whitted ray tracing isn't a GI technique either.)
I was thinking more along these lines: "used for X" -> "technique for X", sort of, as in "but now it is used as a fast GI approach, and to enhance the detail of the GI solutions".
I'll blame my lack of better judgement on not yet having regained consciousness after waking up x]
And we finally have the inferred renderer!!! (very WIP, no normals or steep parallax)
It's nice that you got an inferred renderer up and running, but from the research I did on it I can't really say that it is currently a better alternative to forward or deferred rendering.
It's nice that you can render your transparent geometry in a deferred fashion, but the DSF filtering needed to take everything into account is rather expensive if you want decent output (scaled-down G- and L-buffers can cause some really horrible artifacts).
It would take quite a few lights before an inferred renderer becomes more efficient than a deferred setup that renders transparent geometry in a forward fashion at similar output quality.
I'm going to wait and see whether this technique matures somewhat over time; it could be that some advancements are made which make it more attractive to implement.
And you're missing out on the most important aspects of my approach:
a) This is NOT inferred lighting, this is inferred rendering, so my G-buffer is 64 bits fatter.
b) I can generate the DSF IDs without modifying vertex data (gl_PrimitiveID or gl_VertexID); see the sketch after this list.
c) SSAO will be of superior quality/performance with the DSF.
d) The L-buffer is fatter, but it is split into a separate ambient buffer and a diffuse.rgb + specular.a buffer; this allows the SSAO to affect diffuse+ambient but not specular, and the SSGI can separate itself out to favour indirect specular and diffuse over plain ambient.
e) With my approach you should be able to balance fillrate against vertex throughput by inferred-lighting some objects and writing to only 2 G-buffer textures with a second geometry pass (grass), or writing to all 4 MRTs without a second geometry pass (skinned meshes, high-polycount models).
f) You can increase the inferred alpha polygon lighting quality by rendering it twice: once rendering all the solid pixels and a second time rendering the transparent pixels, so on a chain-link fence the quality only drops on the feathered outlines.
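To make (b) concrete, here is a minimal sketch of writing a DSF ID from gl_PrimitiveID in a GLSL fragment shader. The packing scheme and the names (uObjectID, outDSF, outAlbedo) are placeholders for illustration, not the engine's actual layout:

```glsl
// G-buffer pass sketch: pack a DSF ID from gl_PrimitiveID plus a
// per-draw-call object index, with no extra per-vertex data needed.
// uObjectID, the 1024-primitives-per-object split, and the output
// names are assumptions, not the real engine layout.
#version 150

uniform int uObjectID;   // assumed per-draw-call object index

out vec4 outAlbedo;      // MRT 0: material colour (placeholder)
out vec4 outDSF;         // MRT 1: packed object+primitive ID

void main()
{
    // Combine object and primitive IDs into one 16-bit value, then
    // split it across two 8-bit channels of an RGBA8 target.
    int id = (uObjectID * 1024 + gl_PrimitiveID) & 0xFFFF;
    outDSF = vec4(float(id & 0xFF) / 255.0,
                  float((id >> 8) & 0xFF) / 255.0,
                  0.0, 1.0);
    outAlbedo = vec4(1.0); // placeholder material output
}
```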
Re: Project {Consolidation} game engine+editor WILL ADOPT N0
I just semi-finished the DSF filter:
And I was amazed by my amazing invention of deferred wireframe (using gl_PrimitiveID as the vertex id).
I will post some pictures.
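Roughly, the idea as a full-screen GLSL pass: compare each pixel's packed primitive ID with its right and bottom neighbours, and a mismatch marks a triangle edge. The buffer name, the ID packing, and the edge colour are placeholders:

```glsl
// Deferred wireframe sketch: primitive-ID discontinuities between
// neighbouring pixels are drawn as edges. uDSFBuffer is assumed to
// hold the packed object+primitive ID in its rg channels.
#version 150

uniform sampler2D uDSFBuffer;
uniform vec2      uScreenSize;   // framebuffer resolution in pixels

out vec4 outColor;

void main()
{
    vec2 uv    = gl_FragCoord.xy / uScreenSize;
    vec2 texel = 1.0 / uScreenSize;

    vec2 id      = texture(uDSFBuffer, uv).rg;
    vec2 idRight = texture(uDSFBuffer, uv + vec2(texel.x, 0.0)).rg;
    vec2 idDown  = texture(uDSFBuffer, uv + vec2(0.0, texel.y)).rg;

    // Any ID change between neighbours means a primitive boundary.
    bool edge = any(notEqual(id, idRight)) || any(notEqual(id, idDown));
    outColor = edge ? vec4(0.0, 1.0, 0.0, 1.0)   // edge pixels in green
                    : vec4(0.0);                 // transparent elsewhere
}
```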
gl_VertexID doesn't work as a nice DSF value. I'm currently deliberating whether the more expensive normal.z component and depth comparison will pay off better as the DSF, or whether I should still use the object+primitive IDs, which would both fail in the case of discontinuities on normal maps (same polygon, same mesh). I think it will come down to SSAO blurring; whichever one is faster, I'll take.
The lighting quality is quite amazing anyway; I never thought one could upscale an 848x530 image to 1200x750 and not notice any aliasing at all!
I think I will lean towards the depth + normal component comparison (especially the perspective-warped depth), because I can store normals in 24 bits without shitty reconstruction.
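Roughly, that continuity test would look something like this as a fragment-shader helper; the buffer layout and the tolerances are placeholders, not final values:

```glsl
// DSF continuity test sketch: a low-res L-buffer sample belongs to the
// same surface as the full-res pixel if linear depth and normal.z agree.
// uDepthNormal's layout and both tolerances are assumptions.
uniform sampler2D uDepthNormal;  // assumed: r = linear depth, g = normal.z

bool sameSurface(vec2 uvA, vec2 uvB)
{
    vec2 a = texture(uDepthNormal, uvA).rg;
    vec2 b = texture(uDepthNormal, uvB).rg;
    // Scale the depth tolerance with distance, since perspective warps
    // depth precision towards the near plane.
    return abs(a.r - b.r) < 0.01 * a.r &&
           abs(a.g - b.g) < 0.1;
}
```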
This choice might make my DSF more expensive in terms of computation, but there will be fewer pixel reads.

I also have a VERY peculiar && evil method of DSF. When people think DSF, they think no if-statements and bilinear interpolation done by hand: for every pixel they do 4 reads for the DSF comparison, and then 4 reads of the L-buffer (or 8 in my case), which kinda kills performance (12 texture reads per pixel). So instead I use the god-given feature of hardware bilinear filtering to get the interpolation of the 4 surrounding L-buffer samples in a single read, and then subtract the bad pixels with the appropriate weights.

This works because roughly 90% of all pixels can use all 4 surrounding L-buffer pixels without a problem, so only 6 texture reads are performed (instead of 12 reads and 4 interpolation calculations). Then about 2% have one weight wrong (8 reads and 1 interpolation calculation), some 5% have 2 weights wrong (10 reads and 2 interpolations), and about 2% have 3 weights wrong (12 reads and 3 interpolations). Then you have the awful case of all 4 weights wrong (total DSF fail), which requires 14 reads and 4 interpolations plus special-case handling (although 2 of those texture reads can be avoided). So my DSF is slightly faster than you'd think.
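A sketch of that fast-path/slow-path idea, reusing the sameSurface test from above; the names and the renormalization guard are placeholders:

```glsl
// "Bilinear first, fix up afterwards" DSF sketch. Fast path: one
// hardware-filtered L-buffer read covers the ~90% case. Slow path:
// point-sample each offending corner and subtract it back out.
uniform sampler2D uLBuffer;      // low-res light buffer, bilinear filtering ON
uniform vec2      uLBufferSize;  // L-buffer resolution in texels

bool sameSurface(vec2 uvA, vec2 uvB);  // continuity test, as sketched above

vec3 sampleLight(vec2 uv)
{
    // Fast path: one filtered read = all 4 samples already interpolated.
    vec3 lit = texture(uLBuffer, uv).rgb;

    // Recompute the bilinear footprint and weights by hand, so any bad
    // corner's contribution can be subtracted out again.
    vec2 st    = uv * uLBufferSize - 0.5;
    vec2 f     = fract(st);
    vec2 base  = (floor(st) + 0.5) / uLBufferSize;  // top-left texel centre
    vec2 texel = 1.0 / uLBufferSize;

    float w[4] = float[4]((1.0 - f.x) * (1.0 - f.y),   // top-left
                          f.x * (1.0 - f.y),           // top-right
                          (1.0 - f.x) * f.y,           // bottom-left
                          f.x * f.y);                  // bottom-right
    vec2 offs[4] = vec2[4](vec2(0.0),          vec2(texel.x, 0.0),
                           vec2(0.0, texel.y), texel);

    float wSum = 1.0;
    for (int i = 0; i < 4; ++i) {
        vec2 corner = base + offs[i];
        if (!sameSurface(uv, corner)) {
            // Reading exactly at a texel centre makes the bilinear fetch
            // degenerate to a point sample of that one bad corner.
            lit  -= w[i] * texture(uLBuffer, corner).rgb;
            wSum -= w[i];
        }
    }
    // wSum near zero is the "all 4 weights wrong" total-fail case, which
    // needs special handling; the clamp just keeps the sketch defined.
    return lit / max(wSum, 1e-4);
}
```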
I'll get normal maps working, and I shall post a screenie for comparison with the badly aliased picture.