
Posted: Tue Jan 25, 2011 9:56 am
by Mel
Actually, it is not. GI techniques are those which take into account the flow of light. Ambient occlusion takes into account only the geometry (and, in the screen-space variants, the camera position), and it is used to simulate the occlusion of the ambient light on the geometry due to folds, cavities, and such. It may look like a contradiction, but the point is that AO doesn't take the light itself into account, and thus it is not a GI technique.
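
For reference, the textbook definition makes that explicit (this is the standard formula, not something from this thread); V is a pure visibility term, so there is no radiance anywhere in it:

AO(p, n) = (1/π) ∫_Ω V(p, ω) (n · ω) dω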

Originally, AO was used to simulate dirt on surfaces. But now it is used as a fast approximation of GI, and to enhance the detail of full GI solutions.
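
In shader terms that usage typically boils down to something like the snippet below (a generic sketch, not code from any particular engine): AO scales only the ambient/indirect contribution, while direct lighting is left untouched.

Code:
// Generic sketch: AO darkens the ambient/indirect terms only;
// direct lighting is left alone.
vec3 shade(vec3 albedo, vec3 directLight, vec3 ambientLight, vec3 indirectLight, float ao)
{
    // ao in [0,1]: 1 = fully open, 0 = fully occluded
    return albedo * (directLight + ao * (ambientLight + indirectLight));
}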

Posted: Tue Jan 25, 2011 11:54 am
by Luben
GI techniques are those which take into account the flow of light.
True. I retract my previous statement.
(Though it could be argued that AO does take the flow of light into account (just not on a global scale), or one would have to agree that Whitted ray tracing isn't a GI technique either.)
But now it is used as a fast GI approach, and to enhance the detail of the GI solutions.
I was thinking more along the lines of "used for X, therefore a technique for X", sort of.
I'll blame my lapse of judgement on not being fully awake yet x]

Posted: Mon Feb 14, 2011 8:32 pm
by devsh
I have reached 20k lines of code and I can proudly present the first example, however simple it is (Linux only and compiled; if you want the full thing, go to the SVN).

Posted: Sat Feb 19, 2011 7:51 pm
by devsh
Sorry, I forgot to post the link to the demo.

Posted: Wed Jun 29, 2011 12:06 am
by devsh
And we finally have the inferred renderer!!! (very WIP, no normal maps or steep parallax mapping yet :( )
[screenshot]

Posted: Wed Jun 29, 2011 12:37 am
by Radikalizm
It's nice that you got an inferred renderer up and running, but from the research I've done on it I can't really say that it is currently a better alternative to forward or deferred rendering.

It's nice that you can render your transparent geometry in a deferred fashion, but the DSF filtering needed to take everything into account is rather expensive if you want decent output (scaled-down G- and L-buffers can cause some really horrible artifacts).
It would take quite a few lights for an inferred renderer to be more efficient than a deferred setup that renders transparent geometry in a forward pass at similar output quality.

I'm going to wait and see whether this technique matures over time; it could be that advancements are made which make it more attractive to implement.

Posted: Wed Jun 29, 2011 10:11 am
by devsh
I think SSAO at 1/2 resolution will make up for the cost of the DSF, and the DSF will be very useful when blurring the SSAO, which usually bleeds over edges.
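
A minimal sketch of what such an edge-aware blur could look like (hypothetical GLSL; the uniform names, radius and thresholds are made up, not taken from this engine): taps whose depth or normal differ too much from the centre pixel are rejected, so the blur doesn't bleed across edges the way a plain box/Gaussian blur would.

Code:
#version 330 core
// Hypothetical edge-aware SSAO blur: reject taps across depth/normal discontinuities.
uniform sampler2D uSSAO;        // half-res AO term
uniform sampler2D uDepthNormal; // xyz = view-space normal, w = linear depth
uniform vec2 uTexelSize;        // 1.0 / SSAO buffer resolution
in vec2 vUV;
out float outAO;

void main()
{
    vec4 centre = texture(uDepthNormal, vUV);
    float sum = 0.0;
    float weightSum = 0.0;
    for (int y = -2; y <= 2; ++y)
    for (int x = -2; x <= 2; ++x)
    {
        vec2 uv = vUV + vec2(x, y) * uTexelSize;
        vec4 dn = texture(uDepthNormal, uv);
        // Keep the tap only if it lies on roughly the same surface.
        float w = step(abs(dn.w - centre.w), 0.05 * centre.w)   // depth within 5%
                * step(0.8, dot(dn.xyz, centre.xyz));           // normals roughly aligned
        sum += texture(uSSAO, uv).r * w;
        weightSum += w;
    }
    outAO = (weightSum > 0.0) ? sum / weightSum : texture(uSSAO, vUV).r;
}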

Posted: Wed Jun 29, 2011 10:46 am
by devsh
and you're missing out on the most important aspects of my approach:
a) this is NOT inferred lighting... this is inferred rendering... so my G-buffer is 64 bits fatter
b) I can generate the DSF IDs without modifying vertex data (gl_PrimitiveID or gl_VertexID); see the sketch after this list
c) SSAO will be of superior quality/performance with the DSF
d) the L-buffer is fatter, but it is split into a separate ambient buffer and a diffuse.rgb+specular.a buffer; this allows SSAO to work on diffuse+ambient but not specular, and SSGI could separate itself out to favour indirect specular and diffuse more than ambient
e) with my approach you should be able to balance fillrate/vertex throughput by inferred-lighting some objects and only writing to 2 G-buffer textures plus doing a second geometry pass (grass), or writing to all 4 MRTs without a second geometry pass (skinned meshes, high-polycount models)
f) you can increase the inferred alpha-polygon lighting quality by rendering it twice... once rendering all the solid pixels and a second time rendering the transparent pixels, so on a chain-link fence the quality only drops on the feathered outlines.
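
For point b, a rough sketch of how a per-primitive DSF ID could be written during the G-buffer pass without touching any vertex data (hypothetical GLSL; uObjectID, the packing and the output layout are my assumptions, not the engine's actual format):

Code:
#version 330 core
// Hypothetical G-buffer pass: pack an object/primitive ID as the DSF key,
// using gl_PrimitiveID so the meshes themselves need no extra attributes.
uniform int uObjectID;                      // set per draw call by the application
layout(location = 0) out vec4 outAlbedo;
layout(location = 1) out vec4 outNormalDSF; // assumes a high-precision target (e.g. RGBA16F)

in vec3 vNormal;
in vec4 vColor;

void main()
{
    outAlbedo = vColor;
    // 16 bits of object ID + 8 bits of primitive ID packed into one channel,
    // enough to tell neighbouring surfaces apart during the DSF comparison.
    float dsfKey = float((uObjectID & 0xFFFF) * 256 + (gl_PrimitiveID & 0xFF)) / 16777215.0;
    outNormalDSF = vec4(normalize(vNormal) * 0.5 + 0.5, dsfKey);
}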

Re: Project {Consolidation} game engine+editor WILL ADOPT N0

Posted: Wed Jun 29, 2011 11:48 pm
by devsh
I just semi-finished the DSF filter.
And I was amazed with my amazing invention of deferred wireframe (using gl_PrimitiveID as the vertex ID).
I will post some pictures :)

gl_VertexID doesn't work as a nice DSF value. I'm currently deliberating whether the more expensive normal.z + depth comparison will pay off better as the DSF test, or whether I should stick with the object+primitive IDs, which would both fail in the case of discontinuities in normal maps (same polygon, same mesh). I think it will come down to the SSAO blurring: whichever one is faster, I'll take.

The lighting quality is quite amazing anyway; I never thought one could upscale an 848x530 image to 1200x750 and not notice any aliasing at all!

I think I will lean towards the depth + normal component comparison (especially perspective-warped depth), because I can store normals in 24 bits without a lossy reconstruction.

This might make my DSF more expensive in terms of computation, but it needs fewer pixel reads. I also have a VERY peculiar && evil method of doing the DSF. When people think DSF, they think no if-statements and bilinear interpolation by hand: for every pixel they do 4 reads for the DSF comparison, but then 4 reads for the L-buffer (or 8 in my case), which kind of kills performance (12 texture reads per pixel). Instead, I use the god-given feature of bilinear filtering to get the interpolated result of the 4 surrounding L-buffer texels in one read, and then subtract the bad pixels with the appropriate weights. This works because 90% of all pixels can use all 4 surrounding L-buffer texels without a problem, so only 6 texture reads are performed (instead of 12 reads and 4 interpolation calculations). Then 2% have one weight wrong (8 reads and one interpolation calculation), some have 2 weights wrong (5%), and a few (2%) have 3 weights wrong (12 reads and 3 interpolations). Then you have the awful case of all 4 weights wrong (a total DSF fail), which requires 14 reads and 4 interpolations plus a special case (however, the 2 extra texture reads can be avoided). So my DSF is slightly faster than you'd think.
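
A sketch of that fast path, assuming the depth + normal.z key from the previous paragraphs (hypothetical GLSL; buffer names, thresholds and the key layout are placeholders): one hardware bilinear fetch of the L-buffer gives the full 4-tap blend, and only the mismatching texels have their weighted contribution subtracted before renormalising.

Code:
#version 330 core
// Hypothetical DSF upsample with a "bilinear first, correct later" fast path.
uniform sampler2D uLBuffer;    // half-res light buffer, bilinear filtering enabled
uniform sampler2D uDSFLowRes;  // half-res keys: x = linear depth, y = normal.z
uniform sampler2D uDSFFullRes; // full-res keys from the G-buffer
uniform vec2 uLowResSize;      // e.g. vec2(848.0, 530.0)

in vec2 vUV;
out vec4 outLight;

bool keyMatches(vec2 a, vec2 b)
{
    // Same surface if depth is close and normal.z doesn't deviate too much.
    return abs(a.x - b.x) < 0.05 * b.x && abs(a.y - b.y) < 0.2;
}

void main()
{
    vec2 centreKey = texture(uDSFFullRes, vUV).xy;

    // Footprint of the bilinear fetch in the low-res buffer (border clamp omitted).
    vec2  pos  = vUV * uLowResSize - 0.5;
    ivec2 base = ivec2(floor(pos));
    vec2  f    = fract(pos);
    // Bilinear weights of the 4 low-res texels (order: TL, TR, BL, BR).
    vec4 w = vec4((1.0 - f.x) * (1.0 - f.y), f.x * (1.0 - f.y),
                  (1.0 - f.x) * f.y,         f.x * f.y);
    ivec2 offs[4] = ivec2[4](ivec2(0, 0), ivec2(1, 0), ivec2(0, 1), ivec2(1, 1));

    // One hardware-filtered read gives the full 4-tap blend up front.
    vec4 light = texture(uLBuffer, vUV);

    // Subtract the contribution of mismatching texels and renormalise.
    float goodWeight = 1.0;
    for (int i = 0; i < 4; ++i)
    {
        vec2 key = texelFetch(uDSFLowRes, base + offs[i], 0).xy;
        if (!keyMatches(key, centreKey))
        {
            light      -= w[i] * texelFetch(uLBuffer, base + offs[i], 0);
            goodWeight -= w[i];
        }
    }
    // If every texel failed (total DSF fail), fall back to the raw bilinear result.
    outLight = (goodWeight > 0.0) ? light / goodWeight : texture(uLBuffer, vUV);
}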

I'll get normalmaps working and I shall post a screenie for comparison with the badly aliased picture.

Re: Project {Consolidation} game engine+editor WILL ADOPT N0

Posted: Thu Jun 30, 2011 8:47 pm
by devsh
DSF = 80% complete; I just need to perfect the "plane slope deviation threshold filtering".
[screenshot]

Oh yes baby... I hope you can see my artifacts.. I even blew them up 9x and there are only 2 overly red pixels where the lighting bleeds from the beam onto the background.

Re: Blooddrunk game engine+editor {Looking for potential use

Posted: Sat Jul 02, 2011 4:14 pm
by devsh
SSAO at 1/2 resolution straight into the L-buffer... amazing speed and results using the DSF filter. I hope I can resolve the thin lines offsetting the SSAO, and I hope to blur it somehow and tweak it.

[screenshots]