irredit global illumination - how does it work?

Discuss about anything related to the Irrlicht Engine, or read announcements about any significant features or usage changes.
spock
Posts: 16
Joined: Tue Aug 01, 2006 4:53 am

irredit global illumination - how does it work?

Post by spock »

i think irredit can do hardware accelerated global illumination baking?

could anyone give a short description of how this works? i guess this doesn't use ray tracing?
niko
Site Admin
Posts: 1759
Joined: Fri Aug 22, 2003 4:44 am
Location: Vienna, Austria
Contact:

Post by niko »

Yes, by simply using the GPU as a raytracer :)
spock
Posts: 16
Joined: Tue Aug 01, 2006 4:53 am

Post by spock »

really? :p you trace individual rays? isn't that cumbersome on a gpu if you don't use CUDA, which i guess you don't since it's quite new?

i guess you render a view from each light map texel? what resolution do you use? how do you align the view along the normal (since roll isn't really defined)? what problems arise with this approach? ...

or is this all a trade secret? :)

maybe someone knows where i can read about such stuff?
zenaku
Posts: 212
Joined: Tue Jun 07, 2005 11:23 pm

Post by zenaku »

-------------------------------------
IrrLua - a Lua binding for Irrlicht
http://irrlua.sourceforge.net/
spock
Posts: 16
Joined: Tue Aug 01, 2006 4:53 am

Post by spock »

darn, those books are so expensive. :P

gpu gems 2 has a free pdf for chapter 14 though. does irredit use that method? i don't think so.
omaremad
Competition winner
Posts: 1027
Joined: Fri Jul 15, 2005 11:30 pm
Location: Cairo,Egypt

Post by omaremad »

Books don't hold secrets! In fact, I just studied shaders from all the info on the internet.

Learn shaders first, then things like fast (or even realtime) radiosity will just reveal themselves, trust me.

I already made a mini radiosity calculator in RenderMonkey by doing the following.

Get your mesh and make sure it's UV'ed and all its UV coords are normalized.
Make the final clip-space position equal to the UV coords (with a scale and shift to map the UV coords from the [0,1] texture range into the [-1,1] clip-space range, i.e. *2 then -1).

Using this transformed position we need two outputs, normals and positions: set up render targets and capture them via a simple fragment shader.

Set up a render target and run the fragment program on your transformed mesh. The fragment program will take the normal from a neighbouring fragment (chosen by a random tcoord offset) and dot it with the current fragment's normal (remember, the normal we captured). This dot product gives us the "form factor", or how much a face faces a face :lol:

Now use the positions to calculate light attenuation, and also multiply by the local lighting equation results.

Your final result can then be read back to the CPU (e.g. with glReadPixels), or applied to a "normally" transformed mesh for realtime radiosity.
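The gather pass above can be sketched on the CPU in Python (all function and variable names here are illustrative, not Irrlicht or RenderMonkey API). As an assumption on my part, this sketch uses the standard point-to-point form factor (both cosines over distance) rather than a raw normal-to-normal dot, and clamps backfacing contributions to zero instead of letting them "transmit darkness":

```python
import math
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def gather_radiosity(positions, normals, direct_light, samples=64):
    """One bounce of gathered light per lightmap texel: sample random
    other texels, weight each by a point-to-point form factor and
    distance attenuation, and average. Mirrors the fragment-shader
    loop described above, run once per texel."""
    n = len(positions)
    gathered = [0.0] * n
    for i in range(n):
        total = 0.0
        for _ in range(samples):
            j = random.randrange(n)          # the "random tcoord offset"
            if j == i:
                continue
            d = sub(positions[j], positions[i])
            dist2 = dot(d, d)
            inv_len = 1.0 / math.sqrt(dist2)
            dirn = tuple(c * inv_len for c in d)
            # form factor: how much texel i faces texel j, and vice versa
            cos_i = max(dot(normals[i], dirn), 0.0)
            cos_j = max(dot(normals[j], tuple(-c for c in dirn)), 0.0)
            # transmit the other texel's local lighting, attenuated
            total += direct_light[j] * cos_i * cos_j / (1.0 + dist2)
        gathered[i] = total / samples
    return gathered
```

Two facing patches exchange light; a pair that faces away from each other contributes nothing, because both cosines are clamped at zero.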

It's usually lack of knowledge about 3d math that hinders your creativity, so learn that rather than buying expensive books.
"Irrlicht is obese"

If you want modern rendering techniques learn how to make them or go to the engine next door =p
omaremad
Competition winner
Posts: 1027
Joined: Fri Jul 15, 2005 11:30 pm
Location: Cairo,Egypt

Post by omaremad »

For the actual shadows you can use any texture-based shadow technique; the stuff I wrote above does radiance transfer.
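The core of any texture-based shadow technique is a single depth comparison per fragment. A minimal Python sketch (the names and the dict-as-depth-map representation are my own illustration, not any engine's API): the depth map stores, per light-space texel, the depth of the closest surface seen from the light, and a fragment is lit only if it is not behind that stored depth.

```python
def shadow_test(depth_map, light_uv, frag_depth, bias=0.005):
    """Classic shadow-map test: compare this fragment's depth in
    light space against the closest depth the light recorded.
    depth_map is a dict {(u, v): depth}; the small bias avoids
    self-shadowing acne from depth quantization."""
    stored = depth_map.get(light_uv, 1.0)   # 1.0 = far plane (light saw nothing here)
    return frag_depth - bias <= stored      # True = lit, False = in shadow
```

For example, with `{(3, 4): 0.5}` stored, a fragment at depth 0.4 under that texel is lit, while one at depth 0.7 is behind the occluder and shadowed.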
spock
Posts: 16
Joined: Tue Aug 01, 2006 4:53 am

Post by spock »

i am no master of 3d maths but i know the most important things and also how to program hlsl shaders.

i have already written a very basic ray tracing light mapper and i understand how the same could be done with shaders and texture shadows.

i haven't yet worked on global illumination though and don't know much about it. i probably would be able to do an extremely brute force software solution but that's not what i want. :)

i don't understand how the dot product of the randomly offset neighbouring fragment's normal and the current fragment's normal is of any help? :P what is the "form factor" and why is this supposed to show how much a face is occluded by another face?
omaremad
Competition winner
Posts: 1027
Joined: Fri Jul 15, 2005 11:30 pm
Location: Cairo,Egypt

Post by omaremad »

Well, in shaders your only random access data is textures, so you store all your data in textures in UV space, then randomly sample.

The local light occlusion is handled by shadowmaps; global lighting is governed by the form factor, so you take the local lighting result and transmit it to another fragment using the form factor. It's not 100% perfect, since you can transmit through other polygons, but those other polygons also transmit darkness through negative dot products (backfacing to another polygon), so it all averages out.

This solution is suitable for realtime since the data doesn't need to be reprocessed into array form. I think niko is doing it by storing data as an array where each pixel represents a vertex or triangle, then looping through the texture as if it's an array.
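The "texture as an array" scheme guessed at here can be sketched like this (plain Python standing in for a float RGBA texture; all names are illustrative, and whether irredit actually works this way is speculation in this thread): each triangle is packed into three pixels, and the shader's loop over the scene is simulated by indexed fetches.

```python
def pack_triangles(triangles):
    """Pack each triangle's three vertices into three RGBA 'pixels'
    of a flat float texture (xyz in rgb, alpha unused), so a shader
    can iterate over the whole scene with texture fetches."""
    pixels = []
    for a, b, c in triangles:
        for v in (a, b, c):
            pixels.append((v[0], v[1], v[2], 0.0))
    return pixels

def fetch_triangle(pixels, i):
    """Three indexed fetches recover triangle i, the way a fragment
    program would loop through the texture as if it were an array."""
    return tuple(p[:3] for p in pixels[3 * i:3 * i + 3])
```

On real shader model 2/3 hardware the loop bound has to be a compile-time constant and the fetch index is derived from the loop counter, which is exactly why the data must be laid out this predictably.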