The requirements are the following:
-deferred rendering engine
-static mesh with lightmap coordinates (i.e. a unique mapping of all the texels)
-CPU-side calculation of light at "map sensors"
Map sensors are points on the map where direct lighting is estimated on the CPU; a virtual light is then sent to the deferred rendering system to bounce the light.
![Image](http://img861.imageshack.us/img861/196/sensor.png)
So the render sequence would be:
========CPU===========
///precalculation: create map sensors and find the surface diffuse colour at each location
/////loop/////
calculate light intensity at each map sensor
multiply by the precalculated surface diffuse
create deferred virtual lights
========GPU===========
render normal lights
render virtual lights
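The per-frame CPU part of the sequence above could be sketched roughly like this. Again a hedged Python sketch under my own assumptions: each sensor is a `(position, normal, diffuse)` tuple, each scene light a `(position, colour)` tuple, the lighting model is unshadowed Lambert with inverse-square falloff, and the output is the list of virtual lights to hand to the deferred pass:

```python
import math

def update_virtual_lights(sensors, scene_lights):
    """One CPU frame step: for each precalculated sensor, estimate
    direct light, tint it by the sensor's diffuse colour, and emit
    a virtual light (position, colour) for the deferred renderer."""
    virtual_lights = []
    for pos, normal, diffuse in sensors:
        bounced = [0.0, 0.0, 0.0]
        for lpos, lcolour in scene_lights:
            dx = [a - b for a, b in zip(lpos, pos)]
            d2 = sum(c * c for c in dx)
            d = math.sqrt(d2)
            ndotl = max(0.0, sum(n * c / d for n, c in zip(normal, dx)))
            for i in range(3):
                # light intensity * precalculated surface diffuse
                bounced[i] += lcolour[i] * ndotl / d2 * diffuse[i]
        virtual_lights.append((pos, tuple(bounced)))
    return virtual_lights
```

The GPU side then just treats each entry as an ordinary deferred point light, so no extra shader work is needed beyond what the engine already does for normal lights.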
This technique is similar to the one called "light splatting", except it's not a post-process and thus takes off-screen light bounces into account.
Code that works out the sensor positions and surface colours is already part of my gpurad application; the difference is that this may be possible to do in real time, since it renders into a deferred system rather than to a light map.
Anyone interested in doing this in their deferred engine?