A simple, robust and extendible postprocessing class
I really like your class!
Especially the lens flare effect.
But it is really slow; I get 30 fps when I look at the sun.
I don't know much about shaders, but is it possible to do the pixel check for the sun in a pixel shader?
Like this one:
http://http.developer.nvidia.com/GPUGem ... _ch26.html
Great job again!
Hmm, that looks very interesting. I'll give it a try when I get time, though this will probably be done in my new class (not yet released), because it has built-in downsampling & other niceties.
Two potential problems come to mind:
* This cannot track more than one object, so having two bright spots will have odd behaviour (the flare ending up half-way between them). Could be fixed by giving a rough expected position & size, and using this for fine-tuning and obstruction checking.
* The method he uses for overlaying the duck is very inefficient - using a pixel sampler in the vertex shader and a standard pixel shader would be much more efficient.
Haha... I have a better idea for you:
multiply the sun (bright spot) billboard vertices by the worldViewProj matrix (in Irrlicht);
once you get them, they are in the -1 to 1 range;
multiply by 0.5 and add 0.5,
and you get the texcoords of where the sun is in the scene
(Irrlicht has this function somewhere, I've seen it).
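The projection step above can be sketched in plain C++ (a hypothetical row-major 4x4 matrix laid out like Irrlicht's core::matrix4, with row-vector multiplication; depending on the API you may also need to flip the y texcoord):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Project a world-space point through a combined world-view-projection
// matrix (row-major, row-vector convention: v' = v * M), then remap the
// resulting NDC x/y from [-1, 1] to [0, 1] texture coordinates.
Vec2 worldToTexCoord(const float m[16], float wx, float wy, float wz)
{
    float cx = wx * m[0] + wy * m[4] + wz * m[8]  + m[12];
    float cy = wx * m[1] + wy * m[5] + wz * m[9]  + m[13];
    float cw = wx * m[3] + wy * m[7] + wz * m[11] + m[15];

    // Perspective divide gives NDC in [-1, 1].
    float nx = cx / cw;
    float ny = cy / cw;

    // "multiply by 0.5 and add 0.5" -> [0, 1] texcoords.
    // (Some APIs need ny flipped: 0.5f - ny * 0.5f.)
    return Vec2{ nx * 0.5f + 0.5f, ny * 0.5f + 0.5f };
}
```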
Make a depth texture for the render target you drew your scene into.
Have a 128x128 (or 64x64, if the sun is small) render target.
Have a screen quad with changing tex coords.
Turn on anisotropic filtering.
Have a pixel shader that records the INVERSE (1.0 - diff) LINEAR difference between the sun depth and the pixel depth.
Then have another RTT which is half the size,
render the content of the previous render target into that,
and so on until you render into an RTT of size 2x2.
In the sun flare vertex shader you can use that RTT (sample it right in the middle; the bilinear interpolation will average all 4 pixels); its float value, raised to some power and with a cutoff at 0.5 difference, will provide a nice attenuating flare effect.
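A CPU-side sketch of that whole pipeline (my own simplification, not the actual shader code): each pixel contributes 1.0 - |sunDepth - pixelDepth|, the buffer is averaged down to one value (standing in for the chain of half-size RTTs plus the bilinear fetch at the middle), and the result is shaped with a power and a cutoff:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Reduce a buffer of scene depths to a single flare attenuation factor.
// pixelDepths: depths in the small render target around the sun.
// sunDepth:    depth of the sun billboard.
// power:       exponent shaping the falloff.
// cutoff:      below this average visibility the flare vanishes.
float flareAttenuation(const std::vector<float>& pixelDepths,
                       float sunDepth, float power, float cutoff)
{
    float sum = 0.f;
    for (float d : pixelDepths)
        sum += 1.0f - std::fabs(sunDepth - d);   // per-pixel "visibility"

    float avg = sum / pixelDepths.size();        // downsample chain -> 1 value
    if (avg < cutoff)
        return 0.f;                              // cut off (e.g. at 0.5)
    return std::pow(avg, power);                 // attenuating falloff
}
```

With an unobstructed sun (all depths equal to the sun's) this returns 1; with the sun fully behind near geometry the average drops below the cutoff and the flare disappears.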
Glad to hear you are thinking of implementing this, DavidJE13!
"Could be fixed by giving a rough expected position & size, and using this for fine-tuning and obstruction checking."
That is exactly what I was thinking.
Also, I have to admit that devsh's approach will probably have more accurate results. Calculating the difference between the sun depth and the pixel depth will not be color-dependent, and it will avoid the problem with white objects. I don't know about the performance differences.
*Off topic: nice community, people!
I did not think about the exact way it could work (the truth is, I can't...).
I just thought that your idea of using the depth instead of the color might be more accurate.
Anyway, I think we made our point and it is up to DavidJE13 now, because he will write the actual thing. It is still great as is.
Another thing is depth of field. I wrote an example to make it "adaptive" using a ray from the camera, then getting the collision point from the ray. After that, I pass the distance between the intersection and the camera to your shader (I can post the code if someone wants, but I don't think it is a big deal).
Again, I think it might be a more "GPU-based" solution. What do you think?
Sorry, DavidJE13, for throwing out ideas and not writing the code myself, but as I said, I don't know much about shaders (I just started learning some very basic stuff).
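The CPU side of that adaptive-DoF idea can be sketched like this (plain C++ with a hypothetical hit-test result, rather than Irrlicht's collision manager; the returned value is what you would pass to the depth-of-field shader as the focal distance):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Distance between the camera and the ray's collision point.
// If the ray hit nothing, fall back to a far default so distant
// scenery stays in focus.
float focalDistance(const Vec3& camera, const Vec3& hit,
                    bool rayHitSomething, float farDefault)
{
    if (!rayHitSomething)
        return farDefault;
    float dx = hit.x - camera.x;
    float dy = hit.y - camera.y;
    float dz = hit.z - camera.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}
```

In practice you might also smooth this value over a few frames so the focus does not snap instantly when the ray crosses an object edge.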
Hi rootroot1;
firstly, make sure you always use powers of two for the texture sizes (that is, the dimensions given to the nodes). The window size can be non-power-of-two.
Second, how big is bigger? 800x600? If the cut-off is above 1024, it could be a limitation of your video card. 4096 is the most common upper limit I've seen, but it could be that you have an old card. It's also possible that you're running out of VRAM, in which case using fewer stages in the pipeline should fix it (the new version will use intelligent texture sharing to reduce this problem, as soon as I get it working!).
If these don't work, posting your graphics card details may help solve the problem.
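Picking a legal size can be done with a small helper like this (a hypothetical utility, not part of the class):

```cpp
#include <cassert>

// Round a requested texture dimension up to the next power of two,
// since some hardware only accepts power-of-two render-target sizes.
unsigned nextPowerOfTwo(unsigned v)
{
    unsigned p = 1;
    while (p < v)
        p <<= 1;
    return p;
}
```

For example, a 800x600 window would get 1024x1024 textures, while the window itself can stay 800x600.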
Also, @devsh:
that's an idea I attempted when I first wrote the effect, but at that time I couldn't access the depth buffer (I was using 1.5), so I used vector collision tests, which made it VERY slow (this evolved into an option in the current version, but using only a single vector and bounding boxes - much less effective). I think you're right that it would be a better solution now, but one problem I can think of is that there is no support for semi-transparent objects, so in some cases the sun could lose its flare for no apparent reason. (Also, I've never tried to get the depth buffer since it apparently became available, but I imagine it's not too difficult.)
Final thought: having the flare affected by white objects isn't necessarily a bad thing, especially if combined with some HDR technique. I kind of wish I could do a GPU-based convolution so that every pixel on the screen could cast its own lens flare, which would make a very flexible and realistic solution. But that's just wishful thinking.
No, I have a good video card with DX10 and CUDA support:
NVIDIA GeForce 9500 GT
Screen resolution doesn't matter now; I solved it.
I have a few more questions now:
1. DIRECT3D9 and antialiasing - result: black screen (with OPENGL, the GUI becomes dirty). Do you have any suggestions how to solve it?
2. Maybe it would be a good idea to port your interface to CG using Nadro's wrapper (this gives no problems between OpenGL and DirectX)?
Thanks for your answers in advance!
The library as-is doesn't support Direct3D9; I never bothered to figure out why - if you can see what needs fixing, please tell me.
(It should just disable itself when used with that driver. If it doesn't, that's a bug.)
No idea what would make the GUI "dirty", since this never touches it - can you elaborate?
I'd rather not port to CG, because that's limited to DirectX's features. OpenGL has some extra abilities that I've used for some of the shaders. Basically I'd like to keep the flexibility to have each shader as good as it can be in OpenGL and have a more basic version for DirectX. I may add the ability to use CG shaders as well (for custom shaders); I'll see how Nadro's wrapper looks.
Final note - for anti-aliasing, the new version uses no special shaders at all; just the standard material and an identity view matrix. So it should work on every driver. The only problem would be rendering to textures larger than the screen size on some hardware. Some kind of grid rendering (i.e. rendering each quadrant of the screen to its own texture and then combining them once anti-aliased) may help there, but that's a project for some other time.
When fullscreen, the screen is black with these parameters:
Code:

```cpp
SIrrlichtCreationParameters p;
p.AntiAlias = 2; // problem is here
p.DriverType = EDT_OPENGL;
p.Bits = 32;
p.Fullscreen = true;
p.Vsync = true;
p.WindowSize = dimension2du( 1280, 1024 );
IrrlichtDevice* device = createDeviceEx( p );
.....
```
and with this
Code:

```cpp
SIrrlichtCreationParameters p;
p.AntiAlias = 2; // problem is here
p.DriverType = EDT_DIRECT3D9;
p.Bits = 32;
p.Fullscreen = true;
p.Vsync = true;
p.WindowSize = dimension2du( 1280, 1024 );
IrrlichtDevice* device = createDeviceEx( p );
...
```
Do you have any ideas how to fix that?
Hi, David!
Just a wish: it would be great to see an underwater caustics emulation effect made with postprocessing. I say "emulation" because realistic caustics need many values to be calculated and could be too hard for a post process. With DoF, distant blur, and some camera "swimming" (not in the shader, of course), it could look fantastic.
Good luck