A New Discussion From Me.
- Posts: 49
- Joined: Tue Jan 18, 2011 12:35 am
A New Discussion From Me.
Alright guys, it's been a while since I've had any time to work on my projects, due to life issues and everything, but I've been studying some graphics-related things. I've come to you all hoping to receive some enlightenment and to discuss certain things I've had on my mind recently. I'm currently doing a base project to get my proof of concept down; then I'll go back and add in the graphics-related stuff.
Shadow Mapping
I get the basic principle behind shadow mapping, it's kinda bluntly obvious, but I've been missing certain things in my conceptual thinking.
In most implementations, it seems that objects are divided into 'occluders' and 'receivers'. Do we create a depth map (shadow map) for every occluder and then compare it against every receiver? That sounds like it might be a little costly, but then again I haven't actually tried implementing it yet.
I'm not positive I'm correct on that, so it would be a great help if someone could shed some light on it.
Also, what if a shadow map's size isn't as big as the rendered pixels? How do you even make those comparisons then?
Grass
This isn't really a 'must' for my project, since I'm planning mostly barren-type places specifically to avoid using grass, but I would still like some insight on it.
I've seen several implementations of grass that basically depend on lots of quads grouped into batches, where only certain batches get rendered, each as a single mesh, at any one time. (If you don't get what I mean, there's a rough sketch of it just below.)
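To make that concrete, here's roughly the shape of it as I understand it (my own untested sketch; all the names are made up):

Code:

#include <irrlicht.h>

// Each chunk packs many grass quads into one mesh buffer, so a whole
// chunk costs one draw call; far-away chunks are skipped entirely.
struct GrassChunk
{
    irr::core::aabbox3df Bounds;      // world-space bounds of the chunk
    irr::scene::SMeshBuffer* Buffer;  // all of the chunk's quads, pre-built
};

void drawGrass(irr::video::IVideoDriver* driver, const irr::core::vector3df& camPos,
               GrassChunk* chunks, irr::u32 count, irr::f32 maxDist)
{
    // material and world transform are assumed to be set by the caller
    for (irr::u32 i = 0; i < count; ++i)
        if (chunks[i].Bounds.getCenter().getDistanceFrom(camPos) < maxDist)
            driver->drawMeshBuffer(chunks[i].Buffer);
}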
I'm sure there are several ways to optimize this; anyone wanna share some techniques, with a more straightforward explanation than all math?
Graphical Interface
We all know Irrlicht's built-in GUI is 2D, and mixing it with 3D isn't always that fast, as the hardware has to switch between two render modes. Another side effect is that, since (from what I know) no GPUs support 2D acceleration anymore, you have to pre-compute rotations on images and so on.
Adding a quad-based GUI system (possibly single-surface) would be great, but I came up with another idea: render the 3D GUI off-screen into a render target texture, which could then be drawn after the post-processing effects and the final scene, onto a quad that covers the screen.
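Roughly what I have in mind, in Irrlicht terms (just an untested sketch; guiSmgr and smgr stand in for however the scenes end up split):

Code:

// render the 3D GUI into its own texture, with a transparent clear color
irr::video::ITexture* guiRT =
    driver->addRenderTargetTexture(irr::core::dimension2du(1024, 1024), "guiRT");
driver->setRenderTarget(guiRT, true, true, irr::video::SColor(0, 0, 0, 0));
guiSmgr->drawAll();

// back to the backbuffer: normal scene plus post-processing first
driver->setRenderTarget(0, true, true);
smgr->drawAll();

// then overlay the GUI texture, using its alpha channel so the scene
// stays visible behind it
driver->draw2DImage(guiRT, irr::core::position2di(0, 0),
    irr::core::recti(0, 0, 1024, 1024), 0,
    irr::video::SColor(255, 255, 255, 255), true);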
I'm more so curious whether people prefer to use the built-in Irrlicht GUI over implementing another kind of GUI, and how other people have implemented their GUI systems.
Oh, and last but not least: reflection
To me, reflection could be easily achieved by using a secondary camera for every reflective object and rendering its output to a render target, then blending that render target over the reflective object's current textures. The only side effects I can see are getting the camera scaled just right, and that having several objects using reflection could be deadly, FPS-wise.
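Something like this is what I'm picturing (untested sketch; mirroredPosition, mirroredTarget and mirrorNode are placeholders for the per-object bits):

Code:

// one extra camera + render target per reflective object
irr::video::ITexture* reflRT =
    driver->addRenderTargetTexture(irr::core::dimension2du(512, 512), "reflRT");
irr::scene::ICameraSceneNode* reflCam = smgr->addCameraSceneNode();

// each frame: mirror the main camera about the reflective surface
reflCam->setPosition(mirroredPosition); // computing these from the surface
reflCam->setTarget(mirroredTarget);     // is the fiddly scaling part I mean

smgr->setActiveCamera(reflCam);
driver->setRenderTarget(reflRT, true, true, irr::video::SColor(0, 0, 0, 0));
smgr->drawAll();                        // ideally without the mirror itself
driver->setRenderTarget(0, true, true);
smgr->setActiveCamera(mainCam);

// quick test: slap the result onto the object; a proper planar mirror
// would project this texture in a shader and blend it over the base texture
mirrorNode->setMaterialTexture(0, reflRT);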
Sorry if this is useless rambling to you all, but I guess I'm just looking to share ideas rather than keep them cooped up, and to have ideas explained by real people instead of through some tutorial/book. Thanks again.
Re: A New Discussion From Me.
Are you sure modern GPUs don't support accelerated 2D? That can be true internally, but externally the drivers will convert 2D function calls to 3D functions, so you don't have to worry about 2D functions; you will be able to use them on modern GPUs too. On the contrary, 2D accelerated GUIs are heavily used in modern games, which have GUIs/menus with nice shader transition effects or animated elements. 2D functions are just shortcuts anyway; there's nothing that a 2D function can do and a 3D function can't.
Junior Irrlicht Developer.
Real value in social networks is not about "increasing" number of followers, but about getting in touch with Amazing people.
- by Me
- Posts: 49
- Joined: Tue Jan 18, 2011 12:35 am
Re: A New Discussion From Me.
From everything I've read and heard, 2D acceleration is non-existent nowadays. However, most people make quads/sprites (all the same thing to me), then alpha them out and layer them on top of the screen. I have seen something close to what you are talking about, but I've yet to see 'actual' rotations, is the best way to put it.
Like, as in Mario-style rotation.
Last edited by LookingForAPath on Wed Nov 30, 2011 6:00 pm, edited 1 time in total.
- Posts: 1215
- Joined: Tue Jan 09, 2007 7:03 pm
- Location: Leuven, Belgium
Re: A New Discussion From Me.
LookingForAPath wrote: From everything I've read and heard, 2D acceleration is non-existent nowadays. However, most people make quads/sprites (all the same thing to me), then alpha them out and layer them on top of the screen. I have seen something close to what you are talking about, but I've yet to see 'actual' rotations, is the best way to put it.

Current graphics hardware can do matrix transformations so extremely fast that doing accelerated 2D drawing by just leaving out your z-component in a 3D transformation is just as fast as native 2D drawing.
The 2D drawing system for my engine is completely based on 'screen-aligned' quads, and I can do any 2D operation I want on them on the GPU through a very, very simple vertex shader.
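Something in this spirit, as a sketch (old-style GLSL in a C++ string; the names are illustrative, not my actual engine code):

Code:

// quad vertices live in [0,1] screen space; one matrix does any 2D
// translation/rotation/scaling, no view or projection involved
const char* quadVS =
    "uniform mat4 transform2D;\n"
    "void main()\n"
    "{\n"
    "    vec4 pos = transform2D * gl_Vertex;  // 2D transform in screen space\n"
    "    // map [0,1] to clip space [-1,1]; mind the y-flip for your conventions\n"
    "    gl_Position = vec4(pos.xy * 2.0 - 1.0, 0.0, 1.0);\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
    "}\n";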
- Posts: 49
- Joined: Tue Jan 18, 2011 12:35 am
Re: A New Discussion From Me.
Yeah, that is what I'm doing, but I wondered whether off-screen rendering, then overlaying the camera with the visible render (kinda like render-to-target), would be beneficial, instead of moving all those vertices with the camera? I mean, I know it really won't have a MAJOR performance hit, but I keep having these weird thoughts, haha.
- Posts: 1215
- Joined: Tue Jan 09, 2007 7:03 pm
- Location: Leuven, Belgium
Re: A New Discussion From Me.
Are you actually rendering the quads right in front of your camera? Your quads shouldn't be considered part of your scene; they should be rendered separately, as a different pass, in screen space.
- Posts: 49
- Joined: Tue Jan 18, 2011 12:35 am
Re: A New Discussion From Me.
Could you explain that better? See, that is kinda what I didn't understand.
- Posts: 1215
- Joined: Tue Jan 09, 2007 7:03 pm
- Location: Leuven, Belgium
Re: A New Discussion From Me.
LookingForAPath wrote: Could you explain that better? See, that is kinda what I didn't understand.

The common thing to do with 3D objects would be to define a world matrix for each object, which defines a transformation from model space to world space in 3 dimensions.
After that, the view and projection matrices would be applied to project your 3D world onto your 2D screen, because after all that's what a camera is: a medium which can project a 3D scene onto a 2D plane.
This process should not be applied to 2D objects, since there's no need to do a projection if we're already working in a 2-dimensional space.
The only thing you need to do is provide a transformation matrix which transforms your screen-aligned quad relative to your actual screen quad. That way you can perfectly do any 2D transformation (e.g. translation, rotation, scaling, etc.) within the confines of your screen.
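As a sketch of building such a matrix with Irrlicht's matrix4 (illustrative only; double-check the concatenation order against your own conventions):

Code:

#include <irrlicht.h>

irr::core::matrix4 makeQuadTransform(irr::f32 x, irr::f32 y,   // translation
                                     irr::f32 angleDeg,        // rotation
                                     irr::f32 sx, irr::f32 sy) // scale
{
    irr::core::matrix4 t, r, s;
    t.setTranslation(irr::core::vector3df(x, y, 0.0f));
    r.setRotationDegrees(irr::core::vector3df(0.0f, 0.0f, angleDeg));
    s.setScale(irr::core::vector3df(sx, sy, 1.0f));
    return t * r * s; // verify the order matches how your shader applies it
}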
- Posts: 49
- Joined: Tue Jan 18, 2011 12:35 am
Re: A New Discussion From Me.
Very interesting, I'll have to figure out exactly how to do that. I get what you mean: basically a quad with no depth that just stays on the screen at all times, right?
Ah
By the way, Rad, I know what you mean; that is exactly what I plan on doing. I totally forgot about that...
- Posts: 49
- Joined: Tue Jan 18, 2011 12:35 am
Re: A New Discussion From Me.
Okay, I hate to bring up this discussion again, but shaders... I've been looking at things like XEffects, and in XEffects BlindSide uses a polygon positioned in front of the camera which is used as a render target to apply all of the post-processing and shadow effects. My question is: how is he combining all those separate textures into one and drawing them all at once?
I mean, how should I combine my shader output with the currently rendered geometry and display it back on the screen, unless I do it their way? And how exactly does their way work?
I've been reading through it and I mostly get it, but I'd like a general explanation. I guess it's because I'm super curious about implementing my own shadow mapping; I'm a stickler and like to learn things more than just use them.
- Posts: 1215
- Joined: Tue Jan 09, 2007 7:03 pm
- Location: Leuven, Belgium
Re: A New Discussion From Me.
About post-processing:
First of all, your scene is rendered to an off-screen render target
Now you can use your scene texture as an input for a post-processing effect; these are rendered by using full-screen quads like you mentioned.
You can stack post-processing effects by rendering to different off-screen render targets and using those as input for your next effect, and when you're finished you can render your last effect to your actual output.
Keep in mind though that using a new texture for each effect will quickly use up a lot of memory, so you should try to re-use textures as much as you can.
Also keep in mind that you might want to recall the output of a previously applied effect, so if you want to design a post-processing framework you'll have to find a solution for that
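In (pseudo-)Irrlicht code the re-use idea boils down to something like this (a sketch; drawFullScreenQuad is a placeholder for your own quad-plus-shader step):

Code:

#include <irrlicht.h>
#include <algorithm> // std::swap

// your own: draw a full-screen quad with effect i's shader, sampling src
void drawFullScreenQuad(irr::video::IVideoDriver* driver,
                        irr::u32 effect, irr::video::ITexture* src);

void runPostProcessing(irr::video::IVideoDriver* driver, irr::scene::ISceneManager* smgr,
                       irr::video::ITexture* rtA, irr::video::ITexture* rtB,
                       irr::u32 effectCount)
{
    // the scene goes into the first off-screen target
    driver->setRenderTarget(rtA, true, true);
    smgr->drawAll();

    // each effect reads one target and writes into the other ("ping-pong"),
    // so two textures serve the whole chain instead of one per effect
    irr::video::ITexture* src = rtA;
    irr::video::ITexture* dst = rtB;
    for (irr::u32 i = 0; i < effectCount; ++i)
    {
        const bool last = (i + 1 == effectCount);
        // the last effect renders straight to the backbuffer
        driver->setRenderTarget(last ? (irr::video::ITexture*)0 : dst, true, true);
        drawFullScreenQuad(driver, i, src);
        std::swap(src, dst);
    }
}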
About (simple) shadow mapping:
I assume that you know you'll first of all need to generate shadow maps for the lights you want to cast shadows with (ortho for directional, perspective for spot, cube/dual-paraboloid/whatever for point lights). These shadow maps will contain the depth of each object as seen from the light's camera; this means that if you provide a transformation matrix from world or view space to that camera's space (or vice versa, although this needs some workarounds) you will be able to compare your geometry positions with the values stored in your shadow map(s) to determine occlusion. You could store your geometry positions in an auxiliary buffer, or you could use your depth buffer to reconstruct geometry positions if you provide your shader with enough info about your setup (i.e. screen resolution, far-plane distance, etc.), if you want to do this as a post-processing effect.
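To make the comparison step concrete, the core of a receiver's pixel shader could look roughly like this (a GLSL sketch in a C++ string; uniform names are made up, and details like the [0,1] depth mapping depend on your setup). Note the shadow map resolution never has to match the screen; the texture lookup takes care of that:

Code:

const char* shadowPS =
    "uniform sampler2D shadowMap;\n"
    "uniform mat4 lightViewProj;   // world -> shadow camera clip space\n"
    "varying vec4 worldPos;        // passed in from the vertex shader\n"
    "void main()\n"
    "{\n"
    "    vec4 lp = lightViewProj * worldPos;\n"
    "    lp.xyz /= lp.w;                  // perspective divide\n"
    "    vec2 uv = lp.xy * 0.5 + 0.5;     // clip space -> shadow map texels\n"
    "    float stored = texture2D(shadowMap, uv).r;  // depth the light saw\n"
    "    float bias = 0.002;              // avoids self-shadowing acne\n"
    "    float lit = (lp.z * 0.5 + 0.5) - bias < stored ? 1.0 : 0.3;\n"
    "    gl_FragColor = vec4(vec3(lit), 1.0);\n"
    "}\n";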
I have to say that I'm more experienced with these things in deferred renderers (makes the process much easier, and allows for really nice post-processing effects out of the box), so it could be possible that I overlooked something when it comes to forward rendering, but I'm sure someone will be glad to correct me
- Posts: 49
- Joined: Tue Jan 18, 2011 12:35 am
Re: A New Discussion From Me.
Thanks again, Radikalizm.
I know how shadow mapping works on a basic level, but thanks for rephrasing; it shed a little new light.
On Post-Processing effects:
Okay, sweet, I was figuring that's how it worked: rendering the processed effects to a render target texture, then displaying that render target texture, then changing it and displaying it again, and repeat.
That was after going through XEffects' code a couple of times, so I assume I'm right in saying that's the applicable process.
- Posts: 49
- Joined: Tue Jan 18, 2011 12:35 am
Re: A New Discussion From Me.
Alright, so I just reformatted this PC (the one I do my little 'experiments' on), so I'm about to get back to trying to implement this shadow mapping. I'm starting to wonder if I should make certain shadow casters 'non-active' (thus not drawn in the depth map) for certain lights, but I'm not sure. I guess I'm just gonna force my method and see how it goes, and from there I'll eventually make something outta it, haha.
- Posts: 1215
- Joined: Tue Jan 09, 2007 7:03 pm
- Location: Leuven, Belgium
Re: A New Discussion From Me.
LookingForAPath wrote: Alright, so I just reformatted this PC (the one I do my little 'experiments' on), so I'm about to get back to trying to implement this shadow mapping. I'm starting to wonder if I should make certain shadow casters 'non-active' (thus not drawn in the depth map) for certain lights, but I'm not sure. I guess I'm just gonna force my method and see how it goes, and from there I'll eventually make something outta it, haha.

You could always do frustum culling on your shadow camera to guarantee that objects the light won't reach will not get drawn. This will probably cause some overhead on simple scenes though, so be sure to profile and see what works best.
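As a sketch, the box-vs-frustum test against your shadow camera could look like this (it mirrors what Irrlicht's own frustum-box culling does internally):

Code:

#include <irrlicht.h>

// true if the node's box is completely outside the shadow camera's frustum
bool isOutsideFrustum(irr::scene::ISceneNode* node, irr::scene::ICameraSceneNode* shadowCam)
{
    const irr::scene::SViewFrustum* f = shadowCam->getViewFrustum();
    irr::core::vector3df corners[8];
    node->getTransformedBoundingBox().getEdges(corners);

    // if all 8 corners lie on the outer side of any single plane,
    // the box can't touch the frustum, so skip it for this light
    for (irr::u32 p = 0; p < irr::scene::SViewFrustum::VF_PLANE_COUNT; ++p)
    {
        bool allOutside = true;
        for (irr::u32 c = 0; c < 8 && allOutside; ++c)
            if (f->planes[p].classifyPointRelation(corners[c]) != irr::core::ISREL3D_FRONT)
                allOutside = false;
        if (allOutside)
            return true;
    }
    return false;
}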
- Posts: 49
- Joined: Tue Jan 18, 2011 12:35 am
Re: A New Discussion From Me.
So, I've got the basic principle of how I'm going to go about this mapped out. I'm not positive my way of doing things is the 'industry' or 'pro' standard, but here is basically what I'm doing.
I have a structure called 'ShadowLight'. Every 'ShadowLight' has a pointer to a LightSceneNode, a 3D vector for bounds (any caster within the bounds is rendered), and a camera of its own. The camera is set to ortho projection with a far value of 200. Using this, I plan to loop through every 'ShadowLight', then for every caster within it, render them with a depth shader and draw that to a render target; converting the pixel coordinates using the normal WorldViewProjection matrix calculations on both the 'ShadowLight' camera and the user's camera, I check whether each pixel is in shadow.
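In code the structure is basically just this (trimmed-down sketch of what I have):

Code:

#include <irrlicht.h>

struct ShadowLight
{
    irr::scene::ILightSceneNode*  Light;    // the light this entry shadows for
    irr::scene::ICameraSceneNode* Camera;   // ortho projection, far value 200
    irr::core::vector3df          Bounds;   // casters inside these bounds get
                                            // drawn into the depth map
    irr::video::ITexture*         DepthMap; // render target for the depth pass
};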
I'm not considering point lights yet, but I figure I'll just handle them with a single texture, rendering to it 6 different times and calculating for each face: less memory usage, but more writing and reading. I would really rather not have several textures with 512x512+ pixels worth of data each. For spot lights and directional lights, I am just rotating the camera towards the light's direction and placing it at the same spot as the light. I am not sure that will give an accurate image, because I haven't been able to try all of this out yet.
(By "light's direction" I mean Light->getLightData().direction() )
I've got the whole thing set up; I just need to write the render-scene function on the class that handles all the data, but right now I'm too tired to do that.
Any thoughts/comments would be great.