Rendering with OpenGL
-
- Posts: 15
- Joined: Sun Jul 29, 2007 4:51 pm
- Location: State College, PA
Rendering with OpenGL
Hello everyone,
I am trying to implement quad-buffered stereo for my project, but I am having a tough time figuring out which sections of the source actually render using OpenGL. I need to make changes to support writing to the left and right back buffers.
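For context, as far as I understand it, GL_BACK_RIGHT only exists if the window's pixel format was created as stereo-capable, which, as far as I can tell, Irrlicht does not request by default. A minimal sketch of that requirement outside Irrlicht, using plain GLUT (assuming a card/driver that actually exposes stereo formats):

Code:

#include <GL/glut.h>

/* Minimal sketch (plain GLUT, not Irrlicht): a quad-buffered context has to be
   requested at window creation, otherwise GL_BACK_RIGHT does not exist and
   glDrawBuffer(GL_BACK_RIGHT) generates an error. */
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH | GLUT_STEREO);
    glutCreateWindow("quad-buffer test");

    GLboolean stereo = GL_FALSE;
    glGetBooleanv(GL_STEREO, &stereo);   /* check whether the driver granted it */

    /* ...register display/idle callbacks and call glutMainLoop() as usual... */
    return stereo ? 0 : 1;
}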
Thanks in advance.
-
- Posts: 15
- Joined: Sun Jul 29, 2007 4:51 pm
- Location: State College, PA
To add to that
I have looked at COpenGLDriver.h and .cpp, but I am unable to figure out how control flows from beginScene().
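For what it's worth, the loop I am tracing looks like the standard Irrlicht usage below (a minimal sketch, nothing stereo-specific yet): beginScene() is where COpenGLDriver clears the buffers, the actual GL draw calls are issued while ISceneManager::drawAll() walks the scene nodes, and endScene() performs the buffer swap.

Code:

#include <irrlicht.h>
using namespace irr;

int main()
{
    /* Minimal sketch of the standard loop, just to mark where the GL work happens. */
    IrrlichtDevice* device = createDevice(video::EDT_OPENGL,
                                          core::dimension2d<s32>(640, 480));
    if (!device)
        return 1;

    video::IVideoDriver* driver = device->getVideoDriver();
    scene::ISceneManager* smgr = device->getSceneManager();
    smgr->addCameraSceneNode();

    while (device->run())
    {
        driver->beginScene(true, true, video::SColor(255, 0, 0, 0)); /* clear color + depth */
        smgr->drawAll();                                             /* GL draw calls happen here */
        driver->endScene();                                          /* buffer swap */
    }

    device->drop();
    return 0;
}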
-
- Posts: 15
- Joined: Sun Jul 29, 2007 4:51 pm
- Location: State College, PA
The code below is for implementing active stereo (quad-buffered) in an OpenGL application. I need to figure out how I can implement this in Irrlicht.
I will be able to change the camera views, but how do I fiddle with drawing to different buffers? (A rough sketch of what I have in mind follows the code.)
Any other thoughts on implementing the code will also be useful.
Thanks again.
Code:
Thanks to Paul Bourke.
void HandleDisplay(void)
{
    XYZ r;
    double ratio, radians, wd2, ndfl;
    double left, right, top, bottom;

    camera.near = camera.focallength / 10;

    /* Misc stuff needed for the frustum */
    ratio = camera.screenwidth / (double)camera.screenheight;
    if (camera.stereo == DUALSTEREO)
        ratio /= 2;
    radians = DTOR * camera.aperture / 2;
    wd2 = camera.near * tan(radians);
    ndfl = camera.near / camera.focallength;
    top = wd2;
    bottom = -wd2;

    /* Determine the right eye vector */
    CROSSPROD(camera.vd, camera.vu, r);
    Normalise(&r);
    r.x *= camera.eyesep / 2.0;
    r.y *= camera.eyesep / 2.0;
    r.z *= camera.eyesep / 2.0;

    if (camera.stereo == ACTIVESTEREO || camera.stereo == DUALSTEREO) {

        if (camera.stereo == DUALSTEREO) {
            glDrawBuffer(GL_BACK);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        }

        /* Right eye: off-axis frustum shifted by half the eye separation */
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        left  = -ratio * wd2 - 0.5 * camera.eyesep * ndfl;
        right =  ratio * wd2 - 0.5 * camera.eyesep * ndfl;
        glFrustum(left, right, bottom, top, camera.near, camera.far);
        if (camera.stereo == DUALSTEREO)
            glViewport(camera.screenwidth/2, 0, camera.screenwidth/2, camera.screenheight);
        else
            glViewport(0, 0, camera.screenwidth, camera.screenheight);
        glMatrixMode(GL_MODELVIEW);
        glDrawBuffer(GL_BACK_RIGHT);   /* draw the right eye into the right back buffer */
        if (camera.stereo == ACTIVESTEREO)
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glLoadIdentity();
        gluLookAt(camera.vp.x + r.x, camera.vp.y + r.y, camera.vp.z + r.z,
                  camera.vp.x + r.x + camera.vd.x,
                  camera.vp.y + r.y + camera.vd.y,
                  camera.vp.z + r.z + camera.vd.z,
                  camera.vu.x, camera.vu.y, camera.vu.z);
        CreateGeometry();

        /* Left eye: mirrored frustum offset and eye position */
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        left  = -ratio * wd2 + 0.5 * camera.eyesep * ndfl;
        right =  ratio * wd2 + 0.5 * camera.eyesep * ndfl;
        glFrustum(left, right, bottom, top, camera.near, camera.far);
        if (camera.stereo == DUALSTEREO)
            glViewport(0, 0, camera.screenwidth/2, camera.screenheight);
        else
            glViewport(0, 0, camera.screenwidth, camera.screenheight);
        glMatrixMode(GL_MODELVIEW);
        glDrawBuffer(GL_BACK_LEFT);    /* draw the left eye into the left back buffer */
        if (camera.stereo == ACTIVESTEREO)
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glLoadIdentity();
        gluLookAt(camera.vp.x - r.x, camera.vp.y - r.y, camera.vp.z - r.z,
                  camera.vp.x - r.x + camera.vd.x,
                  camera.vp.y - r.y + camera.vd.y,
                  camera.vp.z - r.z + camera.vd.z,
                  camera.vu.x, camera.vu.y, camera.vu.z);
        CreateGeometry();

    } else {

        /* Mono: single symmetric frustum drawn to the (left) back buffer */
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glViewport(0, 0, camera.screenwidth, camera.screenheight);
        left  = -ratio * wd2;
        right =  ratio * wd2;
        glFrustum(left, right, bottom, top, camera.near, camera.far);
        glMatrixMode(GL_MODELVIEW);
        glDrawBuffer(GL_BACK_LEFT);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glLoadIdentity();
        gluLookAt(camera.vp.x, camera.vp.y, camera.vp.z,
                  camera.vp.x + camera.vd.x,
                  camera.vp.y + camera.vd.y,
                  camera.vp.z + camera.vd.z,
                  camera.vu.x, camera.vu.y, camera.vu.z);
        CreateGeometry();
    }

    /* Swap buffers at the end of the frame (glutSwapBuffers() in the original GLUT program) */
    glutSwapBuffers();
}
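A rough sketch of how I imagine the per-eye pass could look on the Irrlicht side. Assumptions I still need to verify: the GL context is stereo-capable, and raw glDrawBuffer() calls made between beginScene() and drawAll() are not overridden by the driver. renderStereoFrame and its parameters are just names I made up, and this version only shifts the eye position; a faithful port would also build the off-axis frustum from the code above and set it via ICameraSceneNode::setProjectionMatrix().

Code:

#include <irrlicht.h>
#include <GL/gl.h>
using namespace irr;

/* Hypothetical sketch: render the scene twice per frame, once per back buffer,
   offsetting the camera by half the eye separation each time. */
void renderStereoFrame(video::IVideoDriver* driver, scene::ISceneManager* smgr,
                       scene::ICameraSceneNode* cam, f32 eyeSep)
{
    const core::vector3df pos = cam->getPosition();
    const core::vector3df target = cam->getTarget();

    core::vector3df dir = target - pos;
    dir.normalize();
    core::vector3df right = dir.crossProduct(cam->getUpVector());
    right.normalize();
    right *= eyeSep * 0.5f;

    driver->beginScene(false, false, video::SColor(255, 0, 0, 0));

    glDrawBuffer(GL_BACK_LEFT);                          /* left eye */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    cam->setPosition(pos - right);
    cam->setTarget(target - right);
    smgr->drawAll();

    glDrawBuffer(GL_BACK_RIGHT);                         /* right eye */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    cam->setPosition(pos + right);
    cam->setTarget(target + right);
    smgr->drawAll();

    cam->setPosition(pos);                               /* restore the mono camera */
    cam->setTarget(target);

    driver->endScene();                                  /* swaps both back buffers */
}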
-
- Posts: 15
- Joined: Sun Jul 29, 2007 4:51 pm
- Location: State College, PA
Hybrid,
I was thinking of changing how Irrlicht renders using OpenGL. Would it not be easier to have Irrlicht render individually to both back buffers? Changing viewpoints may become difficult, though. Thoughts? Also, how would you implement your suggestion of special render targets for an individual back buffer?
Thanks
It might be easier to PM.
-
- Admin
- Posts: 14143
- Joined: Wed Apr 19, 2006 9:20 pm
- Location: Oldenburg(Oldb), Germany
You should consult example 13 for RTT. It's basically a technique to render to a texture and use the result in later passes. You're doing basically the same, except that here the target is a special back buffer instead of a texture. Similar to a pbuffer, which is the older form of RTT.
You use RTT by specifying the special texture as the render target. If we passed this method a special value, we could use predefined targets offered by the driver.
PM is ok; check my website for the address (real email), but if others should be able to give suggestions we should use the forum.
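A minimal sketch of the example-13 style hook point, just to show where setRenderTarget() sits; rttCamera and mainCamera are placeholders, and the texture creation call may be named differently depending on the Irrlicht version:

Code:

#include <irrlicht.h>
using namespace irr;

/* Sketch of render-to-texture as in example 13. The stereo idea would be to let
   setRenderTarget() accept a driver-defined target (e.g. a back buffer) instead
   of only an ITexture*. */
void renderWithRTT(IrrlichtDevice* device,
                   scene::ICameraSceneNode* rttCamera,
                   scene::ICameraSceneNode* mainCamera)
{
    video::IVideoDriver* driver = device->getVideoDriver();
    scene::ISceneManager* smgr = device->getSceneManager();

    video::ITexture* rt =
        driver->createRenderTargetTexture(core::dimension2d<s32>(512, 512));

    while (device->run())
    {
        driver->beginScene(true, true, video::SColor(255, 0, 0, 0));

        /* First pass: render the scene into the texture. */
        driver->setRenderTarget(rt, true, true, video::SColor(0, 0, 0, 255));
        smgr->setActiveCamera(rttCamera);
        smgr->drawAll();

        /* Second pass: back to the normal framebuffer; rt is now usable as a material texture. */
        driver->setRenderTarget(0, true, true, video::SColor(0, 0, 0, 0));
        smgr->setActiveCamera(mainCamera);
        smgr->drawAll();

        driver->endScene();
    }
}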
So FBOs don't use separate back buffers? But they can be bigger than the device...
ShadowMapping for Irrlicht!: Get it here
Need help? Come on the IRC!: #irrlicht on irc://irc.freenode.net
I see
I have already looked at anaglyph rendering with OpenGL, which is why I had already seen the code above.
But what I want to do is separate the left and right views into different viewports.
I don't want to use the NVIDIA stereo drivers.
I have two small screens, one per eye, to emulate the 3D vision.
But I have some problems creating each camera.
In fact, I tried using an array of cameras, cam[3]: one is the main camera, as in the anaglyph example, and the left and right cameras are each updated with the correct view every frame.
But when I try to use it, the controls don't work; the left and right cameras don't move (in fact, the main camera just won't move).
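Roughly what I am trying to do, as a sketch only (renderSideBySide, leftCam and rightCam are just the names I use; only the main camera is an FPS camera, and the two eye cameras are plain cameras copied from it every frame):

Code:

#include <irrlicht.h>
using namespace irr;

/* Sketch: one controllable main camera plus two passive eye cameras, each
   rendered into its own half of the window via setViewport(). */
void renderSideBySide(video::IVideoDriver* driver, scene::ISceneManager* smgr,
                      scene::ICameraSceneNode* mainCam,
                      scene::ICameraSceneNode* leftCam,
                      scene::ICameraSceneNode* rightCam,
                      const core::dimension2d<s32>& screen, f32 eyeSep)
{
    /* Copy the main camera's view into the two eye cameras, shifted sideways. */
    core::vector3df dir = mainCam->getTarget() - mainCam->getPosition();
    dir.normalize();
    core::vector3df offset = dir.crossProduct(mainCam->getUpVector());
    offset.normalize();
    offset *= eyeSep * 0.5f;

    leftCam->setPosition(mainCam->getPosition() - offset);
    leftCam->setTarget(mainCam->getTarget() - offset);
    rightCam->setPosition(mainCam->getPosition() + offset);
    rightCam->setTarget(mainCam->getTarget() + offset);

    driver->beginScene(true, true, video::SColor(255, 0, 0, 0));

    driver->setViewport(core::rect<s32>(0, 0, screen.Width / 2, screen.Height));
    smgr->setActiveCamera(leftCam);
    smgr->drawAll();

    driver->setViewport(core::rect<s32>(screen.Width / 2, 0, screen.Width, screen.Height));
    smgr->setActiveCamera(rightCam);
    smgr->drawAll();

    /* Hand control back to the main camera so it keeps receiving input events. */
    driver->setViewport(core::rect<s32>(0, 0, screen.Width, screen.Height));
    smgr->setActiveCamera(mainCam);

    driver->endScene();
}

I think only the active camera receives the input events, so if the main camera is never active when the events arrive, that could be why my controls stop working.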