Reading/writing z-Buffer
Here are my questions:
1. How do I write directly to the z-buffer, pixel by pixel?
2. How do I read from the z-buffer, pixel by pixel?
3. How do I retrieve the z-buffer, either to save it as an image or for further manipulation?
vitek wrote: Couldn't you just set up a shader to render the distance from the camera as color, similar to the depth map used for shadow mapping? Once you can render the depths, it is just a matter of saving the resulting data as an image.
Travis
That's not a bad idea, although it would require conversions I'd rather avoid.
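For anyone who does want to go that route, the C++ side of vitek's render-to-texture idea might look roughly like this. It is only a minimal sketch assuming Irrlicht 1.6+ render-target support; the shader material that actually writes the camera distance as color is left out, and the exact lock()/format handling depends on your Irrlicht version.

Code:
// Sketch only: render the scene into a texture with a depth-writing material,
// then read the texel data back on the CPU. driver/smgr are the usual
// IVideoDriver/ISceneManager pointers.
video::ITexture* depthRT =
    driver->addRenderTargetTexture(core::dimension2d<u32>(512, 512), "depthRT");

driver->setRenderTarget(depthRT, true, true, video::SColor(255, 0, 0, 0));
smgr->drawAll();                        // scene drawn with the depth material applied
driver->setRenderTarget(0, true, true, video::SColor(255, 0, 0, 0));

// Lock the texture to access the rendered values; interpret them according
// to depthRT->getColorFormat(), then save or process them.
void* texels = depthRT->lock();
// ...
depthRT->unlock();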
OK, I've just got some problems with the positions of the near and far clipping planes. I cannot make any sense of them.
I know the exact positions of the geometry in my scene, I set near and far values for the camera, and I get completely nonsensical behavior. For example, I have zNear and zFar values for which all of the scene is visible. If I increase zFar, the part of the scene that is closer to the camera gets clipped out. In general, the zNear and zFar I set do not correspond to the actual clipping I see when I render the scene.
Not to mention that the z-buffer values are also useless, because I cannot tell where in the scene 0 is and where 1 is.
I have also tried an orthographic camera to make the analysis easier, but it is the same there.
Can anyone shed any light on what is going on here?
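For what it's worth, regarding the "where is 0 and where is 1" question: with a plain orthographic projection the depth buffer should just be a linear remap of the eye-space distance between the near and far planes, so converting back is a one-liner. The sketch below rests entirely on that assumption (near maps to 0, far maps to 1); if the values you read back do not behave like that, the assumption itself is what needs checking.

Code:
// Assumption: an orthographic projection that maps eye-space distance
// linearly, so that zNear reads back as 0 and zFar as 1 in the depth buffer.
float orthoDepthToDistance(float depth, float zNear, float zFar)
{
    return zNear + depth * (zFar - zNear);
}

For example, with zNear=15 and zFar=35, a surface 20 units from the camera should read back as (20-15)/(35-15) = 0.25 under that assumption.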
I would be very surprised if setting the near plane affected the far plane or vice versa. I don't have a compiler available to write up a testcase and verify it. Could you take a few minutes to write up a simple testcase that illustrates the problem? Something very simple, like the 15.LoadIrrFile example.
Travis
Here you go. The test scene contains three planes perpendicular to the y-axis: one at y=0, another at y=-5, and one at y=-10.
An orthographic camera is located on the y-axis at y=20, looking at the origin. The initial near and far values are 15 and 35, respectively.
Use Q/W to decrease/increase the near value and A/S for the far value.
After the app finishes, it dumps the z-buffer into a txt file. I use Mathematica to do some math stuff with it.
http://rapidshare.com/files/313459295/weird_z.rar
Are you sure you're using all the precision that the texture has to offer? If you are using only 8-bit precision then you will obviously encounter problems like this.
Search for some functions that pack a floating-point value into four 8-bit integer values for shaders, then do the reverse conversion when you lock and read the render target.
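As an illustration of the "reverse conversion" step, the CPU-side unpack could look something like the sketch below. The exact weights are an assumption here and have to match whichever pack function you end up using in the shader.

Code:
// Reverse of one common "pack a [0,1) float into four 8-bit channels" trick.
// The per-channel scale factor (255 here) must match the shader-side packing.
float unpackDepth(irr::u8 r, irr::u8 g, irr::u8 b, irr::u8 a)
{
    const float s = 1.0f / 255.0f;
    return r * s
         + g * s * s
         + b * s * s * s
         + a * s * s * s * s;
}

After locking the render target, feed each pixel's four channel bytes through something like this to recover a single high-precision value.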
There is a function in the scene manager that gives you a ray for any pixel of the screen. Combining this information (start and direction of the ray) with the depth buffer values, you can reconstruct the entire view's geometry. Remember to divide the depth values by the far value in the shader to get something in the 0-1 range; you can also subtract the near value if you want the distance from the near plane rather than from the camera.
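And for the ray part, a rough sketch of how the pieces could fit together, assuming the scene collision manager's getRayFromScreenCoordinates() and a depth value that already stores the distance from the camera divided by zFar:

Code:
// Sketch: reconstruct a world-space position for pixel (x, y) from a
// normalised depth value. Assumes depth = distanceFromCamera / zFar;
// x, y, depth, zFar and camera come from your own readback loop.
core::line3d<f32> ray = smgr->getSceneCollisionManager()->
    getRayFromScreenCoordinates(core::position2d<s32>(x, y), camera);

core::vector3df dir = ray.getVector();
dir.normalize();

core::vector3df worldPos = ray.start + dir * (depth * zFar);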
ShadowMapping for Irrlicht!: Get it here
Need help? Come on the IRC!: #irrlicht on irc://irc.freenode.net
BlindSide wrote: Are you sure you're using all the precision that the texture has to offer? If you are using only 8-bit precision then you will obviously encounter problems like this. Search for some functions that pack a floating-point value into four 8-bit integer values for shaders, then do the reverse conversion when you lock and read the render target.

If you're referring to reading the z-buffer, I use glReadPixels to read GL_DEPTH_COMPONENT as GL_FLOAT values. If there is any logic to this function, it should read the depth buffer with 32-bit precision, as it apparently does, since I've exported the z-buffer output as a matrix to Mathematica, where it gets nicely drawn and subjected to various mathematical tortures.
BlindSide wrote: There is a function in scene manager that gives you a ray for any pixel of the screen. Combining this information (start and direction of the ray) with the depth buffer values, you can reconstruct the entire view's geometry. Remember to divide the depth values by the far value in the shader to get something in the 0-1 range; you can also subtract the near value if you want the distance from the near plane rather than the camera.

I was kinda hoping to use the hardware's capabilities to determine the visible pixels.
tomkeus wrote: Here you go.... http://rapidshare.com/files/313459295/weird_z.rar

Could you post the code here in this thread using code tags? It should be pretty short.
tomkeus wrote: I use glReadPixels to read GL_DEPTH_COMPONENT as GL_FLOAT values.

Ah, I thought you took vitek's suggestion and rendered it using a depth shader; my advice doesn't really apply otherwise.
ShadowMapping for Irrlicht!: Get it here
Need help? Come on the IRC!: #irrlicht on irc://irc.freenode.net
vitek wrote: It should be pretty short.

Well, it's a bit longer, but here it is.
Code:
#include <irrlicht.h>
#ifdef _IRR_WINDOWS_
#include <windows.h>
#endif
#include <gl/GL.h>
#include <gl/GLU.h>
#include <fstream>
using namespace irr;
using namespace core;
using namespace scene;
using namespace video;
using namespace io;
using namespace gui;
#ifdef _IRR_WINDOWS_
#pragma comment(lib, "Irrlicht.lib")
#pragma comment(lib, "opengl32.lib")
#pragma comment(lib, "glu32.lib")
#pragma comment(linker, "/subsystem:windows /ENTRY:mainCRTStartup")
#endif
class MyEventReceiver : public IEventReceiver
{
public:
    // This is the one method that we have to implement
    virtual bool OnEvent(const SEvent& event)
    {
        // Remember whether each key is down or up
        if (event.EventType == irr::EET_KEY_INPUT_EVENT)
            KeyIsDown[event.KeyInput.Key] = event.KeyInput.PressedDown;

        return false;
    }

    // This is used to check whether a key is being held down
    virtual bool IsKeyDown(EKEY_CODE keyCode) const
    {
        return KeyIsDown[keyCode];
    }

    MyEventReceiver()
    {
        for (u32 i=0; i<KEY_KEY_CODES_COUNT; ++i)
            KeyIsDown[i] = false;
    }

private:
    // We use this array to store the current state of each key
    bool KeyIsDown[KEY_KEY_CODES_COUNT];
};
int main()
{
    MyEventReceiver receiver;

    u32 width(320), height(320);
    f32 nearz(15), farz(35);

    wchar_t nearbuf[16], farbuf[16];
    swprintf_s(nearbuf, 16, L"Near: %f", nearz);
    swprintf_s(farbuf, 16, L"Far: %f", farz);

    //Set up planes
    S3DVertex plane1[4] =
    {
        S3DVertex(-5, 0, 0, 0, -1, 0, SColor(255, 255, 0, 0), 0, 0),
        S3DVertex(5, 0, 0, 0, -1, 0, SColor(255, 255, 0, 0), 1, 0),
        S3DVertex(5, 0, 10, 0, -1, 0, SColor(255, 255, 0, 0), 1, 1),
        S3DVertex(-5, 0, 10, 0, -1, 0, SColor(255, 255, 0, 0), 0, 1),
    };
    u16 indices1[6] = {0, 1, 2, 2, 3, 0};

    S3DVertex plane2[4] =
    {
        S3DVertex(-25, -5, 0, 0, -1, 0, SColor(255, 0, 255, 0), 0, 0),
        S3DVertex(-15, -5, 0, 0, -1, 0, SColor(255, 0, 255, 0), 1, 0),
        S3DVertex(-15, -5, 10, 0, -1, 0, SColor(255, 0, 255, 0), 1, 1),
        S3DVertex(-25, -5, 10, 0, -1, 0, SColor(255, 0, 255, 0), 0, 1),
    };
    u16 indices2[6] = {0, 1, 2, 2, 3, 0};

    S3DVertex plane3[4] =
    {
        S3DVertex(15, -10, 0, 0, 0, 0, SColor(255, 0, 0, 255), 0, 0),
        S3DVertex(25, -10, 0, 0, 0, 0, SColor(255, 0, 0, 255), 1, 0),
        S3DVertex(25, -10, 10, 0, 0, 0, SColor(255, 0, 0, 255), 1, 1),
        S3DVertex(15, -10, 10, 0, 0, 0, SColor(255, 0, 0, 255), 0, 1),
    };
    u16 indices3[6] = {0, 1, 2, 2, 3, 0};

    //Set up Irrlicht
    IrrlichtDevice* device = createDevice(EDT_OPENGL, dimension2d<u32>(width, height),
        21, false, false, false, &receiver);
    if(device == 0)
        return 1;

    IVideoDriver* driver = device->getVideoDriver();
    ISceneManager* smgr = device->getSceneManager();
    IGUIEnvironment* guienv = device->getGUIEnvironment();
    IGUIFont* font = guienv->getFont("fonthaettenschweiler.bmp");

    //Create text nodes for near and far values output
    ITextSceneNode* neartxt = smgr->addTextSceneNode(font, nearbuf,
        SColor(255, 0, 0, 0), 0, vector3df(0, 0, 22));
    ITextSceneNode* fartxt = smgr->addTextSceneNode(font, farbuf,
        SColor(255, 0, 0, 0), 0, vector3df(0, 0, 20));

    //Info on positions
    ITextSceneNode* info = smgr->addTextSceneNode(font, L"Distance from camera to red "
        L"plane is 20, green, 25, blue 30.", SColor(255, 0, 0, 0), 0, vector3df(0, 0, -22));

    //Setup orthogonal camera
    ICameraSceneNode* camera = smgr->
        addCameraSceneNode(0, vector3df(0, 20, 0), vector3df(0,0,0));
    CMatrix4<f32> projmat;
    projmat.buildProjectionMatrixOrthoLH(60, 60, nearz, farz);
    camera->setProjectionMatrix(projmat);
    camera->setUpVector(vector3df(0, 0, 1));

    //Array for z-buffer storage
    float* zbuffer = new float[width*height];

    //Create material for planes
    SMaterial mtl;
    mtl.Lighting = false;
    mtl.BackfaceCulling = false;

    u32 keytime = 0;
    //Main loop
    while(device->run())
    {
        driver->beginScene(true, true, SColor(255,100,101,140));

        //Accept key events after each 0.15s
        if(device->getTimer()->getTime()-keytime > 150)
        {
            //Set near and far values
            if(receiver.IsKeyDown(KEY_KEY_Q))
            {
                keytime = device->getTimer()->getTime();
                if(nearz > 0)
                    nearz--;
            }
            else if(receiver.IsKeyDown(KEY_KEY_W))
            {
                keytime = device->getTimer()->getTime();
                if(nearz < farz-1)
                    nearz++;
            }
            else if(receiver.IsKeyDown(KEY_KEY_A))
            {
                keytime = device->getTimer()->getTime();
                if(farz > nearz+1)
                    farz--;
            }
            else if(receiver.IsKeyDown(KEY_KEY_S))
            {
                keytime = device->getTimer()->getTime();
                farz++;
            }
        }

        //Convert values to strings
        swprintf_s(nearbuf, 16, L"Near: %f", nearz);
        swprintf_s(farbuf, 16, L"Far: %f", farz);

        //Update text display
        neartxt->setText(nearbuf);
        fartxt->setText(farbuf);

        //Update camera
        projmat.buildProjectionMatrixOrthoLH(60, 60, nearz, farz);
        camera->setProjectionMatrix(projmat);

        //Draw text
        smgr->drawAll();

        //Reset world transformation
        driver->setTransform(ETS_WORLD, CMatrix4<f32>());

        //Set material for planes
        driver->setMaterial(mtl);

        //Draw planes
        driver->drawIndexedTriangleList(plane1, 4, indices1, 2);
        driver->drawIndexedTriangleList(plane2, 4, indices2, 2);
        driver->drawIndexedTriangleList(plane3, 4, indices3, 2);

        //Read z-Buffer
        glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, zbuffer);

        driver->endScene();
    }
    //Write z-buffer to txt file
    std::ofstream out("buffer.txt");
    for(u32 j = 0; j < height; j++)
    {
        for(u32 i = 0; i < width; i++)
            out << zbuffer[j*width+i] << ' ';
        out << std::endl;
    }

    delete[] zbuffer;
    device->drop();

    return 0;
}
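Since the original questions also asked about saving the buffer as an image, here is a hedged sketch of what could replace (or accompany) the text dump, placed just before the delete[] zbuffer line. It maps the float values to grey levels and writes them out with Irrlicht's image functions; createImage/writeImageToFile are assumed to have their Irrlicht 1.6+ signatures, and the vertical flip accounts for glReadPixels returning rows bottom-up.

Code:
// Sketch: write the captured depth values as a grayscale image.
// Assumes Irrlicht 1.6+ and depth values already in [0,1].
IImage* img = driver->createImage(ECF_A8R8G8B8, dimension2d<u32>(width, height));
for (u32 y = 0; y < height; ++y)
{
    for (u32 x = 0; x < width; ++x)
    {
        // glReadPixels fills the array bottom-up, so flip vertically.
        const f32 d = zbuffer[(height - 1 - y)*width + x];
        const u32 g = (u32)(255.0f * d);
        img->setPixel(x, y, SColor(255, g, g, g));
    }
}
driver->writeImageToFile(img, "depth.bmp");
img->drop();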