Now that 32-bit rendering is the default in version 0.14, only my apps compiled with DX show up OK.
If compiled with OpenGL they show up in 16-bit.
Anyone else having this problem?
DX:
http://www.danielpatton.com/afecelis/Irrlicht/skydx.jpg
GL: (notice the 16-bit stripes)
http://www.danielpatton.com/afecelis/Irrlicht/skygl.jpg
Apps don't show up in 32-bit in OpenGL.
Hmmm... this is strange.
I changed the skybox in the examples to something I knew would band due to its nature (in this case a gradient), so I could look at it with the example programs. It looks fine in both DirectX and OpenGL to me.
DirectX:
OpenGL:
But if I set the color depth of my desktop to 16-bit to force the banding, SOMETHING is obviously going on, because what is displayed then differs...
DirectX:
OpenGL:
very strange...
(BTW, the green border around them is because I made a green gradient first, then went back to make it black and white and lowered the JPEG compression.)
- Posts: 11
- Joined: Mon Nov 28, 2005 5:00 am
- Location: N.Y.
I could be wrong, but I think this is a clear demonstration of the different uses of the color components in 16-bit mode:
As you can see, the OpenGL picture has 16 distinct levels of grey, i.e. 16 intensity levels, which means 4 bits per color component. Since 4 bits * 3 [for RGB] = 12, OpenGL is apparently reserving the remaining 4 bits for the alpha channel. This is also known as "4444 mode".
The DirectX screen has 32 distinct levels, plus a very thin line in the middle of them. I think this is "565 mode", where 5 bits are used for red, 6 bits for green and 5 bits for blue: red and blue get 32 [2^5] intensity levels and green gets 64, since the eye is more sensitive to green tones. However, a 565 DirectX surface has no alpha channel.
Greetings,
Peter
Thanks for the comments, guys, and for the explanation, Peter. It's weird, since my desktop depth is set to 32 bits. Perhaps it's an ATI issue? I'll try it on another box with Nvidia graphics to see if it makes any difference.
But it's something that grabs my attention, since it didn't happen in earlier versions, when you had to set it manually:
Code:
driver->setTextureCreationFlag(video::ETCF_ALWAYS_32_BIT, true);
Let's see what happens with Nvidia.
I have the same problem... I can't see any difference when I set the ETCF_ALWAYS_32_BIT or ETCF_ALWAYS_16_BIT flags in OpenGL with a 32-bit TGA texture. My screen uses 32-bit. How do I solve this problem/bug?
My project in the forum: ATMOsphere - new dynamic sky dome for Irrlicht
This is very, very easy! (Thanks to Phil for the idea!) Irrlicht seems to ignore the 32-bit flag for OpenGL... Change the following line:
COpenGLTexture.cpp, line 173:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ImageSize.Width,
    ImageSize.Height, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, ImageData);
to
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, ImageSize.Width,
    ImageSize.Height, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, ImageData);
The third argument is the internal format: the unsized GL_RGBA lets the driver pick the depth, while the sized GL_RGBA8 requests 8 bits per channel. This single character forces OpenGL to use 32-bit textures... Have fun!