Hm, I have a solution, but it's not really meant for realtime.
The first thing you need is six square renders, one per axis direction, each with a FOV of 90°. That's easiest done using render target textures. The 28.CubeMapping example in
Irrlicht svn trunk has code which nearly does that: renderEnvironmentCubeMap renders in all 6 directions - except in your case you probably need to render into 6 individual render target textures instead of putting it all into a single cubemap texture as that code does.
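For reference, the six cameras differ only in their view direction (and up-vector, which matters for the two vertical faces). A minimal sketch of one possible direction/up table - plain C++, not Irrlicht API, and the exact order and up-vectors are assumptions you may need to adjust to match the tile order your conversion code expects:

```cpp
#include <array>
#include <cassert>

// Hypothetical sketch (not Irrlicht API): the six axis-aligned view
// directions and matching up-vectors for the cube-face cameras.
// The order here is an assumption chosen to match the tile order of
// the conversion function below (right, left, down, up, front, back).
struct Vec3 { float x, y, z; };

struct FaceCamera { const char* name; Vec3 dir; Vec3 up; };

static const std::array<FaceCamera, 6> cubeFaceCameras = {{
	{ "right", {  1,  0,  0 }, { 0, 1,  0 } },
	{ "left",  { -1,  0,  0 }, { 0, 1,  0 } },
	{ "down",  {  0, -1,  0 }, { 0, 0,  1 } },
	{ "up",    {  0,  1,  0 }, { 0, 0, -1 } },
	{ "front", {  0,  0,  1 }, { 0, 1,  0 } },
	{ "back",  {  0,  0, -1 }, { 0, 1,  0 } },
}};

// Up must be perpendicular to the view direction for each face.
float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
```

For each entry you'd then set the camera target to position + dir, the up-vector accordingly, FOV to 90° (PI/2) and aspect ratio to 1:1 before rendering into the face's render target texture.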
I'm not sure what you mean by a 230° render angle (the result here is basically a full sphere), but if you want to ensure a specific camera direction, you can also set that in this step.
Then I have a function (mostly from the stackoverflow link mentioned in the code) which calculates the equirectangular texture from those images. You just have to make sure they are all in the right direction and order (you may have to experiment). Minor note: the direction checks in that code might fail in Irrlicht 1.8 (which had over-optimized some divisions), in which case replace the extDir == 1 checks with unitDir == a checks (I'll have to rework and test that myself, but can't do it right now). You know it has gone wrong when it jumps into the last else, so you can set a breakpoint there.
edit: Seems that function expects all 6 images copied into a single texture, so you'd have to either copy the render target textures together first (easy, but slow) or rewrite it slightly so it works with 6 independent textures instead.
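Copying them together amounts to blitting each square face into its horizontal slot of a (6*size) x size image. A simplified sketch with plain pixel buffers (FaceImage is a made-up stand-in, not an Irrlicht class; with Irrlicht you'd work on IImage pixel data instead):

```cpp
#include <cstdint>
#include <vector>

// Simplified stand-in for a face render: square, tightly packed
// ARGB pixels in row-major order. Not an Irrlicht type.
struct FaceImage
{
	uint32_t size;                // width == height
	std::vector<uint32_t> pixels; // size*size entries
};

// Copy 6 equally sized square faces side-by-side into one wide image
// (width = 6*size, height = size), tile t occupying columns [t*size, (t+1)*size).
std::vector<uint32_t> combineFaces(const FaceImage (&faces)[6])
{
	const uint32_t s = faces[0].size;
	std::vector<uint32_t> combined(6u * s * s);
	for (uint32_t tile = 0; tile < 6; ++tile)
		for (uint32_t y = 0; y < s; ++y)
			for (uint32_t x = 0; x < s; ++x)
				combined[y * (6u * s) + tile * s + x] = faces[tile].pixels[y * s + x];
	return combined;
}
```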
Code: Select all
// The input is an image which consists of 6 square images (cubeImg) side-by-side in a fixed order (we use the same order as vray).
// The cubemap is created by rendering a camera from a fixed position towards each side (with 1:1 width/height ratio and 90° FOV).
// From that cubemap the function calculates a texture which can be put on a sphere.
//
// Think about putting a sphere in a box.
// For each texture coordinate, calculate the direction a line from the center needs to have to hit the sphere at those coordinates.
// We're using polar coordinates for that.
// Then extend that line until it hits the cube.
// Find out which side of the cube it hit, and from the hit point you can get a coordinate on the cubeImg.
// Based on the answer from Bartosz here: https://stackoverflow.com/questions/34250742/converting-a-cubemap-into-equirectangular-panorama
void convertVRayCubemapToEquirectangular(irr::video::IImage* equiRectImg, const irr::video::IImage* cubeImg)
{
	using namespace irr;

	if ( !cubeImg || !equiRectImg )
		return;

	core::dimension2du targetDim(equiRectImg->getDimension());
	float u, v;       // normalised texture coordinates, from 0 to 1, starting at the lower left corner
	float phi, theta; // polar coordinates
	u32 cubeFaceWidth = cubeImg->getDimension().Width / 6;
	u32 cubeFaceHeight = cubeImg->getDimension().Height;

	for (u32 j = 0; j < targetDim.Height; j++)
	{
		// rows start from the bottom
		v = 1.f - (((float)j + 0.5f) / targetDim.Height); // the +0.5f to get the pixel center
		theta = v * core::PI;
		float sinTheta = sin(theta);
		float cosTheta = cos(theta);

		for (u32 i = 0; i < targetDim.Width; i++)
		{
			// columns start from the left
			u = ((float)i + 0.5f) / targetDim.Width;
			phi = u * 2 * core::PI;

			core::vector3df unitDir(sin(phi) * sinTheta * -1.f, cosTheta, cos(phi) * sinTheta * -1.f);

			// unit direction extended until it hits one of the cube faces
			float a = core::f32_max3(core::abs_(unitDir.X), core::abs_(unitDir.Y), core::abs_(unitDir.Z));
			core::vector3df extDir(unitDir / a);

			f32 cubeU, cubeV;
			u32 xTile; // to find the correct tile in the cubeImg
			if (extDir.X == 1)
			{
				// right
				xTile = 0;
				cubeU = ((extDir.Z + 1.f) / 2.f) - 1.f;
				cubeV = (extDir.Y + 1.f) / 2.f;
			}
			else if (extDir.X == -1)
			{
				// left
				xTile = 1;
				cubeU = (extDir.Z + 1.f) / 2.f;
				cubeV = (extDir.Y + 1.f) / 2.f;
			}
			else if (extDir.Y == 1)
			{
				// up
				xTile = 3;
				cubeU = (extDir.X + 1.f) / 2.f;
				cubeV = ((extDir.Z + 1.f) / 2.f) - 1.f;
			}
			else if (extDir.Y == -1)
			{
				// down
				xTile = 2;
				cubeU = (extDir.X + 1.f) / 2.f;
				cubeV = (extDir.Z + 1.f) / 2.f;
			}
			else if (extDir.Z == 1)
			{
				// front
				xTile = 4;
				cubeU = (extDir.X + 1.f) / 2.f;
				cubeV = (extDir.Y + 1.f) / 2.f;
			}
			else if (extDir.Z == -1)
			{
				// back
				xTile = 5;
				cubeU = ((extDir.X + 1.f) / 2.f) - 1.f;
				cubeV = (extDir.Y + 1.f) / 2.f;
			}
			else // even with floats x/x should always be 1 or -1, so we shouldn't get here
			{
				cubeU = 0.f;
				cubeV = 0.f;
				xTile = 0;
			}

			cubeU = core::abs_(cubeU);
			cubeV = core::abs_(cubeV);

			f32 xPixelCenter = cubeU * (cubeFaceWidth - 1) + 0.5f;
			f32 yPixelCenter = cubeV * (cubeFaceHeight - 1) + 0.5f;
			irr::video::SColor c(cubeImg->getPixel((u32)xPixelCenter + xTile * cubeFaceWidth, (u32)yPixelCenter));

#if 1	// bilinear-style filter (not 100% certain this matches bilinear exactly, but it blends each pixel with its neighbours in 2 directions)
			f32 d = 1.f;
			irr::video::SColor cx;
			f32 xpc2 = -(f32)cubeFaceWidth;
			f32 xm = fmodf(xPixelCenter, 1.f);
			if (xm < 0.5f)
			{
				xpc2 = cubeU * (cubeFaceWidth - 1) - 0.5f;
				d = xm * 2.f;
			}
			else if (xm > 0.5f)
			{
				xpc2 = cubeU * (cubeFaceWidth - 1) + 1.5f;
				d = (1.f - xm) * 2.f;
			}
			if (xpc2 >= 0.f && xpc2 < cubeFaceWidth)
			{
				irr::video::SColor c2(cubeImg->getPixel((u32)xpc2 + xTile * cubeFaceWidth, (u32)yPixelCenter));
				cx = c.getInterpolated(c2, d);
			}
			else // TODO: wrapping options (complicated and probably hard to notice, so ignored for now)
			{
				cx = c;
			}

			irr::video::SColor cy;
			f32 ym = fmodf(yPixelCenter, 1.f);
			f32 ypc2 = -(f32)cubeFaceHeight;
			if (ym < 0.5f)
			{
				ypc2 = cubeV * (cubeFaceHeight - 1) - 0.5f;
				d = ym * 2.f;
			}
			else if (ym > 0.5f)
			{
				ypc2 = cubeV * (cubeFaceHeight - 1) + 1.5f;
				d = (1.f - ym) * 2.f;
			}
			if (ypc2 >= 0.f && ypc2 < cubeFaceHeight)
			{
				irr::video::SColor c2(cubeImg->getPixel((u32)xPixelCenter + xTile * cubeFaceWidth, (u32)ypc2));
				cy = c.getInterpolated(c2, d);
			}
			else // TODO: wrapping options (complicated and probably hard to notice, so ignored for now)
			{
				cy = c;
			}

			c = cx.getInterpolated(cy, 0.5f);
#endif
			equiRectImg->setPixel(i, j, c);

#if 0 && defined(_DEBUG)	// TEST: show the extDir results (in range 0-1)
			irr::video::SColorf dummyTestf((extDir.X + 1.f) / 2.f, (extDir.Y + 1.f) / 2.f, (extDir.Z + 1.f) / 2.f, 1.f);
			irr::video::SColor dummyTest = dummyTestf.toSColor();
			equiRectImg->setPixel(i, j, dummyTest);
#endif
		}
	}
}
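If the exact-equality checks do give you trouble (as they might in Irrlicht 1.8), one alternative is to select the face from the largest-magnitude component of unitDir and its sign, which needs no float equality at all. A self-contained sketch (plain C++, no Irrlicht types; the tile order matches the function above):

```cpp
#include <cassert>
#include <cmath>

// Pick the cube face a direction points at without comparing floats
// for exact equality: the face is determined by which component has
// the largest magnitude, and by its sign. Tile order as in the
// converter above: 0=right(+X), 1=left(-X), 2=down(-Y), 3=up(+Y),
// 4=front(+Z), 5=back(-Z).
int cubeFaceFromDirection(float x, float y, float z)
{
	const float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
	if (ax >= ay && ax >= az)
		return x >= 0.f ? 0 : 1;
	if (ay >= az)
		return y >= 0.f ? 3 : 2;
	return z >= 0.f ? 4 : 5;
}
```

Inside the converter you would then switch on the returned tile index instead of branching on extDir == 1 / extDir == -1, so the final "shouldn't get here" else becomes unreachable by construction.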
But as mentioned - not realtime. For that you'd need at least the last calculation done in a shader (I might have that one somewhere as well if I hunt around my code for a while). And you still have to render the scene 6 times (I don't think that can ever be avoided). Maybe I can recommend better options if you tell me what you are trying to accomplish.