I had to rewrite the Oculus Rift support for Gekkeiju Online to get the DK2 working, so I decided to clean up the code a bit and make it available to others as well. The code has been tested with Irrlicht 1.7.3 and 1.8.1, with both DirectX 9 and OpenGL, under Windows.
https://github.com/Suvidriel/IrrOculusVR
It's currently not using the Oculus SDK's own rendering but handles the distortion with distortion meshes. I've also been unable to test the Direct-to-HMD mode, but the code contains the same parts as the documentation, so it should work at least in theory. The shaders used are from the SDK.
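For reference, creating the per-eye distortion meshes through the 0.4.x SDK looks roughly like this (a sketch only; hmd_ and eyeFov stand for whatever the repo actually names them):
Code:
// Client distortion-mesh creation with OVR SDK 0.4.x.
// Assumed names: hmd_ (ovrHmd), eyeFov (ovrFovPort[2]).
for (int i = 0; i < ovrEye_Count; ++i)
{
    ovrEyeType eye = hmd_->EyeRenderOrder[i];
    ovrDistortionMesh meshData;
    ovrHmd_CreateDistortionMesh(hmd_, eye, eyeFov[eye],
        ovrDistortionCap_Chromatic | ovrDistortionCap_TimeWarp, &meshData);

    // Copy meshData.pVertexData / meshData.pIndexData into an Irrlicht
    // vertex/index buffer for the distortion pass, then free the SDK copy.
    ovrHmd_DestroyDistortionMesh(&meshData);
}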
There's also a really small example of using the Oculus Rift rendering with an FPS-style camera where body and head rotations are controlled individually.
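The body/head split boils down to something like the following (illustrative only; bodyYawRadians, camera and the axis flip are my own assumptions, not the repo's exact code):
Code:
// Combine a separately controlled body yaw with the tracked head
// orientation, then aim the Irrlicht camera along the result.
ovrTrackingState ts = ovrHmd_GetTrackingState(hmd_, ovr_GetTimeInSeconds());
OVR::Quatf head(ts.HeadPose.ThePose.Orientation);
OVR::Quatf body(OVR::Vector3f(0.f, 1.f, 0.f), bodyYawRadians); // yaw around +Y
OVR::Quatf total = body * head;

// The z flip converts OVR's right-handed forward (-Z) to Irrlicht's
// left-handed system; exact sign handling may need tuning.
OVR::Vector3f fwd = total.Rotate(OVR::Vector3f(0.f, 0.f, -1.f));
camera->setTarget(camera->getPosition()
    + irr::core::vector3df(fwd.x, fwd.y, -fwd.z));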
Oculus Rift Development Kit 2 support
Re: Oculus Rift Development Kit 2 support
Really cool, I will try it once I get my DK2.
Re: Oculus Rift Development Kit 2 support
This is fantastic. You've done all the difficult stuff so I don't have to. Thanks.
On my DK2, using the demo in your repo, I'm finding that the view continues to move for a fraction of a second after my head stops moving. It's just enough to be noticeable and it's not something I've seen in other DK2 apps.
I'm more than happy to try and sort this out myself but, as I'm only just starting out with the Rift, I wondered if you could offer some guidance. Is this a known problem when combining Irrlicht with the DK2? Are there known issues with the head tracking? I know it has some kind of predictive movement system which is perhaps not getting correctly configured. Any suggestions would be gratefully received.
Thanks.
Re: Oculus Rift Development Kit 2 support
I haven't noticed that effect but it's likely related to the updating of absolute positions - they tend to cause quite a lot of headaches. Possibly the position update happens 1 frame too late.
Also I haven't tested the code with the SDK version 0.4.3 yet so it's possible they changed the timewarp somewhat too.
I'll look into this soonish as the issue is not present in Gekkeiju Online in which I use nearly similar code.
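If you want to poke at that theory, forcing the transform update before the position is consumed would look something like this (a sketch; cameraNode and newPosition are placeholders):
Code:
// Irrlicht normally recalculates absolute transforms during the scene
// manager's draw pass, so a position set after that is only visible one
// frame later. Forcing the update avoids the one-frame delay. Note that
// updateAbsolutePosition() uses the parent's cached absolute transform,
// so the parent chain must already be up to date.
cameraNode->setPosition(newPosition);
cameraNode->updateAbsolutePosition();
irr::core::vector3df worldPos = cameraNode->getAbsolutePosition();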
Re: Oculus Rift Development Kit 2 support
I spent some more time looking at this yesterday.
I was wrong about there being lag. What I was seeing was a kind of optical illusion: because I was testing with unlit solid-colour models hanging in space, it was difficult to see what was really happening. I think the eye compensates during head movement, but as soon as it stops the brain has to reinterpret what it's seeing, and this adjustment is what looked like lag.
So, anyway. If I look straight ahead at the camera and roll my head (i.e. angular movement around the Z axis), everything is fine. However, if I shake or nod my head (i.e. angular movement around the Y or X axis), then the scene moves more than it should, i.e. the floor, walls, etc. appear to swing; the parallax effect is too great. I have a textured floor now, and if I look down and turn my head from side to side the effect is quite clear. I don't see the same thing in the "tiny room" or Tuscany demos.
I think what's going on is that the displacement of the eyes (or rather the effective camera for each image) is incorrectly offset (along the Z axis when looking straight ahead) from the pivot point of the head. So the view I get is what I'd see if my eyes were some distance in front of my head: the spatial translation of the eye cameras is magnified because of their distance from the pivot point. From a quick look at the code, I couldn't see anything obviously wrong, and a few pseudo-random changes didn't have the desired effect. I don't really understand the Oculus head model yet, so I need to read about that next.
FWIW, I'm using SDK 0.4.3.
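To make the suspicion concrete, the head model I'd expect is roughly this (names are mine, not the repo's); a spurious z term in the offset sweeps the cameras through an arc as the head rotates:
Code:
// Each eye camera should sit at the tracked head pose plus a small
// offset rotated by the head orientation. The only forward component
// should come from the real eye relief; a large z exaggerates parallax.
OVR::Posef head(ts.HeadPose.ThePose);          // ts: ovrTrackingState
OVR::Vector3f eyeOffset(0.032f, 0.f, 0.f);     // ~half IPD, sign per eye
OVR::Vector3f eyePos = head.Translation + head.Rotation.Rotate(eyeOffset);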
Re: Oculus Rift Development Kit 2 support
Have you tried adjusting the world scale parameter when initializing the OculusRenderer class? I think in the example I set it to 20.0f. You could try a much smaller scale and see if it helps any. The scale affects the position tracking, so it could possibly affect the Oculus API's neck model too.
In my Ludum Dare entry from a few months back I used a world scale of 15.0f. I didn't notice the issue in that, but it could also be that I'm just too blind to notice it.
http://ludumdare.com/compo/ludum-dare-3 ... &uid=38976
Maybe try with values 10 and 1
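For clarity, the knob I mean is just the scale passed when constructing the renderer; hypothetically (the exact parameter list is in the repo's example, not reproduced here):
Code:
// worldScale maps metres (the SDK's unit) to engine units, so with
// 1 unit = 1 m it should be 1.0f. Other arguments are placeholders.
const float worldScale = 1.0f;
OculusRenderer renderer(device, worldScale /* , ... */);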
Re: Oculus Rift Development Kit 2 support
Hi again,
Yes, I changed the world scale to 1.0f because I wanted to stick to 1 unit = 1m.
After some more testing, I got things working so that the view is absolutely perfect (IMO). When you rotate your head the world you're in appears to stay stock still, just like in the official demos. Happy about that!
After unpicking all the changes I made along the way, the only thing I eventually altered was the line which initializes eyeDist_[eye] (around 113). Instead of using the values from HmdToEyeView as (x*-10.0f, y, z), I'm using (x*-1.0f, y, 1.0f). The change from 10.0 to 1.0 is to match my worldScale (should I be using the actual worldScale variable here?). My z term is a constant 1.0f (should that really be worldScale too?).
The values found in HmdToEyeView are something like (0.032, 0, 0), so making the z term non-zero is clearly a hack, or there's a bug in the OVR library. But why 1 unit? That's a big offset!
I'm not sure yet where this HMD-to-eye offset should really go, or why it exists. I'm wondering if there's something going on in the head->eye->camera transformation where one of them needs to move.
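For anyone following along, the whole diff amounts to this (sketched from memory, so treat the exact surrounding code as approximate):
Code:
// OculusRenderer.cpp, around line 113.
// Before: hard-coded 10.0f scale, z passed through from HmdToEyeView.
eyeDist_[eye] = irr::core::vector3df(HmdToEyeView.x * -10.0f,
                                     HmdToEyeView.y, HmdToEyeView.z);
// After: the scale matches my worldScale of 1.0f, and z is the 1.0f hack.
eyeDist_[eye] = irr::core::vector3df(HmdToEyeView.x * -1.0f,
                                     HmdToEyeView.y, 1.0f);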
Re: Oculus Rift Development Kit 2 support
Ah yes, the multiplier should definitely be the same as -worldScale. HmdToEyeView is basically half of the IPD, and that can be controlled with the Config Tool.
In the Oculus examples they do things slightly differently by tracking the position of each eye individually, but for this one I'm reading the head position instead, as was done with the older version of the API. What they do inside the API seems more or less identical - just transforming HmdToEyeView with the head's orientation.
I guess the z-value would be the distance of your eyes from the center of your head along the z-axis, also multiplied by worldScale. This value should actually be configurable in the Config Tool as well, or at least it seems so after looking at the Oculus API. They calculate it in ovrHmd_GetRenderDesc() by doing something like z = eyeCenterRelief - hmd.EyeLeft.ReliefInMeters. If it returns 0 for the z-value then something seems odd - especially if the official demos work.
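Where the SDK hands this out, roughly (SDK 0.4.x field names; the last line is a sketch of what the renderer would then do with it):
Code:
// ovrHmd_GetRenderDesc() returns the per-eye offset from the head pose;
// x is the half-IPD and z comes from the eye-relief terms above.
ovrEyeRenderDesc desc = ovrHmd_GetRenderDesc(hmd_, ovrEye_Left,
                                             hmd_->DefaultEyeFov[ovrEye_Left]);
ovrVector3f off = desc.HmdToEyeViewOffset;
// Scaled into engine units before being applied to the camera:
// eyeDist_[eye] = irr::core::vector3df(-off.x, off.y, off.z) * worldScale;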
Re: Oculus Rift Development Kit 2 support
To compile the project on Linux:
In main.cpp, change the driver type to EDT_OPENGL and get the window handle with:
window = driver->getExposedVideoData().OpenGLLinux.X11Display;
In OculusRenderer.cpp, comment out:
ovrHmd_AttachToWindow(hmd_, window, NULL, NULL);
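Both changes in context (a sketch; surrounding code as in the repo). Dropping ovrHmd_AttachToWindow should be safe since, as far as I know, direct-to-HMD mode was Windows-only in SDK 0.4.x:
Code:
// main.cpp: use the OpenGL driver and grab the X11 display handle.
device = irr::createDevice(irr::video::EDT_OPENGL /* , ... */);
window = driver->getExposedVideoData().OpenGLLinux.X11Display;

// OculusRenderer.cpp: skip window attachment on Linux.
// ovrHmd_AttachToWindow(hmd_, window, NULL, NULL);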
Makefile:
HEADERS=-I../irrlicht/include -I../ovr_sdk_linux_0.4.4/LibOVR/Include -I../ovr_sdk_linux_0.4.4/LibOVR/Src
LIBS=-L../irrlicht/lib/Linux -L../ovr_sdk_linux_0.4.4/LibOVR/Lib/Linux/Release/x86_64 -lIrrlicht -lstdc++ -lm -ludev -lovr -lGL -lEGL -lGLU -lXext -lX11 -lXxf86vm -lXi -lXrandr -lXinerama -lpthread -lrt

example: main.o OculusRenderer.o
	g++ -o example main.o OculusRenderer.o $(LIBS)

%.o: %.cpp
	g++ -c $(HEADERS) $< -o $@

clean:
	rm -f *.o
Re: Oculus Rift Development Kit 2 support
Has anyone gotten this to work with the direct render mode with DK2? I'm unable to do so with the current SDK at least. Everything else works fine, though.
EDIT: Order of operations matters, apparently, which I found out from this: http://renderingpipeline.com/2014/07/op ... -rift-dk2/
Additionally, it's necessary to use ovrHmd_BeginFrame/ovrHmd_EndFrame, drawing directly to two different render targets and submitting them (rather than drawing to viewports with the distortion shader applied).
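For anyone else attempting it, the shape of that loop in SDK 0.4.x is roughly the following (a sketch; eyeDesc and eyeTextures stand for your own ovrEyeRenderDesc[2] and ovrGLTexture[2] setup, and the render-target binding is omitted):
Code:
// SDK-rendered frame: poses in, two eye textures out; the SDK applies
// distortion and timewarp itself inside ovrHmd_EndFrame().
ovrHmd_BeginFrame(hmd, 0);
ovrVector3f offsets[2] = { eyeDesc[0].HmdToEyeViewOffset,
                           eyeDesc[1].HmdToEyeViewOffset };
ovrPosef poses[2];
ovrHmd_GetEyePoses(hmd, 0, offsets, poses, NULL);
for (int i = 0; i < ovrEye_Count; ++i)
{
    ovrEyeType eye = hmd->EyeRenderOrder[i];
    // Bind this eye's render target and draw the scene from poses[eye].
}
ovrHmd_EndFrame(hmd, poses, &eyeTextures[0].Texture);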