Hi All,
As has happened to many others, work got in the way and took up basically all of my time - and it still does!
My first foray into this was to use the skeletal detection to track the user's position in real space and map that to the camera in virtual space (the Irrlicht camera). I didn't get as far as handling more than a single point.
rookie wrote:Can you please help out with examples since right now I have worked out how to make them work separately but want them to work together. something like displaying meshes using irrlicht on top of video from kinect??
If you look at the OpenNI/NITE examples, more specifically the simple skeleton example, you see the rendered output of the skeleton - is this the "video" you're after? If so, that's the skeleton points being rendered to a GLUT surface every frame, rather than a video. I'm sure you *can* get the video feed from the Kinect camera, but that was never something I was after.
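If you do want the camera feed, the rough idea would be to read the RGB frames through OpenNI's xn::ImageGenerator and copy them into an Irrlicht texture drawn as a 2D background before the scene. A quick sketch of that (untested here - the names are placeholders and error checking is stripped):

[code]
// Sketch: copy the Kinect RGB frame (via OpenNI) into an Irrlicht texture.
#include <XnCppWrapper.h>
#include <irrlicht.h>
using namespace irr;

xn::Context        g_Context;   // assumed initialised elsewhere (g_Context.Init() etc.)
xn::ImageGenerator g_Image;

video::ITexture* createKinectTexture(video::IVideoDriver* driver)
{
    g_Image.Create(g_Context);  // RGB stream (needs the SensorKinect driver installed)
    return driver->addTexture(core::dimension2d<u32>(640, 480),
                              "kinect_rgb", video::ECF_A8R8G8B8);
}

void updateKinectTexture(video::ITexture* tex)
{
    xn::ImageMetaData md;
    g_Image.GetMetaData(md);                     // latest 24-bit RGB frame
    const XnRGB24Pixel* src = md.RGB24Data();

    // Note: some drivers round texture sizes up to powers of two - check tex->getSize().
    u32* dst = (u32*)tex->lock();
    const u32 pitch = tex->getPitch() / 4;       // pixels per row in the locked texture
    for (XnUInt32 y = 0; y < md.YRes(); ++y)
        for (XnUInt32 x = 0; x < md.XRes(); ++x)
        {
            const XnRGB24Pixel& p = src[y * md.XRes() + x];
            dst[y * pitch + x] = video::SColor(255, p.nRed, p.nGreen, p.nBlue).color;
        }
    tex->unlock();
}
[/code]

Draw the texture with driver->draw2DImage() at the start of each frame, then smgr->drawAll() puts the meshes on top of it.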
anoki wrote:Hey,
it sounds really interesting. If it is running well, i would be happy
if you post a small description here.
I got a kinect also, but just tried it with the Kinemote.
Kinemote is not so accurate.
Anoki
The current version I built was limited to 30fps - the frame rate of the Kinect itself. By adding simple multi-threading you can have Irrlicht render at its maximum (on my machine, that's around 2000fps on simple rooms) while the Kinect feeds in data whenever it has some.
(I used the same technique with OpenCV, where the web-cam was restricted to around 25fps.)
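The decoupling itself is nothing Kinect-specific: a background thread blocks on new sensor data and writes into a shared structure, and the render loop just reads whatever is latest. A minimal sketch of the idea (using C++11 threads purely for brevity - the names and the single-point payload are made up):

[code]
// Sketch: let OpenNI block at ~30fps on its own thread while Irrlicht renders flat out.
#include <thread>
#include <mutex>
#include <atomic>
#include <XnCppWrapper.h>

struct SharedKinectData
{
    std::mutex lock;
    XnPoint3D  point;           // example payload: one tracked point (real-world mm)
    bool       valid = false;
};

SharedKinectData  g_Shared;
std::atomic<bool> g_Running(true);
xn::Context       g_Context;    // assumed initialised elsewhere

void kinectThread()
{
    while (g_Running)
    {
        g_Context.WaitAndUpdateAll();      // blocks until the sensor delivers a new frame
        XnPoint3D pt = { 0, 0, 0 };        // fill this from the skeleton/hands generator
        std::lock_guard<std::mutex> guard(g_Shared.lock);
        g_Shared.point = pt;
        g_Shared.valid = true;
    }
}

// Start it with std::thread worker(kinectThread); in the Irrlicht loop, take the lock,
// copy g_Shared.point out, and carry on rendering at whatever rate the GPU allows.
[/code]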
I've only just started to dig out the project again - sorry for the lack of updates or the sheer length of time it's taken me to get back on this!
The first thing I've done is compile a 64-bit version of Irrlicht.
The Kinect demos in OpenNI/NITE refused to build in 32-bit mode on my 64-bit machine... this should simply be a matter of changing the build target, but I figured having everything target 64-bit would be better anyway.
Next, I used the 64-bit libs from OpenNI/NITE - make sure you grab the latest releases of both.
The driver I'm using is the SensorKinect available on GitHub.
I then trawled through the SimpleSkeleton demo provided by OpenNI to get a feel for what would be needed. OpenNI/NITE is event driven, so you register callbacks against your target functions and they fire when something happens - gesture recognition, skeleton detection and so on (I've sketched the rough shape of this a little further down).
And finally, I went through the OpenNI documentation where they explain how to do "simple" things. This helped me to strip out any GLUT items that are heavily used in the demos.
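For anyone who hasn't dug into it yet, the callback pattern boils down to something like this - condensed from the shape of the OpenNI samples, with error checking stripped (older NITE releases also require pose detection before calibration, which I've left out):

[code]
// Sketch: register OpenNI user/skeleton callbacks so joint data starts flowing.
#include <XnCppWrapper.h>

xn::UserGenerator g_User;      // assumed created elsewhere with g_User.Create(context)

void XN_CALLBACK_TYPE onNewUser(xn::UserGenerator& gen, XnUserID id, void*)
{
    // A person has walked into view - ask NITE to calibrate them.
    gen.GetSkeletonCap().RequestCalibration(id, TRUE);
}

void XN_CALLBACK_TYPE onLostUser(xn::UserGenerator&, XnUserID, void*) {}

void XN_CALLBACK_TYPE onCalibrationStart(xn::SkeletonCapability&, XnUserID, void*) {}

void XN_CALLBACK_TYPE onCalibrationEnd(xn::SkeletonCapability& cap, XnUserID id,
                                       XnBool success, void*)
{
    if (success)
        cap.StartTracking(id);   // joint positions are available from this point on
}

void registerKinectCallbacks()
{
    XnCallbackHandle hUser, hCalib;
    g_User.RegisterUserCallbacks(onNewUser, onLostUser, NULL, hUser);
    g_User.GetSkeletonCap().RegisterCalibrationCallbacks(onCalibrationStart,
                                                         onCalibrationEnd, NULL, hCalib);
    g_User.GetSkeletonCap().SetSkeletonProfile(XN_SKEL_PROFILE_ALL);
}
[/code]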
If you look at the SimpleSkeleton demo, you can see where they iterate through all of the detected skeletal points and render these to the GLUT texture - they also draw lines between them.
I would imagine that it would be fairly simple to track the detected point of a Kinect skeleton and map that to the same point in an Irrlicht skeleton... but that's assuming they use the same number and position of points in the skeleton. This is something I'll take a look at!
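Until I've checked how the two skeletons line up, the naive version is just to read the joint positions OpenNI exposes and drop a debug node on each one - roughly like this (joint list shortened, and the scale factor and axis handling are guesses you'd tune for your scene):

[code]
// Sketch: read Kinect joint positions each frame and place Irrlicht sphere nodes on them.
#include <XnCppWrapper.h>
#include <irrlicht.h>
using namespace irr;

extern xn::UserGenerator g_User;   // the user generator from the earlier sketch

void updateSkeletonNodes(scene::ISceneManager* smgr, XnUserID user,
                         core::array<scene::ISceneNode*>& nodes)
{
    const XnSkeletonJoint joints[] = { XN_SKEL_HEAD, XN_SKEL_NECK, XN_SKEL_TORSO,
                                       XN_SKEL_LEFT_HAND, XN_SKEL_RIGHT_HAND };
    const u32 count = sizeof(joints) / sizeof(joints[0]);

    while (nodes.size() < count)                 // lazily create one sphere per joint
        nodes.push_back(smgr->addSphereSceneNode(2.0f));

    for (u32 i = 0; i < count; ++i)
    {
        XnSkeletonJointPosition jp;
        g_User.GetSkeletonCap().GetSkeletonJointPosition(user, joints[i], jp);
        if (jp.fConfidence < 0.5f)
            continue;                            // NITE flags low-confidence joints

        // Kinect reports real-world millimetres; scale down to whatever your scene uses.
        // The axis mapping below is a guess - flip/adjust to suit your camera setup.
        const f32 scale = 0.01f;
        nodes[i]->setPosition(core::vector3df(jp.position.X * scale,
                                              jp.position.Y * scale,
                                              jp.position.Z * scale));
    }
}
[/code]

Drawing lines between the joints with driver->draw3DLine() would then give you the same stick figure as the demo, just inside the Irrlicht scene.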
My first plans, as I've said above, were to make the 3D camera move in virtual space relative to real space.
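Concretely, that was little more than feeding one joint (the torso, say - I only ever handled a single point) into the Irrlicht camera every frame. It's the same GetSkeletonJointPosition() call as above, just pointed at the camera; "camera" and "user" here are assumed to be your ICameraSceneNode* and tracked XnUserID, and the scale/axes again need tuning:

[code]
// Sketch: move the Irrlicht camera in virtual space as the user moves in real space.
XnSkeletonJointPosition torso;
g_User.GetSkeletonCap().GetSkeletonJointPosition(user, XN_SKEL_TORSO, torso);
if (torso.fConfidence >= 0.5f)
{
    const irr::f32 scale = 0.01f;                 // mm -> scene units, tune to taste
    camera->setPosition(irr::core::vector3df(torso.position.X * scale,
                                             torso.position.Y * scale,
                                             torso.position.Z * scale));
    // camera->setTarget() / rotation left untouched in this sketch.
}
[/code]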
After that, I was using the "wave" gesture to detect and then track hand points - which is separate from the skeleton tracking. I then wanted to interact with the virtual world by moving around and pushing objects with the detected hand.
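The gesture/hand side is its own pair of generators rather than part of the skeleton. Very roughly, following the shape of the OpenNI/NITE hand-tracking samples (error handling stripped, and the interaction with scene objects is only hinted at in the comments):

[code]
// Sketch: use the "Wave" gesture to start NITE hand tracking, then follow the hand point.
#include <XnCppWrapper.h>

xn::GestureGenerator g_Gesture;   // both assumed created elsewhere via Create(context)
xn::HandsGenerator   g_Hands;

void XN_CALLBACK_TYPE onGesture(xn::GestureGenerator& gen, const XnChar* name,
                                const XnPoint3D* /*idPos*/, const XnPoint3D* endPos, void*)
{
    gen.RemoveGesture(name);              // stop listening for the gesture...
    g_Hands.StartTracking(*endPos);       // ...and start tracking a hand where it ended
}

void XN_CALLBACK_TYPE onGestureProgress(xn::GestureGenerator&, const XnChar*,
                                        const XnPoint3D*, XnFloat, void*) {}

void XN_CALLBACK_TYPE onHandCreate(xn::HandsGenerator&, XnUserID,
                                   const XnPoint3D*, XnFloat, void*) {}

void XN_CALLBACK_TYPE onHandUpdate(xn::HandsGenerator&, XnUserID,
                                   const XnPoint3D* pos, XnFloat, void*)
{
    // pos is the hand position in real-world mm every frame - this is the point
    // you'd use to push scene nodes around (e.g. a node parented to it plus a
    // simple distance check against the objects).
}

void XN_CALLBACK_TYPE onHandDestroy(xn::HandsGenerator&, XnUserID, XnFloat, void*) {}

void setupHandTracking()
{
    XnCallbackHandle hGesture, hHands;
    g_Gesture.RegisterGestureCallbacks(onGesture, onGestureProgress, NULL, hGesture);
    g_Hands.RegisterHandCallbacks(onHandCreate, onHandUpdate, onHandDestroy, NULL, hHands);
    g_Gesture.AddGesture("Wave", NULL);   // NULL = no bounding box restriction
}
[/code]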
Hopefully things will quieten down a little and I can take a look... I'm still doing Kinect stuff, but it's wrappers and native interfaces for other languages/engines/applications, so I may not be able to pick this up for some time :-/
Good luck!