There are going to be some hardware ray tracers coming out in the not-too-distant future. If/when things go more in that direction, do you think that Irrlicht will have to be completely rewritten? Does anyone have a good perspective on how libraries like OpenGL will differ from how they are currently?
Just curious what people think about this, since it may be an issue in the not too distant future.
JoshM
I had to think about this when implementing a software raytracer video driver for Irrlicht (working name EDT_BLINDRT, lol). It's almost ready for real use and supports nearly all material types, lighting, fog, etc.
There are some issues with Irrlicht's immediate-mode nature. In BlindRT I had to store all drawn mesh buffers along with their world and view transformations; then on endScene I trace the entire scene using the stored values, inverse-transforming each ray with the object's worldView matrix so that each object has the same transformation it had when it was drawn.
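Roughly, that deferred drawing looks something like the sketch below. It is only a minimal illustration with made-up names (DrawRecord, RaytraceDriver, etc.), not the actual BlindRT or Irrlicht code: the driver records every mesh buffer together with the transform that was active when it was drawn, and at endScene it traces the frame by moving each camera ray into the object's model space with the cached inverse matrix instead of transforming the geometry.

[code]
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 }; };
struct Ray  { Vec3 origin, dir; };

// Stand-in for a real mesh buffer (triangles in model space).
struct MeshData {};

// What the driver remembers about one drawMeshBuffer() call.
struct DrawRecord {
    const MeshData* mesh;
    Mat4 worldView;        // transform active when the buffer was drawn
    Mat4 worldViewInverse; // cached inverse, used to move rays into model space
};

// Column-vector style transform helpers (point: w = 1, direction: w = 0).
Vec3 transformPoint(const Mat4& t, const Vec3& p) {
    return { t.m[0]*p.x + t.m[4]*p.y + t.m[8]*p.z  + t.m[12],
             t.m[1]*p.x + t.m[5]*p.y + t.m[9]*p.z  + t.m[13],
             t.m[2]*p.x + t.m[6]*p.y + t.m[10]*p.z + t.m[14] };
}
Vec3 transformDir(const Mat4& t, const Vec3& d) {
    return { t.m[0]*d.x + t.m[4]*d.y + t.m[8]*d.z,
             t.m[1]*d.x + t.m[5]*d.y + t.m[9]*d.z,
             t.m[2]*d.x + t.m[6]*d.y + t.m[10]*d.z };
}

struct RaytraceDriver {
    std::vector<DrawRecord> records;

    // Called instead of rendering immediately: just remember the call.
    void drawMeshBuffer(const MeshData* mesh, const Mat4& worldView,
                        const Mat4& worldViewInverse) {
        records.push_back({ mesh, worldView, worldViewInverse });
    }

    // Trace the whole frame using the recorded state.
    void endScene() {
        Ray cameraRay{ { 0, 0, 0 }, { 0, 0, 1 } }; // one ray, for illustration
        for (const DrawRecord& r : records) {
            // Move the ray into the object's model space so the geometry can
            // be intersected exactly as it was when the buffer was submitted.
            Ray local;
            local.origin = transformPoint(r.worldViewInverse, cameraRay.origin);
            local.dir    = transformDir(r.worldViewInverse, cameraRay.dir);
            // intersect(local, *r.mesh) and shading would go here.
            std::printf("tracing one recorded mesh buffer\n");
        }
        records.clear(); // immediate-mode style: forget everything each frame
    }
};

int main() {
    RaytraceDriver driver;
    MeshData cube;
    Mat4 identity; // pretend the world*view transform is the identity here
    driver.drawMeshBuffer(&cube, identity, identity);
    driver.endScene();
}
[/code]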
I also used the HardwareLink feature to store an acceleration structure (kd-tree, BVH, octree, etc., similar in spirit to a BSP tree) for each mesh so that I only have to build it once; if a hardware mesh buffer is drawn again, it just retrieves the acceleration structure from the hardware buffer map.
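Conceptually, that per-buffer caching is just a map from the mesh buffer to its lazily built tree, along the lines of the hypothetical sketch below (AccelStructure and AccelCache are illustrative names, not Irrlicht's actual hardware buffer API):

[code]
#include <map>
#include <memory>

struct MeshData {}; // stand-in for a mesh buffer (triangles in model space)

// Stand-in for a real kd-tree / BVH / octree built over one mesh buffer.
struct AccelStructure {
    explicit AccelStructure(const MeshData& /*mesh*/) { /* build the tree */ }
};

class AccelCache {
public:
    // Return the cached structure for this buffer, building it on first use.
    // Re-drawing the same buffer in later frames reuses the existing tree.
    const AccelStructure& get(const MeshData* buffer) {
        auto it = cache_.find(buffer);
        if (it == cache_.end())
            it = cache_.emplace(buffer,
                                std::make_unique<AccelStructure>(*buffer)).first;
        return *it->second;
    }

    // Should be called when a buffer is destroyed or its geometry changes,
    // otherwise the cache keeps a stale (or dangling) entry.
    void invalidate(const MeshData* buffer) { cache_.erase(buffer); }

private:
    std::map<const MeshData*, std::unique_ptr<AccelStructure>> cache_;
};
[/code]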
All in all it actually works OK, and you can add and manipulate scene nodes like you would with any other Irrlicht driver. A couple of small things I noticed can be tricky: negatively scaled nodes (e.g. 0, -1, 0), and the fact that you can't access the scene manager from the video driver (raytracers are scene-based, so drawing is easier when you have access to the current scene information; hence I had to store a list of drawn nodes and draw them all in one go at endScene).
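For what it's worth, one common reason negative scales are awkward in a tracer (just a guess at the issue, not necessarily what BlindRT does): a world matrix with a negative or zero scale component mirrors the geometry, which flips triangle winding and normal orientation, so after transforming a ray into model space you have to detect that and compensate. Checking the sign of the upper-3x3 determinant is one way to do it:

[code]
#include <cstdio>

// Row-major 3x3 matrix: the rotation/scale part of a node's world transform.
struct Mat3 { float m[9]; };

float determinant(const Mat3& a) {
    return a.m[0] * (a.m[4]*a.m[8] - a.m[5]*a.m[7])
         - a.m[1] * (a.m[3]*a.m[8] - a.m[5]*a.m[6])
         + a.m[2] * (a.m[3]*a.m[7] - a.m[4]*a.m[6]);
}

int main() {
    // A scale of (1, -1, 1): the determinant is negative, so winding flips.
    Mat3 mirrored{ { 1, 0, 0,   0, -1, 0,   0, 0, 1 } };
    if (determinant(mirrored) < 0.0f)
        std::printf("negative scale: flip winding/normals before shading\n");
    return 0;
}
[/code]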
There's not much difference between using a hardware raytracer and a software one: the hardware ones will just expose some API, or perhaps be written entirely in CUDA/OpenCL, which you then interface with the Irrlicht code, so the same issue of needing scene access still exists.
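In other words, whatever actually casts the rays can sit behind the same boundary; only the implementation changes between a CPU loop and a CUDA/OpenCL dispatch. A hypothetical sketch of that boundary (all names made up for illustration):

[code]
#include <cstdio>
#include <memory>

struct Scene {};       // whatever scene description the driver accumulated
struct Framebuffer {}; // destination image

// The engine only talks to this interface; it never cares how rays are cast.
class ITracerBackend {
public:
    virtual ~ITracerBackend() = default;
    virtual void trace(const Scene& scene, Framebuffer& target) = 0;
};

// Pure CPU implementation.
class SoftwareTracer : public ITracerBackend {
public:
    void trace(const Scene&, Framebuffer&) override {
        std::printf("tracing on the CPU\n");
    }
};

// Hypothetical GPU implementation: it would upload the scene and launch a
// CUDA/OpenCL kernel, but the engine-facing call is exactly the same.
class GpuTracer : public ITracerBackend {
public:
    void trace(const Scene&, Framebuffer&) override {
        std::printf("dispatching a GPU kernel\n");
    }
};

int main() {
    Scene scene;
    Framebuffer image;
    std::unique_ptr<ITracerBackend> tracer(new SoftwareTracer());
    tracer->trace(scene, image); // swap in GpuTracer without touching callers
}
[/code]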
Cheers
ShadowMapping for Irrlicht!: Get it here
Need help? Come on the IRC!: #irrlicht on irc://irc.freenode.net
Sorry for my slow reply and thank you for your quick response.
I am going to have to look into CUDA/OpenCL because I don't know what they are. I think that part of my previous question was rather silly: of course you won't be able to throw out the current graphics engine, since the transition to raytracing hardware will take a long time, if it ever fully happens.
I think it's awesome that you're working on a ray tracer for Irrlicht, and I bet it's a lot of fun to work on. I really wonder how the APIs will look.
Thanks for your response.
JoshM