I am investigating engines for a medical visualisation application. As well as traditional 3D views, it needs to simulate imaging modalities such as ultrasound and X-ray.
Irrlicht is already scoring points for having a C# binding, and I believe it allows you to freely write custom shaders; is this right?
Can anyone comment on whether the engine is flexible enough out of the box for this kind of thing?
I'm also interested in the animation system. What we want to do is have animations of parts of the body which can be transitioned together... e.g. a heart with/without a valve problem. Is the built-in animation system likely to make this relatively straightforward, both for tweaking a single animation (like simply changing one valve) and for morphing between two animations (like a healthy/arrhythmic heart)?
Suitability of Irrlicht for a non-traditional project
It's not really clear what ultrasound simulation would require from a 3D engine, so answers to your question will be pretty vague.
Custom shaders are possible. As long as they don't need vertex-specific information (a flexible vertex format), they are also possible without changes to the engine.
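In case it helps, a minimal sketch of registering a custom material; the shader file names and entry points are placeholders, not anything the engine ships with:

Code:

// Register a high-level shader material; "device" and "node" already exist.
irr::video::IVideoDriver* driver = device->getVideoDriver();
irr::video::IGPUProgrammingServices* gpu = driver->getGPUProgrammingServices();

irr::s32 ultrasoundMat = gpu->addHighLevelShaderMaterialFromFiles(
    "ultrasound.vert.hlsl", "vertexMain", irr::video::EVST_VS_2_0,
    "ultrasound.frag.hlsl", "pixelMain",  irr::video::EPST_PS_2_0,
    0,                      // optional IShaderConstantSetCallBack
    irr::video::EMT_SOLID); // base material the render states derive from

node->setMaterialType((irr::video::E_MATERIAL_TYPE)ultrasoundMat);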
The animation scenarios you describe are supported in Irrlicht, at least as far as I understand your requirements. Transitions between two skeletal animations are possible, and switching an animation to another frame range should cover the other situations. However, it also depends on how the animations are provided, i.e. file format, animation types, ...
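Roughly like this (the mesh file and frame ranges are made up for illustration):

Code:

irr::scene::IAnimatedMeshSceneNode* heart =
    smgr->addAnimatedMeshSceneNode(smgr->getMesh("heart.x"));

heart->setJointMode(irr::scene::EJUOR_CONTROL); // required for transitions
heart->setTransitionTime(0.5f);                 // blend over half a second

heart->setFrameLoop(0, 99);    // e.g. the healthy heartbeat cycle
// ... later, switch to the arrhythmic cycle; the joints blend over:
heart->setFrameLoop(100, 199);

// Call once per frame so the joint transition is actually animated:
heart->animateJoints();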
Maybe you can provide more input for better answers?!
3D models & animations would probably be built tailored to whatever engine is used.
Ultrasound basically renders a 2D slice, so in itself it's probably a case of 2D raytracing. But ultrasound images as you see them are very noisy, so I imagine a shader might be used for post-processing here.
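To illustrate the kind of post-processing I mean (CPU-side here purely for readability; a real version would live in a pixel shader, and all the constants are made up):

Code:

#include <cstdlib>
#include <vector>

// Multiplicative speckle noise plus depth attenuation over a greyscale slice.
void ultrasoundify(std::vector<float>& slice, int width, int height)
{
    for (int y = 0; y < height; ++y)
    {
        // Deeper rows (larger y) get progressively darker.
        float attenuation = 1.0f - 0.5f * (float)y / (float)height;
        for (int x = 0; x < width; ++x)
        {
            float speckle = 0.6f + 0.8f * (float)std::rand() / (float)RAND_MAX;
            slice[y * width + x] *= speckle * attenuation;
        }
    }
}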
As for custom vertex formats, I'm not sure if we'd need anything like this. We might want to provide some data about the flesh the 3D model is representing (tissue type or thickness being likely), but provided enough texture coordinates are supported we can probably do it that way. I would expect Irrlicht allows at least 4 and maybe 8 texture coordinate sets per vertex?
Hmmm, well I *guess* you can say it provides 4, but not really. I'll explain:
There are 3 main vertex formats:
S3DVertex: A simple position, colour, normal and a single texcoord (that's 2 32-bit floats).
S3DVertex2TCoords: Same as above but with 2 texcoords (that's another 2 32-bit floats).
S3DVertexTangents: Now this is where it gets interesting. It lets you pass a set of texcoords (2 32-bit floats), a tangent (3 32-bit floats) and a binormal (another 3 32-bit floats). That's 8 32-bit floats in total, satisfying the 4 2D texcoords requirement!
Unfortunately, this obviously requires some custom intervention to achieve: the loaders aren't going to load more than 2 sets of texcoords for you, and I don't think you can load tangents from meshes either; they are recalculated at runtime by the mesh manipulator.
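Something like this sketch, where the Tangent/Binormal slots carry hypothetical tissue parameters rather than real tangent-space vectors (you'd have to fill the buffer yourself, since the mesh manipulator would overwrite them):

Code:

irr::video::S3DVertexTangents v;
v.Pos      = irr::core::vector3df(0, 0, 0);
v.Normal   = irr::core::vector3df(0, 1, 0);
v.Color    = irr::video::SColor(255, 255, 255, 255);
v.TCoords  = irr::core::vector2df(0.5f, 0.5f);       // the real UVs
v.Tangent  = irr::core::vector3df(0.8f, 2.5f, 0.0f); // e.g. density, thickness, spare
v.Binormal = irr::core::vector3df(1.0f, 0.0f, 0.0f); // three more spare floats

// Append v to an irr::scene::SMeshBufferTangents and read the extra
// floats back in the vertex shader via the tangent/binormal inputs.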
I guess another way to do it is to go the deferred-shading route: load 2 separate meshes with 2 texcoords each and output them to separate buffers in 2 different passes. Of course, even this awkward method isn't possible without patching the engine for floating-point RTT support (that's not too much trouble in itself though, the joys of being zlib-licensed).
Cheers, might wanna check out this slightly unrelated project.
ShadowMapping for Irrlicht!: Get it here
Need help? Come on the IRC!: #irrlicht on irc://irc.freenode.net
Oh, you really want to "render" ultrasound images in real-time from actual 3D models, using the actual ultrasound mechanisms? You might succeed with this, but real-time 3D graphics is more about working result-driven. That means you want images that look like ultrasound images, but they need not be rendered with raytracing. Instead you'd use a shape shader with some random noise and limited depth occlusion. Tissue density won't require a flexible vertex format or many texture coords, just a few shader parameters. Storing this info in the alpha channel would also be possible...
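A sketch of what "a few shader parameters" could look like; "TissueDensity" is a hypothetical constant name in your own pixel shader, not something Irrlicht provides:

Code:

class TissueCallback : public irr::video::IShaderConstantSetCallBack
{
public:
    irr::f32 Density;
    TissueCallback() : Density(1.0f) {}

    virtual void OnSetConstants(irr::video::IMaterialRendererServices* services,
                                irr::s32 userData)
    {
        // Upload the current tissue density before each draw.
        services->setPixelShaderConstant("TissueDensity", &Density, 1);
    }
};

// Pass an instance as the callback argument of
// addHighLevelShaderMaterialFromFiles() when registering the material.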
Oh, it's entirely result-driven, but based on an animating 3D model of parts of the body. In this case, a heart.
Anyway, the texcoord limits seem quite strict... I thought on modern hardware you could have, for instance, a standard texture, a lightmap texture and bump/specular maps all in a single pass. Is this a limitation Irrlicht imposes, or is it still common to use multiple render passes?
However, in say an X-ray view, I don't need any color/texture data for the model, so probably a fairly limited vertex format is sufficient.
Well, all those textures can just share texcoords; why would you make the poor modeller unwrap the model that many times?
Anyway, on another note, would you be modelling organs? That would depend on the simulations you'd be doing. Check your PMs in a few minutes, d000hg; there are a lot of things to discuss.
"Irrlicht is obese"
If you want modern rendering techniques learn how to make them or go to the engine next door =p
Newer cards can do 16 texture stages, but using that many would raise the question: why? Using all of them would be quite slow; GPUs today do better at math than at texture reads, because memory speeds haven't caught up with processor speeds.
Also, you can send 4 8-bit variables through to shaders without modifying Irrlicht, via the vertex colours.
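For example (the parameter meanings are made up; in the shader the colour arrives as the per-vertex COLOR input, normalised to 0..1):

Code:

irr::video::S3DVertex v;
v.Color = irr::video::SColor(
    200,  // alpha: tissue density (0-255)
    10,   // red:   tissue type id
    128,  // green: wall thickness
    0);   // blue:  unused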
The problem with ultrasound is that it takes a 2D slice through a volume of tissue, while the 3D representations of those tissues are thin sheets of polygons.
(That's assuming the input to the system is not pre-scanned ultrasound images; if it is, then it's a piece of cake.)
There are several ways to tackle this, and it would depend on the input data you can get your hands on.
You can get 4D heart and lung CT volumes, eliminating the need for any polygon modelling or animation. For other organs, 3D CT would be fine. Since the densities in CT and ultrasound are similar, you can probably use CT data for simulating ultrasound, with a few minor modifications like adding high-intensity regions at the borders between different intensities and removing tissues invisible to ultrasound.
Another advantage of working with volumes is that you can load two consecutive frames and LERP between them. You can also add things that were not in the patient's CT just by painting voxels, rather than doing any complex modelling.
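A sketch of the frame LERP; the volumes are assumed to be equally sized arrays of normalised density:

Code:

#include <vector>

// Blend two consecutive CT volume frames; t = 0 gives frameA, t = 1 gives frameB.
std::vector<float> lerpVolumes(const std::vector<float>& frameA,
                               const std::vector<float>& frameB,
                               float t)
{
    std::vector<float> out(frameA.size());
    for (std::size_t i = 0; i < frameA.size(); ++i)
        out[i] = (1.0f - t) * frameA[i] + t * frameB[i];
    return out;
}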
"Irrlicht is obese"
If you want modern rendering techniques learn how to make them or go to the engine next door =p
If you want modern rendering techniques learn how to make them or go to the engine next door =p