Hendu:
If the problem is the existing team not having enough time, the solutions are
a) make time
If you need finances to do that, what would lead there? Are you getting royalties from the book? Are you putting ads on the front page?
From my experience I can tell you that an occasional donation here and there, or a sponsorship, is not going to make extra time... for that you need a constant stream of money that can replace your day job. I think the people at Irrlicht are already giving it all the spare time that is reasonable, and tempting them with money to do more work would just amount to them selling off their social and family lives; it would be plainly morally wrong to rob them of that.
b) more people
definitely need more people with SVN write permissions
Nadro:
B) - it requires just a small modification in the core, without breaking the interface (it will be possible to get the depth texture via TextureName).
You may as well not bother then XD... better no implementation and a good patch than some half-baked stuff like the occlusion queries.
The reason I defend the above so viciously is that I can SHARE the depth buffer between render targets!
So I don't lose depth when doing post-processing.
So I can do weighted-average transparency (2 MRTs) while writing to only one RTT beforehand for the solids, so as not to kill my fillrate.
So I can do all sorts of depth-based filters and effects without having to render my scene with 2 MRTs and more complex shaders.
With the approach above, none of these ACTUAL uses of the feature would work.
CuteAlien:
If you get any bug reports or patches in the tracker that would affect/benefit my Irrlicht branch, give me links and I will fix/integrate them into the BaW branch above.
Update of Engine:
1) the IOcclusionQuery interface is up and running
2) we got Conditional Rendering going, both explicitly as an optional "IOcclusionQuery* query" argument to the 3D IVideoDriver draw functions, and implicitly by attaching the query object to ISceneNodes and setting automatic culling to EAC_CONDITIONAL_RENDER
Technically Speaking:
In the typical game scenario, water reflections take 6.5ms, drawing solid objects (Z-write) takes 8ms, and occlusion queries take 6ms.
The amount of time spent on occlusion queries does not include the time spent waiting for the results to be available (we chose to use the blocking query as it gave higher FPS), so imagine the sync stall!!!
Now with the conditional rendering we may avoid this stall, BUT!!!
Occlusion queries are un-batchable (each is just a single pixel counter), so we need to issue a separate draw command for each test mesh. With our 900+ regions and possibly many water planes, this easily explains the 7ms for occlusion queries (despite the low polycount!)
So now I'm looking at quasi-Hi-Z, very fine-grained, batched GPU-based culling approaches... obviously I'm not stupid enough to try to approximately render the Z-buffer on the CPU and create max-mipmaps of it there. Rather, I'd generate the Hi-Z map and test bounding boxes on the GPU, by either:
A) using a vertex shader and rendering points to an RTT, or using transform feedback, and then downloading the results to the CPU
B) doing the above, but either using a compute shader, image_load_store, or some other extension to modify the contents of a QUERY_BUFFER OpenGL object, or typecasting/copying from a RenderBuffer/TransformFeedbackBuffer object to a QueryBuffer object, and then using Conditional Rendering to do the culling (all on the GPU, but many draw calls, and GL 4.0 required)
C) using GL_ARB_draw_indirect and having the GPU fill out the render commands with the shader that does the occlusion testing (definitely GL 4.0 required)