So we're going to maintain code and features in a C++ engine for people who haven't learnt C++ properly? Personally, I like having the function implementations all in front of me rather than trying to figure out the STL.
I'm sorry if a lot of this seems schizophrenic and disconnected, but I am typing with terrible neck/shoulder pain, which is terribly distracting.
The fixed-function pipeline is bad because it's being emulated in your driver, adding tons of overhead (you have a single unified shader emulating all combinations of fixed-function pipeline settings, wasting ALU cycles on if statements etc.). Let me give you an example:
+A for loop over the fixed-function lights sits inside the emulation shader (8 iterations, each with the full lighting calculation)
+An if statement for fog is present
+The albedo texture gets sampled and multiplied by (ambient color + diffuse color * lightColAccum), and emissive is added
+An if statement handles alpha-ref testing
Now if you just want to draw textured objects without any lights, you do 1 texture sample and that's it; with fixed-function emulation you get 2 if statements, at least 1 evaluation of the light loop's control statement, and 1 multiply by diffuse, none of which do anything useful.
Basically, your pixel shader can be many times slower with the fixed-function emulation.
Not only that, but stuff like alpha testing gets appended to the end of your own pixel shaders too (after you write to gl_FragColor), so it makes all shaders slower, not just the fixed-function pipeline emulation ones (EMT_SOLID etc.). The alpha test is also added at the end rather than earlier (i.e. right after you sample a texture and before you start calculating lighting), so you calculate lighting for pixels that will be discarded anyway; and if you add your own alpha test, the one at the end is still there.
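To make that concrete, here is a hypothetical sketch of roughly what such an emulation fragment shader boils down to; every uniform and varying name below is invented for this sketch, and the GLSL is kept in a C++ string the way engine code would carry it:

[code]
// Hypothetical illustration of a driver-side fixed-function emulation
// fragment shader; all names below are invented for this sketch.
const char* const kEmulatedFFFragSrc = R"glsl(
uniform sampler2D uAlbedoTex;
uniform int   uLightCount;     // up to 8 fixed-function lights
uniform bool  uFogEnabled;
uniform bool  uAlphaTestEnabled;
uniform float uAlphaRef;
uniform vec4  uAmbientColor, uDiffuseColor, uEmissiveColor;

varying vec2 vTexCoord;
varying vec4 vLightData[8];    // per-light interpolants (simplified)

void main()
{
    vec4 lightColAccum = vec4(0.0);
    // Loop control is evaluated even when you draw completely unlit:
    for (int i = 0; i < uLightCount; ++i)
        lightColAccum += vLightData[i]; // stands in for the full lighting calc

    vec4 albedo = texture2D(uAlbedoTex, vTexCoord);
    gl_FragColor = albedo * (uAmbientColor + uDiffuseColor * lightColAccum)
                 + uEmissiveColor;

    if (uFogEnabled) { /* fog blend */ }   // branch runs for every pixel
    if (uAlphaTestEnabled && gl_FragColor.a < uAlphaRef)
        discard;                           // alpha test tacked on at the end
}
)glsl";
[/code]

Compare that against the single texture2D() call an unlit textured draw actually needs.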
Personally, I think we should make a shader pre-processor for GLSL (HLSL already has one, I think) so you can use #include files which define routines that could be supplied with the engine, like hardware skinning, the Cook-Torrance lighting model, Fresnel calculation, Blinn-Phong lighting, spheremap sampling, and whatever you may need for deferred rendering etc.
(We could share the functions around as addons/patches/extras instead of clogging the codebase, maybe; a sketch of such a pre-processor follows below.)
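A minimal sketch of what such a pre-processor could look like, assuming a hypothetical readFileToString() helper and skipping include-cycle detection for brevity:

[code]
#include <sstream>
#include <string>

std::string readFileToString(const std::string& path); // assumed helper

// Replaces every #include "file" line with the file's contents,
// recursively, so included files can include further files.
std::string preprocessGLSL(const std::string& source)
{
    std::istringstream in(source);
    std::ostringstream out;
    std::string line;
    const std::string directive = "#include";
    while (std::getline(in, line))
    {
        if (line.compare(0, directive.size(), directive) == 0)
        {
            size_t first = line.find('"');
            size_t last  = line.rfind('"');
            if (first != std::string::npos && last > first)
            {
                std::string path = line.substr(first + 1, last - first - 1);
                out << preprocessGLSL(readFileToString(path)) << '\n';
                continue;
            }
        }
        out << line << '\n';
    }
    return out.str();
}
[/code]

A material shader could then just write #include "cook_torrance.glsl" and have the shared routine spliced in before compilation.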
Streamed asset loading is a bitch, trust me... if you do it wrong (VBO upload in Irrlicht! YOU GUYS UPLOAD NEW MESHBUFFER DATA RIGHT INTO A VBO RIGHT BEFORE A DRAW CALL USING THE SAME VBO), then you cause massive CPU<->GPU stalls and lose tons of performance. Before you jump into that pool I would rather encourage a full implementation of GPU-side buffers and handling/managing them (i.e. copying between a VBO and an FBO/texture etc.; very useful for GPU particles). I really have a big problem with how unexposed Irrlicht's VBOs are.
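For reference, the standard way around that stall is to orphan the buffer before uploading, so the driver can hand you fresh storage instead of blocking until the in-flight draw finishes. A minimal sketch in raw GL (a function loader such as GLEW is assumed):

[code]
#include <GL/glew.h> // any GL function loader works

// 'vbo' is a buffer the GPU may still be reading from a previous draw.
void uploadWithoutStall(GLuint vbo, const void* data, GLsizeiptr size)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // Orphan: re-specify the data store with NULL so the driver detaches
    // the old memory (still owned by the pending draw) from the handle.
    glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STREAM_DRAW);
    // Now fill the fresh storage without waiting on the GPU.
    glBufferSubData(GL_ARRAY_BUFFER, 0, size, data);
}
[/code]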
You also need to expose the GPU buffers if you want to introduce transform feedback and compute shaders. Without the freedom to manipulate GPU-side buffers, transform feedback makes no sense.
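To illustrate why: a transform feedback pass is literally "render into a buffer object", so without buffer handles to bind there is nothing to capture into. A bare-bones sketch, assuming 'prog' was linked after a glTransformFeedbackVaryings() call and a suitable VAO is bound:

[code]
#include <GL/glew.h> // any GL3+ function loader works

// Minimal transform-feedback pass: capture vertex shader outputs straight
// into a GPU-side buffer (e.g. a GPU particle update).
void runTransformFeedbackPass(GLuint prog, GLuint destVbo, GLsizei count)
{
    glUseProgram(prog);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, destVbo);
    glEnable(GL_RASTERIZER_DISCARD);      // we only want the vertex stage
    glBeginTransformFeedback(GL_POINTS);
    glDrawArrays(GL_POINTS, 0, count);
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);
    // destVbo now holds the processed vertices, ready to draw from next frame.
}
[/code]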
I myself have slimmed down the Irrlicht engine, and I only have 2 or 3 built-in materials, which are simple pass-through shaders (EMT_SOLID, EMT_TRANSPARENT_ALPHA_CHANNEL, something else).
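For scale, a pass-through replacement for something like EMT_SOLID really is this small (a sketch in old-style GLSL, held in a C++ string; the names are invented):

[code]
// Roughly all a pass-through EMT_SOLID replacement needs.
const char* const kSolidFragSrc = R"glsl(
uniform sampler2D uTex0;
varying vec2 vTexCoord;
void main() { gl_FragColor = texture2D(uTex0, vTexCoord); }
)glsl";
[/code]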
On the subject of writing a custom Physics engine:
+JUST NO, it's like asking for a second software renderer when you look at alternatives like Bullet!
+Can't diversify that far with just 2 or 3 active Irrlicht devs
+Rendering, which is the principal focus of this library, is falling behind the times and needs to catch up with current technology, and that's where all the focus needs to go
Collision detection in Irrlicht, as of now, is super bad... when I removed it from Build a World, I saved over 300 MB of RAM from the triangle selectors!
However, collision detection is quite acceptable compared to the GUI. There is only 1 tutorial on how to use it and its use is not encouraged; with the GUI it's the complete opposite story.
The Irrlicht GUI is the principal reason why your FPS may drop by as much as 50%. It's a really inefficient, bad GUI, from resolving mouse clicks and hovers (no quad-tree-like structure to avoid testing every GUI element against the mouse position) to the number of draw calls (one per every single button and separate text frame, drawing one image at a time), the shader/material confusion, and the inability to order elements into layers. In a proper game-ready optimized GUI you draw the entire GUI at once, layer by layer, with a large vertex buffer and the textures (fonts, button backgrounds, etc.) in either an atlas or a texture array.
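A rough sketch of that batched approach: accumulate every visible element of a layer into one vertex array (with all the textures in a shared atlas) and issue a single draw per layer. All the types here are invented for illustration, and the actual VBO upload/draw is elided:

[code]
#include <vector>

// One GUI vertex: screen position, atlas UV, packed color.
struct GuiVertex { float x, y, u, v; unsigned color; };

struct GuiBatcher
{
    std::vector<GuiVertex> vertices; // CPU-side staging for one layer

    void addQuad(float x0, float y0, float x1, float y1,
                 float u0, float v0, float u1, float v1, unsigned color)
    {
        // Two triangles per element; u/v point into the shared atlas, so
        // buttons, text glyphs and icons all land in the same batch.
        GuiVertex q[6] = {
            {x0,y0,u0,v0,color}, {x1,y0,u1,v0,color}, {x1,y1,u1,v1,color},
            {x0,y0,u0,v0,color}, {x1,y1,u1,v1,color}, {x0,y1,u0,v1,color} };
        vertices.insert(vertices.end(), q, q + 6);
    }

    void flushLayer()
    {
        // Upload 'vertices' to one big VBO and issue ONE draw call for the
        // whole layer here, instead of one per button/text frame.
        vertices.clear();
    }
};
[/code]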
Instead of a better IRRedit, I would propose... let's write a Blender file importer or a Blender plugin (but please not one that writes to XML; that stuff gives a loading time of 10 minutes for a 100 MB file. I tried it before with a detailed 30k-vert tree model: not only did it take ages to load, but the plaintext storage inflated the file size to many times that of the .obj equivalent) so you can use Blender as a level-editing tool?
I would propose we ditch XML and adopt Google Protocol Buffers, which give you forward-compatible serialization buffers that can be saved in a binary file, validated, and checked for errors (you can test whether fields are missing etc.).
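A minimal sketch of what that could look like; the Level message and its fields are invented for the example (proto2-style syntax, so missing-field checks are generated):

[code]
// level.proto (invented example message, proto2 syntax):
//
//   message Level {
//     optional string name    = 1;
//     repeated string meshes  = 2;
//     optional float  ambient = 3;
//   }
//
// New fields can be added later without breaking old readers; that is the
// forward/backward compatibility argument over hand-rolled XML.
#include <fstream>
#include "level.pb.h" // generated by protoc from the schema above

bool loadLevel(const char* path, Level& out)
{
    std::ifstream in(path, std::ios::binary);
    if (!out.ParseFromIstream(&in))  // validates the binary blob for us
        return false;
    if (!out.has_name())             // missing-field handling, unlike XML
        out.set_name("untitled");
    return true;
}
[/code]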
We need 3D textures badly (VGXI, voxelizing, color charts for color correction) and all sorts of MRT rendering (geometry-shader MRT layers for PSSM/CSM).
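For concreteness, the raw GL for both is tiny; it's the engine-side exposure that's missing. A sketch (error checking omitted, rt0/rt1 assumed to be already-created 2D textures of equal size):

[code]
#include <GL/glew.h>

// Allocate a 32^3 RGBA 3D texture (e.g. a color-correction LUT or a small
// voxel volume) and wire two color targets into an FBO for MRT.
void setupVolumeAndMRT(GLuint rt0, GLuint rt1)
{
    GLuint tex3d, fbo;
    glGenTextures(1, &tex3d);
    glBindTexture(GL_TEXTURE_3D, tex3d);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, 32, 32, 32, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, rt0, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                           GL_TEXTURE_2D, rt1, 0);
    const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, bufs); // the fragment shader now writes 2 outputs
}
[/code]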
(I'm about to start making an offline GPU asset-voxelizing and voxel-handling library with simple compression, but that's going to be in the Project Announcements section soon... it may be made LGPL in return for people willing to help and spread the development load)
On the question of animations, maybe incorporate Smart Body?