It's been a long time since we talked.
Dunno if it'll help, but I've got this model of a staircase I was designing. I used it for render tests in Blender; perhaps it might suit your needs:
http://www.danielpatton.com/afecelis/Bl ... tair.blend

Truly, it pushes things quite to the limit... I think this is a good idea. It's the acid test, and it will also serve as a bit of a performance benchmark.
quote:
"But maybe for making the first code, you may need a Cornell-box-like scene: four walls, a donut (curved surface), a sphere, a polyhedron of some type, and a cube, with their lights and projected shadows."

I'll do it tomorrow; it's a bit late now.
But, yes, this might be good especially for me to sort out the visibility stuff. I think the ideal model would be a fairly simple room -- something you could imagine yourself standing in (unlike the Cornell Box). But perhaps with a column or two in it, and a box or two on the floor. For the moment, don't worry about curved surfaces. Or maybe make a version that has a sphere on top of the box or something simple like that.
Eternl Knight wrote:
"Using the graphics hardware to speed up the processing of lightmapping is a good idea not implemented by many people."

Yeah. It's got its pros and cons like anything else, of course, but I figure it's at least a good place to start.
quote:
"Personally, I use a combination of octrees and BSPs when I need to sort 'polygon soup' geometry into easily cullable regions: use a large polygon count for the octree nodes, then use a BSP to sort the polygons within those nodes. Works well for most purposes I have."

And that gives you substantially faster results than just one or the other, huh? Hmm! Something to mull over.
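The octree-plus-BSP hybrid described in the quote can be sketched in a few lines. This is a minimal illustration under my own assumptions, not code from this thread: the names, thresholds, and the simplification of binning straddling triangles by centroid (rather than splitting them along the plane, as a real BSP compiler would) are all invented.

```python
# Hypothetical sketch of the octree/BSP hybrid described above; all names and
# parameters are invented for illustration. Triangles are 3-tuples of (x, y, z)
# vertices. Straddling triangles are binned by centroid rather than split,
# which a production BSP compiler would do instead.

MAX_POLYS = 8   # octree leaves with more triangles than this are subdivided
MAX_DEPTH = 8   # guard against pathological input

def centroid(tri):
    return tuple(sum(v[a] for v in tri) / 3.0 for a in range(3))

def tri_plane(tri):
    # Plane through the triangle: normal n and offset d such that n . x = d.
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    return n, n[0] * ax + n[1] * ay + n[2] * az

class BSPNode:
    """Sorts a leaf's triangles front/back against successive splitter planes."""
    def __init__(self, tris):
        self.tri = tris[0]
        n, d = tri_plane(self.tri)
        front, back = [], []
        for t in tris[1:]:
            c = centroid(t)
            side = n[0] * c[0] + n[1] * c[1] + n[2] * c[2] - d
            (front if side >= 0.0 else back).append(t)
        self.front = BSPNode(front) if front else None
        self.back = BSPNode(back) if back else None

class OctreeNode:
    """Octree over triangle centroids; leaves hand their contents to a BSP."""
    def __init__(self, center, half, tris, depth=0):
        self.children, self.bsp = [], None
        if len(tris) <= MAX_POLYS or depth >= MAX_DEPTH:
            self.bsp = BSPNode(tris) if tris else None
            return
        buckets = [[] for _ in range(8)]
        for t in tris:
            c = centroid(t)
            idx = ((c[0] > center[0]) | ((c[1] > center[1]) << 1)
                   | ((c[2] > center[2]) << 2))
            buckets[idx].append(t)
        h = half / 2.0
        for idx, bucket in enumerate(buckets):
            child = tuple(center[a] + (h if (idx >> a) & 1 else -h)
                          for a in range(3))
            self.children.append(OctreeNode(child, h, bucket, depth + 1))

def count_tris(node):
    # Sanity helper: every input triangle ends up in exactly one BSP node.
    if node is None:
        return 0
    if node.bsp is not None or not node.children:
        def bsp_count(b):
            return 0 if b is None else 1 + bsp_count(b.front) + bsp_count(b.back)
        return bsp_count(node.bsp)
    return sum(count_tris(c) for c in node.children)
```

Culling would then walk the octree with a frustum or bounding-box test and only descend into a leaf's BSP once the node is known to be visible.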
quote:
"As for the 'edge detection' optimisation - that can backfire depending on the original sampling resolution. If the initial resolution were configurable, it would be a nice feature, but if the lowest resolution is static in the code, it could mean fine details are lost."

Yes, absolutely. The one professional lightmapping tool I've worked with let you tweak the resolution in specific areas, so that if you noticed something was getting lost, you could just beef up that area. Kind of an ugly solution, and since I'm not planning to have much of a UI (at least at first), it'd be hard to manage... you'd have to plug in face numbers and tell it to use a higher resolution on them. Yuck.
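The resolution trap being warned about here is easy to demonstrate. In this toy sketch (the `light()` function, resolutions, and threshold are all invented for illustration), a coarse pass flags texels whose value differs sharply from a neighbour so that a refinement pass could resample them at higher resolution; a shadow thinner than a coarse texel falls between sample points and is never flagged at all.

```python
# Toy demonstration (all names and numbers invented) of edge-detected lightmap
# refinement, and of how it can backfire when the base resolution is too low.

def light(u, v):
    # Analytic stand-in for a lighting computation: fully lit everywhere
    # except a hard shadow strip covering u in [0.50, 0.52].
    return 0.0 if 0.50 <= u <= 0.52 else 1.0

def sample(res):
    # Sample the lightmap at res x res texel centres.
    return [[light((i + 0.5) / res, (j + 0.5) / res) for j in range(res)]
            for i in range(res)]

def edge_texels(grid, threshold=0.5):
    # Flag texels that differ sharply from a right/down neighbour; these are
    # the candidates a refinement pass would resample at higher resolution.
    res, flagged = len(grid), set()
    for i in range(res):
        for j in range(res):
            for ni, nj in ((i + 1, j), (i, j + 1)):
                if ni < res and nj < res and \
                        abs(grid[i][j] - grid[ni][nj]) > threshold:
                    flagged.add((i, j))
                    flagged.add((ni, nj))
    return flagged
```

At a 64x64 base resolution a sample lands inside the shadow strip and its edges get flagged for refinement; at 16x16 no sample falls inside it, so the edge detector sees a perfectly flat lightmap and the detail is silently lost, which is exactly the backfire described in the quote.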
Eternl Knight wrote:
"I personally prefer 'constant' performance over a wide variety of input to optimal performance only for specific scenarios."

... which is important in this case, as I will have absolutely no control over the input to the program.
quote:
"Edge optimisation: this is why I hope you can make the source available. It would be trivial to alter the collision-detection sample code to select map faces for 'higher resolution' processing. Hell, with a little smarts you could do it at runtime (i.e. select the face for higher-res processing, press a hot-key, and the lightmapping engine reprocesses the face from cached data)."

Well, it could do an approximation of the face's new lighting, but it couldn't get it exactly. But yes. The limiting factor is that I'm not planning to have much of a UI up front.
Murphy wrote:
"And rather than bring each frame into main memory so that I can see how much light it contains, I might be able to write a vertex program to do this, which would take up FAR less bus bandwidth."

Actually, now that I'm thinking of it... I think with modern hardware I could implement radiosity almost entirely using just RTT, texture combiners, and pixel shaders... hmm... that might be interesting...
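The idea of measuring a frame's light without reading it back can be illustrated with a reduction chain: render the frame into a texture a quarter of its size, averaging 2x2 blocks, and repeat until a single pixel holds the mean luminance. The sketch below is plain Python standing in for those render-to-texture passes; the function name and shapes are my own assumptions, and it only models the arithmetic, not the GPU work.

```python
# CPU model (invented for illustration) of a GPU "reduction" chain: each pass
# averages 2x2 pixel blocks, so a 2^n x 2^n frame collapses to 1x1 in n
# passes, and only that single pixel would ever need to cross the bus.

def total_light(frame):
    size = len(frame)                 # frame must be square, side a power of 2
    pixels = size * size
    while size > 1:
        half = size // 2
        frame = [[(frame[2 * i][2 * j] + frame[2 * i + 1][2 * j]
                   + frame[2 * i][2 * j + 1] + frame[2 * i + 1][2 * j + 1]) / 4.0
                  for j in range(half)] for i in range(half)]
        size = half
    return frame[0][0] * pixels       # mean luminance times pixel count
```

On real hardware each pass would be a render-to-texture draw with a box-filter shader (or even plain bilinear filtering), which is where the bandwidth saving described in the quote comes from.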