Are you using lightmapping?
It will. I gave him a complex level (well, not a full level, but a collection of weird geometry cases), and I forgot to put a light at the stairs, which were very plain, simple stairs, so these will surely be of very good use.
Finally making games again!
http://www.konekogames.com
Truly, it pushes things quite to the limit... I think this is a good idea. It's the acid test. It will also serve as a bit of a performance benchmark.
One thing I don't much like about using Blender is its lack of smoothing groups (or vertex normal control)...
I've even been thinking of passing you the stuff in *.an8 format, which is well documented and carries quite a lot of scene info... A pity that it's Windows-only, though it has been reported to work well under Wine...
Hmmm... no way to set hard edges... (and no, splitting the mesh is not a solution for me in Blender... too tedious). But Anim8or at least has an autosmooth value, which is usually enough for most buildings...
Wait. Sorry, I had forgotten. You take the mesh info from OBJ (i.e., from the wonderful Wings) and use the Blender file only to extract the lights. OK, then it's a good idea to stick with Blender for that...
I'll do it tomorrow; it's a bit late now.

Vermeer wrote: But... maybe for writing the first code, you'd need a Cornell-box-like scene: four walls, a donut (curved surface), a sphere, a polyhedron of some kind, and a cube, with their lights and projected shadows.
But, yes, this might be good especially for me to sort out the visibility stuff. I think the ideal model would be a fairly simple room -- something you could imagine yourself standing in (unlike the Cornell Box). But perhaps with a column or two in it, and a box or two on the floor. For the moment, don't worry about curved surfaces. Or maybe make a version that has a sphere on top of the box or something simple like that.
That's quite easy, and it's surely better for sorting things out.
Finally making games again!
http://www.konekogames.com
First off, thanks to y'all for the test scenes and input!
Secondly, a little status update. I put in another half hour of work on it today and have fixed the problem I was having causing weird results when I was trying to calculate the hemicubes. So that's sorted. I've still got a lot of work to do, but things are, in fact, on track.
Put another half hour into it tonight and fixed some more problems and now the results are EXACTLY what I'd been expecting.
This has all been to do with using OpenGL to render hemicubes, which are basically used to figure out what you can see from each little point on each wall. And by knowing what is visible from a given point, you know what is emitting or reflecting light onto that point.
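In case it helps to picture it, here's roughly the shape of that loop. This is purely an illustrative sketch, not my actual code; Vec3, cross(), setCameraLookAt(), drawScene(), and accumulateIncomingLight() are all made-up names.

struct View { Vec3 forward, up; };

// For each lumel (lightmap pixel), render the scene five times: once
// straight out along the surface normal, plus four views for the sides
// of the hemicube. Faces are drawn with their current light as their
// color, so each rendered pixel tells you who is shining on this lumel.
void renderHemicube(const Vec3& pos, const Vec3& n, const Vec3& tangent)
{
    Vec3 bitangent = cross(n, tangent);
    View views[5] = {
        { n,          tangent },  // front face, full 90-degree frustum
        { tangent,    n },        // four side faces; only the half of
        { -tangent,   n },        // each view above the surface plane
        { bitangent,  n },        // counts, the rest is discarded
        { -bitangent, n },
    };
    for (int i = 0; i < 5; ++i) {
        setCameraLookAt(pos, pos + views[i].forward, views[i].up);
        drawScene();                 // the expensive part
        accumulateIncomingLight(i);  // weight the pixels, add to this lumel
    }
}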
It's taking quite a long time to go through an entire scene, even a fairly simple one... it's a lot of rendering, and my little "engine" is very unoptimized (it's relying entirely on frustum clipping and depth buffering to do culling, for example). It also has to render each face twice. If you figure around 450x450 lightmap pixels used (a 512x512 texture with some wasted space), that's 202,500 different points of view. And you need to render five views from each point (to make a hemicube), so it's actually 1,012,500 separate views. So if these were "frames" and you're getting 100 FPS, you're looking at around 169 minutes to process a scene. I have no idea if 100 renders per second is a reasonable guess or not.
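For the curious, that arithmetic in code form:

// Back-of-the-envelope numbers from the paragraph above.
const long   lumels  = 450L * 450L;         // 202,500 points of view
const long   views   = lumels * 5;          // 1,012,500 hemicube views
const double fps     = 100.0;               // the guess in question
const double minutes = views / fps / 60.0;  // ~168.75, call it 169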
So, you know. It's time consuming. No doubt I'll be doing a lot of test renders using much less than a 512x512 pixel lightmap texture.
Some improvements to my OpenGL code might produce significant speedups especially as the amount of geometry increases. Intelligent culling (octree or BSP or what have you) for one. And since all the geometry is static, stuff like vertex buffers are no brainers. Even the textures are static, so really, there's quite a bit of room for optimization. I could also just store some visibility data the first time you lightmap a mesh and then use that to accelerate it the next time you do it, which would be pretty easy to implement.
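The visibility cache would look something like this. Again just a sketch with made-up names, and assuming a small Color class with += and * defined: the first pass renders each hemicube with face IDs encoded as flat colors and stores that "item buffer"; later passes never touch OpenGL, they just re-sum the cached IDs against each face's current light.

#include <vector>

// Per lumel: the face ID seen at each hemicube pixel (-1 = nothing).
std::vector<std::vector<int> > itemBuffer;

Color gatherCached(int lumel,
                   const std::vector<Color>& faceLight,   // light per face, this pass
                   const std::vector<float>& pixelWeight) // hemicube multipliers
{
    Color total(0, 0, 0);
    const std::vector<int>& ids = itemBuffer[lumel];
    for (size_t p = 0; p < ids.size(); ++p)
        if (ids[p] >= 0)
            total += faceLight[ids[p]] * pixelWeight[p];
    return total;
}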
And then lots of lightmappers don't render at even density. They render most stuff kind of sparsely and attempt to find "edges" and render at higher density in those areas. If I implemented something like this, it'd certainly be faster.
For the moment, though, I'll just be rendering at lower density in general, just to get things working. Optimizations later.
The next step is to finish the hemicube code (I'm currently only rendering the front side). Then add lighting rules and emitters. Then see how it looks!
Still got a ton of other things taking up my time, but I'll be sure to let y'all know when I get a chance to work on it some more.
Sounds good, mate.
Using the graphics hardware to speed up the processing of lightmapping is a good idea not implemented by many people. Personally, I use a combination of octrees & BSPs when I need to sort out "polygon soup" geometry into easily cullable regions: use a large polygon count for the octree nodes and then use a BSP to sort the polygons within these nodes. Works well for most purposes I have.
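In sketch form, that combo is something like the below. (All the types and helpers here, AABB, Polygon, BspTree, split(), childIndexFor(), buildBsp(), are placeholders, and a real version also has to handle polygons straddling child bounds, by clipping them or using loose bounds.)

struct OctreeNode {
    AABB bounds;
    std::vector<Polygon> polys;
    OctreeNode* children[8];  // all null until the node overflows
    BspTree* bsp;             // built only for leaf nodes

    void insert(const Polygon& p) {
        const size_t kMaxPolys = 2000;  // the "large polygon count" per node
        if (!children[0] && polys.size() < kMaxPolys) { polys.push_back(p); return; }
        if (!children[0]) split();      // overflow: make 8 children & redistribute
        children[childIndexFor(p.centroid())]->insert(p);
    }
    void finalize() {                   // after everything is inserted
        if (children[0]) { for (int i = 0; i < 8; ++i) children[i]->finalize(); }
        else bsp = buildBsp(polys);     // BSP-sort each leaf's polygons
    }
};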
As for the "edge detection" optimisation - that can backfire depending on the original sampling resolution. If the initial resolution were configurable - it would be a nice feature, but if the lowest res is static in the code - it could mean fine details are lost.
Feel free to bug me if you want me to clarify anything I just mentioned.
--EK
Eternl Knight wrote: Using the graphics hardware to speed up the processing of lightmapping is a good idea not implemented by many people.

Yeah. It's got its pros and cons like anything else, of course, but I figure it's at least a good place to start.
Eternl Knight wrote: Personally, I use a combination of octrees & BSPs when I need to sort out "polygon soup" geometry into easily cullable regions: use a large polygon count for the octree nodes and then use a BSP to sort the polygons within these nodes. Works well for most purposes I have.

And that gives you substantially faster results than just one or the other, huh? Hmm! Something to mull over.
I still may end up using a hacked up and stripped down Irrlicht (or another engine?) for this part, so that I can just let someone else handle the culling. Still way too early to say. I'll probably write some sort of simple culler myself and see what kind of improvement I get. And I'll certainly implement vertex buffers. I may do that the next time I work on it, actually, just to see if I get any improvement.
Eternl Knight wrote: As for the "edge detection" optimisation - that can backfire depending on the original sampling resolution. If the initial resolution were configurable - it would be a nice feature, but if the lowest res is static in the code - it could mean fine details are lost.

Yes, absolutely. The one professional lightmapping tool I've worked with let you tweak the resolution in specific areas, so that if you noticed something was getting lost, you could just beef up that area. Kind of an ugly solution, and since I'm not planning to have much of a UI (at least at first), it'd be hard to manage... you'd have to plug in face numbers and tell it to use a higher resolution on them. Yuck.
What I'll probably end up doing if I go this route at all is have the maximum "skip" be like four pixels or something. You can then control how much loss this really is by setting the lightmap density (which is configurable). The idea being that worst case, you miss three lumels. At a low lightmap density, this would be a heck of a lot of information, but at a higher density (when you'd actually care about the speed improvement) it wouldn't necessarily be a lot. And you could also use it for pseudo-high-resolution previews... you'd figure the density you wanted and tweak lights using interpolated results, and then switch to no interpolation for a final (time consuming) render.
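As a sketch (made-up names again), that sparse pass would be:

const int kSkip = 4;  // worst case: three lumels in a row are interpolated

void sparsePass(Lightmap& map) {
    // Render only every kSkip-th lumel for real...
    for (int y = 0; y < map.height; y += kSkip)
        for (int x = 0; x < map.width; x += kSkip)
            map.at(x, y) = renderHemicubeAt(x, y);  // the slow part

    // ...then fill the gaps by interpolating the surrounding real samples.
    for (int y = 0; y < map.height; ++y)
        for (int x = 0; x < map.width; ++x)
            if (x % kSkip != 0 || y % kSkip != 0)
                map.at(x, y) = bilerpNearestSamples(map, x, y, kSkip);
}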
Octree/BSP Combo: It will not always give you a substantial speedup. Like everything else, it depends on the geometry in question. I have found, though, that it performs better more often than either method alone in "worst case" scenarios, while not adding significant overhead in the cases where BSP or octree culling alone is better. I personally prefer a "constant" performance over a wide variety of input to an optimal performance only for specific scenarios. I hate it when my artist comes to me saying "This level took X minutes to process whereas this one took four times as long. They aren't that different!" (which, aesthetically speaking, they may not be, but the math might be a lot more complex!)
Edge Optimisation: This is why I hope you can make the source available. It would be trivial to alter the collision detection sample code to select map faces for "higher resolution" processing. Hell, with a little smarts - you could do it at runtime (i.e. select the face for higher res processing, press a hot-key, and the lightmapping engine reprocesses the face from cached data).
--EK
Eternl Knight wrote: I personally prefer a "constant" performance over a wide variety of input to an optimal performance only for specific scenarios.

... which is important in this case, as I will have absolutely no control over the input to the program.
Eternl Knight wrote: Edge Optimisation: This is why I hope you can make the source available. It would be trivial to alter the collision detection sample code to select map faces for "higher resolution" processing. Hell, with a little smarts - you could do it at runtime (i.e. select the face for higher res processing, press a hot-key, and the lightmapping engine reprocesses the face from cached data).

Well, it could do an approximation of the face's new lighting, but it couldn't get it exactly. But yes. The limiting factor is that I'm not planning to have much of a UI up front. So we'll see where that goes in terms of source release and such.
BTW, I forgot to mention earlier that those numbers and times are per pass. And, of course, getting good radiosity results requires multiple passes. But if I use the first pass to calculate a lot of visibility data, subsequent passes could be much faster...
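The outer loop is the usual gather iteration, so each pass bounces the light once more. Roughly (a sketch, made-up names):

void solve(int passes) {
    // Pass 0 sees only the emitters; pass 1 adds one bounce; and so on.
    for (int pass = 0; pass < passes; ++pass) {
        for (int lumel = 0; lumel < lumelCount; ++lumel)
            nextLight[lumel] = gatherFromScene(lumel);  // hemicubes, or cached visibility
        std::swap(nextLight, currentLight);  // next pass reads what this one wrote
        uploadLightmaps(currentLight);       // faces now emit/reflect the new bounce
    }
}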
The other thing is that the hemicube face I'm using right now is pretty big. It's quite possible to use a much smaller one, which would up my rendering speed. Maybe I'll make it selectable. The biggest downside to small hemicubes is that things that are small or far away might get lost. If they're very bright, this would mean losing a very bright light. In the real world, though, there aren't many situations like that. The sun is a fair size, the sky is basically a giant emitter, and once you're indoors, you get a tremendous amount of reflected light. In the cases where there ARE very bright and very small lights, you usually pick up a lot of their light by reflection anyway. It's not very often that you're floating in space with a statue lit only by a very bright and very distant star.
I also have no idea how well current cards do with offscreen rendering. So that might be another way to speed it up. And rather than bring each frame into main memory so that I can see how much light it contains, I might be able to write a vertex program to do this, which would take up FAR less bus bandwidth.
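One trick that might work, assuming the card exposes mipmap generation (e.g. glGenerateMipmapEXT from EXT_framebuffer_object; untested on my end): let the GPU box-filter the rendered view down to 1x1 and read back a single pixel. The hemicube weights would need to be multiplied in before the reduction (say, via a weight texture), or the average is meaningless.

// faceSize is assumed to be a power of two.
glBindTexture(GL_TEXTURE_2D, hemicubeFaceTex);
glGenerateMipmapEXT(GL_TEXTURE_2D);  // GPU-side reduction to a 1x1 top level
int topLevel = (int)(log((double)faceSize) / log(2.0));
float avg[4];
glGetTexImage(GL_TEXTURE_2D, topLevel, GL_RGBA, GL_FLOAT, avg);
// 16 bytes across the bus instead of faceSize * faceSize * 16.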
There are about a zillion things I might try to get some more performance, and I don't yet know much about implementing a lot of them. That's why I just want to get it working first.
On the up side, I think 100 renders per second was actually pretty conservative. I think 500 per second might actually be conservative. We'll see!
Murphy wrote: And rather than bring each frame into main memory so that I can see how much light it contains, I might be able to write a vertex program to do this, which would take up FAR less bus bandwidth.

Actually, now that I'm thinking of it... I think with modern hardware I could implement radiosity almost entirely using just RTT, texture combinations, and pixel shaders... hmm... that might be interesting...
Hi.
I almost don't understand a word, lol.
Hey, I forgot to make the simple scene. I'll make one right now.
One thing, after reading that last sentence:

Parthenon, a radiosity renderer of very good quality, was done on the graphics card, and it was designed to team up well with the Metasequoia format (Metasequoia being a mesh modeler).

The Parthenon renderer is here:

http://www.bee-www.com/parthenon/

Perhaps some of the docs or code there can be of help.
Finally making games again!
http://www.konekogames.com
Yay, test scene.
I like the other one, BTW. And I can easily pull out the barrels and dragon to try different arrangements. I'll be doing some testing with it for sure. The dragon should cast some cool shadows.
The code is really ugly at this point, but it's all basically working, so I probably won't do a rewrite until later. I'm hoping to actually have it rendering at least one real pass of lighting with only a couple hours' more work. Don't know when I'll get the chance for that, of course...
Thanks for the pointer to that GPU-based renderer. Pretty interesting, though, of course, their purpose is reaaaally different from a lightmapper's. And they aren't implementing radiosity. But still, some of their stuff may prove instructive since I don't really know that much about GPU programming. They sure have some nice pictures. And if they're getting that kind of stuff... it seems like the GPU might be faster for radiosity too...
Just a small status update.
First, I've made myself a quick test scene (still waiting for yours, Vermeer!). I used DeleD LITE because I don't have any other tools on this machine at the moment. No subtraction operation... ouch.
Second, I've half-finished the "preview" view. Simple flyaround FPS style camera. I wasn't originally planning to do this, but it's going to make debugging and stuff way easier.
Third, Eternl Knight reminded me of something I'd thought of adding a long time ago, but forgotten -- bumpmapping. The idea is to cook the bumps right into the lightmap. If I recall, OBSP does this. He also pointed me towards part of an algorithm that may give better results than the one I was planning to use. This certainly won't make it into any of the early versions, but it's something I'm thinking about.
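The rough idea, sketched with made-up names (and this is my own simplification, not necessarily the algorithm he pointed me at): while gathering, also accumulate a weighted average direction of incoming light per lumel, then re-shade each lumel with its bump-perturbed normal.

Color bakeBumpedLumel(int lumel) {
    GatherResult g = gather(lumel);  // g.light, plus g.avgDir: mean incoming direction
    Vec3 n = perturbNormal(baseNormal(lumel), bumpMap.sample(lumel));
    float ndotl = std::max(0.0f, dot(n, normalize(g.avgDir)));
    return g.light * ndotl;          // bumps brighten/darken the flat result
}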
Anyway, development (slowly) continues...
Created a couple of polygonal lights for my test scene and wrote the code to load them in the lightmapper. They're a completely separate set of geometry from the actual model.
Preview mode/camera is "finished". It sucks, but basically works. Acts somewhat like a Quake camera in noclip mode.
Finished the hemicube multiplier stuff.
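For anyone following along, the "multiplier stuff" is the table of per-pixel delta form factors from Cohen & Greenberg's hemicube formulation; each hemicube pixel gets a fixed weight that corrects for the cube's distortion. Something like this sketch:

#include <vector>

// res = pixels along one hemicube face edge; each face spans
// [-1,1] x [-1,1] at distance 1 from the lumel.
void buildMultipliers(int res, std::vector<float>& front, std::vector<float>& side)
{
    const float kPi = 3.14159265f;
    const float dA  = 4.0f / (res * res);  // area of one pixel on the cube
    front.resize(res * res);
    side.resize(res * res);
    for (int j = 0; j < res; ++j)
        for (int i = 0; i < res; ++i) {
            float x  = -1.0f + (i + 0.5f) * 2.0f / res;
            float y  = -1.0f + (j + 0.5f) * 2.0f / res;
            float r2 = x * x + y * y + 1.0f;
            // Front face: dF = dA / (pi * (x^2 + y^2 + 1)^2)
            front[j * res + i] = dA / (kPi * r2 * r2);
            // Side face, treating y as height above the surface plane;
            // the half below the plane (y <= 0) never contributes.
            side[j * res + i] = (y > 0.0f) ? y * dA / (kPi * r2 * r2) : 0.0f;
        }
}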
Getting close to having something interesting to show again!