General float/double precision and world resolution
Hi folks.
For some time now I have been wondering how to implement a rather huge landscape that might be pretty detailed in some areas.
I think it is clear that if my area gets too huge for small objects to be placed with the needed precision, I will have to cut the area into multiple subsets, each centered around its own middle, and dynamically display and position them as needed.
Many additional questions and uncertainties arise here, though:
- What would be the best scale for these subsets (e.g. -1.0 to +1.0, or rather -1000.0 to +1000.0)?
- Are the camera near/far values flexible, or are there API/hardware-imposed restrictions or general problems?
- Do model units need to fit the unit specifications (or, a simpler question: is on-load resizing via setScale expensive)?
Those are all the questions I can formulate clearly right now, but I guess more will come.
Thanks in advance for any clarifications or hints.
This can be pretty easy to do, depending on how many assumptions you want to make. In my current project, every time the player walks from one map to the next, I shift the entire world in the opposite direction by the difference between the two map positions. In the most basic terms: every time you walk onto a new map, the entire world is shifted so that the map you are on is located at (0, 0, 0), and the camera is shifted at the same time so the player can't tell.
I decided that I valued ease of coding over flexibility in mapping, so I hardcoded all map sizes to a fixed x, y, z; that really makes it simple to do.
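The shifting trick described above can be sketched roughly like this. This is a minimal sketch with a hypothetical `Vec3`, `Object`, and `rebaseWorld`; in Irrlicht itself you would move `core::vector3df` positions on scene nodes instead:

```cpp
#include <vector>

// Minimal 3D vector for illustration (Irrlicht would use core::vector3df).
struct Vec3 {
    float x, y, z;
};

// Hypothetical scene object: anything that has a world position.
struct Object {
    Vec3 pos;
};

// Shift every object (and the camera) by the same offset, so that the map
// the player just entered sits at the origin. The player sees no change,
// because the camera and the world move together.
void rebaseWorld(std::vector<Object>& objects, Vec3& cameraPos,
                 const Vec3& newMapOrigin) {
    for (Object& o : objects) {
        o.pos.x -= newMapOrigin.x;
        o.pos.y -= newMapOrigin.y;
        o.pos.z -= newMapOrigin.z;
    }
    cameraPos.x -= newMapOrigin.x;
    cameraPos.y -= newMapOrigin.y;
    cameraPos.z -= newMapOrigin.z;
}
```

The point of the rebase is that float coordinates near the player stay small, so precision stays high no matter how far the player has travelled in absolute terms.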
I am still not sure what the best way to handle this is.
Let's imagine this: I have a world (or game map) that resembles planet Earth. On that planet I have objects (trees, houses, that kind of stuff).
Let's say we map units to metres, so one unit = one metre.
The naive way:
Simply store the absolute position of each object on the planet (probably in some longitude/latitude format), say as a double, then draw the whole thing brute-force (no manual hidden-object removal) in Irrlicht.
To also be able to get off the planet with the camera, we make the camera far value insanely big, say 3 times the planet's size.
Of course, with a lot of objects on the planet, some kind of LOD must be implemented; that much is clear.
But what I generally need to know: is using such huge geometry possible/advisable/problematic?
The problem here is that I don't know much about the far value in general; I will now search for some information about it.
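As an illustration of the "absolute position as double" idea, positions stored as longitude/latitude still have to be converted to Cartesian coordinates before rendering. A minimal sketch, assuming a spherical Earth (the name `geodeticToCartesian` is my own; real geodetic code would use an ellipsoid):

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;
const double kEarthRadius = 6371000.0; // metres, spherical approximation

struct Vec3d { double x, y, z; };

// Convert latitude/longitude (degrees) plus altitude (metres) to
// Cartesian coordinates with the planet centre at the origin.
Vec3d geodeticToCartesian(double latDeg, double lonDeg, double altitude) {
    const double lat = latDeg * kPi / 180.0;
    const double lon = lonDeg * kPi / 180.0;
    const double r = kEarthRadius + altitude;
    return { r * std::cos(lat) * std::cos(lon),
             r * std::sin(lat),
             r * std::cos(lat) * std::sin(lon) };
}
```

Doubles handle the ~6,371,000 m magnitudes comfortably; the trouble only starts once these values are squeezed into the GPU's 32-bit floats.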
Two ways to deal with floating-point problems:
1. Use arbitrary-precision integers ("Real"): ints that can grow beyond 32/64/128 bits, although 128 should suffice. Don't forget, you can decide where the decimal point sits (fixed-point arithmetic).
2. Use referential floats or doubles: store positions relative to another position. That diminishes the imprecision by a huge amount. This is a good trick when working with a library like Irrlicht, or a physics library, that expects floats or doubles. Just make sure to keep those relations relevant and up to date.
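The "referential" trick from point 2 can be sketched like this (names are hypothetical): keep the authoritative positions in doubles, and hand the renderer only the small camera-relative offsets, which fit comfortably in a float:

```cpp
struct Vec3d { double x, y, z; };
struct Vec3f { float x, y, z; };

// Authoritative world positions stay in doubles; the renderer (and thus
// the GPU) only ever sees the small camera-relative offset. Subtracting
// first, in double precision, is the whole point: casting the absolute
// position to float directly would throw away the sub-metre detail.
Vec3f toRenderSpace(const Vec3d& worldPos, const Vec3d& cameraPos) {
    return { static_cast<float>(worldPos.x - cameraPos.x),
             static_cast<float>(worldPos.y - cameraPos.y),
             static_cast<float>(worldPos.z - cameraPos.z) };
}
```

Near an Earth-radius coordinate of ~6.37e6, adjacent floats are about 0.5 m apart, so a direct cast loses centimetre detail entirely, while the relative offset keeps it.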
Those parts are clear to me. But when transferring these positions to Irrlicht, and thus to the GPU, they all need to be floats/doubles again. For example, I am not sure what the impact on rendering speed is when using a far value three times the size of planet Earth.
But then I thought about it again, and it is not as complex as I feared. I can simply draw level "cells" next to each other while the camera is within the atmosphere, and draw a relatively small (few units big) sphere when the camera is outside the atmosphere.
That still has the problem that under perfect conditions one can see up to ~300 kilometres, and I don't know whether a far value of 300,000 (remember, the idea is units = metres) is possible or a good idea.
Of course, objects in the far distance (like mountains) could be LODed and thus drawn with limited detail (low-poly), but they would still have the same scale and distance...
Hey, your ideas are helpful and welcome.
I just wrote a little test app in JIrr, displaying a sphere as big as planet Earth (units as metres), with a FarValue of 6 times the planet's radius. Nothing was displayed.
So I think FarValue is not as flexible as that.
Does someone know what impact FarValue has on speed (of course it gets faster when polys fall outside the range; I mean in general), what its limits are, any details? I searched the forum for information, but found nothing detailed ('far' is a bit short to give good search results)...
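For what it's worth, the main cost of a huge FarValue is usually depth-buffer precision rather than raw speed. Assuming a standard hyperbolic depth mapping and a 24-bit depth buffer (a back-of-the-envelope sketch, not Irrlicht-specific), the smallest resolvable depth difference at view distance z can be estimated like this:

```cpp
// Smallest distinguishable depth difference at view distance z, for a
// standard perspective projection and a 24-bit depth buffer.
// Derived from the hyperbolic mapping d(z) = far/(far-near) * (1 - near/z):
// dd/dz = a * near / z^2, so one depth-buffer step corresponds to
// dz = z^2 / (a * near * 2^24).
double depthStep(double nearPlane, double farPlane, double z) {
    const double steps = 16777216.0; // 2^24 distinct depth values
    const double a = farPlane / (farPlane - nearPlane);
    return (z * z) / (a * nearPlane * steps);
}
```

With near = 1 m and far = 300,000 m, the step at the far plane comes out on the order of kilometres, which is why z-fighting becomes severe at planetary scales; pushing the near plane out is far more effective than any tweak to the far plane.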
One piece of good advice: don't use realistic scales. It's utter nonsense.
wITTus wrote: One good advice: Don't use realistic scales. It's utterly nonsense.
OK, but what exactly does that mean? I hope the general idea of units = metres is OK, and not discouraged?
I understand that having a FarValue of 300 kilometres (thus 300,000 units) is not a very good idea. But what to do instead? What is a good FarValue for a realistic environment? What parameters does a good FarValue depend on?
Imagine I have a mountain in the distance that would be 100 kilometres away but still visible.
OK, now I understand it is not a good idea to render it at realistic scale and distance.
Should it instead be loaded at a smaller scale and rendered nearer (say at 1 kilometre distance) before any nearer objects are rendered? Or is it a better idea to do the same but render it onto a skybox that is updated every time the camera has travelled a certain distance?
I think something like this: if you're in space you can't see the objects on a planet; they're too small most of the time. So I would make some sort of scaling system. When the variables can't take the size anymore, do something like this:
(Looks a bit like your "cell" idea, I think.)
Pseudo code:
if (distanceToPlanet > lodDistance) {
    resizePlanet();      // shrink the planet and reposition the player
    drawSolarSystem();   // make the solar system visible
} else {                 // distanceToPlanet <= lodDistance
    resizePlanet();      // enlarge the planet and reposition the player
    drawPlanetObjects();
}
And secondly, take a look at the skybox idea from Unreal Tournament and Half-Life; the UT skybox is easier to understand.
That is kind of what I thought of: giving each cell a kind of address (like an IP address), so the entire planet would be address "1", and some small cell could be something like "1.2.1.4.1.1". That way, one could also store a low-detail version averaging multiple sub-cells, e.g. 1.1 contains a lower-detail version of 1.1.1, 1.1.2, 1.1.3 and 1.1.4 (each cell optionally consisting of 4 subcells). I think it would scale best that way.
Still, I have to test how easy it is to render far-away objects at a smaller distance and corresponding scale, avoiding an unrealistic FarValue.
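The addressing scheme described above is essentially a quadtree path. A minimal sketch of computing such an address for a point on a square map (the function name and quadrant numbering are my own):

```cpp
#include <string>

// Build a quadtree-style cell address ("1.2.1.4...") for a point on a
// square map of side length mapSize. Each level splits the current cell
// into 4 quadrants, numbered 1..4 (1 = low-x/low-y, 2 = high-x/low-y,
// 3 = low-x/high-y, 4 = high-x/high-y). Deeper addresses = smaller cells.
std::string cellAddress(double x, double y, double mapSize, int depth) {
    std::string addr = "1";          // "1" = the whole map/planet
    double cx = mapSize / 2.0;       // centre of the current cell
    double cy = mapSize / 2.0;
    double half = mapSize / 4.0;     // half-size of the next level's cells
    for (int level = 0; level < depth; ++level) {
        int quadrant = 1;
        if (x >= cx) { quadrant += 1; cx += half; } else { cx -= half; }
        if (y >= cy) { quadrant += 2; cy += half; } else { cy -= half; }
        addr += "." + std::to_string(quadrant);
        half /= 2.0;
    }
    return addr;
}
```

A cell's address is a prefix of all its subcells' addresses, which is exactly what makes storing averaged low-detail versions at the shorter prefixes natural.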
Of course this can be done with integers and LOD, but do you really want to create a whole planet? I don't know, but I imagine hundreds of kilometres of generated forest would be a bit boring.