Sorry, but on my CPU 5 µs is the time required for quite a few instructions, so this seems like a big request. Anyway, if you need it for a game, just try and profile; if the speed is enough, that's OK.

Anyway, that should not be 100 pointer hops, since what I want is for every sector to be completely cached at once (store all coordinates inside the sectors themselves, which is not the same as a quadtree).
My solution was not a quadtree; it surprises me that you don't see the difference.
A quadtree is a structure with O(logN) time to find a node, because every node contains other nodes. You can be lucky and find a leaf immediately (this can happen if your map has big low-density zones), or you can be unlucky and, in the worst case, pay the full O(logN).
My solution accesses every sector directly in constant time, and once accessed, each sector is fully cached, ensuring high speed. Even better, you can use compiler built-ins: while you are accessing one sector, you can prefetch the next one, removing the caching time entirely. Prefetching is not viable only on CPUs with a low level of cache associativity (like mobiles and small consoles), but in that case assume fewer enemies and more sectors present in the same pages.
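A minimal sketch of the idea, assuming a flat array of fixed-capacity sectors (all names here are hypothetical, and the prefetch hint uses the GCC/Clang `__builtin_prefetch` built-in):

```cpp
#include <cstddef>
#include <vector>

struct Unit { float x, y; };

// One sector stores its units inline, so touching the sector
// pulls all of its data into cache at once (no pointer hops).
struct Sector {
    int count = 0;
    Unit units[100];            // capacity = sector surface / unit surface
};

struct Grid {
    static const int DIM = 32;  // DIM x DIM sectors (example value)
    float sectorSize;           // world-space side length of one sector
    std::vector<Sector> sectors;

    explicit Grid(float size) : sectorSize(size), sectors(DIM * DIM) {}

    // Constant-time lookup: just an index computation, no tree descent.
    Sector& sectorAt(float wx, float wy) {
        int sx = static_cast<int>(wx / sectorSize);
        int sy = static_cast<int>(wy / sectorSize);
        return sectors[sy * DIM + sx];
    }

    // While processing sector i, hint the CPU to pull sector i+1 into cache.
    void processRow(int row) {
        for (int sx = 0; sx < DIM; ++sx) {
            Sector& s = sectors[row * DIM + sx];
            if (sx + 1 < DIM)
                __builtin_prefetch(&sectors[row * DIM + sx + 1]); // GCC/Clang only
            for (int i = 0; i < s.count; ++i)
                (void)s.units[i];   // ...per-unit update/query work goes here...
        }
    }
};
```

The inline `units[100]` array is what distinguishes this from a quadtree: the data lives in the sector itself, not behind a chain of child pointers.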
If every sector has a surface of 100 (say, m^2) and every unit has a surface of 1 m^2, then there can't be more than 100 units per sector (in practice there will almost never be 100, since units will usually wander around with some kind of steering behaviour).
All those parameters only need to be tuned.
2D range trees are halfway between the constant access time of a grid and the logarithmic search of a quadtree.
You add per-frame pre-processing time in order to be able to do more queries per frame.
Some timings:
K => number of returned elements (per unit of time)
M => number of queries per unit of time
N => total number of elements
The following timings are worst-case time profiling only, but memory-usage profiling is also interesting.
Quadtree:
Initial building cost:
O(NlogN)
Updating cost (1 update per unit of time):
O(NlogN) //assuming ALL elements moved enough to change LEAF; this rarely happens, and even when it does, most leaves already exist, so less time is required anyway
Query cost:
O(K + M*logN)
2D Range Tree: (note this would be useful for really big databases, where you can run A LOT of queries before any change; I can't imagine it being useful for a videogame)
Initial building cost:
O(N*logN*logN) (note that if the initial data is pre-sorted, the cost is reduced to O(NlogN); we assume the initial build will also sort the previously unsorted data)
Updating cost:
O(N + N*logN) (sorting with insertion sort: partially sorted data requires O(N) time, and then building requires only O(NlogN); a total rebuild is almost always needed)
Query cost:
O(K + M*logN)
Grid constant access:
Initial building cost:
O(N + DIM^2) //a grid of size DIM^2 requires DIM^2 time for building
Updating cost:
O(N)
Query cost:
O(M+K)
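The grid's O(M + K) query cost can be sketched like this (hypothetical names, assuming a simple vector-of-vectors layout): each query visits only the sectors overlapping the search rectangle, then pays O(K) to collect the K matching elements.

```cpp
#include <algorithm>
#include <vector>

struct P { float x, y; };

struct SectorGrid {
    int dim;      // grid is dim x dim sectors
    float cell;   // world-space side length of one sector
    std::vector<std::vector<P>> cells;

    SectorGrid(int d, float c) : dim(d), cell(c), cells(d * d) {}

    void insert(P p) {
        cells[int(p.y / cell) * dim + int(p.x / cell)].push_back(p);
    }

    // Collect every point inside [x0,x1] x [y0,y1]: constant-time
    // sector lookup, then a scan limited to the overlapping sectors.
    std::vector<P> query(float x0, float y0, float x1, float y1) const {
        std::vector<P> out;
        int sy0 = std::max(0, int(y0 / cell)), sy1 = std::min(dim - 1, int(y1 / cell));
        int sx0 = std::max(0, int(x0 / cell)), sx1 = std::min(dim - 1, int(x1 / cell));
        for (int sy = sy0; sy <= sy1; ++sy)
            for (int sx = sx0; sx <= sx1; ++sx)
                for (const P& p : cells[sy * dim + sx])
                    if (p.x >= x0 && p.x <= x1 && p.y >= y0 && p.y <= y1)
                        out.push_back(p);
        return out;
    }
};
```

Note how the search range maps directly to a fixed number of sectors: this is why the grid works best when all queries use similar ranges, as mentioned below.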
Note that K can be thought of as M*Z (the total number of returned elements equals the average number of elements returned per query, multiplied by the number of queries),
so that:
O(K + M*logN) becomes O(M*(1 + Z + logN))
O(M + K) becomes O(M*(1 + Z))
and the updating cost really varies with how you tune the parameters (if your zones/sectors are big enough, it can be greatly reduced).
Note that the only nice thing about 2D range trees is that their sectors do not have a fixed size (every sector goes exactly from one enemy to the next), so you have no parameters to tune => high scalability as far as element density is concerned. Quadtrees perform really badly and need tuning, since they do not scale with density, while grids do not scale to bigger maps (due to high memory usage).
2D range tree => scales with element density / search range (uses additional memory compared to a quadtree); performs badly with many updates
Quadtree => scales with the number of elements (uses the least memory for storing the extra info needed to retrieve nodes)
Grid => memory inefficient but constant time; works best when search ranges are all similar (not scalable in search range, needs tuning for that); perfect for quick updates
A note about the memory usage of a grid:
if each sector can hold at most 100 elements, then the grid requires
DIM^2 * 100 memory. This can be reduced with sector pooling (allocating a sector and keeping it around only while some units are inside it), but if units are spread across the whole map, pooling is useless, since most sectors will be active anyway.
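Sector pooling could be sketched with a hash map keyed by sector index (hypothetical names; this assumes non-negative world coordinates for the key packing). Memory then scales with the number of occupied sectors instead of DIM^2, at the cost of trading the dense grid's raw array indexing for amortized O(1) hash lookups:

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Entity { float x, y; };

class PooledGrid {
    float cell;  // world-space side length of one sector
    std::unordered_map<std::uint64_t, std::vector<Entity>> sectors;

    // Pack the 2D sector index into one 64-bit key
    // (assumes non-negative coordinates).
    std::uint64_t key(float x, float y) const {
        std::uint64_t sx = std::uint64_t(x / cell);
        std::uint64_t sy = std::uint64_t(y / cell);
        return (sy << 32) | sx;
    }

public:
    explicit PooledGrid(float cellSize) : cell(cellSize) {}

    // A sector is allocated lazily the first time a unit enters it.
    void insert(Entity e) { sectors[key(e.x, e.y)].push_back(e); }

    // Drop a sector entirely so memory tracks occupancy.
    void removeAll(float x, float y) { sectors.erase(key(x, y)); }

    std::size_t activeSectors() const { return sectors.size(); }
};
```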
There's no perfect solution; everything depends on your needs, and that's why Try & Profile should be your first rule when you need to decide on the right algorithm.
Junior Irrlicht Developer.
Real value in social networks is not about "increasing" number of followers, but about getting in touch with Amazing people.
- by Me