I'm curious about order-independent rendering, so I have been reading about it. I'm not sure if this is a stupid or a good idea, but for single-pass order-independent rendering you could build a map of rays cast through every pixel of the near plane in the render function,
and use a map or set (anything with keys, really) as a pixel queue for each ray. Then loop through every buffer of each mesh, testing which rays intersect it. For each ray
that intersects, get the pixel (ignoring fully invisible ones) where the ray intersects the geometry.
If the material is solid or alpha-tested (and not invisible), set the end of the ray to this coordinate, shortening the ray into a line segment, so you can ignore all pixel fragments along the line that are occluded by a solid one. If a nearer occluding pixel is later found on the line, amputate all fragments in the line's queue that are farther than the new occluding one and shorten the line again.
Each pixel (fragment) in the queue must include the color and a blend flag, and the element's key should equal the squared distance from the camera to the fragment.
Then stream each queue to the video card, rendering/blending the whole queue farthest-to-nearest into a screen pixel.
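A rough Python sketch of what I mean, just to make the idea concrete. All the names here (PixelQueue, insert, resolve) are made up for illustration; a real version would live on the GPU, not in Python.

```python
# One per-screen-pixel queue: fragments keyed by squared distance from the
# camera, with a "solid" fragment shortening the ray and amputating
# everything farther away.

class PixelQueue:
    def __init__(self):
        self.frags = {}                      # dist_sq -> (rgb, alpha, blend_flag)
        self.opaque_dist_sq = float("inf")   # current end of the shortened ray

    def insert(self, dist_sq, rgb, alpha, blend):
        # Ignore fragments behind the nearest solid fragment found so far.
        if dist_sq > self.opaque_dist_sq:
            return
        self.frags[dist_sq] = (rgb, alpha, blend)
        if not blend:
            # Solid / alpha-tested: shorten the ray to this coordinate and
            # amputate queued fragments that are now occluded.
            self.opaque_dist_sq = dist_sq
            self.frags = {d: f for d, f in self.frags.items() if d <= dist_sq}

    def resolve(self, background=(0.0, 0.0, 0.0)):
        # Blend the whole queue farthest-to-nearest ("over" operator).
        color = background
        for d in sorted(self.frags, reverse=True):
            rgb, alpha, _ = self.frags[d]
            color = tuple(a * alpha + c * (1 - alpha)
                          for a, c in zip(rgb, color))
        return color

q = PixelQueue()
q.insert(9.0, (1.0, 0.0, 0.0), 0.5, blend=True)   # far transparent red
q.insert(4.0, (0.0, 0.0, 1.0), 1.0, blend=False)  # nearer solid blue
print(q.resolve())  # → (0.0, 0.0, 1.0): the red fragment was amputated
```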
Could also get all the texture's pixels that fall inside a cone intersection (instead of a line) and blend them together, averaging color and depth into a single fragment somewhere
along the line.
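The cone variant might look something like this; again the function name and fragment layout are just placeholders I invented for the sketch:

```python
# Merge all texel samples that fall inside the cone's footprint into one
# fragment, averaging color, alpha, and depth, so the queue only holds a
# single entry for the whole cone intersection.

def average_cone_samples(samples):
    """samples: list of (rgb, alpha, depth) tuples inside the cone footprint.
    Returns one merged (rgb, alpha, depth) fragment at the average depth."""
    n = len(samples)
    rgb = tuple(sum(s[0][i] for s in samples) / n for i in range(3))
    alpha = sum(s[1] for s in samples) / n
    depth = sum(s[2] for s in samples) / n
    return rgb, alpha, depth

merged = average_cone_samples([((1.0, 0.0, 0.0), 1.0, 2.0),
                               ((0.0, 0.0, 1.0), 0.5, 4.0)])
# merged is ((0.5, 0.0, 0.5), 0.75, 3.0)
```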
If I understand right, in depth peeling multiple view planes are rendered and then blended together. I think this would just extend the normal method of using rays to find the pixel
belonging to a screen coordinate. It shouldn't be much, if any, slower, since invisible or occluded fragments would be left out of the queue.
Would something like this be better than depth peeling, or am I completely misunderstanding?
I'm curious about order independent rendering
Re: I'm curious about order independent rendering
It sounds as if you are proposing to implement the depth buffer in software.
Re: I'm curious about order independent rendering
Kind of, except for fragments instead of pixels, and probably written in a GPU programming language. I think this is a simplified form of an A-buffer. Except apparently the reason people don't do it quite the way I was thinking is that graphics cards are limited in how many fragments with transparency can be rendered into a screen pixel, if I correctly understand what I just read at http://stackoverflow.com/questions/9845 ... n-z-buffer
Could maybe collapse groups of transparent fragments into one fragment entry, but that would be multi-pass.
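The collapsing step is possible in principle with premultiplied alpha: compositing a back-to-front run of transparent fragments once gives a single fragment that blends the same as the whole run. A hedged sketch (the function name is my own, not from any real API):

```python
# Collapse a back-to-front run of transparent fragments into one equivalent
# fragment. Accumulating with the "over" operator in premultiplied-alpha
# form, then un-premultiplying, yields a single (rgb, alpha) entry.

def collapse(frags):
    """frags: back-to-front list of (rgb, alpha). Returns one (rgb, alpha)
    whose 'over' blend equals blending the whole run in order."""
    acc_rgb = (0.0, 0.0, 0.0)  # premultiplied accumulated color
    acc_a = 0.0
    for rgb, a in frags:       # composite each nearer fragment over acc
        acc_rgb = tuple(c * a + p * (1 - a) for c, p in zip(rgb, acc_rgb))
        acc_a = a + acc_a * (1 - a)
    # Un-premultiply so the result is a normal straight-alpha fragment.
    if acc_a > 0:
        acc_rgb = tuple(c / acc_a for c in acc_rgb)
    return acc_rgb, acc_a
```

Blending the collapsed fragment over any background then matches blending the original run fragment by fragment, so the group only costs one queue entry.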