OpenCL based GP-GPU raytracer in conjunction with rasterizing

Post by devsh »

Hey all,

I'm looking for people with OpenCL experience.

I am almost finished with my troublesome shadows, trees and water.
I'm also putting up a demo of geometry shader grass with andres.

After I'm finished with that I'll have some spare time, as neither of the MMOs I'm a team member of has anything for me to do, or what there is is so little that it doesn't fully occupy my time.

So I decided I might take Irrlicht SVN or 1.6, strip it down to only include the OpenGL driver, and then change its fundamental functions so we get the first thing needed... frustum culling done on the CPU and GPU simultaneously, if possible.
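The CPU side of the cull would be nothing fancier than the usual six-plane test; a rough sketch in plain C (made-up types, nothing Irrlicht-specific yet):

/* test an axis-aligned box against the six frustum planes
   (plane normals assumed to point into the frustum) */
typedef struct { float x, y, z, w; } plane_t;  /* x,y,z = normal, w = distance */
typedef struct { float mn[3], mx[3]; } aabb_t;

int aabbInFrustum(const plane_t planes[6], const aabb_t* box)
{
    for (int p = 0; p < 6; ++p)
    {
        /* pick the box corner furthest along the plane normal ("positive vertex") */
        float vx = planes[p].x >= 0.f ? box->mx[0] : box->mn[0];
        float vy = planes[p].y >= 0.f ? box->mx[1] : box->mn[1];
        float vz = planes[p].z >= 0.f ? box->mx[2] : box->mn[2];

        if (planes[p].x*vx + planes[p].y*vy + planes[p].z*vz + planes[p].w < 0.f)
            return 0;  /* fully outside this plane -> cull the node */
    }
    return 1;          /* inside or intersecting the frustum */
}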

The next thing is taking all the models and, based on their indices and vertices, baking the triangles. Alternatively we could bake the triangles on the GPU too.
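Baking on the CPU would basically just mean flattening the index buffer into a plain triangle list, something like this (sketch, made-up names):

#include <stdlib.h>

typedef struct { float x, y, z, pad; } TriVert;  /* padded to float4 for the GPU side */

/* expand an indexed mesh into a flat array: 3 consecutive entries = 1 triangle */
TriVert* bakeTriangles(const float* positions,        /* xyz per vertex */
                       const unsigned short* indices,
                       size_t indexCount)
{
    TriVert* tris = (TriVert*)malloc(indexCount * sizeof(TriVert));
    for (size_t i = 0; i < indexCount; ++i)
    {
        const float* v = positions + 3 * indices[i];
        tris[i].x = v[0];
        tris[i].y = v[1];
        tris[i].z = v[2];
        tris[i].pad = 1.f;
    }
    return tris;
}

The result would then go up with a single clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, ...) call whenever the visible set changes.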

I think it would be a good idea to put the triangles into a quadtree.

The OpenCL GPU program will get uploaded with all the triangles that are inside the frustum.

I think it would be better to go for the straight-line ray model rather than a cone model, as it is faster. However, it will produce aliasing like in a real rasterized image. Using binary occlusion tests the ray would be traced, and one triangle would be found.

Each ray would be traced in parallel on the GPU; the function would return the id of the primitive hit, the interpolated UV coord at the collision, the angle of reflection and the world-space coord of the collision.
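Just so people can picture the per-ray function, a rough OpenCL C sketch (brute force over the baked triangles, Moller-Trumbore intersection, made-up struct names; it returns a reflected direction rather than an angle since that is what the next bounce needs):

typedef struct { float4 orig; float4 dir; } Ray;   /* dir.w assumed 0 */
typedef struct { int primId; float u, v; float4 worldPos; float4 reflDir; } Hit;

__kernel void traceRays(__global const Ray* rays,
                        __global const float4* tris,  /* 3 float4 per triangle, w unused */
                        const int triCount,
                        __global Hit* hits)
{
    int gid = get_global_id(0);
    float4 o = rays[gid].orig;
    float4 d = rays[gid].dir;
    float bestT = 1e30f;
    Hit h;
    h.primId = -1;

    for (int t = 0; t < triCount; ++t)
    {
        float4 a  = tris[3*t];
        float4 e1 = tris[3*t+1] - a;
        float4 e2 = tris[3*t+2] - a;
        float4 p  = cross(d, e2);
        float det = dot(e1, p);
        if (fabs(det) < 1e-6f) continue;           /* ray parallel to triangle */
        float inv = 1.f / det;
        float4 s = o - a;
        float u = dot(s, p) * inv;
        if (u < 0.f || u > 1.f) continue;
        float4 q = cross(s, e1);
        float v = dot(d, q) * inv;
        if (v < 0.f || u + v > 1.f) continue;
        float tt = dot(e2, q) * inv;
        if (tt > 1e-4f && tt < bestT)              /* keep the nearest hit */
        {
            bestT = tt;
            h.primId = t;
            h.u = u;
            h.v = v;
            h.worldPos = o + tt * d;
            float4 n = normalize(cross(e1, e2));
            h.reflDir = d - 2.f * dot(d, n) * n;   /* reflect the ray about the normal */
        }
    }
    hits[gid] = h;
}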

I would consider it a great success if we managed to render a plane and the dwarf at 400x300 at better FPS than BlindSide with his CPU-based raytracer.

Post by devsh »

haha *maniac laugh*

I'm making the hybrid raytracer,

and I have recently found this:

clCreateFromGLRenderbuffer

which enables me to read straight from my MRT normalMap RENDER BUFFER

sweet!

This means I have no overhead from copying texture data.

The data flow is now

GPU_Rasterization->GP-GPU !!!

instead of

GPU_Rasterization->CPU->GP-GPU
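The host side of that is roughly the following (error checks stripped, my own variable names; needs the cl_khr_gl_sharing extension):

cl_int err;
cl_mem clNormals = clCreateFromGLRenderbuffer(clContext, CL_MEM_READ_ONLY,
                                              normalRenderbuffer, &err);

glFinish();                                      /* GL must be done writing first */
clEnqueueAcquireGLObjects(queue, 1, &clNormals, 0, NULL, NULL);

/* ... enqueue the kernels that read the normal map here ... */

clEnqueueReleaseGLObjects(queue, 1, &clNormals, 0, NULL, NULL);
clFinish(queue);                                 /* hand the buffer back to GL */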

Haha, the next thing is to get bounding boxes from the normals in the image!!!
:( writing openCL kernels

FURTHER OPTIONS:

a) GPU gets the bounding boxes from the image and returns them to the CPU
CPU does the grouping (octree or something) and decides which nodes fall inside the rays' bounding boxes
GPU does recursive checking against the bounding boxes of those nodes
CPU uploads the triangles of the mesh in a kd-tree or octree structure
GPU does the ray intersection check!!!

b) GPU, once it has the bounding boxes from the image, checks them against all the nodes' bounding boxes and returns only the rays which collide with anything (return type [pixels_pos_in_scene.xy, ray_viewmatrix, list_of_ids_of_nodes_which_pass_the_bbox_intersection])
CPU makes another round and uploads the ray data to another kernel which checks the bounding-box hierarchy for a more precise collision, and then the actual triangles too
GPU checks the bounding boxes first; if there is a collision, "rasterize" it (matrix-multiply the triangle verts with the ray's view matrix); if there is no collision, output the background colour; if there are more triangles to test, test them in order from nearest to furthest

(the basic ray/bounding-box test both options rely on is sketched just below)
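That check is just the usual slab test, roughly this in OpenCL C (sketch; invDir is 1/dir per component, precomputed):

int rayHitsAABB(float4 orig, float4 invDir, float4 boxMin, float4 boxMax)
{
    float4 t0 = (boxMin - orig) * invDir;
    float4 t1 = (boxMax - orig) * invDir;
    float4 tNear = fmin(t0, t1);
    float4 tFar  = fmax(t0, t1);

    float tEnter = fmax(fmax(tNear.x, tNear.y), tNear.z);
    float tExit  = fmin(fmin(tFar.x,  tFar.y),  tFar.z);

    return (tExit >= 0.f) && (tEnter <= tExit);  /* box in front of ray and slabs overlap */
}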

EDIT: MUAHAHA - good news for peeps with i7 and SLI or crossfire, YOU CAN USE THEM ALL TO SPEED IT UP!!!

EDIT2: EVEN MORE CRAZY LAUGH - I can use the buffer directly AND WRITE TO IT. This solves the problem of actually adding up the reflection colour later.

EDIT3: Really, OpenCL is so powerful we could write an OpenCL driver. Instead of shaders you could use kernels, and they would be FAR MORE powerful; they would allow changing the pixel position, unlike GLSL, so parallax mapping would be many times faster (almost as fast as straight rendering). Just make a render target in GL, make a CL context from the GL context, make a cl_mem object from the render target and write stuff into it... then just present the render target on the screen with GL.
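For reference, making the CL context from the GL context is just this (WGL shown, GLX uses CL_GLX_DISPLAY_KHR instead; platform, device and rtTexture stand for whatever you already have):

cl_context_properties props[] =
{
    CL_GL_CONTEXT_KHR,   (cl_context_properties)wglGetCurrentContext(),
    CL_WGL_HDC_KHR,      (cl_context_properties)wglGetCurrentDC(),
    CL_CONTEXT_PLATFORM, (cl_context_properties)platform,
    0
};

cl_int err;
cl_context clContext = clCreateContext(props, 1, &device, NULL, NULL, &err);

/* the GL render target texture, shared as a writable CL image */
cl_mem target = clCreateFromGLTexture2D(clContext, CL_MEM_WRITE_ONLY,
                                        GL_TEXTURE_2D, 0, rtTexture, &err);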

I remember someone saying on IRC, "the future will be software renderers, but running on hardware acceleration".

Post by devsh »

Hey, a question for hybrid, BlindSide or others who are more advanced:

which GLuint am I supposed to use,

ColorFrameBuffer
OR
getOpenGLTextureName()

or something else???

extern CL_API_ENTRY cl_mem CL_API_CALL
clCreateFromGLRenderbuffer(cl_context   /* context */,
                           cl_mem_flags /* flags */,
                           GLuint       /* renderbuffer */,
                           cl_int *     /* errcode_ret */) CL_API_SUFFIX__VERSION_1_0;

Post by Halifax »

People can't infer what you are talking about when you just ask "should I use getOpenGLTextureName()?". You could be calling that method on any valid COpenGLTexture object, so please, next time you ask a question, ask it sensibly.

You'll want to use the ColorFrameBuffer variable; that is the framebuffer.
TheQuestion = 2B || !2B

Post by devsh »

OK, I just want to know what I should put in there, as it returns CL_INVALID_VALUE when I try to use the framebuffer GLuint.

MUAHAHA: the story gets even better, as I can have all OpenGL textures as OpenCL image memory objects, muaha... AT NO COST MUAHAHA
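Reading one of those shared textures inside a kernel is then just a read_imagef, roughly (sketch, my own names):

__constant sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                           CLK_ADDRESS_CLAMP_TO_EDGE |
                           CLK_FILTER_NEAREST;

__kernel void readNormals(__read_only image2d_t normalMap,
                          __global float4* outNormals,
                          const int width)
{
    int2 pos = (int2)(get_global_id(0), get_global_id(1));
    float4 n = read_imagef(normalMap, smp, pos);        /* normals packed into RGBA */
    outNormals[pos.y * width + pos.x] = n * 2.f - 1.f;  /* unpack [0,1] -> [-1,1] */
}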

Post by devsh »

HAHA EVIL IDEA 3:

Most probably the most useful thing I'm going to make for games will be global illumination (the raytracer will kill itself with large amounts of geometry, i.e. real games, although by the end of my project I expect it to handle the Quake 3 level).

http://www.youtube.com/watch?v=B-_pnqXL ... re=channel

Two approaches:

a) I will simply make another CL object from a render buffer which happens to be the shadow map (evil laugh), scale it down a bit (do fewer operations) and trace rays into the scene from there, but adding colour in REVERSE (i.e. the origin of the ray doesn't get new colours, but infects the points where it hits with light)

b) when running the normal raytracing it would not be a large overhead to blend colour from collisions BACKWARDS (meaning the point of collision adds to the reflection, but light from the origin of the ray [the reflective place] also changes the colour of the reflecting surface)

In both approaches the "light infection" must have decay like in Volumetric Light Scattering, but in the first one I could implement culling and quit collision testing when the ray goes outside the view frustum (because it would only be contributing to pixel/object colours outside the frustum [since it goes BACKWARDS], whereas the reflections contribute to colour inside the view frustum even when the ray is outside).
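To make the decay bit concrete, the per-hit "infection" would be something as simple as this (sketch, made-up names):

/* how much light a hit point receives from the ray origin, with distance decay */
float4 infectWithLight(float4 originRadiance, float4 origin, float4 hitPoint,
                       float decay)
{
    float dist = length(hitPoint.xyz - origin.xyz);
    return originRadiance * exp(-decay * dist);  /* exponential falloff, like in VLS */
}

Callers would simply stop following a ray once it leaves the frustum (approach a), since everything it could still infect is off-screen anyway.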

Post by devsh »

I think the NVIDIA OpenCL examples show how far the CPU is behind the GPU in something as easily parallelisable as raytracing. A simple blur filter (which can be done in GLSL) is almost as fast with OpenCL, a 256-sample blur filter running at decent FPS, and that is without OPENGL INTEROP.

Meanwhile the CPU C++ version does the operation so slowly I could go to the kitchen, get more coke, drink a whole cup, wash my dirty spaghetti plate, talk with some guy on IRC and listen to two songs.

So guys, I'm sure we will have a quick raytracer.

Post by anoki »

I think it is a great idea to try it with realtime raytracing...
There are already examples of this on the net.
This is the right time!