Creating Normalmap from a High-Res model?

Post your questions, suggestions and experiences regarding image manipulation, 3D modeling and level editing for the Irrlicht engine here.
Masterhawk
Posts: 299
Joined: Mon Nov 27, 2006 6:52 pm
Location: GERMANY
Contact:

Creating Normalmap from a High-Res model?

Post by Masterhawk »

Hey guys,

due to the fact that there definitely are a few pros around here, I hope you can help me. For my bachelor project at the university I have to create a normal map from a high-res polygon model and reuse it with the low-poly version. So far, so common. Unfortunately, I have to write a tool that can handle this. So does anyone of you know a description of an algorithm which does exactly this?

Any hints on solving this problem are very much appreciated!

thx, masterhawk
JP
Posts: 4526
Joined: Tue Sep 13, 2005 2:56 pm
Location: UK
Contact:

Post by JP »

Mmm nice project! Hope you can make it available if you're successful!

I guess you have to think about what the normal map is... It's like a texture for the model but specifying the normal of the model at each UV coordinate.

So with your high res model you've got to go through each poly and maybe fill in a texture with the normals of those polys based on the UV coordinates of each of the vertices in the poly?

So you've got your three vertices with their UV coordinates, so you can map those vertices onto a texture pretty easily and just render a triangle with the face normal of that poly... That wouldn't give you a nice smooth normal map though. If you were using vertex normals instead, maybe you just draw the triangle colouring each vertex with its normal, and then the blending would be done for you...?

Just thinking out loud here but sounds like that might be along the right lines...
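Something like that could look roughly like this — just a sketch, not tested, and the Vec2/Vec3 helpers and texSize buffer layout are my own assumptions: walk the triangle's bounding box in texture space and write the barycentrically blended vertex normal into every covered pixel.

Code: Select all

#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Rasterize one triangle into the normal texture: walk the triangle's
// bounding box in pixel space and, for every pixel inside the triangle,
// store the barycentrically interpolated vertex normal.
void rasterizeTriangleNormals(const Vec2 uv[3], const Vec3 n[3],
                              Vec3* texture, int texSize)
{
    // triangle corners in pixel coordinates
    const float x0 = uv[0].x * (texSize - 1), y0 = uv[0].y * (texSize - 1);
    const float x1 = uv[1].x * (texSize - 1), y1 = uv[1].y * (texSize - 1);
    const float x2 = uv[2].x * (texSize - 1), y2 = uv[2].y * (texSize - 1);

    const int minX = (int)std::floor(std::min({x0, x1, x2}));
    const int maxX = (int)std::ceil (std::max({x0, x1, x2}));
    const int minY = (int)std::floor(std::min({y0, y1, y2}));
    const int maxY = (int)std::ceil (std::max({y0, y1, y2}));

    const float denom = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2);
    if (denom == 0.f) return; // triangle is degenerate in UV space

    for (int y = std::max(minY, 0); y <= std::min(maxY, texSize - 1); ++y)
        for (int x = std::max(minX, 0); x <= std::min(maxX, texSize - 1); ++x)
        {
            // barycentric weights of this pixel
            const float b0 = ((y1 - y2) * (x - x2) + (x2 - x1) * (y - y2)) / denom;
            const float b1 = ((y2 - y0) * (x - x2) + (x0 - x2) * (y - y2)) / denom;
            const float b2 = 1.f - b0 - b1;
            if (b0 < 0.f || b1 < 0.f || b2 < 0.f)
                continue; // pixel lies outside the triangle

            // smooth normal = blend of the three vertex normals
            Vec3 nrm = { b0*n[0].x + b1*n[1].x + b2*n[2].x,
                         b0*n[0].y + b1*n[1].y + b2*n[2].y,
                         b0*n[0].z + b1*n[1].z + b2*n[2].z };
            const float len = std::sqrt(nrm.x*nrm.x + nrm.y*nrm.y + nrm.z*nrm.z);
            if (len > 0.f) { nrm.x /= len; nrm.y /= len; nrm.z /= len; }

            texture[y * texSize + x] = nrm;
        }
}
The barycentric weights do the blending between the three vertex normals for free, which is exactly the smooth case above.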
Mel
Competition winner
Posts: 2292
Joined: Wed May 07, 2008 11:40 am
Location: Granada, Spain

Post by Mel »

That isn't a trivial matter, but, overall, it is mostly a problem of projecting the triangles of the high-poly mesh onto the mesh of the low-poly one.

Imagine you put a camera over each polygon of the low-poly mesh; from there you can see the detail of the high-poly mesh. Then render the normal of each point, using a raycasting algorithm for example to sample the high-poly mesh, and that would be the core of it.

What do you think?
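The sampling core of that could be a standard ray/triangle intersection test, e.g. Möller-Trumbore. A minimal sketch (my own, with a bare-bones Vec3; it returns the hit distance plus the barycentric coordinates, which you can reuse to interpolate the high-poly vertex normals):

Code: Select all

#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& v) const { return {x - v.x, y - v.y, z - v.z}; }
};
static Vec3 cross(const Vec3& a, const Vec3& b)
{ return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(const Vec3& a, const Vec3& b)
{ return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moeller-Trumbore ray/triangle test
bool rayTriangle(const Vec3& orig, const Vec3& dir,
                 const Vec3& v0, const Vec3& v1, const Vec3& v2,
                 float& t, float& u, float& v)
{
    const Vec3 e1 = v1 - v0, e2 = v2 - v0;
    const Vec3 p = cross(dir, e2);
    const float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false; // ray parallel to triangle
    const float inv = 1.f / det;

    const Vec3 s = orig - v0;
    u = dot(s, p) * inv;
    if (u < 0.f || u > 1.f) return false;

    const Vec3 q = cross(s, e1);
    v = dot(dir, q) * inv;
    if (v < 0.f || u + v > 1.f) return false;

    t = dot(e2, q) * inv; // hit distance along the ray
    return t > 0.f;
}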
"There is nothing truly useless, it always serves as a bad example". Arthur A. Schmitt
Steel Style
Posts: 168
Joined: Sun Feb 04, 2007 3:30 pm
Location: France

Post by Steel Style »

IMHO you'll also need to use raycasting (for each pixel of the output normal map). Then, when your ray hits the model, you'll need to store the ray data: first the normalized ray in R, G, B, and maybe the strength in the alpha, but I don't know a lot about normal map channels.
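(For reference, the usual channel layout is just the biased unit normal; a sketch, assuming a unit-length float normal with components in [-1,1]:)

Code: Select all

struct Vec3 { float x, y, z; };

// bias a unit normal from [-1,1] into the [0,255] byte range;
// (128,128,255) then encodes the "straight up" normal (0,0,1)
void encodeNormal(const Vec3& n, unsigned char rgb[3])
{
    rgb[0] = (unsigned char)((n.x * 0.5f + 0.5f) * 255.f + 0.5f);
    rgb[1] = (unsigned char)((n.y * 0.5f + 0.5f) * 255.f + 0.5f);
    rgb[2] = (unsigned char)((n.z * 0.5f + 0.5f) * 255.f + 0.5f);
}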
Masterhawk
Posts: 299
Joined: Mon Nov 27, 2006 6:52 pm
Location: GERMANY
Contact:

Post by Masterhawk »

well, thanks for your replies.

The method JP described was the first that came into my mind, but I have a few problems with the resulting texture ;)

The raycasting method is used by all open-source apps I could find on the web (e.g. NormalMapper by AMD/ATI).

Will see if I can get one of the methods working.
christianclavet
Posts: 1638
Joined: Mon Apr 30, 2007 3:24 am
Location: Montreal, CANADA
Contact:

Post by christianclavet »

You could check for XNormal (http://www.xnormal.net/1.aspx)
This tool will do exactly that. It can deal with OBJ and other formats, so you don't need any specific 3D application to use it.

For this to work properly, the low-poly and the high-poly mesh should be at the same position and about the same size. You at least need to create the UVs on the low-poly version.

These tools "bake" the details from the high-poly mesh into the low-poly mesh using raycasting, or a "cage" object (I think the cage uses the low-poly mesh as a reference, and is normally slightly bigger than the high-poly mesh). The rendering of the texture uses the low-poly mesh's UV coordinates.

Perhaps this is the way I think it could be done:
From the UVs of the low-poly mesh, get the corresponding coordinate on the surface of the low-poly mesh, then do a raycasting collision test along the normal from the low-poly to the high-poly mesh. Once the collision is determined and the high-poly surface is found, take the normal of that surface at those coordinates and write the values into the texture at that UV position.
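A sketch of that per-texel step (my own, untested; the Triangle struct and rayTriangle() are assumptions, the latter being e.g. a Möller-Trumbore test, and the brute-force loop over all high-poly triangles would want an octree or similar in a real tool):

Code: Select all

#include <cfloat>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& v) const { return {x - v.x, y - v.y, z - v.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};
struct Triangle { Vec3 v0, v1, v2, normal; };

// ray/triangle test (e.g. Moeller-Trumbore), assumed to exist elsewhere
bool rayTriangle(const Vec3& orig, const Vec3& dir,
                 const Vec3& v0, const Vec3& v1, const Vec3& v2,
                 float& t, float& u, float& v);

// One texel of the bake: start at the point on the low-poly surface,
// shoot a ray along the low-poly normal, keep the nearest high-poly hit
// and hand back that surface's normal.
bool sampleHighPoly(const Vec3& surfacePoint, const Vec3& lowPolyNormal,
                    const std::vector<Triangle>& highPoly, Vec3& outNormal)
{
    // start slightly below the low-poly surface so detail that lies
    // under it is found too (0.1 is an arbitrary bias, tune as needed)
    const Vec3 origin = surfacePoint - lowPolyNormal * 0.1f;

    float bestT = FLT_MAX;
    bool found = false;
    for (const Triangle& tri : highPoly)
    {
        float t, u, v;
        if (rayTriangle(origin, lowPolyNormal, tri.v0, tri.v1, tri.v2, t, u, v)
            && t < bestT)
        {
            bestT = t;
            outNormal = tri.normal; // or blend the vertex normals via (u,v)
            found = true;
        }
    }
    return found;
}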
FuzzYspo0N
Posts: 914
Joined: Fri Aug 03, 2007 12:43 pm
Location: South Africa
Contact:

Post by FuzzYspo0N »

Cool project!

Read this thread (yeah, it's hellishly long), but over its many pages they detail just about every theoretical aspect of creating tools to bake, as well as render, the best kind of normal maps :) (with images and formulas etc.)

http://boards.polycount.net/showthread.php?t=48305
Masterhawk
Posts: 299
Joined: Mon Nov 27, 2006 6:52 pm
Location: GERMANY
Contact:

Post by Masterhawk »

Well thanks to both of you, christian and fuzzy!

My first attempt was to implement a version which uses the U,V coords of the high-poly mesh to map it onto a texture. Then I iterate over each pixel that the currently processed face overlays. Each of these pixels gets the normal values of the current face encoded in the RGB channels. Normals are currently taken from world space.

The mapping of the mesh onto the texture works pretty well, but the resulting colors seem to be a bit strange. I guess I'm horribly missing something.

"Normalmap" of a plane with a peak
Image

The plane with the texture applied
Image

Creating texture of a well-known cow
Image

And the cow with the color texture applied
Image


Here's the code that produces this mess.

Code: Select all

void CMeshManipulator::createNormalMap(CAISceneNode* node)
{
	int height = 512;
	int width = 512;

	int i = 0; // only the first mesh is processed

	// allocate the normal buffer and clear it to zero vectors
	aiVector3D** normals = new aiVector3D*[width];
	for(int d = 0; d < width; d++)
		normals[d] = new aiVector3D[height];

	for(int k = 0; k < width; k++)
		for(int l = 0; l < height; l++)
			normals[k][l] = aiVector3D();

	for(unsigned int f = 0; f < scene->mMeshes[i]->mNumFaces; f++)
	{
		// UV coordinates of the three vertices of this face
		aiVector3D vec_uv1 = scene->mMeshes[i]->mTextureCoords[0][scene->mMeshes[i]->mFaces[f].mIndices[0]];
		aiVector3D vec_uv2 = scene->mMeshes[i]->mTextureCoords[0][scene->mMeshes[i]->mFaces[f].mIndices[1]];
		aiVector3D vec_uv3 = scene->mMeshes[i]->mTextureCoords[0][scene->mMeshes[i]->mFaces[f].mIndices[2]];

		// vertex normals (currently unused, the face normal below is used instead)
		aiVector3D vert_norm_1 = scene->mMeshes[i]->mNormals[scene->mMeshes[i]->mFaces[f].mIndices[0]].Normalize();
		aiVector3D vert_norm_2 = scene->mMeshes[i]->mNormals[scene->mMeshes[i]->mFaces[f].mIndices[1]].Normalize();
		aiVector3D vert_norm_3 = scene->mMeshes[i]->mNormals[scene->mMeshes[i]->mFaces[f].mIndices[2]].Normalize();

		// calculating the normal vector of the current face
		// (cross product of two edge vectors)
		aiVector3D vec_norm = (scene->mMeshes[i]->mVertices[scene->mMeshes[i]->mFaces[f].mIndices[1]]
							- scene->mMeshes[i]->mVertices[scene->mMeshes[i]->mFaces[f].mIndices[0]])
							^ (scene->mMeshes[i]->mVertices[scene->mMeshes[i]->mFaces[f].mIndices[2]]
							- scene->mMeshes[i]->mVertices[scene->mMeshes[i]->mFaces[f].mIndices[0]]);

		vec_norm.Normalize();

		float max_pixel = 511.f;

		if(max_pixel > 0.f)
		{
			float step = 1.f / max_pixel;
			for(float r = 0.f; r <= 1.f; r += step)
				for(float s = 0.f; s <= 1.f; s += step)
				{
					// clamp (r,s) so the barycentric coordinates
					// stay inside the triangle
					float bary_s = s;
					float bary_r = r;
					if((bary_r + bary_s) > 1.f)
					{
						bary_s -= (bary_r + bary_s) - 1.f;
					}
					float bary_t = 1.f - bary_r - bary_s;

					// current position in u,v
					aiVector3D curr_Vec;
					curr_Vec.x = bary_r*vec_uv1.x + bary_s*vec_uv2.x + bary_t*vec_uv3.x;
					curr_Vec.y = bary_r*vec_uv1.y + bary_s*vec_uv2.y + bary_t*vec_uv3.y;

					// transform u,v to pixel coordinates x,y
					int x = (int)((511.f * curr_Vec.x) + 0.5f);
					int y = (int)((511.f * curr_Vec.y) + 0.5f);

					normals[x][y] = aiVector3D(vec_norm);
				}
		}
	} //end num faces

	//writing NORMALMAP.RAW
	std::ofstream normalmap("c:\\normalmap.raw", std::ios::binary);

	for(int w = 0; w < width; w++)
		for(int h = 0; h < height; h++)
		{
			int i_value_x = (int)(((normals[h][w].x * 255.f) - 127.f) + 0.5f);
			int i_value_y = (int)(((normals[h][w].y * 255.f) - 127.f) + 0.5f);
			int i_value_z = (int)(((normals[h][w].z * 255.f) - 127.f) + 0.5f);
			char vec[3] = { (char)i_value_x, (char)i_value_y, (char)i_value_z };
			normalmap.write(vec, 3);
		}

	normalmap.close();

	// free the normal buffer
	for(int d = 0; d < width; d++)
		delete[] normals[d];
	delete[] normals;
}
I presume the error is in the creation of the RAW file, because the grey color in the plane's RAW file would be caused by a zero vector, but none of the processed normals is a zero vector. And since the texture is completely covered by the plane, there should be no grey color at all.
Katsankat
Posts: 178
Joined: Sun Mar 12, 2006 4:15 am
Contact:

Post by Katsankat »

Awesome, did you try to generate the object-space normal map using a very high-poly mesh, and apply it to the low-poly?
Masterhawk
Posts: 299
Joined: Mon Nov 27, 2006 6:52 pm
Location: GERMANY
Contact:

Post by Masterhawk »

Katsankat wrote:Awesome, did you try to generate the object-space normal map using a very high-poly mesh, and apply it to the low-poly?
Well, no, I didn't, since there seems to be a fundamental error in the existing code. And I really can't get rid of it. :x
Mel
Competition winner
Posts: 2292
Joined: Wed May 07, 2008 11:40 am
Location: Granada, Spain

Post by Mel »

The explanation is simple to understand: your normals are being taken in world-space coordinates, as you've said.

In world-space coordinates the Z value, the blue tone, can have any value ranging from -1 to 1, so it is to be expected that the combinations of values can give you any possible color. In tangent space, on the other hand, the Z value is most of the time almost 1, so the blue color dominates. That's why the colors seem odd. Nonetheless, a further inspection of your code doesn't seem to reveal any particular flaw; in my opinion it is pretty clear what it does, and I think it is correct.

There is a test you could run anyway: a high-polycount sphere. Its normals point in all possible directions, so it is a good test to see whether things are working properly.
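As an illustration of the world-space/tangent-space difference: turning a baked world-space normal into a tangent-space one is just a projection onto the surface's tangent/bitangent/normal basis (a sketch, assuming the basis is orthonormal):

Code: Select all

struct Vec3 { float x, y, z; };
static float dot(const Vec3& a, const Vec3& b)
{ return a.x*b.x + a.y*b.y + a.z*b.z; }

// re-express a world-space normal in the tangent space of the low-poly
// surface; for a flat, undistorted patch the result is close to (0,0,1),
// which is why tangent-space maps look mostly blue
Vec3 worldToTangent(const Vec3& worldNormal,
                    const Vec3& T, const Vec3& B, const Vec3& N)
{
    return { dot(worldNormal, T),
             dot(worldNormal, B),
             dot(worldNormal, N) };
}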
"There is nothing truly useless, it always serves as a bad example". Arthur A. Schmitt
Masterhawk
Posts: 299
Joined: Mon Nov 27, 2006 6:52 pm
Location: GERMANY
Contact:

Post by Masterhawk »

Mel wrote:[...]In world-space coordinates the Z value, the blue tone, can have any value ranging from -1 to 1, so it is to be expected that the combinations of values can give you any possible color.[...]
Thanks Mel, you gave the key hint. I really didn't think about the possibility of a negative value.

I had to change something in the writing of the normal map:

Code: Select all

// map the [-1,1] normal components to signed [-127,127] bytes
int i_value_x = (int)(normals[h][w].x * 127.f);
int i_value_y = (int)(normals[h][w].y * 127.f);
int i_value_z = (int)(normals[h][w].z * 127.f);
char vec[3] = { (char)i_value_x, (char)i_value_y, (char)i_value_z };
normalmap.write(vec, 3);
Masterhawk
Posts: 299
Joined: Mon Nov 27, 2006 6:52 pm
Location: GERMANY
Contact:

Post by Masterhawk »

Hey guys, I'm still dealing with this topic.

This time I need to spread a model which has no texture coordinates at all over the texture space. I need to create a good set of default mapping coordinates which doesn't waste much space and has no folds.
In other words, I need u,v coordinates at the vertices which spread over nearly the whole texture space.

Any ideas?
Mel
Competition winner
Posts: 2292
Joined: Wed May 07, 2008 11:40 am
Location: Granada, Spain

Post by Mel »

http://www.loria.fr/~petitjea/papers/siggraph02.pdf
http://people.cs.ubc.ca/~sheffa/papers/ ... s_plus.pdf

Those PDFs seem interesting when it comes to spreading polygons across a coordinate map space. Haven't you found any on your own?
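If you only need a quick fallback before implementing one of those, a naive option is to give every triangle its own cell in a regular grid atlas; it's guaranteed fold- and overlap-free, just wasteful and full of seams (a sketch, the names are my own):

Code: Select all

#include <cmath>
#include <vector>

struct Vec2 { float u, v; };

// give every triangle its own square cell in a grid atlas; returns
// three UVs per triangle (so UVs are not shared between faces)
std::vector<Vec2> naiveTriangleAtlas(size_t triangleCount)
{
    const size_t cells = (size_t)std::ceil(std::sqrt((double)triangleCount));
    const float cell = 1.f / (float)cells;
    const float inset = cell * 0.05f; // small gap against texture bleeding

    std::vector<Vec2> uvs;
    uvs.reserve(triangleCount * 3);
    for (size_t i = 0; i < triangleCount; ++i)
    {
        const float u0 = (i % cells) * cell + inset;
        const float v0 = (i / cells) * cell + inset;
        const float s = cell - 2.f * inset;
        uvs.push_back({u0, v0});
        uvs.push_back({u0 + s, v0});
        uvs.push_back({u0, v0 + s});
    }
    return uvs;
}
The papers above do much better by flattening whole charts of connected triangles at once instead of cutting every face apart.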
"There is nothing truly useless, it always serves as a bad example". Arthur A. Schmitt
FuzzYspo0N
Posts: 914
Joined: Fri Aug 03, 2007 12:43 pm
Location: South Africa
Contact:

Post by FuzzYspo0N »

You could look at apps like Blender and other tools that do all that automatically; they have links to good articles.