The normals are independent of the vertex position, but they can't be transformed by the same matrix as the positions: to transform them properly you have to multiply them by the inverse transpose of the instance's world matrix (or of world*view, if you do the lighting in view space), which handles rotation and non-uniform scale correctly. That poses a problem for instancing: the matrix is different for each instance, and you can't just pass a shared viewInverse and combine it with the per-instance world matrix in the shader, because inverse(world * view) = inverse(view) * inverse(world), and inverse(world) is itself per instance. So either you pass it per instance, which reduces the number of instances you can fit in a batch, or you compute it in the shader, which kills the efficiency of the instancing.
It is much better to render fewer instances per batch this way and perform the proper transformations on the CPU (once per instance) than to waste time computing the matrices per vertex: that cost cannot be recovered, and the more vertices an instance has, the more time is lost. (See the host-side sketch after the shader.)
So... something like this might work...
Code:
#define NUM_BATCH_INSTANCES 30

float4x4 ViewProjection;                                  // shared by the whole batch
float4x4 instanceWorldArray[NUM_BATCH_INSTANCES];         // per-instance world matrix
float4x4 instanceWorldInverseArray[NUM_BATCH_INSTANCES];  // per-instance inverse(world), computed on the CPU
float3 lightPos;                                          // light position in world space

struct VertexInput
{
    float3 position : POSITION;
    float3 normal   : NORMAL;
    float2 uv       : TEXCOORD0;
    float2 uv2      : TEXCOORD1;  // uv2.x carries the instance index, baked into the batched vertex buffer
    float4 color    : COLOR0;
};

struct VertexOutput
{
    float4 screenPos : POSITION;
    float4 color     : COLOR0;
    float2 uv        : TEXCOORD0;
};

VertexOutput vs_main(VertexInput IN)
{
    VertexOutput OUT;

    int index = (int)IN.uv2.x;

    float4x4 WVP = mul(instanceWorldArray[index], ViewProjection);
    OUT.screenPos = mul(float4(IN.position, 1.0f), WVP);
    OUT.uv = IN.uv;

    // mul(matrix, vector) multiplies by the transpose relative to the
    // row-vector convention used for positions, so this applies
    // transpose(inverse(world)): the correct transform for normals.
    float3 normal = mul((float3x3)instanceWorldInverseArray[index], IN.normal);

    float3 worldPos = mul(float4(IN.position, 1.0f), instanceWorldArray[index]).xyz;
    float3 lightDir = normalize(lightPos - worldPos);  // surface-to-light direction
    normal = normalize(normal);

    float ndl = saturate(dot(normal, lightDir));
    OUT.color = IN.color * ndl;

    return OUT;
}
(Needs some fixing and testing for sure, but as a draft I think it is clear.)
"There is nothing truly useless, it always serves as a bad example". Arthur A. Schmitt