Thanks, I think I will go with that. However, this 16-bit blending is really puzzling me. Here's the code in question; my description of what's going on is in the // comments.
Code:
// 1-Bit Alpha Blending
inline u16 PixelBlend16 ( const u16 c2, const u16 c1 )
{
    // c1 is the "source" pixel that we're "blending" with c2, the destination.
    u16 c = c1 & 0x8000; // Extract the alpha bit in the source. It's either 0 (fully transparent) or 1 (fully opaque).
    c >>= 15;    // Move it down to the low bit.
    c += 0x7fff; // c is now either 0x8000 (if the source has its alpha bit set) or 0x7fff (if it doesn't).
    c &= c2;     // So that's either 0x8000 & c2 (ignore the destination colour) or 0x7fff & c2 (use all of the destination colour).
    c |= c1;     // But then we OR in all of the source colour, even if its alpha is 0!
    return c;
}
It's that last c |= c1 that confuses me. If the source pixel has its alpha bit set, i.e. is solid, then we've masked the destination down to just its alpha bit and we're effectively using only the source. Fair enough.
But if the source has alpha 0, i.e. it should be fully transparent, then instead of ignoring it we still OR its colour into the result. That's why boolean alpha keying only works in 16-bit if the whole source colour has been set to 0, not just the alpha bit.
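For concreteness, here's a throwaway standalone sketch (my own made-up pixel values, assuming the usual A1R5G5B5 layout that the 0x8000 mask implies) tracing the three cases:
Code:
#include <cstdio>

typedef unsigned short u16;

inline u16 PixelBlend16 ( const u16 c2, const u16 c1 )
{
    u16 c = c1 & 0x8000;
    c >>= 15;
    c += 0x7fff;
    c &= c2;
    c |= c1;
    return c;
}

int main ()
{
    const u16 opaqueRed   = 0xFC00; // alpha 1, R=31, G=0,  B=0
    const u16 opaqueGreen = 0x83E0; // alpha 1, R=0,  G=31, B=0
    const u16 clearBlack  = 0x0000; // alpha 0, colour all zero
    const u16 clearGreen  = 0x03E0; // alpha 0, but colour bits still set

    // Opaque source: the destination is masked down to its alpha bit, so the source wins.
    printf ( "%04x\n", (unsigned) PixelBlend16 ( opaqueRed, opaqueGreen ) ); // 83e0

    // Transparent source whose colour is all zero: the destination colour shows through
    // (although its alpha bit has been stripped by the 0x7fff mask).
    printf ( "%04x\n", (unsigned) PixelBlend16 ( opaqueRed, clearBlack ) );  // 7c00

    // Transparent source with non-zero colour: the final "c |= c1" ORs the green bits
    // into the red destination, giving 7fe0 instead of leaving the destination alone.
    printf ( "%04x\n", (unsigned) PixelBlend16 ( opaqueRed, clearGreen ) );  // 7fe0

    return 0;
}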
Pop quiz: what's wrong with this, logically? (I haven't looked into the efficiency of the compiled instructions.)
Code:
inline u16 PixelBlend16 ( const u16 destination, const u16 source )
{
    if ( ( source & 0x8000 ) == 0x8000 ) // Parentheses needed: == binds tighter than &.
        return source;      // The source is visible, so use it.
    else
        return destination; // The source is transparent, so use the destination.
}
I'll try it now, of course, but the Flying Spaghetti Monster Herself knows what else that'll break.