Hi,
As you all know I fill my spare time with reading the OpenGL spec and the GL extension registry...
And I'm here to tell you that glPolygonOffset and DirectX's SlopeScaleDepthBias are useless features...
(and why they should be booted from SMaterial)
The same also unfortunately applies to Vulkan
why?
first, the bad Irrlicht implementation
The polygon offset in SMaterial does not have separate fields for the gradient and constant offsets, only two integer bit-fields giving us the values {-7,...,0,...,7}.
The specs for DX and OpenGL clearly state that there are separate values for the gradient-dependent offset and the constant offset. Moreover, the GL spec (which I've read more of) specifies that these values are floats, which is particularly important for the gradient offset!
While I do understand that it makes sense to multiply the smallest resolvable difference in a fixed-point depth buffer by integers only, once it's combined with either the gradient offset or a floating-point depth buffer, fractional offsets start to make more sense.
If you don't grasp why a gradient-dependent depth offset is needed, I direct you towards diagrams of the causes and prevention of shadow acne in shadow mapping.
next
Floating Point Depth Buffers
They offer the best depth resolution, especially in reverse-Z mode (near is at 1.0 and far at 0.0 in NDC depth), which helps if you want to draw a planet and a spacecraft cockpit in the same pass without z-fighting.
You could only do better with a logarithmic Z-buffer, or by storing z-depth without the perspective divide (but the divide is built into the hardware, so no luck).
So in reality sooner or later you will use Floating Point Depth Buffers.
The issue is obviously that the smallest resolvable offset differs depending on the "scale" (exponent) of a depth value. However, the OpenGL and Vulkan specs both shoot themselves in the foot here by requiring that the offset be constant across the polygon being rasterized, so it's basically dependent on the maximum exponent of the depth of any clipped vertex of the polygon.
This causes numerous issues:
A) Huge fluctuations in the offset for very large polygons
B) Different offsets for two triangles sharing an edge, even if the gradient offsets are equal to 0
C) In Reverse-Z mode, any triangle which is clipped by the near-plane (even if the clip-point is off-screen) will get the maximum depth offset value
The GL spec shoots itself in the foot again by storing NDC depth in the [-1.f, 1.f] range, whereas in DX it's the [0.f, 1.f] range. In non-reverse-Z mode this approximate, constant-offset-per-polygon debauchery may work under the assumption that no triangle is too big, or at least that the really large triangles are only used far away.
As if that weren't enough, OpenGL plunges deeper by requiring that the offset be calculated and applied before the glDepthRange re-scaling that lets us map far to 0.f instead of -1.f. That means the smallest constant offset is actually given to a triangle whose maximum-depth vertex ends up at final depth 0.5f, which, given the non-linear nature of the Z-buffer, happens to be not that far from the camera (i.e. a few metres away, for a camera with a few kilometres of view distance).
nail in the coffin
Early-Z
Polygon Offset kills Early-Z and Hi-Z, possibly for the remainder of the rendering to that framebuffer, so if you want to draw your decals before particles.... you're fucked!
This is why it's my recommendation to enable the conservative-depth "layout" qualifier extension (GL_ARB_conservative_depth), write a shader declaring which way you'll modify the depth, and calculate the offsets per-pixel.
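A minimal sketch of what that shader could look like, assuming regular (non-reverse) Z where larger depth is further away; the uniform names and offset scaling are my own inventions, only the layout qualifier and built-ins are from the extension:

```glsl
#version 420 core
// GL_ARB_conservative_depth: promise the driver we only ever INCREASE depth,
// so Early-Z / Hi-Z can keep rejecting fragments that are already occluded.
layout(depth_greater) out float gl_FragDepth;

// Hypothetical offset controls (names are assumptions, not a real API):
uniform float u_slopeFactor;    // multiplies the per-pixel depth gradient
uniform float u_constantOffset; // constant bias, pre-scaled on the CPU

layout(location = 0) out vec4 fragColor;

void main()
{
    // Per-pixel depth slope, the "m" of glPolygonOffset, but evaluated at
    // this fragment instead of once per polygon:
    float m = max(abs(dFdx(gl_FragCoord.z)), abs(dFdy(gl_FragCoord.z)));
    gl_FragDepth = gl_FragCoord.z + m * u_slopeFactor + u_constantOffset;
    fragColor = vec4(1.0);
}
```

For reverse-Z you'd flip to `layout(depth_less)` and subtract the offsets instead, and you get to pick the constant offset's scale yourself rather than inherit the per-polygon exponent mess described above.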