Here is the code that builds the Hi-Z buffer (from UE4's HZB downsampling shader):
const float2 InUV = RenderTargetUVAndScreenPos.xy;
UV[0] = InUV + float2(-0.25f, -0.25f) * InvSize;
UV[1] = InUV + float2( 0.25f, -0.25f) * InvSize;
UV[2] = InUV + float2(-0.25f, 0.25f) * InvSize;
UV[3] = InUV + float2( 0.25f, 0.25f) * InvSize;
Depth.x = Texture.SampleLevel( TextureSampler, UV[0], 0 ).r;
Depth.y = Texture.SampleLevel( TextureSampler, UV[1], 0 ).r;
Depth.z = Texture.SampleLevel( TextureSampler, UV[2], 0 ).r;
Depth.w = Texture.SampleLevel( TextureSampler, UV[3], 0 ).r;
I have two questions about this code.
- UE4 uses the source view size to compute InvSize. But in my case, the input texture is 1024x1024 and the output texture is 512x512. When evaluating the depth for output pixel (10,10), InUV is ((10+0.5)/512, (10+0.5)/512), and I want to sample the source texel centers at (20.5,20.5), (21.5,20.5), (20.5,21.5), (21.5,21.5) in source texel coordinates. Mathematically, that means the offset should be 0.25/512, e.g. (10.5/512 - 0.25/512) * 1024 = 20.5. So I think InvSize should be computed from the destination view size. Am I right?
- Why doesn't UE4 use the Gather* API here? Wouldn't one Gather be faster than four SampleLevel calls?