I’m trying to understand more about the low-level RHI interface. I’ve been using the RuntimeMeshComponent plugin (GitHub: TriAxis-Games/RuntimeMeshComponent, an Unreal Engine 4 component for rendering runtime-generated content) as a great learning tool, and I’m also looking at TriangleRendering.cpp in the engine code. I think I get the general idea of manual vertex fetch, but I’m confused about the details of the implementation.
In particular, the position buffer SRV gets created like this:
PositionBufferSRV = RHICreateShaderResourceView(PositionBuffer.VertexBufferRHI, sizeof(float), PF_R32_FLOAT);
While the tangents SRV gets created like this:
TangentBufferSRV = RHICreateShaderResourceView(TangentBuffer.VertexBufferRHI, sizeof(FPackedNormal), PF_R8G8B8A8_SNORM);
The thing I’m confused about is that the setup for PositionBufferSRV seems to index each individual float, while the TangentBufferSRV will index whole FPackedNormal structs. I would have expected the position buffer to index FVectors, so the stride would be 3*sizeof(float) and the format would be PF_R32G32B32_FLOAT or something similar. Why do position buffers get indexed a single float at a time instead of a whole vector at a time?