Capture screen at runtime using a custom projection

Hi,
I would like to capture the screen content at runtime, multiple times, with slightly offset frustums.
I was able to do the “pure capturing part” by attaching a USceneCaptureComponent2D to my player character, having its UTextureRenderTarget updated whenever I hit my “capture button”, and then reading back the texture content and saving it as an image.
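For reference, that part currently looks roughly like the following sketch (simplified; CaptureToDisk and FilePath are just illustrative names, and the render target is assumed to already be assigned to the component):

```cpp
// Simplified sketch of the capture-and-save step.
// CaptureComponent is a USceneCaptureComponent2D attached to the player character,
// with a UTextureRenderTarget2D already assigned to TextureTarget and
// bCaptureEveryFrame disabled so we only capture on demand.
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "TextureResource.h"
#include "ImageUtils.h"
#include "Misc/FileHelper.h"

void CaptureToDisk(USceneCaptureComponent2D* CaptureComponent, const FString& FilePath)
{
	UTextureRenderTarget2D* RenderTarget = CaptureComponent->TextureTarget;

	// Render the scene into the render target right now.
	CaptureComponent->CaptureScene();

	// Read the pixels back to the CPU.
	FTextureRenderTargetResource* Resource = RenderTarget->GameThread_GetRenderTargetResource();
	TArray<FColor> Pixels;
	Resource->ReadPixels(Pixels);

	// Compress to PNG and write to disk.
	TArray<uint8> PNGData;
	FImageUtils::CompressImageArray(RenderTarget->SizeX, RenderTarget->SizeY, Pixels, PNGData);
	FFileHelper::SaveArrayToFile(PNGData, *FilePath);
}
```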
When it comes to modifying the frustum projection, however, I’m completely stuck.
So my questions are:

  1. Is there a way to modify the projection matrix of this type of component without modifying the engine’s source?
  2. If not, is there another way to capture the screen with a custom projection matrix?
  3. If there is no other way, which part of the engine should be modified to achieve this?

Thanks

  1. We don’t have that exposed, and you would need to make lower-level engine code changes.
  2. (and 3.) FViewMatrices stores most of the data, and it is computed in FSceneView::FSceneView(const FSceneViewInitOptions& InitOptions). That is where you would adjust the computations; see the sketch below.
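To give a rough idea of what such an adjustment could look like, here is an untested sketch of an off-center projection for rendering one tile of a larger grid. TileX, TileY and NumTiles are placeholders you would have to supply yourself, and the sign conventions may need adjusting:

```cpp
// Untested helper sketch: given a symmetric projection matrix, return the
// off-center matrix that renders tile (TileX, TileY) of a NumTiles x NumTiles grid.
// The off-center terms live in M[2][0] / M[2][1], the same entries the engine
// offsets for its TemporalAA projection jitter.
static FMatrix MakeTiledProjectionMatrix(const FMatrix& InProjection,
                                         int32 TileX, int32 TileY, int32 NumTiles)
{
	const float N = (float)NumTiles;
	FMatrix Out = InProjection;

	// Narrow the frustum to a single tile.
	Out.M[0][0] *= N;
	Out.M[1][1] *= N;

	// Shift the narrowed frustum onto the requested tile. In NDC, tile i of N
	// must map onto [-1, 1], which requires an offset of (N - 1 - 2*i).
	Out.M[2][0] = N * Out.M[2][0] + (N - 1.0f - 2.0f * (float)TileX);
	Out.M[2][1] = N * Out.M[2][1] - (N - 1.0f - 2.0f * (float)TileY);
	// (The Y sign depends on whether tiles are counted from the top or the bottom
	//  of the final image.)

	return Out;
}
```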

If you want to do anti-aliasing with this method: anti-aliasing requires sub-pixel shifts; we already do those for TemporalAA, and you can reuse that code (look for TemporalJitterPixelsX).
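The conversion from a shift in pixels to a shift in the projection matrix is roughly as follows (sketch only; ViewRectWidth and ViewRectHeight stand in for the rendered view size):

```cpp
// Sketch: turn a sub-pixel shift (in pixels) into a projection-space offset,
// the same kind of offset the engine derives from TemporalJitterPixelsX/Y.
// ViewRectWidth / ViewRectHeight are placeholders for the rendered view size.
const float JitterPixelsX = 0.25f;   // example sub-pixel shift
const float JitterPixelsY = -0.25f;

const float ProjOffsetX =  JitterPixelsX * 2.0f / (float)ViewRectWidth;
const float ProjOffsetY = -JitterPixelsY * 2.0f / (float)ViewRectHeight; // pixel Y goes down, NDC Y goes up

// Applied to the same matrix entries as in the sketch above:
ProjectionMatrix.M[2][0] += ProjOffsetX;
ProjectionMatrix.M[2][1] += ProjOffsetY;
```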

In case you want to do Depth of Field with this method, it might be a bit trickier. I suggest using CircleDOF as a reference, as it is the most physically correct one.

Thank you for your quick and detailed answer.
Actually, what I am trying to do is a high-resolution capture of the scene. I know there is already such a feature in the Unreal editor, but since we are targeting rather low-spec PCs, we would like to save VRAM and therefore only do “native resolution” captures and reconstruct a high-resolution image from them, using some sort of stitching.
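The stitching itself should be simple, something along these lines (sketch; Tiles, TileWidth, TileHeight and NumTiles are placeholder names, with the tiles stored left-to-right, top-to-bottom):

```cpp
// Sketch: stitch NumTiles x NumTiles native-resolution captures into one large image.
// Tiles is assumed to hold one pixel array per capture, in row-major tile order.
TArray<FColor> StitchTiles(const TArray<TArray<FColor>>& Tiles,
                           int32 TileWidth, int32 TileHeight, int32 NumTiles)
{
	const int32 FinalWidth = NumTiles * TileWidth;
	TArray<FColor> Stitched;
	Stitched.SetNumUninitialized(FinalWidth * NumTiles * TileHeight);

	for (int32 TileY = 0; TileY < NumTiles; ++TileY)
	{
		for (int32 TileX = 0; TileX < NumTiles; ++TileX)
		{
			const TArray<FColor>& Tile = Tiles[TileY * NumTiles + TileX];
			for (int32 Row = 0; Row < TileHeight; ++Row)
			{
				// Copy one row of the tile into its place in the final image.
				const int32 DstIndex = (TileY * TileHeight + Row) * FinalWidth + TileX * TileWidth;
				FMemory::Memcpy(&Stitched[DstIndex], &Tile[Row * TileWidth],
				                TileWidth * sizeof(FColor));
			}
		}
	}
	return Stitched;
}
```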
May I ask if you know of another way of doing this (maybe without changing the projection matrices)? We would really like to avoid modifying the Unreal sources, if possible.

It seems the newest NVIDIA cards also have such a feature now, but I assume you want it to work on other cards as well. We never implemented this because the existing methods already allowed large enough resolutions, and doing multiple renderings requires the engine to be very deterministic, which can be hard (e.g. motion blur rendering also advances buffers). With some features disabled it might be less of an issue. I think you would have to change the UE4 sources.

Since there seems to be no way around modifying the source (and there are also the determinism issues you mentioned), we are now considering simply doing the large capture at the real internal resolution, like in the editor.
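I assume we could trigger that at runtime by issuing the same console command the editor feature uses, roughly like this (sketch; whether the command is available in a packaged build may depend on the configuration):

```cpp
// Sketch: request a high-resolution screenshot from game code by issuing the
// console command behind the editor feature (here with a 6x resolution multiplier).
// Assumes this runs inside an actor or component, so GetWorld() is available.
if (APlayerController* PC = GetWorld()->GetFirstPlayerController())
{
	PC->ConsoleCommand(TEXT("HighResShot 6"));
}
```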
Apparently, this may work even if the GPU does not have enough VRAM, since it will then fall back to system RAM… or am I mistaken?
With high resolution multipliers (~6-7 * FHD), the graphics driver still crashes on my machine, though (even with plenty of system RAM). Is there any way to help prevent that?
Thanks

“or am I mistaken?”

I think in theory this is possible, but GPUs try to be fast and would rather run out of memory than become horribly slow. They do page out textures to system memory, so you might get some savings (and stalls), but paging out render targets would hit a real-time application much harder.
We usually use a high-end GPU (fast, with lots of memory) and a multiplier as high as still works (experiment - it depends on the scene, the base resolution and the amount of memory). The driver might reset if the frame takes too long - I think you can prevent that with some driver registry hacks, but I don't recommend it. Using the high resolution screenshot mask helps to focus on the part of the screen you care about.

Ok, I will do some testing to see how well it performs on our different PC builds. Thank you for your help.