Changing RenderTexture size at runtime dramatically reduces performance

If, for example, the RenderTexture of a SceneCapture2D is changed from 512x512 to 2048x2048 at runtime and then set back to 512x512, the performance is far worse than it was at the initial 512x512.

Example: the game starts at 60 FPS (at 512x512).
Change resolution to 2048x2048.
Change back to 512x512.
FPS is now ~3.

Engine version 4.6.1 or higher.
Win 7 64-bit.
Tried a GTX 670, GTX 760 and GTX 980; no difference.


Log (posted on Pastebin): Log file open, 02/24/15 11:50:33; LogInit: Version: 4.7.0-2440994+++depot+UE4-Re…
BTW, the timing in the log is also wrong; see the last line of the log to notice the inconsistency…

Hey simmania,

Would you mind showing me how you set up your Render Texture to change at runtime?

If I can reproduce this in an empty project, I can get a bug report in so the issue is resolved more quickly.

Cheers,

There is nothing special about it.

		// Resize the existing render target in place and recreate its GPU resource
		mCaptureComponent->TextureTarget->InitAutoFormat(newRes, newRes);
		mCaptureComponent->TextureTarget->UpdateResourceImmediate();

Also, once I exceed 1024x1024, Unreal starts to become very slow.
I already solved the 'reallocating issue' in this thread: https://answers.unrealengine.com/questions/163007/standalone-from-editor-shading-entirely-wrong.html

Yet, Unity handles 2048x2048 render targets without problems on the same hardware.

I also tried setting the blend weight for the post processing effects to 0 so that no additional texture memory is allocated for post processing effects. However, this seems to do nothing at all.
Because the blend weight seemed to have no effect, I also set every other per-effect intensity to 0 (= off) to make sure the effects are really NOT used.
That still seems to do nothing. The performance should at least change somewhat, but I do not see any change.
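For reference, this is roughly the code I am using for it (a sketch; bloom is just one example of the per-effect override flags in FPostProcessSettings):

		mCaptureComponent->PostProcessBlendWeight = 0.0f;	// supposedly disables post processing on the capture
		// Force individual effects off as well, e.g. bloom:
		mCaptureComponent->PostProcessSettings.bOverride_BloomIntensity = true;
		mCaptureComponent->PostProcessSettings.BloomIntensity = 0.0f;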

Hey simmania,

I received some important information about changing the resolution of your Render Textures at runtime. The results you are seeing are to be expected, and here is why.

Whenever you create a Render Target, memory is allocated for whatever resolution you choose to set it to. Render targets are then constantly called upon to update in real time.

By setting the resolution to a higher or lower value at runtime, the space that was allocated for the texture no longer matches and is not referenced correctly. The engine then needs to allocate new space in memory for the given resolution and store it. Returning to the original resolution after changing it means you have now broken up that allocation, leaving 'gaps' of empty, unused space.

This is why we suggest not changing the resolution at runtime, but instead creating two Render Targets with the desired resolutions and switching between them at specific distances or times. This allows the Engine to allocate space for both textures up front, so switching between the two causes no performance issues like the one you are experiencing.
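A minimal sketch of that setup (variable names are illustrative; you may need to adapt the object creation to your engine version):

		// Allocate both render targets once, e.g. at startup
		UTextureRenderTarget2D* SmallTarget = NewObject<UTextureRenderTarget2D>(this);
		SmallTarget->InitAutoFormat(512, 512);
		SmallTarget->UpdateResourceImmediate();

		UTextureRenderTarget2D* LargeTarget = NewObject<UTextureRenderTarget2D>(this);
		LargeTarget->InitAutoFormat(2048, 2048);
		LargeTarget->UpdateResourceImmediate();

		// At runtime, swap the pointer instead of resizing a single target
		mCaptureComponent->TextureTarget = bWantHighRes ? LargeTarget : SmallTarget;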

I hope this gave you some insight as to why you are experiencing the issue, but if you would like some more information or additional assistance I will be happy to help.

Cheers,

It is not that I change it every frame; I resize it just once. It makes sense that resources which depend on the resolution (especially post process effects) have to be reallocated, so the resize itself is a slow procedure. But the performance should not remain slow once the reallocating/creating of memory in the background is done…

Furthermore, I have created the same demo in Unity and I can easily use 4096x4096 textures for render targets without problems.

I also do not understand why, with an empty scene in both Unity and Unreal, my output is around 200 FPS in Unity but only ~15-18 FPS in Unreal.
I am aware that post process effects should be turned off by setting the blend value to 0.0, and I have also set all other values of the other post process effects to 0.0. Still, I get incredibly slow performance with the same render technique.

What I am trying to achieve is to render the scene to 3 render targets. Then, with a special shader that covers the entire screen, raytrace into the render targets to find the scene pixel (on the render plane) for each output pixel (on screen).
This is part of a spherical warping technique I am creating. The capture side of it is sketched below.
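The capture setup is essentially this (a sketch; the arrays are my own members):

		for (int32 i = 0; i < 3; ++i)
		{
			mCaptureComponents[i]->TextureTarget = mRenderTargets[i];
			mCaptureComponents[i]->PostProcessBlendWeight = 0.0f;	// keep the captures as cheap as possible
		}
		// A full-screen shader then raytraces into these targets to find, for each
		// screen pixel, the corresponding scene pixel on the render plane.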

Hey simmania,

The main reason Unity handles this differently is the rendering technique. Unity uses forward rendering, whereas Unreal uses deferred rendering. In deferred rendering you do not pass everything you want to see on screen straight to the graphics card. Instead, you use multiple render targets, which means you allocate textures on the graphics card and render into them in real time.

The benefit of this method is that the geometry is drawn only once, into a so-called GBuffer. After that, the data in those textures can be used for everything that follows, e.g. lighting equations, cel shading, normal mapping, screen space ambient occlusion, shadow mapping, etc.

But deferred rendering has some downsides too. It is pretty hard to implement anti-aliasing, and transparent objects have to be rendered separately. Another downside is that it makes heavy use of graphics memory, since you are rendering into multiple render targets which have to be the same size as your game's resolution to achieve good quality.
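To put rough numbers on that: a single 2048x2048 HDR target at 8 bytes per pixel takes 2048 x 2048 x 8 bytes, roughly 33 MB, so three such targets plus the screen-sized GBuffer textures quickly add up to hundreds of megabytes of graphics memory.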

With that in mind, the technique you are using runs into trouble because you have to go through three full rendering passes before being able to apply the final Post Processing effect. Needless to say, this is not optimal and will definitely cause performance issues. There are a few suggestions I can offer to help improve performance, but even with those your overall setup will need to change a bit. One suggestion is to change the 'Capture Source' from HDR to LDR. This moves your Render Target from 8 bytes per pixel to 4, and also includes the post processing in the final image output.

[Screenshot: the 'Capture Source' setting on the Scene Capture component]
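In code this is a one-line change (assuming the ESceneCaptureSource enum values in your engine version):

		// 'Final Color (LDR)' bakes the post processing into the captured image
		mCaptureComponent->CaptureSource = SCS_FinalColorLDR;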

Since I do not know your exact reasons for using three separate render targets in conjunction with the Post Processing, I cannot determine exactly what you would want to trim from the approach to conserve memory and resources. If you want to keep your approach, my last suggestion would be to program a custom pass for the shader itself.

I hope this clarified your questions about the performance.

Regards,

Hi Andrew,

This is not entirely true. Unity can use both forward and deferred rendering. And with Unity 5 they have created an even newer deferred method with built-in support for physically based rendering. Instead of many small shaders to cope with the shader permutation problem, they generate the physically based shader from the input textures/values.

Going from HDR to LDR is not the problem.
I think it has very much to do with all the 'additional render targets' that are created in the background to do the post processing (much as you said). However, I set the post process blend weight to 0 and turned all of the post processes off, as far as I am capable of doing so from the editor.
Yet the performance of the SceneCapture2Ds is not improving, which is very strange.
In Unity you have to add all the post processes yourself or they are simply not there. This works better, because you cannot accidentally render more slowly than necessary due to effects that are not actually needed.
Perhaps I am missing something, but I cannot find how to get more control over which post processes are applied during a render, other than setting the intensity values to 0 (= off according to the docs) and the blend weight to 0 (= off).
Furthermore, it has come to my attention that when moving a SceneCapture2D around (with a large render texture, e.g. 2048x2048), the FPS spikes incredibly due to allocations and releases of more than 1 GB of memory (according to Task Manager). The resolution is not changed while moving the object. It very much feels as if something is wrong.

I would like to mention that setting the blend weight to 0 is not the same as unchecking the 'Enabled' option. The visual result might look the same, but the component itself is still enabled, which means it still needs to be processed and kept in memory.
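In code, actually disabling the capture would look something like this (a sketch using standard component and actor calls, not tested against your setup):

		mCaptureComponent->Deactivate();	// stop the component from updating
		// or hide the owning actor so it is skipped entirely:
		CaptureActor->SetActorHiddenInGame(true);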

Unity 5's framework is different from ours, which is why the rendering approach is different. I do understand what you are saying, but since we do not have the ability to switch our rendering path, the workflow and approaches for certain features will be different.

I am sure that if you wrote some code to systematically enable and disable the SceneCapture actors, along the lines of the sketch below, you could get greater control over the performance hit you are seeing. This might also be something worth mentioning on the Forums, as our community is always willing to help.
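As a rough example of what I mean (the distance threshold and member names are hypothetical):

		// e.g. in the owning actor's Tick(): only capture while the player is close
		const float MaxCaptureDistance = 3000.0f;
		const bool bNear = FVector::Dist(PlayerPawn->GetActorLocation(), GetActorLocation()) < MaxCaptureDistance;
		mCaptureComponent->SetActive(bNear);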

Regards,