How to apply a UV distortion map to the camera?

Hi all,

Here is what I’m trying to do:

  • I have a UV distortion map with 32-bit floating-point values stored in it
  • I would like to apply it to the engine’s final rendering to create real-time non-linear lens distortions
  • the UV map changes every frame

I have currently created a PostProcess Material in the UE4 Editor with the following connectivity:
Param_DistortionTexture (a TextureSample converted via right-click to a TextureSampleParameter2D) -> Mask(R,G) -> SceneTexture:SceneColor (UV in, Color out) -> MyMaterial (Emissive in).

The TextureSampleParameter2D contains an HDR texture (associated with a Linear Color sampler). I haven’t found any other format that can take floating-point values; please advise if there is a better format to use. When I drag and drop a distortion texture (DDS format, 32 bits per pixel) onto the Texture Sample Parameter in the UE4 Editor, it does seem to apply the distortion in the viewer, even though I notice some serious aliasing.
So my idea was to modify that Param_DistortionTexture parameter dynamically, updating it in the C++ source code with other UV float textures.

Here is how I did that.

In the C++ source code, I have MyPlayerController.h which has in the class declaration:

UPROPERTY(EditDefaultsOnly, Category = Materials)
UMaterialInterface *mMaterialInterface;

UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Textures)
UTexture2D *mTexture;

In the constructor of MyPlayerController.cpp:

static ConstructorHelpers::FObjectFinder<UMaterial> lPostProcessMaterial(TEXT("Material'/Game/Materials/DistortionPostProcess.DistortionPostProcess'"));
mMaterialInterface = lPostProcessMaterial.Object;

In the CalcCamera() method of MyPlayerController.cpp (I also modify the position/orientation of the camera here, so I thought it was the best place to update the distortion texture – let me know if not):

// theWidth = theHeight = 32 here
// I have tried many different formats here, including PF_FLOATRGBA... no change
mTexture = UTexture2D::CreateTransient(theWidth, theHeight, PF_A32B32G32R32F);

if (mTexture)
{
    FTexture2DMipMap& lMip = mTexture->PlatformData->Mips[0];
    float* lData = static_cast<float*>(lMip.BulkData.Lock(LOCK_READ_WRITE));
    if (lData)
    {
        // lUVmap is declared as float* lUVmap = new float[theWidth * theHeight * 4];
        // and filled with floating-point values between 0.0 and 1.0
        FMemory::Memcpy(lData, lUVmap, theHeight * theWidth * 4 * sizeof(float));
    }
    lMip.BulkData.Unlock();
    mTexture->UpdateResource();

    UMaterialInstanceDynamic* lMaterialInstance = UMaterialInstanceDynamic::Create(mMaterialInterface, this);
    lMaterialInstance->SetTextureParameterValue(FName("Param_DistortionTexture"), mTexture);
}
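One thing the snippet above never does is attach the dynamic material instance to anything: it is created and its parameter is set, but it is then dropped, so the post-process chain keeps using whatever material was assigned in the editor. A sketch of the missing step (engine-API code, not compilable in isolation; mCameraComponent is an assumed UCameraComponent member, and FPostProcessSettings::AddBlendable exists in later engine versions – in early builds the equivalent is adding the instance to the camera’s post-process blendables). The instance should also be created once and cached rather than recreated every frame:

```
// Create the dynamic instance ONCE (e.g. in BeginPlay), not every frame:
mMaterialInstance = UMaterialInstanceDynamic::Create(mMaterialInterface, this);

// Attach it to the camera's post-process chain
// (assumption: mCameraComponent is a UCameraComponent you own):
mCameraComponent->PostProcessSettings.AddBlendable(mMaterialInstance, 1.0f);

// Then, every frame, only update the texture parameter:
mMaterialInstance->SetTextureParameterValue(FName("Param_DistortionTexture"), mTexture);
```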

Result: it does… nothing! The texture does not seem to be updated. Every instruction appears to execute without returning an error, but I can see that the result is not changing.
I even tried to fill the UV-map buffer with 0.0 values just to see if it changes the result, but it doesn’t.
I wonder if the default HDR texture (DDS file) I assigned actually overrides any texture change…

So what am I missing?
Is this the best way to achieve the lens distortion?
Any idea why the texture update does not work?

Apart from that, is there a description somewhere of exactly what is expected as the UV set to be linked to the UV input of SceneTexture:SceneColor?

Best Regards,

Phoenix.

Hi Phoenix,

We sent this info along to someone on our rendering team. Below are their comments, along with a question.

Is there some distortion visible without any code changes?
All of that can be done without any code changes, and if that works (does it?), PostProcessInput0 should be used as the input. There is an example of this in the samples that ship with the engine / can be downloaded.
You can add code to manipulate the material properties – but that is not related to distortion/textures.

See if the above helps you out or not and we’ll go from there.

Thanks,

Hi ,

Thank you for your fast answer. This is highly appreciated.

I don’t understand what your developer means by “Is there some distortion visible without any code changes”. As I said, if I use a float16 texture map (HDR format) as input, then it does distort the result, but it creates severe aliasing (and there is no code change), which I suspect comes from the 16-bit limitation. But the real problem is “how to update that texture anyway?”, assuming the severe aliasing can be resolved afterwards…

With regards to the examples, I don’t know if you are referring to example 1.16 of the Content Examples, but if that’s the case, that example is of no help because it does not use 32-bit floats, so it is a different problem. I didn’t even try to modify that material because it simply does not produce what I’m trying to achieve.

“You can add code to manipulate the material properties – that is not related to distortion/textures.”
Sure. But what does this have to do with my problem? In any case, I have not found any post-processing material that helps with my issue (I also went through the whole forum and AnswerHub…).

So, is there an example somewhere that uses an array of floats (and not an 8-bit texture) as UVs for distorting the final result (= the whole final rendered picture), and thus simulating lens distortions?
If there is one I can download or try, then I should be able to modify that array in real time in my source code.
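For reference, the core operation being asked for can be sketched on the CPU (the post-process material does the same thing per pixel on the GPU): for each output pixel, read (u, v) from the distortion map and sample the source image there, with bilinear filtering. This is an illustrative standalone sketch, not engine code; the single-channel Image type is invented for brevity.

```cpp
#include <algorithm>
#include <vector>

// CPU sketch of what the post-process material does per output pixel:
// look up (u,v) in the distortion map, then bilinearly sample the source
// image at that position. Single channel for brevity.
struct Image
{
    int w, h;
    std::vector<float> pix;

    float at(int x, int y) const
    {
        x = std::max(0, std::min(x, w - 1)); // clamp addressing at borders
        y = std::max(0, std::min(y, h - 1));
        return pix[y * w + x];
    }

    // Bilinear sample at normalized coordinates (u,v) in [0,1].
    float sample(float u, float v) const
    {
        float fx = u * (w - 1), fy = v * (h - 1);
        int   x0 = (int)fx,     y0 = (int)fy;
        float tx = fx - x0,     ty = fy - y0;
        float top = at(x0, y0)     * (1 - tx) + at(x0 + 1, y0)     * tx;
        float bot = at(x0, y0 + 1) * (1 - tx) + at(x0 + 1, y0 + 1) * tx;
        return top * (1 - ty) + bot * ty;
    }
};

// uvMap holds two floats (u, v) per pixel, like the R and G channels
// masked off the distortion texture in the material.
Image distort(const Image& src, const std::vector<float>& uvMap)
{
    Image out{src.w, src.h, std::vector<float>(src.w * src.h)};
    for (int y = 0; y < src.h; ++y)
        for (int x = 0; x < src.w; ++x)
        {
            float u = uvMap[(y * src.w + x) * 2 + 0];
            float v = uvMap[(y * src.w + x) * 2 + 1];
            out.pix[y * src.w + x] = src.sample(u, v);
        }
    return out;
}
```

An identity UV map (u = x/(w-1), v = y/(h-1)) reproduces the source image exactly; any other map moves pixels around, which is all the lens-distortion pass needs to do.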

There were also many other questions in my email with regards to the UV format that takes “SceneTexture”: Any example or comment about that ?

Best Regards,

Phoenix.

PostProcess material lookups are not yet filtered. In this code piece

// if we are in a postprocessing pass
if (View.RenderingCompositePassContext)
{
    PostprocessParameter.Set(ShaderRHI, *View.RenderingCompositePassContext, TStaticSamplerState<>::GetRHI());
}

you would need to change TStaticSamplerState<> to a filtered sampler state such as TStaticSamplerState<SF_Bilinear>. We intend to expose this functionality a bit later.

With regards to the examples, I don’t know if you are referring to the 1.16 of “Content” but if that’s the case,

It’s some hallway with multiple postprocess volumes. It’s one of the later volumes.

It seems you want to do some Oculus Rift-like distortion. That is already in the engine, integrated in lower-level code for efficiency. You might want to look at the stereo rendering there. You will have to change engine code, but it would give you more control (e.g. filtering).

“You can add code to manipulate the material properties – that is not related to distortion/textures.” Sure. But what does this have to do with my problem

This is to isolate exactly where the problem is. You need to break it down more so we can give a better answer.
It might also help if you show screenshots or say what you want to achieve.

then it does distort the result but it creates severe aliasing (and there is no code change) that I suspect coming from the 16bits limitation.

I just want to separate the multiple problems you are having. As said, the filtering should help – if you go that route, make sure the update works as expected (e.g. if you update from the CPU, the byte order can easily destroy the results).
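One concrete way a CPU update can get silently destroyed (an assumption about what may have happened with the PF_FLOATRGBA attempts, not a confirmed diagnosis): PF_FLOATRGBA is 16 bits per channel, so Memcpy’ing 32-bit floats into it writes garbage – the values must be converted to half-precision first (the engine ships FFloat16 for this). A simplified standalone version of that conversion, ignoring NaN/Inf and denormals since UV values in [0,1] never hit those cases:

```cpp
#include <cstdint>
#include <cstring>

// Minimal float -> IEEE 754 half conversion (and back). No NaN/Inf or
// denormal handling: UV values in [0,1] never need those cases.
uint16_t floatToHalf(float f)
{
    uint32_t bits;
    std::memcpy(&bits, &f, 4);
    uint32_t sign = (bits >> 16) & 0x8000;
    int32_t  exp  = (int32_t)((bits >> 23) & 0xFF) - 127 + 15; // rebias exponent
    uint32_t mant = (bits >> 13) & 0x3FF;                      // truncate mantissa
    if (exp <= 0)  return (uint16_t)sign;                      // flush tiny values to zero
    if (exp >= 31) return (uint16_t)(sign | 0x7C00);           // overflow -> infinity
    return (uint16_t)(sign | ((uint32_t)exp << 10) | mant);
}

float halfToFloat(uint16_t h)
{
    uint32_t sign = (uint32_t)(h & 0x8000) << 16;
    uint32_t exp  = (h >> 10) & 0x1F;
    uint32_t mant = h & 0x3FF;
    uint32_t bits = (exp == 0) ? sign
                  : sign | ((exp - 15 + 127) << 23) | (mant << 13);
    float f;
    std::memcpy(&f, &bits, 4);
    return f;
}
```

Filling a PF_FLOATRGBA mip by converting each float through floatToHalf (or FFloat16) instead of a raw Memcpy is the kind of per-format care the byte-order warning is about.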

how to update that texture anyway ?

OK, let’s focus on that.

32bits floats, so it is a different problem

I don’t think you want 32-bit floats – distortion even down to sub-pixel level can be expressed in 16 bits. Maybe even 8 bits is enough (if either the scale is small or you don’t care about sub-pixel precision).

I do not find any post-processing material that helps with regards to my issue (I also went through the whole forum and answerhub…).

Sorry but we cannot provide samples for all possible modifications.

There were also many other questions in my email with regards to the UV format that takes “SceneTexture”: Any example or comment about that ?

Which ones are left? Can you start a new thread/question? It makes it much easier to give a focused answer.

Hi Martin,

Comments below.

then it does distort the result but it creates severe aliasing (and there is no code change) that I suspect coming from the 16-bit limitation.
I just want to separate the multiple problems you are having. As said, the filtering should help – if you go that route, make sure the update works as expected (e.g. if you update from the CPU, the byte order can easily destroy the results).

OK, but could you please be a little more specific and provide me with a complete C++ example of that filtering?
You mentioned “// if we are in a postprocessing pass if (View.RenderingCompositePassContext) { PostprocessParameter.Set(ShaderRHI, *View.RenderingCompositePassContext, TStaticSamplerState<>::GetRHI()); }
You would need to change TStaticSamplerState<> to a filtered sampler state.”, but that is quite vague…

how to update that texture anyway?
OK, let’s focus on that.

As I said, I finally sorted out that part myself. I found a way to create G32R32F textures in C++ and inject them into the Editor to get the precision I need for the shader.
A couple of function calls were missing from the initial source code I posted – “override_texture()” is one of them, for example.

32-bit floats, so it is a different problem
I don’t think you want 32-bit floats – distortion even down to sub-pixel level can be expressed in 16 bits. Maybe even 8 bits is enough (if either the scale is small or you don’t care about sub-pixel precision).

That part I just don’t understand – how can that be technically or scientifically possible?

I’m not talking here about getting fake distortions like example 1.16, but pixel-accurate distortions, simulating the exact behavior of a real lens which has been thoroughly calibrated using a physically correct model. Every change of focus in a vari-focal lens model creates severe non-linear distortions that cannot be correctly simulated without floating-point values.

To my knowledge, any conversion from 32-bit floats to 16-bit or 8-bit integers (are you seriously talking about converting a continuum of float values between 0.0 and 1.0 to at most 256 integer values?) will severely damage the result and cannot be compensated for with a filter.

But I would be happy to be wrong. So could you please provide me with an example of what you are saying, i.e. a down-conversion from 32-bit floats to 16 bits (or 8 bits), plus a filter, that would eliminate artifacts and be as accurate as using 32-bit float textures?

In any case, shaders (as well as HDR maps and DDS textures) have supported 32-bit floats for a while, so it still sounds a bit weird to me not to have that feature in the editor. That’s also why I’m bypassing that limitation using C++.

I do not find any post-processing material that helps with regards to my issue (I also went through the whole forum and answerhub…).
Sorry but we cannot provide samples for all possible modifications.

Well, you are the one who said “It’s some hallway with multiple postprocess volumes. It’s one of the later volumes.” So I went to look (again) at the Content Examples and did not find anything useful, hence my remark…

Which ones are left? Can you start a new thread/question? - it makes it much easier to give a focused answer

I have already created another thread, and I will try to keep doing so.
But here, at the beginning of this thread, there were (for example):

  • “Is this the best way to achieve the lens distortion?” I suspect not: I would like to just apply my own pixel shader, but the documentation is quite laconic about adding custom shaders to the engine. I can see that many people in the forums have the same problem… But I will create another specific thread about custom shaders later.
  • “Any idea why the texture update does not work?” Solved that myself…
  • “Is there somewhere a description of what exactly is expected as the UV set to be linked to the UV entry of SceneTexture:SceneColor?” Does not matter anymore, as I figured out how to use a float texture anyway.

Regards,

Phoenix.

I think you just change this line:

PostprocessParameter.Set(ShaderRHI, *View.RenderingCompositePassContext, TStaticSamplerState<>::GetRHI());

to this line:

PostprocessParameter.Set(ShaderRHI, *View.RenderingCompositePassContext, TStaticSamplerState<SF_Bilinear>::GetRHI());

to get the precision I need for the shader.

I was assuming the precision problems you talked about before were artifacts from not getting filtered results. With the mentioned line you should be able to verify that.

That part I just don’t understand – how can that be technically or scientifically possible? I’m not talking here about getting fake distortions like example 1.16, but pixel-accurate distortions, simulating the exact behavior of a real lens which has been thoroughly calibrated using a physically correct model.

Maybe we misunderstand each other. If you need to distort a pixel from the left side of the screen to the right side with a precision of, let’s say, 1/16th of a pixel, you need width × 16 values; assuming the width is 2K, that is about 32,000 values, which is still in 16-bit integer range. Usually you don’t need to go all the way from left to right, so you can reduce that (store an offset, not the final position). I guess 8 bits (256 values) might only be acceptable for very low quality, but 10 or 16 bits should work. You can use a signed format or offset the values inside. Because the result is symmetric you could save another bit, but that seems unnecessary. If you use a low-resolution texture you actually get much better precision (on modern hardware – watch out for older ones) because the samples are interpolated at higher precision than the texture format.
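The arithmetic above can be checked with a standalone round-trip (a sketch, not engine code; the 2K width and 1/16-pixel budget are taken from the paragraph, and the [-width/2, +width/2] offset range is an assumption about how the distortion map would be stored):

```cpp
#include <cmath>
#include <cstdint>

// Round-trip a pixel offset through a 16-bit unsigned texture channel and
// return the absolute error in pixels. Offsets spanning [-width/2, +width/2]
// are mapped into [0,1] before quantization, as one would store them in a
// G16R16-style distortion map.
float roundTripErrorPx(float offsetPx, int width)
{
    float norm = (offsetPx + width / 2.0f) / width;      // map to [0,1]
    uint16_t q = (uint16_t)std::lround(norm * 65535.0f); // quantize to 16 bits
    float back = (q / 65535.0f) * width - width / 2.0f;  // dequantize
    return std::fabs(back - offsetPx);
}
```

With width = 2048 the quantization step is 2048/65535 ≈ 1/32 of a pixel, so the worst-case rounding error is about 1/64 of a pixel – comfortably inside the 1/16-pixel budget mentioned above.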

In any case, shaders (as well as HDR maps and DDS textures) have supported 32-bit floats for a while, so it still sounds a bit weird to me not to have that feature in the editor. That’s also why I’m bypassing that limitation using C++.

“Is this the best way to achieve the lens distortion ?”

As this seems to be the question that is left: no, there are many ways; they depend on your needs, and you might have to try multiple methods to find the right one.

I noticed the first part of the answer wasn’t showing the correct code line to change. The line should become:

PostprocessParameter.Set(ShaderRHI, *View.RenderingCompositePassContext, TStaticSamplerState<SF_Bilinear>::GetRHI());

In the meantime we added a “Filtered” checkbox to the SceneTexture material expression/node, which solves this.
If you still have the old code, you can work around the problem by changing this line in MaterialTemplate.usf:

case 14: return Texture2DSample(PostprocessInput0, PostprocessInput0Sampler, UV);

to

case 14: return Texture2DSample(PostprocessInput0, BilinearTextureSampler0, UV);

This will make PostprocessInput0 on the SceneTexture node bilinearly filtered.