Plane reprojection in post-process using homography matrix

Let’s take a proxy problem: you’re looking at some 3D scene. Now I replace the scene with its rendering done from your eye’s position, so you don’t see any difference. Then I replace the rendered image with a wall and a projector. What I need is an image that, when projected on the wall, will look exactly as if you were looking at the scene.

It looks like this:

The left camera is the observer, the right one is the projector.

My approach is to render the scene from the observer’s location, then in the post-process sample the rendered image to add the distortion.

I have some proof-of-concept code that kind of works, up to some offsets that I still need to debug, but most of the computation is done in the pixel shader, so it’s not the best solution.

After I did my initial version I read about homography matrices, and they seem to be the right tool for my needs. If I understand correctly, I should be able to compute the homography matrix and then only multiply my screen-space UVs by it to get the reprojected UVs.
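
If that works, applying it per pixel should be trivial. A minimal sketch of what I mean (plain C++ standing in for the pixel shader; the perspective divide is the part that a plain 2D multiply would miss):

```cpp
#include <Eigen/Dense>

// Apply a 3x3 homography H to a screen-space UV in [0,1].
// The result is homogeneous, so it needs a divide by the last
// component; a plain 2D matrix multiply is not enough.
Eigen::Vector2d ReprojectUV(const Eigen::Matrix3d& H, const Eigen::Vector2d& uv)
{
    Eigen::Vector3d p = H * Eigen::Vector3d(uv.x(), uv.y(), 1.0);
    return Eigen::Vector2d(p.x() / p.z(), p.y() / p.z());
}
```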

Unfortunately, most of the info about homography I could find relates to the case where you have 2 pictures of some object, pick 4 corresponding point pairs by hand and compute a matrix from those, but I don’t have such points. Instead, I know the exact transforms of both views, their vertical and horizontal FOV and the plane, so I think that’s all I need.
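
From what I’ve read, the homography induced by a plane can be built directly from those quantities, with no point picking at all: H = K_proj * (R + t * n^T / d) * K_obs^-1. A rough Eigen sketch of what I mean (my own names; the axis conventions would still have to be matched to the engine):

```cpp
#include <Eigen/Dense>
#include <cmath>

// Intrinsics that map camera-space points to [0,1] UVs, built from the FOVs.
// Depending on the engine's conventions (UV origin, Y up vs. Y down) some
// signs here may need flipping.
Eigen::Matrix3d IntrinsicsFromFov(double hFovRad, double vFovRad)
{
    Eigen::Matrix3d K = Eigen::Matrix3d::Identity();
    K(0, 0) = 0.5 / std::tan(hFovRad * 0.5);
    K(1, 1) = 0.5 / std::tan(vFovRad * 0.5);
    K(0, 2) = 0.5;
    K(1, 2) = 0.5;
    return K;
}

// Homography induced by a plane, mapping observer UVs to projector UVs.
// R, t: transform from the observer's camera frame to the projector's
//       camera frame (xProj = R * xObs + t).
// n, d: the wall plane in the observer's camera frame, written as n.x = d
//       (n unit normal, d distance from the observer to the plane).
Eigen::Matrix3d PlaneHomography(const Eigen::Matrix3d& Kobs, const Eigen::Matrix3d& Kproj,
                                const Eigen::Matrix3d& R, const Eigen::Vector3d& t,
                                const Eigen::Vector3d& n, double d)
{
    return Kproj * (R + t * n.transpose() / d) * Kobs.inverse();
}
```

For the actual post-process I would need the mapping from projector UVs back to observer UVs, which should just be the inverse of this matrix, computed once on the CPU and passed to the shader.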

The perfect solution would be some transform like that, mapping my image-space UVs in the [0,1] range to the correct UVs for texture sampling. Has any of you seen a similar solution?

Very interesting problem. That would essentially be projection mapping, which I would love to do inside Unreal.

I usually perform a homography process similar to what you described inside Touch Designer to do projector calibration. As far as I understand, what you are calculating there is the projection matrix necessary to use the second camera as a light source, projecting the render of the first camera onto an arbitrary real-life object, for which you know some corresponding points between the source 2D render and their positions in the output 2D space.

So you are turning a 2D image into another 2D image, but one which accounts for the intrinsic and extrinsic transformations of a camera placed within your level or at your real-life location. You are not generating new UVs, and that new image is not used on the model, but it can be re-projected onto the model as light (which you could fake inside Unreal).

If your use case is projection mapping, why would you need to use the second image as UVs? You are good to go with just the output of the second camera, as long as it corresponds exactly to your projector’s characteristics (that is where homography comes into play).

On the other hand, if what you really want is to use the camera view as a texture, why not use a render target that sees what you want (it might be placed in a different part of the level that you can’t access, with a duplicate of the geometry) and then use that render target as a texture for a plane, with unaltered 0-1 UVs?

That will give the effect of an image projected from somewhere, in the sense that it is flat and gets distorted as you deviate from the original perspective. In a virtual scene, there is no need to actually project it.
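
Roughly what I mean, as a UE4 C++ sketch (all names are placeholders, the material is assumed to have a texture parameter called "CaptureTex", and in practice you would set most of this up in the editor rather than in code):

```cpp
// Sketch: capture the scene from a second viewpoint into a render target
// and feed it to the wall's material as a plain texture.
#include "Components/SceneCaptureComponent2D.h"
#include "Components/StaticMeshComponent.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Materials/MaterialInstanceDynamic.h"

void SetupWallCapture(USceneCaptureComponent2D* Capture,
                      UStaticMeshComponent* WallMesh,
                      UMaterialInterface* WallMaterial,
                      UObject* Outer)
{
    UTextureRenderTarget2D* Target = NewObject<UTextureRenderTarget2D>(Outer);
    Target->InitAutoFormat(1920, 1080);      // capture resolution

    Capture->TextureTarget = Target;         // the second camera renders here

    UMaterialInstanceDynamic* Mid = UMaterialInstanceDynamic::Create(WallMaterial, Outer);
    Mid->SetTextureParameterValue(TEXT("CaptureTex"), Target);
    WallMesh->SetMaterial(0, Mid);           // plane keeps its plain 0-1 UVs
}
```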

@Diego_Visualma: The main problem with your approach is framerate. UE4 doesn’t like rendering to many targets in a single frame, so even in simple scenes my FPS drops a lot, especially with HD render targets :) and I need the best resolution I can get.

So my workaround was to draw the scene once and do the reprojection in post-process.

The second problem is that we want stereo rendering, so the idea of additionally rendering a textured wall from the projector’s POV doubles my render-target count. With a mapping from one space to the other I could use some shader tricks to do stereo in one pass (side-by-side).

I’ve found out that homography may be the way to go. Today I even added the Eigen lib to my project and I’m computing the homography matrix, but my results seem to be random: for the same input data I get different results each time. Still looking for a solution…
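
For reference, the estimation I’m attempting is roughly the standard DLT, something like the sketch below (my own names; since the algorithm is deterministic, getting different results for the same input usually points to uninitialized data or to picking the wrong singular vector):

```cpp
#include <Eigen/Dense>
#include <vector>

// Direct Linear Transform: estimate H such that dst ~ H * src for each pair.
// Needs at least 4 correspondences.
Eigen::Matrix3d ComputeHomography(const std::vector<Eigen::Vector2d>& src,
                                  const std::vector<Eigen::Vector2d>& dst)
{
    const int n = static_cast<int>(src.size());
    // Zero-initialise A so no entry is ever left undefined.
    Eigen::MatrixXd A = Eigen::MatrixXd::Zero(2 * n, 9);

    for (int i = 0; i < n; ++i)
    {
        const double x = src[i].x(), y = src[i].y();
        const double u = dst[i].x(), v = dst[i].y();
        A.row(2 * i)     << -x, -y, -1,  0,  0,  0, u * x, u * y, u;
        A.row(2 * i + 1) <<  0,  0,  0, -x, -y, -1, v * x, v * y, v;
    }

    // H is the right singular vector for the smallest singular value
    // (Eigen sorts singular values in decreasing order, so it's the last column).
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeFullV);
    Eigen::Matrix<double, 9, 1> h = svd.matrixV().col(8);

    Eigen::Matrix3d H;
    H << h(0), h(1), h(2),
         h(3), h(4), h(5),
         h(6), h(7), h(8);
    return H / H(2, 2);   // fix the arbitrary scale
}
```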

Ok, take a look at this module in Touch Designer:

http://www.derivative.ca/wiki088/index.php?title=CamSchnappr

If you download the program and look inside that module, you will see an example of a working implementation. It will be in Python and their visual scripting language, but it might give you some ideas.

Also, they are not using shaders, but feeding a projection matrix to the camera (in TD, cameras can accept custom projection matrices). If that is not possible in Unreal, maybe you could extend the camera class in C++ so that you can change its projection matrix.
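
For scene captures specifically, I think newer UE4 versions already expose a custom projection matrix; something along these lines (untested, and the exact member names are an assumption on my part):

```cpp
// Sketch: override the projection matrix of a scene capture so it matches
// the projector's real characteristics. The player camera would need a
// different route.
#include "Components/SceneCaptureComponent2D.h"

void ApplyProjectorProjection(USceneCaptureComponent2D* Capture, const FMatrix& Projection)
{
    Capture->bUseCustomProjectionMatrix = true;
    Capture->CustomProjectionMatrix = Projection;
}
```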