Screenspace Portals in VR
Hi everyone, I am trying to create a portal system for use in VR, but I have come across issues with the way it renders.
Currently I have a SceneCapture2D component that renders a view to a 1920x1080 render texture. Using screen space UVs, this is then applied in a material to a plane in the world. This method works great for games played on a monitor; however, when played in VR, there is an awful amount of skew on the texture.
The problem is not the texture itself: when it is applied to a BSP without screen space UVs, it works totally fine, with no distortion.
My colleagues and I think the problem is with the way the screen space UVs function grabs the screen position. We have tried changing this function so that we calculate the screen position manually/traditionally; however, we still end up with the same results in VR.
We have tried to find out where we can get the camera's view matrix or projection matrix, but this seems to be near impossible from Blueprints or the material shader. We have also tried using the World to Clip function.
Does anyone have an appropriate solution to stop the skewing of the render target texture?
asked May 28 '15 at 12:14 PM in Rendering
My colleague and I managed to fix this (mostly). Hold on to your hats!
We had to create our own class in C++ and expose it to Blueprints (people who use BP, do not worry, you can find some code to copy and paste below).
In the header, we created a UPROPERTY for a SceneCaptureComponent2D that could be fed into a function. We also created a Calculate FOV function and an Update Portal View Projection Matrix Parameters function (a header sketch appears below, after the walkthrough).
The Calculate FOV function takes a player controller as input. It checks whether an HMD is connected and, if so, grabs the FOV from the VR camera, since the FOV differs between VR and PC.
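Here is a minimal sketch of that function, written against the 4.8-era HMD interface (GEngine->HMDDevice, which later engine versions replaced); the class name APortalHelper is illustrative and matches the header sketch further down:

```cpp
#include "PortalHelper.h"
#include "Engine.h" // GEngine and the IHeadMountedDisplay interface (UE 4.8 era)

float APortalHelper::CalculateFOV(APlayerController* PlayerController) const
{
    // If an HMD is active, use its FOV so the capture matches the VR view.
    if (GEngine->HMDDevice.IsValid() && GEngine->HMDDevice->IsHMDConnected())
    {
        float HFOVDegrees = 0.f;
        float VFOVDegrees = 0.f;
        GEngine->HMDDevice->GetFieldOfView(HFOVDegrees, VFOVDegrees);
        return HFOVDegrees;
    }

    // Otherwise fall back to the normal camera FOV used on a monitor.
    return PlayerController->PlayerCameraManager->GetFOVAngle();
}
```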
The Update Portal View Projection Matrix Parameters function takes a dynamic material instance, a player camera transform, and a player controller. First of all, we get the SceneCaptureComponent2D's capture size (x, y). We then grab the view matrix and view location from the player's camera transform and swap the axes to match Unreal's coordinate space, where Z is up. Lastly, we grab the SceneCaptureComponent2D's FOV. (A sketch of the whole function follows after the math below.)
NOW THE MATH BEGINS! :D
If the viewport is wider than it is tall, the XAxisMultiplier is 1 while the YAxisMultiplier is viewport.x / viewport.y; otherwise, the XAxisMultiplier is viewport.y / viewport.x and the YAxisMultiplier is 1. (For example, with a 1920x1080 target, XAxisMultiplier = 1 and YAxisMultiplier = 1920 / 1080 ≈ 1.78.)
We then create a projection matrix, feeding the FOV value, the axis multipliers, and the near and far clipping plane values (we used 10 and 1000) into the constructor.
Then a view-projection matrix is created by multiplying the view matrix by the projection matrix. We break the VP matrix up into its per-axis components (one float4 each for the X, Y, Z, and W axes) and feed them into our dynamic material instance as vector parameters.
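Putting the walkthrough together, here is a minimal sketch of the whole function under the same assumptions; the material parameter names VPMatX through VPMatW are our own and must match the material's vector parameters:

```cpp
#include "PortalHelper.h"
#include "Engine.h" // FMatrix, FPerspectiveMatrix, UMaterialInstanceDynamic, etc.

void APortalHelper::UpdatePortalVPMatrixParameters(
    UMaterialInstanceDynamic* PortalMID,
    const FTransform& PlayerCameraTransform,
    APlayerController* PlayerController)
{
    if (!PortalMID || !SceneCapture || !SceneCapture->TextureTarget)
    {
        return;
    }

    // 1. Capture size of the render target.
    const float CaptureSizeX = (float)SceneCapture->TextureTarget->SizeX;
    const float CaptureSizeY = (float)SceneCapture->TextureTarget->SizeY;

    // 2. View matrix (world -> camera) from the player's camera transform.
    FMatrix ViewMatrix = PlayerCameraTransform.ToInverseMatrixWithScale();

    // 3. Swap axes (x=z, y=x, z=y) to go from Unreal's Z-up coordinate
    //    space to the view space the projection matrix expects.
    ViewMatrix = ViewMatrix * FMatrix(
        FPlane(0, 0, 1, 0),
        FPlane(1, 0, 0, 0),
        FPlane(0, 1, 0, 0),
        FPlane(0, 0, 0, 1));

    // 4. Keep the capture's FOV in sync with the player (HMD or monitor),
    //    then convert it to a half-angle in radians.
    SceneCapture->FOVAngle = CalculateFOV(PlayerController);
    const float HalfFOVRadians =
        FMath::DegreesToRadians(SceneCapture->FOVAngle) * 0.5f;

    // 5. Aspect-ratio multipliers, as described above.
    float XAxisMultiplier;
    float YAxisMultiplier;
    if (CaptureSizeX > CaptureSizeY)
    {
        XAxisMultiplier = 1.0f;
        YAxisMultiplier = CaptureSizeX / CaptureSizeY;
    }
    else
    {
        XAxisMultiplier = CaptureSizeY / CaptureSizeX;
        YAxisMultiplier = 1.0f;
    }

    // 6. Projection matrix with the near/far planes we chose (10 and 1000).
    const FPerspectiveMatrix ProjectionMatrix(
        HalfFOVRadians, HalfFOVRadians,
        XAxisMultiplier, YAxisMultiplier,
        10.0f, 1000.0f);

    // 7. Combined view-projection matrix (Unreal uses row vectors: v * M).
    const FMatrix VPMatrix = ViewMatrix * ProjectionMatrix;

    // 8. Feed the matrix to the material, one float4 per axis.
    PortalMID->SetVectorParameterValue("VPMatX",
        FLinearColor(VPMatrix.M[0][0], VPMatrix.M[0][1], VPMatrix.M[0][2], VPMatrix.M[0][3]));
    PortalMID->SetVectorParameterValue("VPMatY",
        FLinearColor(VPMatrix.M[1][0], VPMatrix.M[1][1], VPMatrix.M[1][2], VPMatrix.M[1][3]));
    PortalMID->SetVectorParameterValue("VPMatZ",
        FLinearColor(VPMatrix.M[2][0], VPMatrix.M[2][1], VPMatrix.M[2][2], VPMatrix.M[2][3]));
    PortalMID->SetVectorParameterValue("VPMatW",
        FLinearColor(VPMatrix.M[3][0], VPMatrix.M[3][1], VPMatrix.M[3][2], VPMatrix.M[3][3]));
}
```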
HERE IS ALL THE CODE SO FAR (the .h):
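A minimal sketch of the header, matching the function sketches above; the class name, base class, and specifiers are illustrative:

```cpp
// PortalHelper.h -- illustrative names throughout; adapt to your project.
#pragma once

#include "GameFramework/Actor.h"
#include "PortalHelper.generated.h"

UCLASS()
class APortalHelper : public AActor
{
    GENERATED_BODY()

public:
    // Scene capture that renders the portal's view into a render target.
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Portal")
    class USceneCaptureComponent2D* SceneCapture;

    // Returns the HMD FOV when a headset is connected, else the camera FOV.
    UFUNCTION(BlueprintCallable, Category = "Portal")
    float CalculateFOV(class APlayerController* PlayerController) const;

    // Builds the view-projection matrix and feeds it to the portal material.
    UFUNCTION(BlueprintCallable, Category = "Portal")
    void UpdatePortalVPMatrixParameters(
        class UMaterialInstanceDynamic* PortalMID,
        const FTransform& PlayerCameraTransform,
        class APlayerController* PlayerController);
};
```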
Now how does the material know how to use these variables I wonder?
We created our own version of the engine's screen space UV function, as the engine's does not work properly in VR.
So below is how we set up the material, with the function's output UVs going into the vertex shader's customized UVs rather than the pixel shader UVs.
And here is a screenshot of the function.
In order for the custom node to work, we added some HLSL that could make sense of the crazy variables we are feeding into it.
Here is the HLSL.
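Below is a minimal sketch of the idea, assuming the custom node has four float4 inputs named VPMatX through VPMatW (matching the vector parameters set from C++) and a float3 WorldPos input wired from an Absolute World Position node:

```hlsl
// Rebuild the view-projection matrix from the four float4 inputs
// (one per axis: X, Y, Z, W), matching how it was split up in C++.
float4x4 VPMatrix = float4x4(VPMatX, VPMatY, VPMatZ, VPMatW);

// Project the world-space position into clip space (row-vector convention).
float4 ClipPos = mul(float4(WorldPos, 1.0f), VPMatrix);

// Perspective divide to NDC, then remap [-1, 1] to [0, 1] UV space.
// Y is flipped because UVs run top-down.
float2 NDC = ClipPos.xy / ClipPos.w;
return NDC * float2(0.5f, -0.5f) + 0.5f;
```

The returned float2 is what goes into the customized UV output described above.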
As you can see, it takes all the parameters we are feeding in and builds a matrix to be used in the rest of the custom math.
Admittedly this got a little out of my depth and my colleague helped out a lot. But if you have any questions about how or why things need to be done, please give me a shout and I will make sure I get an answer for you.
answered Jun 09 '15 at 04:23 PM