CameraVector expression doesn't work with Draw Material to Render Target nodes

When drawing a material to a render target, the CameraVector expression outputs a fixed camera vector that has nothing to do with the player’s camera. I assume this is because it’s erroneously referencing the “camera” capturing the material.

I tried passing the player’s camera in manually via Blueprint, but realized this doesn’t calculate the angle per-pixel and thus isn’t equivalent or suitable for my use case.

Any help on this would be greatly appreciated as it’s holding up a release!

Hey SF,

Can you provide me with steps and some screenshots so I can reproduce this issue on my end?

Let me know if you have further questions or need additional assistance.

Sorry for being slow to reply.

So I’m trying to generate a coarse grid of raytrace results, save that to a texture, and then use it in a shader.

Material function generating the coarse grid works fine:

Material wrapper for the function works fine:

Here’s the blueprint rendering the material to a render target:
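(In case the screenshot is hard to make out, the Blueprint boils down to the equivalent of this rough C++ sketch; the helper and asset names are placeholders, not the ones from my project:)

```cpp
#include "CoreMinimal.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Kismet/KismetRenderingLibrary.h"
#include "Materials/MaterialInterface.h"

// Placeholder helper: clear the render target, then draw the coarse-grid
// material into it (the same thing the "Draw Material to Render Target"
// Blueprint node does).
static void DrawCoarseGridToRenderTarget(UObject* WorldContext,
                                         UTextureRenderTarget2D* RenderTarget,
                                         UMaterialInterface* CoarseGridMaterial)
{
    UKismetRenderingLibrary::ClearRenderTarget2D(WorldContext, RenderTarget, FLinearColor::Black);
    UKismetRenderingLibrary::DrawMaterialToRenderTarget(WorldContext, RenderTarget, CoarseGridMaterial);
}
```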

And here’s the material displaying the contents of the render target for debugging:

When I apply that material to a cube, the result is a nonsensical mess that doesn’t shift by even a pixel when the viewing angle changes:

Passing in a constant camera vector makes it look vaguely more correct:

Hi SF,

It’s going to be a little more difficult to assist you since the material graphs are zoomed out all the way and the nodes are illegible in the screenshots provided.

I will say, though: have you tried using a vector expression node rather than a static vector? CameraVectorWS would be my first guess (see screenshot).

Sadly that is all the advice I can offer based on the information given. Let me know if you need additional help.

That causes the same bug unfortunately:

I’m sorry for being vague; I was working on a commercial product and didn’t want its inner workings to be publicly viewable, and I was hoping this would be enough information to replicate the issue.

Anyway here’s a much simpler version that should do the trick. All it’s doing is outputting the camera vector:

This fails to display properly:

However a constant works fine:

Okay, I think I understand what you are trying to do. It looks like you created a material that generates a parallax occlusion-like effect and draws it to a render target, and then you apply a material that uses the render target to a static mesh in the world. Unfortunately, this kind of setup won’t work. This isn’t necessarily a bug, but rather a limitation of the system.

When you use the camera vector node, it outputs a three-channel vector representing the direction of the camera with respect to the surface of the asset the material is applied to. This is important to note because the material applied to the asset in your setup is only reading from a render target. The material that actually generates the vector output and is drawn into the render target has no asset to reference when generating that output information.

In other words:

Your parallax occlusion material is working properly because when applied to an object that exists in the world it can get input information. When that material is just sitting in your content folder and being drawn to a render target it has no object in the world to use for its vector calculation.

Let me know if that helps,

That stands to reason. Is there any way at all then that I could pass my shader that camera vector information? Or a different way I could write the output of a parallax occlusion mapping pass to a texture I can use later?

Yes, there is a way. I’ve put together a test project as an example, attached to this comment. It’s a pretty hackish workaround, but it may help!

You have to replace the vector information (camera vector) with vector parameter nodes. This will make the original material seem static, since the vector parameter isn’t being set in the material graph. Then you have to calculate the information in Blueprint and set the vector parameter at runtime before you draw that dynamic material to a render target.

To achieve this in my test project, I wanted to update the render target with my camera direction vector. If you play in the editor you can see that the material using the render target is very similar (pretty much the same) as the material with the camera direction plugged into emissive (for testing).

There is a Blueprint (DrawToRT) that specifically gets a reference to the cube actor (CameraDirectionRT) and calculates the camera direction relative to that actor.

Then it sets the vector parameter accordingly and draws the dynamic material to the render target.
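If it’s easier to follow as code, the DrawToRT Blueprint amounts to something roughly like the sketch below. The helper function and the “CameraDirection” parameter name are placeholders for illustration, not names from the test project; match the parameter name to whatever your material actually uses.

```cpp
#include "CoreMinimal.h"
#include "Camera/PlayerCameraManager.h"
#include "Engine/TextureRenderTarget2D.h"
#include "GameFramework/Actor.h"
#include "Kismet/GameplayStatics.h"
#include "Kismet/KismetRenderingLibrary.h"
#include "Materials/MaterialInstanceDynamic.h"

// Placeholder helper mirroring the DrawToRT Blueprint: compute a single camera
// direction relative to the target actor, feed it into the dynamic material as
// a vector parameter, then draw that material into the render target.
static void UpdateRenderTargetForActor(UObject* WorldContext,
                                       AActor* TargetActor,
                                       UMaterialInstanceDynamic* DynamicMaterial,
                                       UTextureRenderTarget2D* RenderTarget)
{
    APlayerCameraManager* Camera =
        UGameplayStatics::GetPlayerCameraManager(WorldContext, /*PlayerIndex=*/0);
    if (!Camera || !TargetActor || !DynamicMaterial || !RenderTarget)
    {
        return;
    }

    // Direction from the actor toward the camera: a single per-actor stand-in
    // for what CameraVector would normally give per pixel.
    const FVector ToCamera =
        (Camera->GetCameraLocation() - TargetActor->GetActorLocation()).GetSafeNormal();

    // "CameraDirection" is a hypothetical parameter name.
    DynamicMaterial->SetVectorParameterValue(
        TEXT("CameraDirection"), FLinearColor(ToCamera.X, ToCamera.Y, ToCamera.Z));

    UKismetRenderingLibrary::DrawMaterialToRenderTarget(WorldContext, RenderTarget, DynamicMaterial);
}
```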

I can’t tell you how to calculate CameraVectorWS in blueprint since that is outside the scope of what I know, but that is what would be passed into the vector parameter value.

That’s about all the help I can offer on this subject (moving this issue to the rendering section).


Ah yes, that’s the solution I mentioned trying in my original post. The problem is that it only passes in a single camera vector for the entire material, rather than an accurate one that calculates the direction between each pixel and the camera individually, and that accuracy is necessary for this kind of parallax effect.

Doh, sorry about that. I should have read your OP a little more closely! This seems like something that I would like to investigate when I have more time. Unfortunately, it may take some time before I can get back to this post. I have a few more ideas I’d like to try for your particular use case.

So I’ve done some raymarching in dynamic textures. I was raymarching a spherical shell so I just used camera position and the UVToLongLat node to generate my camera rays. But if you supply camera position, camera forward vector, and camera FOV as parameters in a dynamic material instance, you should be able to reconstruct the rays using this tutorial.
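I don’t have that material handy, but the reconstruction math is roughly along these lines. This is just a sketch in C++ for readability (it’s not the linked tutorial’s exact math); in a material it would live in the graph or a Custom node, with the render target’s texture coordinate standing in for the screen position:

```cpp
#include "CoreMinimal.h"

// Sketch: build a world-space ray direction for a texel, given the camera's
// basis vectors, horizontal FOV, and aspect ratio. UV is in [0,1]^2 with
// (0,0) at the top-left of the render target.
static FVector RayDirectionForUV(const FVector2D& UV,
                                 const FVector& CamForward,
                                 const FVector& CamRight,
                                 const FVector& CamUp,
                                 float HorizontalFOVDegrees,
                                 float AspectRatio /* width / height */)
{
    // Half-extents of the image plane at unit distance from the camera.
    const float HalfWidth  = FMath::Tan(FMath::DegreesToRadians(HorizontalFOVDegrees) * 0.5f);
    const float HalfHeight = HalfWidth / AspectRatio;

    // Map UV into [-1, 1], flipping V so increasing V in the texture maps to "down" in the image.
    const float X = (UV.X * 2.0f - 1.0f) * HalfWidth;
    const float Y = (1.0f - UV.Y * 2.0f) * HalfHeight;

    return (CamForward + CamRight * X + CamUp * Y).GetSafeNormal();
}
```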

This looks like just what I needed! Will test and report back.

I looked into the method you linked, as well as a few others, and correct me if I’m wrong, but don’t they all depend on knowing the current pixel’s position in screen coordinates? Using the ScreenPosition node on a dynamic texture just gives this clearly incorrect output:


Hmm, yeah, I see the problem now. So do you only plan on using this with cubes? Planes are one thing, but since you have multiple faces displaying the same texture, there’s no way to make it consistent. You could render at least 3 faces to a render target, precalculating the camera vector at the faces from the properties of the cube.
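For what it’s worth, the per-face precalculation I’m imagining is something like the sketch below: pick a point on each face (its center, say), compute the direction from that point to the camera, and write each face’s result into its own region of the render target. The names and the face-center convention are placeholders, not anything from your project.

```cpp
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"

// Sketch: approximate camera vector for one face of a cube actor, taken at the
// face's center. LocalFaceCenter is in the cube's local space, e.g. (50, 0, 0)
// for the +X face of a 100-unit cube.
static FVector CameraVectorForFace(const AActor* Cube,
                                   const FVector& LocalFaceCenter,
                                   const FVector& CameraLocation)
{
    const FVector WorldFaceCenter =
        Cube->GetActorTransform().TransformPosition(LocalFaceCenter);
    return (CameraLocation - WorldFaceCenter).GetSafeNormal();
}
```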

Maybe I misunderstand your purpose here, but I’m guessing you’re raymarching each frame anyway. You’re not gaining any performance unless you’re drawing to a render target that’s significantly smaller than the object’s size on screen, or not updating it every frame.

Hi SF,

Unfortunately, there is no easy way that I can find to calculate the camera vector in BP.

I tried to get around this by creating a cube mesh with Camera Vector plugged into the emissive output, then placing a scene capture cube inside it (with the material set to two-sided) to generate a cubemap texture. I then used the camera direction as the vector input for the cubemap’s UVs at runtime. This gave somewhat accurate results, but only at specific viewing angles, which isn’t useful.

I can’t spend any more time on this subject, but I hope that I was able to spark some ideas on how to work around your issue.

Terribly sorry I missed the notification for this reply.

I was hoping to use this with more than just cubes; they were simply a convenient display mesh.

The idea was to save performance on parallax occlusion mapping by rendering the results of a coarse grid of raymarches, then (assuming spatial coherence) doing finer raymarching bounded by those coarse results.