Can't use AbsoluteWorldPosition with Render Targets, can you help me find a work-around?

Hello,

I’m trying to create an object with a MID that reacts to hits the actor receives in world space. Ideally, each hit would apply masks/lerps to a set of base textures, revealing “underlying” textures like gravel under dirt, or “adding” textures like mud smeared on a rock. These hits need to accumulate on the material, i.e., exposed gravel might be smeared with mud, or multiple mud smears should coexist.

Currently, I have a simple form of this MID working for a single hit. I essentially perform a custom sphere mask using a hit world vector parameter and the per-pixel AbsoluteWorldPosition node. This works great: I can feed in hit vectors dynamically, and I get exactly the masking effect I want with my example material.
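For reference, the per-pixel math behind that kind of sphere mask can be sketched outside the engine. This is a minimal stand-in, not engine API; the function name and the exact falloff remap are illustrative (a common formulation where hardness in [0,1) sharpens the edge):

```python
import math

def sphere_mask(pixel_world_pos, hit_world_pos, radius, hardness):
    """Return 1.0 at the hit center, fading to 0.0 at the radius.
    pixel_world_pos stands in for AbsoluteWorldPosition; hit_world_pos
    is the hit world vector parameter fed to the MID."""
    dist = math.dist(pixel_world_pos, hit_world_pos)
    # Remap the normalized distance so higher hardness gives a sharper edge.
    falloff = max(1.0 - hardness, 1e-6)
    return min(max((1.0 - dist / radius) / falloff, 0.0), 1.0)
```

Every pixel closer than the radius gets a nonzero mask value, which is why the effect tracks the mesh geometry: the mask is evaluated against each pixel's actual world position.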

Now, though, I’m working on accumulating multiple hits on my material, and I’ve hit some problems. It looks like there are no loops, iterations, or array traversals in materials without resorting to custom shader nodes. My plan instead was to iteratively render hit materials to a CanvasRenderTarget2D as they happened, then use the final rendered texture as a texture parameter in the main material for my mesh. However, when I render my material to a CanvasRenderTarget2D, I get a solid black (fully masked) result instead of the hit “splatter” I see on my hit-test mesh.

Troubleshooting of note:

  • Made sure my hit material outputs in the emissive channel so it can be rendered by Draw Material.
  • Used two CanvasRenderTarget2Ds to buffer my rendering, to make sure I’m accumulating draws instead of overwriting them.
  • Replaced the AbsoluteWorldPosition node in my hit calculations with a hard-coded world coordinate. This isn’t equivalent (I get one world-space position for hit masking instead of each pixel’s world-space position), but my CanvasRenderTarget2Ds immediately started displaying my hit material instead of all black.
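The double-buffer scheme in the second bullet can be sketched in miniature. This is a pure-Python stand-in (lists of mask values instead of render targets, `max` standing in for the blend mode); the real version draws one CanvasRenderTarget2D into the other:

```python
def draw_hit(front, back, hit_mask):
    """Ping-pong accumulation: draw the previous result (back) plus the
    new hit into front, then swap roles so the freshly drawn target
    becomes the new accumulation buffer."""
    for i in range(len(front)):
        front[i] = max(back[i], hit_mask[i])  # 'max' stands in for the blend
    return back, front  # (new scratch target, latest accumulated result)
```

Each call composites one hit on top of everything drawn so far, which is the behavior a single render target can’t give you if every draw overwrites it.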

My guess is that when I render my MID to the CanvasRenderTarget2D, it uses the canvas’ absolute world position instead of my original mesh’s. This puts the hit outside the detection radius of my custom sphere mask, so I get a black/zero value instead of a masked texture value.

Is there a way to render my MID to a CanvasRenderTarget2D using cached vertex locations from the MID’s parent mesh, so my sphere masks compute properly? Or is there some other way, besides CanvasRenderTarget2D, to accumulate multiple executions of my MID into a single texture?

There is no connection between the render target and the mesh you want to apply your effect to, so world position is of no use here.

One way would be to get the UV coordinates of the hit location on your mesh and draw the hit mark at the same coordinates in the render target.
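The UV approach above amounts to converting the hit UV into pixel coordinates on the render target and splatting a brush there. A minimal sketch with a flat list standing in for the target and a 2D list for the brush (names are illustrative, not engine API):

```python
def stamp_at_uv(target, width, height, hit_uv, brush):
    """Splat a square brush (2D list of mask values in [0,1]) centered
    at hit_uv into a width*height render-target stand-in, clamping at
    the edges. 'max' stands in for an additive/overlay blend."""
    size = len(brush)
    cx = int(hit_uv[0] * width)
    cy = int(hit_uv[1] * height)
    half = size // 2
    for by in range(size):
        for bx in range(size):
            x, y = cx + bx - half, cy + by - half
            if 0 <= x < width and 0 <= y < height:
                idx = y * width + x
                target[idx] = max(target[idx], brush[by][bx])
```

Because the brush is painted in UV space, it inherits whatever the mesh’s UV layout does with that region, which is relevant to the island-bleed concern raised later in the thread.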

Also, as of 4.13 there is a feature that better suits your needs:
https://docs.unrealengine.com/latest/INT/BlueprintAPI/Rendering/DrawMaterialtoRenderTarget/index.html

Thanks for the response!

Since my question, I’ve gotten an implementation of your UV suggestion working, drawing simple triangles at hit locations. However, this approach seems limited to very simple hit marks like lines, spheres, and polygons. With my original hit material I was able to create pixel masks like splatters or smears and apply them to the hit location using world position in the material. Is there a way to create such a mask or brush in a Blueprint and then render it to a target using UVs?

DrawMaterialToRenderTarget seems to suffer from the same limitation as my initial approach: the AbsoluteWorldPosition node in my MID returns bogus data when drawing to the render target.

With respect to varying hit marks, this is no different from what you were doing initially.

I’m not so sure.

My current hit mask material uses per-pixel world-space coordinates to determine which parts of the material should be masked. These world-space coordinates obviously depend on the geometry of the object the material is applied to.

If I try to do similar hit masking using UV coordinates, don’t I still need per-pixel UV coordinates, which depend on the vertex information of my object’s geometry? The problem is that I want to mask pixels which are “near” the hit location I provide. If I just stamp a hit mask at a UV location, I’m afraid my stamp may bleed outside the texture’s UV islands, painting locations that aren’t close to the hit location. I think I really need a way to render the results of a sphere mask to a render target.
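One way to reconcile the two approaches, building on the “cached vertex locations” idea from the original question, is to bake the mesh’s world (or local) position per texel into a UV-space texture, then guard each UV-space stamp with a distance check against that baked position. This is a hedged sketch of the idea, not a worked engine solution; `positions` stands in for the baked position map:

```python
import math

def stamp_with_position_guard(target, positions, hit_pos, radius):
    """Mask only texels whose baked position lies within the hit radius,
    so a UV-space stamp cannot bleed across UV islands: texels that are
    adjacent in UV but far apart in world space are rejected.
    'target' and 'positions' are parallel per-texel lists."""
    for i, texel_pos in enumerate(positions):
        if math.dist(texel_pos, hit_pos) <= radius:
            target[i] = 1.0
```

In effect this recomputes the sphere mask in UV space against cached geometry instead of against AbsoluteWorldPosition, which is exactly the quantity that goes bogus when drawing to the render target.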