I have a 24-card deck with 11 unique card faces plus 1 unique back face, so in my content I can have 12 textures and 11 static mesh actors. At run-time my game will then have 24 instances of those static mesh actors (one per card).
However, I’m guessing it’s more efficient to have just one texture (i.e. a simple texture atlas)?
Also, I notice the 11 static meshes are exactly the same except for their texture coordinates (used to sample from the texture atlas). So now I wonder: which of the following is faster?
A) one mesh, one material => modify the UVs in the material based on constant(s), or
B) 11 meshes, one material => per-mesh texture coordinates, or
C) one mesh, 11 materials => UVs hardcoded in each material
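For option A, the per-card constants are just a UV offset and scale into the atlas. A minimal standalone sketch of that mapping, assuming a hypothetical 4x3 atlas layout (12 tiles: 11 faces + 1 back; names and layout are my assumptions, not UE4 API):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical: map a card index to its UV rectangle in a Cols x Rows atlas.
struct UVRect { float U, V, SizeU, SizeV; };

UVRect CardUV(int CardIndex, int Cols = 4, int Rows = 3)
{
    const float SizeU = 1.0f / Cols;   // width of one tile in UV space
    const float SizeV = 1.0f / Rows;   // height of one tile in UV space
    const int Col = CardIndex % Cols;  // tile column, left to right
    const int Row = CardIndex / Cols;  // tile row, top to bottom
    return { Col * SizeU, Row * SizeV, SizeU, SizeV };
}
```

In the material this is the same idea: the mesh has one 0..1 UV set, and the material multiplies by (SizeU, SizeV) and adds (U, V) from scalar/vector parameters.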
Meanwhile I read this post (opengl - How to avoid texture bleeding in a texture atlas? - Game Development Stack Exchange), which says not to use texture atlases for 3D, so maybe my entire question is wrong?
Also, does UE4 do anything for me automatically? That is, should I just have 12 separate textures instead of a texture atlas? I was assuming that building my own simple texture atlas would improve performance, but maybe I’m wrong?
I know how to do this in OpenGL. But how do I do it in UE4?
…
Update #1 - here’s my latest idea on how to do it with one static mesh, one material, and one texture atlas:
- material has an array of 24 (or more) per-card transforms, indexed by boneIdx
- C++ generates the deck static mesh => one mesh containing all 24 cards, each vertex tagged with its card’s boneIdx; a single actor uses this static mesh
- to translate a card, modify the corresponding material const (uniform mat4)
Besides the initial generation of the 24 (or more) cards in C++, cards are never added to or removed from the mesh - they are only translated by modifying the material consts (the uniform mat4 bone[24] array). Translation means moving cards to the discard pile, or into a grid relative to the player’s camera so the cards are easy to see.
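The bone idea above can be sketched on the CPU side: every vertex stores the boneIdx of the card it belongs to, and a per-card translation is looked up from an array - the analog of the material’s uniform mat4 array that the game modifies at run-time. All names here are my assumptions, not UE4 API:

```cpp
#include <array>
#include <cassert>

struct Vec3 { float X, Y, Z; };

constexpr int NumCards = 24;
using CardOffsets = std::array<Vec3, NumCards>;  // one translation per card

// What the vertex shader conceptually does for each vertex of card BoneIdx:
// look up that card's translation and offset the vertex by it.
Vec3 TranslateCardVertex(const Vec3& V, int BoneIdx, const CardOffsets& Offsets)
{
    const Vec3& T = Offsets[BoneIdx];
    return { V.X + T.X, V.Y + T.Y, V.Z + T.Z };
}
```

Moving a card then means writing one entry of the array; every vertex tagged with that boneIdx follows, while all other cards stay put.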
One limitation is that the user can’t click an individual card via actor clicks, because there’s only one actor (one static mesh for all 24+ cards). In other words, I can’t take advantage of UE4’s built-in system for detecting clicks on actors, and implementing that myself sounds complicated. However, my use case allows a trick: the cards are always in one of three places - the main deck, the discard pile, or the screen - and clicks only matter for the main deck and the discard pile. So I can just have two invisible actors, one for the main deck and one for the discard pile. When the user clicks one of them, we show a grid view of the cards in that pile by translating the corresponding bones.
So far I’m liking this idea because it sounds theoretically more optimal. But I’m not sure how much performance it really gains, so I’m not sure the extra implementation effort is worth it. So I’m still open to other ideas…
…
Update #2 - I want to translate and rotate cards in the material based on a boneIdx (a group ID shared by all the verts of a card, so each card gets its own translation and rotation).
I considered doing this with a material “custom” node, but it’s unclear to me whether that’s fully cross-platform. I noticed the material has a “World Position Offset” output. I’m able to do rotations with (Absolute World Position → World Position, Time → Rotation Amount) → RotateAboutWorldAxis_Cheap → X-Axis → World Position Offset. However, the lighting is broken - lighting appears to be computed before World Position Offset is applied. But I want my custom World Position Offset applied first, and then let UE4 do its lighting. How?
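For reference, the math a RotateAboutWorldAxis_Cheap-style node performs for the X axis is just a standard axis rotation. A standalone sketch (angle in radians; names and conventions are my assumptions) - the relevant point for the lighting problem is that World Position Offset moves positions only, so the same rotation would also have to be applied to the card’s normal:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float X, Y, Z; };

// Standard rotation about the world X axis: X is unchanged, Y/Z rotate.
Vec3 RotateAboutX(const Vec3& P, float Angle)
{
    const float C = std::cos(Angle);
    const float S = std::sin(Angle);
    return { P.X, C * P.Y - S * P.Z, S * P.Y + C * P.Z };
}
```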
…
Update #3 - I remembered the “inverse transpose” (for transforming normals) from some CG lecture. Here’s one reference:
I tried VertexNormalWS → InverseTransformMatrix → Normal. However, this is wrong because I need the matrix created by (RotateAboutWorldAxis_Cheap → X-Axis), and RotateAboutWorldAxis_Cheap doesn’t output a matrix - it outputs the transformed position. Hmmm…
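There may be a shortcut here: a rotation matrix R is orthonormal, so inverse(R) = transpose(R), and therefore the inverse transpose of R is R itself - meaning the normal can be transformed by exactly the same rotation as the position, and no explicit matrix or inverse is needed. A standalone sketch checking that R·Rᵀ = I for a rotation about X (all names are my assumptions):

```cpp
#include <cassert>
#include <cmath>

// Build the 3x3 matrix for a rotation about the X axis.
void MakeRotX(float Angle, float R[3][3])
{
    const float C = std::cos(Angle);
    const float S = std::sin(Angle);
    const float M[3][3] = { {1, 0, 0}, {0, C, -S}, {0, S, C} };
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            R[i][j] = M[i][j];
}

// Check R * R^T == I, which implies inverse(transpose(R)) == R.
bool RotXIsOrthonormal(float Angle)
{
    float R[3][3];
    MakeRotX(Angle, R);
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
        {
            float Sum = 0.0f;
            for (int k = 0; k < 3; ++k)
                Sum += R[i][k] * R[j][k];  // (R * R^T)[i][j]
            const float Expected = (i == j) ? 1.0f : 0.0f;
            if (std::fabs(Sum - Expected) > 1e-5f)
                return false;
        }
    return true;
}
```

So in the material, feeding the normal through the same rotate-about-X math used for the position should give correctly lit results, as long as the transform stays a pure rotation (no non-uniform scale).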