Actor's components spawned at different scale

Hi.
I’m trying to build a simple AR app: when the camera recognizes an image of an unfolded die, the app spawns a mesh of an unfolded cube, which then animates and folds itself.
This is the Actor’s viewport.
As you can see, the cube has an edge of 50 cm.
There is also a text render roughly as big as the mesh.

This is the image used for recognition. I expect I will have to get the “Image Extent” node and divide its value by at least 4 in order to get the actual edge length of the cube.
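The math I have in mind is just this (a C++ sketch; I’m assuming the image extent comes back as a 2D physical size in centimetres, and the divide-by-4 comes from the unfolded cross being four faces tall):

```cpp
#include "CoreMinimal.h"

// Sketch: derive the uniform spawn scale from the recognized image's
// physical size. ImageExtentCm stands in for whatever the "Image Extent"
// node returns (assumed here to be a 2D size in centimetres).
FVector ComputeSpawnScale(const FVector2D& ImageExtentCm)
{
    // The unfolded-die cross is four faces tall, hence the divide-by-4:
    // one cube edge is a quarter of the image's longer dimension.
    const float CubeEdgeCm = ImageExtentCm.GetMax() / 4.0f;

    // The mesh in the Blueprint is authored with a 50 cm edge, so the
    // spawn scale is the ratio between the target and authored sizes.
    const float AuthoredEdgeCm = 50.0f;
    return FVector(CubeEdgeCm / AuthoredEdgeCm);
}
```

For example, if the printed image is 80 cm tall, the cube edge would be 20 cm and the uniform spawn scale 0.4.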

This is the Level BP, which defines the position, orientation, and scale used to spawn the actor.
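In C++, the Level BP’s spawn step amounts to something like this (a sketch; `UnfoldedCubeClass` and the transform inputs are placeholders for whatever the BP actually wires in):

```cpp
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Engine/World.h"

// Sketch of the Level BP's spawn step: one FTransform built from the
// tracked image's pose plus the computed scale, applied at spawn time
// so that every component inherits the same transform.
AActor* SpawnUnfoldedCube(UWorld* World,
                          TSubclassOf<AActor> UnfoldedCubeClass,
                          const FVector& Location,
                          const FRotator& Rotation,
                          const FVector& Scale)
{
    const FTransform SpawnTransform(Rotation, Location, Scale);
    return World->SpawnActor<AActor>(UnfoldedCubeClass, SpawnTransform);
}
```

Since the scale is part of the spawn transform, every component should inherit it uniformly, which is exactly why the mismatch below confuses me.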

The app works and the text render within the actor is correctly scaled (it takes up the whole image size), but the cube comes out roughly 1/3 smaller, even though I applied the transform to the BP as a whole and not to the individual components.
I can’t see where the problem is, since this is not my project but my colleague’s. Mine is set up the same way but works properly.
This is the result
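To narrow it down, I’m thinking of logging each component’s relative vs. world scale at runtime, so that a stray per-component scale baked into the other project’s BP would show up. A minimal helper like this (hypothetical, just for debugging):

```cpp
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/SceneComponent.h"

// Sketch: dump the relative and world scale of every scene component.
// If the transform were only applied at the actor level, all components
// should report a relative scale of (1, 1, 1).
void DumpComponentScales(const AActor* Actor)
{
    TArray<USceneComponent*> Components;
    Actor->GetComponents<USceneComponent>(Components);

    for (const USceneComponent* Comp : Components)
    {
        UE_LOG(LogTemp, Log, TEXT("%s relative=%s world=%s"),
               *Comp->GetName(),
               *Comp->GetRelativeScale3D().ToString(),
               *Comp->GetComponentScale().ToString());
    }
}
```

If the cube mesh reports a relative scale other than 1 while the text render does not, that would explain the ~1/3 difference.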