How about the "int32" and the capital of "INT32" defference

How do the "int32" and "INT32" types differ?

typedef signed int int32; // FGenericPlatformTypes::int32
typedef signed int INT32;

Looking at both of them, they are both defined as 'signed int', so it shouldn't really matter which one you use, unless someone else can shed more light on this?
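If you want to verify that yourself, here's a minimal standalone sketch; the two typedefs are restated locally from the definitions quoted above rather than pulled from the real headers:

```cpp
#include <type_traits>

// The two definitions quoted above, restated locally for this test.
typedef signed int int32; // FGenericPlatformTypes::int32
typedef signed int INT32;

// Both names alias one and the same type, so mixing them costs nothing.
static_assert(std::is_same<int32, INT32>::value,
              "int32 and INT32 are the same type");

int main() { return 0; }
```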

Our "official" types are lowercase and have the bit width at the end:

uint8, int32, float, double

You will find all of these in Platform.h
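As a quick sanity check on what the "bits at the end" naming guarantees, here's a sketch that should compile inside any engine module (CoreMinimal.h as the include is my assumption of a typical setup):

```cpp
#include "CoreMinimal.h" // typical UE module include; pulls in Platform.h

// The bit width in the name is a real guarantee on every platform:
static_assert(sizeof(uint8)  == 1, "uint8 is always 1 byte");
static_assert(sizeof(int32)  == 4, "int32 is always 4 bytes");
static_assert(sizeof(float)  == 4, "float is always 4 bytes");
static_assert(sizeof(double) == 8, "double is always 8 bytes");
```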

We don't use any platform-specific defines like DWORD, etc., except when necessary, and only within that platform's implementation. I'm not aware of any INT32 instances in the standard engine code.
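To make that concrete, here's a hedged sketch of what "only within that platform's implementation" can look like; the helper function is made up, while PLATFORM_WINDOWS, DWORD, and GetCurrentThreadId are real:

```cpp
#include "CoreMinimal.h"

#if PLATFORM_WINDOWS
#include "Windows/WindowsHWrapper.h" // UE's safe way to pull in Win32 headers

// Hypothetical helper: the Win32 DWORD never leaks out of this file.
uint32 GetNativeThreadId()
{
    const DWORD NativeId = ::GetCurrentThreadId(); // Win32 API, Win32 type
    return static_cast<uint32>(NativeId);          // engine type at the boundary
}
#endif
```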

Thank you very much!

Can you go deeper into this? Is there a performance difference? The platform will always be Windows for our servers, so would INT32 or int32 be the better choice for speed? Or is there no difference in speed?

There is absolutely no performance difference, because both are just aliases for signed int (or signed long). I suggest always using the "official" Unreal types when working with the engine, unless you have a reason to do otherwise.

This is correct. By using the Unreal types, you are guaranteed to get the best representation of the type for each platform.

You could certainly use "int" if you didn't care about the size of the type, but my muscle memory always reaches for int32. I imagine most, if not all, platforms handle this type efficiently.

I would think the size aspect only really matters if you are packing data or working with disk/memory-critical objects.
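For example, a sketch with made-up fields (engine types, so it assumes a UE module):

```cpp
#include "CoreMinimal.h"

// Convenient but wasteful: every field takes a full 4 bytes.
struct FLooseSample
{
    int32 Health;
    int32 Armor;
    int32 TeamId;
}; // sizeof == 12

// Packed for memory/disk-critical data: smallest width that fits each value.
struct FPackedSample
{
    int16 Health;
    int16 Armor;
    uint8 TeamId;
}; // 5 bytes of data, typically padded to 6

static_assert(sizeof(FPackedSample) < sizeof(FLooseSample), "packing pays off");
```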