Why does returning int from a UFUNCTION result in an "unknown type 'int'" compiler error?

Here’s a sample code snippet:

UFUNCTION()
int SomeFunc();

This results in a compiler error saying unknown type 'int'. However, returning float, or sized types like int32, int16, etc., works.

Are we not allowed to return int itself from UFUNCTION functions? It works just fine if I remove UFUNCTION. I can kind of see why this would be the case, since 32-bit and 64-bit builds could have differently sized ints and that could mess with Blueprint somehow. I just don’t see this info anywhere, and was stuck for a few hours experimenting and wondering what the hell it means by int being an unknown type.

If my theory is correct, it would be really helpful to add this to this page: https://docs.unrealengine.com/latest/INT/Programming/UnrealArchitecture/Reference/Functions/index.html.

And if possible, make the compiler spit out a better error saying that UFUNCTIONs can’t return int and must return a specifically sized integer type like int8, int16, int32, or int64.

You are correct about this: UnrealHeaderTool (UHT) doesn’t recognize implementation-dependent int types, so if you want to connect your function to Unreal’s reflection system you have to specify the size explicitly. It’s very likely this is because the interop code generated to bind UFUNCTIONs to Blueprints, and vice versa, depends on the size of the type.
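
As a minimal sketch of the fix (the class name and generated header below are hypothetical, just to make the snippet self-contained):

// MyObject.h (hypothetical header, shown only to put the declarations in context)
#include "CoreMinimal.h"
#include "UObject/Object.h"
#include "MyObject.generated.h"

UCLASS()
class UMyObject : public UObject
{
    GENERATED_BODY()

public:
    // UFUNCTION()
    // int SomeFunc();   // rejected by UnrealHeaderTool: unknown type 'int'

    UFUNCTION()
    int32 SomeFunc();    // compiles: int32 is a fixed-width type the reflection system knows
};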

Just to add that there is documentation about this already, though I had to search a bit to find it in the new site format :slight_smile:

# UE4 Documentation Link

"Don’t use the C++ int type in portable code, since it’s dependent on the compiler how large it is."


Omg, this is so weird.

There are multiple reasons. I’m sure it helps with their macro/reflection system somewhat, and also, if you want a data type to act identically and reliably on all supported platforms, you have to use a custom typedef a lot of the time. Plain integer types are unpredictable (their size is up to the compiler to decide, not even the hardware), and even uint32_t may not cut it, since some platforms/compilers may not provide it (not likely, but it can happen).
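
A small sketch of that point, assuming nothing beyond the standard headers: plain int only has a guaranteed minimum width, the exact-width typedefs are optional, and only the least/fast variants are required to exist everywhere.

#include <cstdint>
#include <climits>

int main()
{
    // Plain int is only guaranteed to be at least 16 bits wide; the exact width
    // is up to the compiler, which is why portable code pins the size explicitly.
    static_assert(sizeof(int) * CHAR_BIT >= 16, "int is at least 16 bits");

    // std::uint32_t only exists when the target has an exact 32-bit unsigned type;
    // std::uint_least32_t must exist on every conforming implementation.
    std::uint_least32_t counter = 0;
    return static_cast<int>(counter);
}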

What’s more, how integer types deal with under/overflow and how they are stored isn’t predictable across all platforms either (signed overflow, in particular, is undefined behavior). It looks to be very messy stuff. And the college professors who say integers are some specific size, and that const is a compile-time number that is directly inserted (constexpr and macros do that, not const), get under my skin. I know they don’t know any better, though. My basic point is that making reliable cross-platform code is not an easy task in C, and by extension C++, and measures like this are probably a means to combat the issue and keep things more consistent.
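
For what it’s worth, a minimal sketch of that const vs. constexpr distinction (runtimeValue is a made-up function, just to show which initializers the compiler accepts):

// A made-up function whose result is only known at run time.
int runtimeValue() { return 7; }

const int a = runtimeValue();          // fine: read-only, but initialized at run time
constexpr int b = 42;                  // true compile-time constant
// constexpr int c = runtimeValue();   // error: initializer is not a constant expression

int main() { return a + b; }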