How to reach 32-bit float precision in a 2D render target

Hi all,

Hopefully a simple question. How can you set a value in an RGBA render target up to 3.4×10^38 when you can't enter it to that accuracy, given the limitation of floats in blueprints (about 7 significant digits)?

see attached:

I could really use the added precision for encoding information in a texture.

Cheers,
Ben.

float, aka the single-precision float type in C/C++ (and the only float type supported by blueprints), is already a 32-bit floating-point value. Note that 3.4×10^38 is the maximum *magnitude* a float can hold, not its precision: at any magnitude, a 32-bit float carries only about 7 significant decimal digits. So everything is working as designed.

Other float types that C/C++ supports are:

  • half aka half-precision float aka 16-bit float (~3.3 significant decimal digits)
  • double aka double-precision aka 64-bit float (~15.9 significant decimal digits)
  • long double aka “x86 extended precision” aka 80-bit floating point (~19.2 significant decimal digits), which is the maximum precision that x86 CPUs support natively

But none of the types above are supported by blueprints; you need to use C++ to be able to use them.

You can read more about floating point here:

Was this ever resolved? My Read Render Target Raw node fails when the target is set to any 32-bit float format. I can only use RGBA16f, which does not have the precision I need. How can I read render targets that are 32-bit floating point? Blueprints preferred, but even a C++ snippet would be helpful.