GPU vs CPU floating point issues and robust noise generation?

I’m looking for a way to generate 3D noise that can be applied to a material and sampled at specific points through C++ and/or BP. My approach is similar to this dynamic physical ocean, but I want to sample opacity based on world position rather than z displacement based on xy position. The author of that technique had limited success with render target lookups, finding them too slow to be practical, so I’d hoped to emulate his latter approach of running the same logic on both CPU and GPU. I’ve found Perlin, simplex, and gradient noise all capable of creating the cloudy visual effect I’m after. While I’ve been able to generate matching noise on the GPU between shaders and material nodes, and on the CPU between C++ and BP, I get different results between CPU and GPU even where the logic, and even the code (in the case of C++ and shaders), is identical.

For example, the simplex shader I’m using has code like:

    // mod(x, 289.0) implemented with floor()
    float mod289(float x) {
        return x - floor(x / 289.0) * 289.0;
    }
    // permutation polynomial: (34x^2 + x) mod 289
    float permute(float x) {
        return mod289(x * x * 34.0 + x);
    }

While permute(111.1) spits out 152.25 in C++ and 152.22 in the material editor, permute(permute(111.1)) spits out 171.375 and 237.154 respectively.
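I assume this is rounding error compounding: squaring a value around 150 magnifies any tiny difference by a few thousand before the mod wraps it into a different residue. To watch the drift grow outside the engine, here’s a stripped-down, standalone C++ version of the same chain (same formulas as the shader above, run once in float and once in double):

    #include <cmath>
    #include <cstdio>

    // Same formulas as the shader above, once in float and once in double.
    float  mod289f(float x)   { return x - std::floor(x / 289.0f) * 289.0f; }
    float  permutef(float x)  { return mod289f(x * x * 34.0f + x); }
    double mod289d(double x)  { return x - std::floor(x / 289.0) * 289.0; }
    double permuted(double x) { return mod289d(x * x * 34.0 + x); }

    int main() {
        float  f = 111.1f;
        double d = 111.1;
        for (int depth = 1; depth <= 4; ++depth) {
            f = permutef(f);
            d = permuted(d);
            // Print both so the drift is visible at each nesting level.
            std::printf("permute^%d: float = %10.4f   double = %10.4f\n", depth, f, d);
        }
        return 0;
    }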

I’m new to C++ and shaders, so I’m wondering if any more experienced coders have dealt with floating point issues like this. Are there any methods I can use, or extra steps I can take, to produce somewhat similar results without destroying the visual effect? I tried adjusting precision by dropping digits at different points to make them match better, but that quickly results in blockiness and strange artifacts.

Unfortunately I don’t have an easy solution to your problem, but I can confirm that GPUs don’t guarantee accurate floating point math by design: shader compilers and hardware typically apply fast-math style optimizations (reordering, fused multiply-adds, lower-precision intrinsics) that introduce small errors. The results are good enough for graphics, but they won’t match a CPU bit-for-bit.

I’m not sure there’s a way to do this in UE4 at all, but GPU compute frameworks (CUDA or OpenCL) usually have compiler options to force strict floating point math. You could also use integer math instead of floats, since integer operations are exact on both sides. The easiest solution would probably be a lookup table or a texture of random values shared between the GPU and CPU.
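If you do go the integer route, something like the sketch below can be ported line-for-line to HLSL. It’s not UE4-specific and the names are mine: wang_hash is a commonly used GPU hashing function, and the constants used to mix the coordinates are an arbitrary choice. Because it only uses integer multiplies, shifts, and XORs plus one scale by an exact power of two, the CPU and GPU versions should return matching values:

    #include <cstdint>
    #include <cstdio>

    // Well-known Wang hash: integer ops only, so it behaves identically on CPU and GPU.
    uint32_t wang_hash(uint32_t seed) {
        seed = (seed ^ 61u) ^ (seed >> 16);
        seed *= 9u;
        seed = seed ^ (seed >> 4);
        seed *= 0x27d4eb2du;
        seed = seed ^ (seed >> 15);
        return seed;
    }

    // Hash an integer lattice point to a float in the 0..1 range.  How the
    // coordinates are mixed into one seed is up to you; these prime
    // multipliers are an arbitrary (but common) choice.
    float hash_to_unit(int x, int y, int z) {
        uint32_t seed = (uint32_t)x * 73856093u
                      ^ (uint32_t)y * 19349663u
                      ^ (uint32_t)z * 83492791u;
        return wang_hash(seed) * (1.0f / 4294967296.0f);
    }

    int main() {
        // Same inputs -> same bits on both sides, so CPU and GPU lattices agree.
        std::printf("%f\n", hash_to_unit(12, -5, 7));
        return 0;
    }

You’d then fade/interpolate between the lattice values exactly as your current noise does, and the same hash can be used to bake a lookup texture if you prefer that route.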

Yep. As Ensaides pointed out, this is inherent to how computers handle floating point. You will always be susceptible to rounding errors, and the more transformations you do, the bigger the error gets. Additionally, many decimal values (0.1, for example) are periodic in binary and can’t be stored exactly, so errors show up where you might not expect them. Here is a video explaining the underlying numeric logic, mostly for fun: Why Computers are Bad at Algebra | Infinite Series - YouTube.

What you can do:

  1. Use double precision floating point numbers for your critical calculations. (this may work in your case)
  2. Try to do most of your math in the -1 to 1 range to avoid wildly different values if you can. (not always possible; see the sketch after this list)
  3. Try to optimize your math by replacing your functions with the actual formula they use. This way you may be able to reduce the number of divisions or otherwise simplify your formula. (not easily applicable in your case)
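On point 2 specifically: in this permutation-polynomial style of noise you can keep the intermediates small by wrapping the cell coordinate into the 0..289 range before it ever gets squared, which is what common GLSL simplex implementations do by calling mod289 on the cell coordinates first. For the integer cell indices that permute actually operates on (they come from a floor()), the result is mathematically unchanged, since (34n^2 + n) mod 289 only depends on n mod 289, but every intermediate now stays small enough for 32-bit floats. A quick sketch reusing the function names from your shader (the coordinate value is just an example):

    #include <cmath>
    #include <cstdio>

    // Same formulas as the shader in the question.
    float mod289(float x)  { return x - std::floor(x / 289.0f) * 289.0f; }
    float permute(float x) { return mod289(x * x * 34.0f + x); }

    int main() {
        // A lattice coordinate far from the origin, as produced by floor(worldPos).
        float cell = 123457.0f;

        // Naive: squaring the large value first throws away most of the fraction bits.
        float naive   = permute(permute(cell));

        // Wrapped: reducing into [0, 289) first gives the same residue for integer
        // inputs, but keeps every intermediate small enough for 32-bit floats.
        float wrapped = permute(permute(mod289(cell)));

        std::printf("naive: %f   wrapped: %f\n", naive, wrapped);
        return 0;
    }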

Happy coding :slight_smile: