UE4 frame rate dependent audio engine?

Just wanted to ask whether UE4’s audio engine is frame rate dependent? I am currently experimenting with the best method of handling fast-repeating weapon sounds, such as assault rifles, in UE4. After quite a lot of experimentation it appears that polyphonic and granular-based approaches do not fare well.

For example:

- Polyphonic: one or two single-shot samples with modulation, plus a voice clamp of approximately 6, with or without a separate tail asset.
- Granular: very short assets (the length of one complete cycle of the weapon’s action) with many variations selected randomly, plus modulation and a separate tail asset. No voice clamping is required here, since each asset lasts only one firing cycle and so uses one or two voices at most.

Both of the above methods play back incredibly erratically, with stuttering and strange changes in the timing of the shot sounds being fired off. The issue remains even after packaging or cooking the game, where the files are streamed from RAM, and seems to get increasingly worse at lower frame rates. It occurs even on slower-firing automatic weapons (approx. 600 RPM).

Is the only reliable way to get accurate timing with higher rate-of-fire weapons in UE4 (without using Wwise) to use looping sound assets, as demonstrated in the Shooter Game project? Or is there something I am missing or doing wrong?

Any help would be greatly appreciated.

Hi Fortran,

Could you post your dxdiag as well?

Hi TJ, thanks for the reply. Sure, here it is: (dxdiag posted on Pastebin)

In many regards, yes, the system is frame rate dependent.

All the logic that determines which sounds are active runs on the main game thread, as does the supplying of audio data to the hardware audio thread. Once a buffer has been handed to the hardware it plays back at the rate the hardware consumes it, regardless of game thread performance. However, for real-time decompressed sounds (by default, sounds over 5 seconds) the buffers can get starved if a large enough hitch prevents the double buffer from being refilled in time.

Another issue you may run into is the maximum number of channels. By default we have 64 active channels, which translates to 64 possible active sources. If your sounds are played often enough and are long enough, you will find sounds being cut off or failing to start, depending on how the priorities get chosen.

Finally, there is the question of how you’re playing the sounds. If you’re creating a new AudioComponent or just calling PlaySoundAtLocation for each sound, there’s no issue; but if you’re reusing the same AudioComponent, you could again get sounds cut off before they finish playing.

Many thanks for the reply, Marc. It seems like looping assets will be the way to go with UE4 then.