Best way to integrate procedural audio

Hi all, I would like to know the best way to integrate programmatically generated audio.

Basically I have various simple generators (noise, etc.) that I would like to make usable by any sound designer working on the project; for this reason I try to rely as much as possible on the audio features UE4 already provides.

I wrapped those generators into SoundWaves, quite similar to what has been done here - except that I made my SoundWaves actual assets, with a proper factory. I can create them like any other asset and assign them to SoundWave Player nodes, or to a custom node I created (equivalent to the “USoundNodeProceduralTest” in the thread linked above) that lets me expose any parameters the generator may be fed with.
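
For illustration, here is a stripped-down sketch of what I mean - the class name UMyNoiseWave and the Amplitude parameter are placeholders for my actual generators, and the generated-body macro may need adjusting for your engine version:

```cpp
// MyNoiseWave.h - stripped-down sketch of a procedural SoundWave asset.
#pragma once

#include "Sound/SoundWave.h"
#include "MyNoiseWave.generated.h"

UCLASS()
class UMyNoiseWave : public USoundWave
{
	GENERATED_BODY()

public:
	UMyNoiseWave()
	{
		NumChannels = 1;
		SampleRate = 44100;
		Duration = INDEFINITELY_LOOPING_DURATION; // never reported as finished
		bProcedural = true;                       // no cooked PCM data to load
		Amplitude = 0.5f;
	}

	// Example of a parameter exposed through my custom sound node.
	UPROPERTY(EditAnywhere, Category = "Generator")
	float Amplitude;

	// Called by the audio device whenever it needs more samples.
	virtual int32 GeneratePCMData(uint8* PCMData, const int32 SamplesNeeded) override
	{
		int16* OutData = reinterpret_cast<int16*>(PCMData);
		for (int32 i = 0; i < SamplesNeeded; ++i)
		{
			// White noise in [-Amplitude, +Amplitude], converted to 16-bit PCM.
			const float Sample = Amplitude * (FMath::FRand() * 2.f - 1.f);
			OutData[i] = static_cast<int16>(Sample * 32767.f);
		}
		return SamplesNeeded * sizeof(int16); // bytes written
	}
};
```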

It works great for now. However, I recently noticed the SoundWaveStreaming class, which seems to provide some basic functionality for the same thing - is it worth using instead? The only examples I could find were the SoundMod and Phya plugins, which use it quite differently:

  • the first one derives from it and overrides the GeneratePCMData method
  • the second one creates a SoundWaveStreaming instance, binds its sound-generation delegate to the proper function, and has it played by an audio component (see the sketch after this list)
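
To make the comparison concrete, here is roughly how I understand the second approach, based on the SoundWaveStreaming API as I read it in the engine source - a sketch only; AMyNoiseActor, FillAudio, and the noise body are placeholders:

```cpp
// MyNoiseActor.h - sketch of the delegate-based approach.
#pragma once

#include "GameFramework/Actor.h"
#include "Sound/SoundWaveStreaming.h"
#include "Components/AudioComponent.h"
#include "MyNoiseActor.generated.h"

UCLASS()
class AMyNoiseActor : public AActor
{
	GENERATED_BODY()

public:
	AMyNoiseActor();
	virtual void BeginPlay() override;

	// Bound to the wave's underflow delegate; queues more PCM on demand.
	void FillAudio(USoundWaveStreaming* Wave, int32 SamplesRequired);

	UPROPERTY(VisibleAnywhere)
	UAudioComponent* AudioComponent;

	UPROPERTY(Transient)
	USoundWaveStreaming* StreamingWave;
};

// MyNoiseActor.cpp
AMyNoiseActor::AMyNoiseActor()
{
	AudioComponent = CreateDefaultSubobject<UAudioComponent>(TEXT("AudioComponent"));
	RootComponent = AudioComponent;
}

void AMyNoiseActor::BeginPlay()
{
	Super::BeginPlay();

	// The wave is built at runtime instead of existing as an asset.
	StreamingWave = NewObject<USoundWaveStreaming>(this);
	StreamingWave->SampleRate = 44100;
	StreamingWave->NumChannels = 1;
	StreamingWave->Duration = INDEFINITELY_LOOPING_DURATION;

	// Fired whenever the internal queue runs low on audio data.
	StreamingWave->OnSoundWaveStreamingUnderflow.BindUObject(this, &AMyNoiseActor::FillAudio);

	AudioComponent->SetSound(StreamingWave);
	AudioComponent->Play();
}

void AMyNoiseActor::FillAudio(USoundWaveStreaming* Wave, int32 SamplesRequired)
{
	TArray<int16> Buffer;
	Buffer.AddUninitialized(SamplesRequired);
	for (int32 i = 0; i < SamplesRequired; ++i)
	{
		// Placeholder generator: half-amplitude white noise.
		Buffer[i] = static_cast<int16>((FMath::FRand() * 2.f - 1.f) * 0.5f * 32767.f);
	}
	Wave->QueueAudio(reinterpret_cast<const uint8*>(Buffer.GetData()), Buffer.Num() * sizeof(int16));
}
```

Note that the wave here is created at runtime rather than existing as an editor asset, unlike my current factory-based setup.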

The second method seems cleaner, but in my case, what would be the advantage of using SoundWaveStreaming?

Thanks for your time!

I am unsure whether this would suit your requirements, but have you considered FMOD Studio? They have built a plugin specifically to integrate with UE4:

http://www.fmod.org/fmod-studio-unreal-engine-4-now-available/

Actually I have, and Wwise as well.
But I’d really like to know whether such things have been done relying on the native UE4 framework only.

The main goal is simpler and wider access to procedural audio content creation in UE4.