Best way to integrate procedural audio
Hi all, I would like to know the best way to integrate programmatically-generated audio.
Basically I have various simple generators (noise, etc.) that I would like to make usable by any sound designer working on the project; for this reason I try to rely as much as possible on the audio features UE4 already provides.
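For context, here is a minimal, engine-agnostic sketch of the kind of generator I mean: a white-noise source that fills a mono 16-bit PCM buffer, with amplitude as an exposed parameter. The class and member names are my own illustration, not anything from UE4.

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Hypothetical engine-agnostic generator: white noise as mono
// 16-bit PCM, with amplitude (0.0 .. 1.0) as a tweakable parameter.
class NoiseGenerator {
public:
    explicit NoiseGenerator(float Amplitude = 0.5f, uint32_t Seed = 0)
        : Rng(Seed), Dist(-Amplitude, Amplitude) {}

    // Produce NumSamples of mono 16-bit PCM.
    std::vector<int16_t> Generate(size_t NumSamples) {
        std::vector<int16_t> Buffer(NumSamples);
        for (int16_t& Sample : Buffer) {
            // Scale the float sample into the signed 16-bit range.
            Sample = static_cast<int16_t>(Dist(Rng) * 32767.0f);
        }
        return Buffer;
    }

private:
    std::mt19937 Rng;
    std::uniform_real_distribution<float> Dist;
};
```

The point is that each generator is just a "fill this buffer" function plus a few parameters, which is why exposing those parameters on a custom sound node is attractive.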
I wrapped those generators into SoundWaves, quite similar to what has been done here, except that I made my SoundWaves actual assets with a proper factory. I can then create them like any other asset and assign them to SoundWave Player nodes, or to a custom node (equivalent to the "USoundNodeProceduralTest" in the thread linked above) that I created, which lets me expose any parameter the generator may be fed with.
It works well so far, but I recently noticed the SoundWaveStreaming class, which seems to provide basic functionality for the same purpose. Is it worth using instead? The only examples I could find were the SoundMod and Phya plugins, which use it quite differently:
The second method seems cleaner, but in my case, what would be the advantage of using SoundWaveStreaming?
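As I understand it, the main difference is push vs. pull: with a streaming-style wave, the generator pushes PCM data into a queue that the audio side drains later, instead of the audio side pulling samples on demand. Here is a minimal sketch of that push model, assuming illustrative names (this is not the actual UE4 API, and a real version would need thread-safety that I've left out):

```cpp
#include <algorithm>
#include <cstdint>
#include <deque>
#include <vector>

// Minimal sketch of the push model behind streaming-style playback:
// the game thread queues PCM data, and the audio side later drains
// whatever has accumulated. Names are illustrative, not UE4 API.
class StreamingBuffer {
public:
    // Game-thread side: append generated samples.
    void QueueAudio(const std::vector<int16_t>& Samples) {
        Queue.insert(Queue.end(), Samples.begin(), Samples.end());
    }

    // Audio side: pop up to MaxSamples for the next mix callback.
    std::vector<int16_t> DequeueAudio(size_t MaxSamples) {
        const size_t Count = std::min(MaxSamples, Queue.size());
        std::vector<int16_t> Out(Queue.begin(), Queue.begin() + Count);
        Queue.erase(Queue.begin(), Queue.begin() + Count);
        return Out;
    }

    size_t Pending() const { return Queue.size(); }

private:
    std::deque<int16_t> Queue;
};
```

If that reading is right, the trade-off is latency and buffering control (push) versus simplicity (pull on demand), which may be why the two plugins use the class so differently.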
Thanks for your time!