How to create an audio filter plugin in UE4?

I am trying to implement an audio filter in UE4 as a SoundNode (maybe there is a better solution in terms of integration).
However, I cannot find any documentation, tutorial or example to draw on.
Looking through UE4's source code, the USoundNodeDoppler and USoundNodeAttenuation classes appear to only modify ParseParams in their ParseNodes overrides; they do not actually filter the signal at the sample level.
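
For reference, that parameter-only pattern looks roughly like this (a sketch modeled on nodes such as USoundNodeModulator; UMyParameterNode and the scale factors are made up for illustration, while Volume and Pitch are fields of FSoundParseParameters):

void UMyParameterNode::ParseNodes(FAudioDevice* AudioDevice, const UPTRINT NodeWaveInstanceHash, FActiveSound& ActiveSound, const FSoundParseParameters& ParseParams, TArray<FWaveInstance*>& WaveInstances)
{
    // Copy the incoming parse parameters, scale a couple of playback
    // parameters, and let the base class parse the child nodes with the
    // updated values. No sample data is touched anywhere in this path.
    FSoundParseParameters UpdatedParams = ParseParams;
    UpdatedParams.Volume *= 0.5f;  // e.g. halve the volume
    UpdatedParams.Pitch  *= 1.2f;  // e.g. raise the pitch slightly
    Super::ParseNodes(AudioDevice, NodeWaveInstanceHash, ActiveSound, UpdatedParams, WaveInstances);
}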

Basically, I created a new C++ class called "MySoundNode" that inherits from USoundNode, and added it to a Sound Cue in the editor:

[Screenshot of the Sound Cue graph: 20852-untitled.png]

void UMySoundNode::ParseNodes(FAudioDevice* AudioDevice, const UPTRINT NodeWaveInstanceHash, FActiveSound& ActiveSound, const FSoundParseParameters& ParseParams, TArray<FWaveInstance*>& WaveInstances)
{
    int32 count = WaveInstances.Num();
    if (count != 0)
    {
        printf("blabla"); // breakpoint set here
    }
}

However, if I set a breakpoint at the printf line and play the Sound Cue, the breakpoint never hits, which means the WaveInstances array is always empty. So there seems to be no way to reach the buffer in the RawData of the USoundWave (the WaveData member of an FWaveInstance) through the WaveInstances parameter. The NodeWaveInstanceHash parameter is also always 0.

In addition, it seems that I can get the ResourceID of a USoundWave from AudioDevice->Buffers. But how can I get the actual pointer to the audio buffer of the USoundWave connected to the input of my MySoundNode instance?
Even if I could get that pointer, I am afraid that writing back into a WaveInstance's buffer (in-place processing) could mess up other sound nodes that are also connected to the output of the same Wave Player.
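
To make the question concrete, this is the kind of access I was hoping for (only a sketch; I am assuming the FWaveInstance::WaveData and USoundWave::RawData members as they appear in the engine source, and even this only reaches the asset, not a decoded sample buffer):

void UMySoundNode::ParseNodes(FAudioDevice* AudioDevice, const UPTRINT NodeWaveInstanceHash, FActiveSound& ActiveSound, const FSoundParseParameters& ParseParams, TArray<FWaveInstance*>& WaveInstances)
{
    // Let the child nodes (e.g. the Wave Player) add their wave instances first.
    const int32 FirstNewIndex = WaveInstances.Num();
    Super::ParseNodes(AudioDevice, NodeWaveInstanceHash, ActiveSound, ParseParams, WaveInstances);

    // Inspect only the wave instances produced by this node's children.
    for (int32 Index = FirstNewIndex; Index < WaveInstances.Num(); ++Index)
    {
        USoundWave* Wave = WaveInstances[Index]->WaveData;
        if (Wave != nullptr)
        {
            // Wave->RawData holds the imported (usually compressed) asset data,
            // not a per-callback sample buffer, so this is still not a place
            // where a filter can process samples.
            UE_LOG(LogTemp, Log, TEXT("Parsed wave: %s"), *Wave->GetName());
        }
    }
}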

All in all, what is the best practice for implementing an out-of-place audio filter in UE4? (My filter is mono/stereo in, stereo out, and it needs to be applied right before the reverb and attenuation processing.)
It would be nice if you could provide a code example.

Thanks

I'm in the same position, and it appears that you just can't do this in UE4. Sound files seem to be loaded straight into OpenAL buffers (on Linux), and then all the sound nodes do is coordinate the playback of those buffers and set OpenAL parameters (volume, pitch, etc.). There appears to be no platform-agnostic way of performing any DSP on the sound files.
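
To illustrate what "set OpenAL parameters" means at the platform layer, the calls involved are roughly at this level (plain OpenAL, not engine API; PlayBufferWithParams and its arguments are placeholder names):

#include <AL/al.h>

// Roughly what the platform audio layer does with an already-filled buffer:
// attach it to a source, set playback parameters, and start playback.
// No per-sample callback is exposed to the sound node graph at this level.
void PlayBufferWithParams(ALuint Source, ALuint Buffer, float Volume, float Pitch)
{
    alSourcei(Source, AL_BUFFER, static_cast<ALint>(Buffer));
    alSourcef(Source, AL_GAIN, Volume);   // volume decided by the node graph
    alSourcef(Source, AL_PITCH, Pitch);   // pitch decided by the node graph
    alSourcePlay(Source);
}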