How do I get the audio data of an input sound node?

Hi! I am trying to write a custom audio filter. I want to implement it as a SoundNode with one input and one output, so that I can use it in blueprints. As far as I have understood, the method USoundNode::ParseNodes evaluates all the child nodes connected to a sound node. Now my question is: how do I get the raw sound data of the input sound node?

Note:
The question below is very similar, but since it is already quite old and there was no helpful answer, I thought I’d ask again.

Using ParseNodes would be enough for simple effects, but for more complicated things you should look into FAudioEffectsManager (your node could control an effect). That's how the reverb and radio effects are done. Check FCoreAudioEffectsManager or FXAudio2EffectsManager for the actual implementations (they are platform-dependent).
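For context, here's a minimal sketch of what a ParseNodes-based "simple effect" looks like, written against the UE4 API as I remember it (USoundNodeMyFilter is a made-up name and the usual UCLASS boilerplate is trimmed). Note that it can only scale per-wave-instance parameters such as volume; it never sees a single sample:

```cpp
#include "Sound/SoundNode.h"
#include "ActiveSound.h"
#include "SoundNodeMyFilter.generated.h" // hypothetical generated header

UCLASS()
class USoundNodeMyFilter : public USoundNode
{
	GENERATED_BODY()

public:
	virtual void ParseNodes(FAudioDevice* AudioDevice, const UPTRINT NodeWaveInstanceHash,
		FActiveSound& ActiveSound, const FSoundParseParameters& ParseParams,
		TArray<FWaveInstance*>& WaveInstances) override
	{
		// ParseNodes only lets you adjust per-wave-instance parameters
		// (volume, pitch, and so on); it never exposes raw sample data.
		FSoundParseParameters UpdatedParams = ParseParams;
		UpdatedParams.Volume *= 0.5f; // placeholder "effect": attenuate by 6 dB

		// The base implementation forwards the (modified) parameters
		// to every connected child node.
		Super::ParseNodes(AudioDevice, NodeWaveInstanceHash, ActiveSound,
			UpdatedParams, WaveInstances);
	}
};
```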

Another way would be to add some special code where the PCM data is submitted into the system; this happens in FSoundSource::SubmitPCMBuffers. Again, each platform has its own implementation, FXAudio2SoundSource for example.
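To make that concrete, a hypothetical hook of that kind could look like the following. ApplyMyDSP is a made-up name; you'd call it on each PCM block inside FXAudio2SoundSource::SubmitPCMBuffers before the buffer is handed to XAudio2, and 16-bit signed interleaved PCM is assumed:

```cpp
// Hypothetical in-engine hook: call this on each PCM block inside
// FXAudio2SoundSource::SubmitPCMBuffers, before SubmitSourceBuffer runs.
// Assumes 16-bit signed interleaved PCM.
static void ApplyMyDSP(int16* Samples, int32 NumSamples)
{
	for (int32 Index = 0; Index < NumSamples; ++Index)
	{
		// Placeholder effect: simple 6 dB attenuation.
		Samples[Index] /= 2;
	}
}
```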

You would require a source build to be able to add your own effects into the engine, though.

Even for the simplest effect, I will need to process the actual samples of the child node's waveform. So does ParseNodes provide this functionality somehow?

The answer in the thread you linked is still valid: there is no real way to do that at the moment, which sort of makes sense if you think of how audio is handled on, say, a platform like the Xbox: you create buffers and handle them through the XAudio API, but at no point do you actually have access to them.

Hopefully the audio engine overhaul that has been announced will allow such processing, maybe by allowing some buffers to roundtrip through memory.

As @G4m4 said, for now it is not possible to do this in a node. You could, though, add your special code where the audio buffers are created from the PCM data within FSoundSource, but that does require engine modifications.

It’s not accurate to say that the reason you can’t access a voice’s audio stream for user effects/DSP is XAudio2. I implemented the Oculus Audio SDK integration (which required access to the audio stream of a voice) by creating a per-voice XAPO audio effect and attaching it to an IXAudio2Voice (when the necessary conditions are met). You could do something similar for general effects and call into a platform-independent interface (like I did with the spatialization plugin) that then calls into some user-processing callback, roughly as sketched below.
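For reference, the skeleton of such a per-voice XAPO looks roughly like this. It is written from memory against the public XAPO API, so treat it as a sketch rather than production code; FTapXAPO and the zeroed CLSID are placeholders:

```cpp
#include <xapobase.h> // link against xapobase.lib

// Hypothetical tap effect: exposes each voice's float sample stream
// so arbitrary user DSP can be run on it.
class FTapXAPO : public CXAPOBase
{
public:
	FTapXAPO() : CXAPOBase(&Registration), NumChannels(0) {}

	STDMETHOD(LockForProcess)(UINT32 InputLockedParameterCount,
		const XAPO_LOCKFORPROCESS_BUFFER_PARAMETERS* pInputLockedParameters,
		UINT32 OutputLockedParameterCount,
		const XAPO_LOCKFORPROCESS_BUFFER_PARAMETERS* pOutputLockedParameters) override
	{
		// Remember the stream format so Process() knows the channel count.
		NumChannels = pInputLockedParameters[0].pFormat->nChannels;
		return CXAPOBase::LockForProcess(InputLockedParameterCount, pInputLockedParameters,
			OutputLockedParameterCount, pOutputLockedParameters);
	}

	STDMETHOD_(void, Process)(UINT32, const XAPO_PROCESS_BUFFER_PARAMETERS* pInput,
		UINT32, XAPO_PROCESS_BUFFER_PARAMETERS* pOutput, BOOL IsEnabled) override
	{
		// XAudio2 effect chains run in 32-bit float; process in place.
		if (IsEnabled && pInput->BufferFlags == XAPO_BUFFER_VALID)
		{
			float* Samples = static_cast<float*>(pInput->pBuffer);
			const UINT32 NumSamples = pInput->ValidFrameCount * NumChannels;
			for (UINT32 i = 0; i < NumSamples; ++i)
			{
				Samples[i] *= 0.5f; // placeholder: call your user DSP callback here
			}
		}
		pOutput->BufferFlags = pInput->BufferFlags;
		pOutput->ValidFrameCount = pInput->ValidFrameCount;
	}

private:
	UINT32 NumChannels;
	static XAPO_REGISTRATION_PROPERTIES Registration;
};

XAPO_REGISTRATION_PROPERTIES FTapXAPO::Registration = {
	{0}, // CLSID: generate a real GUID for production use
	L"FTapXAPO", L"", 1, 0,
	XAPO_FLAG_CHANNELS_MUST_MATCH | XAPO_FLAG_FRAMERATE_MUST_MATCH |
	XAPO_FLAG_BITSPERSAMPLE_MUST_MATCH | XAPO_FLAG_BUFFERCOUNT_MUST_MATCH |
	XAPO_FLAG_INPLACE_SUPPORTED,
	1, 1, 1, 1
};
```

Attaching it to a source voice is then just a standard effect chain (SourceVoice assumed created elsewhere; release your own reference to the XAPO after SetEffectChain in real code):

```cpp
XAUDIO2_EFFECT_DESCRIPTOR Descriptor = { new FTapXAPO(), TRUE, 2 /* output channels */ };
XAUDIO2_EFFECT_CHAIN Chain = { 1, &Descriptor };
SourceVoice->SetEffectChain(&Chain);
```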

This just hasn’t been done yet because it’d be quite a bit of work, and I plan on doing something similar anyway in the audio engine rewrite soon. I haven’t worked on the rewrite for a while in favor of getting some much-needed leaf features into the engine for Paragon (e.g. a more robust concurrency system, a focus system, occlusion, and a number of other smaller things).