Can I use UE4's audio decompression without playing?

I need to feed PCM data, on demand, to a third party library for mixing. I then get back the mixed output and want to pass that to UE4 for playback.

The latter I'm doing with a procedural USoundWave subclass; I have no idea if that's a good approach, but it works.

Currently, though, I'm feeding the data manually from files, whereas I really need to be able to use imported sound wave assets as the source and have UE4 stream/decode them on request. Is this possible? Can someone outline the basic process for doing so? With no background in audio coding, I'm completely lost among all the device/buffer/source/active sound classes.

Essentially, I want to say to UE4: Here's a USoundWave, seek to this point, give me some PCM data, then give me some more PCM data, etc.
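That pull-style interface ("seek, then hand me PCM on demand") can be sketched in plain C++. This is only an illustration of the shape of the API being asked for; `PcmSource`, `Seek`, and `GetPcm` are hypothetical names, not UE4 API, and the "decode" step just emits silence as placeholder PCM:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical pull-style decoder: seek to a sample, then request PCM
// on demand, exactly the usage pattern described in the question.
class PcmSource {
public:
    explicit PcmSource(std::size_t TotalSamples) : Total(TotalSamples), Cursor(0) {}

    void Seek(std::size_t Sample) { Cursor = Sample < Total ? Sample : Total; }

    // Fill OutBuffer with up to MaxSamples of PCM; returns the number of
    // samples written (0 at end of stream). A real implementation would
    // run the codec here instead of writing zeros.
    std::size_t GetPcm(std::vector<float>& OutBuffer, std::size_t MaxSamples) {
        std::size_t Count = Total - Cursor;
        if (Count > MaxSamples) Count = MaxSamples;
        OutBuffer.assign(Count, 0.0f);  // placeholder: silence instead of decoded audio
        Cursor += Count;
        return Count;
    }

private:
    std::size_t Total;   // length of the (pretend) asset, in samples
    std::size_t Cursor;  // current read position
};
```

The consumer (here, the third-party mixer) would call `GetPcm` once per mix block until it returns 0.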

Product Version: UE 4.12

asked Jul 28 '16 at 04:25 PM in C++ Programming

kamrann

Minus_Kelvin STAFF Jul 28 '16 at 05:56 PM

PCM data is decoded and submitted to our audio buffers in realtime using the async decoders. Check out FAsyncRealtimeAudioTaskWorker and ERealtimeAudioTaskType::Decompress to see how we decode compressed audio data.

Unfortunately, it's difficult to describe in detail how the whole thing works from top to bottom, but I'll try to outline it:

1) A sound is requested to play.

2) If the sound is below a duration threshold (defined in a SoundGroup), the entire file is fully decoded into memory (or pulled from a cache of already-decoded audio that was decoded on map load).

3) If the sound is above the duration threshold, "real-time decompression" is performed on the loaded compressed asset.

4) The real-time decompression uses async workers (FAsyncRealtimeAudioTaskWorker) to decode portions of the compressed asset.

5) Depending on the platform, the decoded audio chunks are fed to the playing source voice as queued decoded chunks. In XAudio2, the voice itself performs a callback when a buffer finishes playing. In that callback, we consume a decoded buffer from the voice's async task worker, submit it to the XAudio2 voice, then kick off another async task to generate another buffer.
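The double-buffering pattern in the last step can be sketched in plain C++ with `std::async` standing in for the engine's async task workers. Everything here is a hypothetical stand-in (no UE4 or XAudio2 types): `DecodeChunk` fakes the codec, and `PlayAll` mirrors the loop where consuming one finished buffer immediately kicks off the decode of the next.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <future>
#include <vector>

// Stand-in for a compressed asset; a real decoder would hold codec state.
struct FakeCompressedAsset {
    std::size_t TotalSamples;
};

// Decode one chunk starting at Offset; returns fewer samples at the tail,
// and an empty vector once the stream is exhausted.
static std::vector<float> DecodeChunk(const FakeCompressedAsset& Asset,
                                      std::size_t Offset, std::size_t ChunkSize) {
    std::size_t Remaining = Asset.TotalSamples - Offset;
    std::size_t Count = Remaining < ChunkSize ? Remaining : ChunkSize;
    return std::vector<float>(Count, 0.0f);  // silence as placeholder PCM
}

// Double-buffered loop: while one decoded chunk is being consumed
// ("played"), the next chunk decodes on a background task; this mirrors
// the buffer-finished callback submitting a buffer and kicking off the
// next async decode. Returns total samples "played".
static std::size_t PlayAll(const FakeCompressedAsset& Asset, std::size_t ChunkSize) {
    std::size_t Offset = 0;
    std::size_t Played = 0;
    auto Pending = std::async(std::launch::async, DecodeChunk,
                              std::cref(Asset), Offset, ChunkSize);
    while (true) {
        std::vector<float> Chunk = Pending.get();   // consume finished buffer
        if (Chunk.empty()) break;                   // end of stream
        Offset += Chunk.size();
        Pending = std::async(std::launch::async, DecodeChunk,
                             std::cref(Asset), Offset, ChunkSize);  // next decode
        Played += Chunk.size();                     // "submit to the voice"
    }
    return Played;
}
```

The key design point is that decode latency is hidden: the next chunk is already in flight while the current one plays, which is why the engine can stream long assets without fully decoding them up front.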

Extracting that code to use in a different procedural sound wave system will not be an easy task but is doable.

kamrann Jul 29 '16 at 10:24 PM

A high-level outline was exactly what I wanted; I'm starting to understand the code a lot better now. Much appreciated!

Is the procedural USoundWave a reasonable approach for feeding the mixed output back into the UE4 audio system for playback? Currently I have a custom UAudioComponent class which references the source sound assets, takes care of setting up the procedural wave object, and then assigns it to its Sound property. If there's some flaw in that setup I'm overlooking, please let me know. Regardless, thanks for your time.

Minus_Kelvin STAFF Aug 01 '16 at 06:01 PM

The only thing I can think of is that the procedural sound wave is not going to be able to mix to a surround sound mix. I haven't tried it, but I think you can make a 2D stereo procedural sound; you will have trouble if you want to do a surround sound procedural sound.

Minus_Kelvin STAFF Aug 01 '16 at 06:05 PM

Incidentally, this will be significantly easier with the audio mixer module I am currently working on (in a dev stream not yet visible to the public). The audio mixer module will perform all mixing in platform-independent code and implement a much lower-level device interface, somewhat similar to one massive procedural sound that feeds an N-channel output audio stream directly to the hardware audio device. You could just implement your wrapper around the third-party mixer as an implementation of the audio mixer interface. I'm hoping to ship an early preview version of the audio mixer in 4.14 (I missed the 4.13 cutoff).

0 answers