Audio Configuration (stereo, 5.1 surround, etc.)

I’ve been trying to track down how to determine the current audio configuration, to no avail. How do you determine, in C++, whether the current audio config is stereo, 5.1 surround, 7.1 surround, etc.?

Surely there must be some platform-independent way to query the engine for its current setup?

FAudioMixerDevice::GetNumDeviceChannels(). Looks like that’s not exported, so you won’t be able to call it from outside the Engine module. It should be easy to export, though.

Alternatively, you could create an ISubmixBufferListener implementation and register it with the master submix. You’ll then get a callback for each new master submix buffer, which includes NumChannels. It also gives you the entire audio buffer to do with as you will, including mixing into it (e.g. if you want to tap the entire audio engine output, you could do it there). Notably, you can do this on ANY submix, by the way.

As for what it uses:

Console: always renders to surround sound (7.1). The hardware does the down-mixing if the user is configured for stereo (hardware down-mixing is preferable where available).

Linux: always renders to 5.1 (the maximum supported by SDL2, which also does the down-mix).

PC: uses whatever the OS default audio device is using. This is hooked up via an IMMNotificationClient listener.

Android/iOS/Switch/Mac: uses stereo.

If you’re working with submixes, the submix effect callback always indicates the channel count.

Hopefully that helps! I’d be more than happy to answer any other questions about the C++ code.

Thanks for the detailed answer! I was able to hack around the no-export issue. Might I request that GetNumDeviceChannels be properly exported in a future release? It would make this, uhm, saner.

Current approach:

// Hack: expose the engine's private/protected members so we can read
// FMixerDevice::PlatformInfo without modifying the engine.
#define private public
#define protected public
#include "AudioDeviceManager.h"
#include "AudioMixer.h"
#include "AudioMixerDevice.h"
#undef private
#undef protected

static int GetNumSpeakers()
{
	FAudioDevice* dev = FAudioDevice::GetMainAudioDevice();
	if (dev && dev->IsAudioMixerEnabled())
	{
		Audio::FMixerDevice* mix = static_cast<Audio::FMixerDevice*>(dev);
		//return mix->GetNumDeviceChannels(); // Not exported...
		return mix->PlatformInfo.NumChannels;
	}
	return 2; // Unknown: assume a stereo (2-speaker) setup.
}

Ha! If you can build the engine, you could just export it locally.

I’ll check this in exported for the next engine version, but that’ll be a bit.

Curious what you’re doing where the output channel count matters but you’re not writing a submix effect (where the channel count is provided)?

This is for Bink. It needs to work with no engine modifications, unfortunately. As for why I need this: I need to tell Bink how many speakers UE4 is set up to use so that video playback can match the engine.

Interesting.

I’d probably implement this either as a submix effect (i.e. decode the Bink audio and feed it into the submix buffer; as I said, submix effects have access to the channel count) or as a USynthComponent (which is always either mono or stereo at the moment, and you define which it is).

Have you checked out the MediaSoundComponent? It’s part of our media framework, and I’d assume a Bink implementation could take a similar approach. Note: don’t take this advice ultra-literally (i.e. don’t copy the media sound component verbatim). Our MediaSoundComponent has issues that our media framework team is resolving with recent work on a new streaming media decoder (i.e. for on-the-fly cloud streaming). Notably, the media sound component does an unnecessary sample-rate conversion (the audio mixer already does an SRC for you if you tell it the sample rate of the synth).

All our recent Fortnite events that use media streaming (and all our Fortnite trailers) use the media sound component that’s in there now.

E.g. Fortnite Party Royale Premiere FULL Concert (Deadmau5, Steve Aoki, Dillon Francis) - YouTube

The significant benefit of implementing decoded audio playback as a USynthComponent (which is a wrapper around an audio component and a USoundWaveProcedural) is that you can play the decoded audio like any other sound source in the world, i.e. spatialize it, occlude it, apply DSP effects, etc. The SynthComponent is just another sound source. We’ve taken advantage of this many times recently in Fortnite, so I highly recommend it for a general Bink decoder.

Also – are you doing this for RAD or for a game? If for a game, have you checked out our out-of-the-box media framework? If for RAD, we should talk! I’d love to connect you with our media framework peeps.

Has this feature been added as a Blueprint function yet? More specifically, is eight audio output channels the maximum on Windows with Unreal Engine? I see that UnrealAudioTypes.h (under UE 4.26.1) defines a number of speaker labels, yet when I create a virtual audio card with 16 channels, the WAV file produced by Start/Finish Recording Output on ‘Submix Main’ contains only eight channels, configured as 7.1. When I change the virtual card to four channels, the WAV file contains the correct number of channels (four). Is there any way to activate all 18 speaker positions identified below (I can create up to 32 channels)?

namespace ESpeaker
{
	/** Values that represent speaker types. */
	enum Type
	{
		FRONT_LEFT,
		FRONT_RIGHT,
		FRONT_CENTER,
		LOW_FREQUENCY,
		BACK_LEFT,
		BACK_RIGHT,
		FRONT_LEFT_OF_CENTER,
		FRONT_RIGHT_OF_CENTER,
		BACK_CENTER,
		SIDE_LEFT,
		SIDE_RIGHT,
		TOP_CENTER,
		TOP_FRONT_LEFT,
		TOP_FRONT_CENTER,
		TOP_FRONT_RIGHT,
		TOP_BACK_LEFT,
		TOP_BACK_CENTER,
		TOP_BACK_RIGHT,
		UNUSED,
		SPEAKER_TYPE_COUNT
	};
}