AudioMixerSource doesn't work with Media Framework

AudioMixerSource seems to be missing proper support for procedural USoundWaves such as UMediaSoundWave, which is used by the Media Framework. The user experience is no sound on any platform. That's neat. FIOSAudioSoundSource also lacks this functionality, which means there's currently no working AudioDevice on iOS - a shame, because I just went to the trouble of fixing the Media Framework on iOS.

The audio mixer does support procedural sound waves; the synthesis features are built on them.

It's possible there's a mishandled edge case between the audio mixer, procedural sound waves, and the Media Framework. Also, the Media Framework uses the buffer-queuing mechanism, whereas the preferred approach is now the callback method.
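
For reference, here's a rough sketch of those two feeding models, assuming the 4.16/4.17-era USoundWaveProcedural API (QueueAudio for buffer queuing, the underflow delegate for the callback/pull model); the function names are just for illustration:

	// Rough sketch, not engine code: the two ways a procedural sound wave can be fed,
	// assuming the 4.16/4.17-era USoundWaveProcedural API.
	#include "Sound/SoundWaveProcedural.h"

	// Buffer-queuing model (what Media Framework uses): the decoder pushes PCM into the
	// wave's internal queue whenever it has data, and the renderer drains it later.
	void FeedProceduralWave(USoundWaveProcedural* Wave, const uint8* DecodedPCM, int32 NumBytes)
	{
		Wave->QueueAudio(DecodedPCM, NumBytes);
	}

	// Callback (pull) model: the renderer asks for samples when it runs low, so the
	// producer can't silently outrun or starve the queue.
	void BindUnderflowCallback(USoundWaveProcedural* Wave)
	{
		Wave->OnSoundWaveProceduralUnderflow.BindLambda(
			[](USoundWaveProcedural* InWave, int32 SamplesNeeded)
			{
				// Generate (or copy) SamplesNeeded worth of PCM and call InWave->QueueAudio here.
			});
	}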

In any case, Media Framework is currently getting reworked to use the new synth component anyway so this should be resolved.

I'm not saying I know what the fix is, just that it's definitely possible there's an issue. I'd guess it's probably not trivial, as the old Media Framework audio was not implemented in a flexible way (i.e. there are lots of platform-specific implementation issues for audio and a large number of platform parity issues).

I’d have to dig deeply into Media Framework’s audio code and I’m not the person who implemented Media Framework.

If you want to dig in yourself, so you're not bottlenecked by Max Preussner (the Media Framework programmer) or myself, you can trace through the procedural sound wave code in the audio mixer in the following areas:

In FMixerSource::ReadMorePCMRTData, there’s code which creates a new sound wave procedural audio task.

	bool FMixerSource::ReadMorePCMRTData(const int32 BufferIndex, EBufferReadMode BufferReadMode, bool* OutLooped)
	{
		USoundWave* WaveData = WaveInstance->WaveData;

		if (WaveData && WaveData->bProcedural)
		{
			const int32 MaxSamples = (MONO_PCM_BUFFER_SIZE * Buffer->NumChannels) / sizeof(int16);

			if (BufferReadMode == EBufferReadMode::Synchronous || WaveData->bCanProcessAsync == false)
			{
				const int32 BytesWritten = WaveData->GeneratePCMData(SourceVoiceBuffers[BufferIndex]->AudioData.GetData(), MaxSamples);
				SourceVoiceBuffers[BufferIndex]->AudioBytes = BytesWritten;
			}
			else
			{
				FProceduralAudioTaskData NewTaskData;
				NewTaskData.ProceduralSoundWave = Cast<USoundWaveProcedural>(WaveData);
				NewTaskData.AudioData = SourceVoiceBuffers[BufferIndex]->AudioData.GetData();
				NewTaskData.MaxAudioDataSamples = MaxSamples;

				check(!AsyncRealtimeAudioTask);
				AsyncRealtimeAudioTask = CreateAudioTask(NewTaskData);
			}

			// Not looping
			return false;
		}

The task implementation is in AudioMixerSourceDecode.cpp, which basically calls GeneratePCMData in its DoWork() function:

		switch (TaskType)
		{
			case EAudioTaskType::Procedural:
			{
				ProceduralResult.NumBytesWritten = ProceduralTaskData.ProceduralSoundWave->GeneratePCMData(ProceduralTaskData.AudioData, ProceduralTaskData.MaxAudioDataSamples);
			}
			break;

At this point, it calls into MediaSoundWave.cpp's own override of this function: int32 UMediaSoundWave::GeneratePCMData(uint8* Data, const int32 SamplesRequested).
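
For context, a procedural GeneratePCMData override typically just drains a queue of decoded samples that the decoder thread fills. The sketch below is purely illustrative (the class, queue member and lock are made up; it is not the actual UMediaSoundWave implementation); the relevant point is that persistently returning 0 from here means silence.

	// Illustrative only - not UMediaSoundWave's code. UMyProceduralWave, QueuedAudio and
	// QueueCriticalSection are hypothetical.
	int32 UMyProceduralWave::GeneratePCMData(uint8* PCMData, const int32 SamplesNeeded)
	{
		FScopeLock Lock(&QueueCriticalSection);

		// Drain up to SamplesNeeded 16-bit samples from the queue the decoder fills.
		const int32 BytesRequested = SamplesNeeded * sizeof(int16);
		const int32 BytesToCopy = FMath::Min(BytesRequested, QueuedAudio.Num());
		if (BytesToCopy > 0)
		{
			FMemory::Memcpy(PCMData, QueuedAudio.GetData(), BytesToCopy);
			QueuedAudio.RemoveAt(0, BytesToCopy, false);
		}

		// Number of bytes written; if this is always 0, the source plays silence.
		return BytesToCopy;
	}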

I’m not familiar with the code here and whether there’s anything different here than the old audio engine.

I'll ping Max Preussner, who is the owner of UMediaSoundWave, to see if he has any advice for you. His plan, as far as I'm aware, is to remove UMediaSoundWave entirely (i.e. no UAsset for this) and instead internally create a component which inherits from USynthComponent and simply feeds decoded audio data to it. He might have CLs you could grab from our head revision on GitHub.

GeneratePCMData is called, but the buffers are never submitted. I don't think the issue is with MediaSoundWave - it works with all the other XXXAudioSources. It should be trivial for you to test, tbh - the problem occurs on Windows.

Why are the buffers never submitted?

It's not obvious to me why. It seems to be stuck in the initialization phase. Of course, that's why I asked you to test it - it seems very strange that you've never done that, tbh.
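
For whoever picks this up: one quick (purely hypothetical) way to narrow it down is a temporary log around the GeneratePCMData call in the synchronous branch quoted above, for example:

	// Temporary diagnostic, not engine code: confirm whether the procedural wave actually
	// produces bytes when the mixer asks for them.
	const int32 BytesWritten = WaveData->GeneratePCMData(SourceVoiceBuffers[BufferIndex]->AudioData.GetData(), MaxSamples);
	UE_LOG(LogTemp, Log, TEXT("Procedural wave %s generated %d bytes"), *WaveData->GetName(), BytesWritten);
	SourceVoiceBuffers[BufferIndex]->AudioBytes = BytesWritten;

If it logs non-zero byte counts but there's still no sound, the problem is downstream in buffer submission; if it never fires, the source never gets past initialization.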

The audio mixer is still labeled as "experimental" because it hasn't been fully tested in all possible use cases or on all platforms.

For 4.17/4.18, the focus has been on back-end support (getting it working on all our platforms: PS4, XboxOne, Mac, iOS, Android, Switch, Linux, HTML5).

The Media Framework is not a system that I own or work on, I'm not totally familiar with it, and, as I said, it's being completely refactored by the system owner to use synth components.

Apologies for the inconvenience, but since you're clearly blocked by this issue and seem to have the technical skills, I was hoping you might be able to help me out. Otherwise, you'll have to wait for me to look at it, which may be a bottleneck for you.

Hi Marksatt-pitbull,

Yes, that's true. I've sent the information on how to fix that to Jack Porter. The trick is to use MTAudioProcessingTap.
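
For anyone else following along, the rough shape of that trick, heavily abridged and using only the C API, looks something like the snippet below. This is not the code from the linked fork, and attaching the tap to the AVPlayerItem's audio mix (via AVMutableAudioMixInputParameters) is Objective-C and omitted here:

	#include <MediaToolbox/MTAudioProcessingTap.h>

	// Called by Core Media for each render quantum: pull the decoded PCM out of the
	// player's pipeline so it can be forwarded into the engine's audio path.
	static void TapProcess(MTAudioProcessingTapRef Tap, CMItemCount NumberFrames,
		MTAudioProcessingTapFlags Flags, AudioBufferList* BufferListInOut,
		CMItemCount* NumberFramesOut, MTAudioProcessingTapFlags* FlagsOut)
	{
		MTAudioProcessingTapGetSourceAudio(Tap, NumberFrames, BufferListInOut, FlagsOut, nullptr, NumberFramesOut);
		// BufferListInOut now holds the decoded samples; hand them to the sound wave /
		// synth component instead of (or as well as) letting AVFoundation render them.
	}

	static MTAudioProcessingTapRef CreateAudioTap()
	{
		MTAudioProcessingTapCallbacks Callbacks = {};
		Callbacks.version = kMTAudioProcessingTapCallbacksVersion_0;
		Callbacks.process = TapProcess;

		MTAudioProcessingTapRef Tap = nullptr;
		MTAudioProcessingTapCreate(kCFAllocatorDefault, &Callbacks, kMTAudioProcessingTapCreationFlag_PostEffects, &Tap);
		return Tap;
	}

The tap's process callback runs on Core Media's audio thread, so whatever consumes the samples on the engine side needs to be thread-safe.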

"At the time the old iOS audio code couldn't consume the data anyway so I passed."
Um, it still can't, hence this discussion. No need to promise - if you just fix the audio mixer I'd be more than grateful. I should also mention that the linked fork also removes the limitation of the Media Framework not being able to play remote URLs on Mac or iOS.

The audio mixer is not broken here, as I said. Procedural sound waves work in the audio mixer: half of my GDC presentation used them.

The issue is the media framework’s use of it, which I said is currently being reworked by the Media Framework system owner.

Um? Lol, Ok. Let me reword: “if you make the audio mixer work correctly with the media framework in 4.17, I’ll be grateful”.

Let me clarify: our real requirement is working 3D audio with the Media Framework on iOS. Getting there involves fixing the Media Framework - which we wouldn't expect to happen in 4.17, so we've done it in our own fork - and also fixing the audio device. Since it seems pointless to invest time in IOSAudioSource, we arrived at suggesting fixing the audio mixer. Obviously we can and will eventually get something working ourselves, but I don't think it's unreasonable to expect the audio mixer to already (i.e. in 4.17) work with the Media Framework - in spite of minus_kelvin's remarks.

Any chance you could point out the fix in the meantime? 4.18 is a ways off. Thanks!

AVFoundation on iOS makes piping decoded audio through our pipeline tricky, so it wasn't implemented last year when the Media Framework was reworked atop AVFoundation. So, as far as I am aware, movie audio is still played directly to the hardware by AVFoundation on iOS.

Yes, I was aware of MTAudioProcessingTap and contemplated hijacking it like this. At the time the old iOS audio code couldn’t consume the data anyway so I passed. On Mac the need to support scrubbing, fast-forward, reverse and some other problems I don’t recall made me take another approach. The UE infrastructure has changed since then so forwarding this on to us for reevaluation as you have done was the right thing to do. I can’t promise anything though.

The next release of Media Framework will have support for the new Audio Mixer API. The MediaSoundWave asset has been removed. Instead there will be a MediaSoundComponent that can be attached to actors and UMG widgets. It derives from USynthComponent.

The code for this is not yet in the Main branch and hence not on Github. All development is currently happening in the Dev-Sequencer stream. We are planning to push it up to Main later this month.
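
For readers who haven't used USynthComponent yet, the general shape of such a component is roughly the following. This is a minimal sketch, not the real MediaSoundComponent: the class name, the queueing method, and the exact OnGenerateAudio signature (which has changed between engine versions) are all assumptions.

	#include "CoreMinimal.h"
	#include "Components/SynthComponent.h"
	#include "MyMediaSoundComponent.generated.h"

	UCLASS()
	class UMyMediaSoundComponent : public USynthComponent
	{
		GENERATED_BODY()

	public:
		// Called by the media decoder whenever a new block of float PCM is available.
		void QueueDecodedAudio(const float* Samples, int32 NumSamples)
		{
			FScopeLock Lock(&BufferLock);
			PendingAudio.Append(Samples, NumSamples);
		}

	protected:
		// Pull model: the audio mixer asks for NumSamples whenever it needs them.
		// NOTE: the signature differs between engine versions (void in some 4.x releases).
		virtual int32 OnGenerateAudio(float* OutAudio, int32 NumSamples) override
		{
			FScopeLock Lock(&BufferLock);
			const int32 NumToCopy = FMath::Min(NumSamples, PendingAudio.Num());
			FMemory::Memcpy(OutAudio, PendingAudio.GetData(), NumToCopy * sizeof(float));
			PendingAudio.RemoveAt(0, NumToCopy, false);

			// Zero-fill on underrun so we get silence rather than garbage.
			FMemory::Memzero(OutAudio + NumToCopy, (NumSamples - NumToCopy) * sizeof(float));
			return NumSamples;
		}

	private:
		FCriticalSection BufferLock;
		TArray<float> PendingAudio;
	};

The decoder thread pushes via QueueDecodedAudio and the mixer pulls via OnGenerateAudio, which is the callback model mentioned earlier in the thread.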

Hi gmpreussner,

Thanks for that information.

I also sent Jack Porter a link to a fix for Media Framework audio on Android to look at (maybe he's forwarded it to you?). It requires tracking the output position of the audio in order to sync with the video. Do you think we could get that hook added to MediaSoundComponent? E.g. in our fork I added

int64 GetPlayheadPosition()

to MediaSoundWave, which returns the number of frames that have been played since the last flush.
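
Roughly, it amounts to something like this (purely illustrative, not the fork's actual code): a thread-safe frame counter that is bumped wherever decoded audio is actually handed to the output, and cleared on a flush or seek.

	// Illustrative sketch: count output frames for A/V sync. NumChannels and the place
	// this is called from (the audio generation callback) are assumptions, not engine API.
	#include <atomic>

	class FPlayheadTracker
	{
	public:
		void OnAudioRendered(int32 NumSamples, int32 NumChannels)
		{
			FramesPlayed.fetch_add(NumSamples / NumChannels, std::memory_order_relaxed);
		}

		void OnFlush()
		{
			FramesPlayed.store(0, std::memory_order_relaxed);
		}

		// Frames played since the last flush; the video side can convert this to a
		// timestamp via the sample rate and drop/repeat frames to stay in sync.
		int64 GetPlayheadPosition() const
		{
			return FramesPlayed.load(std::memory_order_relaxed);
		}

	private:
		std::atomic<int64> FramesPlayed { 0 };
	};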

JackP is not the right person for that. Babcock owns the Android player. He already had a look at MediaPlayerExtended, and we’ll integrate it. Not sure if it will happen for 4.18 though.

I would like to merge your changes in this Github commit to make audio work on iOS. Are those all the changes that are needed, or are there more? I am only interested in the MTAudioTap stuff for now, but some of your other changes look interesting, too. I’ll look at those later.

Also, would you be able to submit a pull request that contains only the MTAudioTap related changes? If I can see just those changes, I might be able to get it in for 4.18. Thanks!

Hi gmpreussner,
I had to hack the audio mixer a bit as well - the buffer sizes didn't match in some cases, it didn't handle deactivating and reactivating the app (neither did the Media Framework), and, as mentioned elsewhere in this thread, it was hardcoded for USoundWaveProcedural. I'm not sure a clean PR can be made at this time (i.e. for iOS and Mac). If you only care about Mac and don't care about the audio mixer, then I think you just need the changes in AvfMedia.

However, tbh, the Media Framework shortcomings on Mac are the least of the problems there. The editor is nearly unusable due to instability (it crashes several times a day for me - Slate and cooking bugs) and builds take forever due to dSYM generation (5 to 10 minutes after changing a single line of code). We only use the Mac to deploy to iOS, so I can't spend time dealing with the myriad issues there.
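
On the "hardcoded for USoundWaveProcedural" point: presumably the Cast<USoundWaveProcedural> in FMixerSource::ReadMorePCMRTData (quoted earlier) comes back null for a procedural wave that only derives from USoundWave, so the async task has nothing to call. A sketch of the kind of change involved (not necessarily what our fork does) is to carry the base USoundWave through the task data, since GeneratePCMData is a USoundWave virtual:

	// Sketch only (not the fork's actual change): widen the task data so any procedural
	// USoundWave works; the derived USoundWaveProcedural type isn't actually needed here.
	struct FProceduralAudioTaskData
	{
		USoundWave* ProceduralSoundWave = nullptr;   // was USoundWaveProcedural*
		uint8* AudioData = nullptr;
		int32 MaxAudioDataSamples = 0;
	};

	// ...and in FMixerSource::ReadMorePCMRTData:
	// NewTaskData.ProceduralSoundWave = WaveData;   // instead of Cast<USoundWaveProcedural>(WaveData)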

Oh, I also had to hack AvfMediaPlayer a bit to get it to not die on remote URLs.