Direct the audio output of a Web Browser

In 4.13.1, 4.14.1, and 4.15 (Preview 4), I created a Web Browser widget and placed it on a Blueprint actor. I am aware that both the Web Browser and the 3D widget are experimental features, but I would really like to know whether there is a way to route the Web Browser's audio output into an audio component, thus making the Web Browser's audio spatialized.

Through researching this subject, I have learned that the CEF (Chromium Embedded Framework) sends its audio output stream to Windows' default sound device. If that audio could be redirected into an audio component, it would fix this issue as well as the lack of attenuation.

Would I need to remove the current CEF implementation, download CEF myself, turn it into a plugin hooked up to the Web Browser's functionality, and then change the way CEF outputs audio in order to redirect it to an audio component? Looking through CEF's Bitbucket, I found a pull request for audio support: Bitbucket. However, I do not know whether that work amounted to anything.

As it stands, I only have access to the header files that CEF provides, and none of the functions I’ve searched through provide any means of changing audio output. Any help is appreciated. :slight_smile:

That’s a cool idea… I’m the audio programmer so I don’t really know how the Web Browser Widget works… in fact, didn’t even know that was a thing.

Through researching this subject, I am aware that the CEF (Chromium Embedded Framework) uses Window's default sound device as its audio output stream, which if the audio is re-directed to an audio component this would fix that issue as well as the lack of attenuation.

So if this is implemented by Chromium, and not us, it's up to that API to expose the raw PCM output. Googling, it looks like it's not supported:

http://www.magpcss.org/ceforum/viewtopic.php?f=6&t=14727

It does look like you can select a non-default output device but that won’t help you.

So, assuming you figure out how to get the output PCM data from the browser widget, you'd then need to feed it to a procedural sound wave (USoundWaveProcedural). This is a bit tricky to set up and a lot of people have had issues getting it to work. It totally DOES work, it's just an advanced feature.
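To make the shape of that data concrete: QueueAudio on a procedural sound wave takes a raw buffer of interleaved 16-bit signed PCM. Here's a minimal, self-contained sketch of building such a buffer (a sine tone standing in for whatever the browser would produce); `MakeSinePcm` is a hypothetical helper, not engine API, and the engine-side calls are shown only as hedged comments since I haven't tested them here.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical helper: fill a buffer with interleaved 16-bit signed PCM
// samples of a sine tone -- the sample format USoundWaveProcedural expects.
std::vector<int16_t> MakeSinePcm(double FreqHz, double Seconds,
                                 int32_t SampleRate = 44100,
                                 int32_t NumChannels = 1)
{
    const double TwoPi = 6.283185307179586;
    const int32_t NumFrames = static_cast<int32_t>(Seconds * SampleRate);
    std::vector<int16_t> Pcm(static_cast<size_t>(NumFrames) * NumChannels);
    for (int32_t Frame = 0; Frame < NumFrames; ++Frame)
    {
        const double Phase = TwoPi * FreqHz * Frame / SampleRate;
        const int16_t Sample = static_cast<int16_t>(32767.0 * std::sin(Phase));
        // Interleave: every channel of a frame gets the same sample here.
        for (int32_t Ch = 0; Ch < NumChannels; ++Ch)
            Pcm[static_cast<size_t>(Frame) * NumChannels + Ch] = Sample;
    }
    return Pcm;
}

// Inside the engine you would then do something like (untested sketch;
// exact setters vary between engine versions):
//   USoundWaveProcedural* Wave = NewObject<USoundWaveProcedural>();
//   Wave->NumChannels = 1;
//   Wave->QueueAudio(reinterpret_cast<const uint8*>(Pcm.data()),
//                    Pcm.size() * sizeof(int16_t));
//   AudioComponent->SetSound(Wave);   // a UAudioComponent on your actor
//   AudioComponent->Play();           // now spatialized/attenuated as usual
```

Because the sound plays through a regular audio component, attenuation and spatialization settings apply to it like any other sound.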

For GDC, I’ll be demoing a new “synth component” object which basically handles the ugly details of dealing with a procedural sound wave and lets you just do really cool stuff with raw PCM data very easily. We’ve been building really cool real-time synthesizers with it.

I know this isn’t too helpful :slight_smile:

Are you the suited-up guy from the stream? I'm lighting candles every day for you until the remake of the audio system is done and extended. As a home-studio audio producer, I'm hoping to someday release a full album (maybe just some songs if I can't handle the 3D work) as a showcase made in Unreal. For now some experiments in the old engine are enough, but I have a lot of hype for your endeavour. I'm also really interested in being able to load "radio" streams from the internet and play with the data. Anyhow, thanks, and I hope to hear very big news from you.

haha. thanks man!

I too am interested in using UE4 for non-game purposes :smiley:

I also need to be able to do this for my game. Looks like I may have to use Coherent instead :confused:

Hey, hope I’m not too late to the party but I’m wondering if anything has been added in the meantime that could allow doing something like this.

btw, love playing with the synth plugins so far! I made a program with a playable synthesizer and a web browser for playing reference videos. I was hoping to somehow get a spectrogram of the audio from the browser to visualise the frequencies being played, hence the question :slight_smile:
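If you ever do get your hands on the raw PCM, one column of that spectrogram is just the magnitude spectrum of a short analysis window. A minimal, self-contained sketch (a naive DFT for clarity; a real implementation would use an FFT library and a Hann window; `DftMagnitudes` is an illustrative name, not engine API):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Illustrative sketch: magnitude spectrum of one window of 16-bit PCM,
// i.e. one column of a spectrogram. O(N^2) naive DFT, for clarity only.
std::vector<double> DftMagnitudes(const std::vector<int16_t>& Window)
{
    const size_t N = Window.size();
    const double TwoPi = 6.283185307179586;
    std::vector<double> Mags(N / 2); // bins from DC up to Nyquist
    for (size_t k = 0; k < N / 2; ++k)
    {
        double Re = 0.0, Im = 0.0;
        for (size_t n = 0; n < N; ++n)
        {
            const double Angle = TwoPi * static_cast<double>(k) * n / N;
            Re += Window[n] * std::cos(Angle);
            Im -= Window[n] * std::sin(Angle);
        }
        Mags[k] = std::sqrt(Re * Re + Im * Im);
    }
    return Mags;
}

// Bin k corresponds to frequency k * SampleRate / N. Sliding the window
// across the queued PCM and stacking the columns gives the spectrogram.
```

Sliding this over successive chunks of the browser audio (once it's accessible) and drawing each column as a vertical strip would give the frequency display you're after.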