How to Use UE4 to Implement a 3D Panorama Player on Gear VR Devices

Hello everyone,

I have used UE4 4.12 to implement a VR panorama player on Gear VR.

  1. Firstly, I drew a sphere in UE4.
  2. Secondly, I mapped the image to the sphere.
  3. Then UE4 handled all the rest of the work. As a result, I don’t know any details about how UE4 implements the split-screen function and displays the image on the screen of the Android device.

Now I want to turn this player into a stereoscopic 3D panorama player. One idea is to place two spheres in UE4 and map the two stereo images onto them, one image per sphere. Then I would place a camera at the center of each sphere and display a different image in each half of the screen. But because UE4 splits and displays the screen automatically, I don’t know how to implement this idea. Could anyone give me some suggestions?

Thank you very much!

A screenshot of my panorama player running on the Galaxy S7 is shown below.

[Screenshot: 114352-screenshot_20161109-210805_result.png]

@Answers.Archive Can you give me some suggestions? Thanks.

Just getting into this myself, here’s where I’m at, which is an amalgam of a few different resources I’ve found.

I’m taking my left and right images and putting them into a single image, with the left across the top half and the right across the bottom, like this: https://code.blender.org/wp-content/uploads/2015/03/gooseberry_benchmark_panorama.jpg

In a material blueprint, the ScreenPosition node gives you the coordinates of each pixel, from (0,0) in one corner to (1,1) in the opposite corner. That means you can test whether a pixel is on the left or right side of the screen by checking if its ScreenPosition x value is greater than or less than 0.5.
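In case it helps, here’s that test written out as HLSL. This is only a sketch of what the node network computes, not a drop-in node; `screenUV` is a made-up name standing in for the ScreenPosition output.

```hlsl
// screenUV: stand-in for the ScreenPosition output, (0,0) to (1,1) across the screen.
// step() returns 0 when the pixel is on the left half, 1 when it's on the right half.
float isRightEye = step(0.5, screenUV.x);
```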

You can then do some tricks with the UVs of your sphere mesh. Assuming they’re in a regular spherical 0–1 range, one branch takes the top half of the texture and scales it out to the full UV range, and another branch does the same with the bottom half.
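In HLSL terms, each branch is just a scale and an offset on the sphere’s V coordinate. Again a sketch with assumed names: `sphereUV` stands in for the mesh’s texture coordinates.

```hlsl
// sphereUV: the sphere mesh's regular 0-1 UVs.
// Top half of the stacked texture (left eye): V remapped into [0, 0.5].
float2 topHalfUV    = float2(sphereUV.x, sphereUV.y * 0.5);
// Bottom half (right eye): V remapped into [0.5, 1].
float2 bottomHalfUV = float2(sphereUV.x, sphereUV.y * 0.5 + 0.5);
```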

Armed with those two things, you get the attached network: if the pixel is on the left side of the screen, take the top half of the image and expand it to the full size of the sphere; if it’s on the right side, take the bottom half and expand it the same way.
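Putting it together, the whole network boils down to something like the function below. This is a minimal HLSL sketch of the same logic, with made-up names (`SampleStereoPano`, `PanoTex`, `PanoSampler`); the actual setup is the node graph described above, where the lerp is just the select driven by the ScreenPosition test.

```hlsl
// Stacked stereo panorama: left eye across the top half of PanoTex,
// right eye across the bottom half, as in the Blender sample image above.
float4 SampleStereoPano(Texture2D PanoTex, SamplerState PanoSampler,
                        float2 screenUV, float2 sphereUV)
{
    // 0 = left half of the screen (left eye), 1 = right half (right eye).
    float isRightEye = step(0.5, screenUV.x);

    // Remap the sphere's full 0-1 V range into the matching texture half.
    float2 leftUV  = float2(sphereUV.x, sphereUV.y * 0.5);        // top half
    float2 rightUV = float2(sphereUV.x, sphereUV.y * 0.5 + 0.5);  // bottom half

    return PanoTex.Sample(PanoSampler, lerp(leftUV, rightUV, isRightEye));
}
```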

Not sure if this is the most efficient way to do this, but it’s working well for me so far.

I’m using a very similar system for my material setup. What I’d like to know, @mestela, is what exactly you are mapping with that material. In my experiments I’m having difficulty getting high-resolution textures mapped onto a cube: UE4 keeps mipping the textures even when I force it not to, and I can’t disable mips entirely because then they sample poorly.
I’m developing for the Gear VR platform and encoding textures as ASTC.

We’re using latlong images on a sphere rather than cube mapping, working well enough so far. It’s on my to-do list to try a cube, but there’s a bunch of other things higher on that same list. :)

That’s a good idea! Thanks!