360 degree stereo pre-render needed

Hello community! We have a need that is apparently a somewhat special request, and I have been directed to ask the question here. Let me start by explaining what we ultimately need: we would like to pre-render and export, as an image sequence, one of our OVR experiences so that it works in a 360 degree stereoscopic view. In other words, two sequences of our full environment and experience, offset by roughly 70 mm.

We have experimented with the old-school guerrilla way of doing this, i.e. taking 6 or more cameras at 90 degree (or smaller) intervals, capturing the experience, stitching it back together as a single piece of video, and then doing that for each eye. We are new to UE, so the answer may be easy and immediate, but the problem is that our shot is very FX-heavy, and currently all of the effects happen in a random manner (rain, sparks, fire, shatters, etc.). So capturing each eye and each angle is going to produce random results, which will make for a murky stereo piece. If there is a way to bake down the FX so that they play back in exactly the same manner each time, that would be fantastic. Or if there were some way to enable a pseudo-random seed, like the type that exists in 3ds Max's pflow, that could also work. Or any other ideas or thoughts you folks have on this are appreciated.
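
For what it's worth, here is a minimal sketch of the seeding idea, assuming the randomness in question ultimately comes from FMath's shared generators or from FRandomStream members you control (Cascade particle emitters typically own their own streams, so those would need a fixed "Random Seed" set per emitter instead). Names like GCaptureSeed and SeedDeterministicPlayback are just placeholders:

```cpp
// Minimal sketch of the seeding idea: re-seed the engine's shared random
// generators before playback starts so anything that draws from them repeats
// identically on every capture pass. NOTE: Cascade emitters typically keep
// their own streams, so those may also need a fixed "Random Seed" per emitter.
#include "Math/UnrealMathUtility.h"
#include "Math/RandomStream.h"

static const int32 GCaptureSeed = 12345; // arbitrary fixed seed (placeholder)

void SeedDeterministicPlayback()
{
    FMath::RandInit(GCaptureSeed);   // seeds FMath::Rand / RandRange
    FMath::SRandInit(GCaptureSeed);  // seeds FMath::SRand / FRand
}

// For gameplay code you control, an FRandomStream member reset with the same
// seed at the start of every run is even more predictable:
FRandomStream EffectStream(GCaptureSeed);
```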

Our second idea is: In trying to come up with a back door solution, I combed through the nodes in UE, and found these two:

TextureRenderTargetCube

&

SceneCaptureCube

If we could get an output (latlong image) from these for every single frame of playback, that might also be useful to us. Currently I can export a single frame of the created texture, but there is no way in the current toolset (that I know of) to actually write out those images on a per-frame basis. I can place two sets of these nodes in the scene, so I can get offset cameras kicking out a stereo latlong, which benefits us in that we can grab the entire scene in one go and not have to worry about any random elements. The downside of this method is that it will take some slicing to get the stereo to work in all directions, but I'm willing to solve that problem if we can get the output of those texture nodes. I also tried putting this texture on a plane and capturing that in Matinee, but it maps as a 3D cube map rather than in 2D, which makes sense; still, if there is a way to convert a 3D texture to a 2D texture, that could also be helpful.

A third way we have brainstormed to try and get everything in one go is to use Matinee's Make Movie feature and capture 12 or more cameras at once. Currently, I can only get one camera to output at a time in Matinee, by setting the camera in the world settings. If that particular property could be changed from a single selection to an array where we could pick multiple cameras, that would be helpful, and it could live anywhere: the world settings, Matinee, or elsewhere. We don't need to play back all 12 cameras in real time, and we aren't concerned with the render time it would take to create these frames, as long as we can designate a frame rate and output all of the files, preferably to specified directories or using some sort of logical naming convention. Right now, if I output from Matinee, it just puts a series of images into the /Saved/Screenshots/Windows directory. If we could output multiple cameras from Matinee, we would need some way to distinguish which image sequence belongs to which camera.

Those are our current ideas. If you have any others then we are all ears. We just need to be able to land on the specific goal of capturing 360 degrees of our scene into two separate eyes for a stereoscopic view.

Thanks for your time (especially for taking the time to read all of this) and your input.

The most practical solution will be to take cubemap captures at each frame of the video. You’re going to have to add the code for that yourself, but it’s effectively updating the cube capture each frame and dumping its contents to disk. Look in the code for ProcessScreenShots and HighResScreenshot for examples.
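
Something along these lines, as a rough, untested sketch of that per-frame loop. The actor class (ACubeCaptureRig), the CubeCapture component pointer, and the FrameCounter member are placeholders; ExportRenderTargetCubeAsHDR is the ImageUtils helper I believe the editor's cube render target export uses, so verify the name and signature against your engine version:

```cpp
// Rough, untested sketch of dumping the cube capture to disk every frame.
// Assumes an actor class (ACubeCaptureRig is a placeholder name) that has:
//   USceneCaptureComponentCube* CubeCapture;  // TextureTarget set to a UTextureRenderTargetCube
//   int32 FrameCounter = 0;
// The UCLASS/UPROPERTY boilerplate and generated header are omitted here.
#include "Components/SceneCaptureComponentCube.h"
#include "Engine/TextureRenderTargetCube.h"
#include "ImageUtils.h"
#include "HAL/FileManager.h"
#include "Misc/Paths.h"

void ACubeCaptureRig::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    // Render the cube capture for this frame (or enable bCaptureEveryFrame instead).
    CubeCapture->CaptureScene();

    UTextureRenderTargetCube* TexRT = CubeCapture->TextureTarget;
    if (TexRT)
    {
        // e.g. <Project>/Saved/CubeCapture_00000.hdr, _00001.hdr, ...
        // (On older engine versions, ProjectSavedDir() was GameSavedDir().)
        const FString FileName = FPaths::Combine(FPaths::ProjectSavedDir(),
            FString::Printf(TEXT("CubeCapture_%05d.hdr"), FrameCounter++));

        if (FArchive* Ar = IFileManager::Get().CreateFileWriter(*FileName))
        {
            // Engine helper that writes the cube render target out as an .HDR
            // (a latlong unwrap, as far as I can tell).
            FImageUtils::ExportRenderTargetCubeAsHDR(TexRT, *Ar);
            delete Ar; // closes the file
        }
    }
}
```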

The problem with a cube map capture is that we need this in stereo, and with a cube map the stereo will not be correct for the sides and rear of the capture, so we have to build an array of cameras and export each one. I imagine that if we do a 2D scene capture for each and export them it would work. I could use some help with what code to use to make that happen per frame for each camera, while keeping all of the effects and procedural animation from breaking or going out of sync.
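
Roughly what I am imagining, as an untested sketch. ACameraArrayRig, Captures, and FrameCounter are placeholder names; the ExportRenderTarget helper in KismetRenderingLibrary writes the target out to disk (as .HDR, if I am reading the engine source right):

```cpp
// Untested sketch of the camera-array idea. Placeholder names throughout:
// ACameraArrayRig is an actor with
//   TArray<USceneCaptureComponent2D*> Captures;  // one per angle per eye
//   int32 FrameCounter = 0;
// UCLASS/UPROPERTY boilerplate omitted.
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Kismet/KismetRenderingLibrary.h"
#include "Misc/Paths.h"

void ACameraArrayRig::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    for (int32 Index = 0; Index < Captures.Num(); ++Index)
    {
        USceneCaptureComponent2D* Capture = Captures[Index];
        Capture->CaptureScene(); // render this camera's view into its render target

        // One folder per camera so the image sequences stay distinguishable,
        // e.g. Saved/Stereo360/Cam_03/Frame_00120
        const FString Dir  = FPaths::Combine(FPaths::ProjectSavedDir(),
            FString::Printf(TEXT("Stereo360/Cam_%02d"), Index));
        const FString Name = FString::Printf(TEXT("Frame_%05d"), FrameCounter);

        // Engine helper that writes the render target to disk.
        UKismetRenderingLibrary::ExportRenderTarget(this, Capture->TextureTarget, Dir, Name);
    }
    ++FrameCounter;
}
```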

thanks for the help,

I agree with this, though his solution was a bit much for me to do. My hacky but working solution was to create a cube capture target, keep the texture open on the desktop, and attach a Matinee to the cube capture. I play the Matinee and screen-capture the updating texture. Crop it, and bam, you have a working 360 x 180 video.

Note: with a 1440p monitor the video is 2550 x 1275. The Gear VR can play it, but the resolution really needs to be at least double that.

I screen capture at 15 fps and play the Matinee at 1/2 speed so the updating is smoother, then play it back at 30 fps on the Gear VR. It looks great, just missing post-processing.

Let me know what you think.

This doesn’t answer the question, unfortunately.

  1. The above description would not create a stereoscopic image. There is only one perspective being captured.

  2. Capturing a 3D cube in 2D space would not create a panoramic (360 degree) image, unless I am missing something.

And the problem with remapping cubemaps to 2D space in general is that they don’t lend themselves to creating a proper stereo image. The math just isn’t quite correct. So cube 3D scene captures aren’t really even part of the solution, as far as I can see, unless they are altered in some way.

I think what you're actually trying to get is pre-rendered, stereoscopic, 360 degree content.

Imagine that you’ll have 360 pixels wide of output video. If you face North, you see the 90th pixel (in the center of your view). If you face West, you see the 180th pixel. I’m over-simplifying it, but you get the idea.

But think about what’s rendered for each eye when you face North. You want 70 mm of separation between the two cameras. Meaning, the 90th pixel (North) for your left eye is 70 mm to the left of the camera that produced the 90th pixel (North) for your right eye.

If you see what I'm driving at, you can't just use 4 cameras to generate North, West, South and East views. At a minimum, you'd need to render 4 more cameras for the right-eye North, right-eye West, right-eye South, and right-eye East. But at the 45 degree angles, you've got this weird mess of camera data…

My point being that every single column of pixels you generate in your final 2D cube-mapped rendering needs to come from a camera with a different position and look-at direction.
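
To make that concrete, here is a purely illustrative bit of geometry (not engine code, just the math): for an equirectangular output that is OutputWidth columns wide and an eye separation of IPD, each column's camera sits on a circle of radius IPD/2, offset tangentially from that column's view direction. FSlitCamera and ColumnCamera are made-up names for the sketch:

```cpp
// Illustrative only: the "one camera per column" stereo panorama geometry described above.
// Each output column gets its own eye position, offset tangentially to that
// column's view direction on a circle of radius IPD/2.
#include <cmath>

struct FSlitCamera
{
    float PosX, PosY; // eye position in the horizontal plane (same units as IPD)
    float DirX, DirY; // unit look direction for this column
};

// Column -> camera for one eye. With counter-clockwise-positive yaw,
// EyeSign = +1 places the eye to the left of the view direction, -1 to the right.
FSlitCamera ColumnCamera(int Column, int OutputWidth, float IPD, int EyeSign)
{
    const float Pi    = 3.14159265358979f;
    const float Theta = (Column + 0.5f) / OutputWidth * 2.0f * Pi; // yaw of this column
    const float R     = 0.5f * IPD;                                // radius of the eye circle

    FSlitCamera Cam;
    Cam.DirX = std::cos(Theta);
    Cam.DirY = std::sin(Theta);
    Cam.PosX = EyeSign * R * -std::sin(Theta); // perpendicular (tangential) offset
    Cam.PosY = EyeSign * R *  std::cos(Theta);
    return Cam;
}
```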

You’ll end up with weird artifacts, I suspect.

And I’m not sure if Unreal is particularly suited.

As you've said, you need a random number seed so that all of your effects run the same way every time… with no screen-space tricks, and no objects culled due to visibility.

And you’d need specialized rendering to handle the video you’d output… But I don’t think it’d be too hard to do that.

I’m pretty sure the technique I’ve described is why John Carmack is so excited in his recent tweet:

@OTOY added support for rendering stereo cube maps in the Octane renderer. Their test is the highest quality scene I have seen in an HMD.

We're working on something you may like. Still early days, but it looks promising! :)

Here you go, courtesy of Michael Allar. He does note that it eats your RAM and your dog, though. :P

One of the problems you will have is that most head-tracked VR systems don't use the same camera location for every orientation; the location has an offset that's affected by the rotation, to simulate the way human heads work (rotating from the neck, with the eyes a few inches in front of that). So you're going to need much more data than the equivalent of 2 cubemaps. You essentially need 2 new cubemaps for every possible pitch change if you want passable head tracking; if you want to handle roll too, you need even more, and if you want to track translation, even more still.

The common solution is not to try to pre-render, because you'll either have so much data it won't be worth it, or you'll have to do complex image transforms at runtime if you want it to feel right. If you have 3D data already, it's easier to just render it in real time, which you can do trivially.

True, but sometimes that is a good trade-off, for instance on mid-to-low-end mobiles: complex real-time 3D scenes are out of reach for most smartphones.
With some clever capture cubes, stitching, and UV work you can get some pretty nice results, for both static and pre-rendered animations.

Been there, done that, doesn't work. ;)
Two latlong captures are not enough for real 360 3D (you lose stereoscopy at 90 degrees left/right, and you have inverted eyes in the back view).

Any luck with your mission, Skan? I am with you on this. I went to Allar's and tried that, and lost the stereoscopy. Also, I have video textures in my scene and I cannot get them to match the frame rate of my game, even when I set the frame rate through my project settings. I need help with this.

Here is another method you could try: 360 capture LINK. I couldn't get it to render out correctly, but maybe you can; if you do, let me know.