360 degree stereo pre-render needed
Hello community! We have a need that is apparently a somewhat special request, and I have been directed to ask the question here. Let me start this post by explaining what we ultimately need: we would like to pre-render and export, as an image sequence, one of our OVR experiences so that it works in a 360 degree stereoscopic view. So two sequences, offset by roughly 70 mm, of our full environment and experience.
We have experimented with the old-school guerrilla way of doing this, i.e., taking six or more cameras at 90 degree (or smaller) intervals, capturing the experience, stitching the result back together as a single piece of video, and then doing the same for each eye. We are new to UE, so the answer may be easy and immediate, but the problem is that our shot is very FX-heavy, and currently all of the effects happen in a random manner (rain, sparks, fire, shatters, etc.). Capturing each eye and each angle will therefore produce random results, which will make for a murky stereo piece. If there is a way to bake down the FX so that they play back in exactly the same manner each time, that would be fantastic. Or if there were some way to set a pseudo-random seed, like the type that exists in 3ds Max's pflow, that could also work. Or any other ideas or thoughts you folks have on this are appreciated.
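For what it's worth, the pseudo-random seed idea boils down to the sketch below. This is plain C++, not UE4's particle/FX API; `SpawnOffsets` is a hypothetical helper just to illustrate the principle that a fixed seed makes every pass identical:

```cpp
#include <random>
#include <vector>

// Hypothetical helper: per-particle spawn values for one render pass,
// driven entirely by an explicitly seeded generator. Re-running with the
// same seed reproduces the exact same stream, so every capture pass
// (each eye, each camera angle) sees identical "random" FX.
std::vector<float> SpawnOffsets(unsigned Seed, int Count)
{
    std::mt19937 Rng(Seed);  // fixed seed -> identical sequence every run
    std::uniform_real_distribution<float> Dist(0.0f, 1.0f);
    std::vector<float> Offsets;
    for (int i = 0; i < Count; ++i)
        Offsets.push_back(Dist(Rng));
    return Offsets;
}
```

The same idea applies to any FX system: if every source of randomness is routed through one generator with an explicit seed, all twelve camera passes play back the same shot.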
Our second idea: in trying to come up with a back-door solution, I combed through the nodes in UE and found these two:
If we could get an output (latlong image) from these for every single frame of playback, that might also be useful to us. Currently I can export a single frame of the created texture, but there is no way in the current toolset (that I know of) to write out those images on a per-frame basis. I can place two sets of these nodes in the scene, so I can get offset cameras kicking out a stereo latlong, which benefits us in that we can grab the entire scene in one go and not have to worry about any random elements. The drawback of this method is that it will take some slicing to get the stereo to work in all directions, but I'm willing to solve that problem if we can get the output of those texture nodes. I also tried putting this texture on a plane and then capturing that in Matinee, but it doesn't map in 2D; it maps as a 3D cube map, which makes sense. But if there is a way to convert a 3D texture to a 2D texture, that could also be helpful.
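The "convert a 3D cube map to a 2D latlong" step is standard equirectangular sampling math, independent of UE. Here's a self-contained C++ sketch; the face ordering and UV convention follow the OpenGL cube map layout, which is an assumption you'd have to match against whatever the engine actually exports:

```cpp
#include <cmath>

// Face: 0:+X 1:-X 2:+Y 3:-Y 4:+Z 5:-Z (OpenGL cube map convention).
struct FaceSample { int Face; float U, V; };

// Map an equirectangular pixel (u, v in [0,1]) to a cube map face + face UV.
FaceSample EquirectToCube(float u, float v)
{
    const float Pi = 3.14159265358979f;
    float Yaw   = (u - 0.5f) * 2.0f * Pi;   // -pi..pi around the vertical axis
    float Pitch = (0.5f - v) * Pi;          // +pi/2 up .. -pi/2 down
    float x = std::cos(Pitch) * std::sin(Yaw);
    float y = std::sin(Pitch);
    float z = std::cos(Pitch) * std::cos(Yaw);
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    FaceSample S{};
    float ma, sc, tc;
    if (ax >= ay && ax >= az) { ma = ax; S.Face = x > 0 ? 0 : 1; sc = x > 0 ? -z : z; tc = -y; }
    else if (ay >= az)        { ma = ay; S.Face = y > 0 ? 2 : 3; sc = x; tc = y > 0 ? z : -z; }
    else                      { ma = az; S.Face = z > 0 ? 4 : 5; sc = z > 0 ? x : -x; tc = -y; }
    S.U = 0.5f * (sc / ma + 1.0f);
    S.V = 0.5f * (tc / ma + 1.0f);
    return S;
}
```

Running this once per output pixel and sampling the exported cube faces gives you the 2D latlong frame without any slicing by hand.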
A third way we have brainstormed to get everything in one go is to use Matinee's Make Movie feature and capture 12 or more cameras at once. Currently I can only get one camera to output at a time in Matinee, by setting the camera in the World Settings. If that particular property could be changed from a boolean to an array where we could pick multiple cameras, that would be helpful, whether in the World Settings, in Matinee, or elsewhere. We don't need to play back all 12 cameras in real time, and we aren't concerned with the render time it would take to create these frames, as long as we could designate a frame rate and output all of the files, preferably to specified directories or with some sort of logical naming convention. Right now, if I output from Matinee, it just puts a series of images into the /save/screenshots/windows directory. If we could output multiple cameras from Matinee we would need some way to distinguish which image sequence belongs to which camera.
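To illustrate the kind of naming convention I mean, here's a small plain-C++ sketch. The one-subdirectory-per-camera layout and the five-digit zero padding are just one hypothetical choice, picked so that sequences sort correctly and import cleanly into compositing tools:

```cpp
#include <cstdio>
#include <string>

// Hypothetical naming scheme for multi-camera Matinee output:
// <OutDir>/<Camera>/<Camera>.<frame>.png with zero-padded frame numbers.
std::string FramePath(const std::string& OutDir, const std::string& Camera, int Frame)
{
    char Buf[512];
    std::snprintf(Buf, sizeof(Buf), "%s/%s/%s.%05d.png",
                  OutDir.c_str(), Camera.c_str(), Camera.c_str(), Frame);
    return Buf;
}
```

With something like this, twelve cameras writing simultaneously never collide, and each eye's sequence is trivially identifiable afterwards.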
Those are our current ideas. If you have any others then we are all ears. We just need to be able to land on the specific goal of capturing 360 degrees of our scene into two separate eyes for a stereoscopic view.
Thanks for your time (especially for taking the time to read all of this) and your input.
asked Aug 06 '14 at 02:39 PM in Using UE4
The most practical solution will be to take cubemap captures at each frame of the video. You're going to have to add the code for that yourself, but it's effectively updating the cube capture each frame and dumping its contents to disk. Look in the code for ProcessScreenShots and HighResScreenshot for examples.
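To be concrete about "dumping its contents to disk": one simple on-disk layout is a horizontal strip of the six faces, which most stitching tools can ingest. Here's a minimal sketch of just the packing math in plain C++; the RGBA8 format and the 6N x N strip layout are assumptions for illustration, not something the engine produces for you:

```cpp
#include <cstdint>
#include <vector>

// Pack six N x N cube map faces (tightly packed RGBA8) into one
// horizontal 6N x N strip, face 0 leftmost. The strip for each frame
// is then written to disk as a single image.
std::vector<uint8_t> PackStrip(const std::vector<std::vector<uint8_t>>& Faces, int N)
{
    std::vector<uint8_t> Strip(6u * N * N * 4u);
    for (int f = 0; f < 6; ++f)
        for (int y = 0; y < N; ++y)
            for (int x = 0; x < N; ++x)
                for (int c = 0; c < 4; ++c)
                    Strip[(y * 6 * N + f * N + x) * 4 + c] = Faces[f][(y * N + x) * 4 + c];
    return Strip;
}
```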
answered Aug 06 '14 at 03:21 PM
I agree with Nick, but his solution was a bit much for me to implement. My hacky but working solution was to create a cube capture target, keep the texture open on the desktop with a Matinee attached to the cube capture, play the Matinee, and screen-capture the updating texture. Crop it and bam, you have a working 360 x 180 video.
Note: with a 1440p monitor the video is 2550 x 1275. The Gear VR can play it, but the resolution really needs to be at least double that.
I screen-capture at 15 fps and play the Matinee at 1/2 speed so the updating is smoother, then play it back on the Gear VR at 30 fps. It looks great; it's just missing post-processing.
Let me know what you think.
answered Dec 16 '14 at 03:33 PM
I think what you're actually after is pre-rendered stereo 360 degree output.
Imagine that your output video is 360 pixels wide. If you face North, you see the 90th pixel (in the center of your view). If you face West, you see the 180th pixel. I'm over-simplifying, but you get the idea.
But think about what's rendered for each eye when you face North. You want 70 mm of separation between the two cameras. Meaning, the camera that produced the 90th pixel (North) for your left eye is 70 mm to the left of the camera that produced the 90th pixel (North) for your right eye.
If you see what I'm driving at, you can't just use 4 cameras to generate the North, West, South, and East views. At a minimum, you'd need to render 4 more cameras for the right-eye North, West, South, and East views, but at the 45 degree angles you've got this weird mess of camera data...
My point being that every single column of pixels you generate in your final 2D cube-mapped rendering needs to come from a camera with a different position and look-at direction.
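To make the per-column camera placement concrete, here's a plain C++ sketch. The half-IPD tangential offset is the standard omni-directional stereo construction; the 2D axis convention (X east, Y north, yaw 0 facing north) is my assumption, and none of this is a UE API:

```cpp
#include <cmath>

struct Vec2 { float X, Y; };

// Camera position for the pixel column at a given yaw: each column is
// rendered from a camera displaced half the IPD along the tangent of the
// viewing circle (to the viewer's left or right of the look direction).
Vec2 EyePosition(float YawRadians, float IpdMeters, bool bLeftEye)
{
    float Half = 0.5f * IpdMeters;
    float Sign = bLeftEye ? -1.0f : 1.0f;
    // View direction is (sin yaw, cos yaw); the rightward tangent is
    // that vector rotated -90 degrees: (cos yaw, -sin yaw).
    return { Sign * Half * std::cos(YawRadians),
             Sign * Half * -std::sin(YawRadians) };
}
```

With 70 mm IPD and yaw 0 (facing north), the left eye sits 35 mm to the west and the right eye 35 mm to the east; as yaw sweeps through 360 degrees, each eye's camera traces a circle of radius 35 mm, which is exactly why no fixed 4- or 8-camera rig reproduces it.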
You'll end up with weird artifacts, I suspect.
And I'm not sure Unreal is particularly suited to this.
As you've said, you need a random number seed so that all of your effects run the same way every time... with no screen-space tricks and no objects culled due to visibility.
And you'd need specialized rendering to handle the video you'd output... but I don't think that would be too hard to do.
I'm pretty sure the technique I've described is why John Carmack is so excited in his recent tweet:
Here you go, courtesy of Michael Allar. He does note that it eats your RAM and your dog, though :P
answered Jun 01 '15 at 05:50 PM