Hello community! We have a somewhat special request, and I have been directed to ask the question here. Let me start by explaining what we ultimately need: we would like to pre-render one of our OVR experiences and export it as an image sequence that works as a 360-degree stereoscopic view. In other words, two sequences of our full environment and experience, offset by roughly 70 mm.
We have experimented with the old-school guerrilla way of doing this, i.e., placing six or more cameras at 90-degree (or smaller) intervals, capturing the experience, stitching the results back together into a single piece of video, and then repeating the whole process for the second eye. We are new to UE, so the answer may be easy and immediate, but our problem is that the shot is very FX-heavy, and currently all of the effects happen in a random manner (rain, sparks, fire, shatters, etc.). Capturing each eye and each angle separately will therefore produce different results on every pass, which will make for a murky stereo piece. If there is a way to bake down the FX so that they play back in exactly the same manner each time, that would be fantastic. Alternatively, a way to set a pseudo-random seed, like the one in 3ds Max's Particle Flow, could also work. Any other ideas or thoughts you folks have on this are appreciated as well.
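To make the seed idea concrete, here is a minimal sketch in plain Python (not UE code; the function name `make_fx_stream` is ours and purely illustrative). The point is that a generator built from a fixed seed replays the identical "random" sequence on every run, which is exactly what we would want each FX system to do across both eyes and all capture passes:

```python
import random

def make_fx_stream(seed):
    """Return a dedicated pseudo-random generator for FX events.

    The same seed always yields the identical sequence, so 'random'
    rain/sparks/fire timings can replay exactly the same way for
    every eye and every capture pass. (Illustrative only, not a UE API.)
    """
    return random.Random(seed)

# Two streams built from the same seed emit identical sequences,
# so a left-eye and right-eye render would see the same FX timing.
left_eye = make_fx_stream(1234)
right_eye = make_fx_stream(1234)
```

The same principle would apply engine-side: drive every "random" FX decision from one seeded stream per system instead of a global, unseeded source.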
Our second idea: while looking for a back-door solution, I combed through the nodes in UE and found these two:
TextureRenderTargetCube
&
SceneCaptureCube
If we could get an output (a lat-long image) from these for every single frame of playback, that might also be useful to us. Currently I can export a single frame of the created texture, but as far as I know there is no way in the current toolset to write out those images on a per-frame basis. I can place two sets of these nodes in the scene, giving me offset cameras kicking out a stereo lat-long pair; the benefit is that we can grab the entire scene in one go and not have to worry about any random elements. The drawback of this method is that it will take some slicing to get the stereo to work in all directions, but I'm willing to solve that problem if we can get the output of those texture nodes. I also tried putting this texture on a plane and then capturing that in Matinee, but it maps as a 3D cube map rather than in 2D, which makes sense; if there is a way to convert a 3D texture to a 2D texture, that could also be helpful.
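For the cube-to-lat-long conversion, the core math is mapping each pixel of the equirectangular image to a 3D direction, then to the cube face that direction falls on. A minimal Python sketch of that mapping (illustrative only; the coordinate conventions here are our assumption, and a real exporter would go on to sample the actual face textures):

```python
import math

def latlong_to_dir(u, v):
    # u, v in [0, 1): u wraps longitude (-pi..pi), v spans
    # latitude (pi/2 at the top down to -pi/2 at the bottom).
    lon = (u - 0.5) * 2.0 * math.pi
    lat = (0.5 - v) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

def dominant_cube_face(d):
    # The cube face a direction hits is the axis with the
    # largest absolute component.
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return '+x' if x > 0 else '-x'
    if ay >= ax and ay >= az:
        return '+y' if y > 0 else '-y'
    return '+z' if z > 0 else '-z'
```

With this, converting the cube map to a 2D lat-long image is just a loop over output pixels: compute the direction, pick the face, and sample that face's texture at the projected coordinate.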
A third approach we have brainstormed for getting everything in one go is to use Matinee's Make Movie feature and capture 12 or more cameras at once. Currently I can only get one camera to output at a time in Matinee, by setting the camera in the world settings. If we could change that particular property from a single value to an array, so we could pick multiple cameras, that would be helpful; this could live anywhere, in the world settings, in Matinee, or elsewhere. We don't need to play back all 12 cameras in real time, and we aren't concerned with the render time it would take to create these frames, as long as we could designate a frame rate and output all of the files, preferably to specified directories or using some sort of logical naming convention. Right now, if I output from Matinee, it just puts a series of images into the /save/screenshots/windows directory. If we could output multiple cameras from Matinee, we would need some way to distinguish which image sequence belongs to which camera.
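On the naming-convention point, one subdirectory per camera plus zero-padded frame numbers would be enough to keep a dozen sequences separate. A tiny Python sketch of the scheme we have in mind (the function and directory names are ours, purely illustrative):

```python
import os

def frame_path(out_dir, camera_name, frame, ext="png"):
    # One subdirectory per camera, e.g.
    #   Renders/Cam01/Cam01.0042.png
    # so each camera's image sequence stays cleanly separated
    # and imports into compositing apps as a numbered sequence.
    filename = "%s.%04d.%s" % (camera_name, frame, ext)
    return os.path.join(out_dir, camera_name, filename)

# Example: paths for two stereo cameras at the same frame.
left = frame_path("Renders", "CamLeft", 42)
right = frame_path("Renders", "CamRight", 42)
```

Any scheme that encodes the camera name and a fixed-width frame number would solve the "which sequence is which" problem.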
Those are our current ideas. If you have any others, we are all ears. We just need to land on the specific goal of capturing 360 degrees of our scene into two separate eyes for a stereoscopic view.
Thanks for your time (especially for reading all of this) and your input!