Hello, I’m experimenting with VR video output from Unreal Engine 4. So far my standard 6-camera rig has worked well, but what I’m really looking for is a seamless solution. Ideally I’d like to be able to store the .HDR image produced by the Scene Capture Cube every frame at 60 fps. Failing that, I’d like to render the .HDR to a texture in the game world and record that using a camera. Alternatively, I’ll try creating a 6-camera setup with Scene Capture 2D.
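For reference, the conversion you’d eventually need from six captured faces (or a cube map) to a flat panorama can be sketched offline. Here’s a minimal NumPy version; the face naming and UV orientation are my own illustrative convention, not necessarily UE4’s cubemap layout:

```python
import numpy as np

FACE_NAMES = ["+x", "-x", "+y", "-y", "+z", "-z"]

def cube_to_equirect(faces, out_h=64, out_w=128):
    """Resample six cube faces into one equirectangular panorama.

    `faces` maps each name in FACE_NAMES to an (H, W, 3) float array.
    Nearest-neighbour sampling; face UV orientation here is illustrative.
    """
    j, i = np.meshgrid(np.arange(out_w), np.arange(out_h))
    theta = (j + 0.5) / out_w * 2.0 * np.pi - np.pi   # longitude, -pi..pi
    phi = np.pi / 2.0 - (i + 0.5) / out_h * np.pi     # latitude, +pi/2..-pi/2

    # Unit view direction for every output pixel.
    x = np.cos(phi) * np.cos(theta)
    y = np.cos(phi) * np.sin(theta)
    z = np.sin(phi)
    ax, ay, az = np.abs(x), np.abs(y), np.abs(z)
    dom = np.argmax(np.stack([ax, ay, az]), axis=0)   # dominant axis picks the face

    eps = 1e-9
    sax, say, saz = np.maximum(ax, eps), np.maximum(ay, eps), np.maximum(az, eps)
    specs = [
        ("+x", (dom == 0) & (x > 0), -y / sax, -z / sax),
        ("-x", (dom == 0) & (x <= 0), y / sax, -z / sax),
        ("+y", (dom == 1) & (y > 0), x / say, -z / say),
        ("-y", (dom == 1) & (y <= 0), -x / say, -z / say),
        ("+z", (dom == 2) & (z > 0), y / saz, x / saz),
        ("-z", (dom == 2) & (z <= 0), y / saz, -x / saz),
    ]
    out = np.zeros((out_h, out_w, 3), dtype=np.float32)
    for name, mask, u, v in specs:
        img = faces[name]
        h, w, _ = img.shape
        # Map face UV in [-1, 1] to pixel indices, nearest neighbour.
        px = np.clip(((u + 1.0) / 2.0 * w).astype(int), 0, w - 1)
        py = np.clip(((v + 1.0) / 2.0 * h).astype(int), 0, h - 1)
        out[mask] = img[py[mask], px[mask]]
    return out
```

The same per-pixel direction math works regardless of which tool writes the .HDR faces; only the face selection/UV table would change to match the engine’s layout.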
Thanks in advance,
from HammerheadVR.com
Scene Capture Cube .HDR Output of Luxury Yacht Deck
I am interested in what you’re doing with rendering a standard 6-camera video cube. Can you share more info with me, and possibly some screenshots? I may be able to help out in some capacity.
K&L are developing a free 360 plugin for UE4 (release date unknown): http://blog.kiteandlightning.la/
My guess is it was created off the back of Divergence, a client project.
This is the project I’m developing with HammerheadVR using 360 rendering: presence-the-abduction/
At the moment we’re using a 3ds Max panoramic render, but UE4 will be much faster for production rendering of films.
Unfortunately, that doesn’t help me today, as I have no idea when this plugin will be released. I’ve contacted them, and even they don’t know, since it’s now in the hands of Epic, who is rolling it into a mainline feature.
If you’d like to learn more about the 6-camera rendering method, I’m happy to talk to you about it; however, there are a lot of issues with seams, to the point where I’m stopping my research in this area until Epic’s solution is released.
Have you gotten anywhere with rendering a SceneCaptureCube, perhaps through ? I’ve avoided using , but am starting to look into it. The 6-cam method sounds painful. I’ve also read about rendering many thin strips of a pano and stitching them together. That also sounds painful and inelegant. ;)
The blog has taught me a lot about stereo 360 principles.
From a conversation with
Capturing mono HDR pano video in-engine using cube maps, etc.
Capturing stereo pano in-engine
“Mono capture is probably best done as cubes.”
“This would probably go through the command line ‘dumpmovie’ path. That lets you configure resolution and framerate, and then the engine steps the time to give you a perfect rendering at whatever framerate you need as an offline process.”
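As a concrete sketch of that path (the `-benchmark`, `-fps`, `-resx`/`-resy`, and `-dumpmovie` flags exist in stock UE4; the game name, resolution, and framerate here are placeholder values):

```shell
# Step the engine at a fixed timestep and dump every rendered frame to disk.
# -benchmark decouples game time from wall-clock time; -fps sets the step.
MyGame.exe -benchmark -fps=60 -resx=4096 -resy=4096 -dumpmovie
```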
“For stereo capture you will have to use an emulated rig and stitching, as the eyes move in addition to rotating as you change view directions. This will never be perfect due to parallax differences at the edge of each captured image, so that’s where the stitching comes in. We haven’t implemented our strips idea yet, but there’s a good chance that the rendering and stitching overhead will be quite high as the number of strips increases. It will definitely be an offline process.”
“Our current thinking is to mimic the camera rigs that are being used to capture live stereo 360 video and stick with videostitch. The advantage is that instead of being physically and cost limited to 8 stereo pairs (like the Samsung (Project Beyond) rig for example), we can render out as many vertical strips per eye as we need. That should make stitching much more precise, although the render and stitch times would obviously go up. I’m not sure how expensive that would be yet, but that’s the direction we were thinking about starting with.”
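To make the emulated-rig idea concrete, here’s a minimal sketch (plain Python; the strip count and the 64 mm IPD are illustrative numbers, not engine defaults) of where the left/right cameras would sit for each vertical strip. Each pair looks along the strip’s central yaw, with the eyes offset half the IPD to either side, perpendicular to that view direction:

```python
import math

def strip_cameras(num_strips, ipd=0.064):
    """Per-strip camera placement for an emulated stereo 360 rig.

    Returns one (yaw_degrees, left_eye_xy, right_eye_xy) tuple per strip,
    in a top-down XY plane. Each pair looks along the strip's central yaw;
    the eyes sit +/- ipd/2 metres perpendicular to that direction.
    """
    cams = []
    for k in range(num_strips):
        yaw = (k + 0.5) / num_strips * 360.0 - 180.0   # strip centre, degrees
        rad = math.radians(yaw)
        # Unit vector pointing to the right of the view direction.
        rx, ry = math.sin(rad), -math.cos(rad)
        half = ipd / 2.0
        cams.append((yaw, (-rx * half, -ry * half), (rx * half, ry * half)))
    return cams
```

Raising `num_strips` narrows each strip’s horizontal field of view (360/N degrees), which is exactly the trade-off described above: less parallax error at the seams, but more renders and more stitching work per frame.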
“For VR we’re typically looking at very fixed framerates (60 for GearVR, 75 for DK2, etc.), and you’re going to find the highest quality at some multiple of whatever your target platform is, although there’s probably a bit more flexibility at these high framerates? I’ll look into the file format support, especially as we start exploring stereo playback ourselves.”