Is SceneCapture's performance dependent on FOV and resolution?

I am trying to compose a stereo spherical panorama from many small SceneCaptures rotating on an axis, using very small capture view rects (at most 5×5) and extremely small FOVs.

What I've noticed is that the resulting speed depends only on the number of views I'm rendering per frame: capturing 10 2×2 slices with a minimal FOV takes as long as capturing 10 200×200 slices with a much greater FOV.
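For scale, here is a rough back-of-the-envelope sketch (the function name is mine, not UE4 API) of how quickly tiny per-slice FOVs multiply the number of captures needed:

```cpp
#include <cmath>

// Hypothetical helper, not part of UE4: estimate how many capture slices a
// full 360x180 panorama needs when each SceneCapture covers sliceFovDeg
// degrees both horizontally and vertically.
int SlicesForFullSphere(double sliceFovDeg)
{
    int columns = static_cast<int>(std::ceil(360.0 / sliceFovDeg)); // yaw steps
    int rows    = static_cast<int>(std::ceil(180.0 / sliceFovDeg)); // pitch steps
    return columns * rows;
}
// A 90-degree slice FOV needs 4 * 2 = 8 captures; a 1-degree FOV needs
// 360 * 180 = 64800, so any fixed per-capture cost dominates at tiny FOVs.
```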

Is that normal behaviour? How could I speed up this process?

Thanks in advance,
Paulo

I need to render a stereoscopic panorama; as far as I know, those are built differently (example), and what SceneCaptureCube produces doesn't work for that purpose.

I am a bit confused as to why you do not simply use "Scene Capture Cube" to bake a static texture; that will do pretty much the same thing. Are you trying to do it dynamically or something? I would never expect that to be fast: more than a handful of dynamic captures is a bad idea.

What you are seeing is the overhead cost of each scene capture. Yes, the smaller view sizes will technically have fewer pixels to process, but that is probably dwarfed by the overhead of calculating scene visibility and occlusion for each render target.

Ah ok that makes sense. I guess I glanced over the word stereo before.

I think what I said still applies though: the engine is not designed to use more than a few scene captures at a time. For each one the CPU overhead will be large, so having a fast CPU will matter more than a fast GPU.

It may be that the engine is just not doing a good job of breaking up the workload when there are so many, or that something is stalling because something took too long. Profiling under extreme conditions can be tricky, which is why you will usually scale back to smaller counts to see if the trend is linear.

But either way I wouldn’t expect this to work anytime soon.

What's happening is actually the opposite: my CPU use is pretty low, barely breaking 15% and more or less evenly spread across cores, but my GPU load is 100%.

It does work; it just takes insanely long… I'm going to have to try something else then. Do you have any ideas of what could be used to generate a stereo panorama in UE4?

The plan is to use each sphere capture in a panorama viewer, one for each eye, for use with VR.

Where can I start looking for a way to do this like you said?

Anyway, thanks for the help!

That's what I mean: I would not expect this to get much faster any time soon.

You should look into doing the 'parallax' using post-processing. Gears of War 3 shipped using this technique: basically, you use the scene depth to fake the parallax. But this requires somehow getting access to the scene depth, and with a scene capture cube I don't think you can get it.

I think you will need some code support to be able to do that.

What is this stereo sphere actually used for? I skimmed the link you posted but didn’t see anything about what it actually is used for.

Hmm, I am not sure what a panorama viewer is. I guess I am confused, since normal VR only requires two captures.

If you really are computing the whole 360 degrees, you will be wasting most of it, since a human, even with some kind of 360° screen, can never see the whole thing at once; unless you look at it as some kind of lat-long chart.
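For reference, the lat-long mapping can be sketched like this (the Y-up, minus-Z-forward convention here is my assumption; UE4 itself uses a different axis convention):

```cpp
#include <cmath>

struct UV { double u; double v; };

// Sketch: map a unit-length 3D view direction to lat-long (equirectangular)
// UVs, the layout a full 360x180 panorama is usually stored in.
// Assumes Y up and -Z forward.
UV DirectionToLatLong(double x, double y, double z)
{
    const double pi = std::acos(-1.0);
    double longitude = std::atan2(x, -z);    // [-pi, pi] around the vertical axis
    double latitude  = std::asin(y);         // [-pi/2, pi/2], needs a unit vector
    return { longitude / (2.0 * pi) + 0.5,   // u in [0, 1]
             latitude / pi + 0.5 };          // v in [0, 1]
}
// The forward direction (0, 0, -1) lands at the center of the chart, (0.5, 0.5).
```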

There are tons of papers on the depth thing. Just try looking around. Here's one:
link text

I still must be missing something. Isn't that kind of the same thing as just using one regular view for each eye? Are you looking for a super high FOV or something? It's still not clicking why you would do this.

Wouldn't you just end up seeing something like this for each eye, which doesn't need any fancy setup?

I’m trying to create something like this: http://www.dalaifelinto.com/multiview/gooseberry/gooseberry_benchmark_panorama.jpg

I can then project that texture onto two spheres, each one visible to one eye.
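A sketch of the per-eye sampling, assuming a top-bottom stereo layout like the linked image (which eye occupies which half is just a convention, and my assumption here):

```cpp
// Sketch: remap a [0, 1] panorama V coordinate into one half of a top-bottom
// stereo equirectangular texture. This helper assumes the left eye is on top.
double EyeV(double panoramaV, bool leftEye)
{
    return leftEye ? panoramaV * 0.5        // left eye samples v in [0, 0.5]
                   : 0.5 + panoramaV * 0.5; // right eye samples v in [0.5, 1]
}
```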

What I'm trying to do is pre-render the whole scene as a stereo panorama for use outside of Unreal.