CUDA interoperability

## Context

We currently utilise CUDA and OpenGL to run real-time multi-agent simulations with potentially tens or hundreds of thousands of agents, for example pedestrian and traffic simulations, where pedestrians and vehicles respectively are the agents.

This is achieved by keeping agent data on the GPU at all times, avoiding unnecessary transfers over the PCIe bus. Each frame, CUDA simply copies the agent location/rotation/animation data into a number of texture buffers; these buffers are then read by the vertex shader during rendering to reposition and animate each model. We use instanced rendering for this, although it can also be done with individual draw calls at apparently minimal overhead.
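
For concreteness, our current CUDA/OpenGL path looks roughly like the sketch below (names such as `writeAgentData`/`createAgentTBO` and the one-`float4`-per-agent layout are purely illustrative, and error handling is omitted): a GL buffer backing a texture buffer object is registered with CUDA once, mapped each frame so a kernel can write agent data, then unmapped so the vertex shader can sample it.

```cpp
#include <GL/glew.h>            // or any GL loader exposing buffer/texture-buffer entry points
#include <cuda_gl_interop.h>
#include <cuda_runtime.h>

// Illustrative kernel: one float4 per agent (xyz position + rotation), real
// simulation state omitted.
__global__ void writeAgentData(float4* out, int numAgents)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numAgents)
        out[i] = make_float4(0.f, 0.f, 0.f, 0.f);
}

cudaGraphicsResource* agentResource = nullptr;

void createAgentTBO(GLuint& buf, GLuint& tex, int numAgents)
{
    // GL buffer that backs the texture buffer object sampled by the vertex shader
    glGenBuffers(1, &buf);
    glBindBuffer(GL_TEXTURE_BUFFER, buf);
    glBufferData(GL_TEXTURE_BUFFER, numAgents * 4 * sizeof(float), nullptr, GL_DYNAMIC_DRAW);

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_BUFFER, tex);
    glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, buf);

    // Register once; CUDA only ever writes, so WriteDiscard is appropriate
    cudaGraphicsGLRegisterBuffer(&agentResource, buf, cudaGraphicsMapFlagsWriteDiscard);
}

void updateAgentsPerFrame(int numAgents)
{
    float4* devPtr = nullptr;
    size_t  bytes  = 0;
    cudaGraphicsMapResources(1, &agentResource);
    cudaGraphicsResourceGetMappedPointer((void**)&devPtr, &bytes, agentResource);

    writeAgentData<<<(numAgents + 255) / 256, 256>>>(devPtr, numAgents);

    cudaGraphicsUnmapResources(1, &agentResource);  // GL can now sample the TBO during render
}
```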

We’re investigating the suitability of transitioning our visualisations to a game/graphics engine, both to provide richer visualisations and to abstract away VR integration in the future.

## Questions

  1. Is it possible to access raw texture buffer IDs within Unreal? We should be able to generate these with CUDA and provide them to Unreal; alternatively, we could use buffers allocated by Unreal, provided the IDs are exposed. CUDA supports interoperability with both OpenGL and DirectX, albeit through separate APIs (see the sketch after this list).

  2. If significant parts of rendering, such as positioning and animating models, are deferred to custom vertex shaders, are we losing the value of Unreal for our agent rendering?
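
On question 1, a hedged sketch of what the CUDA side would need on Windows, assuming Unreal is rendering through Direct3D 11: given access to the underlying `ID3D11Buffer` (whether and how Unreal exposes that native resource is exactly the open question), CUDA can register and map it much like the OpenGL path above. All names here are illustrative.

```cpp
#include <d3d11.h>
#include <cuda_d3d11_interop.h>
#include <cuda_runtime.h>

cudaGraphicsResource* d3dAgentResource = nullptr;

// 'agentBuffer' is assumed to be the native D3D11 buffer behind the
// engine-side resource; obtaining it from Unreal is the part that needs confirming.
void registerEngineBuffer(ID3D11Buffer* agentBuffer)
{
    cudaGraphicsD3D11RegisterResource(&d3dAgentResource, agentBuffer,
                                      cudaGraphicsRegisterFlagsNone);
}

void writeFromCuda()
{
    void*  devPtr = nullptr;
    size_t bytes  = 0;
    cudaGraphicsMapResources(1, &d3dAgentResource);
    cudaGraphicsResourceGetMappedPointer(&devPtr, &bytes, d3dAgentResource);
    // launch the same agent-update kernel as in the OpenGL sketch here
    cudaGraphicsUnmapResources(1, &d3dAgentResource);
}
```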

Other suggestions are also welcome.
Thanks