Simulate Crosseyes VR

Is it possible to translate the x, y, z coordinates of a camera lens so that a normal-sighted person would see what a “cross-eyed” person sees?

I’m not actually intending to use it on normal-sighted people, but rather on individuals who have crossed eyes, so that they could see what a normal person sees. I understand that this goes against the customary rules for avoiding nausea.

I’d like to be able to apply these translations across the whole lens field, but also to individual objects.

Through some experimentation and help from others, I found I could apply some filters on a per-object basis by taking the object’s screen position and applying changes when it sat at certain x-positions relative to the center of the screen, but I could find no way of translating its vertical, horizontal, and torsional coordinates.

(P.S. I apologize for cross-posting; this originally seemed to me like a question most related to rendering.)

I feel like this might be something addressed at the HMD level itself.

Like give someone who has this problem a Google Cardboard, send them somewhere like here, and have them calibrate the Cardboard until they see things “correctly”:

https://wwgc.firebaseapp.com/

Sorry, I don’t even want to make this comment because I don’t want to sound like I don’t appreciate your response (I do), but it isn’t really a solution to my problem. Someone else “accepted” it as a resolution, but it isn’t one. I need to be able to dynamically control the degree of the translations during play, and also apply the translations to individual objects rather than the whole lens. I’m not sure that’s possible, but it looks like I’ll need to move to something with more direct control. I’m hoping OpenGL might be doable.