Camera space with FOV in material

http://i.imgur.com/kR3QjEb.gif
http://i.imgur.com/xtv3Z0n.gif

As the GIFs show, the normals are not handled correctly in camera space, because the transformation of the normals does not take the distortion of the camera FOV into account.
How can I calculate and transform the normals into camera space while taking the FOV into account?

You could do an additional transform for normals that are already in camera space, using screen-aligned UVs. But for the effect you are after, there is a better solution.

Use the Camera Vector material expression. It gives you a vector from the pixel to the camera.

Yes, the camera vector is similar to what I need,

but I can't combine it with the world normal.

If it's not too much trouble, I'd be happy to get a hint.

Take the dot product of the camera vector and the vertex/pixel normal.
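In a custom node this boils down to one line (a minimal sketch; CameraVector and Normal stand for the world-space, unit-length camera vector and pixel normal wired into the node):

```
// Facing mask: 1 where the surface points at the camera, 0 at grazing angles.
float facing = saturate(dot(CameraVector, Normal));
return facing;
```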

I still can't get adjusted coordinates in camera space that account for the different view direction of each pixel.
Could you describe the whole transformation? Thanks.

You should fully describe the effect you want to achieve instead.

Transform the pixel's normal to camera space, given the FOV, and create camera-aligned UV coordinates for the object
(matcap UVs, like in ShaderFX in 3ds Max or Maya).

While rotating the camera, the “camera aligned UV” should not change; the UVs should change only when the camera is moving.

Transform the pixel's normal to camera space, given the FOV.

FOV is completely absent from the definition of camera space. If it were not, you would have a multitude of camera spaces.

That's what I mean by "camera space taking the FOV into account".

It approximates the desired result, but it is still not quite the same.
http://i.imgur.com/vg6dyZu.mp4
In the clip you can see a slight rotation of the UVs when the camera rotates.

I tried, but I'm having difficulty with that…
If you showed how to do it, I would be grateful.

If you want a screen-aligned texture on your object, but one that stays stationary while the camera is rotating, why don't you just use screen-aligned UVs, with an offset added to compensate for the camera rotation?

I've run into one of two problems:
either normalizing the screen-aligned UV vector for the full rotation,
or getting the UVs after the additional turn.

(Before this the texture is not the RG mask, but that's not the point.)
The aligned RG after normalization does not match…

The offset to compensate for camera rotation, in a custom node:

// Shortest-arc quaternion taking v1 onto v2 (v1, v2 are the node's vector inputs).
float4 q;
q.xyz = cross(v1, v2);
q.w = sqrt(dot(v1, v1) * dot(v2, v2)) + dot(v1, v2);
q = normalize(q); // the rotation formula below requires a unit quaternion
return v1 + 2.0 * cross(cross(v1, q.xyz) + q.w * v1, q.xyz);

Post a reference to the effect you are trying to achieve.

Screen-aligned UVs are normalWS * camera_direction_vectorWS (a matrix transform), and that space is flat.

But I need to transform normalWS * (cameravectorWS * camera_direction_vectorWS), which is a curved space.

Then convert the RG into UVs.

And then, if we consider a sphere, UV (0.5, 0.5) will always be at the center of the sphere, looking at the camera, and the edges of the sphere will always be in the tangent space perpendicular to the camera's view.
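The "convert the RG into UVs" step above can be sketched like this (assuming Nv is the view-space normal; this is the naive mapping, without any FOV compensation):

```
// Map the view-space normal's XY from [-1,1] into the [0,1] UV range.
// A normal pointing straight at the camera lands at UV (0.5, 0.5).
float2 uv = Nv.xy * 0.5 + 0.5;
return uv;
```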

A reference to an effect that was already implemented: a video, a screenshot, a description, or something like that. What you just posted is, as said above, as trivial as taking a dot product between the view direction and the vertex normal.

A screen-aligned texture from pixel normals on my object, but one that stays stationary while the camera is rotating…

You are terribly over-complicating things. Matcap UVs can be obtained like this:

Further read [here][2].

But it is not oriented to the camera; it is oriented to screen Z, not to the camera. It looks good only when the object is in the center of the screen.

The material from the first question serves as an example.

My last comment refers to this video: http://i.imgur.com/UttOPth.gifv

There is a formula in the link I’ve posted.
Use this for custom node:

float3 R = C - 2.0 * dot(N, C) * N;
float D = 2.0 * sqrt(R.x * R.x + R.y * R.y + (R.z + 1.0) * (R.z + 1.0));
float U = R.x / D + 0.5;
float V = R.y / D + 0.5;
return float2(U, V);

The custom node should be set to float2 output, with the view-space camera vector as input C and the view-space normal as input N.
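If those inputs are only available in world space, a hedged sketch of bringing them into view space inside the node (WorldToView here is an assumed input holding the rotation part of the view matrix; in practice you would normally just put TransformVector (World → View) nodes on the material inputs instead):

```
// Assumed: Nw / Cw are the world-space normal and camera vector,
// WorldToView is an assumed 3x3 rotation part of the view matrix.
float3 N = normalize(mul(Nw, (float3x3)WorldToView));
float3 C = normalize(mul(Cw, (float3x3)WorldToView));
```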

As for the distortion of the texture at the sides of the viewport, well… your sphere isn't a sphere in the corner of the viewport anymore, so there will inevitably be some distortion, and it will increase with larger FOVs. Typically this is ignored. Just have proper padding in your matcap texture.

You could play around further and calculate the divisor separately for U and V, introducing a sort of compensation for the position of the pixel on screen, but in the long run it is not worth the effort, no matter what kind of effect you are after.