
I want to render a bunch of 3D points onto a 2D canvas without WebGL. I thought clip space and screen space were the same thing, and that the camera is used to convert from 3D world space to 2D screen space, but apparently they are not.

So in WebGL, when setting gl_Position, it's in clip space; later this position is converted to screen space by WebGL, and gl_FragCoord is set. How is this calculation done, and where?

And the camera matrix and view/projection matrices have nothing to do with converting clip space to screen space. I could have a 3D world space that fits into clip space, and then I wouldn't need a camera, right?

If all my assumptions are true, I need to learn how to convert from clip space into screen space. Here's my code:

const uMatrix = mvpMatrix(modelMatrix(transform));

// transform each vertex into 2d screen space
vertices = vertices.map(vertex => {
  let res = mat4.multiplyVector(uMatrix, [...vertex, 1.0]);
  // res is vec4 element, in clip space,
  // how to transform this into screen space?
  return [res[0], res[1]];
});


// viewProjectionMatrix calculation
const mvpMatrix = modelMatrix => {
  const { pos: camPos, target, up } = camera;
  const { fov, aspect, near, far } = camera;

  let camMatrix = mat4.lookAt(camPos, target, up);
  let viewMatrix = mat4.inverse(camMatrix);

  let projectionMatrix = mat4.perspective(fov, aspect, near, far);

  let viewProjectionMatrix = mat4.multiply(projectionMatrix, viewMatrix);

  return mat4.multiply(viewProjectionMatrix, modelMatrix);
};

The camera mentioned in this article transforms clip space to screen space. If so, it shouldn't be named a camera, right?


asked Sep 14, 2019 by eguneys; edited Sep 15, 2019 by user128511

1 Answer


First the geometry is clipped, according to the clip-space coordinate (gl_Position). The clip-space coordinate is a homogeneous coordinate. The condition for a homogeneous coordinate to be inside the clip volume is:

-w <= x, y, z <= w

The clip-space coordinate is transformed to a Cartesian coordinate in normalized device space by the perspective divide:

ndc_position = gl_Position.xyz / gl_Position.w

The normalized device space is a cube, with the left bottom front of (-1, -1, -1) and the right top back of (1, 1, 1).
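The perspective divide above can be sketched in plain JavaScript (the helper name clipToNdc is chosen here for illustration, not part of any API):

```javascript
// Perspective divide: convert a clip-space vec4 [x, y, z, w]
// into a Cartesian point in normalized device coordinates (NDC).
const clipToNdc = ([x, y, z, w]) => [x / w, y / w, z / w];
```

For example, the clip-space point [2, 4, 6, 2] divides down to the NDC point [1, 2, 3] (which lies outside the NDC cube, so it would have been clipped).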

The x and y components of the normalized device space coordinate are linearly mapped to the viewport, which is set by gl.viewport (see WebGL Viewport). The viewport is a rectangle with an origin (x, y), a width, and a height:

xw = (ndc_position.x + 1) * (width / 2) + x
yw = (ndc_position.y + 1) * (height / 2) + y

xw and yw can be accessed by gl_FragCoord.xy in the fragment shader.
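The viewport mapping above, as a small sketch (the viewport object mimics the (x, y, width, height) arguments of gl.viewport; ndcToWindow is a name chosen here):

```javascript
// Map NDC x/y to window (viewport) coordinates, mirroring the
// formulas xw = (ndc.x + 1) * (width / 2) + x and
//          yw = (ndc.y + 1) * (height / 2) + y.
const ndcToWindow = (ndcX, ndcY, { x, y, width, height }) => [
  (ndcX + 1) * (width / 2) + x,
  (ndcY + 1) * (height / 2) + y,
];
```

With a 400x300 viewport at the origin, the NDC center (0, 0) maps to the window center (200, 150), and the NDC corner (-1, -1) maps to the viewport origin (0, 0).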

The z component of the normalized device space coordinate is linearly mapped to the depth range, which is [0.0, 1.0] by default but can be set by gl.depthRange (see Viewport Depth Range). The depth range consists of a near value and a far value; far has to be greater than near, and both values have to be in [0.0, 1.0]:

depth = (ndc_position.z + 1) * (far - near) / 2 + near

The depth can be accessed by gl_FragCoord.z in the fragment shader.
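The depth mapping is one line; this sketch defaults to the default depth range [0.0, 1.0] (ndcDepthToWindow is a name chosen here):

```javascript
// Map NDC z to window depth, using the default depth range
// [0.0, 1.0] unless other near/far values are supplied.
const ndcDepthToWindow = (ndcZ, near = 0.0, far = 1.0) =>
  (ndcZ + 1) * (far - near) / 2 + near;
```

So NDC z = -1 (the near plane) maps to depth 0, and NDC z = 1 (the far plane) maps to depth 1.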

All these operations are done automatically in the rendering pipeline; they are part of the vertex post-processing stage.
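For the question's use case, rendering to a 2D canvas, the two steps (perspective divide, then viewport mapping) can be combined into one helper that takes the clip-space vec4 produced by the MVP multiply in the question's code. Note one assumption: a 2D canvas has its y axis pointing down, whereas NDC y points up, so y is flipped here; clipToCanvas and the canvas dimensions are names introduced for this sketch:

```javascript
// Project a clip-space vec4 (e.g. mat4.multiplyVector(uMatrix, [...vertex, 1]))
// to 2D canvas pixel coordinates.
const clipToCanvas = ([x, y, z, w], canvasWidth, canvasHeight) => {
  // Perspective divide: clip space -> normalized device coordinates
  const ndcX = x / w;
  const ndcY = y / w;
  // NDC -> canvas pixels (viewport at the origin, y flipped
  // because canvas y grows downward)
  return [
    (ndcX + 1) * (canvasWidth / 2),
    (1 - ndcY) * (canvasHeight / 2),
  ];
};
```

In the question's map over vertices, returning clipToCanvas(res, canvas.width, canvas.height) instead of [res[0], res[1]] yields pixel coordinates; a point at the NDC origin lands in the middle of a 400x300 canvas at (200, 150). Points with x, y, or z outside [-w, w] should also be discarded to emulate clipping.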

Tags: javascript · How does a camera convert from clip space into screen space? - Stack Overflow