World space to perspective clip space help [Elysian Shadows]
Posted: Tue Jan 21, 2014 2:49 pm
I've been working on omnidirectional shadows for point lights. The underlying implementation is complete; I'm now just tripping over the linear algebra and matrix transforms needed to make the mapping work correctly.
If you manage to help, you will get a shoutout in AiGD Chapter 22 and my personal gratitude.
So basically I'm rendering the scene through a view frustum in each of the six directions from the light source, storing the depth values for each face in a cube map, and then using that cube map for a depth comparison in our fragment shader. This all pretty much works... but the shadows seem distorted, translated, and sometimes come from the wrong direction.
First of all, ES's native transform is defined by the following orthographic projection. PLEASE NOTE exactly what coordinate space this projection is establishing: the top of our screen in the Y direction is 0 and the bottom is the screen height; a Z coordinate of 0 is far away, and the closer the Z coordinate gets to 1000.0f, the closer it is to the camera.
Code:
const float pfar = 1000.0f;
const float pnear = -999.0f;
const float pleft = 0.0f;
const float pright = projWidth;
const float ptop = 0.0f;
const float pbottom = projHeight;
gyVidOrthoProjMatrix(&projectionMatrix, pleft, pright, pbottom, ptop, pnear, pfar);
This orthographic projection is not consistent with the coordinate space of a view frustum, which is why I believe I am having such a hard time with this math. It is, however, easier to work with in 2D, and represents the Dreamcast's native coordinate system.
Now, for each directional render pass from the light, our camera and projection matrices look like this:
Code:
//projection matrix
gluPerspective(90.0f, 1.0f, 0.005f, 500.0f);
Code:
//matrices defining each face of the cube
glLoadIdentity(); gluLookAt(0.0, 0.0, 0.0,  1.0, 0.0, 0.0,  0.0,  0.0, 1.0); // +X
glGetFloatv(GL_MODELVIEW_MATRIX, (float*)_pointShadowFaceMatrices[CUBE_MAP_RIGHT]);
glLoadIdentity(); gluLookAt(0.0, 0.0, 0.0, -1.0, 0.0, 0.0,  0.0, -1.0, 0.0); // -X
glGetFloatv(GL_MODELVIEW_MATRIX, (float*)_pointShadowFaceMatrices[CUBE_MAP_LEFT]);
glLoadIdentity(); gluLookAt(0.0, 0.0, 0.0,  0.0, 1.0, 0.0,  0.0,  0.0, 1.0); // +Y
glGetFloatv(GL_MODELVIEW_MATRIX, (float*)_pointShadowFaceMatrices[CUBE_MAP_BOTTOM]);
glLoadIdentity(); gluLookAt(0.0, 0.0, 0.0,  0.0, -1.0, 0.0,  0.0,  0.0, 1.0); // -Y
glGetFloatv(GL_MODELVIEW_MATRIX, (float*)_pointShadowFaceMatrices[CUBE_MAP_TOP]);
glLoadIdentity(); gluLookAt(0.0, 0.0, 0.0,  0.0, 0.0, 1.0,  0.0, -1.0, 0.0); // +Z
glGetFloatv(GL_MODELVIEW_MATRIX, (float*)_pointShadowFaceMatrices[CUBE_MAP_BACK]);
glLoadIdentity(); gluLookAt(0.0, 0.0, 0.0,  0.0, 0.0, -1.0,  0.0, -1.0, 0.0); // -Z
glGetFloatv(GL_MODELVIEW_MATRIX, (float*)_pointShadowFaceMatrices[CUBE_MAP_FRONT]);
Code:
//camera position transform
gyMatTranslate(-curLight->position.x, -curLight->position.y, -curLight->position.z);
So for each shadow render pass we use the same perspective projection and compute the view matrix as faceMatrix * cameraPosMatrix. Then, when we're ready for the actual render pass, we transform each world-space vertex by the cameraPosMatrix to move it relative to the camera and attempt the lookup into our cube-map depth texture in the fragment shader:
Code:
vec4 absPos = abs(vShadowPos[i]);
float frus_z = -max(absPos.x, max(absPos.y, absPos.z));
vec4 clip = uShadowCubeMapProjMat * vec4(0.0, 0.0, frus_z, 1.0);
float depth = (clip.z/clip.w)*0.5+0.5;
if(textureCube(uShadowCubeMaps[i], vShadowPos[i].xyz).r < depth) {
    shadowFactor = 0.1;
}
...only, as I've said before, this produces incorrect shadows. I'm fairly positive this has to do with the way we have defined our world space being fundamentally incompatible with an OpenGL view frustum, but I cannot figure out how to properly transform coordinates between the two spaces.