As you may know, I am working on a 3D graphing utility for the Casio Prizm graphing calculator. I have it working nicely, short of a few features. Notably, I've been using a camera rotation system whereby pressing one set of arrows adds/subtracts to/from the absolute theta_y, and pressing the other set adds/subtracts to/from the absolute theta_z. For correct rotation, the change in [theta_x, theta_y, theta_z] needs to depend on the current [theta_x, theta_y, theta_z], and I am having a hard time deriving the correct math for it. First, a bit of background:
I am defining the rotation for the graph with three angles, theta_x, theta_y, theta_z (or XYZ), applied in the order X, Y, Z. The transform I'm using is the camera transform: a left-handed transform with, as I said, an angle order of X, Y, then Z. My code to calculate the camera position looks like this (remember, I want to keep the camera pointed at the center of the graph, which is usually the three-dimensional origin (0,0,0)):
Code:
// precompute the sines/cosines of the three rotation angles
costx = mcosf(theta_x); sintx = msinf(theta_x);
costy = mcosf(theta_y); sinty = msinf(theta_y);
costz = mcosf(theta_z); sintz = msinf(theta_z);
// start the camera cam_radius units out along the view axis...
float alpha = 0.f, beta = 0.f, gamma = cam_radius;
// ...and rotate that offset by the camera transform (X, then Y, then Z)
float alpha_prime = alpha*costy*costz + beta*(costy*sintz) + gamma*(sinty);
float beta_prime = -alpha*(costx*sintz+costz*sintx*sinty) + beta*(costx*costz-sintx*sinty*sintz) + gamma*costy*sintx;
float gamma_prime = alpha*(sintx*sintz-costx*costz*sinty) - beta*(costz*sintx+costx*sinty*sintz) + gamma*costx*costy;
// negate to get the camera position relative to the origin
cx = -alpha_prime;
cy = -beta_prime;
cz = -gamma_prime;
// adjust for grid center (ex,ey)
cx += ex;
cy += ey;
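For reference, since alpha and beta are zero here, those expressions collapse to the following (the same math, just simplified, if I've done it right; note that theta_z drops out of the camera position entirely):
Code:
// equivalent to the block above with alpha = beta = 0
cx = ex - cam_radius*sinty;
cy = ey - cam_radius*costy*sintx;
cz =    - cam_radius*costx*costy;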
Next, the math I use to apply the camera transform to the 3D points being drawn, accounting for the camera position and heading. (dx, dy, dz) are the remapped coordinates in 3-space:
Code:
float dx = (val_x-cx)*costy*costz + (val_y-cy)*(costy*sintz) + (val_z-cz)*(sinty);
float dy = -(val_x-cx)*(costx*sintz+costz*sintx*sinty) + (val_y-cy)*(costx*costz-sintx*sinty*sintz) + (val_z-cz)*costy*sintx;
float dz = (val_x-cx)*(sintx*sintz-costx*costz*sinty) - (val_y-cy)*(costz*sintx+costx*sinty*sintz) + (val_z-cz)*costx*costy;
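Since the camera-position code and the point mapping use the same rotation, I've been thinking of pulling it out into a helper along these lines (just a sketch with placeholder names; costx/sintx and friends are the globals computed above):
Code:
// apply the camera rotation (X, then Y, then Z) to the vector (x, y, z);
// this is the same matrix used for both the camera offset and the drawn points
extern float costx, sintx, costy, sinty, costz, sintz;

static void rotate_cam(float x, float y, float z, float *ox, float *oy, float *oz) {
    *ox =  x*costy*costz + y*(costy*sintz) + z*(sinty);
    *oy = -x*(costx*sintz + costz*sintx*sinty) + y*(costx*costz - sintx*sinty*sintz) + z*costy*sintx;
    *oz =  x*(sintx*sintz - costx*costz*sinty) - y*(costz*sintx + costx*sinty*sintz) + z*costx*costy;
}
With that, the point mapping above is just rotate_cam(val_x-cx, val_y-cy, val_z-cz, &dx, &dy, &dz), and the camera offset is rotate_cam(alpha, beta, gamma, ...) with the results negated and then shifted by ex/ey.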
However, adjusting the camera rotation is proving to be a real problem. I've considered several different approaches:
- Simply add/subtract pi/8 to/from theta_y/theta_z. This does rotate the view, but not intuitively, since the effect of each key depends on the current orientation.
- Use the camera transform to turn a delta theta_y or delta theta_z into delta theta_x/y/z values, and add those to theta_x/y/z (roughly sketched after this list). This appears not to work.
- Take the camera's cx/cy/cz coordinates and apply a camera transform of +/- pi/8 in either theta_y or theta_z. Then work backwards from the resulting camera coordinates to get the XYZ angles of the vector joining the origin to that point.
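Here is roughly what I mean by the second approach (a sketch only; rotate_relative is a made-up name, and costx/sintx and friends are the globals from above):
Code:
// second approach (sketch): treat the requested change as the vector
// (0, d_theta_y, d_theta_z), push it through the same camera rotation,
// and add the transformed components to the stored angles
extern float costx, sintx, costy, sinty, costz, sintz;
extern float theta_x, theta_y, theta_z;

void rotate_relative(float d_theta_y, float d_theta_z) {
    float ax = 0.f, ay = d_theta_y, az = d_theta_z;
    float rx =  ax*costy*costz + ay*(costy*sintz) + az*(sinty);
    float ry = -ax*(costx*sintz + costz*sintx*sinty) + ay*(costx*costz - sintx*sinty*sintz) + az*costy*sintx;
    float rz =  ax*(sintx*sintz - costx*costz*sinty) - ay*(costz*sintx + costx*sinty*sintz) + az*costx*costy;
    theta_x += rx;
    theta_y += ry;
    theta_z += rz;
}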
The code for the last approach is below; sadly, it does not work. (matan2f is a custom atan2() implementation, like mcosf/msinf.) Any assistance would be greatly appreciated.
Code:
//alpha, beta, gamma are the delta rotation angles
float c1 = mcosf(alpha), c2 = mcosf(beta), c3 = mcosf(gamma);
float s1 = msinf(alpha), s2 = msinf(beta), s3 = msinf(gamma);
//[cxo cyo czo] are the camera coords before inversion/re-centering
float alpha_prime = cxo*c2*c3 + cyo*(c2*s3) + czo*(s2);
float beta_prime = -cxo*(c1*s3+c3*s1*s2) + cyo*(c1*c3-s1*s2*s3) + czo*c2*s1;
float gamma_prime = cxo*(s1*s3-c1*c3*s2) - cyo*(c3*s1+c1*s2*s3) + czo*c1*c2;
theta_x = matan2f( beta_prime, gamma_prime );
if (gamma_prime >= 0) {
    theta_y = matan2f( alpha_prime * mcosf(theta_x), gamma_prime );
} else {
    theta_y = matan2f( alpha_prime * mcosf(theta_x), -gamma_prime );
}
theta_z = matan2f( mcosf(theta_x), msinf(theta_x) * msinf(theta_y) );
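For anyone who wants to poke at this, a quick way to reproduce it off-calculator would be something like the following sketch: standard cosf/sinf/atan2f stand in for mcosf/msinf/matan2f, the delta rotation is set to zero, and the camera offset is built with the same formulas as above, so the recovered angles can be compared against the ones the offset was built from.
Code:
#include <math.h>
#include <stdio.h>

int main(void) {
    // known input angles to try to recover
    float theta_x = 0.3f, theta_y = 0.7f, theta_z = -0.2f;
    float costx = cosf(theta_x), sintx = sinf(theta_x);
    float costy = cosf(theta_y), sinty = sinf(theta_y);

    // camera offset before inversion/re-centering (simplified form from above)
    float cam_radius = 10.f;
    float cxo = cam_radius*sinty;
    float cyo = cam_radius*costy*sintx;
    float czo = cam_radius*costx*costy;

    // angle recovery from the snippet above, with the delta rotation zeroed
    // (so alpha_prime/beta_prime/gamma_prime are just cxo/cyo/czo)
    float alpha_prime = cxo, beta_prime = cyo, gamma_prime = czo;
    float rx = atan2f(beta_prime, gamma_prime);
    float ry = (gamma_prime >= 0)
             ? atan2f(alpha_prime * cosf(rx),  gamma_prime)
             : atan2f(alpha_prime * cosf(rx), -gamma_prime);
    float rz = atan2f(cosf(rx), sinf(rx) * sinf(ry));

    printf("in : %f %f %f\n", theta_x, theta_y, theta_z);
    printf("out: %f %f %f\n", rx, ry, rz);
    return 0;
}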
I should mention that I found the angle-recovery algorithm at the end of the third-approach code here:
http://stackoverflow.com/questions/1251828/calculate-rotations-to-look-at-a-3d-point. Again, thanks in advance for any help, large or small.