It seems like a fitting title to mark my roughly half-decade hiatus from blogging. My last post, about camera representation, was around the time I left college. Soon after, life hit, and I never came back.
Since then, I’ve worked at a startup and at a university, been unemployed for two years, and now I write infrastructure code on the rendering team of a CAD company. I’ve tried to be a purist about working only on 3D software, to the degree one can find such jobs in a country like India, which has a (mistaken) reputation for cheap, outsourceable grunt work rather than actual engineering/R&D. Hence very few such jobs.
Throughout my jobs, I’ve worked on scientific visualization, real-time graphics, C++, performance optimization, nonlinear math, and cleaning up legacy messes, but never all in one place on a single engine. I feel I’d have grown much more if I had. My dream of a Carmack-style career, mastering everything from abstract math down to silicon, remains unfulfilled for now. Anyway, I digress; I’ll write more on this later.
The point of this post is to announce that I’ll try to write more, regardless of response. I want to share knowledge and build a more constructive outlet than venting. I’ll mainly write technical posts, as well as about my experience working on small to very large-scale software. Observations about:
Managing projects/people
Good (and bad) engineering
Tooling, release management, debugging tips
Working with people with very different skill sets
Attempts at developing as a specialist in a generalist team
Meta-analysis of the political/social implications of our work; finding creativity within silos of well-defined scope, etc.
Feel free to share with people who may find my tirades helpful, and reach out to me on Twitter.
This means the “rotation matrix at time $t$” is equal to the rotation matrix at time $t-1$ plus something.
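Spelled out, the “something” is the derivative term: reading $Observation(1)$ as a small discrete step of size $dt$,

$$R(t) \approx R(t - dt) + \hat w \, R(t - dt) \, dt$$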
So, why did we go through all this trouble?
Consider the “normal” scenario of rotation matrices:
we can get a new rotation by multiplying two other matrices, i.e.
$$R_3 = R_2 \times R_1$$
But rotations do not have this “linear” property, i.e., we cannot get a rotation matrix $R_3$ such that
$$R_3 = R_2 + R_1 \qquad (\text{Observation } 2)$$
This is false! In fact, in this case $R_3$ is not even a rotation matrix!
(Mathematically, we say that the set of rotation matrices is not a linear space: it is a group under multiplication, but it doesn’t even follow the simple rule of linear combination.)
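A quick numerical check of $Observation(2)$, sketched in Python/NumPy with z-axis rotations as the concrete case (the helper names here are mine, not from any particular library):

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def is_rotation(R, tol=1e-9):
    """A rotation matrix must satisfy R^T R = I and det(R) = +1."""
    return (np.allclose(R.T @ R, np.eye(3), atol=tol)
            and np.isclose(np.linalg.det(R), 1.0, atol=tol))

R1, R2 = rot_z(0.3), rot_z(0.5)
print(is_rotation(R2 @ R1))  # True:  the product of rotations is a rotation
print(is_rotation(R2 + R1))  # False: the sum fails both orthogonality and det = 1
```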
How nice would it be if rotations could be represented in a sort of “linear” way…
But that is exactly what we derived in $Observation(1)$!
Using the magic of differential equations, we’re able to represent rotations in a way that does not involve matrix multiplication, hence simplifying the process intuitively. Not to mention it is also computationally cheaper.
The Exponential map
Given the formulation of rotation in terms of the skew-symmetric matrix $\hat w$, is it possible to determine a useful representation of $R(t)$? Assume $\hat w$ is constant in time; this lets us formulate a first-order differential equation:
$$\left\{\begin{matrix}
\dot R(t) = \hat w R(t) \\
R(0) = I
\end{matrix}\right.$$
That is, the “change in $R$” at time $t$ is given by $\hat w$ times $R(t)$, with the initial condition $R(0) = I$.
Solving this equation uniquely determines these matrices, because knowing the first-order derivative tells us, from each instant to the next, how the rotation matrix changes.
Now, the given differential equation has the solution
$$R(t) = e^{\hat w t} \qquad (\text{Observation } 3)$$
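To convince ourselves that $Observation(3)$ really solves the differential equation, here is a sketch: integrate $\dot R = \hat w R$ with tiny forward-Euler steps and compare the result against `scipy.linalg.expm` (the `hat` helper is my own naming, not a SciPy function):

```python
import numpy as np
from scipy.linalg import expm

def hat(w):
    """Skew-symmetric matrix such that hat(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

w_hat = hat(np.array([0.2, -0.4, 0.7]))
t, steps = 1.0, 100_000
dt = t / steps

# Forward-Euler integration of dR/dt = w_hat @ R, starting from R(0) = I.
R = np.eye(3)
for _ in range(steps):
    R = R + (w_hat @ R) * dt

# Closed-form solution from Observation(3).
R_exact = expm(w_hat * t)
print(np.max(np.abs(R - R_exact)))  # tiny, and shrinks linearly as dt -> 0
```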
That’s right, we’re putting $e$ to the power of a matrix!
It is not actually that different from exponentiating regular numbers. Doing a Taylor expansion, we get:
$$ e^{\hat w t} = \sum_{n=0}^\infty \frac {(\hat w t)^n}{n!} = I + \hat w t + \frac {{(\hat w t)}^2} {2} + \cdots $$
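The series is straightforward to evaluate directly. A minimal sketch below (note that production libraries don’t do it this way; `scipy.linalg.expm`, for instance, uses scaling-and-squaring with Padé approximants):

```python
import numpy as np
from scipy.linalg import expm

def exp_taylor(A, n_terms=20):
    """Matrix exponential via the truncated Taylor series sum A^n / n!."""
    term = np.eye(A.shape[0])   # the n = 0 term
    total = term.copy()
    for n in range(1, n_terms):
        term = (term @ A) / n   # A^n / n! built from the previous term
        total += term
    return total

w_hat = np.array([[0.0, -0.7, 0.4],
                  [0.7,  0.0, -0.2],
                  [-0.4, 0.2, 0.0]])

R = exp_taylor(w_hat * 1.0)
print(np.max(np.abs(R - expm(w_hat * 1.0))))              # ~0 with 20 terms
print(np.allclose(R.T @ R, np.eye(3)), np.linalg.det(R))  # True, ~1.0
```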
What we see from $Observation(3)$ is that the function which maps a skew-symmetric matrix $\hat w$ to a rotation matrix is just the exponential ($e$) function.
This is basically a rotation matrix that rotates around the axis $w$ by the angle $t$ (only if $\left \| w \right \| = 1$).
(NOTE: we can also go back, i.e., “inverse map” to $\hat w$, by applying the logarithm. Again, logarithms of matrices can be computed from their Taylor expansions.)
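Both directions of the map are available in SciPy, which lets us sanity-check the axis-angle claim above. A sketch, assuming $w$ is a unit axis and using the rotation-vector convention of `scipy.spatial.transform.Rotation`:

```python
import numpy as np
from scipy.linalg import expm, logm
from scipy.spatial.transform import Rotation

def hat(w):
    """Skew-symmetric matrix such that hat(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

w = np.array([0.0, 0.0, 1.0])  # unit axis (||w|| = 1)
t = 0.9                        # rotation angle in radians

R = expm(hat(w) * t)
# The same rotation built directly from the axis-angle ("rotation vector") form:
print(np.allclose(R, Rotation.from_rotvec(w * t).as_matrix()))  # True

# Inverse map: the matrix logarithm recovers w_hat * t (for angles below pi).
print(np.allclose(logm(R).real, hat(w) * t))                    # True
```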
Now, Taylor expansions are again computationally expensive because of the matrix multiplications involved.