I'm trying to understand tensor notation and working with indices in special relativity. I use a book for this purpose in which $\eta_{\mu\nu}=\eta^{\mu\nu}$ is used for the metric tensor and a vector is transformed according to the rule $$x'^\mu= \Lambda^\mu{}_{\alpha}x^\alpha$$ (Lorentz-transformation).
I think I understand what is going on up to this point but now, I'm struggling to understand how the following formula works:
$$\eta_{\nu\mu}\Lambda^{\mu}{}_{\alpha}\eta^{\alpha\kappa} ~=~ \Lambda_{\nu}{}^{\kappa}$$
Why is this not equal to (for instance) $\Lambda^{\kappa}{}_{\nu}$? In addition, I have trouble understanding what the difference is between $\Lambda_\alpha^{\ \ \beta}$, $\Lambda_{\ \ \alpha}^\beta$, $\Lambda^\alpha_{\ \ \beta}$ and $\Lambda^{\ \ \alpha}_\beta$ (order and position of indices). And if we write tensors as matrices, which indices stand for the rows and which ones stand for the columns?
I hope someone can clarify this for me.
Answer
With tensor index notation, each "slot" is distinct and can be raised or lowered separately. Raising the second slot gives $\eta^{\kappa\alpha}\Lambda^{\mu}{}_{\alpha} = \Lambda^{\mu\kappa}$, and then lowering the first slot gives $\eta_{\nu\mu}\Lambda^{\mu\kappa} = \Lambda_{\nu}{}^{\kappa}$. This is not the same object as $\Lambda^{\kappa}{}_{\nu}$: raising and lowering preserve the horizontal order of the indices, so in $\Lambda_{\nu}{}^{\kappa}$ the index $\nu$ still occupies the first slot and $\kappa$ the second, whereas $\Lambda^{\kappa}{}_{\nu}$ has them in the opposite slots; as matrices, the two are transposes of each other.
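This is easy to verify numerically. Here is a sketch using NumPy, assuming the metric convention $\eta = \mathrm{diag}(1,-1,-1,-1)$ and an illustrative boost along $x$ with $\beta = 0.6$ (these specific numbers are my own example, not from the question):

```python
import numpy as np

# Metric with the book's convention eta_{mu nu} = eta^{mu nu} = diag(1, -1, -1, -1)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# A boost along x: Lambda^mu_alpha, first index = row, second index = column
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.eye(4)
L[0, 0] = L[1, 1] = gamma
L[0, 1] = L[1, 0] = -gamma * beta

# Raise the second slot: eta^{kappa alpha} Lambda^mu_alpha = Lambda^{mu kappa}
L_upup = np.einsum('ka,ma->mk', eta, L)

# Lower the first slot: eta_{nu mu} Lambda^{mu kappa} = Lambda_nu^kappa
L_lowup = np.einsum('nm,mk->nk', eta, L_upup)

# As a matrix product this is just eta @ L @ eta ...
print(np.allclose(L_lowup, eta @ L @ eta))   # True

# ... and it is NOT the matrix whose (kappa, nu) entry is Lambda^kappa_nu,
# i.e. it differs from the transpose of L:
print(np.allclose(L_lowup, L.T))             # False
```

The last check makes the asker's point concrete: $\Lambda_{\nu}{}^{\kappa}$ and $\Lambda^{\kappa}{}_{\nu}$ really are different matrices once $\beta \neq 0$.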
When representing these objects as matrices, the usual convention is the first index is the row and the second is the column.
Be careful to stick with this convention when converting a tensor equation into its matrix representation. For an equation like $A_{ij} = C^{k}{}_{j}B_{ik}$, the matrix form is $\mathbf{A} = \mathbf{B}\mathbf{C}$. Notice the swap in order: for the matrix multiplication to represent the sum correctly, the repeated index must appear as the inner pair of the four indices, as in $B_{ik}C^{k}{}_{j}$.
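This bookkeeping can also be checked numerically. A sketch with NumPy, using arbitrary random matrices as a stand-in for the tensor components (the matrices and seed are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))   # B_{ik}: i = row, k = column
C = rng.standard_normal((4, 4))   # C^k_j: k = row, j = column

# A_{ij} = C^k_j B_{ik}: sum over the repeated index k
A = np.einsum('kj,ik->ij', C, B)

# The matrix form is A = B C (note the swap), not A = C B:
print(np.allclose(A, B @ C))   # True
print(np.allclose(A, C @ B))   # False
```

Writing the summed index as the inner pair, $B_{ik}C^{k}{}_{j}$, makes the correct multiplication order immediate.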