Stefan Ram
2024-07-31 06:44:26 UTC
.
I have read the following derivation in a chapter on SR.
|(0) We define:
|X := p_"mu" p^"mu",
|
|(1) from this, by Eq. 2.36 we get:
|= p_"mu" "eta"^("mu""nu") p_"mu",
[[Mod. note -- I think that last subscript "mu" should be a "nu".
That is, equations (0) and (1) should read (switching to LaTeX notation)
$X := p_\mu p^\mu
= p_\mu \eta^{\mu\nu} p_\nu$
-- jt]]
|
|(2) from this, using matrix notation, we get:
|
|                       (  1  0  0  0 ) ( p_0 )
|= ( p_0 p_1 p_2 p_3 ) (  0 -1  0  0 ) ( p_1 )
|                       (  0  0 -1  0 ) ( p_2 )
|                       (  0  0  0 -1 ) ( p_3 ),
|
|(3) from this, we get:
|= p_0 p_0 - p_1 p_1 - p_2 p_2 - p_3 p_3,
|
|(4) using p_1 p_1 + p_2 p_2 + p_3 p_3 =: p^"3-vector" * p^"3-vector":
|= p_0 p_0 - p^"3-vector" * p^"3-vector".
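To make steps (1) through (4) concrete, here is a minimal numerical
check in Python/NumPy (the component values of p are arbitrary
illustrative numbers; the signature (+,-,-,-) is read off the quoted
"eta" matrix):

  import numpy as np

  # Minkowski metric with signature (+, -, -, -), as in the quoted matrix.
  eta = np.diag([1.0, -1.0, -1.0, -1.0])

  # Arbitrary illustrative values for the components p_0, p_1, p_2, p_3.
  p = np.array([4.0, 1.0, 2.0, 3.0])

  # Step (1) as a plain double sum over the indices "mu" and "nu".
  X = sum(p[m] * eta[m, n] * p[n] for m in range(4) for n in range(4))

  # Steps (3)-(4): p_0 p_0 minus the dot product of the spatial 3-vector.
  X_check = p[0] * p[0] - np.dot(p[1:], p[1:])

  print(X, X_check)   # both print 2.0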
. Now, I used to believe that a vector with an upper index is
a contravariant vector written as a column and a vector with
a lower index is covariant and written as a row. We thus can
write (0) in two-dimensional notation:
                        ( p^0 )
 = ( p_0 p_1 p_2 p_3 ) ( p^1 )
                        ( p^2 )
                        ( p^3 )
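In that picture, (0) is a well-defined matrix product: a 1x4 row
times a 4x1 column gives a 1x1 matrix, i.e. a scalar. A minimal
NumPy sketch of this shape bookkeeping (same arbitrary numbers as
above; p^"mu" is obtained from p_"mu" by raising the index with
"eta"):

  import numpy as np

  eta = np.diag([1.0, -1.0, -1.0, -1.0])
  p_lower = np.array([4.0, 1.0, 2.0, 3.0])   # covariant components p_mu
  p_upper = eta @ p_lower                    # contravariant p^mu = eta^{mu nu} p_nu

  row = p_lower.reshape(1, 4)                # p_mu as a 1x4 row vector
  col = p_upper.reshape(4, 1)                # p^mu as a 4x1 column vector

  print(row @ col)                           # [[2.]] -- a 1x1 matrix, the scalar X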
So, I have a question about the transition from (1) to (2):
In (1), the initial and the final "p" both have a /lower/ index.
In (2), the initial p is written as a row vector, while the final p
now is written as a column vector.
When, in (1), both "p" are written in exactly the same way, why
then is the first "p" in (2) written as a /row/ vector and the
second "p" as a /column/ vector?
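Note that the contraction in (1) is, by itself, just a double sum
over ordinary numbers, so it needs no row or column shapes at all;
the row/column arrangement only enters once the sum is re-encoded
as a matrix product. A sketch using np.einsum, which performs the
index summation without committing either "p" to a shape:

  import numpy as np

  eta = np.diag([1.0, -1.0, -1.0, -1.0])
  p = np.array([4.0, 1.0, 2.0, 3.0])   # arbitrary values, as above

  # Sum over "mu" and "nu" directly; both factors are plain 1-d arrays,
  # neither a row nor a column.
  X = np.einsum('m,mn,n->', p, eta, p)
  print(X)   # 2.0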
Let's write p_"mu" "eta"^("mu""nu") p_"nu" with two row vectors,
as it should be written:
                        (  1  0  0  0 )
 = ( p_0 p_1 p_2 p_3 ) (  0 -1  0  0 ) ( p_0 p_1 p_2 p_3 )
                        (  0  0 -1  0 )
                        (  0  0  0 -1 )
. AFAIK, the rules of matrix multiplication simply do not define
a product of a 4x4 matrix with a 1x4 matrix: the number of columns
of the left matrix (4) would have to equal the number of rows of
the right matrix (1). Does this show there's something off with
that step of the calculation?
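Indeed, NumPy agrees that these shapes do not compose (a minimal
check; the values of p are again arbitrary):

  import numpy as np

  eta = np.diag([1.0, -1.0, -1.0, -1.0])
  row = np.array([[4.0, 1.0, 2.0, 3.0]])   # p_mu as a 1x4 row vector

  try:
      row @ eta @ row   # (1x4 @ 4x4) is 1x4; then 1x4 @ 1x4 is undefined
  except ValueError as e:
      print("shape mismatch:", e)

  # The product only goes through if the second factor is transposed to 4x1:
  print(row @ eta @ row.T)   # [[2.]]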