
The Classical Coupled Mass Problem

Here we will review the results of the coupled mass problem, Example 1.8.6 from Shankar. This is an example from classical physics which nevertheless demonstrates some of the essential features of coupled degrees of freedom in quantum mechanical problems and a general approach for removing such coupling. The problem involves two objects of equal mass, connected to two different walls and also to each other by springs. Using F=ma and Hooke's Law (F=-kx) for the springs, and denoting the displacements of the two masses as x1 and x2, it is straightforward to deduce equations for the acceleration (second derivative in time, ${\ddot x}_1$ and ${\ddot x}_2$):
 
\begin{displaymath}
{\ddot x}_1 = - \frac{2k}{m}\, x_1 + \frac{k}{m}\, x_2
\end{displaymath} (1)

\begin{displaymath}
{\ddot x}_2 = \frac{k}{m}\, x_1 - \frac{2k}{m}\, x_2 .
\end{displaymath} (2)

The goal of the problem is to solve these second-order differential equations to obtain the functions x1(t) and x2(t) describing the motion of the two masses at any given time. Since they are second-order differential equations, we need two initial conditions for each variable, i.e., $x_1(0), {\dot x}_1(0), x_2(0)$, and ${\dot x}_2(0)$.

Our two differential equations are clearly coupled, since ${\ddot x}_1$ depends not only on x1 but also on x2 (and likewise for ${\ddot x}_2$). This coupling makes the equations difficult to solve directly. The strategy is to write the differential equations in matrix form and then diagonalize the matrix to obtain its eigenvectors and eigenvalues.

In matrix form, we have

 
\begin{displaymath}
\left[ \begin{array}{c} {\ddot x}_1 \\ {\ddot x}_2 \end{array} \right]
=
\left[ \begin{array}{cc} -2 \gamma & \gamma \\ \gamma & -2 \gamma \end{array} \right]
\left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right],
\end{displaymath} (3)

where $\gamma = k/m$. Since this $2 \times 2$ matrix is real and symmetric, it is also Hermitian, so we know that it has real eigenvalues and that its eigenvectors are linearly independent and can be chosen to form an orthonormal basis.
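As a numerical sanity check, we can diagonalize this matrix directly. The sketch below uses NumPy, with $\gamma = k/m$ set to 1 as an assumed unit choice; `numpy.linalg.eigh` is the routine for real symmetric (Hermitian) matrices and returns real eigenvalues with orthonormal eigenvectors, as guaranteed above.

```python
import numpy as np

# Omega matrix of eq. 3 with gamma = k/m = 1 (an assumed unit choice)
gamma = 1.0
Omega = np.array([[-2.0 * gamma, gamma],
                  [gamma, -2.0 * gamma]])

# eigh is appropriate for real symmetric (Hermitian) matrices:
# it returns real eigenvalues in ascending order and orthonormal eigenvectors
evals, evecs = np.linalg.eigh(Omega)
print(evals)  # [-3. -1.]  i.e. -3*gamma and -gamma
```

The orthonormality of the eigenvectors can be confirmed by checking that `evecs.T @ evecs` is the identity matrix.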

Equation 3 is a particular form of the more general equation (in Dirac notation)

 \begin{displaymath}
\vert \ddot{x}(t) \rangle = \hat{\Omega} \vert x(t) \rangle
\end{displaymath} (4)

where we have picked a basis set which we will call $ \left\{ \vert 1 \rangle, \vert 2 \rangle
\right\} $, where

\begin{displaymath}\vert 1 \rangle = \left[ \begin{array}{c} 1 \\ 0 \end{array} \right]
\end{displaymath} (5)

represents a unit displacement for coordinate x1, and likewise

\begin{displaymath}\vert 2 \rangle = \left[ \begin{array}{c} 0 \\ 1 \end{array} \right]
\end{displaymath} (6)

represents a unit displacement for coordinate x2. Clearly any state of the system (x1, x2) can be written as a column vector

\begin{displaymath}\left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right]
\end{displaymath} (7)

(as in eq. 3), which can always be decomposed into our $ \left\{ \vert 1 \rangle, \vert 2 \rangle
\right\} $ basis as
\begin{displaymath}
\left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right]
= x_1 \left[ \begin{array}{c} 1 \\ 0 \end{array} \right]
+ x_2 \left[ \begin{array}{c} 0 \\ 1 \end{array} \right]
\end{displaymath} (8)

or

\begin{displaymath}\vert x \rangle = x_1 \vert 1 \rangle + x_2 \vert 2 \rangle.
\end{displaymath} (9)

Hence, eq. 3 can be considered a representation of the more general eq. 4 in the $ \left\{ \vert 1 \rangle, \vert 2 \rangle
\right\} $ basis.

If we assume the initial velocities are zero, then we should be able to predict x1(t) and x2(t) directly from x1(0) and x2(0). Thus, we seek a solution of the form

 
\begin{displaymath}
\left[ \begin{array}{c} x_1(t) \\ x_2(t) \end{array} \right]
=
\left[ \begin{array}{cc} G_{11}(t) & G_{12}(t) \\ G_{21}(t) & G_{22}(t) \end{array} \right]
\left[ \begin{array}{c} x_1(0) \\ x_2(0) \end{array} \right],
\end{displaymath} (10)

where G(t) is a matrix, called the propagator, which generates the motion at future times from the initial conditions. Our task is to determine G(t).

Again, the strategy is to diagonalize ${\mathbf \Omega}$. The point of diagonalizing ${\mathbf \Omega}$ is that, as you can see from eq. 3, the coupling between x1 and x2 goes away if ${\mathbf \Omega}$ becomes a diagonal matrix. You can easily verify that the eigenvectors and their corresponding eigenvalues, which we will label with Roman numerals I and II, are

\begin{displaymath}
\lambda_{\rm I} = - \gamma, \qquad
\vert {\rm I} \rangle = \frac{1}{\sqrt{2}} \left[ \begin{array}{c} 1 \\ 1 \end{array} \right]
\end{displaymath} (11)

\begin{displaymath}
\lambda_{\rm II} = - 3 \gamma, \qquad
\vert {\rm II} \rangle = \frac{1}{\sqrt{2}} \left[ \begin{array}{r} 1 \\ -1 \end{array} \right].
\end{displaymath} (12)

This new basis, the eigenvector basis, is just as legitimate as our original $ \left\{ \vert 1 \rangle, \vert 2 \rangle
\right\} $ basis, and is in fact better in the sense that it diagonalizes ${\mathbf \Omega}$. So, instead of using the $ \left\{ \vert 1 \rangle, \vert 2 \rangle
\right\} $ basis to obtain eq. 3 from eq. 4, we can use the $\left\{ \vert \rm I \rangle, \vert \rm II \rangle \right\}$ basis to obtain
 
\begin{displaymath}
\left[ \begin{array}{c} {\ddot x}_{\rm I} \\ {\ddot x}_{\rm II} \end{array} \right]
=
\left[ \begin{array}{cc} \lambda_{\rm I} & 0 \\ 0 & \lambda_{\rm II} \end{array} \right]
\left[ \begin{array}{c} x_{\rm I} \\ x_{\rm II} \end{array} \right],
\end{displaymath} (13)

so that now ${\ddot x}_{\rm I}$ depends only on $x_{\rm I}$, and ${\ddot x}_{\rm II}$ depends only on $x_{\rm II}$. The equations are uncoupled! Note that we are now expanding the solution $\vert x \rangle$ in the $\left\{ \vert \rm I \rangle, \vert \rm II \rangle \right\}$ basis, so the components in this basis are now $x_{\rm I}$ and $x_{\rm II}$ instead of x1 and x2:

\begin{displaymath}\vert x \rangle = x_{\rm I} \vert \rm I \rangle + x_{\rm II} \vert \rm II \rangle.
\end{displaymath} (14)

Of course it is possible to switch between the $ \left\{ \vert 1 \rangle, \vert 2 \rangle
\right\} $ basis and the $\left\{ \vert \rm I \rangle, \vert \rm II \rangle \right\}$ basis. If we define our basis set transformation matrix as that obtained by making each column one of the eigenvectors of ${\mathbf \Omega}$, we obtain

\begin{displaymath}{\mathbf U} =
\frac{1}{\sqrt{2}} \left[ \begin{array}{rr} 1 & 1 \\ 1 & -1 \end{array} \right],
\end{displaymath} (15)

which is a unitary matrix (it has to be since ${\mathbf \Omega}$ is Hermitian). Vectors in the two basis sets are related by
\begin{displaymath}
\left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right]
= {\mathbf U}
\left[ \begin{array}{c} x_{\rm I} \\ x_{\rm II} \end{array} \right],
\qquad
\left[ \begin{array}{c} x_{\rm I} \\ x_{\rm II} \end{array} \right]
= {\mathbf U}^{\dagger}
\left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right].
\end{displaymath} (16)

In this case, U is special because ${\mathbf U} =
{\mathbf U}^{\dagger}$; this doesn't generally happen. You can verify that the ${\mathbf \Omega}$ matrix, when transformed into the $\left\{ \vert \rm I \rangle, \vert \rm II \rangle \right\}$ basis via ${\mathbf U}^{\dagger} {\mathbf \Omega} {\mathbf U}$, becomes the diagonal matrix in equation 13.
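This transformation can be verified numerically. The sketch below (again taking $\gamma = k/m = 1$ as an assumed unit choice) builds ${\mathbf U}$ from the eigenvector columns and confirms that ${\mathbf U}^{\dagger} {\mathbf \Omega} {\mathbf U}$ is diagonal with the eigenvalues on the diagonal.

```python
import numpy as np

# Sketch of the basis-set transformation, with gamma = k/m = 1
# (an assumed unit choice). U has the eigenvectors of Omega as columns.
gamma = 1.0
Omega = np.array([[-2.0 * gamma, gamma],
                  [gamma, -2.0 * gamma]])
U = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

# Transform Omega into the eigenvector basis; U is real, so U-dagger = U.T.
# The off-diagonal coupling vanishes, leaving diag(lambda_I, lambda_II).
Omega_diag = U.T @ Omega @ U
print(np.round(Omega_diag, 12))  # approximately diag(-1, -3), i.e. diag(-gamma, -3*gamma)
```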

The matrix equation 13 is of course equivalent to the two simple equations

\begin{displaymath}
{\ddot x}_{\rm I} = - \gamma\, x_{\rm I}
\end{displaymath} (17)

\begin{displaymath}
{\ddot x}_{\rm II} = - 3 \gamma\, x_{\rm II},
\end{displaymath} (18)

and you can see that valid solutions (assuming that the initial velocities are zero) are
  
\begin{displaymath}
x_{\rm I}(t) = x_{\rm I}(0) \cos(\omega_{\rm I} t)
\end{displaymath} (19)

\begin{displaymath}
x_{\rm II}(t) = x_{\rm II}(0) \cos(\omega_{\rm II} t),
\end{displaymath} (20)

where we have defined
\begin{displaymath}
\omega_{\rm I} = \sqrt{\gamma}
\end{displaymath} (21)

\begin{displaymath}
\omega_{\rm II} = \sqrt{3 \gamma}.
\end{displaymath} (22)

So, the $\left\{ \vert \rm I \rangle, \vert \rm II \rangle \right\}$ basis is very special, since any motion of the system can be decomposed into two decoupled motions described by eigenvectors $\vert \rm I \rangle$ and $\vert \rm II \rangle$. In other words, if the system has its initial conditions as some multiple of $\vert \rm I \rangle$, it will never exhibit any motion of the type $\vert \rm II \rangle$ at later times, and vice-versa. In this context, the special vibrations described by $\vert \rm I \rangle$ and $\vert \rm II \rangle$ are called the normal modes of the system.
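The mode-invariance claim can be tested by integrating the original coupled equations directly. The sketch below (a velocity-Verlet integration with $\gamma = 1$ as an assumed unit choice; the step size and step count are arbitrary) starts the system in a pure $\vert \rm I \rangle$ displacement and confirms that x1(t) = x2(t) at all later times, i.e. no $\vert \rm II \rangle$ motion ever appears.

```python
import numpy as np

# Integrate the coupled equations x'' = Omega x with velocity-Verlet,
# starting from a pure mode-I displacement (x1 = x2) and zero velocities.
# gamma = 1, dt = 1e-3, and 5000 steps are assumed choices for this sketch.
gamma = 1.0
Omega = np.array([[-2.0 * gamma, gamma],
                  [gamma, -2.0 * gamma]])

x = np.array([1.0, 1.0]) / np.sqrt(2.0)  # proportional to |I>
v = np.zeros(2)                          # zero initial velocities
dt = 1e-3
for _ in range(5000):                    # integrate out to t = 5
    a = Omega @ x
    x = x + v * dt + 0.5 * a * dt**2
    v = v + 0.5 * (a + Omega @ x) * dt

# The motion stays purely mode I: x1(t) == x2(t), oscillating at omega_I
print(abs(x[0] - x[1]))  # ~0
```

The final displacement also matches the closed-form mode-I solution $x_{\rm I}(0) \cos(\sqrt{\gamma}\, t)$ to within the integrator's accuracy.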

So, are we done? If we are content to work everything in the $\left\{ \vert \rm I \rangle, \vert \rm II \rangle \right\}$ basis, yes. However, our original goal was to find the propagator G(t) (from eq. 10) in the original $ \left\{ \vert 1 \rangle, \vert 2 \rangle
\right\} $ basis. But notice that we already have G(t) in the $\left\{ \vert \rm I \rangle, \vert \rm II \rangle \right\}$ basis! We can simply rewrite equations 19 and 20 in matrix form as

\begin{displaymath}
\left[ \begin{array}{c} x_{\rm I}(t) \\ x_{\rm II}(t) \end{array} \right]
=
\left[ \begin{array}{cc} \cos(\omega_{\rm I} t) & 0 \\ 0 & \cos(\omega_{\rm II} t) \end{array} \right]
\left[ \begin{array}{c} x_{\rm I}(0) \\ x_{\rm II}(0) \end{array} \right].
\end{displaymath} (23)

So, the propagator in the $\left\{ \vert \rm I \rangle, \vert \rm II \rangle \right\}$ basis is just
 
\begin{displaymath}
{\mathbf G}(t)_{\vert {\rm I} \rangle, \vert {\rm II} \rangle}
=
\left[ \begin{array}{cc} \cos(\omega_{\rm I} t) & 0 \\ 0 & \cos(\omega_{\rm II} t) \end{array} \right].
\end{displaymath} (24)

To obtain G(t) in the original basis, we just have to apply the transformation

\begin{displaymath}
{\mathbf G}(t)_{\vert 1 \rangle, \vert 2 \rangle}
= {\mathbf U}\, {\mathbf G}(t)_{\vert {\rm I} \rangle, \vert {\rm II} \rangle}\, {\mathbf U}^{\dagger},
\end{displaymath} (25)

noting that this is the reverse transform from that needed to bring ${\mathbf \Omega}$ from the original to the eigenvector basis (so that U and ${\mathbf U}^{\dagger}$ swap). Working out ${\mathbf G}(t)_{\vert 1 \rangle, \vert 2 \rangle}$ was a problem in Problem Set II.
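Without giving away the problem set, we can at least verify numerically that this construction is consistent: the sketch below (with $\gamma = 1$ as an assumed unit choice, and arbitrary initial displacements) builds ${\mathbf G}(t)$ in the original basis via eq. 25 and checks by finite differences that $x(t) = {\mathbf G}(t)\, x(0)$ satisfies the original coupled equations of motion.

```python
import numpy as np

# Build G(t) in the original {|1>,|2>} basis as U G_diag(t) U-dagger (eq. 25),
# then check the equation of motion x'' = Omega x by central finite differences.
# gamma = 1 is an assumed unit choice; t, h, and x0 are arbitrary test values.
gamma = 1.0
wI, wII = np.sqrt(gamma), np.sqrt(3.0 * gamma)
U = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)
Omega = np.array([[-2.0 * gamma, gamma],
                  [gamma, -2.0 * gamma]])

def G(t):
    """Propagator in the original basis (U is real, so U-dagger = U.T)."""
    G_diag = np.diag([np.cos(wI * t), np.cos(wII * t)])
    return U @ G_diag @ U.T

x0 = np.array([0.3, -0.8])  # arbitrary initial displacements, zero velocities
t, h = 2.7, 1e-4
xdd = (G(t + h) @ x0 - 2.0 * G(t) @ x0 + G(t - h) @ x0) / h**2
print(np.allclose(xdd, Omega @ (G(t) @ x0), atol=1e-5))  # True
```

Note also that G(0) is the identity matrix, as it must be for eq. 10 to hold at t = 0.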

Finally, let us step back to the more general Dirac notation to point out that the general form of the solution is

\begin{displaymath}\vert x(t) \rangle = {\hat G}(t) \vert x(0) \rangle,
\end{displaymath} (26)

and actual calculation just requires choosing a particular basis set and figuring out the components of $\vert x(t) \rangle$ and $\vert x(0) \rangle$ and the matrix elements of operator ${\hat G}(t)$ in that basis. Another representation of operator ${\hat G}(t)$ is clearly

\begin{displaymath}
{\hat G}(t) = \vert {\rm I} \rangle \langle {\rm I} \vert \cos(\omega_{\rm I} t)
+ \vert {\rm II} \rangle \langle {\rm II} \vert \cos(\omega_{\rm II} t),
\end{displaymath} (27)

as you can check by evaluating the matrix elements in the $\left\{ \vert \rm I \rangle, \vert \rm II \rangle \right\}$ basis to get eq. 24. Thus
\begin{displaymath}
\vert x(t) \rangle = {\hat G}(t) \vert x(0) \rangle
= \vert {\rm I} \rangle \langle {\rm I} \vert x(0) \rangle \cos(\omega_{\rm I} t)
+ \vert {\rm II} \rangle \langle {\rm II} \vert x(0) \rangle \cos(\omega_{\rm II} t).
\end{displaymath} (28)
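The equivalence of the outer-product form of eq. 27 and the basis-transform construction of eq. 25 can be confirmed numerically. The sketch below (with $\gamma = 1$ as an assumed unit choice and an arbitrary test time) builds ${\hat G}(t)$ both ways and compares them.

```python
import numpy as np

# Eq. 27 writes G(t) as a sum of outer products over the normal modes.
# Check that it reproduces the basis-transform construction U G_diag(t) U-dagger.
# gamma = 1 is an assumed unit choice; t is an arbitrary test time.
gamma = 1.0
wI, wII = np.sqrt(gamma), np.sqrt(3.0 * gamma)
ket_I = np.array([1.0, 1.0]) / np.sqrt(2.0)
ket_II = np.array([1.0, -1.0]) / np.sqrt(2.0)

t = 1.3
G_spectral = (np.outer(ket_I, ket_I) * np.cos(wI * t)
              + np.outer(ket_II, ket_II) * np.cos(wII * t))

U = np.column_stack([ket_I, ket_II])  # eigenvectors as columns
G_transform = U @ np.diag([np.cos(wI * t), np.cos(wII * t)]) @ U.T
print(np.allclose(G_spectral, G_transform))  # True
```

The two forms agree because $\vert {\rm I} \rangle \langle {\rm I} \vert + \vert {\rm II} \rangle \langle {\rm II} \vert$ is a resolution of the identity in this two-dimensional space.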



 
C. David Sherrill
2000-05-02