
Dirac Notation

For the purposes of solving the electronic Schrödinger equation on a computer, it is very convenient to turn everything into linear algebra. We can represent the wavefunctions $\Psi({\bf r})$ as vectors:
\begin{displaymath}
\vert \Psi \rangle = \sum_{i=1}^{n} a_i \vert {\hat \Psi}_i \rangle ,
\end{displaymath} (5)

where $\vert \Psi \rangle $ is called a ``state vector,'' $a_i$ are the expansion coefficients (which may be complex), and $\vert {\hat \Psi}_i \rangle $ are fixed ``basis'' vectors. A suitable (infinite-dimensional) linear vector space for such decompositions is called a ``Hilbert space.''

We can write the set of coefficients $\left\{ a_i \right\} $ as a column vector,

\begin{displaymath}
\vert \Psi \rangle = \left( \begin{array}{c}
a_1 \\
a_2 \\
\vdots \\
a_n \end{array} \right).
\end{displaymath} (6)

In Dirac's ``bracket'' (or bra-ket) notation, we call $\vert \Psi \rangle $ a ``ket.'' What about the adjoint of this vector? It is a row vector denoted by $\langle \Psi \vert$, which is called a ``bra'' (to spell ``bra-ket''),

\begin{displaymath}
\langle \Psi \vert = (a_1^* a_2^* \cdots a_n^*).
\end{displaymath} (7)

In linear algebra, the scalar product $(\Psi_a,\Psi_b)$ between two vectors $\Psi_a$ and $\Psi_b$ is just

\begin{displaymath}
(\Psi_a,\Psi_b) = (a_1^* a_2^* \cdots a_n^*)
\left( \begin{array}{c}
b_1 \\
b_2 \\
\vdots \\
b_n \end{array} \right) = \sum_{i=1}^{n} a_i^* b_i,
\end{displaymath} (8)

assuming the two vectors are represented in the same basis set and that the basis vectors are orthonormal (otherwise, overlaps between the basis vectors, i.e., the ``metric,'' must be included). The quantum mechanical shorthand for the above scalar product in bra-ket notation is just
\begin{displaymath}
\langle \Psi_a \vert \Psi_b \rangle = \sum_{i=1}^{n} a_i^* b_i.
\end{displaymath} (9)

Frequently, one only writes the subscripts $a$ and $b$ in the Dirac notation, so that the above dot product might be referred to as just $\langle a \vert b \rangle$. The order of the vectors $\Psi_a$ and $\Psi_b$ in a dot product matters if the vectors can have complex numbers for their components, since $(\Psi_a, \Psi_b) = (\Psi_b, \Psi_a)^*$.
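These manipulations map directly onto arrays on a computer. As a small illustrative sketch (the coefficient values below are arbitrary), the scalar product and its conjugate-symmetry property can be checked with NumPy:

\begin{verbatim}
import numpy as np

# Arbitrary example coefficients of |Psi_a> and |Psi_b> in the same
# orthonormal basis.
a = np.array([1.0 + 2.0j, 0.5 - 1.0j, 3.0 + 0.0j])
b = np.array([2.0 - 1.0j, 1.0 + 1.0j, 0.0 + 4.0j])

# <Psi_a|Psi_b> = sum_i a_i^* b_i; np.vdot conjugates its first argument.
ab = np.vdot(a, b)
ba = np.vdot(b, a)

print(ab)                           # the scalar product <a|b>
print(np.isclose(ab, np.conj(ba)))  # True: <a|b> = <b|a>^*
\end{verbatim}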

Now suppose that we want our basis set to be every possible value of coordinate $x$. Apart from giving us a continuous (and infinite) basis set, there is no formal difficulty with this. We can then represent an arbitrary state as:

\begin{displaymath}
\vert \Psi \rangle = \int_{-\infty}^{\infty} a_x \vert x \rangle dx.
\end{displaymath} (10)

What are the coefficients $a_x$? It turns out that these coefficients are simply the value of the wavefunction at each point $x$. That is,
\begin{displaymath}
\vert \Psi \rangle = \int_{-\infty}^{\infty} \Psi(x) \vert x \rangle dx.
\end{displaymath} (11)

Since any two $x$ coordinates are considered orthogonal (and their overlap gives a Dirac delta function), the scalar product of two state functions in coordinate space becomes
\begin{displaymath}
\langle \Psi_a \vert \Psi_b \rangle =
\int_{-\infty}^{\infty} \Psi_a^*(x) \Psi_b(x) dx.
\end{displaymath} (12)
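A rough numerical sketch of this coordinate-space overlap, using an evenly spaced grid and two Gaussian wavefunctions chosen purely for illustration, is:

\begin{verbatim}
import numpy as np

# Grid representation of two illustrative (unnormalized) wavefunctions.
x = np.linspace(-10.0, 10.0, 2001)
psi_a = np.exp(-0.5 * x**2)            # Psi_a(x)
psi_b = np.exp(-0.5 * (x - 1.0)**2)    # Psi_b(x), a shifted Gaussian

# <Psi_a|Psi_b> = integral of Psi_a^*(x) Psi_b(x) dx, approximated here
# by a simple Riemann sum on the grid.
dx = x[1] - x[0]
overlap = np.sum(np.conj(psi_a) * psi_b) * dx
print(overlap)   # close to sqrt(pi) * exp(-1/4) for these Gaussians
\end{verbatim}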

Now we turn our attention to matrix representations of operators. An operator $\hat{A}$ can be characterized by its effect on the basis vectors. The action of $\hat{A}$ on a basis vector $\vert \hat{\Psi}_j \rangle $ yields some new vector $\vert \Psi'_j \rangle $ which can be expanded in terms of the basis vectors so long as we have a complete basis set.

\begin{displaymath}
\hat{A} \vert \hat{\Psi}_j \rangle = \vert \Psi'_j \rangle
= \sum_i^{n} \vert \hat{\Psi}_i \rangle A_{ij}
\end{displaymath} (13)

If we know the effect of $\hat{A}$ on the basis vectors, then we know the effect of $\hat{A}$ on any arbitrary vector because of the linearity of $\hat{A}$.
\begin{eqnarray*}
\vert \Psi_b \rangle = \hat{A} \vert \Psi_a \rangle
= \hat{A} \sum_j a_j \vert \hat{\Psi}_j \rangle
& = & \sum_j a_j \hat{A} \vert \hat{\Psi}_j \rangle
= \sum_j \sum_i a_j \vert \hat{\Psi}_i \rangle A_{ij} \\
& = & \sum_i \vert \hat{\Psi}_i \rangle \left( \sum_j A_{ij} a_j \right)
\end{eqnarray*} (14)

or
\begin{displaymath}
b_i = \sum_j A_{ij} a_j
\end{displaymath} (15)

This may be written in matrix notation as
\begin{displaymath}
\left( \begin{array}{c}
b_1 \\
b_2 \\
\vdots \\
b_n \end{array} \right) =
\left( \begin{array}{cccc}
A_{11} & A_{12} & \cdots & A_{1n} \\
A_{21} & A_{22} & \cdots & A_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \cdots & A_{nn}
\end{array} \right)
\left( \begin{array}{c}
a_1 \\
a_2 \\
\vdots \\
a_n \end{array} \right)
\end{displaymath} (16)
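On a computer, acting with $\hat{A}$ is then just a matrix-vector multiplication. A minimal sketch, with an arbitrary example matrix and coefficient vector:

\begin{verbatim}
import numpy as np

# Arbitrary matrix representation A_ij of an operator in an orthonormal
# basis, and coefficients a_j of |Psi_a> in the same basis.
A = np.array([[1.0,   2.0j, 0.0],
              [-2.0j, 3.0,  1.0],
              [0.0,   1.0, -1.0]], dtype=complex)
a = np.array([1.0, 1.0j, 2.0], dtype=complex)

# b_i = sum_j A_ij a_j, the coefficients of |Psi_b> = A|Psi_a>.
b = A @ a
print(b)
\end{verbatim}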

We can obtain the coefficients $A_{ij}$ by taking the inner product of both sides of equation 13 with $\hat{\Psi}_i$, yielding
\begin{eqnarray*}
(\hat{\Psi}_i, \hat{A} \hat{\Psi}_j) & = & (\hat{\Psi}_i, \sum_k^{n} \hat{\Psi}_k A_{kj} ) \\
 & = & \sum_k^{n} A_{kj} (\hat{\Psi}_i, \hat{\Psi}_k) \\
 & = & A_{ij}
\end{eqnarray*} (17)

since $(\hat{\Psi}_i, \hat{\Psi}_k) = \delta_{ik}$ due to the orthonormality of the basis. In bra-ket notation, we may write
\begin{displaymath}
A_{ij} = \langle i \vert \hat{A} \vert j \rangle
\end{displaymath} (18)

where $i$ and $j$ denote two basis vectors. This use of bra-ket notation is consistent with its earlier use if we realize that $\hat{A} \vert j \rangle $ is just another vector $\vert j' \rangle $.
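A short sketch of this prescription: if the orthonormal basis vectors are represented by unit column vectors, the matrix elements can be recovered by acting on $\vert j \rangle $ and projecting onto $\langle i \vert$ (the matrix below is an arbitrary stand-in for an operator whose action is known):

\begin{verbatim}
import numpy as np

n = 3
A = np.array([[1.0,   2.0j, 0.0],
              [-2.0j, 3.0,  1.0],
              [0.0,   1.0, -1.0]], dtype=complex)  # stands in for A-hat

basis = np.eye(n, dtype=complex)        # orthonormal basis as unit vectors
A_ij = np.empty((n, n), dtype=complex)
for i in range(n):
    for j in range(n):
        # A_ij = <i|A|j>: act on |j>, then take the inner product with |i>.
        A_ij[i, j] = np.vdot(basis[:, i], A @ basis[:, j])

print(np.allclose(A_ij, A))   # True: we recover the matrix representation
\end{verbatim}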

It is easy to show that for a linear operator $\hat{A}$, the inner product $(\Psi_a, \hat{A} \Psi_b)$ for two general vectors (not necessarily basis vectors) $\Psi_a$ and $\Psi_b$ is given by

\begin{displaymath}
(\Psi_a, \hat{A} \Psi_b) = \sum_i \sum_j a_i^{*} A_{ij} b_j
\end{displaymath} (19)

or in matrix notation
\begin{displaymath}
(\Psi_a, \hat{A} \Psi_b) = \left( a_1^{*} a_2^{*} \cdots a_n^{*} \right)
\left( \begin{array}{cccc}
A_{11} & A_{12} & \cdots & A_{1n} \\
A_{21} & A_{22} & \cdots & A_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \cdots & A_{nn}
\end{array} \right)
\left( \begin{array}{c}
b_1 \\
b_2 \\
\vdots \\
b_n \end{array} \right)
\end{displaymath} (20)

By analogy to equation (12), we may generally write this inner product in the form
\begin{displaymath}
(\Psi_a, \hat{A} \Psi_b) = \langle a \vert \hat{A} \vert b \rangle =
\int \Psi_a^{*}(x) \hat{A} \Psi_b(x) dx
\end{displaymath} (21)
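Numerically, this inner product is just a row vector times a matrix times a column vector. A brief sketch with arbitrary illustrative values:

\begin{verbatim}
import numpy as np

A = np.array([[1.0,   2.0j],
              [-2.0j, 3.0 ]], dtype=complex)
a = np.array([1.0 + 1.0j, 2.0], dtype=complex)
b = np.array([0.5j, 1.0 - 1.0j], dtype=complex)

# <a|A|b> = sum_ij a_i^* A_ij b_j
value = np.conj(a) @ A @ b
print(value)
\end{verbatim}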

Previously, we noted that $(\Psi_a, \Psi_b) = (\Psi_b, \Psi_a)^{*}$, or $\langle a \vert b \rangle = \langle b \vert a \rangle^{*}$. Thus we can see also that

\begin{displaymath}
(\Psi_a, \hat{A} \Psi_b) = (\hat{A} \Psi_b, \Psi_a)^{*}
\end{displaymath} (22)

We now define the adjoint of an operator $\hat{A}$, denoted by $\hat{A}^{\dagger}$, as that linear operator for which
\begin{displaymath}
(\Psi_a, \hat{A} \Psi_b) = (\hat{A}^{\dagger} \Psi_a, \Psi_b)
\end{displaymath} (23)

That is, we can make an operator act backwards into ``bra'' space if we take its adjoint. With this definition, we can further see that
\begin{displaymath}
(\Psi_a, \hat{A} \Psi_b) = (\hat{A} \Psi_b, \Psi_a)^{*} =
(\Psi_b, \hat{A}^{\dagger} \Psi_a)^{*} =
(\hat{A}^{\dagger} \Psi_a, \Psi_b)
\end{displaymath} (24)

or, in bra-ket notation,
\begin{displaymath}
\langle a \vert \hat{A} \vert b \rangle = \langle \hat{A} b \vert a \rangle ^{*} =
\langle \hat{A}^{\dagger} a \vert b \rangle
\end{displaymath} (25)

If we pick $\Psi_a = \hat{\Psi}_i$ and $\Psi_b = \hat{\Psi}_j$ (i.e., if we pick two basis vectors), then we obtain
\begin{eqnarray*}
(\hat{A} \hat{\Psi}_i, \hat{\Psi}_j) & = & (\hat{\Psi}_i, \hat{A}^{\dagger} \hat{\Psi}_j) \\
(\hat{\Psi}_j, \hat{A} \hat{\Psi}_i)^{*} & = & (\hat{\Psi}_i, \hat{A}^{\dagger} \hat{\Psi}_j) \\
A_{ji}^{*} & = & A^{\dagger}_{ij}
\end{eqnarray*} (26)

But this is precisely the relationship between the elements of a matrix and the elements of its adjoint (conjugate transpose)! Thus the adjoint of the matrix representation of $\hat{A}$ is the same as the matrix representation of $\hat{A}^{\dagger}$.

This correspondence between operators and their matrix representations goes quite far, although of course the specific matrix representation depends on the choice of basis. For instance, we know from linear algebra that if a matrix and its adjoint are the same, then the matrix is called Hermitian. The same is true of the operators; if

\begin{displaymath}
\hat{A} = \hat{A}^{\dagger}
\end{displaymath} (27)

then $\hat{A}$ is a Hermitian operator, and all of the special properties of Hermitian operators apply to $\hat{A}$ or its matrix representation.
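In matrix terms, taking the adjoint is taking the conjugate transpose, and Hermiticity is easy to verify numerically. A small sketch with arbitrary example matrices:

\begin{verbatim}
import numpy as np

# The matrix of A-dagger is the conjugate transpose of the matrix of A:
# (A^dagger)_ij = A_ji^*.
A = np.array([[1.0, 2.0 + 1.0j],
              [0.0, -3.0j     ]], dtype=complex)
A_dag = A.conj().T

a = np.array([1.0j, 2.0], dtype=complex)
b = np.array([1.0, 1.0 - 1.0j], dtype=complex)

# Check (Psi_a, A Psi_b) = (A^dagger Psi_a, Psi_b) for these vectors.
print(np.isclose(np.vdot(a, A @ b), np.vdot(A_dag @ a, b)))   # True

# A Hermitian matrix equals its own conjugate transpose.
H = np.array([[2.0,        1.0 - 1.0j],
              [1.0 + 1.0j, -1.0      ]], dtype=complex)
print(np.allclose(H, H.conj().T))   # True
\end{verbatim}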


David Sherrill 2003-08-07