Next: Programming DIIS
Up: The DIIS Method
Previous: Introduction
Suppose that we have a set of trial vectors ${\mathbf p}^i$ which have been generated during the iterative solution of a problem. Now let us form a set of ``residual'' vectors defined as
\begin{displaymath}
\Delta {\mathbf p}^i = {\mathbf p}^{i+1} - {\mathbf p}^i.
\end{displaymath}
(1)
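As a concrete illustration (not part of the original notes), the residual vectors of eq. (1) can be formed from a stored list of trial vectors; the vectors below are hypothetical placeholder data:

```python
import numpy as np

# Hypothetical trial vectors p^1, ..., p^(m+1) from some iterative procedure.
trial_vectors = [np.array([1.0, 2.0]),
                 np.array([1.5, 2.2]),
                 np.array([1.6, 2.3])]

# Eq. (1): residual vectors  Delta p^i = p^(i+1) - p^i.
residuals = [trial_vectors[i + 1] - trial_vectors[i]
             for i in range(len(trial_vectors) - 1)]
```

Each successive difference serves as an error estimate for the corresponding trial vector.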
The DIIS method assumes that a good approximation to the final solution ${\mathbf p}^f$ can be obtained as a linear combination of the previous guess vectors,
\begin{displaymath}
{\mathbf p} = \sum_i^m c_i {\mathbf p}^i,
\end{displaymath}
(2)
where $m$ is the number of previous vectors (in practice, only the most recent few vectors are used). The coefficients $c_i$ are obtained by requiring that the associated residual vector
\begin{displaymath}
\Delta {\mathbf p} = \sum_i^m c_i \left( \Delta {\mathbf p}^i \right)
\end{displaymath}
(3)
approximates the zero vector in a least-squares sense. Furthermore, the coefficients are required to add to one,
\begin{displaymath}
\sum_i^m c_i = 1.
\end{displaymath}
(4)
The motivation for the latter requirement can be seen as follows. Each of our trial solutions ${\mathbf p}^i$ can be written as the exact solution plus an error term, ${\mathbf p}^f + {\mathbf e}^i$. Then the DIIS approximate solution is given by
\begin{displaymath}
{\mathbf p} = \sum_i^m c_i \left( {\mathbf p}^f + {\mathbf e}^i \right)
= {\mathbf p}^f \sum_i^m c_i + \sum_i^m c_i {\mathbf e}^i.
\end{displaymath}
(5)
Hence, we wish to minimize the actual error, which is the second term in the equation above (of course, in practice we don't know ${\mathbf e}^i$, only $\Delta {\mathbf p}^i$); doing so would make the second term vanish, leaving only the first term. For ${\mathbf p} = {\mathbf p}^f$, we must then have $\sum_i^m c_i = 1$.
Thus, we wish to minimize the norm of the residuum vector
\begin{displaymath}
\langle \Delta {\mathbf p} \vert \Delta {\mathbf p} \rangle
= \sum_{ij}^m c_i c_j
\langle \Delta {\mathbf p}^i \vert \Delta {\mathbf p}^j \rangle,
\end{displaymath}
(6)
subject to the constraint (4). These requirements can be satisfied by minimizing the following function with Lagrange multiplier $\lambda$,
\begin{displaymath}
{\cal L} = {\mathbf c}^{\dag} {\mathbf B} {\mathbf c}
- \lambda \left( 1 - \sum_i^m c_i \right),
\end{displaymath}
(7)
where ${\mathbf B}$ is the matrix of overlaps
\begin{displaymath}
B_{ij} = \langle \Delta {\mathbf p}^i \vert \Delta {\mathbf p}^j \rangle.
\end{displaymath}
(8)
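A minimal sketch (not from the original notes) of building the overlap matrix of eq. (8) with NumPy, assuming real residual vectors stacked as rows of a matrix; it also checks numerically that the quadratic form ${\mathbf c}^{\dag}{\mathbf B}{\mathbf c}$ equals the squared norm of the combined residual in eq. (6):

```python
import numpy as np

# Hypothetical residual vectors Delta p^i stacked as rows of R.
R = np.array([[0.5,  0.2, -0.1],
              [0.1,  0.1,  0.05],
              [-0.02, 0.03, 0.01]])

# Eq. (8): B_ij = <Delta p^i | Delta p^j>  (a dot product for real vectors).
B = R @ R.T

# For any coefficient vector c, eq. (6) says <Delta p | Delta p> = c^T B c.
c = np.array([0.2, 0.3, 0.5])
combined_residual = c @ R                      # Delta p = sum_i c_i Delta p^i
lhs = combined_residual @ combined_residual    # <Delta p | Delta p>
rhs = c @ B @ c                                # c^T B c
```

Since $B$ is built from inner products of real vectors, it is symmetric, which is what allows the simple derivative in the minimization step that follows.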
We can minimize ${\cal L}$ with respect to a coefficient $c_k$ to obtain (assuming real quantities)
\begin{displaymath}
\frac{\partial {\cal L}}{\partial c_k}
= 2 \sum_i^m c_i B_{ik} + \lambda = 0.
\end{displaymath}
(9)
We can absorb the factor of 2 into $\lambda$ (whose sign and scale are arbitrary) to obtain the following matrix equation, which is eq. (6) of Pulay [3]:
\begin{displaymath}
\left(
\begin{array}{ccccc}
B_{11} & B_{12} & \cdots & B_{1m} & -1 \\
B_{21} & B_{22} & \cdots & B_{2m} & -1 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
B_{m1} & B_{m2} & \cdots & B_{mm} & -1 \\
-1 & -1 & \cdots & -1 & 0 \\
\end{array}
\right)
\left(
\begin{array}{c}
c_1 \\
c_2 \\
\vdots \\
c_m \\
\lambda \\
\end{array}
\right)
=
\left(
\begin{array}{c}
0 \\
0 \\
\vdots \\
0 \\
-1 \\
\end{array}
\right)
\end{displaymath}
(10)
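The bordered linear system of eq. (10) can be assembled and solved directly. The following sketch (a hypothetical implementation, assuming real vectors and NumPy) returns the extrapolated vector of eq. (2) along with the coefficients:

```python
import numpy as np

def diis_extrapolate(trial_vectors, residuals):
    """Solve the DIIS system of eq. (10) and return (extrapolated vector, c).

    trial_vectors, residuals: lists of equal-length 1-D NumPy arrays.
    (In practice only the most recent few vectors would be kept.)
    """
    m = len(residuals)
    R = np.array(residuals)

    # Bordered matrix: B in the upper-left block, -1 on the border, 0 in the corner.
    A = np.zeros((m + 1, m + 1))
    A[:m, :m] = R @ R.T          # B_ij = <Delta p^i | Delta p^j>
    A[:m, m] = -1.0
    A[m, :m] = -1.0

    # Right-hand side: zeros, with -1 in the last row enforcing sum_i c_i = 1.
    rhs = np.zeros(m + 1)
    rhs[m] = -1.0

    solution = np.linalg.solve(A, rhs)
    c = solution[:m]             # the last entry is the multiplier lambda

    # Eq. (2): p = sum_i c_i p^i.
    return sum(ci * pi for ci, pi in zip(c, trial_vectors)), c
```

By construction the coefficients returned by the solve sum to one, since the last row of the bordered system encodes the constraint of eq. (4).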
C. David Sherrill
2000-04-18