
The Mathematics of DIIS

Suppose that we have a set of trial vectors $\left\{ {\mathbf p}^i
\right\}$ which have been generated during the iterative solution of a problem. Now let us form a set of ``residual'' vectors defined as

 \begin{displaymath}
\Delta {\mathbf p}^i = {\mathbf p}^{i+1} - {\mathbf p}^i.
\end{displaymath} (1)
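As a concrete sketch, the residual vectors of Eq. (1) can be formed directly from a stored list of trial vectors (the NumPy vectors below are made-up placeholders, not the output of any particular iterative method):

```python
import numpy as np

# Hypothetical trial vectors p^1, p^2, p^3 from some iterative procedure.
trial_vectors = [np.array([1.0, 0.0]),
                 np.array([0.5, 0.2]),
                 np.array([0.4, 0.25])]

# Residual vectors Delta p^i = p^(i+1) - p^i, Eq. (1).
residuals = [trial_vectors[i + 1] - trial_vectors[i]
             for i in range(len(trial_vectors) - 1)]
```

Note that n stored trial vectors yield only n - 1 residuals, since each residual requires the next trial vector.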

The DIIS method assumes that a good approximation to the final solution ${\mathbf p}^f$ can be obtained as a linear combination of the previous guess vectors

\begin{displaymath}{\mathbf p} = \sum_i^m c_i {\mathbf p}^i,
\end{displaymath} (2)

where $m$ is the number of previous vectors (in practice, only the most recent few vectors are used). The coefficients $c_i$ are obtained by requiring that the associated residual vector

\begin{displaymath}\Delta {\mathbf p} = \sum_i^m c_i \left( \Delta {\mathbf p}^i \right)
\end{displaymath} (3)

approximates the zero vector in a least-squares sense. Furthermore, the coefficients are required to add to one,

 \begin{displaymath}
\sum_i^m c_i = 1.
\end{displaymath} (4)

The motivation for the latter requirement can be seen as follows. Each of our trial solutions ${\mathbf p}^i$ can be written as the exact solution plus an error term, ${\mathbf p}^f + {\mathbf e}^i$. Then, the DIIS approximate solution is given by
\begin{eqnarray*}
{\mathbf p} & = & \sum_i^m c_i \left( {\mathbf p}^f + {\mathbf e}^i \right) \\
 & = & {\mathbf p}^f \sum_i^m c_i + \sum_i^m c_i {\mathbf e}^i.
\end{eqnarray*} (5)

Hence, we wish to minimize the actual error, which is the second term in the equation above (of course, in practice we do not know the ${\mathbf e}^i$, only the $\Delta {\mathbf p}^i$); if this term could be made to vanish, only the first term would remain. For that first term to equal ${\mathbf p}^f$, we must have $\sum_i^m c_i = 1$.

Thus, we wish to minimize the norm of the residuum vector

\begin{displaymath}
\langle \Delta {\mathbf p} \vert \Delta {\mathbf p} \rangle =
\sum_{ij}^m c_i c_j \langle \Delta {\mathbf p}^i \vert \Delta {\mathbf p}^j \rangle,
\end{displaymath} (6)

subject to the constraint (4). These requirements can be satisfied by minimizing the following function with Lagrange multiplier $\lambda$:

\begin{displaymath}{\cal L} = {\mathbf c}^{\dag } {\mathbf B} {\mathbf c} -
\lambda \left( 1 - \sum_i^m c_i \right),
\end{displaymath} (7)

where ${\mathbf B}$ is the matrix of overlaps

\begin{displaymath}B_{ij} = \langle \Delta {\mathbf p}^i \vert {\Delta \mathbf p}^j \rangle.
\end{displaymath} (8)
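In code, Eq. (8) amounts to pairwise inner products of the residuals (a minimal sketch assuming real-valued NumPy vectors, so the bracket reduces to an ordinary dot product):

```python
import numpy as np

# Hypothetical residual vectors Delta p^i (real-valued for simplicity).
residuals = [np.array([-0.5, 0.2]),
             np.array([-0.1, 0.05])]

m = len(residuals)
B = np.empty((m, m))
for i in range(m):
    for j in range(m):
        # B_ij = <Delta p^i | Delta p^j>, Eq. (8).
        B[i, j] = np.dot(residuals[i], residuals[j])
```

For real vectors ${\mathbf B}$ is symmetric, so in practice only the upper (or lower) triangle needs to be computed.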

We can minimize ${\cal L}$ with respect to a coefficient $c_k$ to obtain (assuming real quantities)
\begin{eqnarray*}
\frac{\partial {\cal L}}{\partial c_k} = 0 & = & \sum_j^m c_j B_{kj} + \sum_i^m c_i B_{ik} - \lambda \\
 & = & 2 \sum_i^m c_i B_{ki} - \lambda.
\end{eqnarray*} (9)

We can absorb the factor of 2 into $\lambda$ to obtain the following matrix equation, which is eq. (6) of Pulay [3]:

\begin{displaymath}
\left(
\begin{array}{ccccc}
B_{11} & B_{12} & \cdots & B_{1m} & -1 \\
B_{21} & B_{22} & \cdots & B_{2m} & -1 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
B_{m1} & B_{m2} & \cdots & B_{mm} & -1 \\
-1 & -1 & \cdots & -1 & 0 \\
\end{array}
\right)
\left(
\begin{array}{c}
c_1 \\
c_2 \\
\vdots \\
c_m \\
\lambda \\
\end{array}
\right)
=
\left(
\begin{array}{c}
0 \\
0 \\
\vdots \\
0 \\
-1 \\
\end{array}
\right)
\end{displaymath} (10)
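This bordered linear system can be solved with any dense solver. The sketch below (hypothetical trial vectors, real arithmetic assumed) builds the augmented matrix, solves for the $c_i$ and $\lambda$, and forms the extrapolated vector of Eq. (2):

```python
import numpy as np

# Hypothetical trial vectors and the residuals of Eq. (1).
trial_vectors = [np.array([1.0, 0.0]),
                 np.array([0.5, 0.2]),
                 np.array([0.4, 0.25])]
residuals = [trial_vectors[i + 1] - trial_vectors[i]
             for i in range(len(trial_vectors) - 1)]
m = len(residuals)

# Overlap matrix B_ij = <Delta p^i | Delta p^j>, Eq. (8).
B = np.array([[np.dot(ri, rj) for rj in residuals] for ri in residuals])

# Augmented DIIS matrix of Eq. (10): border B with -1's, 0 in the corner.
A = np.zeros((m + 1, m + 1))
A[:m, :m] = B
A[m, :m] = A[:m, m] = -1.0

# Right-hand side: zeros except the last entry, -1.
rhs = np.zeros(m + 1)
rhs[m] = -1.0

solution = np.linalg.solve(A, rhs)
c, lam = solution[:m], solution[m]

# Extrapolated vector of Eq. (2), built from the m trial vectors
# that have an associated residual.
p_diis = sum(ci * pi for ci, pi in zip(c, trial_vectors[:m]))
```

The solution automatically satisfies the constraint $\sum_i c_i = 1$ of Eq. (4), since that constraint is the last row of the linear system.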


C. David Sherrill
2000-04-18