Monday, May 17, 2021

Spelunking CACM, vol. 1 (1958): Accelerating Convergence of Iterative Processes

For our first spelunking expedition, I skimmed volume 1 of the Communications of the ACM.

The first thing that I noticed about the earliest issues is that numerical algorithms dominate: numeric integration, minimizing errors, etc. If you dig around, though, you'll find a smattering of topics such as binary search in tables, as well.

The one that caught my eye this morning is Accelerating Convergence of Iterative Processes, by J. H. Wegstein of the U.S. National Bureau of Standards.  (ACM began as an American organization; I don't know when the first articles from abroad appeared in CACM, nor when it began to really view itself as an international organization.)

The first thing you notice, of course, is the layout and typesetting. It's almost a newsletter format, single column, not a lot of boilerplate or fanciness. That said, the typesetting is pretty solid, even if it now looks kind of dated. (One big problem is the middling quality of the scans, but given the volume of material that needs to be brought online, I'm forgiving on this.)

The figures appear to be hand drawn by a competent but not brilliant draftsperson.

The paper contains an acknowledgment that another researcher contributed Section 4 of the paper, so I wonder why he wasn't listed as an author. The main body of the text also includes a comment by the editor (Alan J. Perlis?) on a method for reducing the computational cost. The paper contains no references.

Okay, on to the content...

Iterative methods for finding roots of equations have been known for a long time; one famous one is Newton's method. They always depend on some assumptions about the nature of the function. In this paper, we are looking at roots of $F(x) = 0$. If the equation can be written as $x = f(x)$, then you're looking for a fixed point (a kind of steady-state solution?), and it can be found by iterating $x_{n+1} = f(x_n)$. If that form doesn't work, then you apply some factor $\Gamma$ by way of $x_{n+1} = x_n + \Gamma F(x_n)$.
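To make the basic setup concrete, here's a minimal sketch of plain fixed-point iteration, applied to the classic example $x = \cos x$. (The example equation and the helper's name are mine, not from the paper.)

```python
import math

def fixed_point(f, x0, tol=1e-10, max_iter=100):
    """Iterate x_{n+1} = f(x_n) until successive values agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

# x = cos(x) has a single fixed point near 0.739 (the Dottie number)
root = fixed_point(math.cos, 1.0)
```

For $\cos$ this converges, but slowly and with the oscillating behavior the paper catalogs below; that slowness is exactly what the acceleration scheme is after.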

The author examines four cases, where iteration causes the values:

  1. to oscillate and converge;
  2. to oscillate and diverge;
  3. to converge monotonically; or, finally,
  4. to diverge monotonically.

The main point seems to be an adaptive method for finding the factor $q$ in equations of the form
$\bar{x}_{n+1}=q x_{n}+(1-q) x_{n+1}$,
using the above equation for $x_{n+1}$.
The claim is that many (all? seems unlikely, but that's the way I read the text) equations can be transformed from diverging to converging, and the converging ones can be made to converge more rapidly. The overall process looks like this figure, where steps 1 & 3 are used only on the first iteration and the whole thing terminates when the absolute error drops below some pre-chosen threshold.
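Here's a rough sketch of an acceleration in this spirit: a plain first step, then each subsequent value blended as $\bar{x}_{n+1} = q x_n + (1-q) x_{n+1}$, with $q$ re-estimated every iteration from a secant-style slope. The slope formula and the bookkeeping are my reconstruction of the usual presentation of Wegstein's method, not a transcription of the paper's flowchart; the test equation $x = e^{-x}$ is my own choice.

```python
import math

def wegstein(f, x0, tol=1e-10, max_iter=50):
    """Fixed-point iteration with an adaptively chosen weight q.

    Each step estimates the slope a of f via a secant through the
    last two points, sets q = a / (a - 1), and blends
    x_new = q * x + (1 - q) * f(x).
    """
    x_prev, fx_prev = x0, f(x0)
    x = fx_prev                              # first step is plain iteration
    for _ in range(max_iter):
        fx = f(x)
        a = (fx - fx_prev) / (x - x_prev)    # secant estimate of f'(x)
        q = a / (a - 1.0)
        x_new = q * x + (1.0 - q) * fx
        if abs(x_new - x) < tol:
            return x_new
        x_prev, fx_prev = x, fx
        x = x_new
    raise RuntimeError("did not converge")

# x = exp(-x): fixed point at the omega constant, ~0.56714
root = wegstein(lambda t: math.exp(-t), 0.0)
```

Note that when $|f'| > 1$ (the divergent cases above), $q$ comes out large enough to damp the step, which is presumably what lets the scheme rescue divergent iterations.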
The author then goes on to show some examples of convergence. It's not clear to me that this was actually programmed; the examples would be somewhat tedious but not too bad if done by hand, and I don't see any other indication one way or the other.
Overall, a representative article from CACM's earliest years, and I think a successful spelunking expedition. I already have a couple of surprising articles lined up from 1959 and 1960, so stay tuned!
