In January, Duda and Hart wrote about using a variant of the Hough transform to find lines in a pixelated image. More than fifty years later, this is still a standard approach to the problem. A line can be defined in terms of theta and rho, such that x cos theta + y sin theta = rho. First, transform every non-blank point (the algorithm is originally defined for black & white images) into a sinusoidal curve in the (theta, rho) parameter space. If a set of points lies on a line, their curves all intersect at a single point in that space. Finding those intersections can be expensive, but the authors give techniques for making it tractable, as well as for allowing some slack in the parameters to account for imperfections and approximations.
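That description maps almost directly onto the accumulator-array voting scheme that became the textbook version of the idea. Here is a minimal sketch in Python/NumPy (my own illustration, not the paper's code, and skipping the efficiency tricks Duda and Hart discuss), assuming a binary image where the nonzero pixels are the non-blank points; the function name and parameters are mine:

```python
import numpy as np

def hough_lines(image, n_theta=180, threshold=0.5):
    """Vote for lines in a binary image using the parameterization
    x*cos(theta) + y*sin(theta) = rho. Returns (theta, rho) pairs whose
    accumulator cells collect enough votes."""
    h, w = image.shape
    rho_max = int(np.ceil(np.hypot(h, w)))      # largest possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # Accumulator: rows index rho (shifted so negative rho fits), columns index theta.
    acc = np.zeros((2 * rho_max + 1, n_theta), dtype=np.int32)

    ys, xs = np.nonzero(image)                  # the non-blank pixels
    for x, y in zip(xs, ys):
        # Each point contributes one sinusoidal curve in (theta, rho) space.
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + rho_max, np.arange(n_theta)] += 1

    # Cells where many curves intersect correspond to lines in the image.
    peak = threshold * acc.max()
    rho_idx, theta_idx = np.nonzero(acc >= peak)
    return [(thetas[t], r - rho_max) for r, t in zip(rho_idx, theta_idx)]
```

Each non-blank pixel votes along its sinusoid, and the cells where many sinusoids land are the intersections described above; the coarseness of the (theta, rho) grid is exactly the kind of slack that tolerates imperfect lines.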
David Pager proposed a computer-based interactive scientific community: essentially a queryable database of mathematical theorems that can tell you which ones you need to know in order to understand a particular paper you are reading. The paper seems to have had little impact, but the GOFAI crowd, Cyc included, should have loved it!
And March was a special issue, with eight of the 23 papers from the third SOSP, among them a paper on TENEX and one by Liskov. Wow, I'm going to have to read all of these in more detail!
Who can argue with the early introduction of a course on computers and society?
As I noted in spelunking 1970, all of this is starting to feel very modern -- or maybe I'm just old. But it's getting to the point where so many articles feel relevant that it's hard to choose! However, many of these early CACM articles have only single-digit citation counts (ACM counting), which I find surprising.
Let me close with an obvious one: Dijkstra's "The Humble Programmer." We should all heed the words of the titans, and their Turing Award lectures are a great source of wisdom. "Programs should be constructed correctly, not just debugged into correctness," indeed! (Would that it were so easy...)