Saturday, November 15, 2025

Spelunking CACM, Vol. 25 (1982): Worms and Grapevines



 Wow, right away in January there are things I find relevant to our work today. One such is an article on a test of whether documentation of control flow or of data structures helps more in understanding a piece of code, coming down firmly on the side of data structures. Of course, it was a very limited test done with a short program and without the benefit of today's tools and practices, but it's nice to see where we were at the time.

I lived through the Morris Worm in 1988, and I knew the name came from a Brunner SF novel, but I either didn't know or had forgotten there were not just technical analyses but implementations and tests of worms before 1988! (The image at the top is taken from this article.) Fascinating study there on a multi-node, multi-segment worm that is extremely hard to kill. Prescient, and frightening. It has been a long time since we had something quite like it, despite the many forms and instances of malware; let's all hope we have built a resilient system, robust and well-defended against it happening again. The article also includes a nice summary of early ARPANET distributed programs, including pointing out that routing is itself a distributed computation -- an under-appreciated insight even today.

One article that I teach, even today, is on Grapevine, which is one of the seminal distributed systems. Everyone in computer systems should know about that work at Xerox PARC.

The ACM Classification System originated in 1964, and was redone here in 1982. With the obvious exception of quantum computing, I find that classification surprisingly solid, more than forty years later. It has since been revised, in 1991, 1998 and 2012, but the fact that it needs an update only every decade or so suggests that either people don't use it much or our forebears Simply Got It Right.

State of the art in AI in 1982? A 20-page effort to build an English grammar diagram useful for parsing sentences.

I haven't stopped to look more closely, but the February issue has several articles on queueing systems analysis.

A couple of articles are of personal interest: one on debugging via program slicing, a technique one of my Ph.D. graduates used for quantum programs, and one by Bill Swartout and Bob Balzer on the intertwining of specification and implementation. Just a few years later I would be the juniorest worker at USC/ISI, where they worked.

For what it's worth, fall of 1982 is when I entered college. Next year is my fortieth reunion, looking forward to seeing people I haven't seen in a long time.







AI Policy: Whew

 I am working on a policy for appropriate use of AI (especially but not only LLMs) in my classes and research group. It's an important task, and I'm trying not just to lay down a set of written rules to be blindly followed, but also to offer a bit of history and a set of principles. I do expect the policy to be revised regularly as the technology evolves, but I want to clarify my own thinking and encourage students to both think for themselves and read as widely as possible.

This is a nontrivial task...

Tuesday, October 21, 2025

Takaichi Sanae: Japan's First Woman Prime Minister

237 votes for Takaichi Sanae (高市 早苗) on the first ballot in the Lower House. It's official, she will become Japan's first woman prime minister.

Interestingly, she worked for Democratic (and liberal) Congresswoman Pat Schroeder of Colorado as a congressional fellow in the late 1980s, when Takaichi was in her mid-twenties. Either she was still forming her political views or she didn't know enough about American politics to realize who she was working for, but for most of her political career she has been conservative to very right wing.

Koizumi Junior will become Defense Minister. I haven't seen a full cabinet lineup yet. I'm always interested in who the ministers with large research portfolios are, though I rarely know enough about them to have an opinion.

In the photo (mobile phone photo of a television) at the top, Takaichi sits in front of three former prime ministers at the back of the Diet Lower House. I know so little about Japanese parliamentary procedures that I have no idea how the seating chart works. (Actually, there are many more important things I also don't know.)

Thursday, September 11, 2025

MOSIS in the 1980s

 

 I found some of the key history of the MOSIS Project I was looking for!  The table above is from a 1991 DARPA report, DARPA Technical Accomplishments, Volume 2: An Historical Review of Selected DARPA Projects (Accession Number ADA241725). During the 1980s, MOSIS fabbed 12,201 projects (computer chips designed by researchers from universities, government labs and companies) in technologies ranging from 5-micron NMOS to 1.2-micron CMOS.  What an astounding total!  I wish I also knew how many wafers, chips, designers and organizations are represented by those numbers.

I also found two key annual reports summarizing all (most?) of USC/ISI for DARPA, 1980 and 1982.  Unfortunately the absolutely crucial year of 1981, when MOSIS went live, I have been unable to track down so far.  The 1980 report states:

In August 1980 MOSIS accepted designs for the first fabrication run using the software developed for automatic interaction with users. This run had 65 projects submitted by designers from 8 organizations: ISI, UCLA, Caltech, Jet Propulsion Lab, Stanford University, National Bureau of Standards, Carnegie-Mellon University and Washington University at St. Louis. These projects were packed into 18 dies on the wafer.

And then elsewhere:

After a period of debugging, the MOSIS system became operational in January 1981.  The MOSIS system accepts VLSI designs expressed in CIF [1] and handles all the fabrication steps needed in order to provide the designers with actual chips.

The original VLSI research staff were: Danny Cohen, Yehuda Afek, Ron Ayres, Joel Goldberg, Gideon Hess, Dave Johannsen,  Lee Richardson, and Vance Tyree.  Support staff: Victor Brown and Rolanda Shelby.

The 1983 annual report tells us that, "Over 40 universities and hundreds of designers now submit VLSI designs in electronic form via any network to MOSIS. MOSIS delivers chips and will soon deliver user-specified printed circuit boards to designers 30 to 35 days after receipt of a design."

The 1984 annual report lists 35 total MOSIS "runs": 17 4-micron nMOS runs; 5 3-micron nMOS runs; 11 3-micron CMOS/Bulk runs; and 2 3-micron CMOS/SOS runs.  By the time I began working at ISI in June 1986, runs were "closed" weekly on Thursdays.

It really was an incredible period in computing technology, and MOSIS was an essential contribution.  I was young (20, when I started working at ISI) and stupid (okay, I've still got that part), and so I had only the tiniest glimmer of how important everything going on around me was.

Now that I have found most of the hard data I wanted, I can update the Wikipedia entry on MOSIS!

MOSIS History: Help a Guy Out?


 If you're reading this, and you know anything about the early days of the MOSIS Project at USC/ISI, mosey on over to https://en.wikipedia.org/wiki/Talk:MOSIS and drop a note there.  I'm trying to enhance the article on MOSIS, but of course it needs to be documented history, not just, "I remember..." or "I think..."

If you don't know what MOSIS is, it's the Metal Oxide Semiconductor Implementation System.  It's a multi-project wafer fabrication service dating back to the early 1980s, and it transformed VLSI research in the United States.  (I've never understood why it wasn't emulated more aggressively worldwide, though a few other MPW systems have existed.)

I worked for MOSIS for a couple of years, though my job was on the compiler for the ICL programming language that most of the processing tools were implemented in.  I had basically nothing to do with the VLSI side or the production run side.

Image at the top from the March 1984 user manual, available at https://apps.dtic.mil/sti/citations/ADA139744.

Saturday, August 09, 2025

August 9th (Nagasaki)

 

Yesterday (August 8) we were in the city of Nagasaki. It's a beautiful little place, with a charming streetcar and centuries of important history, especially in Japan-Dutch relations during the Edo Period. And yet, of course, its most famous role in history is now, and very likely will remain, as the target of humanity's second atomic bombing of other humans.

We visited the Nagasaki Atomic Bomb Museum.  I was last here at New Year's 1994, and the current museum building opened in 1996, so this was effectively the first time for me. The museum is, if possible, more powerful, moving and terrifying than Hiroshima's.  The exhibits pull zero punches.

The museum opens with some of the most disturbing images and artifacts anywhere on Earth. From there, it goes into more personal stories and mementos.  It also covers in surprising depth the physics and medical impact of the explosion.  It concludes with a history of China-Japan conflicts starting in the late 19th century, a separate timeline for the Pacific War, postwar nuclear developments and politics, and a moving call for global disarmament.

Many of the exhibits have at least short accompanying explanations in English, but many also do not; they are probably covered in the audio guide that I didn't borrow.  The war histories state dates for key events, such as the Manchurian Incident and the attack on Pearl Harbor, but without any description, so interpretation is left up to the viewer and their pre-existing knowledge of history.

We were there the afternoon before the 11:02 a.m. 80th anniversary.  There was a scattering of foreigners in somber suits, most with guides/interpreters.  I presume they were there as guests for today's ceremony.

I took a few photos (which is allowed through most of the museum), but most of them feel too raw, too personal.  The photo at the top of this posting is from the memorial, which is attached to the museum.  The glass tower in the middle back normally holds books listing the names of the many victims, but a small sign said that the books had been removed in preparation for use in today's ceremony.


Monday, July 28, 2025

Bollocks

 Professor Peter Gutmann, of the University of Auckland, a well-known computer security researcher, has made something of a name for himself as a quantum skeptic, at least with respect to quantum cryptanalysis and post-quantum cryptography (PQC).  His argument is roughly two-pronged:

  1. Quantum computers aren't making progress on factoring at all; and
  2. even if they did, computer security people and cryptographers have much larger problems to worry about.
I agree with him pretty strongly on point #2.  I've said various times that the mathematical vulnerability of Diffie-Hellman key exchange or RSA authentication is not very high on the list of worries for corporate CSOs.  On point #1, I'd say he's technically right about the current state, but rather dramatically wrong in concluding that he can project onward for the next couple of decades and declare the world safe from quantum computers.

Sometime in the last year or two, Peter gave a talk somewhere titled, "Why Quantum Cryptanalysis is Bollocks". The slides themselves have no date or talk venue, but evidence from within them suggests mid-to-late 2024. Some blog postings also assert that date, as does a very recent article in The Register, which is how Peter and this talk came to my attention.  The slides make the talk look like a lot of fun, so I wish I could hear it in person or even via recording.  So let's take a look at the logic in the talk, then I'll tell you why I think he is misunderestimating the quantum folks:

  1. Germany had a massive weapon systems boondoggle during WWII.
  2. OWASP lists a lot (tens of thousands!) of threats to the security of computer systems, and the highest-ranked attack on the mathematics of encryption was #17,245 (not a typo).  Roughly, the argument is:
    1. Mathematical attacks are high-effort and have a low probability of success;
    2. even if you succeed, you recover a few bits of the contents of one message; and
    3. with the top ten high-priority problems, when you succeed you win big -- you get "the plaintext of all of the messages."
    4. (And holy cow, not directly in Peter's talk, but there are always examples of how human stupidity is the number one threat!)
  3. NSA can already factor 1,024-bit RSA keys, if they're willing to commit leading-edge supercomputer time in allocation chunks of a year at a time.
  4. Quantum computer-based factoring has grown from 4 bits to 5 bits over the last 20+ years.
  5. Quantum computers are physics experiments, not computers.
  6. (Brings up poorly-regarded D-Wave factoring.)
  7. PQC is very hard to implement well, and costs a lot in engineering time and network/execution time.
  8. (Various disparaging of academics and standards organizations as becoming self-justifying.)
  9. "Quantum cryptanalysis is the string theory of security" and my dog can factor just as well.
Let's see...let me respond one by one:
  1. Yes, and while cautionary tales are good, cautionary tales are not the same thing as prediction or even valid analogy.
  2. Okay, but I think the implied message here is off: if you can crack one Diffie-Hellman key exchange, you gain the ability to read everything in that conversation (n.b.: it's harder than just factoring or discrete log; there are other mechanisms involved), but the bigger catch would be the RSA private key of an important individual, which would allow you to impersonate them across a range of systems; certainly there are organizations that would pay a lot of money for that.  Of course, I'd argue that truly high-value targets connecting to high-value systems are likely secured via shared private keys anyway, so cracking RSA is of lower value. Peter is definitely more knowledgeable than I am in this area.
  3. Okay, but is that relevant to the anti-quantum argument? Is the argument just that people won't really commit big resources to factoring? I'd like to hear the oral argument that accompanies this point.
  4. This is the big one: he's saying progress in the development of quantum computers is so poor that we can effectively discount them as any sort of threat. Ooh...okay. It's a fair point that reported successes in the literature on the task of cryptanalysis are advancing at a glacial pace.  (We have worked on this topic.) But projecting from past progress to future progress is dangerous in this field. We have known since the beginning that error correction would be necessary, and until we hit the threshold that allows quantum error correction to be executed effectively, progress on hardware did not translate into equivalent algorithmic successes.
    Well, guess what? The relentless improvement in hardware means we have passed that basic threshold on at least two very different hardware platforms in the last two years.  At least two other companies have released roadmaps that take us to large-scale, fault-tolerant systems within five years. Publishing a roadmap at that level means they think they know how to solve all of the problems standing in their way.  Even if they are off by a factor of two, that still means we're there within a decade -- and I'd bet sooner.
    So my opinion is that pooh-poohing the likelihood of the advent of cryptographically relevant quantum computers (CRQCs) seems unwise.  I think it's bordering on irresponsible to assume the technology won't happen; the argument instead needs to be about how much to prioritize countermeasures.
  5. In today's environment, strongly agreed.  Dave Farber said to me several years ago (perhaps as far back as 2019, though I think it was a little more recently than that), when I showed him some Qiskit code, "This isn't an application, it's an experiment."  I think we as a community need to think very hard about how to deliver hardware, runtime systems and tools, and applications to customers.
  6. (Pass.)
  7. Cost of PQC is high -- oh, yes, definitely.  I attend IETF meetings and listen to people moan about how hard it is.  I'm not an expert here, though.
  8. (Pass.)
  9. Funny!  (I need a dog.  I love dogs.  But I'm allergic to dogs, work too much and travel too much, and I think dogs in Japan don't have good lives, but all that's for a different posting, some other day...)
Verdict: pundits, and occasionally quantum people, oversell how soon and how big the impact will be -- I'd agree with that.  But the machines are coming.  Make no mistake about that.  So, it's up to you, as a business person, engineer, researcher, bureaucrat, CSO.  How will you respond?

(Also, if you care about broader skepticism of quantum computing, you may want to go look at a blog posting I wrote about four years ago. Geez, time flies.)

Tuesday, July 22, 2025

Spelunking CACM, Vol. 24 (1981): Hierarchical Storage Management, OS Support for Databases


Alan Jay Smith was a consultant for a company I worked for in the late 1990s. Very smart guy, already seemed "old" to me then, but he was quite a bit younger then than I am now.  Geez.  He worked on processor architecture, but his most influential work has been on storage systems, and here in 1981 he was already deep into what became known around this time as hierarchical storage management (HSM) systems.  Even well before this article, though, mainframe computers used a combination of hard disk and tape to hold user files (as in the figure above, taken from this article) that were considered active; if your file had suffered the unfortunate fate of being evicted from disk, when you asked for the file, it had to be fetched from tape -- a task that could take a couple of minutes up to an hour, depending on whether tapes had to be fetched and mounted by a human.  Alan worked on understanding inter-reference patterns, hopefully reducing the incidence of that unfortunate fate.  He found that not only was it a long-tailed distribution, with many files never accessed beyond two days after their creation, but among those whose active lifetimes were long, some showed periodic use that could be predicted and taken advantage of.  Modern server and storage farms operate at a scale and level of parallelism inconceivable in the 1970s and early 1980s, but I wonder how many of the designers of today's systems are familiar with the foundational work done by people like Alan Back in the Day.
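
As a purely illustrative aside (mine, not Alan's): the core migration decision in an HSM system can be sketched in a few lines of Python. The names and the two-day cutoff below are hypothetical, chosen only to echo the access-pattern observation above; real systems tune these thresholds from measured inter-reference data.

    import time

    # Hypothetical threshold: many files are never touched again beyond two
    # days after creation, so two days makes a plausible migration cutoff.
    MIGRATE_AFTER_SECONDS = 2 * 24 * 3600

    class HsmFile:
        def __init__(self, name, last_access, on_disk=True):
            self.name = name
            self.last_access = last_access  # POSIX timestamp of last reference
            self.on_disk = on_disk          # False: only the tape copy exists

    def migration_candidates(files, now=None):
        """Files that look inactive enough to be staged from disk out to tape."""
        now = now if now is not None else time.time()
        return [f for f in files
                if f.on_disk and now - f.last_access > MIGRATE_AFTER_SECONDS]

    def recall(f):
        """The slow path: a request for a file that lives only on tape."""
        if not f.on_disk:
            # In a real system this is the minutes-to-an-hour wait, possibly
            # including a human fetching and mounting a tape.
            f.on_disk = True
        f.last_access = time.time()
        return f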

Preparata and Vuillemin described CCC, Cube-Connected Cycles, as a compromise in the face of the physical infeasibility of a large-scale hypercube interconnect.  They limit the overall structure to an ordinary three-dimensional cube, with each vertex replaced by a ring of nodes sufficient to make up the extra dimensions; i.e., if what you want is a 5-cube, the overall structure gives you 3 dimensions and each ring emulates the other 2 dimensions, so each vertex is replaced with a 4-node ring, as in the figure below.  Later analysis by Dally would propose k-ary n-cubes with small n instead of this somewhat more irregular structure.



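If it helps to see the wiring, here is a minimal sketch (my own Python, not code from the paper) of the commonly cited CCC construction, in which each vertex of an n-dimensional hypercube is replaced by an n-node ring; the 5-cube example above trades longer rings for a smaller cube, but the constant-degree idea is the same.

    def cube_connected_cycles(n):
        """Nodes and edges of a CCC built from the n-dimensional hypercube.

        A node is a pair (v, i): v is an n-bit hypercube vertex and i is a
        position on the ring that replaces that vertex.  For n >= 3 every node
        has degree 3: two ring neighbors plus one cube neighbor reached by
        flipping bit i of v.
        """
        nodes = [(v, i) for v in range(2 ** n) for i in range(n)]
        edges = set()
        for v in range(2 ** n):
            for i in range(n):
                edges.add(frozenset({(v, i), (v, (i + 1) % n)}))   # ring edge
                edges.add(frozenset({(v, i), (v ^ (1 << i), i)}))  # cube edge
        return nodes, edges

    # The CCC over a 3-cube: 8 vertices x 3-node rings = 24 nodes, 36 edges.
    nodes, edges = cube_connected_cycles(3)
    assert len(nodes) == 24 and len(edges) == 36
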
Lamport proposed a password authentication system in which the terminal actively participates: the user identifies themselves to the computer, the computer issues a challenge (from a predetermined challenge sequence), and the terminal calculates the response using the user-supplied password. With this scheme, the computer (or server, in a modern client-server system) never holds the password or enough information to allow an eavesdropper to create and crack the next iteration of the challenge.  Of course, this requires that the terminal (client) itself be secure. I'm not enough of a cryptographer to know how influential this particular paper has been, but it's worth noting that it comes five years after Diffie-Hellman (one of only two references in this "Technical Note" short paper).
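
The easiest version of the idea to demonstrate is the hash-chain one-time password: the server stores only the top of a chain of hashes, each login reveals the next preimage down, and an eavesdropper who captures one value still cannot produce the next. Here is a minimal sketch of that flavor in modern Python (SHA-256 and all of the names are mine, obviously not from the 1981 paper):

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def chain(secret: bytes, count: int) -> bytes:
        """Apply h() to the secret 'count' times."""
        x = secret
        for _ in range(count):
            x = h(x)
        return x

    # Setup: the server stores only h^N(password), never the password itself.
    N = 1000
    password = b"correct horse battery staple"
    server_state = {"stored": chain(password, N), "counter": N}

    def client_response(password: bytes, counter: int) -> bytes:
        """The client reveals h^(counter-1)(password) for this login."""
        return chain(password, counter - 1)

    def server_verify(response: bytes) -> bool:
        """Accept if one more hash of the response matches the stored value,
        then slide the stored value down the chain so a replay fails."""
        if h(response) == server_state["stored"]:
            server_state["stored"] = response
            server_state["counter"] -= 1
            return True
        return False

    # A real client tracks the counter too; reusing N here shows replay failing.
    assert server_verify(client_response(password, N))      # first login succeeds
    assert not server_verify(client_response(password, N))  # replaying it fails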

David Patterson implemented a high-level language, called STRUM, for microcoding a Burroughs D-machine. I have done very close to zero microcode myself (one student project during my master's degree, and reading some VAX microcode around the same time), though I suppose things like SCSI processors and network packet processors are about halfway in between ordinary instruction sets and microcode. Microcode programmers are Titans of the old gods; it's a very early idea, attributed to Maurice Wilkes himself. I don't think Patterson's approach ever really took off; as far as I am aware, writing microcode still involves very low-level management of latches and data routing and the like, not really amenable to high-level languages, in my opinion. Interesting project, though.  (Interesting side note: this article took over two years from initial submission to acceptance.)

One article I read as a grad student is Stonebraker's Operating System Support for Database Management. Today, it reads like a fairly basic, almost plodding analysis, but it was groundbreaking at the time, and in my own field this kind of clear-eyed analysis of how good a job quantum hardware and software do for particular tasks should become standard practice.

Overall, I have to say, not CACM's most exciting year.