Saturday, April 05, 2025

Spelunking CACM, Vol. 23 (1980): Pilot and Medusa



The structure of the magazine is changing, but not the covers, which are still primarily black and blue, with some abstract design. The articles are longer and more in-depth, and each issue has only a few articles. Some of the older types of notices and articles, such as short algorithm descriptions, have been retired. And then rather suddenly around August or perhaps a little earlier, the article format itself changed and became more modern, with a nicer header and a three-column, large-magazine format.

Apparently there was a thing called the IBM Programmer Aptitude Test, which I had never heard of. A quick check didn't find a copy of it online, but some discussion seems to indicate that it was a lot of math word problems. A couple of researchers tested it rigorously, and found, surprise, very weak correlation between the test result and grades in an introductory FORTRAN programming course. They also found an even weaker correlation between gender and performance.

Harold Abelson and Peter Andreae wrote about tradeoffs in VLSI design. Interestingly, this is the first time I recall seeing the term "VLSI", though maybe it just didn't catch my eye before. The term itself should have been only about three years old at the time (according to Lynn Conway's reminiscences), and yet the article doesn't bother to expand the acronym.

Not only was there a chess tournament featuring a dozen programs at a 1979 ACM conference, there was also an early attempt at a man-machine team, what we might now call "centaur" or "advanced" chess, playing against (in this case) a lone human. The article authors were relieved that the lone human won.

Xerox Business Systems (not PARC?) authors published about Pilot (the figure below), an OS for personal computers complete with a single 32-bit virtual address space. (In fact, we might call it a 33-bit address space today, since addresses referred to 16-bit words.) Pilot used a flat (non-hierarchical) namespace for files, each of which had a 64-bit unique identifier the authors refer to as a capability. The capability is supposed to be unique in space and time, across all machines. Frustratingly, the article doesn't contain much on the hardware required to run Pilot, but it's implemented in Mesa and very closely tied to that language. Inter-process communication can be either via shared memory or via the provided communication libraries, which primarily focus on the PUP protocol suite, though the authors describe similarities to the ARPANET protocol suite. Like TCP/IP, PUP includes internetworking concepts.
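The flat namespace keyed by a 64-bit identifier is easy to picture. Here's a minimal, hypothetical sketch in Python (my own names and layout, not Pilot's actual Mesa interfaces) of a flat file store whose IDs combine a machine number, a timestamp, and a counter so they stand a chance of being unique in space and time:

```python
import itertools
import time

# Toy sketch of a Pilot-style flat file namespace. The 64-bit ID layout here
# (16 bits machine, 32 bits seconds, 16 bits serial) is my own invention,
# meant only to illustrate "unique in space and time, across all machines."

_counter = itertools.count()

def new_file_id(machine_id: int) -> int:
    """Return a 64-bit identifier unique-ish across machines and time."""
    seconds = int(time.time()) & 0xFFFFFFFF
    serial = next(_counter) & 0xFFFF
    return (machine_id & 0xFFFF) << 48 | seconds << 16 | serial

class FlatFileStore:
    """Flat (non-hierarchical) namespace: just a map from 64-bit ID to contents."""
    def __init__(self):
        self._files: dict[int, bytes] = {}

    def create(self, machine_id: int, data: bytes = b"") -> int:
        fid = new_file_id(machine_id)
        self._files[fid] = data
        return fid  # the ID itself is the handle ("capability") given to clients

    def read(self, fid: int) -> bytes:
        return self._files[fid]

store = FlatFileStore()
fid = store.create(machine_id=0x0042, data=b"hello, Pilot")
print(hex(fid), store.read(fid))
```

Note there is no directory tree at all; anything hierarchical would have to be layered on top by applications.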

As long as we're talking operating systems, maybe the more interesting one technically is Medusa, which ran on the Cm* multiprocessor (the figure at the top of this posting), developed at Carnegie Mellon University. The article by Ousterhout et al. describes an extremely sophisticated OS running on a distributed shared memory system. The hardware, like a NUMA system today, can directly access local or remote memory, with up to about a 10x latency penalty. The processors are LSI-11s, a version of the workhorse PDP-11 that was used for so many things over two decades and several hardware and OS iterations.

A task force comprises activities: the former roughly resembles a modern process, and the latter correspond to threads, with each activity playing a specific role in the overall program or utility. The twist is that each activity is bound to a single processor, while a task force can apparently span activities on many processors. One approach, shown below, is to have the many utilities (daemons, in modern terms) of the OS each running on a separate processor.
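To make the process/thread analogy concrete, here is a tiny, hypothetical sketch (my own names and structure, not Medusa's real interfaces) of a task force whose processor-bound activities are spread across several Cm* processors:

```python
from dataclasses import dataclass, field

# Hypothetical model of Medusa's task-force/activity structure:
# an Activity is a thread-like unit pinned to one processor; a TaskForce
# groups activities that may be spread across many processors.

@dataclass
class Activity:
    role: str        # the specific function this activity performs
    processor: int   # the processor this activity is bound to

@dataclass
class TaskForce:
    name: str
    activities: list[Activity] = field(default_factory=list)

    def add(self, role: str, processor: int) -> None:
        self.activities.append(Activity(role, processor))

    def processors(self) -> set[int]:
        return {a.processor for a in self.activities}

# One OS utility with its pieces spread over separate processors,
# roughly in the spirit of the article's figure (details invented here):
file_system = TaskForce("file-system")
file_system.add(role="directory service", processor=0)
file_system.add(role="disk I/O", processor=1)
file_system.add(role="buffer management", processor=2)
print(file_system.processors())  # {0, 1, 2}
```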

Arguably Medusa echoes some aspects of Farber's DCS and in turn influenced things like VAXclusters, though neither of those systems had direct hardware access to remote memory, as far as I know/recall.

Guy Steele and Gerald Sussman described a LISP microprocessor.  Interestingly, despite the fame of that pair, this article hasn't been cited much; perhaps the commercial LISP machines of just a few years later don't really owe much to it?

Also, I hadn't realized that there were formal attempts to verify security in an OS that far back -- and using capabilities, to boot.

Enough for now. Once again, this is turning into a catalog rather than a dive into one or two pleasing papers, but it's intriguing to see so much on distributed OSes showing up. What a time it was, and I was still too young to participate at all, even though some of this major work was taking place within driving distance of my parents' house, in Pittsburgh. If I had known then about that work, and that I wanted to do computing systems, I might very well have gone to CMU instead of Caltech, and how different my life would have been. Although my life later converged with many good people from CMU!

Cross-validating Quantum Network Simulators

New paper on the arXiv. I'll be presenting this one at an INFOCOM workshop in London next month.

During this cross-validation process, we not only fixed bugs in both simulators but also gained a deeper understanding of the performance differences caused by protocol design differences.

Cross-Validating Quantum Network Simulators

Joaquin Chung, Michal Hajdušek, Naphan Benchasattabuse, Alexander Kolar, Ansh Singal, Kento Samuel Soon, Kentaro Teramoto, Allen Zang, Raj Kettimuthu, Rodney Van Meter

We present a first cross-validation of two open-source quantum network simulators, QuISP and SeQUeNCe, focusing on basic networking tasks to ensure consistency and accuracy in simulation outputs. Despite very similar design objectives of both simulators, their differing underlying assumptions can lead to variations in simulation results. We highlight the discrepancies in how the two simulators handle connections, internal network node processing time, and classical communication, resulting in significant differences in the time required to perform basic network tasks such as elementary link generation and entanglement swapping. We devise common ground scenarios to compare both the time to complete resource distribution and the fidelity of the distributed resources. Our findings indicate that while the simulators differ in the time required to complete network tasks, a constant factor difference attributable to their respective connection models, they agree on the fidelity of the distributed resources under identical error parameters. This work demonstrates a crucial first step towards enhancing the reliability and reproducibility of quantum network simulations, as well as leading to full protocol development. Furthermore, our benchmarking methodology establishes a foundational set of tasks for the cross-validation of simulators to study future quantum networks.