rdv live from Tokyo

Monday, December 08, 2025
(Just a quick note for now, more later.)

Saturday, November 15, 2025
Spelunking CACM, Vol. 25 (1982): Worms and Grapevines
Wow, right away in January there are things I find relevant to our work today. One such is an article testing whether documentation of control flow or of data structures helps more in understanding a piece of code; it comes down firmly on the side of data structures. Of course, that was a very limited test done with a short program and without the benefit of today's tools and practices, but it's nice to see where we were at the time.
I lived through the Morris Worm in 1988, and I knew the name came from a Brunner SF novel, but I either didn't know or had forgotten there were not just technical analyses but implementations and tests of worms before 1988! (The image at the top is taken from this article.) Fascinating study there on a multi-node, multi-segment worm that is extremely hard to kill. Prescient, and frightening. It has been a long time since we had something quite like it, despite the many forms and instances of malware; let's all hope we have built a resilient system, robust and well-defended against it happening again. The article also includes a nice summary of early ARPANET distributed programs, including pointing out that routing is itself a distributed computation -- an under-appreciated insight even today.
One article that I teach, even today, is on Grapevine, which is one of the seminal distributed systems. Everyone in computer systems should know about that work at Xerox PARC.
The ACM Classification System originated in 1964, and was redone here in 1982. With the obvious exception of quantum computing, I find that classification surprisingly solid, more than forty years later. It has since been revised, in 1991, 1998 and 2012, but not in over a decade, so people either don't use it or our forebears Simply Got It Right.
State of the art in AI in 1982? A 20-page effort to build an English grammar diagram useful for parsing sentences.
I haven't stopped to look more closely, but the February issue has several articles on queueing systems analysis.
A couple of articles are of personal interest: one on debugging via program slicing, a technique one of my Ph.D. graduates used for quantum programs, and one by Bill Swartout and Bob Balzer on the intertwining of specification and implementation. Just a few years later I would be the juniorest worker at USC/ISI, where they worked.
For what it's worth, fall of 1982 is when I entered college. Next year is my fortieth reunion, looking forward to seeing people I haven't seen in a long time.
AI Policy: Whew
I am working on a policy for appropriate use of AI (especially but not only LLMs) in my classes and research group. It's an important task, and I'm trying not just to lay down a set of written rules to be blindly followed, but to provide a bit of history and a set of principles. I do expect the policy to be revised regularly as the technology evolves, but I want to clarify my own thinking and encourage students both to think for themselves and to read as widely as possible.
This is a nontrivial task...
Tuesday, October 21, 2025
Takaichi Sanae: Japan's First Woman Prime Minister
237 votes for Takaichi Sanae (高市 早苗) on the first ballot in the Lower House. It's official, she will become Japan's first woman prime minister.
Interestingly, she worked for Democratic (and liberal) Congresswoman Pat Schroeder of Colorado as a congressional fellow in the late 1980s, when Takaichi was in her mid-twenties. Either she was still forming her political views or she didn't know enough about American politics to realize who she was working for, but for most of her political career she has been conservative to very right wing.
Koizumi Junior will become Defense Minister. I haven't seen a full cabinet lineup yet. Always interested in who the ministers with large research portfolios are, though I rarely know enough about them to have an opinion.
In the photo (mobile phone photo of a television) at the top, Takaichi sits in front of three former prime ministers at the back of the Diet Lower House. I know so little about Japanese parliamentary procedures that I have no idea how the seating chart works. (Actually, there are many more important things I also don't know.)
Thursday, September 11, 2025
MOSIS in the 1980s
I found some of the key history of the MOSIS Project I was looking for! The table above is from a 1991 DARPA report, DARPA Technical Accomplishments. Volume 2. An Historical Review of Selected DARPA Projects, Accession Number: ADA241725. During the 1980s, MOSIS fabbed 12,201 projects (computer chips designed by researchers from universities, government labs and companies) in technologies ranging from 5-micron NMOS to 1.2-micron CMOS. What an astounding total! I wish I also knew how many wafers, chips, designers and organizations are represented by those numbers.
I also found two key annual reports summarizing all (most?) of USC/ISI's work for DARPA, for 1980 and 1982. Unfortunately, I have so far been unable to track down the report for the absolutely crucial year of 1981, when MOSIS went live. The 1980 report states:
In August 1980 MOSIS accepted designs for the first fabrication run using the software developed for automatic interaction with users. This run had 65 projects submitted by designers from 8 organizations: ISI, UCLA, Caltech, Jet Propulsion Lab, Stanford University, National Bureau of Standards, Carnegie-Mellon University and Washington University at St. Louis. These projects were packed into 18 dies on the wafer.
And then elsewhere:
After a period of debugging, the MOSIS system became operational in January 1981. The MOSIS system accepts VLSI designs expressed in CIF [1] and handles all the fabrication steps needed in order to provide the designers with actual chips.
The original VLSI research staff were: Danny Cohen, Yehuda Afek, Ron Ayres, Joel Goldberg, Gideon Hess, Dave Johannsen, Lee Richardson, and Vance Tyree. Support staff: Victor Brown and Rolanda Shelby.
By the 1983 annual report, we learn that, "Over 40 universities and hundreds of designers now submit VLSI designs in electronic form via any network to MOSIS. MOSIS delivers chips and will soon deliver user-specified printed circuit boards to designers 30 to 35 days after receipt of a design."
The 1984 annual report lists 35 total MOSIS "runs": 17 4-micron nMOS runs; 5 3-micron nMOS runs; 11 3-micron CMOS/Bulk runs; and 2 3-micron CMOS/SOS runs. By the time I began working at ISI in June 1986, runs were "closed" weekly on Thursdays.
It really was an incredible period in computing technology, and MOSIS was an essential contribution. I was young (20, when I started working at ISI) and stupid (okay, I've still got that part), and so I had only the tiniest glimmer of how important everything going on around me was.
Now that I have found most of the hard data I wanted, I can update the Wikipedia entry on MOSIS!
MOSIS History: Help a Guy Out?
If you're reading this, and you know anything about the early days of the MOSIS Project at USC/ISI, mosey on over to https://en.wikipedia.org/wiki/Talk:MOSIS and drop a note there. I'm trying to enhance the article on MOSIS, but of course it needs to be documented history, not just, "I remember..." or "I think..."
If you don't know what MOSIS is, it's the Metal Oxide Semiconductor Implementation System. It's a multi-project wafer fabrication service dating back to the early 1980s, and it transformed VLSI research in the United States. (I've never understood why it wasn't emulated more aggressively worldwide, though a few other MPW systems have existed.)
I worked for MOSIS for a couple of years, though my job was on the compiler for the ICL programming language that most of the processing tools were implemented in. I had basically nothing to do with the VLSI side or the production run side.
Image at the top from the March 1984 user manual, available at https://apps.dtic.mil/sti/citations/ADA139744.
Saturday, August 09, 2025
August 9th (Nagasaki)
Yesterday (August 8) we were in the city of Nagasaki. It's a beautiful little place, with a charming streetcar and centuries of important history, especially in Japan-Dutch relations during the Edo Period. And yet, of course, its most famous role in history is now, and very likely will remain, as the target of humanity's second atomic bombing of other humans.
We visited the Nagasaki Atomic Bomb Museum. I was last here at New Year's 1994, and the current museum building opened in 1996, so this was effectively the first time for me. The museum is, if possible, more powerful, moving and terrifying than Hiroshima's. The exhibits pull zero punches.
The museum opens with some of the most disturbing images and artifacts anywhere on Earth. From there, it goes into more personal stories and mementos. It also covers in surprising depth the physics and medical impact of the explosion. It concludes with a history of China-Japan conflicts starting in the late 19th century and a separate timeline for the Pacific War, followed by postwar nuclear developments and politics, with a moving call for global disarmament.
Many of the exhibits have at least short accompanying explanations in English, but many also do not; they are probably covered in the audio guide that I didn't borrow. The war histories state dates for key events, such as the Manchurian Incident and the attack on Pearl Harbor, but without any description, so interpretation is left up to the viewer and their pre-existing knowledge of history.
We were there the afternoon before the 80th anniversary ceremony, held at 11:02 a.m., the moment of the bombing. There was a scattering of foreigners in somber suits, most with guides/interpreters. I presume they were there as guests for today's ceremony.
I took a few photos (which is allowed through most of the museum), but most of them feel too raw, too personal. The photo at the top of this posting is from the memorial, which is attached to the museum. The glass tower in the middle back normally holds books listing the names of the many victims, but a small sign said that the books had been removed in preparation for use in today's ceremony.
Thursday, July 31, 2025
History of the Asian Internet
If you're a history buff, here's a collection of writings on the history of the Internet in Asia:
https://sites.google.com/site/internethistoryasia/home?authuser=0

Monday, July 28, 2025
Bollocks
Professor Peter Gutmann, of the University of Auckland, a well-known computer security researcher, has made something of a name for himself as a quantum skeptic, at least with respect to quantum cryptanalysis and post-quantum cryptography (PQC). His argument is roughly two-pronged:
- Quantum computers aren't making progress on factoring at all; and
- even if they did, computer security people and cryptographers have much larger problems to worry about.
Some specific points from the talk:
- Germany had a massive weapon systems boondoggle during WWII.
- OWASP lists a lot (tens of thousands!) of threats to the security of computer systems, and the highest-ranked attack on the mathematics of encryption was #17,245 (not a typo). Roughly, the argument is (see the toy arithmetic after this list):
- mathematical attacks are high-effort and have a low probability of success;
- even if you succeed, you recover a few bits of the contents of one message; and
- with the top ten high-priority problems, by contrast, when you succeed you win big -- you get "the plaintext of all of the messages."
- (And holy cow, not directly in Peter's talk, but there are always examples of how human stupidity is the number one threat!)
- NSA can already factor 1,024-bit RSA keys, if they're willing to commit leading-edge supercomputer time in allocation chunks of a year at a time.
- Quantum computer-based factoring has grown from 4 bits to 5 bits over the last 20+ years.
- Quantum computers are physics experiments, not computers.
- (Brings up poorly-regarded D-Wave factoring.)
- PQC is very hard to implement well, and costs a lot in engineering time and network/execution time.
- (Various disparaging of academics and standards organizations as becoming self-justifying.)
- "Quantum cryptanalysis is the string theory of security" and my dog can factor just as well.
My reactions, point by point:
- Yes, and while cautionary tales are good, cautionary tales are not the same thing as prediction or even valid analogy.
- Okay, but I think the implied message here is off: if you can crack one Diffie-Hellman key exchange, you gain the ability to read everything in that conversation (n.b.: it's harder than just factoring or discrete log, there are other mechanisms involved), but the bigger catch would be the RSA private key of an important individual, which would allow you to impersonate them across a range of systems; certainly there are organizations that would pay a lot of money for that (see the sketch after this list). Of course, I'd argue that truly high-value targets connecting to high-value systems are likely secured via shared private keys anyway, so hacking RSA is lower value. Peter is definitely more knowledgeable than I am in this area.
- Okay, but is that relevant to the anti-quantum argument? Is the argument just that people won't really commit big resources to factoring? I'd like to hear the oral argument that accompanies this point.
- This is the big one: he's saying progress in development of quantum computers is so poor that we can effectively discount them as any sort of threat. Ooh...okay. It's a fair point that reported successes in the literature on the task of cryptanalysis are advancing at a glacial pace. (We have worked on this topic.) But projecting from past progress to future progress is dangerous in this field. We have known since the beginning of this field that error correction would be necessary. Until we hit the threshold that allows quantum error correction to be executed effectively, progress on hardware could not translate into equivalent algorithmic successes.
Well, guess what? The relentless improvement in hardware means we have passed that basic threshold on at least two very different hardware platforms in the last two years. At least two other companies have released roadmaps that take us to large-scale, fault-tolerant systems within five years. A roadmap at that level means they think they know how to solve all of the problems standing in their way. Even if they are off by a factor of two, that still means we're there within a decade, and I'd bet sooner.
So my opinion is that pooh-poohing the likelihood of the advent of cryptographically relevant quantum computers (CRQCs) seems unwise. I think it's bordering on irresponsible to assume the technology won't happen; the argument instead needs to be about how much to prioritize countermeasures.
- In today's environment, strongly agreed. Dave Farber said to me several years ago (perhaps as far back as 2019, though I think it was a little more recently than that), when I showed him some Qiskit code, "This isn't an application, it's an experiment." I think we as a community need to think very hard about how to deliver hardware, runtime systems and tools, and applications to customers.
- (Pass.)
- Cost of PQC is high -- oh, yes, definitely. I attend IETF meetings and listen to people moan about how hard it is. I'm not an expert here, though.
- (Pass.)
- Funny! (I need a dog. I love dogs. But I'm allergic to dogs, work too much and travel too much, and I think dogs in Japan don't have good lives, but all that's for a different posting, some other day...)
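For the Diffie-Hellman versus long-term RSA key point above, here is a minimal sketch of the asymmetry I have in mind. It's my illustration, not anything from Peter's talk; it assumes the widely used Python cryptography package, and all the variable names are mine. A recovered ephemeral session secret exposes one conversation; a recovered long-term signing key lets an attacker impersonate its owner everywhere that key is trusted.

```python
# Sketch: ephemeral session secret vs. long-term identity key.
# Requires: pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# --- Ephemeral key agreement: compromise exposes one session only ---
alice_ephemeral = X25519PrivateKey.generate()
bob_ephemeral = X25519PrivateKey.generate()
session_secret = alice_ephemeral.exchange(bob_ephemeral.public_key())
# An attacker who recovers session_secret can read this one conversation;
# the next session starts from fresh ephemeral keys.

# --- Long-term identity key: compromise enables impersonation ---
identity_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
claim = b"Please reset my credentials; signed, the account owner."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = identity_key.sign(claim, pss, hashes.SHA256())

# Any system that trusts the corresponding public key will accept signatures
# minted by whoever holds identity_key -- durable, wide-ranging impersonation.
identity_key.public_key().verify(signature, claim, pss, hashes.SHA256())
print("signature verifies: holding the long-term key is holding the identity")
```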
