Thursday, December 15, 2022

Quantum Internet class for undergrads!

It's now official -- next fall I will be teaching an undergraduate class titled "Quantum Internet". As far as I am aware, this is the world's first Quantum Internet course aimed at undergraduates!

Next year, we will have two undergraduate courses:
  •  Quantum Information Processing: intended to be a moderate introduction, suitable not only for those planning to continue in quantum but also for those who simply want to understand the key ideas. Intended to be accessible to ambitious freshmen; the only math required is a little bit of linear algebra and discrete probability. This course is based on two short online courses we have created, plus some hands-on exercises. Taught flipped classroom style.
    • "Understanding Quantum Computers" (UQC) -- basic ideas of computation (superposition, interference, entanglement, unitary evolution, measurement, decoherence and no-cloning; algorithms; types of hardware), not a lot of math
    • "Overview of Quantum Communications" (English and Japanese) (OQC) -- a little more math and the basic ideas of quantum communications (teleportation, BB84, purification, entanglement swapping) as well as critical basic technology (especially waveguides/optical fibers and lasers)
    • exercises using IBM machines via GUI & Qiskit (a minimal Qiskit sketch follows this list)
  • Quantum Internet: intended for those wanting to go a little deeper, but not limited to those joining my research group, AQUA. The QIP course above is a strict prerequisite, and this course will also be taught flipped classroom style. There will be substantially more math in this, but it's not purely abstract; most of it is in service of designing real systems. It will be based on our 2nd and 3rd Q-Leap Quantum Academy modules:
    • "From Classical to Quantum Light" (CQL) (English and Japanese) -- Maxwell's equations, single photons, more on waveguides
    • "Quantum Internet" (QI) -- the module we are recording lessons for right now: deeper analysis of errors and error handling, network engineering issues such as routing, multiplexing, security.
    • Exercises are still a little TBD, but will include QuISP and Intel-based exercises
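Since the exercises lean on Qiskit, here is the flavor of a first hands-on exercise -- a minimal Bell-pair sketch (assuming recent qiskit and qiskit-aer installs; the simulator import, in particular, has moved between Qiskit versions). The Bell pair is the entangled resource behind teleportation and entanglement swapping:

```python
# Minimal Bell-pair exercise sketch; assumes qiskit and qiskit-aer are installed.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into an equal superposition
qc.cx(0, 1)                  # entangle qubits 0 and 1: a Bell pair
qc.measure([0, 1], [0, 1])   # measure both qubits
print(qc.draw())             # ASCII drawing of the circuit

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)                # expect roughly half '00' and half '11'
```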
UQC was funded internally by Keio. OQC, CQL and QI are funded by the Japanese government Q-Leap program. Portions of OQC are now being translated into Chinese, Thai, Arabic, Korean and French, thanks to a grant from Intel. More announcements on languages to come!
The material from OQC is now being compiled into a book. We expect an alpha-release draft to be available in January 2023 and a near-final version in 2Q2023. We have not selected a publisher for the book yet, but are looking for one.
All of the materials are or will be available under a Creative Commons license. People are encouraged to reuse, remix, translate, etc. -- just give us credit and make your own work available, too.

Monday, November 21, 2022

Book Progress: Quantum Communications

 Seven of the fifteen chapters in our next book, "Quantum Communications", are ready for local students to use/review. With this momentum, the book should be 2/3 ready in a week. Aiming for 4/5 finished by Dec. 19, then completion of alpha release quality over New Year's.

Sometime in Q1 2023, I expect to open up a PDF for wide review and the git repository for pull requests for corrections or contributions. With luck, the completed book, at least at strong beta reader level, will be on the arXiv April-ish.
The book will be Creative Commons, CC-BY-SA. People will be encouraged to contribute to our version, recompile or restructure for their own local use, or translate into other languages.
Keio and the Japanese government pay me to create knowledge, but it does no good if it's not shared, and I want it to go much farther than just the students of Keio and the other elite Japanese universities. Intel is also providing us with support for education. Let's work toward a high-quality, online, flexible, supported, inclusive, highly available quantum curriculum with global reach, including language.

Sunday, November 06, 2022

Spelunking CACM, Vol. 12 (1969)

The first half of the year seems to move along smoothly, with a lot of algorithm papers but not much that's exciting. Automated printed circuit routing with a stepping aperture, by Stanley E. Lass, is the first hardware CAD/routing paper I've seen in CACM, but it cites a couple of things from the early 1960s. I remember VLSI engineers at Caltech in the early 1980s worrying that chips designed automatically by computers were too complex to verify by hand. Of course, we now think of verification as being a task best suited for automation.

One letter to the editor in June, though, concerns an LA chapter meeting that bothered the author:

My disappointment stems from the subject matter and from the speakers. Three of the last meetings which I attended had to do with social conscience and community aspects. Two of these were on the same subject "Operation Bootstrap," which is an L.A. area self-help program for black people.

If only we had been more successful all the way back in 1969 in creating an inclusive atmosphere, and providing mentoring and help to those who needed it!

 A July paper talks about adding features to Dave Farber's SNOBOL. I haven't programmed in SNOBOL, so there are things I definitely don't grok, but there is a feature called "indirection", in which a variable contains the name of another variable, and via indirection the value of that variable can be read or written. This, of course, requires that the symbol table be available at run time, since the variable name can be constructed by the program! No indication in this paper as to what happens when the name isn't found in the symbol table (or, in modern terms, a dictionary?).
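I don't write SNOBOL, but the indirection idea maps naturally onto a run-time symbol table. Here is a minimal Python sketch of my reading of it (a dict standing in for the symbol table; the behavior for a missing name is my choice, not something from the paper):

```python
# Indirection through a run-time symbol table (a dict standing in for SNOBOL's).
symtab = {"x": 42, "ptr": "x"}   # ptr holds the *name* of another variable

def deref(name):
    """Read a variable through one level of indirection."""
    target = symtab[name]        # e.g. "ptr" -> "x"
    return symtab[target]        # then "x" -> 42

def assign_indirect(name, value):
    """Write through indirection; the target name could be built at run time."""
    target = symtab[name]
    symtab[target] = value

print(deref("ptr"))              # -> 42
assign_indirect("ptr", 99)
print(symtab["x"])               # -> 99

# And the open question from the paper: what happens when the name isn't found?
# Here, Python's dict simply raises KeyError.
try:
    deref("nonesuch")
except KeyError as e:
    print("undefined variable:", e)
```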

The September issue has a solid theory paper by David L. Parnas, On simulating networks of parallel processes in which simultaneous events may occur. It's dealing with discrete event simulation, and provides formal logic solutions for handling simultaneous events in the simulation of digital logic circuits. This work is roughly contemporaneous with the cooperating sequential processes work of Dijkstra and a little before the communicating sequential processes work of Hoare. Although this paper explicitly discusses the parallel nature of the work to be done and some machines that do the work in parallel, the paper really focuses on how to guarantee an acceptable sequential ordering.
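To make the simultaneity problem concrete: in simulating digital logic, two input changes can carry the same timestamp, and processing them one at a time can produce spurious intermediate outputs that depend on queue order. A toy sketch of the usual remedy (my construction, not Parnas's formalism) is to drain every event sharing a timestamp before evaluating any logic:

```python
import heapq

# Toy discrete event loop: apply ALL events sharing a timestamp before
# evaluating downstream logic, so simultaneous events are never seen
# in an arbitrary serial order.
events = []                                    # heap of (time, signal, value)
for ev in [(5, "a", 1), (5, "b", 0), (7, "b", 1)]:
    heapq.heappush(events, ev)

state = {"a": 0, "b": 1}
while events:
    t = events[0][0]
    batch = []
    while events and events[0][0] == t:        # collect the simultaneous batch
        batch.append(heapq.heappop(events))
    for _, sig, val in batch:                  # apply the batch atomically
        state[sig] = val
    # Evaluating a=1 alone at t=5 would glitch the AND output high;
    # batching with b=0 avoids that.
    print(f"t={t}: AND output = {state['a'] & state['b']}")
```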

And, in fact, in the following month's issue there is an article by Tony Hoare on "An Axiomatic Basis for Computer Programming". This particular paper takes an...optimistic?...view of programming:

Computer programming is an exact science in that all the properties of a program and all the consequences of executing it in any given environment can, in principle, be found out from the text of the program itself by means of purely deductive reasoning. 

And then later,

When the correctness of a program, its compiler, and the hardware of the computer have all been established with mathematical certainty, it will be possible to place great reliance on the results of the program, and predict their properties with a confidence limited only by the reliability of the electronics.

Ah, would that it were so...

 

Friday, August 26, 2022

Quantum Ethics, the Conversation Continues

In the magazine Foreign Policy, Vivek Wadhwa and Mauritz Kop have an article titled, "Why Quantum Computing is Even More Dangerous than Artificial Intelligence", published in August 2022. It's one in a series of recent articles by Vivek arguing that technology needs to be regulated to address ethical concerns. In the article, Vivek and Mauritz argue that the unregulated, market-driven, ad hoc use of technology such as AI has produced outcomes that we would all consider undesirable, chiefly through mechanisms such as AI adopting our own biases or the misuse of indiscriminately collected personal data. They tie quantum to these same issues, which I think is correct. In fact, I argued in a March blog posting that the ethical issues in quantum are, in the short run, very similar to classical computing, though I consider it too early to connect quantum to AI.

The authors have referenced some valuable online resources, such as the video "Quantum Ethics | A Call to Action", which includes excerpts from John Martinis, Nick Farina, Faye Wattleton, Ilyas Khan, Ilana Wisby, and Abe Asfaw, which I highly recommend.

I would not have picked the adjective "dangerous", but the conversation is critical. We should not be afraid of our technological future, but we should not use it without paying attention to the macro-level changes it brings. I believe quantum technology, the Second Quantum Revolution, will bring tremendous benefits in the long run -- if we use it wisely.

Monday, July 25, 2022

Spelunking CACM, vol. 11 (1968): Operating Systems Bonanza and Ethics, but not the ARPAnet!

If 1967 was a little dry, 1968 more than made up for it. The May issue is worth reading end-to-end! As it happens, that issue contains retypeset (and possibly updated?) versions of papers from the first Symposium on Operating Systems Principles (SOSP), which took place in 1967.  That first SOSP includes Larry Roberts's description of the ARPAnet, though that paper doesn't seem to have made it into CACM. (Some of the links below may be open access via the second link above, if the links below take you to the paywall.)

The year was just chock-full of operating systems papers by names such as Jack Dennis, Butler Lampson, and Edsger Dijkstra, and a comprehensive view of systems by E.L. Harder and another view of things by Jack Dennis, many but not all of them from that SOSP. This year also provided the first proposed ethics guidelines for ACM members, to the best of my recollection.

Donn B. Parker's article, "Rules of Ethics in Information Processing," presents the ACM's Guidelines for Professional Conduct in Information Processing. The article begins with three major points we still debate today, and a fourth (fraudulent programming schools) which seems to have been more transient. It begins:

There are a number of serious ethical problems in the arts and sciences of information processing. A few of these are: invasion of privacy by use of the computer; implications of copyrighting computer programs; and fraudulent programming trade schools. The "little" problems of personal ethics are of equal importance and are often closely related to the more imposing problems.

The original version filled less than a page, divided into rules for relations with the public, with employers and clients, and with other professionals. I can't help but notice the consistent use of the pronoun "he" to describe an ACM member, though honestly that would probably not have been much different even a decade ago.  This set of guidelines has now grown into ACM's Code of Ethics, which everyone in the field should read.

As a Caltech grad, I couldn't help noticing this paper on a means of recovering an on-disk structure after a system crash. Of course it's a systems paper, so I'm naturally interested, but otherwise maybe not so remarkable by today's standards. It's the first non-Knuth-sensei Caltech paper I remember spotting, but I haven't checked the affiliation of every single author. (Knuth-sensei's first paper with a Caltech affiliation might be 1961.) I don't recognize either of the authors, Peter C. Lockemann and W. Dale Knutsen. Caltech punches far above its weight in almost every scientific field, and many Caltech alumni have made major contributions to computing, but computing seems to be relatively weaker than fields like physics and chemistry.

The April issue has a breathtaking, simultaneously contemporary and prescient view of how the need for computers was broadening. "Perhaps the computer is too big for the space vehicle, or to put in your vest pocket," Harder posited. It serves as a harbinger of the papers to come in May.

Daley and Dennis described virtual memory in MULTICS, one of the three or four most influential systems in history. Oppenheimer and Weizer presented their resource management in mid-sized systems. Van Horn of GE described a set of formal criteria for reproducibility in debugging. Interestingly, he used the term "virtual computer", essentially describing a process as an instance of a virtual machine, language very similar to what I use today. And Dennis shows up again, with a position paper arguing in favor of timesharing machines over networks that is well worth a read.

But the star, the paper for the ages, is Dijkstra's "The Structure of the THE Multiprogramming System", which features the observation that "an interrupt system to fall in love with is certainly an inspiring feature." Part of what makes the paper spectacular is that Dijkstra describes their failures and mistakes. We could all learn from this approach! Although perhaps his claim that the delivered system would be perfect should be taken with a grain of salt? The paper doesn't have references(!), but this is a real-world implementation of Dijkstra's Cooperating Sequential Processes, the idea he was developing in the mid- to late-1960s that would become one of the foundations of theoretical work on computing systems. The clever language also makes the paper a pleasure to read. All systems people should read this paper!


Sunday, July 10, 2022

Spelunking CACM, vol. 10 (1967)

In 1967, not a lot caught my eye, but here are a few things that did.
Parallelizable arithmetic expressions shows how to take a linear expression and convert it into a tree for maximum parallelizability, assuming you have multiple ALUs, in a single pass. (A small sketch of the rebalancing idea appears after this list of papers.)
4-D hypercube includes some nice b&w wire frame drawings of a rotating hypercube done for a stereo animation. The author noted that he didn't feel that it gave him better insight into actual 4-D geometry.
Simulating a computer system is helpful in designing new systems. This simulation of an S/360 seems pretty sophisticated, including a careful job generator.
QED editor by Peter Deutsch and Butler Lampson, at Berkeley at the time, is a line editor for a teletype terminal. It mentions a few similar programs at other institutions, as well. This one includes search functionality, and the ability to store sequences of commands in a way we would later call editing macros.
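Back to the first of those: the core trick in the parallelizable-expressions paper (reconstructed from memory here, so take the details as illustrative) is to rebalance a left-leaning chain like ((((a+b)+c)+d)+e) into a tree of depth about log2(n), so that independent ALUs can work on subtrees simultaneously:

```python
# Rebalance an associative chain a+b+c+... into a balanced tree,
# cutting depth from n-1 sequential adds to ceil(log2(n)) parallel rounds.
def balance(operands):
    if len(operands) == 1:
        return operands[0]
    mid = len(operands) // 2
    return (balance(operands[:mid]), "+", balance(operands[mid:]))

def depth(tree):
    if isinstance(tree, str):
        return 0
    left, _, right = tree
    return 1 + max(depth(left), depth(right))

tree = balance(list("abcdefgh"))   # a+b+c+d+e+f+g+h
print(tree)
print("depth:", depth(tree))       # 3 rounds instead of 7 sequential adds
```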
This completes my spelunking of the first decade of CACM! Names (like those last two) who would go on to lead the 1970s and become legends have begun to appear.
I'm doing CACM since it seems central to the conversation, but there was also a note back in January about scope. CACM, they say, is mostly about systems, with less about numeric computation and theory. More theoretical work appears in ACM's Journal and Reports. I'm not even going to attempt to spelunk those, though; just CACM is enough!

Wednesday, June 29, 2022

Icons for Quantum Network Nodes

The set of icons we created for quantum network nodes, for use in network diagrams, simulators, etc., is available under a Creative Commons license. Use and share! Communication will be simpler if we all use the same icons. It will be easier for others to quickly grasp what we mean when we show them a diagram.

PNGs with either white or transparent background are available at https://github.com/sfc-aqua/quisp/tree/master/Network%20icons.

Tuesday, June 21, 2022

For My Nephew

My nephew just finished his frosh year in college, and wants to study...astrophysics?  Not entirely sure yet, but the college he attends offers only a basic physics degree anyway. He seems a little lost on what to study, so this posting is a personal one, for him, but feel free to read on if you're in the same situation.

Okay kiddo,

It sounds like you have done the basics of mechanics and geometric optics, but I'm not actually sure what else in physics. I know you haven't done Maxwell's equations and quantum mechanics, but what about special relativity and introductory thermodynamics (statistical physics)? And in math, I know you've done "calculus", by which I gather you mean functions, limits, and integration and differentiation of a single variable. Good start, but we've got a long way to go.

Despite our conversations, it seems like you don't have a clear picture yet of the curriculum, or even the broad structure of knowledge, in math and physics.  (And since I'm a computer engineer, all of this is from that perspective, of course.) So the first thing to do is to get oriented on what you really need.

Here are a few things to help you understand the structure of the body of knowledge, which should help you figure out what classes to take (or what to study on your own).

Mathematics

For a basic physics degree, you are going to need the following math:
  • Linear Algebra. Multiplying vectors and matrices, solving systems of equations via Gaussian elimination, linear and affine transformations, and eigenvalues and eigenvectors, at least. You'll also be exponentiating matrices ($e^A$, where $A$ is a matrix) in quantum mechanics, and that's easiest if you can diagonalize a matrix (see the short numpy sketch after this list). You can start with my linear algebra videos, but there are lots of resources on the web. I recommend the Georgia Tech Interactive Linear Algebra online, interactive textbook. That's pretty deep; you won't need it all right away, but it's there for a reason. There is also an entire 20-hour lecture course posted as two videos on YouTube, by Prof. Jim Hefferon; I haven't watched it, so I don't know how good it is, but the comments and likes are very positive. Khan Academy has an LA course. 3 Blue 1 Brown is one of the best things on the web, and they have Essence of Linear Algebra available (full course here). In short, there are many excellent, free resources available.
  • Probability, both discrete and continuous. Probability distributions, conditional probability, Bayes' Theorem, moments of distributions. For continuous, you'll need integration, so continuous comes after basic calculus.
  • Statistics. I think most physics majors get away with just what they learn in a probability class unless they specialize in statistical mechanics and thermodynamics.
  • Ordinary differential equations (ODEs). You said you've seen that $f'(x) = f(x)$ is solved only by functions of the form $f(x) = Ce^x$, so you've seen the start of a very deep field.
  • Partial differential equations (PDEs). Next step: derivatives in multiple dimensions. These are equations involving symbols like $\frac{\partial x}{\partial t}$. I first hit this when doing Maxwell's equations; I think it's pretty common for that to happen, but it means you're dealing with both a new math tool and important ideas in physics at the same time, so studying basics of PDEs first is a good idea.
  • Transforms and signal processing are a big deal; you might run into Fourier, Laplace, Z and other transforms. (This is different from the linear and affine transforms above.) Often, these are tools for solving ODEs or PDEs, and might show up in an Applied Math class in your college.
  • Later, you might get into more specialized topics like number theory, group theory, and graph theory. Group theory, for example, shows up in particle physics. The basics of graph theory you can learn very early, actually, and they are critical in computer science but maybe not as much in physics.
I'm sure you can find as much stuff on the later topics as I found for LA.
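On the matrix-exponential point in the linear algebra item above: for a diagonalizable matrix $A = PDP^{-1}$, we have $e^A = P e^D P^{-1}$, and $e^D$ just exponentiates the diagonal entries. A tiny numpy sketch (scipy's expm is used only as a cross-check):

```python
import numpy as np
from scipy.linalg import expm     # general-purpose routine, for checking

# e^A via diagonalization: if A = P D P^{-1}, then e^A = P e^D P^{-1}.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])        # the Pauli X matrix, a quantum favorite
evals, P = np.linalg.eig(A)
eA = P @ np.diag(np.exp(evals)) @ np.linalg.inv(P)

print(np.allclose(eA, expm(A)))   # True: the two methods agree
```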

Physics

Surprisingly, I'm a little less comfortable talking about what basic physics you should study. Let's talk first about optics, since it's one of my favorite parts of physics, demonstrates broadly useful concepts, and happens to be what we are working on for our quantum communications research.

From among the courses we have created already, if you want the basic physics while minimizing the quantum communications portions, these steps would be a very good follow-on to your work on geometric optics. They are a bit idiosyncratic relative to an ordinary course on optics, naturally focusing on what we need for quantum communications, but they will still carry you a long way.
  • OQC, Lesson 5: Coherent Light and Single Photons, leading up to a quick quantitative intro to lasers.
  • OQC, Lesson 6.1-6.2: Interference, group and phase velocity -- a quick introduction to constructive and destructive interference and the notions of group and phase velocity, a distinction that is crucial to understand.
  • OQC, Lesson 7: Waveguides discusses the most important means of guiding light, of which the most famous type is, of course, optical fibers. The total up to here is about two hours' worth of video; you can do this in the time it would take you to watch one soccer match.
  • And then most of our entire module From Classical to Quantum Light, which is about 10 hours of material covering wave equations, Fourier analysis, Maxwell's equations governing how electromagnetic waves work both in a vacuum and in materials, and then into more on single photons and the like, including how detectors work. You might want to taper off when you get to the single photon stuff and defer that for after you have had basic quantum mechanics, so let's say you should do the first ten lessons of that, 45 minutes each, so about 7.5 hours.
  • Among the remaining important topics to learn about are how holograms work, more on polarization, and a lot about antenna design. A good optics course will cover these things.
From the above video on the map of physics, you should have some idea of the basic list of topics:
  • mechanics
  • waves
  • optics
  • special relativity
  • electricity & magnetism
  • introductory quantum mechanics
  • thermodynamics
That much you should take as an engineering or physics student of any sort. If you major in physics, you'll add advanced QM, particle physics, general relativity and some other topics to that list.

Given your interest in soccer, you might consider studying biomechanics, which is a bit of a specialized area and interdisciplinary, so a little biology will help, too. You should check out Professor Ohgi's Sports Dynamics and Informatics Lab as a possible destination for grad school. He works with people up to the level of Olympic athletes to understand the forces and dynamics of athletic performance. Studying some computer science before joining his lab would also be helpful.

Enough for now. Go watch those introductory things to get oriented, then study some linear algebra and probability while you watch our online courses on optics and quantum computing and communications.

Sunday, June 12, 2022

Spelunking CACM, vol. 9 (1966): Eliza (and now 2022's LaMDA)

 Eliza has arrived. Eliza, who is a few months older than I am, is inarguably one of the seminal events in computing history. It's just about the only program created in the 1960s that most of us can still spontaneously name. It helped to spur my own interest in computing.

My first computer was a Radio Shack TRS-80 Model I. I also got a book with programs in it, which meant copying the programs in by hand. (Good training for catching syntax errors!)  One of those programs was called Eliza, though I suspect the BASIC version in my book was pretty different, and probably much simpler, than the 1966 version created by Joseph Weizenbaum in SLIP, a language adding list processing features to FORTRAN.

Eliza works by parsing input text into tokens and finding an important phrase (where phrases are delimited by commas or periods) based on the key word in the phrase, using a list of important words that was created by hand. Then it replaces the primary verb or subject in the phrase with one of a set of stock phrases. If you type in "I love my dog," Eliza might respond, "Tell me why you love your dog." That doesn't mean it has any idea at all what a dog is; it just found a simple pattern and followed it.
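The whole mechanism fits in a few lines. Here is a deliberately crude Python sketch in the same spirit (my own toy patterns, far simpler than Weizenbaum's script language):

```python
import re

# A tiny Eliza-style responder: match a keyword pattern, swap pronouns,
# and drop the result into a stock phrase. No understanding anywhere.
PRONOUNS = {"my": "your", "i": "you", "am": "are"}
RULES = [
    (re.compile(r"\bi love (.*)", re.I), "Tell me why you love {}."),
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {}?"),
]

def reflect(phrase):
    """Swap first-person words for second-person ones."""
    return " ".join(PRONOUNS.get(w.lower(), w) for w in phrase.split())

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text.rstrip(".!"))
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."   # the stock fallback, another classic Eliza move

print(respond("I love my dog."))   # -> Tell me why you love your dog.
print(respond("I am tired."))      # -> How long have you been tired?
```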

My TRS-80 was at home, but the high school also had one, and friends of mine also enjoyed playing with Eliza. And what do high school boys want to talk about? Sex and scatological things and bad language, of course. You could make Eliza say some pretty hilarious things that way, by the standards of high school boys.

And now, 56 years later, we have a Google engineer and AI ethicist arguing that an intellectual descendant of Eliza has become sentient.

Chat bots have evolved tremendously since then, and are generally connected to a backend system of some sort, so that they can provide airline assistance or what have you. Often, they are connected to a neural net-based AI both for parsing the language and creating the responses. Generally speaking, no one believes they are actually sentient, but the responses will sometimes appear so insightful that your hair stands up on the back of your neck.

My skeptic's nature and my own very limited experience with chat bots cause me to lean toward siding with the Googlers who said there is no evidence that it's sentient, and plenty of evidence against it. But we are now clearly entering the realm where it will get harder and harder to tell, and the stakes of the arguments will continue to grow. Research into AI ethics grows more crucial every day.

Monday, June 06, 2022

Spelunking CACM, vol. 8 (1965): debugging, peephole optimization, and an undergraduate curriculum


Overall, 1965 seemed like an "ordinary" year at CACM. I didn't see any papers that really caught my eye as some sort of extraordinary event. (Stay tuned for 1966 for that.) But there are a few things of interest, of course.

By 1965, the idea of wide-area networks was definitely in the air. Paul Baran was publishing RAND reports on the key ideas of packet switching (although I'm not sure when those became publicly available), and the ARPANET project would kick off in 1966. But connections between computers were still much more an idea than a reality, and so I.R. Neilsen had a CACM paper on a 5kHz modem for "callup" connections for exchanging medical data.

Evans and Darley presented DEBUG, which worked with other tools to allow smooth insertion of new lines of assembly while debugging a program, preventing what we now call "spaghetti code". DDT already existed for the PDP-1 at this point. I find this early history of debugging intriguing, since I have a Ph.D. student working on debugging for quantum programs now.

Maybe the most interesting trend is a pair of papers (is two enough for a trend?) on optimizing compilers. In June, Nievergelt presented "On the Automatic Simplification of Computer Programs", which proposed a set of backend techniques for individual optimizations to a stream of instructions. The paper enumerates limitations on the use of the proposed techniques, which interestingly included the constraint that programs not be self-modifying, evidence that that idea was already in use.

Then in July, McKeeman gave us "Peephole Optimization". He implies that the technique is known and "simple" but "often neglected", and also that he is coining the term "peephole optimization" in this paper. Again implemented for the PDP-1, the explanation is clear but limited, being just over a page.
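The idea is easy to show in toy form: slide a small window over the instruction stream and rewrite wasteful local patterns. A sketch with invented instructions (not McKeeman's actual examples):

```python
# Toy peephole optimizer: scan a 2-instruction window over the stream
# and delete locally redundant patterns.
def peephole(code):
    out = []
    for instr in code:
        # STORE x ; LOAD x -> the LOAD is redundant; the value is
        # still in the accumulator.
        if out and instr[0] == "LOAD" and out[-1] == ("STORE", instr[1]):
            continue
        out.append(instr)
    return out

prog = [("LOAD", "a"), ("STORE", "t"), ("LOAD", "t"), ("ADD", "b"), ("STORE", "c")]
print(peephole(prog))
# -> [('LOAD', 'a'), ('STORE', 't'), ('ADD', 'b'), ('STORE', 'c')]
```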

But maybe the most interesting thing as an artifact is the report from the ACM Curriculum Committee on Computer Science titled "An Undergraduate Program in Computer Science-- Preliminary Recommendations". Some earlier papers on education appeared in prior years, but I think this is the first actual organized recommendation from ACM itself. Here was the status at the time:

At the time of this writing, in the United States, doctorates can be sought at more than 15 universities, master's degrees at more than 30, baccalaureates in at least 17 colleges, and a sizeable number of other colleges and universities are presently planning or considering departments of computer science.

Impressive. I didn't know there were that many before I was born.

At the top of this post is the key table. Each of the sixteen courses there has a several-paragraph description that is well worth reading. I'm particularly intrigued that Graph Theory was already recognized as an important element in the curriculum. I am working to introduce a Graph Theory class for undergrads at our campus, for many purposes including economics, social sciences, etc., not just CS.

Each course in that table is assumed to be a U.S. 3-semester-hour course consisting of 3 hours/week x 15 weeks = 45 hours of in-class time and probably 2x that in lab or homework time for a total of, oh, 130-140 hours of work and learning. If you take just the nine "Required" plus "Highly Recommended Electives" courses, you're well over a thousand hours of time invested.  Add the seven "Other Electives", and a university that provided them all (noted in the text to be a difficult task), and you have about 2,100 hours of time. And that's not even counting the "Supporting" courses, let alone any arts and social sciences classes required. In short, this is a pretty thorough and ambitious major!
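For the record, the back-of-envelope arithmetic behind those numbers:

```python
# Course-load arithmetic from the 1965 recommendations.
in_class = 3 * 15            # 3 hours/week x 15 weeks = 45 hours in class
per_course = in_class * 3    # plus ~2x that outside class = 135 hours
print(per_course)            # 135 -> the "130-140 hours" above
print(9 * per_course)        # 1215 -> "well over a thousand hours"
print(16 * per_course)       # 2160 -> "about 2,100 hours"
```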

Saturday, June 04, 2022

From Classical to Quantum Optics


 

It's published!

Following last year's "Overview of Quantum Communications", the module "From Classical to Quantum Light" is now up on YouTube. If you are interested in quantum communications, the Quantum Internet, and related topics, please take a look. There are 69 videos, about ten hours in all, covering the following topics:


These and related topics are covered. This academic year (that is, by next March) we will release the "Quantum Internet" module, so we look forward to your continued support next year as well.

Tuesday, May 31, 2022

Peak Digital

 Random topic for your end-of-year brain break: I predict that we are at #PeakDigital, within a decade or so. Beyond that, we will have the #SecondQuantumRevolution and the #AnalogRenaissance. The idea of (classical) binary-only data processing will come to seem...quaint.

We will learn to create, process, store, transmit and correct analog data in myriad forms. Same with digital and continuous-variable quantum. All at fidelity and reliability that far exceed vinyl records, film, wax cylinders, differential analyzers and tide calculation machines, allowing essentially infinite rounds of processing and transmission with no detectable degradation.

It will be far faster and more efficient than digital computation, and after some teething pains will come to be seen as flexible and reliable.

People will talk of a quartet of information paradigms -- classical and quantum; digital/discrete and analog/continuous.

Information theory and computational complexity will merge completely.

And the kicker? P ?= NP turns out to be irrelevant, as complexity turns out to be a continuum instead of discrete classes, a topography of hills and valleys, and whether you roll downhill toward polynomial or exponential is a microscopic difference on a ridge.

Now, do you think I actually believe this will work out exactly this way? I'll hedge my bets and keep my cards face down. But this might provoke some conversation. What do *YOU* think information will look like in, say, 2075?

https://twitter.com/rdviii/status/1476071997239336961


Saturday, May 28, 2022

From Classical to Quantum Optics Course on YouTube


The Japanese government is funding a group of us, led by Prof. Kae Nemoto (now at Okinawa Institute of Science and Technology), to create an undergraduate quantum computing (engineering?) curriculum. We are calling it Quantum Academy. As of this writing, getting the full experience requires registering for a class at one of the participating institutions, but we are making our share of the work available as Creative Commons, CC-BY-SA. Each module is intended to be 1 unit that will satisfy MEXT, about ten to twelve hours of face time with faculty, plus reading & homework. (Most Japanese university courses are two units, a total of 40-50 hours of work, about half the size of a U.S. course.)

Our contribution this year is the module From Classical to Quantum Optics. Actually, technically, that link will take you to our YouTube channel's playlists, which includes not only that module but also last year's Overview of Quantum Communications. Moreover, there are both English and 日本語 versions available! Michal Hajdušek created the materials and the English videos, while I did the Japanese videos.

My apologies, but we are still working on subtitles, especially for Japanese. If anyone would like to do subtitles in another language, please let us know, we are very interested!

Eventually, edited transcripts will be available in book form, as well. Patience, please!

The module begins with the classical wave equation, Fourier analysis, and Maxwell's equations, then gets into single photons. It follows on nicely from Overview of Quantum Communications, where we talked mostly about qubits as abstract things, but also covered the critical technology of waveguides such as optical fibers, without fully justifying why light can be guided that way. Here, Maxwell's equations and the wave equation shore up that foundation. Here's the outline:

From Classical to Quantum Light

Prerequisites: Overview of Quantum Communications, Linear Algebra, Probability

Prerequisites/co-requisites: Differential Equations, Introductory Partial Differential Equations, Introductory Quantum Mechanics, Classical Optics

Recommended next courses: Quantum Internet (coming in 2023)


An Early, Relatively Complete Quantum Computer Design

 Based on a Twitter thread by yours truly, about research with Thaddeus Ladd and Austin Fowler back in 2009.

In the context of an ongoing #QuantumArchitecture #QuantumComputing project, today we reviewed some of my old work. I believe that in 2009 this was the most complete architectural study in the world.

We started with a technology (optically controlled quantum dots in silicon nanophotonics), an error correction scheme (the then-new surface code) and a workload/goal (factoring a 2048-bit number).

We considered everything.

Optimized implementation of Peter Shor's algorithm (at least the arithmetic, the expensive part). (More recent work by Archimedes Pavlidis and by Craig Gidney goes beyond where we were in 2009.)

How many logical qubits do we need? 6n = 12K.

How many logical Toffoli gates? A LOT.

So how low a residual logical gate error can we allow?

Given that, and a proposed physical gate error rate, how much distillation do we need? How much should we allow for "wiring", channels to move the logical qubits around?

We ended up with 65% of the space on the surface code lattice for distillation, 25% for wiring, and only 10% for the actual data.

From here, we estimated the code distance needed. High, but not outrageous, in our opinion. (More on that below.)

With micron-sized dots and waveguides, we had already grokked that a multi-chip system was necessary, so we knew we were looking at at least a two-level system for the surface code lattice, with some stabilizers measured fast and others slow.

We worked through various designs, and wound up with one with several different types of connections between neighbors. See the labels on the right ("W connection", "C connection", etc.) in this figure from the paper. This is the system design.



Turns out each type has a different connection rate and a different fidelity, so we need purification on the Bell pairs created between ancillary dots, before we can use them for the CNOT gates for stabilizer measurements. This could mean that portions of the lattice run faster and portions run slower.

Oh, and an advance in this architecture that I think is under-appreciated: the microarchitecture is designed to work around nonfunctional dots and maintain a complete lattice. I have slides on that; sadly they didn't make it into the paper, but see Sec. 2.1.

Put it all together, and it's an enormous system. How big?

Six billion qubits.

So if you have heard somewhere that it takes billions of qubits to factor a large number, this might be where it started. But that was always a number with a lot of conditions on it. You really can't just toss out a number, you have to understand the system top to bottom.

Your own system will certainly vary.

The value in this paper is in the techniques used to design and analyze the system.
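The way those budget numbers compose is worth seeing explicitly. Here is a back-of-envelope sketch of the accounting; note that the per-logical-qubit footprint below is a placeholder I chose to reproduce the headline number, not a figure from the paper -- the point is the structure of the calculation:

```python
# Back-of-envelope composition of the resource estimate.
n = 2048                       # RSA key size to factor
logical_qubits = 6 * n         # 12,288 logical qubits for the arithmetic
data_fraction = 0.10           # 65% distillation + 25% wiring + 10% data

# HYPOTHETICAL footprint: physical qubits per logical data qubit, chosen
# here only to illustrate how ~6 billion arises; see the paper for details.
physical_per_logical = 48_000

total = logical_qubits * physical_per_logical / data_fraction
print(f"{total:.2g} physical qubits")   # ~ 5.9e+09: about six billion
```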

You have to lay out the system in all its tedious detail, like in this table that summarizes the design.



That lattice constant in the table, btw, is one edge of a surface code hole, so the actual code distance is 4x that, or d = 56. We were aiming for just a factor of three below the surface code threshold, and for a huge computation. These days, most people will tell you hardware needs to be 10x better than threshold for QEC to be feasible. If not, you end up with numbers like these.

You can pretty much guess who did what. Austin Fowler (then in Waterloo, now at Google) did the QEC analysis. Thaddeus Ladd (then at Stanford with Yoshi Yamamoto, now at HRL) did the quantum dot analysis and simulation. I brought the arithmetic expertise and overall system view. We all worked super hard on the chip layout and how all the pieces fit together to make a system.

These things are beasts, with so many facets of the design problem, that we need all hands to make them work!


Quoted/Cited in the Press

 A couple of places that the press has quoted me or cited my research recently, on Quantum Internet stuff. (This blog posting is more for my own reference than anything else.  Feel free to ignore it.)


  • Physics World on the Hanson group's new paper, showing two-hop teleportation after entanglement swapping in NV diamond.
  • Nikkei talking about Quantum Internet and about our work,  but not directly quoting me.  (In Japanese, and behind a paywall.)

Sunday, March 06, 2022

Raising Ethical #QuantumNative Engineers (the 2022 Long List)

 

I am a child of Apollo; one of my earliest memories is of the launch of Apollo 15, which my family drove to Florida to witness. I was five years old. I came to love science, especially but not only astronomy and space. Like many others, I started down the path to a STEM career simply because I was fascinated by the science, by the beauty of the ideas. Many of us, including me, start out as techno-utopians. We naively assume that the things that we discover or build will be useful or make the world better in some way. But as an engineer, I have come to recognize that technology does not exist in a vacuum. The things we are building need to be placed in their proper context in society.

And as an educator, it is my responsibility to see that our students at least begin the process of understanding their role through an ethical lens, whether they become scientists, engineers, business people, policy makers, or activists, and in all cases in their roles as citizens. The goal is not a full command of ethical theory, but applied ethics: understanding technology's place in society.

We all hear a lot these days about technology-driven anthropogenic global warming (including, recently, the astounding power consumption of cryptocurrency mining in proof-of-work systems), autonomous weapon systems, machine learning systems learning (from human behavior and data) to be racist, technological disruption of labor markets, and online spying, bullying, and scams. While it might seem too early to be talking about the ethical implications of something as embryonic as quantum computing and communications, in fact the first major burst of funding for the field was driven by the discovery of Shor's factoring algorithm. Its implications for public key cryptography have immediate consequences for national security, and so the spooks and others scrambled to be involved. Although most quantum researchers trace the birth of the ideas to the theoretical, abstract work of Benioff, Feynman, Deutsch, Bennett and Brassard in the 1980s (or to Wiesner in the 70s, Bell in the 60s, or even Einstein-Podolsky-Rosen in the 30s), it's fair to say that as a field in its own right quantum engineering was born with ethical concerns. Moreover, the students we are educating today will lead the field for the next four or five decades. Now is the time to begin the conversation.

To that end, recently, I posted this short message on Facebook:

Okay, people, I'm thinking curricular thoughts: how do we raise ethical #QuantumNative engineers? What are your book-level reading recommendations? I think I want four books, one each on the following topics:

1. How technology makes the world a better place.
2. The limitations of technological attempts to make the world a better place.
3. The consequences of technological failure.
4. How technology can be employed in antisocial/anti-democratic ways.
Recommendations? (I am also willing to be persuaded that I don't yet have the right set of topics.)

Asking these questions actually has three purposes. The first is to augment a blog posting of mine on a quantum native engineer's bookshelf, the second is thinking about revising our own undergraduate curriculum here for all our students, and the third is my involvement in a Japanese government funded effort to create an undergraduate quantum engineering curriculum.

I received some nice responses from my friends, and even more from the friends of Mike Nelson. Rather than replay the conversation, in this posting I am going to attempt to synthesize the collective knowledge. With apologies to those who contributed, I am certain that almost everyone will disagree with some element of this synthesis, but I hope it represents roughly some sort of centroid among those who spend a lot of their waking hours worrying about such matters. I will list the names of those who were kind enough to contribute, and did not ask to be anonymous, at the end.

Most of the contributors took the idea quite seriously, not just as an adjunct to an engineering curriculum but as an important topic in its own right, so some of the suggestions drill deeper than is achievable within the confines of a four-year engineering program, but could be incorporated into a master's program.

It's worth noting that many of the books listed here are only roughly categorized. Many of them take a broader look at the topic and so cover more than one of these questions, or focus on one technology or historical period and look at it from multiple points of view. Indeed, the more I look at the set of books the more it seems my original questions are naïve or oversimplified in categorization. So, rather than attempting to read these books as if they answer a single question, perhaps the better approach is to read any of these books with that set of questions in mind.
Indeed, as I am editing this, I am coming to reconsider the structure. This is also the "long list", with nothing yet culled; eventually I will have to make a (Japanese) semester-sized syllabus as well as a shorter list for my bookshelf.

Somewhat unexpectedly, I got several fiction recommendations. After some thought, I kept them with the appropriate categories. Though as discussed below they run a larger risk of misinterpretation than nonfiction, they can make a deeper, emotional connection and explore the impact of the what-ifs on people in a different way. The fiction books are marked.

Where I have been able to find them, I have used links to publishers' and authors' sites, rather than to Amazon. After all, even as much as I use Amazon, I think we all agree it is a prime example of too much concentration of market power.

-1. Quantum

Before we start the list, I suppose it's worth a comment or two about what is unique about quantum computing and communication, with respect to other engineering disciplines. The short answer is, nothing.
The long answer is twofold:

  1. As an interdisciplinary field, comprising physics, mathematics, computer science, and computer engineering, as well as application fields such as chemistry, AI, operations research, and cryptography, the people involved come from very different viewpoints and backgrounds, and use very different vocabularies. While WWII was known as "the physicists' war", most present-day researchers spend little time thinking about the implications of their work. Finding common ground is crucial.
  2. Quantum is the biggest change in our computational and communication capabilities since the transition from analog to digital. I expect this Second Quantum Revolution to run for fully half a century yet; there is still some time before the actual capabilities catch up to the vision. Nevertheless, the changes will be so profound and far-reaching that it is hard to foresee all of the problems that will develop.

One could argue that the problems quantum will raise are less fraught than the work already underway on brain-machine interfaces and even current machine learning. Perhaps. In educational terms, I do think the foundations are the same, hence quantum programs can cooperate seamlessly with other engineering programs, so perhaps rather than creating my own list here my time would be better spent finding an existing program I admire enough to emulate. But I'll bet in a couple of decades there will be an independent sub-field of quantum ethics.
To be clear, as of this writing (March 2022), I see the challenges as primarily educational rather than research.
There is a small community of people who are already publishing on the topic, but the things I have seen to date demonstrate only a rudimentary understanding of the technology, and so have only a little to add to the conversation so far. Thus, besides educating our own engineers, it is incumbent on us to work with ethics researchers so that the learning is mutual.
An approach taken by at least one researcher is to tie quantum to AI, and hence to the profound issues that ethicists are wrestling with there. The researcher can hardly be blamed for making the connection, since quite a number of quantum researchers are themselves pushing it. Personally, I am skeptical that quantum will transform AI, though if the critical problem of operating on large datasets can be solved then quantum may accelerate performance. I do believe that a well-rounded #QuantumNative engineer will have a grasp of the important AI ethical issues.

0. Foundations

Ahmed Amer emphasized that students need to begin not just with my questions above but with a framework for thinking about ethical issues, so let's begin there. Call it question zero. A couple of these books are specific to AI and hence perhaps should be categorized a little lower in this list, but most are more general.

  1. Thomas Kuhn, The Structure of Scientific Revolutions: Reportedly, Al Gore's favorite book. I've read it a couple of times, and find it not an easy read but an important one, although I have heard that modern thinking has moved past the particular structure that Kuhn proposed.
  2. Herb Simon, The Sciences of the Artificial: We are, after all, engineers. It's important to understand how the things we build fit into the taxonomy of the universe.
  3. Abraham Flexner, The Usefulness of Useless Knowledge: Another book on why we do science, a philosophy rather than ethics book, but about the role of science in society. Get the edition with modern commentary.
  4. Shannon Vallor, Technology & the Virtues: Ahmed's first recommendation, this should provide the basic framework and vocabulary for thinking about ethics. (I have not yet read this book, but it is very high on my list.)
  5. Peter L. Berger and Thomas Luckmann, The Social Construction of Reality: Cited 70,000 times, which would catch the eye of any metric-watcher. You can find (possibly copyright-violating) PDFs if you look. (I have not yet read this book.)
  6. Karl Mannheim, Ideology and Utopia: Written in 1936, perhaps the oldest book on this list. (I have not yet read this book.)
  7. Lin, Abney, Jenkins, eds., Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence: (I have not yet read this book, but it looks like a good overview, so I listed it here.)
  8. Joseph Weizenbaum, Computer Power and Human Reason: An early (1976) nonfiction book on what we would now call AI ethics by an important, early AI researcher. I'm not certain if it is still in print, but I think you can find (possibly copyright violating) PDFs on the web if you look. (I have not yet read this book.)
  9. Robert E. McGinn, Science, Technology, and Society: I haven't read this, and don't really know much about it, but apparently is/has been used in college classes. It is apparently out of print (1990).
  10. Naomi Oreskes, Merchants of Doubt: More about how scientists can go wrong than about how others misapply our work, so I put this in foundations. (I have not yet read this book.)
  11. Marshall McLuhan and Quentin Fiore, The Medium is the Massage: A super-famous notion, but I admit I have not read the book.
  12. Woodrow Hartzog, Privacy's Blueprint: I haven't read this book, but I have heard the author speak. This book can serve as our look into the relationship between engineering and law. (You might want to substitute his even newer book coming out right now, but I gather this one is a little more focused on that legal connection.)
  13. Brian Green, Space Ethics: I am told that the first part of this presents a useful, broad framework.

1. Benefits

On question one, we should see some examples so that we know what success looks like.
  1. S. Pinker, Enlightenment Now: (I have not yet read this book.)
  2. D.A. Henderson, Smallpox: The Death of a Disease: (I have not yet read this book.)
  3. Kevin Ashton, How to Fly a Horse: Apparently not strictly related to ethics, but more about how innovation happens. (I have not yet read this book.)
  4. Eric D. Beinhocker, The Origin of Wealth: Evolution, Complexity, and the Radical Remaking of Economics: (I have not yet read this book.)
  5. Tom Standage, The Victorian Internet: This comes strongly recommended. (I have not yet read this book, but Jun Murai is fond of citing a late 19th century pamphlet(?) by Yukichi Fukuzawa that includes an image of people connected via telegraph.)
  6. Lee McKnight and Audrey N. Selian, The Role of Technology in the History of Well-Being: A book chapter rather than a book, but at 40 pages has enough room to establish depth. (I have not yet read this.)
  7. Paul Kriwaczek, Babylon: Mesopotamia and the Birth of Civilization:  (I have not yet read this book.)
  8. (fiction) David Brin, EARTH: I've read this and consider it an extended thought experiment. I'm not sure I would categorize it here, but that was the recommendation.

2. Limitations

I think it's also important to understand that factors other than sheer technical success influence how large an effect a new technology can have on society. Engineers must work with others who are working to make society better, and must approach the task with profound humility. In short, things never work out the way you expect, and engineers shouldn't even try to go it alone.
  1. Justin Reich, Failure to Disrupt: Perhaps the best cautionary tale I have read on computers and society, showing that technology alone is not enough. This is something of a specific case study rather than a more complete textbook. It might feel off topic, but I think everyone should be aware of the limitations of our ability as engineers alone to remake the world. Fundamentally, it shows how important the social structures that support the deployment of technology are.
  2. Daniel Sarewitz, Frontiers of Illusion: Science, Technology, and the Politics of Progress: (I have not yet read this book.)
  3. Sherry Turkle, Alone Together: I have not yet read this book and am uncertain about its categorization, but this seems like a good start.
  4. (fiction) Karl Schroeder, Stealing Worlds: (I have not yet read this book.)

3. Failures

In this context, I mean technical failure: the systems we design and build do not always operate the way we intend. These failures have consequences in the real world, sometimes including the death of humans or damage to the environment. It is critical for an engineer to understand these consequences and some of the common causes and to take to heart the importance of, and equally the impossibility of, technical perfection.
Beyond outright crashes (of programs or vehicles), this category includes failure to operate properly in a broad set of legitimate circumstances due to insufficiently broad or rigorous requirements and/or testing, such as the now-famous failure of some automated sinks that use infrared sensors to correctly detect the hands of dark-skinned people.
  1. Peter Neumann, Computer-Related Risks: To the best of my knowledge, there's still nothing else like it. Peter is still the chair of the ACM RISKS Forum. If you don't have a healthy respect for what can go wrong with, and as a result of, computing technology, then you are Icarus incarnate. (The book is 1995, and availability is apparently limited; is there a more modern equivalent?)
  2. Henry Petroski, To Engineer is Human: another book from the 1990s on failure in engineering and design. (I have not yet read this book.)
  3. Charles Perrow, Normal Accidents: This book received multiple recommendations. (I have not yet read this book.)
  4. (fiction) Mary Shelley, Frankenstein: The recommender suggested this meets all four questions, but I think I would place it here.

4. Misapplication

As engineers, we carry in our heads an image of how we expect our products to be used. That image may unintentionally exclude some groups, such as women or people of color or differently abled people. Thus, if we become aware of the issues, we can do better. Worse, some may deliberately use digital (and ultimately quantum) tools to effect repression, destabilize democracy or even prosecute a war, goals which hopefully few quantum engineers would actively endorse.
This category is the most vague and hence has the most entries. It's certainly not necessary to read all of these, but it is a good idea to get a broad view.
  1. Edwin Black, IBM and the Holocaust: I read some of the articles about this when the book came out, but have not read the book itself. Seems like Exhibit #1 for this category.
  2. Steven Feldstein, The Rise of Digital Repression: perhaps the clearest, most direct book on this list about how technology enables political authoritarianism, well written with a strong focus on how existing authoritarian organizational structure influences choices in the digital arena, as well.
  3. Your Computer is on Fire: I found this book uneven, and I don't think it provides a well-rounded, comprehensive survey, but it's still probably the best book-length thing I've read on how technology influences and is influenced by human biases and problems. If you want to replace or augment this with something else on the ethics of computing technology, including race and AI, the surveillance society, etc., I am okay with that. Reading the news every day and thinking about its implications is necessary but not sufficient, IMO.
  4. Sasha Costanza-Chock, Design Justice: Like the above, a book that very much comes from the point of view that "artifacts have politics". (I have not yet read this complete book, only excerpts, but the full text is open access.)
  5. James Williams, Stand Out of our Light: (I have not yet read this complete book, only excerpts, but the full text is open access.)
  6. Cathy O'Neil, Weapons of Math Destruction: One of the more famous books on this list, but I haven't yet read it.
  7. Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence: (I have not yet read this book.)
  8. Tim Wu, The Master Switch: The Rise and Fall of Information Empires: (I have not yet read this book.)
  9. Barbara Ehrenreich and Deirdre English, Witches, Midwives, and Nurses: A History of Women Healers: I have not yet read this book, and of all the suggestions made this is the only one that seems likely to result in significant disagreement as it is listed under "New Age" and apparently rather substantially disses modern medical science; Amazon reviews are mostly positive but a few set up significant alarm bells for me. This is apparently a reprint of a 1970s feminist tract with commentary from 2010.
  10. Ruha Benjamin, Race After Technology: (I have not yet read this book.)
  11. Caroline Criado Perez, Invisible Women: (I have not yet read this book.)
  12. Brad Smith, Carol Ann Browne, foreword by Bill Gates, Tools & Weapons: the Promise & the Peril of the Digital Age: (I have not yet read this book.)
  13. Jonathan Zittrain, The Future of the Internet and How to Stop It: I have not yet read this book, but it is well known. Published in 2009, it is worth reading while keeping an eye on how the Internet actually has evolved in the intervening decade-plus. Perhaps it can help head off the worst of the ideas in the current "web3" brouhaha.
  14. David Brin, The Transparent Society: One of my own favorite thought experiments. Basic thesis: privacy is dead, get over it. Governments and corporations have all the incentive in the world to collect data on us; as individuals, our only recourse is to, in turn, have equal transparency into those organizations and what they are doing with the data.
  15. Neil Postman, Technopoly: (I have not yet read this book.)
  16. Chris Ategeka, The Unintended Consequences of Technology: (I have not yet read this book.)
  17. Paul A. Offit, Pandora's Lab:  (I have not yet read this book.)
  18. Siegfried Kracauer, From Caligari to Hitler: An intriguing recommendation, with the suggestion that it shows how cinema was used by the Third Reich to manipulate public opinion (although the blurb says it is more about the Weimar Republic).  (I have not yet read this book.)
  19. (fiction) George Orwell, 1984: this was recommended by a friend, and of course it's one of the most important novels of the 20th century, but I don't think of it so much as being about the impact of technology on society. Moreover, one commenter pointed out that not everyone takes the same message away from reading the novel; fiction even more than nonfiction appears very different depending on the mindset you bring to reading it. I think it's a book that everyone should read, but I am a little reluctant to include it in a curriculum like this.
  20. (fiction) Aldous Huxley, Brave New World: Of course someone suggested this, as well.
  21. (fiction) John Brunner, Shockwave Rider: (I have not yet read this book.)
  22. (fiction) John Brunner, Stand on Zanzibar: (I have not yet read this book.)


Closing Thoughts

As I write this, Russia has just invaded Ukraine, and simultaneously has apparently attacked a number of Ukraine government websites. The COVID-19 pandemic continues; advanced research made the vaccines possible, and the Internet has been a source of indispensable research communication and public outreach, as well as appalling misinformation. Examples of the importance of ethics in daily life and our profession flow through the news daily.
Ethics in AI has become an important topic, perhaps belatedly given its broad implications. Quantum may (or may not) wind up contributing to the advance of AI, so it is particularly urgent for quantum folks to be aware of the discussions. (See a few links below.)
One topic that is not yet addressed here is work in the defense industry. Many graduating engineers in all fields will go to work in the defense industry; quantum will be no different. How should they decide what work is ethical? I believe this would be an entirely separate, and very long, discussion.
Inevitably, even a list of this length only scratches the surface. This is a veritable book factory of a subject, with dozens and dozens of books by people from all backgrounds, especially in the loosely-defined fourth topic above. It seems to be a topic covered more in books and white papers than in journals, but perhaps I'm looking in the wrong places. People devote their careers to this topic; I would be thrilled if our own graduates did so. Hopefully, this will embed an ember deep in the brains of today's students that will glow and eventually flare into a flame that sheds new light on the world.

Sources and Resources

I am not by any means the first to think about incorporating ethics into STEM teaching; inarguably, I am late to the game. From my point of view, this lets me take advantage of the learning and work done by others.
I am grateful to the following contributors: Michael Alan Aisenberg, Suzanne Aldrich, Scott Alexander, Ahmed Amer, Lara A. Ballard, Richard Bennett, Maya Bernstein, David Brin, Patrick Coin, Margaret Cullen, Bay Fang, Castor Fu, Mike Godwin, Steve Gómez, Margret Hjalmarson, Annalie Killian, Kelly Knox, Charlie Marcus, Lee Warren McKnight, Ashley Merryman, Sue Moon, Mike Nelson, Craig Patridge, Karla Peterson, Alejandro Pisanty, Stuart Ray, Susan N Erik Read, April Rinne, Kavé Salamatian, Avery Sen, Jun Takei (竹井淳).
Note that even the centuries-old technology of the English alphabet, and our practice with respect to organizing lists of names, is not entirely value neutral; it puts people with family names starting with the letter A in a position of priority.

n.b.: Mike Nelson added a discussion about writing when he reposted my original. While I 110% agree about its importance, I think it's a separate discussion from raising ethical, thoughtful #QuantumNative engineers.


Other web resources:
  1. https://oecd.ai/en/wonk
  2. https://standards.ieee.org/industry-connections/ec/autonomous-systems/
  3. https://ethicsinaction.ieee.org/
  4. https://genderedinnovations.stanford.edu/index.html
  5. https://www.scu.edu/ethics/focus-areas/technology-ethics/resources/embedding-ethics-into-computing-curricula-resources-and-suggestions/

Revision History

  • 2022/2/4: first message posted to Facebook
  • 2022/3/5: first full draft sent to a handful of people for review