Saturday, December 30, 2023

Spelunking CACM, Vol. 15 (1972)



In January, Duda and Hart wrote about using a variant of the Hough transform for finding lines in a pixelated image. More than fifty years later, this is still a standard approach to the problem. A line can be defined in terms of theta and rho, such that x cos theta + y sin theta = rho. First, transform every non-blank point (the algorithm was originally defined for black-and-white images) into a sinusoidal curve in the (theta, rho) parameter space. Points that lie on a common line produce curves that all intersect at a single point in that space. Finding those intersections can be expensive, but the authors give techniques for making it tractable, as well as for allowing some slack in the parameters to account for imperfections or approximations.
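The voting scheme can be sketched in a few lines of Python (my own toy illustration, not the paper's algorithm verbatim): each point votes along its sinusoid into a quantized (theta, rho) accumulator, and the strongest cell identifies the line. Bin quantization supplies the "slack" for imperfect points.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=None):
    """Vote each (x, y) point into a (theta, rho) accumulator.

    Each point contributes one sinusoid rho = x cos(theta) + y sin(theta);
    collinear points vote for the same (theta, rho) cell.
    """
    points = np.asarray(points, dtype=float)
    if rho_max is None:
        rho_max = np.abs(points).sum(axis=1).max() + 1.0
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        # Quantize rho into [-rho_max, rho_max); slightly-off points land
        # in nearby bins, which is the "slack" the paper discusses.
        bins = np.floor((rhos + rho_max) / (2 * rho_max) * n_rho).astype(int)
        acc[np.arange(n_theta), bins] += 1
    ti, ri = np.unravel_index(acc.argmax(), acc.shape)
    theta = thetas[ti]
    rho = (ri + 0.5) / n_rho * 2 * rho_max - rho_max
    return theta, rho, acc
```

For six points on the vertical line x = 3, the accumulator's peak cell recovers theta near 0 and rho near 3, with all six votes concentrated there.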

David Pager proposed a computer-based interactive scientific community: in essence, a queryable database of mathematical theorems that can tell you which theorems you need to know in order to understand a particular paper you are reading. The paper seems to have had little impact, but the GOFAI people, including the Cyc project, should have loved it!

And the March issue was a special issue with eight of the 23 papers from the third SOSP, including a paper on TENEX and one by Liskov. Wow, I'm going to have to read all of these in more detail!

Who can argue with the early introduction of a course on computers and society?

As I noted in spelunking 1970, all of this is starting to feel very modern -- or maybe I'm just old. But it's getting to the point where so many articles feel relevant that it's hard to choose! However, many of these early CACM articles have only single-digit numbers of citations (ACM counting), which I find surprising.

Let me close with an obvious one: Dijkstra's humble programmer. We should all heed the words of the titans, and their Turing Award lectures are a great source of wisdom. "Programs should be constructed correctly, not just debugged into correctness," indeed!  (Would that it were so easy...)



Thursday, December 28, 2023

Movies & Other Culture, 2023

I don't think I saw any major live music, dance, or theater performances in 2023. The only "new" place I went, as best I recall, was ITB (Institut Teknologi Bandung) in Indonesia, where I hope to be going more often in coming years. I did read my usual thirty or so books, listed over on Goodreads (but zero so far in December? I should finish a couple by the end of tomorrow to get to thirty...).

The other major cultural source of entertainment is, of course, moving images. I watched almost no movies on small screens, though my wife and I did watch and enjoy Ōoku (大奥) and Whatcha Gonna Do, Ieyasu? (どうする家康?). I watched a little of Star Trek: Deep Space 9, but it has so far failed to captivate me as much as it has many other people. Beyond that...I pay for a lot of media services and rarely find things I am eager to watch.

So, movies...I think I perhaps saw only seven in theaters this year? Loosely ordered by ranking:

  • Oppenheimer: This really is as astounding as everyone says. Among other things, Nolan uses sound (and lack thereof) to isolate characters, in a technique that very much makes going to a theater worthwhile. Looking forward to it coming to Japan and seeing it again, hoping I can get my wife to go with me. There are one or two anti-Japanese lines in it, consistent with the times, and it does focus on the title character rather than the bombings in Japan, but that's its intent -- there are plenty of other movies about Hiroshima, Nagasaki, or the war in general.
  • Everything Everywhere All at Once: This is good, and I always love Michelle Yeoh, but when I first saw it I didn't automatically think, "This is the best movie of the year." But it stayed with me.
  • Godzilla Minus One: Also fully as good as the praise suggests. My Godzilla-seeing pal and I saw this on the biggest screen in eastern Japan, and we are glad we did. It truly does have characters that resonate.
  • RRR: The most AMAZING movie you will ever see! Well, maybe not quite, but pretty close. It's as over the top as can possibly be, and huge fun. Also, if you thought the volleyball scene in Top Gun was, um, gay friendly, wait until you see the leg squats here! Speaking of Tom Cruise...
  • Mission: Impossible (Dead Reckoning Part One): Worth seeing, but I think some of the editing choices around the money shot are a little odd.
  • A Haunting in Venice: Branagh was born to play Poirot, but should have let someone else direct this one.
  • Indy Jones: Even bringing up the bottom of my list, it was okay. But there really wasn't any need to make this; it doesn't really add anything. It is kind of interesting to see an aging Indy out of his time and place in the 1960s.
...I think that's all I saw? Partly it was working too much (seven days a week most weeks), partly it was just lack of access to both information and good showings. I don't often hear about smaller movies and non-English, non-Japanese movies that I would definitely enjoy given the chance.

Gotta do better on all these fronts in 2024!

Saturday, December 23, 2023

Spelunking CACM, Vol. 14 (1971)

Let's start with a real gem from Niklaus Wirth, perhaps the greatest programming language designer of all time, who will turn ninety next year. In Program development by stepwise refinement, he lays out how to refine programs that perform combinatoric (although he never uses that word) searches, using the example of the 8 queens problem (a problem he attributes to Gauss in 1850). Begin with the general idea of iterating through all possible solutions, then progressively refine the generation of candidates so that the computer (whether electronic or human) doesn't have to do the full calculation on all candidates. He discusses backtracking; the term itself isn't referenced, but I doubt it was original to this paper. Wirth advocates deferring the actual data structure design until the development of the algorithm starts to make it clear what characteristics the data structure should have.
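As a toy rendition of roughly where Wirth's refinement ends up (my own Python sketch, not his stepwise Algol), here is the 8 queens search: one queen per column, with the attacked rows and diagonals tracked incrementally so each candidate is accepted or rejected without re-scanning the board.

```python
def queens(n=8):
    """Return all solutions to the n-queens problem by backtracking.

    Place one queen per column; 'rows', 'diag1', and 'diag2' record which
    rows and diagonals are already attacked, so each candidate is checked
    in O(1) rather than by full evaluation -- the kind of refinement of
    candidate generation Wirth walks through.
    """
    solutions = []
    placed = []                     # placed[c] = row of the queen in column c
    rows, diag1, diag2 = set(), set(), set()

    def place(col):
        if col == n:
            solutions.append(tuple(placed))
            return
        for row in range(n):
            if row in rows or (row - col) in diag1 or (row + col) in diag2:
                continue            # attacked: prune this candidate early
            placed.append(row)
            rows.add(row); diag1.add(row - col); diag2.add(row + col)
            place(col + 1)
            placed.pop()            # backtrack
            rows.discard(row); diag1.discard(row - col); diag2.discard(row + col)

    place(0)
    return solutions
```

For n = 8 this finds the classic 92 solutions.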

In the acknowledgments, Wirth mentions extensive discussions with Hoare and Dijkstra. Oh, to have been a fly on that wall -- but even with my current brain power, let alone that of a fly, I doubt I could even follow the depth of the discussions!

I couldn't figure out how to get the high-res version of the cover (which I presume still exists), but this article was the cover for the magazine in April 1971.


Michael J. Flynn proposed his taxonomy of parallelism the year I was born. In 1971, Allen B. Tucker and Flynn published an article in CACM that provides a clear description of microcoding in processor design, though tracing the references back leads to at least 1964, when Amdahl himself had a paper on the topic, so I don't think Flynn gets credit for the idea.

In an early premonition of Mechanical Turk, Krolak, Felts and Marble proposed A man-machine approach toward solving the traveling salesman problem. They don't make any mention of paying the human to contribute to the solution, though!

I am going to have to go back and look at Amarel's article on a CS curriculum. Off the top of my head, the view of the field itself is too simplistic, but the figure on applications looks pretty good. Maybe it's just very broad?

Also gotta say, it's impressive that there were multiple competing visions for how to get computers to do integrals for you, all the way back in 1971.

Hope you all have had a good 2023, and have a better 2024. I may try to do one or two more of these before the end of the year, but if I don't, see you on the other side...

Friday, December 15, 2023

Annealing: Quantum and Simulated

I don't put a lot of faith in citation counts, but they are one measure of influence. Just a random tidbit on where we are...

Quantum annealing in the transverse Ising model, by Kadowaki and Nishimori, is the origin (as far as I am aware) of quantum annealing as a computational tool -- the technology used by D-Wave, one of the early quantum computing companies. It's a hugely influential idea in our field, though it is a single-purpose computational tool, useful only for finding the ground state of some system. However, it turns out that a lot of optimization problems can be mapped to this problem, so it can be wildly useful if it works well. Unfortunately, it is hard (but maybe not impossible) to do error correction on annealers, so scalability and fidelity are expected to be limited.

Kadowaki and Nishimori cite Optimization by simulated annealing, by Kirkpatrick, Gelatt and Vecchi as an inspiration. This paper is undoubtedly one of the most important papers in all of the history of computation. It probably needs no introduction, but it can be used for a plethora of optimization problems.
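For flavor, here is a bare-bones Metropolis-style annealer (a sketch in the spirit of Kirkpatrick et al., not their code), applied to a toy problem: a 1D Ising chain whose ground state is all spins aligned.

```python
import math, random

def simulated_anneal(energy, neighbor, state, t0=2.0, t_min=1e-3, cooling=0.999):
    """Bare-bones simulated annealing.

    Always accept a downhill move; accept an uphill move of size dE with
    probability exp(-dE / T); slowly lower the temperature T.
    """
    e = energy(state)
    best, best_e = state, e
    t = t0
    while t > t_min:
        cand = neighbor(state)
        ce = energy(cand)
        if ce <= e or random.random() < math.exp((e - ce) / t):
            state, e = cand, ce
            if e < best_e:
                best, best_e = state, e
        t *= cooling
    return best, best_e

# Toy target: a 1D Ising chain; the ground state is all spins aligned,
# with energy -(n - 1) for n spins.
def ising_energy(spins):
    return -sum(a * b for a, b in zip(spins, spins[1:]))

def flip_one(spins):
    i = random.randrange(len(spins))
    return spins[:i] + [-spins[i]] + spins[i + 1:]
```

A few thousand temperature steps reliably aligns a short chain; quantum annealing replaces these thermal fluctuations with a transverse field that is gradually turned off.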

And so the scores -- citation counts according to Google Scholar:

  • Quantum annealing: 2,159
  • (Classical) simulated annealing: 57,085
Wow...

Wednesday, December 06, 2023

Passings

I just found out that a friend I have known since 1992 has passed away. About a decade older than me, he was a quiet, shy man with a wonderful family. His wife is as outgoing as he was shy, and they complemented each other perfectly. She will be lost without him. Their two daughters, both in their mid-thirties, are well established in their lives.
Beyond these simple facts, I am at a loss for what else to say. How do you convey the warmth and intelligence of a friend? The times we spent together, mostly in the 1990s? John's passing will leave a hole in the lives of many.

Saturday, December 02, 2023

Workin' on a Saturday

 


I am not the only one working on a Saturday! Dozens of egrets and herons in the Hikiji River -- word must be out that there is sashimi on the hoof (er, fin?) here.

Also, Fuji-san's snow cover was fantastic a couple of weeks ago, then almost disappeared, and is now thin but present.


Monday, November 27, 2023

Spelunking CACM, Vol. 13 (1970)

I'm baaack! It's been a while since I did a blog posting on spelunking of the Communications of the ACM. Now I'm up to volume 13, 1970. Dang, there was a lot of action in that year. On my first pass, I picked up ten papers, but I don't want to review that many in depth, so let's pick up a couple:

Per Brinch Hansen's Nucleus paper is one of the most famous papers in operating systems. It has the notions of a program (the set of instructions), an internal process (the executing context of a program, without I/O), and an external process (the I/O context for the program). Processes communicate with each other via message queues, using Dijkstra semaphores. Processes are organized in a process tree; a parent can create child processes. Resources are allocated in a hierarchical fashion: if you own a resource, you can subdivide it and share it among your children, and you regain control when a child exits. It's an astoundingly modern conception of what an OS kernel should be, with the exceptions that it has no virtualization whatsoever that I can find and makes no mention of security.
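The hierarchical resource idea fits in a few lines (a toy model of my own, not Brinch Hansen's actual interface; all names here are mine): a parent carves pieces out of its own budget for its children, and reclaims each piece when a child exits.

```python
class Process:
    """Toy model of Nucleus-style hierarchical resource allocation.

    A parent may grant a child only resources it holds itself; when the
    child (and, recursively, its descendants) exits, the parent regains
    control of the granted resources.
    """
    def __init__(self, name, budget, parent=None):
        self.name, self.budget, self.parent = name, budget, parent
        self.children = []

    def spawn(self, name, budget):
        if budget > self.budget:
            raise ValueError("cannot grant more than we hold")
        self.budget -= budget              # subdivide our own allocation
        child = Process(name, budget, parent=self)
        self.children.append(child)
        return child

    def exit(self):
        for c in list(self.children):      # descendants exit first
            c.exit()
        if self.parent is not None:
            self.parent.budget += self.budget   # parent regains the resource
            self.parent.children.remove(self)
        self.budget = 0
```

Spawning a child with 30 units out of a root holding 100 leaves the root with 70; when the child exits, the root is back to 100.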

Another that has to be high on the list of great papers from the year is Burton H. Bloom's filter paper, introducing what are now known as (surprise) Bloom filters. A little to my surprise, the paper has "only" (ha!) been cited a little over 10,000 times, according to Google Scholar, but if you just search for "Bloom filter" it will return half a million to a million entries, depending on exactly how you phrase it (i.e., with or without quotes).
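For anyone who hasn't met one, a minimal Bloom filter sketch (my own toy version; real implementations pack bits tightly and choose m and k from a target false-positive rate): k hash functions set k bits per inserted key, so lookups can return false positives but never false negatives.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions set k bits per inserted key.

    A lookup can return a false positive (all k bits happen to be set by
    other keys) but never a false negative -- that asymmetry is the trick.
    """
    def __init__(self, m_bits=1024, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits)   # one byte per bit, for clarity

    def _indexes(self, key):
        # Derive k independent-ish hash values by salting one strong hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for ix in self._indexes(key):
            self.bits[ix] = 1

    def __contains__(self, key):
        return all(self.bits[ix] for ix in self._indexes(key))
```

Membership tests cost k hashes regardless of how many keys are stored, which is why these show up everywhere from databases to routers.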

There were plenty of other important things, such as a pretty detailed technical standard for 1600 CPI (characters per inch) magnetic tape. No such standard would fit into a magazine anymore; it would be hundreds of pages long!





The notion of pricing resources according to demand and time of day was already understood, if still an active area of discussion. Multics was being instrumented. Man, it was a great year for systems work! There was other OS work as well that I found intriguing. A Scholar citation count of 14,000 suggests that a relational model of data for large shared data banks has had some impact. A program to teach programming seems important to me, but it wasn't the first such thing, I don't think.

Maybe the year away from this spelunking project is coloring my opinion, but to me this feels like the first truly modern year. The late Sixties and early Seventies were a time of great change and turmoil throughout the world, from the Cultural Revolution to the Beatles to the hippies to the Apollo program to (what Americans call) the Vietnam War...but computing was right there in the mix, changing almost every day.

Tuesday, November 14, 2023

社会のための情報技術のターニングポイント

10月1日に私はサイバーインフォマティクス(CI)プログラムのチェアパーソンになりました。ウエッブサイトの紹介として、以下のメッセージを書きました。

 21世紀最初の四半世紀は、目につくあらゆるものが情報化され、インターネットに接続される時代でした。 車、サーモスタット、ベビーモニター、観葉植物や衣類に至るまで、あらゆるものが地球規模の巨大な情報システムの一員となりました。この傾向は、AIの発展とも相まって、ますます強まっています。次の四半世紀は、これに関わる問題が噴出する時代となるでしょう。そこには、技術的問題だけでなく、テクノロジーが社会に与える影響についての重要な対話が隠されています。技術的には、計算のエネルギー・コストや、大規模・小規模を問わずデジタル・コンピューターの限界に対処しなければなりません。 社会的には、個人情報の倫理的な生成・収集・利用、そして個人・企業・政府による情報の管理、さらには人為的な地球温暖化など、社会の重要な問題の解決にテクノロジーをどのように応用していくかに取り組まなければなりません。

SFCのサイバー・インフォマティクスの教員、研究スタッフ、学生は、常に社会的背景の中で活動し、ITの限界に挑戦しています。社会と情報システムの統合に関するもの(農林業や建築におけるIT化、スマートシティ、ビジネスやエンターテインメントにおけるドローン、社会におけるロボット、自動運転車など)や、これらの礎となっているSFのようなIT自体の進歩(モバイル・ネットワーキング、インターネット、モノのインターネット(IoT)、データベースなど)から、将来情報技術に大きな転機を与える研究(例えば量子インターネット)、など、多岐にわたります。サイバー・インフォマティクスの「サイバー」とは、物理的な世界に存在し、人々のために環境と相互作用するITのことです。

CIプログラムの学生は、研究グループの枠を超え、他の政策・メディア研究科の教員や、日本国内および海外に広がる研究機関間のグループと共同研究を行うことが奨励されています。また、慶應義塾大学サイバーセキュリティ・コースなどの資格取得も奨励されています。

CIプログラムの教員は全員、日本語と英語を併用して研究を行っています。そのため、語学力は優れた研究を行う上での障壁にはなりません。SFCの他の大学院プログラムと同様、性別、性的指向、宗教、国籍、母国語など、あらゆるバックグラウンドを持つ学生の参加を歓迎します。

世界は絶えず変化しており、ITが重要な役割を果たしています。CIプログラムに参加して、世界をより良い場所にするために、その変化を構築し、展開し、研究してみませんか?

A Turning Point: Information Technology in the Service of Society

On October 1, I became Chair of the Cyber Informatics Program (department, more or less, but mostly less) of Keio University's Graduate School of Media and Governance, at our Shonan Fujisawa Campus. This is the chairperson's message that I wrote for our website. Someday I will step down as chair, so I am recording this here for posterity's sake. (Not that I think this blog is forever, either, but I expect it to outlast my chairmanship.)

Over the first quarter of the twenty-first century, everything – your car, your thermostat, your baby monitor, even your house plant and maybe your clothing – has been connected to the Internet and increasingly is bound to global scale artificial intelligence (AI)-driven systems. And yet, within that success lies our challenge for the next quarter century: not only the next technical problem to solve, but also a crucial dialog on the impact of technology on society. Technologically, we must address the energy cost of computation and the limitations of digital computers at both the large and small scale.  Socially, we must address the ethical generation, collection and use of personal information and the personal, corporate and governmental stewardship of that information, and how to apply technology to solving society’s key problems, such as anthropogenic global warming.

The faculty, research staff and students of the Cyber Informatics Program at SFC are pushing the boundaries of IT, but always within that societal context. Our technical work spans the stack from the foundations of information technology (e.g. quantum Internet), through science fiction-like advances in what IT can do (mobile networking, Internet, Internet of Things (IoT) and databases), and on up to integrating IT into our daily lives with the goal of improving society for all (IT in agriculture, forestry and building architecture, smart cities, drones in business and entertainment, robots in society, and self-driving cars). The “cyber” in Cyber Informatics can be thought of as referring to IT in situ, in the physical world, interacting with its environment in the service of people.

Students in the CI Program are encouraged to collaborate across research groups, with faculty in other Graduate School of Media and Governance Programs, and in inter-institutional organizations both within Japan and extending abroad. They are also encouraged to acquire certificates such as the Keio Cyber Security Course, taught in part by our faculty.

The CI Program faculty all work in a combination of Japanese and English, and language skills are not a barrier to good research here. As with all of SFC’s graduate programs, students of all genders, sexual orientations, religions, nationalities and native languages are encouraged to join us.

The world is perpetually changing, and IT initiates some of that change; come join the CI Program and build, deploy, and study that change to make the world a better place.


Saturday, November 11, 2023

Margo Seltzer on The Power of Allies

 Wow. Great words from Professor Margo Seltzer:

The most important message goes to both senior and junior researchers alike. This has been my mantra for the last decade: it is not the job of the underrepresented to solve underrepresentation [emphasis mine]. It is the job of the majority to make their field open, welcoming and enticing. The first problem we create is we put all of the underrepresented people on the diversity committee. That is not going to fix the problem. The problem needs to be fixed by the majority, who are comfortable in their fields. That is the single most important thing.

 Read the whole article (both Part One and Part Two)!

Release of our Creative Commons-licensed Undergraduate Textbook, Quantum Communications!

 Tell your friends, tell your family, tell your students, tell your enemies!

Focusing on #QuantumCommunications and #QuantumInternet, on Tuesday this week, Michal Hajdušek and I released our undergraduate textbook, Quantum Communications. It is available in PDF from
https://arxiv.org/abs/2311.02367
https://scirate.com/arxiv/2311.02367

and LaTeX source from
https://github.com/sfc-aqua/Overview-of-Quantum-Communications-E

This is the companion book to our online course, "Overview of Quantum Communications", videos for which are available on YouTube at
https://www.youtube.com/@QuantumCommEdu/playlists
in both English and Japanese (so far).

The book and other materials are available under the Creative Commons CC-BY-SA license.

(The photo has no direct relationship to the book, it's just my favorite photo I took this week. Lightning in Bangkok!)

Tuesday, April 11, 2023

Twentieth Anniversary

Last month (March 2023) was the twentieth anniversary of my move from classical into quantum computing "full time". I'll always remember exactly when this was, because I was working for Nokia at the time, and had already decided that I wanted to move from engineering back into research, and in particular to pursue quantum computing. I was out on parental leave after our second daughter was born when it was announced that the unit I was part of was being shut down; some people moved into other internal positions, others were let go. I had been trying to get Nokia Research to take me on and let me move to Tokyo to be a Ph.D. student at Keio, but we couldn't work it out, so I agreed to take the layoff buyout. We parted quite amicably; I am still close with a few of the people from that era.

So.

Between the Network Alchemy stock options from the Nokia acquisition, plus the buyout, plus some money we made from selling our house in Half Moon Bay, I had enough to pay for a Ph.D. and eventually put a down payment on our house here in Kamakura. So I didn't worry about money, initially (I should have been worrying much more about it for thirty years, but that's a different story).

We took care of our kids, sold the house, did a month-long cross-country road trip, did various things, and moved to Japan in early August 2003. I attended my first quantum computing conference, the fourth EQIS (now known as AQIS) at the end of August or early September before officially becoming a Ph.D. student in late September.

I came to Keio to be Kohei Itoh's Ph.D. student, but I was afraid I couldn't get into the applied physics Ph.D. program, so I applied in CS and wound up with Fumio Teraoka, a respected networking researcher, as my official advisor. (Tera-san once said to me, roughly, "You got Paul Mockapetris and Bob Hinden to write you recommendation letters?!? Geez...") Kohei was my unofficial advisor, and Jun Murai an occasional mentor and later unofficially on my Ph.D. committee. Kohei quickly introduced me to Kae Nemoto, who became an important collaborator and also joined my Ph.D. committee. Kohei is now president of Keio, making him, to the best of my knowledge, the first quantum computing person to run a major university anywhere in the world.

I did my thesis defense June 13, 2006, if I remember right, then the final exam a few weeks later and graduated that September, 36 months after starting. Not too bad.

It was, and remains, quite the adventure. I am glad I made this choice. Thanks to all who have supported me or chosen to run alongside me on this quixotic quest.

Thursday, March 16, 2023

RFC9340: the First Quantum RFC!

 Request for Comments (RFC) 9340: Architecture Principles for a Quantum Internet is now published! Thanks to Wojtek, who originated it, and Stephanie, Shota, Marcello, Angela Sara, and Bruno, who all made invaluable contributions to it.

RFCs are the technical documents that describe the Internet. They come in several "streams", or publishing tracks. This one is from the Internet Research Task Force (IRTF) stream. They may have one of several statuses, including "Proposed Standard", "Internet Standard", or "Experimental". This one is "Informational", a common type for things that are ready to share and to be haggled over in a public forum, but don't yet directly impact the operation of broad swathes of the Internet itself.

Creation of this document took four years and almost a dozen formal drafts in a public forum, with lots of comments not only from the authors but from other members of the IETF and QIRG communities. Thanks, all!

Reducing Errors by...Using Errors

 Just a quick note: our newest paper, "Leveraging hardware-control imperfections for error mitigation via generalized quantum subspace", by Yasuhiro Ohkura, Suguru Endo, Takahiko Satoh, Rodney Van Meter, Nobuyuki Yoshioka, is up on the arXiv at https://arxiv.org/abs/2303.07660!

This paper goes into a lot of detail on error mitigation techniques. Basically, in quantum error mitigation, you add errors into your quantum circuit in order to understand the effect of errors on the answer to your problem. For example, you introduce some waiting time between finishing the computation and measuring the results, to see how decoherence affects the state. Or, you add a couple of gates that, in theory, cancel out, and see how the error rate changes. These kinds of techniques have the potential to extend the range of problems that can be solved with noisy quantum computers.
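Our paper's generalized quantum subspace method is considerably more sophisticated, but the simplest cousin of these ideas, zero-noise extrapolation, fits in a few lines: measure the observable at deliberately amplified noise levels, fit, and evaluate the fit back at zero noise. A toy sketch (the exponentially decaying "measurements" here are fabricated stand-ins, not hardware data):

```python
import numpy as np

def extrapolate_to_zero_noise(scales, values, degree=2):
    """Richardson-style zero-noise extrapolation: fit the expectation
    value <O>(lambda), measured at amplified noise scales lambda >= 1,
    and evaluate the fit at the unreachable point lambda = 0."""
    coeffs = np.polyfit(scales, values, degree)
    return np.polyval(coeffs, 0.0)

# Toy model: the ideal expectation value is 1.0, and noise decays it
# exponentially; we only get to "measure" at amplification scales 1, 2, 3
# (e.g., by inserting extra waiting time or mutually cancelling gates).
ideal = 1.0
scales = np.array([1.0, 2.0, 3.0])
noisy = ideal * np.exp(-0.05 * scales)    # pretend measurements
mitigated = extrapolate_to_zero_noise(scales, noisy)
```

The mitigated estimate lands much closer to the ideal value than the raw scale-1 measurement does, at the cost of extra circuit executions.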

This paper, joint work with NTT and Todai/RIKEN/JST (one author has three affiliations), represents an amazing amount of learning by Yasuhiro (who goes by the nickname "rum"). The math in it is captured fairly succinctly, but in fact there is a lot more underlying it, and I am deeply impressed by rum's rate of learning over the last year. He has used an astounding amount of computation time on specific IBM processors during the creation of this paper, and I would say it has paid off. I think this paper will help guide others who have read a little about error mitigation, but aren't sure how to apply it in their own context. It's the kind of practical paper that bridges theory and implementation that I consider a signature of my group.

Thursday, February 09, 2023

Cloud Cover

 If you're looking for information on cloud cover, this website has you, er, covered:

https://www.earthenv.org/cloud

Claims to be 1km resolution; you can certainly zoom way in. Data is available Creative Commons, too.

A table of data for a few cities in Japan is available here.

Sunday, January 15, 2023

Some Recent Important Quantum Theory Results, Part One: MIP* = RE

 While preparing for class, and for my general edification, I have been trying to organize thoughts about recent important results in theory and algorithms. The latter I try to keep up with on general principles, but the former is largely above my station on the mountain, as I stand on the col looking up at the peak through the swirling clouds of my own ignorance. I'm an engineer, and proud of it, and so I have an engineer's understanding of complexity theory. In fact, I have the understanding of an engineer trained in the 1980s; the theorists have been busy since then, and have really changed the landscape! So, take all that I say here with a grain of salt...


MIP* = RE 

(announced in January 2020, with evaluation ongoing) 
Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, Henry Yuen

I'm not part of the CS theory community, but I could hear the trumpets, kazoos and champagne corks from over here in the Engineering quad on our global, virtual campus. This paper is considered a big deal, assuming it's right, to which most people seem to have given provisional assent.

The introductory paragraph of the CACM version summarizes forty years of theory -- exactly what I need. The key advances: understanding the importance of randomness (in both the action of the algorithm, and in a willingness to accept a probabilistic answer, I believe), interactive proofs, and multiple interactive provers. The authors then go on to cite the addition of entanglement as the fourth major shift. I feel like if I understood this paragraph alone, I would be a far better researcher.

Backing up a bit, computational complexity classes...checking The Complexity Zoo, we see that there are currently 546 complexity classes acknowledged there. Back when I was an undergrad, I learned about 2.5 of them: P, NP, and "bigger than NP", so things have changed a bit. To understand the importance of this paper, we are going to have to understand IP, MIP, MIP*, and RE and their relationship to some of the other basics. Okay, the Complexity Zoo itself is pretty intimidating, so let's try the introductory Petting Zoo, which introduces sixteen important classes...none of which are IP, MIP, MIP*, or RE.  All right, let's roll up our sleeves and see what we know...

  • P is the class of problems solvable in polynomial time -- the simple ones.
  • NP does not mean "non-polynomial"; it means non-deterministic polynomial. If you have a problem with high combinatoric complexity, the challenge is to pick good candidate answers to try. If trying one solution and failing doesn't help much in eliminating other possible solutions, you might be in NP territory. Typically, if the size of the problem is n, the number of possible solutions is exponential, e.g. O(2^n). But the formal definition of NP is such that the Turing machine scratches its head, shrugs its shoulders, and somehow magically picks the right answer each time, in polynomial time. The practical implication is that there may be an exponential number of potential answers to check one by one, but once you have found an answer it is easy (polynomial time) to verify that it is correct.
  • MA is Merlin-Arthur, and is the first time we run into two parties in our theoretical conception and proofs. Merlin has infinite computational power, and wants to prove to Arthur that he knows the solution to a problem, so Merlin sends Arthur a proof. (This proof must be no more than polynomial in size, relative to the problem size n.)  Arthur has normal, mortal computational power, but he also has access to a source of randomness (coin flips), which he uses after receiving the proof from Merlin. Arthur checks the proof, using his coin flips as needed, but there is a probability he will wrongly decide whether or not the proof is correct. Generally, we require the probability of correctly accepting a true proof to be at least 2/3, and the probability of incorrectly accepting a wrong proof to be at most 1/3. (That is, the probability of false positive or false negative is each no more than 1/3.) This is the introduction of randomness discussed above. Note that, in this conception, only one round of communication is used.
    Merlin can be called the prover (P), and Arthur the verifier (V). Those terms come up a lot later.
    (Theorists were actively working on this around the time I was an undergrad, but I was completely oblivious to it. Babai 1985 is a key reference.)
  • AM is Arthur-Merlin, which is similar except that Arthur makes his coin tosses before Merlin produces the proof, and sends the coin toss results to Merlin, who can use them as he wishes. After Merlin sends the proof to Arthur, Arthur is required to deterministically check the proof.
  • IP is Interactive Proof, also introduced in 1985 by Goldwasser, Micali and Rackoff. It's similar to AM, except that there can be a polynomial number of rounds. This is where the interaction mentioned above comes in.
  • MIP is Multi-prover Interactive Proofs, introduced by Goldwasser & co. in 1988, and uses two (or more) provers. The provers are allowed to decide on a shared strategy or otherwise share information before the proof starts, but not after it begins. Perhaps the easiest way to enforce that non-communication is to verify the positions of the two provers, and require their answers to reach the verifier in time so short that the speed of light prevents them from having communicated with each other. Now we have our multiple provers.
  • RE is Recursively Enumerable: the set of all problems for which a Turing machine can confirm a "yes" answer in finite (but not limited to polynomial, or indeed any bounded) time; on a "no" instance it may run forever. It's a very big set -- it even includes the halting problem.
  • ...and for the purposes of this blog entry, there things stood for a long time...until:
  • MIP*: MIP, except that the provers get to share entanglement beforehand. MIP* includes things like the CHSH game.

I'd like to have good pedagogical examples to go with each of those classes, but at the moment, I don't. My apologies; I'll try to add some later. One important note is that from MA on down, a lot of classes are often expressed in terms of cryptographic functions or cryptographic games.

Also, roughly, that list goes from less complex to more complex, with each class including all of the problems in the classes above it on the list, but in general not all of the relationships are exactly known. Most famously, NP is commonly believed to include problems that are outside of P, but that has never been proven.

And so, the proof of MIP* = RE...drum roll, please...

I don't understand it.

But I'm not alone; most people don't. It's a complicated thing.

In fact, the link above is to the CACM version of the paper, which is written for readers like me, but the full paper is 237 pages. Not very many people have even read the whole thing.

But, I think I am safe in saying this: Very, very, very roughly, it is now (provisionally) known that polynomial amounts of entanglement can be used to solve some very enormous problems. What "polynomial" means here, and what the constant factors are, and real-world examples of problems that are classically intractable but quantumly tractable in practice, all remain as future work. Henry Yuen even refers to this as an "elephant" they are trying to get a feel for.

I wish I could do a better job of explaining its importance, though.  Instead, I will defer to one of the authors, Henry Yuen, and to Scott Aaronson. Scott in particular links to a lot of other explanations, so you can go from there.


Book Progress: Quantum Communications

Over the Christmas & New Year's break, a lot of editing on the book got done -- thousands of edits, according to git. Fourteen of the fifteen chapters are in serious local review; Chapter 15 still needs quite a bit of editing. Boilerplate, exercises, index, etc. are partially done; no cover yet. The glossary and table of notation aren't done yet.

If you want to be an alpha reviewer, let me know! It will be distributed soon.