Tuesday, December 21, 2021

New Quantum Networking Papers

Just a short post.

A few months ago, I posted A #QuantumInternet Position Paper. I'm happy to report that it has now been extended and refined, and is available as a preprint titled A Quantum Internet Architecture, for your reading pleasure.

Also, our Quantum Internet Simulation Package is now described in a separate preprint titled, well, QuISP: A Quantum Internet Simulation Package. It is also possible to try QuISP directly in the browser via WASM.

Enjoy, and let us know if you have feedback!

Saturday, October 16, 2021

Quantum Ponzi

I have seen bubbles and winters(*) in AI, Internet, computer architecture, and even the PC. (And I've seen attack ships on fire off the shoulder of Orion.) Now, some argue, we are in a quantum bubble that will inevitably pop and lead to a prolonged (and deserved) quantum winter. I agree that there is excessive hype making the rounds, but some of the current reaction to that has its roots in the basic academic/industrial culture clash. There are always those in academia who dislike the charlatanism and shamanism of capitalism, but some of that is inevitable. Our job as academics is first, of course, to advance the science itself, and (also first) to nurture the next generation of talent; but second, to convey the scientific results to colleagues, funders and investors in a way that tempers expectations and minimizes the depth of the winter when it comes.

In July 2021, Prof. Victor Galitski of the University of Maryland's Joint Quantum Institute posted a long anti-quantum-hype piece on LinkedIn. I disagree with the title not at all. I disagree with some elements of the contents just as emphatically as I agree with the title. Usually I just let such criticism slide (or post a vision for the future), but this time I felt I should respond, both because this one has gotten some airplay and because Galitski's own JQI position seems to lend it some weight of authority. Plus, he brings up some good points worth responding to thoughtfully, and some naive points that I'll endeavor to respond to politely. And finally, some people I admire have made positive comments about it, so I wanted to counterbalance their thoughts. Although this is structured around Galitski's criticism, I hope the points I make will resonate further than just the single article.

Let me attempt to summarize his points, first the major ones, then the smaller ones, rather than in the order he presents them.

  1. There is a lot of unjustified hype around QC these days.
  2. The hype comes from unsubstantiated claims about a. the present and near future of quantum hardware, and b. the long term potential of quantum to change the world.
  3. These claims are coming from a. pure at heart researchers who have been corrupted by capitalism, and b. already corrupt capitalists with no clue, who are only in this to make a buck. The latter group includes VCs and classical computing people.
  4. These claims are resulting in an investment bubble that will inevitably pop.
  5. That bubble is draining away talent, both seasoned and potential, that rightly belongs in academia.
  6. As a corollary, the work done in industry (both large and small) is less important than the work done in academia.
  7. The popping of that bubble will poison investment for some time to come.
  8. Only physicists are qualified to talk about quantum.
  9. The barrier to entering the software industry is low, therefore software is easy and never gets harder, and therefore is unworthy of respect.
  10. Quantum systems are vulnerable to security problems.
  11. Today's quantum systems can be fully simulated, therefore today's cloud accessible quantum systems may be fraudulent.
Put bluntly, this is more than legitimate skepticism of quantum hype; it is a pinched, narrow view of the world.
Let me try to go through these points roughly in order:

1. hype exists. Hoo, boy, I don't think anyone would disagree with this.

2. hardware and software aren't good. In his second paragraph, Galitski manages to both diss the fundamental importance of quantum algorithms and pooh-pooh the state of hardware. Despite being at JQI (which I suppose has a broader remit than just quantum information), he states, rather bluntly, that none of the existing algorithms will truly revolutionize our world, and by implication that such a revolution is unlikely to ever be forthcoming. I disagree. It is no longer anything more than obstinacy to refuse to recognize the profound shift that quantum information represents at the theoretical level. It is fully as fundamental as the shift from analog to digital information. When and how that will affect daily practice is the question at hand.

It is true that very few algorithms have been rigorously evaluated to determine what machines we would need to execute them on problems of commercial or scientific interest. But that number is not zero; it's perhaps ten or twenty, depending on how you count, and yes, the fidelity and resource demands often come out far higher than we initially, naïvely hope. Shor's algorithm was among the first so evaluated; more recently, chemistry and finance have been getting the treatment. Ultimately, we need to go through the entire Quantum Algorithm Zoo, line by line, and identify the smallest problem that's infeasible classically, and therefore where QCs need to be in technical development to generate truly new results (as well as figure out which of those algorithms have real-world impact, and which are only of theoretical interest). However, the existence of hybrid algorithms complicates that picture; we may well reach the point where quantum computers usefully solve sub-problems for us before they truly, definitively exceed classical supercomputers.

Data centers today consume about 1% of the world's generated electricity (and that share is still growing), and Haber-Bosch manufacture of agricultural fertilizer consumes another 1% of all energy. The logistics and transportation industries consume even larger amounts of energy, and optimizing them is an enormous computational task. Both specific computational results and the general deployment of quantum computers may affect this energy landscape, but it is incumbent upon us to make that story increasingly concrete. This is very much an engineering problem, and requires incorporating a lot of details about the machines to be used; it's much more than an $O(\cdot)$ problem.

Classical supercomputers are, in fact, an interesting point of comparison. The fabrication of quantum computers benefits from classical VLSI technology, and their operation requires a lot of supporting classical computation. More importantly, the success of classical digital computers is so tremendous that quantum computers have a very tall hill to climb before surpassing them. Conversely, classical computers are facing truly fundamental problems: working at the atomic scale, and dealing with heat. The former is a result of Moore's Law, the latter of the end of Dennard scaling. Current transistors are only a few tens of atoms across, and we don't know how to make transistors out of anything smaller than an atom. The heat problem has a solution, but it will require major reengineering. (See my paper, Q's C problem, C's Q problem.) Quantum computing offers partial solutions to these problems, both through its physical technological contributions and through its potential to attack certain classes of computational problems, especially those with modest amounts of state but exponential growth in the interesting state space. So, quantum computers still have a long way to go, but they are both desirable and necessary.

3. hype is corrupted, unqualified or both. Wow, the nose-in-the-air ivory tower attitude here is high. "the researchers are forced to quit activities, they are actually good at and where they could have made real impact, and join the QC hype", we are told. For more on this, see points 5 and 8, below.

4. this is a bubble. This is perhaps Galitski's most important point. Growth in investment is absolutely necessary for the field to expand beyond its academic roots and create an industry, but the media hype and the current ready availability of VC funds mean that not all investment is wise. The way to improve the quality of investment is experience and education. Some of these startups will fail, for sure; some should never have been invested in in the first place. Want to make a difference here? Engage with VCs and help them learn and make wise decisions.

5. brain drain. Eating our seed corn is definitely a problem, but one that is largely self-correcting, and one known since the 1990s or before. There are plenty of reasons to dislike Silicon Valley, but overall the balance (and sometimes tension) between government labs, government-funded university research, corporate-funded research both at universities and in corporations' own labs, industrial development, and startups is fundamentally healthy. (Though the US, EU, JP, KR, CN, SG, and AU models differ rather dramatically.) There is a valid and serious issue of how best to manage this (or to let it operate without intervention), but Galitski isn't making an argument on that topic, just lamenting the movement of people into other areas.

6. industrial work isn't good, and is mostly less than fundamental. I think this is so blatantly wrong (or at least elitist, "my problems are the only ones worth solving") that it hardly needs refuting. As I noted above, we need to go through the available algorithms and figure out which are industrially relevant; that's an engineering activity sitting right where people are poised to leap into industry, making their systems, algorithms and talent useful outside the confines of their own laboratories. That tech transfer is among the most important activities of all, even if there is a lot of skepticism about how well it really works out for universities.

The whiff of success, and with it the possibility of strategic advantage and riches (including university IP licensing), is leading to increased friction within the system, imposed by governments and corporate agreements, impeding the flow of people and ideas through restrictive agreements and import/export paperwork and restrictions. Of the two, the government-imposed limits worry me more, because they can't be gotten around.

7. popping bubbles are bad. Galitski enumerates his two main points: that the current investment scene is a Ponzi scheme, and that the lure of money is drawing the best people out of academia and into the nascent industry. (He lists a third point, that hype is bad for science, but here he seems to primarily mean as a consequence of the first two points.)

In a true Ponzi scheme, investors have a responsibility to pay those who recruited them, which they fulfill by recruiting others to pay them in turn. This pyramid or tree structure depends on continued exponential growth in those willing to invest (if each investor must recruit two more, the scheme needs $2^n$ new investors at the $n$th level), and so collapses when the potential investor pool dries up, with the last round of investors left holding the bag.

I don't think a bubble is the same thing as a Ponzi scheme. Moreover, if we manage it well, investment in quantum computing will grow wiser and more rational. In that sense, criticism and discussion of irrational investment and building realistic expectations is welcome.

It does puzzle me why Galitski cares at all, since he apparently thinks there is little of value in quantum computing altogether. "To be sure, there are gems," he says, but there is little if anything positive in his take.

8 & 9. quantum belongs to the physicists.

To really see Galitski's opinion of the tech industry as a whole, it's worth quoting him:

A successful company in the "quantum technology space" can not pop up like Facebook or TikTok or a similar dumbed down platform, based on a code written by a college drop out. What's needed is years of education, work, and dedication. But what's going on is that there is an army of "quantum evangelists," who can't write the Schrödinger equation[.]

"You can't QC if you don't Schrödinger" smacks of elitism, but I suppose that's a point of view with moderately broad support in the community. (Heck, of course an author like Galitski thinks you should do a lot of QM before you do QC.) Personally, I can say

$i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle = H|\psi(t)\rangle$

with the best of them, but -- and this will elicit gasps -- I don't think you need to do that in order to do QC. In fact, I think it misses the point if you want to develop software; the skills you need are very different. (See my quantum computer engineer's bookshelf.) I'd be more inclined to say you can't QC if you don't sashay the Fourier. Finding the interference patterns that drive interesting quantum algorithms will require creativity, math, and perhaps geometric thinking; one-dimensional wells, the ultraviolet catastrophe and perturbation theory can be left for (much) later.

It's not clear which tech industry college dropout he has in mind; certainly there are a lot to choose from. There are even a lot to choose from if you restrict your list to those whose products have a mixed effect on society as a whole. It is true that it is possible to begin a large classical software project with almost no investment; the barrier to entry is low. That is largely seen as a plus, rather than a minus, across the industry. But being dismissive of the amount of time and brainpower invested, and of the actual intellectual innovation and research it takes to reach the scale of global impact, is foolish.

Fundamentally, it is important to recognize that there are a lot of really smart people in the world who aren't physicists, and some of them are trying to figure out how to deploy quantum computers (and quantum networks) outside of the physics laboratory. There are hardware engineers, software engineers, and business people who are learning. They need the room, time, respect and support to make this happen.

I have spent quite a bit of time with people in Japan, the U.S., and other countries who started with zero clue about quantum but are starting companies. Some of them start out roll-your-eyes clueless, and yes, most of those will go down in flames. Others, however, will surprise you. Through hard work and a willingness to study, they are in fact learning. Ultimately, they will build or buy a clue, or go out of business.

Yes, it would be better if they weren't a drain on resources (money and people) and reputation while acquiring or failing to acquire their clue. But over time, those doing the evaluation (VCs and the general public) will themselves become more knowledgeable and sophisticated. Personally, I would rather they did that with our blessing and our support rather than without.

10. insecure systems. I have no doubt that today's cloud-accessible quantum systems have security vulnerabilities. All computer systems have them. It's a tenet of our industry. I don't understand why this is relevant to Galitski's larger point.

11. fraud! Because today's systems could be fully simulated, there might be fraudulent companies out there, some Quantum Theranos. Yeah, I suppose that's possible. "Fake it 'til you make it." Faking quantum computers is easy; faking quantum computer development is hard. You think investors aren't going to come look into the labs? You think they aren't going to expect to see dilution fridges, racks of FPGA boxes, even lines of FPGA source code? And, over the next few years, results of calculations that can't be simulated? Especially in a post-Theranos atmosphere? Due diligence is always necessary (and I have seen it go wrong), but I don't think this is a valid point for criticizing the nascent industry.


Overall, I find Galitski's criticism to have a few valid points; we all agree that hype will result in negative effects for the community as a whole as a "reality correction" sets in. But -- and perhaps I'm being too sensitive here -- I read his criticism as coming from a deep misunderstanding and dislike of the tech industry, and skepticism not just about the current quantum frenzy but more deeply of the value of quantum computing itself. I disagree.

We want to avoid the sheer silliness of the dot-com bubble, with its worst excesses in domain names and business models. At the same time, we want to avoid the prolonged AI winters, in which too few smart people and too few research dollars entered the field. (Keeping in mind that, despite its demonstrated, thrilling successes, we might be in a time of over-exuberance for machine learning, the currently favored model of AI; studying its successes and excesses carefully would be instructive for the future of quantum computing.) Let's all be responsible and realistic about the amount of work to be done, but maintain our optimism and faith in the long-term vision.

To quote myself, we are in the time of Babbage trying to foresee what Knuth, Lampson and Torvalds will do with these machines as they mature. Let's do it.

Onward and upward!

(*) What a mixed metaphor! Can we have springs and renaissances, too? Or at least some explanation of how a bubble popping results in winter?

Wednesday, September 22, 2021

Astrophotography: Kanto Dark Spots


My wife and I have been going places to shoot night skies off and on for the last couple of years. We live in Kamakura, which is suburban and within the "light dome" of Yokohama, the second-largest metropolis in Japan. So we've got to go somewhere in order to get decent skies. There are one or two spots within an hour's drive, but many of the places we have gone are 3-5 hours each way. (Less late at night, but it can be hellishly bad on a late Sunday afternoon, trying to get back toward the population centers of Kanto.) The screenshot above (from https://www.lightpollutionmap.info/) shows the challenge we're up against. The blue areas deep in the mountains to the west would be 5 hours' drive without traffic, and the even darker blue areas well to the north of Tokyo would be closer to 6 hours' drive.

Turns out my wife and I have somewhat different goals; I am getting into deep-sky photography, wanting several hours of perfectly clear skies and unobstructed views. My wife wants nice foregrounds in front of dramatic skies; the Milky Way is good, but some clouds at sunset or sunrise are even better. She also likes shooting at the beach, even with her tripod standing in the surf. (Yeah, she's hard on equipment; she sends her DSLR bodies, and occasionally lenses, for professional cleaning when needed.) Naturally, anyone with telescope optics and mechanics will be horrified at the thought of taking them where salt, sand and moisture are. Some of these spots have both beach access and a good spot on a high bluff, fairly safe from such concerns. A few of these are well up in the mountains.

This lists sites in Chiba, Kanagawa, Shizuoka, Yamanashi, Ibaraki, and "other", in order. At the bottom, you'll also find a list of other tools & websites I use.

This posting is progressively updated. Check back occasionally for new sites & new info about old sites.

Monday, September 13, 2021

Ranking the Star Trek: The Original Series episodes

Inspired by the 55th anniversary of the first broadcast, I'm going back and watching Kirk and company, more or less in order but with a little bit of skipping around. Watching them from the perspective of 2021, the most egregious thing is not the effects (some of which have been upgraded anyway, in the Netflix version), or the simplified plots or retro future tech, or even race relations; it's the gender roles and outright sexism. I'm sure having women Starfleet officers was very progressive for 1966, and it is true that there will be a certain amount of sexual tension in any crew (even a single-gender one), but it's pretty blatant.

On the other hand, if you like looking at 1960s style beauty in stunning costumes, it's definitely a bonanza.

Let's divide the original three seasons up into half-season blocks and rank them separately, see what we get.

I'm just going to post this and update it ad hoc as I watch more episodes.

The "Worth Watching" List

It's tempting to try to fashion some sort of order out of this list, either chronological by broadcast date, or so that it makes some sort of actual story arc, but by and large the episodes are fully independent and there is little growth or change among the characters over the three years. So, this is just a list of the ones I consider to be worth watching. You can pretty much discard everything that isn't on this list, except for a couple that are iconic in some way but don't make my own quality threshold. This is numbered bottom to top, with the best episode listed last.

  1. Let that be Your Last Battlefield (S3E15)
  2. A Piece of the Action (S2E17)
  3. The Doomsday Machine (S2E6)
  4. A Taste of Armageddon (S1E24)
  5. The Ultimate Computer (S2E24)
  6. Arena (S1E19)
  7. Is There In Truth No Beauty? (S3E5)
  8. All Our Yesterdays (S3E23)
  9. Elaan of Troyius (S3E13)
  10. Mirror, Mirror (S2E4)
  11. Space Seed (S1E23)
  12. Tomorrow is Yesterday (S1E20)
  13. Balance of Terror (S1E15)
  14. Devil in the Dark (S1E26)
  15. What are Little Girls Made of? (S1E8)
  16. The Menagerie (S1E12 & 13)
  17. The Enterprise Incident (S3E2)
  18. The Trouble with Tribbles (S2E15)
  19. Errand of Mercy (S1E27)
  20. Amok Time (S2E1)
  21. Journey to Babel (S2E10)
  22. The City on the Edge of Forever (S1E29)

Season 1, first half

In ranked order, with broadcast order in parentheses (following the Netflix counting of "The Cage" as #1 and the first regularly scheduled broadcast of "The Man Trap" as #2). The first half ends with "Balance of Terror". The top three here are classics, IMO; after that, it drops off kind of quickly, but only "Mudd's Women" would I call actively bad. On the whole, the writers, production team, directors and actors really hit the ground running in this first half year, but I suppose two years of gestation helped.
  1. "The Menagerie" (12 & 13): Wow, this is better than I had remembered, one of the best episodes of all, in my current judgment. It's better as "The Menagerie" than as "The Cage", with the wrapper meta-story, but hard to believe the studio execs didn't just fall all over themselves getting this launched after the first pilot. Loyalty on trial, and important questions about what drives us as humans. Will we lose our will when illusion takes over?  (Today, there are those who claim that the Internet and smartphones are "robbing us of our boredom," and that's a solid concern, IMO.)
    Really glad the Enterprise tech got a facelift from its 1950s look to the 1960s look of the series in full gear, but it's interesting that the transporter is 100% the same. Of course, there is a gratuitously good-looking officer on the starbase for Kirk to ogle (complete with seductive music; the only adjective here is "lovely"), entirely aside from Pike's green alien dancer. As much as you gotta love Kirk, Pike would have made a great captain, and Number One should have stayed. Pike's wheelchair and communication tech were surpassed by the end of the 20th century for Hawking and others who had little more communication capability than moving their eyes, but the point stands. Also glad they dumped running a starship with paper and clipboards!
  2. "What are Little Girls Made of?" (8): An episode I had largely overlooked before. Are our petty jealousies and flaws a product of our organic bodies, or would they be the same in an android? More than a little iffy on what "programming" an android imprinted from a sentient being means, but asks interesting questions. Christine made the tough choice to break off an engagement to pursue a Starfleet career, a pretty progressive move for the day. And who doesn't love Lurch? Not the first and certainly not the last dying/dead civilization to be explored by a guest star, then left behind without a further thought as the Enterprise warps off to another adventure, though.
  3. "Balance of Terror" (15): Peace through strength, very Cold War. Honorable people fulfilling their duty on both sides of a conflict can still result in waste of life, and war. Prejudice based on appearance is, well, a bad thing. And love, and loss, happen under many circumstances. This is by far the most space opera-y episode of the first half season, with "Run Silent, Run Deep"-style cat-and-mouse starship-to-starship hunting. Electromagnetic signals, surely, but I'm a bit dubious about the need to work quietly! A great episode, even if the ending is inconclusive. Going in, I was expecting this to be my top episode for this half year, but the ending robs it of first place.
  4. "Charlie X" (3): Teenage angst and self control, Uhura ad libbing a funny song about Spock, 3-D chess, what more can you ask? The first time, but not the last, we encounter an apparently superior race who then inscrutably leaves without us even getting a chance to ask their names -- and we seem totally unworried about that. Not wild about the ending, this one leaves me uneasy, which is a good thing.
  5. "The Enemy Within" (6): The dubious plot device of the transporter dividing based on personality aside, a solid episode. We need our yin and our yang to be whole.
  6. "The Man Trap" (2): Not as chauvinistic as the title suggests. One of several in this first half season where illusions, mind control, ESP, or telekinesis plays a big role. What is it that makes us happy? First redshirt to die, in the very first regular broadcast episode, and we have established a paradigm.
  7. "Miri" (9): A solid episode. A human attempt to live forever has intergenerational consequences, and nearly takes out Kirk, McCoy, Rand and Spock, too. This one (as with many of the episodes, both good and bad) doesn't really need a starship; it's SF, but could take place anywhere. But the timeline doesn't really make sense -- how did they get there three centuries ago? And once again we warp away, leaving behind a live community who could really use our help.
  8. "The Naked Time" (5): It takes a contrived plot device, but we get to learn about the innermost thoughts of the crew. Sulu's stripped-to-the-waist swashbuckling is the most memorable bit, but Christine's love of Spock and Spock's sometimes wobbly control of his emotions advance the characters the most. Kirk's iron will, sense of duty and love of the ship get him through it.
  9. "The Corbomite Maneuver" (11): My brain had this listed as dreadful, but it's not as bad as I remembered/feared. The first time we meet a (possibly) technologically superior species, get over an initial misunderstanding, and leave on mutually agreeable terms.
  10. "The Conscience of the King" (14): A pretty good human drama about how hard conditions and impossible choices can incite horrible, inhumane actions. This one doesn't need starships.
  11. "Where No Man Has Gone Before" (4): Its biggest gift, of course, is the title. Another telekinesis episode, with muddled reasoning for the sudden growth in powers of a character or two, but an interesting question about how we will deal with ourselves when we start to outgrow these bodies -- from both sides of that issue. Also, a barrier at the edge of the galaxy? Really?
  12. "Dagger of the Mind" (10): The first of many geniuses who advance Mankind, then go wrong later in life. Establishes a precedent of Kirk not asking anyone else to do something he wouldn't try first, but is sitting down in a brain ray chair you suspect damages minds really a good idea?
  13. "Mudd's Women" (7): All the good stuff is in the last two minutes. Otherwise, c'mon, man, smuggling brides to male-only mining outposts in the 23rd century and controlling women by controlling their access to a "Venus drug" beauty enhancer? And a lot of "hubba! hubba!" from the crew. Umph. Is this our first reference to Kirk being married to the Enterprise?

Season 1, Second Half

Lots to look forward to. Through my rose-tinted glasses, "A Taste of Armageddon", "Devil in the Dark", and "Arena" are all great episodes, leading up to "City on the Edge of Forever" (not only inarguably the best episode, it's definitely got the best title). Hoping they have aged well.
  1. "City on the Edge of Forever" (29): Accept no substitutes. The finest episode in all the ST universe. And only nine weeks earlier, "Tomorrow is Yesterday" showed that time travel could be treated both relatively rigorously and interestingly, and yet here CotEoF blows it out of the water.
  2. "Errand of Mercy" (27): Are we really as different from the Klingons as we think? Non-corporeal, powerful aliens solve the ultimate plot dilemma for the episode, and save us from having a perennial hot war with the Klingons.
  3. "Devil in the Dark" (26): One of my favorite episodes: will we recognize other life, and other intelligence, when we find it? (We'll leave aside the Class M Planet bipedal species, 1.5-2m tall, with eyes, ears, a mouth, favoring N2-O2 atmosphere, that seem to keep popping up in ST:TOS.) How will we communicate with it? (Well, that one is kind of finessed in this episode.) Will we be able to establish (in Kirk's own words) a modus vivendi? Cheesy 1960s "monster"/alien "effects" aside, this one would be fun to revisit later, to learn about the Horta's society. And man, for something made out of silicon, the body part that gets phasered off the Horta is awfully light!
  4. "Tomorrow is Yesterday" (20): Solid time travel paradox. Established that gravity + warp = time travel, a device we will use again in movies and other series.
  5. "Space Seed" (23): An iconic episode, this gave us a look at 21st century history and it gave us the great Khan, the best human villain we get in TOS (and the movies). Hurt only by its innate chauvinism.
  6. "Arena" (19): A personal favorite, but would have been better with Fredric Brown's original (but probably unfilmable in 1966 and certainly not a sympathetic character) alien. The watered-down ending compared to Brown's original short story hurts a bit. A good chance to demonstrate some of the Federation's core principles.
  7. "A Taste of Armageddon" (24): One of the best episodes. If you sanitize it, is it still war? Aren't we supposed to be horrified, repulsed by war? Some pretty blatant ignoring of the Prime Directive, if you consider them to to be the kind of civilization not to be interfered with. Also, the issue of the U.S.S. Valiant's disappearance 50 years ago gets referred to, but just dropped as an issue.
  8. "This Side of Paradise" (25): Another episode with a bad rep in my memory, but turned out to be pretty good. The spores and the Bertholdt(?) rays are a bit contrived (especially the "we can fix your health" bit), but asking the question of whether humans must strive in order to be whole, to be human, is an eternal question. Answered in favor of striving rather than paradise here (spoiler alert! But did you expect different?), nothing super-original in thinking, but well plotted and executed. Far better than "Archons" (below), and an interesting comparison to "The Enemy Within" (above) in what makes us human.
  9. "Court Martial" (21): Solid. Can we trust data just because it's recorded? Will a person really hold a grudge serious enough to fake their own death to sabotage another's career? There is a hint that Riley's daughter learns he is still alive, but that's never pursued. Perhaps it's continuity issues, but it feels to me like this one (and several other episodes) had scenes that were written and either never filmed or cut from the final episode for time or other reasons.
  10. "The Alternative Factor" (28): The idea of alternate universes in and of itself was probably a fresh concept, but this episode has some holes and doesn't really address the core issues of the multiverse very clearly. And that's a heckuva...UFOy spaceship. Not bad, not good, mostly due to poor execution of a solid idea.
  11. "The Galileo Seven" (17): This one seems to be ranked highly in a lot of polls, but I found it awfully blunt. A test of Spock's logic as a method of command could be really interesting, but it's such a contrived plot, including unseen giant natives with Earth-like simple spears. And would you really have three of the top four officers on one shuttle that is nominally out on a data-gathering mission? To me, this feels like a script written by a young fan, rather than a mature writer in the full swing of Trek. (n.b.: Some of the fan fiction exceeds the original in depth, originality and maturity!)
  12. "The Squire of Gothos" (18): One of the more memorable "encounter with a god-like entity" episodes, but in this case a petulant child with a silly view of Earth and humanity. Don't think too hard about this one.
  13. "Shore Leave" (16): An occasional light episode is fine, but this is just silly. Not a good start for the 2nd half of Season One. Another corporeal species apparently advanced compared to us, but not interested in conquest. Leave them and warp away, without trying to establish an embassy!
  14. "Operation -- Annihilate!" (30): Encounter with perhaps the most alien species in Season One, but we just kill it then get outta there. Also sets the record for cheesiest practical effects.
  15. "Return of the Archons" (24): This one's just a muddled mess. Too many things going on. The "Festival" is never really explained, nor is anything about the 6,000 year old technology. Why the town looks like the late 19th century in the U.S. is baffling, and we don't get any sort of justification even for how the Enterprise crew knew how to appear in period costume. The weapon tubes used by the lawgivers are examined once and shown to be nothing but empty tubes, but that's never pursued. How people not of "The Body" are detected isn't discussed. If everyone is part of The Body, why are the lawgivers needed at all? And, most of all, all signs point to the planet's residents being human. If so, how did they get there 6,000 years ago, and why would there be any parallel at all with Earth civilizations? Yet another episode in which the Enterprise is investigating a missing starship, then just warps away without really completing that investigation.

Season 2, First Half

  1. "Journey to Babel" (10): One of the very best episodes, thanks to D.C. Fontana's rigorous and compassionate writing. I might place this behind "City on the Edge of Forever" as second-best episode overall. Diplomacy and intrigue, this one could be a Mediterranean or European council just as easily as Federation.
  2. "Amok Time" (1): Even better than I remembered. Makes up for Sturgeon's silliness in "Shore Leave". I wonder how Spock later explained to T'Pau that she had been snookered, though? A couple of things are...illogical, but the look at Vulcan is great, even if the culture does kind of resemble a mishmash of Asian tropes.
  3. "Mirror, Mirror" (4): an iconic episode, using parallel universes to ask if we are really as pacifist and advanced as we think. Echoes episodes from Season 1, examining our inner selves, but perhaps done best here. Don't think too hard about the parallel universes, though.
  4. "The Doomsday Machine" (6): Real drama, and an interesting take on how we will react when we run into a mindless machine that has only finding more energy for itself as a goal.
  5. "The Changeling" (3): A largely forgettable episode, but it planted the seed for ST:TMP, and so is logically necessary.
  6. "Obsession": More energy beings with unclear capabilities and limitations. You'd think Starfleet would invest some serious effort in understanding these special effects types of sentient beings.
  7. "Friday's Child": Once again, fighting Klingons in a proxy Cold War. Silly costumes and Kirk et al. get trapped a little too easily, but not so bad. And this time the aliens don't pull a deus ex machina on us.
  8. "The Deadly Years": Weak SF, good drama, although at my age now they don't look as old as they once did!
  9. "Metamorphosis" (9): Love comes in many forms. Once again an alien without a true body but many powers.
  10. "I, Mudd" (8): Way better than "Mudd's Women", but still borderline silly. So many improbable or implausible plot elements, and terrible system design in the android distributed control and logic systems. "Haaaarcourt! Harcourt Fenton Mudd!!!" is iconic, but not necessarily for good reasons.
  11. "Catspaw" (7): Just silly. Nudibranch-like aliens manage to stop the Enterprise, take human form, and find our cultural spooky memories by accident (why are they purely European tropes such as iron maidens and black cats?). And once again we endanger the entire executive leadership of the Enterprise. The limits to the powers of the aliens are, as almost always, unclear.
  12. "Who Mourns for Adonais?" (2): This is dreadful, which is really a shame since the core ideas are interesting. What if the ancient Earth gods were space travellers? Do gods exist without people to worship them? (Shades of American Gods?) Should one episode really be trying to answer both questions?  They seem like pretty separate incidents/questions to me. At least this time Lieutenant Palamas, who falls for the hunky space god, gets to have a spine and do her duty for her ship, unlike Lt. McGivers, who falls for the hunky fascist in "Space Seed" and trots off to colonize a planet with him. But there's still a lot of 1960s gender roles baked into this one.
    This "advanced aliens can control anything with their minds" trope certainly wears thin. And does this dude have a real body, or not?
    It's a little too much "Squire of Gothos meets Space Seed", though. A few lines of dialog are thought-provoking. Almost got away with saying, "We no longer have need of gods," without any qualifications! But I'm guessing the Mike Pences on NBC's censorship committee forced the addition of "The one we have is enough," with respect to gods. 
  13. "The Apple" (5): Among the worst episodes, with white/orange primitives who bow down to a local machine god that controls them entirely but also keeps them completely healthy. Very little about this makes sense, and it is essentially a white-people-save-the-natives-from-their-own-superstitions schtick.

Season 2, Second Half

  1. "The Trouble with Tribbles" (15): Pure fun. Tribbles, Klingons, and a bar fight over an engineering insult, what more can you ask?
  2. "The Ultimate Computer" (24): Themes that will echo for time to come. Can our technology replace us? Should we risk human lives if we can risk a machine instead? Are we doomed to transmit our own flaws to our technological offspring? To me, this is a great episode.
  3. "A Piece of the Action" (17): Implausible, but the best, funniest romp short of "The Trouble with Tribbles".  And fizzbin!
  4. "A Private Little War": Kirk and the Klingons in a Cold War parable about proxy wars and arming the natives. Very dated, but overall maybe not too bad. But did we really need it, after "Friday's Child"?
  5. "Return to Tomorrow": Some food for thought here. A handful of minds preserved for eons in noncorporeal contraptions, wanting to get back into humanoid bodies. A lot of implausibilities in the plot, but would we carry our petty vendettas with us to eternity? You betcha.
  6. "Wolf in the Fold": Meh. But Scotty always deserves more screen time, and he gets it here, even if it's not in Engineering.
  7. "Patterns of Force": Correcting interference in a civilization by another Starfleet officer, in violation of the Prime Directive. Not plausible, not fun, and not especially creative.
  8. "Assignment: Earth" (26): The Federation thinks it's a good idea to send a starship into the past, possibly risking realigning all of history? I don't think so. And even in 1968, maybe especially in 1968, would people have failed to recognize a Saturn V? But unlike some other pundits, I think Gary Seven could have been a stylish Mod Squad-era show in its own right, it's just that Roddenberry shouldn't have shoehorned it into Star Trek.
  9. "The Immunity Syndrome": I'm writing this a few weeks after watching it, and I no longer remember it. Bad, but forgettably so.
  10. "By Any Other Name": They came all the way from the Andromeda galaxy just to swipe human form and a starship, hoping to swipe some actual planets? Among the worst SF in the lot, if not as outright awful on the drama.
  11. "The Gamesters of Triskelion" (16): No. Just no.
  12. "Bread and Circuses" (25): Can we stop with the almost-parallel evolution of planets to Earth? Please???
  13. "The Omega Glory" (23): Can we stop with the almost-parallel evolution of planets to Earth? Please???

Season 3, First Half

  1. "The Enterprise Incident" (2): Wow! I had never seen this one before. Tension, drama, sexy Romulans, and Kirk as a Romulan.  Has he lost his senses, or worse? Has Spock betrayed the Federation?  One of the top episodes, IMO.
  2. "Is There in Truth No Beauty?" (5): A solid episode with some awkward moments. And that blasted, silly barrier at the edge of the galaxy. One of the most truly alien aliens we encounter, but we are never given an actual look at them.
  3. "The Tholian Web" (9): If you ignore some plot holes, not too bad.
  4. "Wink of an Eye" (11): Interesting premise, with some weaknesses. I liked this one as a kid -- accelerated people! 
  5. "Day of the Dove" (7): Maybe the weakest of the Klingon episodes, due to the contrived "energy being" that feeds on hatred (such a gimmick). I appreciate the difficulties in conceiving and portraying non-humanoid aliens, but these sparkly clouds that can just walk through walls and exist in space are both scientifically dubious and have such powers that (like writing for superheroes) working around them is tricky. I would not say this episode particularly succeeded.
  6. "The Paradise Syndrome" (3): Awkward representation of Native American culture. And why the heck would they be way out here, anyway?  Yet another episode of simple people beholden to a machine created by some ancients. At least it's better than "The Apple".
  7. "For the World is Hollow and I Have Touched the Sky" (8): A simple people beholden to a machine created by some ancients! Where have I heard that before...? Bad, but not, like, memorably bad, best simply forgotten. I do love the title, though.
  8. "The Empath" (12): Another race captures people from multiple planets and tortures them for fun, and Kirk, Spock and McCoy are next. Well, turns out they are being tortured to put an empath to the test; if she would sacrifice herself to save them, then her entire species wins. Yes, that's as bad as it sounds.
  9. "Spectre of the Gun" (6): Well, it's better than "Spock's Brain", and has some humor, but the fundamental premise makes zero sense. Was this just because Roddenberry wanted to film a Western?
  10. "Spock's Brain" (1): Nothing about this makes any sense.  Widely regarded as one of, if not the, worst episodes, its only saving grace to me is that has less awkward racism and sexism than the ones below it here.
  11. "And the Children Shall Lead" (4): In Den of Geeks' phrase, an "angel" in a shower curtain. Just bad all the way around, as both SF and drama. Match this with "The Way to Eden", and boy, you've got bad acting and bad plot taking over the Enterprise.
  12. "Plato's Stepchildren" (10): This is dreadful. Painful to watch. Nothing at all in it makes any sense.

Season 3, Second Half

I gotta say, by the time I got here, this was beginning to feel like a slog. NBC arguably did Star Trek a favor by cancelling it after season three. It sure feels like they ran out of gas in the Story Idea Department minivan.
  1. "Elaan of Troyius" (13): Kirk plays diplomat and disciplinarian, while using his love of the ship as an antidote to an aphrodisiac.
  2. "All Our Yesterdays" (23): Another episode I had never seen before. The atavachron reeks of the Guardian from "The City on the Edge of Forever", but this is a solid, emotionally resonant episode that's also reasonable SF, a too-rare combination over the three seasons. Perfect hair, makeup, and revealing Raquel Welch-style animal skin outfit for Spock's love interest aside, of course. This would have been a pretty good place to go out.
  3. "Let that be Your Last Battlefield" (15): A heavy-handed morality play on the ridiculousness of racism; did NBC not get that, or were they okay with such an overt political message by this point in the game? At core a good episode, if a bit over the top, but yet again hurt by a race of beings with telekinesis whose powers serve as a plot gimmick. What are the limits to their power? Not clear, again. It's also not clear why Bele has this power but Lokai doesn't. And 50,000 years? Really? That would, I think, make them among the very oldest beings encountered anywhere in the series. Echoes of Season One's "The Alternative Factor"; better drama, writing and execution, if less solid/interesting SF. For that matter, this is another episode that didn't really need a spaceship, though since the nonhuman aspects of the aliens are important, it definitely is SF.
  4. "The Cloud Minders" (21): Another episode I had never seen before. I had no idea that we had a cloud city in ST:TOS. Also another heavy-handed morality play on the ridiculousness of racism (done just weeks before in "Last Battlefield"), this time with live civilizations and another beautiful young woman in an amazing costume who is attracted to Spock (this time, with no reciprocation).
  5. "Requiem for Methuselah" (19): Not bad, but maybe forgettable. Well, Kirk falling that hard for Rayna in two hours is pretty over the top, but compared to some of the other things in Season 3 it almost goes unnoticed. Flint is only 6,000 years old, so a youngster compared to Bele and Lokai. This "immortal guy who was Leonardo and other interesting people back in the day" schtick feels old, almost trite, but I'm not enough of an SF historian to tell you where it comes from; it's possible this is a fairly early use of it.
  6. "The Lights of Zetar" (18): Unprofessional of Scotty to fall so hard for a young lieutenant, and the spirits of other beings wandering the galaxy at warp speed is pretty dreadful, but for all that it produces good tension. Oh, and the United Federation of Planets can't afford to make a backup copy of Wikipedia?
  7. "The Savage Curtain" (22): Another episode I had never seen, this is where Kirk meets Abe Lincoln and Spock meets Surak. But they (along with a 21st century tyrant, a Klingon, and a savage woman and Genghis Khan, the latter two of whom get no lines) are artificial constructs of lava-based aliens who can read our minds. And despite the fact that the aliens can read our minds, they still want us to fight it out, good versus evil, as humanoid species? A Roddenberry story idea, but this one's pretty bad.
  8. "Whom Gods Destroy" (14): Well, maybe the best thing that can be said for this is that it's not "Plato's Stepchildren", but there is a level of ridiculousness here. By today's standards, not a particularly compassionate or insightful look into mental illness. Doesn't improve on "Dagger of the Mind", which it echoes.
  9. "That Which Survives" (17): Androids in purple "I Dream of Jeannie" outfits can read human minds but have the job of killing specific humans to protect a ghost ship/planetoid where all the people died a long time ago and some automaton still runs things. Bad SF, which would be a shame since encountering dead/dying civilizations and galactic archeology are a great theme, but since we do it so often this one can just be discarded.
  10. "The Way to Eden" (20): There were a couple of brief moments where I thought, "Maybe this isn't as bad as its reputation," but no, it's definitely bad. By far the most 1960s of the whole series, with an ironclad "Anti-establishment people can't survive in the real world," message to it. It's kind of a shame -- no, make that a real shame -- since a look into a sub-community of people who were dissatisfied with life in the Federation would have made for a fascinating topic. It just couldn't see past the counterculture of the 1960s itself.
  11. "Turnabout Intruder" (24): I debated whether this is better or worse than the couple above and the one below, but they are all awful, so it doesn't much matter. The random alien technology that transplants human personalities successfully drags this down, although not as much as the implausible story. Would a former lover really try to lure Kirk across the galaxy and think she could get away with replacing him? Although I suppose various imposters have been an important theme of literature since time immemorial, this is still almost impossibly bad.
  12. "The Mark of Gideon" (16): The start of this is illogical, but not dreadful, but by the time we get to the end the whole thing has fallen apart. People so crowded on the land surface that they have to keep walking, can't sit down or be alone? A replica of the Enterprise that Kirk couldn't tell wasn't the real thing?  Instead of just, well, capturing him? (We'll ignore that they insisted that the captain come alone -- that's never a red flag.) How did they get all the information to make such a perfect replica? Oh, never mind, there are a dozen things about this one that are equally bad.

Wednesday, July 21, 2021

Spelunking CACM, vol. 5 (1962)

For 1962, I considered choices such as an early Knuth paper, on tricks for making evaluation of polynomials more efficient, an early (but not the first) paper on theorem proving machines, a description of an event for high schoolers that points out that there were already 8,000 computers and 30,000 professionals in the country (sadly, the article has no demographic info on attendees), and especially an early paper on multiprogramming from NASA (what's not to love about a paper that says, "Some educationally valuable mistakes were made"? It's instructive that it refers to "the interrupt feature", indicating its newness, but the modern term "interrupt service routine" was already in use.). In the end I settled on a notice about ACM's policy toward standardization. CACM already had a section on "Standards", edited by S. Gorn, but this notice is otherwise unsigned.

Early on, the paper makes three binary divisions: users v. "professional computer people", industrial v. theoretical (interestingly, not academic), and hardware v. software. This divides those with an interest into eight categories.

It points out the risks of too-early standardization. As a vendor, if you tie yourself to a standard too early, an innovative competitor can introduce something new, and your hands are tied.

I found this table intriguing. Likely you're vaguely aware of some of these organizations, but you may not realize how early and dynamic they were.  Keep in mind that this is a mere 17 years after the end of World War II, and yet Germany, Italy, the Netherlands, France, and Japan, who collectively suffered some of the worst devastation, are represented. (Russia, China, Poland and Belgium are listed, too, but don't seem to have entries, so I'm baffled as to why they are included.) (In the most breathtaking post-WWII recovery, just two years later Tokyo would host the Olympics and the first shinkansen line would open.)

It's also interesting that the table focuses on language; today, the UN's list of official languages notwithstanding, almost all international standardization work takes place in English, with a vestige still of French. I admit to being lucky to have been born in an English-speaking family at a time when it is the de facto language of science, technology and international commerce. Not so long ago, the choices would have been French (esp. for diplomacy) and German (science and technology).

Quoting from the paper:

The policy of ACM toward standardization is therefore the following:

1. It is extremely conservative as far as the development and promulgation of standards is concerned.

2. It is resistant toward precipitate standardization, specially in any area in which not enough is known to make such standardization theoretically sensible or stable.

3. It tends to be neutral in those areas where standardization is a matter of arbitrary selection, in spite of its recognition of the usefulness of such selection. That part of its membership which is vitally interested in such arbitrary selection is already represented in the industrial side of the activity.

On the positive side, the society is vitally interested in maintaining wide open channels of communications.

4. Thus it takes a positive interest in the stabilization of terminology, whether by reporting common usage or by declaring preferred usage (the normative function).

5. It is interested in the development of appropriate fundamental concepts, the establishment of the relationships among them, and in the quick dissemination of such developments.

6. Finally, it is interested in the development of standard methods of specification of processors, whether they be computers, programs or systems, of languages for such processors and of translation processors for such languages. Included in the methods of specification are methods of documentation for each type of audience or interest in the computer area.

Overall, the policy expresses some interest in standardization of systems, esp. programming languages, it seems, but little else. Almost sixty years later, we can see that indeed ACM, despite its importance in the computing ecosystem, has largely remained aloof from the issues of standardization, leaving that to ANSI, IEEE, FIPS, ISO, IETF, NBS/NIST et al.

Tuesday, July 06, 2021

Building a Raspberry Pi 4 MPI Cluster in 2021

We are using Ubuntu on Raspberry Pi 4 boards with 8GB RAM, 64GB flash drives, coupled using a cheap gigabit Ethernet switch. I wanted to use PoE (power over Ethernet), but that requires a "hat", an additional daughter board to extract the power, and it's moderately expensive compared to the cost of ordinary power supplies. Moreover, PoE-capable switches are more expensive and a shade harder to get ahold of. (Apologies for the mess in the photo, we should straighten that out. We also don't yet have a permanent location for this. Another group in our lab 3-D printed holders and a 19" rack mount frame for theirs, but we haven't gotten that far yet.)

Getting it all running was rather a pain. Here were our pain points and some advice:

  • We accidentally installed ARMv7 (32-bit) Ubuntu on some machines and ARM64 on others. This problem won't become apparent until you compile and run your own code on multiple nodes via MPI, at which point it will tell you "Exec format error," and you'll have to go back and make all the nodes agree on architecture & chipset support. This was the last major problem we had to solve, but I mention it first since it's one you want to get right up front. All things being equal, unless you're creating a mixed cluster with older hardware, you probably want the 64-bit installation.
  • My first mistake was mixing installs of MPICH and OpenMPI. They are two separate implementations of MPI. Either is apparently fine, but don't mix them. If you just do
    sudo apt install mpi
    you will get OpenMPI. It doesn't include the headers and development tools, so you won't be able to compile. You also need the package mpi-default-dev. (See the consolidated install command after this list.)
  • You need openssh-server, but that's usually included in a default Ubuntu setup. You'll likely also need to install gcc, make, git and gdb.
  • We're still tinkering with the best way to share setup info, including username databases and SSH keys for students and the like, but what we've settled on for the moment is Ansible, a popular networked systems management tool.
  • We set things up to share the executable via NFS. (We're not doing data-intensive stuff, just introductory programming exercises for now, so we're not sharing some major data farm.) Getting permissions right here took a little bit of work.
  • Our biggest pain point, which took the longest to solve, was getting the firewall settings right. Even though ompi_info tells me it's not compiled for IPv6, in fact the basic ssh that is used to initiate communications apparently runs over IPv6 anyway, if v6 is configured on our systems. Took us a couple of hours to figure this one out. Even when we briefly turned off the firewall entirely for debugging purposes, we were getting timeouts that baffled us. (ss was a big help here in figuring out what connections are trying to happen, but it takes a little grepping to sort the wheat from the chaff.) (And random, 35-year-long rant: what is it with UNIX folks and short commands/tool names? "ss"? What is that?!? At least "netstat" has some mnemonic relationship to what it does.)
    Also, the default setting for Ubuntu firewall is "all outbound traffic allowed, no inbound traffic allowed," so even if you think you have the firewall entirely off, that might not mean what you think it means!
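For reference, the standard ufw commands cover both checks above; the subnet here is a made-up example, so substitute your cluster's own addresses (and think before opening anything):

sudo ufw status verbose
# reports the default policy, e.g. "deny (incoming), allow (outgoing)"
sudo ufw allow from 192.168.1.0/24
# trust all traffic from the cluster subnet, including the random TCP ports OpenMPI picks
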
When your setup is close to working right, 

mpirun -np 2 --host raspi1,raspi2 hostname 

should print out the names of your hosts.  (Replace raspi1 and raspi2 with DNS names or IP addresses for your machines.) That just executes the command hostname on the remote host, showing that your communication is working. Since each machine already has that command on it, it won't reveal the first problem above, the ARMv7/ARM64 issue; for that, you need to compile and run a binary of your own, as below.
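If you want a sanity check that also exercises the compiler and the architecture question, a minimal MPI program like this one (my own throwaway example) will do:

    /* hello_mpi.c: prints one line per rank. Unlike the hostname test,
       this must be compiled, so running it across nodes will expose an
       ARMv7/ARM64 mismatch as an "Exec format error". */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char name[MPI_MAX_PROCESSOR_NAME];
        int rank, size, len;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(name, &len);
        printf("rank %d of %d on %s\n", rank, size, name);
        MPI_Finalize();
        return 0;
    }

Compile it with mpicc hello_mpi.c -o hello_mpi (the binary needs to be at the same path on every node, which the NFS share takes care of), then

mpirun -np 2 --host raspi1,raspi2 ./hello_mpi

should print one line per rank.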
 
That's just some quick notes in case you're running into similar problems. I'll try to flesh this out later.

Sunday, June 20, 2021

Spelunking CACM, vol. 4 (1961): Soviet cybernetics and computer sciences, 1960

The January 1961 issue is dedicated mostly to compiler-related issues, especially ALGOL, though there are some articles on arithmetic and one on digital computers in universities; other issues from the same year include work on error correcting codes. One intriguing article discusses mathematical models for documentation and search. Algorithms are published en masse, with little commentary. Most are short subroutines for calculating mathematical functions. The July issue includes what might be the first publication of quicksort; I'm not enough of a historian of algorithms to say whether it's new here, or just published for the record. But the description is, um, terse:

[Image: scan of the quicksort description as published]

That's it, that's the whole thing. No, I can't read it, either, and I think I know how quicksort works.

CACM, by now, features black and white photos on the cover. By 1961, we can say that CS research and the operation of ACM are in full swing. There is even a letters to the editor section; one February letter discusses an earlier article on multi-processing (contrasted, correctly, with multi-programming). There are a few women authors; Lynn could be either a man or a woman, but Joyce, Judith, Mary and Patty are unlikely to be men. (A noticeable number of authors use only initials, as well.) The names Wilkes, Hoare and Dijkstra flit past; and, for the first time, I spot Knuth's name as an author.

But one article in particular caught my eye.

Edward A. Feigenbaum, one of the founders of GOFAI, already a Berkeley professor at the time, visited the Soviet Union in 1960, and had some things to say about the state of their computing (and their ability or willingness to run an interesting conference). Interestingly, in 1982, Ed would be one of the prominent senior foreign guests at the first Fifth Generation computer conference held by ICOT in Japan.

In the article, "Soviet Cybernetics and Computer Sciences, 1960", Feigenbaum does quite a bit of complaining about the Soviets as hosts. The report is long and detailed (14 pages of 3-column text, no figures), covering his attendance as a delegate to the First International Congress of the International Federation of Automatic Control.  If you read Russian (or use a translator), you can find a report at http://www.mathnet.ru/.

Feigenbaum objected to the style of the conference, referring to the "tedium" of each paper being followed by an extensive "discussion" that amounted to a further clarification or rebuttal of the paper. Here, again, he complained about the erratic performance of the translators.

For Feigenbaum, and probably for his audience, the most interesting part was not the 400 papers presented by 1,200 delegates, but the individual visits he managed to make, seeing some Russians he already knew by name. However, he was stymied in his attempts to see others, and some of the ones he did get to meet offered him no interesting information. But he did manage to find some people working on speech, automated translation, brain simulation, and other AIish topics as well as mathematical computation.

He actually learned quite a bit about some of the computers themselves, including which ones were mature enough to handle a true compiler for a language.

He described the chess machine (by which I think he means JOHNNIAC running the Newell-Shaw-Simon program; he refers to the machine as "antediluvian") and the geometry machine (Gelernter at IBM) then under development. Apparently, the optimists at the time believed that a chess machine would be (world?) champion by 1970, and that machines would be proving new mathematical theorems by then as well.  The Soviets seemed to concur with that timeline, but were amazed that such impractical research was "allowed" in the U.S., and might even be conducted by capitalist corporations. Feigenbaum explained how foundations, corporations and the government support research, and the Soviets were reportedly impressed.

In the end, Feigenbaum concluded that 

I concur with the opinion of most U. S. computer scientists who have visited Russia that at present the United States has a definite lead over the Soviet Union in the design and production of computing machines, but that there is no gap in fundamental ideas, with the possible exception of the production of reliable transistors. With the importance of computers to modern science and technology, there is no doubt that fairly soon the Soviet Union will be producing as many computers as we do. To what extent they will utilize these computers effectively, and in what new ways, I have no immediate answer[.]

Of course, we now know that reliable translation has taken a further sixty years already, and performance is still spotty. I sometimes wonder how much of the complexity of such problems people like Feigenbaum accurately anticipated.

Monday, May 31, 2021

Spelunking CACM, vol. 3 (1960): Automatic Graders for Programming Classes

This one boggles my mind. In the October 1960 issue of Communications of the ACM, Jack Hollingsworth of the Rensselaer Polytechnic Institute Computer Laboratory published a paper titled, "Automatic Graders for Programming Classes".

This was on an IBM 650 computer. The 650 has a drum for temporary storage, and input and output are via punched cards. The grader program itself functioned as an executive of sorts: loading a student's already-compiled program from a card deck, setting up input data, running the program, comparing the output to an expected value, and aborting by punching a card indicating the error if the output didn't match. The grader is remarkably sophisticated; it can handle multiple independent assignments in a single run, using different card decks for the inputs and expected outputs.
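Just for fun, here's a loose, modernized sketch of that executive loop in C. All the names and structure here are my own invention, of course; the original was a card-deck executive on the 650, not C:

    #include <stdio.h>
    #include <string.h>

    /* Stand-in for running the student's already-compiled program on one
       staged input; the real grader loaded a card deck and executed it. */
    static void run_student_program(const char *input, char *output, size_t n)
    {
        snprintf(output, n, "%s", input);   /* pretend the program echoes its input */
    }

    /* Grade one assignment: run, compare to the expected output, and
       "punch an error card" (here, print a line) on mismatch. */
    static int grade_one(const char *name, const char *input, const char *expected)
    {
        char output[80];
        run_student_program(input, output, sizeof output);
        if (strcmp(output, expected) != 0) {
            printf("ERROR CARD %s: got \"%s\", expected \"%s\"\n", name, output, expected);
            return 1;
        }
        printf("%s: OK\n", name);
        return 0;
    }

    int main(void)
    {
        /* Multiple independent assignments in a single run, as in the paper. */
        int failures = 0;
        failures += grade_one("assignment-1", "42", "42");
        failures += grade_one("assignment-2", "3.14", "2.72");
        return failures ? 1 : 0;
    }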

They used this grader in a class of over 80 programming students. The article doesn't say if any of the students were women, but RPI already had a handful of women students at the time, so it's possible. Two machine operators are mentioned by name in the acknowledgments, both women; it's likely that they had a very high degree of technical skill in operating the machine and possibly in programming it.

"In general only an eighth as much computer time is required when the grader is used as is required when each student is expected to run his own program, probably less than a third as much staff time, and considerably less student time." That was very important in the 1950s, as machine time was an expensive and prized commodity.

The writing of the paper is a little rough; there's not much in the way of introduction, and it dives straight into the details of using the program.  We do learn that the grader was first used fifteen months before the paper was written, so presumably in 1959, perhaps as early as 1958. Pseudocode is included.

Given that I still grade student programs by hand, I should probably take a lesson from some of the pioneers from before I was born, and learn to save myself some work!

Sunday, May 23, 2021

Spelunking CACM, vol. 2 (1959): Abstracts -- Nuclear Reactor Codes

Today's Communications of the ACM spelunking is a startling find from volume 2, 1959: Abstracts -- Nuclear Reactor Codes, attributed to "The Nuclear Codes Group, Virginia Nather and Ward Sangren", General Atomics. It's not clear to me if Virginia and Ward are members of the NCG who led this effort, or whether it's NCG AND Virginia and Ward.

This article is essentially a list of known programs used in the design and simulation of nuclear reactors. For each one, it lists the authors, status/availability, what problem it solves (most in words, some with accompanying differential equations), estimated run time (in hours and minutes, not big-O notation; big-O has existed since the 1890s, but wasn't common in computer science until Knuth-sensei made it so in the 1970s), limitations and some comments.

This article describes 239, yes, two hundred thirty-nine programs used in nuclear reactor design -- in 1959! There is an additional page listing several dozen more that they didn't fully catalog! And this wasn't even the first such list; that dates to 1955, according to the authors. Given that ENIAC was completed in 1945, the first IBM 650 was installed at the end of 1954 and the first IBM 704 in 1955, I am astounded at how quickly codes for this purpose proliferated. On the other hand, building reactors was one of the preeminent science and engineering problems of the day, so I suppose I shouldn't be.

The authors worried a bit that their list was dominated by codes for the 650 and 704; did that mean they were missing other important ones? Interpreting the performance of the 650 in modern terms is a little difficult, but the 704 could perform 12,000 floating point operations per second, several orders of magnitude faster than a human and incredibly valuable to calculation-dependent teams. A few programs ran in seconds; most list fractions of hours up to a few hours. The 704 codes seem to mostly run in minutes, so presumably represent hundreds of thousands up to low millions of floating point operations, taking into account that I/O is a big fraction of running time.
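(For scale: at 12,000 floating point operations per second, one minute of 704 time is 720,000 operations, so a run of a few minutes is already in the low millions.)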

They list 33 different organizations/laboratories where these codes were known to be running. That means the mean laboratory created about seven programs (239/33 is a bit over 7), which I suppose is reasonable. (I didn't try to assess the distribution.)

The authors categorize programs in the following way:

  • Burnup -- "dealing with decay and fuel or poison depletion"
  • Engineering -- "involving non-nuclear calculations such as heat transfer or stress analysis"
  • Group Diffusion -- diffusion theory approximations, which they further divide into three-dimensional, two-dimensional, one-dimensional (there are a lot of these!), and control rod calculations
  • Kinetics -- "concerned with reactor startup and sudden changes in reactivity"
  • Miscellaneous -- curve fitting, etc.
  • Monte Carlo -- given that this is a technique and not an application, I'm not sure why it's categorized this way
  • Physics -- "any code involving nuclear physics calculations which is used for reactor design and does not appear in another category"
  • Transport -- "solving an approximation to the Boltzmann transport equation other than those under (G) [group diffusion]"

Wow...

Monday, May 17, 2021

Spelunking CACM, vol. 1 (1958): Accelerating Convergence of Iterative Processes

For our first spelunking expedition, I skimmed volume 1 of the Communications of the ACM.

The first thing that I noticed about the earliest issues is that numerical algorithms dominate: numeric integration, minimizing errors, etc. If you dig around, though, you'll find a smattering of topics such as binary search in tables, as well.

The one that caught my eye this morning is Accelerating Convergence of Iterative Processes, by J. H. Wegstein of the U.S. National Bureau of Standards.  (ACM began as an American organization; I don't know when the first articles from abroad appeared in CACM, nor when it began to really view itself as an international organization.)

The first thing you notice, of course, is the layout and typesetting. It's almost a newsletter format: single column, not a lot of boilerplate or fanciness. That said, the typesetting is pretty solid, even if it now looks kind of dated. (One big problem is the middling quality of the scans, but given the volume of material that needs to be brought online, I'm forgiving on this.)

The figures appear to be hand drawn by a competent but not brilliant draftsperson.

The paper contains an acknowledgment that another researcher contributed Section 4 of the paper, so I wonder why he wasn't listed as an author. The main body of the text also includes a comment by the editor (Alan J. Perlis?) on a method for reducing the computational cost. The paper contains no references.

Okay, on to the content...

Iterative methods for finding roots of equations have been known for a long time; one famous example is Newton's method. They always depend on some assumptions about the nature of the function. In this paper, we are looking at roots of $F(x) = 0$. If the equation can be written as $x = f(x)$, then you're looking for a fixed point (a kind of eigenvector or steady-state solution, if you like), and it can be found by iterating $x_{n+1} = f(x_n)$.  If that form doesn't work, then you apply some factor $\Gamma$ by way of $x_{n+1} = x_n + \Gamma F(x_n)$.

The author examines several cases, where iteration causes values:

  1.  to oscillate and converge;
  2. to oscillate and diverge;
  3. to converge monotonically; or, finally,
  4. to diverge monotonically.
The main point seems to be an adaptive method for finding the factor $q$ in equations of the form
$\bar{x}_{n+1}=q x_{n}+(1-q) x_{n+1}$,
using the above equation for $x_{n+1}$.
The claim is that many (all? seems unlikely, but that's the way I read the text) equations can be transformed from diverging to converging, and the converging ones can be made to converge more rapidly. The overall process looks like this figure, where steps 1 & 3 are used only on the first iteration and the whole thing terminates when the absolute error drops below some pre-chosen threshold.

[Figure: flowchart of the iteration procedure]

The author then goes on to show some examples of convergence. It's not clear to me that this was actually programmed; the examples would be somewhat tedious but not too bad if done by hand, and I don't see any other indication one way or the other.
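Out of curiosity, here is the accelerated iteration as I read it from the formulas above, transcribed into C; treat it as a sketch of my interpretation, not a faithful rendering of the paper. The slope-based choice of $q$ from the last two iterates is the adaptive part:

    #include <stdio.h>
    #include <math.h>

    /* Example function: we seek the fixed point of x = cos(x). */
    static double f(double x) { return cos(x); }

    int main(void)
    {
        double x_prev = 0.5;        /* initial guess; the first step is plain iteration */
        double x = f(x_prev);
        const double eps = 1e-12;

        for (int n = 0; n < 100; n++) {
            double fx = f(x);
            /* Estimate the local slope from the last two iterates... */
            double s = (fx - f(x_prev)) / (x - x_prev);
            /* ...and choose q accordingly. */
            double q = s / (s - 1.0);
            double x_next = q * x + (1.0 - q) * fx;  /* xbar = q*x_n + (1-q)*f(x_n) */
            if (fabs(x_next - x) < eps) {
                printf("converged to %.12f in %d steps\n", x_next, n + 1);
                return 0;
            }
            x_prev = x;
            x = x_next;
        }
        printf("no convergence; last x = %.12f\n", x);
        return 1;
    }
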
Overall, a representative article from CACM's earliest years, and I think a successful spelunking expedition. I already have a couple of surprising articles lined up from 1959 and 1960, so stay tuned!

Sunday, May 16, 2021

Spelunking Communications of the ACM

 I'm feeling both a little random this morning, and very under-read as a general principle, so I'm going to start something.  We'll see how far it goes...

Communications of the ACM is the Association for Computing Machinery's flagship magazine. The modern instantiation is fabulous. Its history goes back to 1958 (some sources say 1957, but apparently v.1, no. 1 was Jan. 1958). It has evolved dramatically in its 63.5 years of existence, as the platform we take for granted has grown and matured.

Last night, I decided to go spelunking in the archives. This morning, I decided I'm going to try to review one paper from each year of CACM's existence. If I do one paper a week, this will take me a year and a quarter.  (But we know I'm easily distracted, so the challenge is, can I keep it up?)

I'll pick something, not at random, but not based on metrics such as whether a paper has been cited a lot or has a famous author, especially in the beginning.  It will just be something that catches my eye, and it will likely be something far from my own expertise, so there's a good chance my review will contain basic errors; please feel free to comment and correct, but not deride. After we get into the 1970s, we'll start to see more of the names I already know, and by the late 1980s, when I became an ACM member, very likely I'll pick some papers based simply on, "I remember reading that!" So, this is very much spelunking -- going into the dark, picking things up and examining them, tossing most of them back but finding a few gems.

So, starting this morning, I'm going to review a paper from 1958. Come along for the ride...

Thursday, May 13, 2021

A #QuantumInternet Architecture Position Paper

A #QuantumInternet Architecture Position Paper 

Rodney Van Meter 

2021/5/9

Okay, here's something I've been intending to write down for some years...a brief outline of my ideas for a Quantum Internet architecture, based both on our published works and some ideas that aren't yet published.  The tl;dr is

  1. the Quantum Recursive Network Architecture (QRNA),
  2. RuleSet-based connections,
  3. a two-pass connection setup mechanism,
  4. qDijkstra with seconds per Bell pair as link cost for routing, and
  5. ??? for multiplexing.

Of course, a lot of this is covered in my book (Quantum Networking, available in the ACM online learning center, I believe, though just now I couldn't find it).  But quite a bit about our ideas has evolved since I wrote the book, and it's good to summarize them anyway. Importantly, this is not a full survey of history or current thought; this is my idea for how things should go.  I suppose you could call this my #QuantumInternet #PositionPaper.

See the AQUA group publications page and my Google Scholar page for access to some of these papers and to others.

First off, of course, it's important to recognize that there will be an internetwork, a network of networks. http://dx.doi.org/10.1109/MNET.2012.6246754 or, for an unfancy copy,  https://aqua.sfc.wide.ad.jp/publications/van-meter-networking-review-preprint.pdf.

There will be more than one network architecture, no doubt; but to build a true Quantum Internet there will ultimately be only a single internetwork architecture.

There are a number of key design decisions that must be made:

  1. the nature of the fundamental service: Bell pairs?  measured-out classical bits?  qubit teleportation? multi-party graph states?
  2. how networks will come together to make an internetwork -- what is the nature of the interaction?  (affected strongly by connections, below)
  3. the multiplexing discipline for resources (n.b.: not for access to wavelengths; this is a higher-level, end-to-end concept): circuit switching?  time division muxing?  statistical muxing, as in the Internet?  buffer space muxing?
  4. nature of connections: entanglement swapping and purification (1G), or QEC (2G, 3G)? (affects internetworking)
  5. both of the above points affect whether a connection requires state at each repeater/router
  6. how connections are established
  7. how a path or route is chosen through the network
  8. all of the above affect choice of whether to try to do multipath for a single connection
  9. security for the network

There are more, but those are some of the critical ones.  For more on these kinds of issues (as well as a super-brief intro to quantum networking for those without the background), see our Internet Draft, hopefully to become an RFC soon.

On individual links, using memories at each end and photons to entangle, it seems pretty obvious to me that the fundamental primitive is the physical Bell pair (which we also call a "base entangled state").  Everything else builds on top of this.

However, that's made more complicated by the possibility of all-optical repeaters, an area we are currently researching. (See https://quantum-journal.org/papers/q-2021-02-15-397/ and work backwards from there.)  An especially tricky issue we are actively working on right now is how to terminate such connections and how to make them interoperate with other types of connections/nodes/networks.

I believe that end-to-end multiparty states (GHZ, W, graph states) are likely to be extremely valuable, but I think it's an open question whether they are part of the fundamental service or should be created and managed entirely by applications running at end nodes.  (In particular, I'm not at all sure what the APIs at the responders are like to make something like this happen.  What is listen() like in this case?)  At any rate, I think QRNA and our RuleSet-based approach can handle either connection-level or multiparty graph states as we develop it over time.

Same for multipath connections.  It's a pretty obvious idea, and sorry, but I can't remember who first proposed it in print (Perdrix? Benjamin?).  My own opinion is that the benefits of multipath are likely to be minor, as a. often, the first hop will be the bottleneck anyway, b. asymmetry in the paths in the real world means that benefits will be minimal, and c. I assume there will be a lot of competition for resources on the net, and so the occasions when you can actually acquire the resources to do multipath effectively will be few.  Oh, and d. the software complexity is high.  So, I think it's doable in QRNA+RuleSet, but it's far down my list of things to work on.

So, let's stick with Bell pairs for the moment as both the fundamental link service and the *primary* end-to-end service.  If we build well, it will be possible to extend later.

That disposes of...let's see...point 1 in the list above.  On to point 2...

I think that the internetwork architecture should be a fully recursive system; we have named this the Quantum Recursive Network Architecture (QRNA), after Joe Touch's RNA.  (Joe collaborated with us on this.) Today, an idealization of the Internet is that it's a two-level system, with BGP as the external gateway protocol and your choice of internal gateway protocol (OSPF and IS-IS being two of the most prominent).  The reality, with tunneling having long been common, with switched Ethernets requiring a spanning tree protocol underneath even though they are nominally "link layer", and lately with the huge emphasis on virtualizing networks and services (e.g., network slices), is that the Internet has long been a multi-tier system with ad hoc interactions at each level.  If we design from scratch and do a good job, all of these levels can be unified into a single system.

Of course, if you want, at your network boundary, you can run anything you want inside: you just have to match the semantics of the connection's requests where they cross your borders.

Today, in the Internet, when a packet arrives at your border, the implied semantics are for you to forward it across (for transit) or to the matching end node (for termination).  For the Quantum Internet, connections will have to be established in advance, with a certain amount of state.  What I envision is a connection request protocol where, in a multi-tier system, connections are for some boundary-to-boundary (for transit) or boundary-to-end node (for request termination) operations.  Presumably, for transit, what connection requests see is each entire network as a node in the graph at this level (i.e., top-level eBGP graph).  Requests, then, are of the nature, "Please give me Bell pairs of fidelity F=x between here and address 1.2.3.4," where the requester knows that at this level of the graph that 1.2.3.4 is the next hop toward the destination.

Therefore, it's the responsibility of border routers to rewrite the request to internal actions that will fulfill this goal.  Again, internally, it can be what you like -- but if you adopt QRNA internally, it can be creating a new set of QRNA requests that reach from here to the gateway on the other side of the network.

There's lots more to say on QRNA; see our original journal paper or discussion in my book.  Beyond this vision, there is still a lot of work to do!

Two down, seven points to go...

Multiplexing: Lucho Aparicio's master's thesis addressed circuit switching, time-division multiplexing, statistical multiplexing (like Internet best-effort forwarding), and buffer space multiplexing, where the qubits at each router node are assigned to specific connections but multiple connections can pass through, each getting assigned a share of the qubits.  We studied aggregate throughput and fairness, and found, somewhat to our surprise, that stat mux works pretty well.  Aggregate throughput is actually above circuit switching, because it does a pretty good job of allowing multiple areas of the network to be working productively at the same time.  What's more, as far as I am aware, this was the world's first simulation of a quantum repeater network, as opposed to just a chain of repeaters. See Lucho's SPIE paper or Lucho's master's thesis.

However, that said, those early sims were for small-scale networks. I think this needs to be studied in much more detail to assess robustness in the face of complex, varying traffic patterns.  In particular, I really fear that something akin to congestion collapse is possible, and is to be avoided.  We already know that connection state will have to be maintained at repeaters & routers; quite probably there will have to be some active management of resources here, as well.

This has to coordinate with routing, below.  Naturally, we want to avoid a fully blocking muxing protocol if possible.

Oh, and one more point on this: given that early networks will be low performance, how do we prioritize connections?  Do we create a static priority scheme, based on...how much money people have?  Auction off time slots?  Use a fixed accounting/charging scheme and lower nodes' priority the more they use, like an OS multi-level feedback queue?  (Can you tell that I lectured on MLFQ last week?)

Point four: an internetwork architecture needs to accommodate 1G, 2G and 3G networks.  Although these designations address advances in dealing with types of errors as our capabilities improve, they do not necessarily correspond to time.  Nevertheless, 1G, using acknowledged link layer entanglement creation to handle photon loss and purification (quantum error detection) to handle noise and decoherence, will definitely be the first deployed.  (Indeed, Delft is getting there, one step at a time.)

So, we need an internetwork capable of interconnecting different connection architectures.  We have addressed how routers at the boundary can make entangled states that cross the borders.

Even for first-generation networks, though, you have to have a mechanism for saying, "Okay, Bob, once you get a Bell pair with Alice and a Bell pair with Charlie, execute entanglement swapping, then send the Pauli frame correction to Charlie and a notice-of-entanglement-transfer to Alice," and "If you have two Bell pairs with Alice, both with fidelity less than 0.9, then purify."

Our approach to this is to define Rules that have a condition clause and an action clause, very analogous to classical software defined networking (SDN).  For a connection, each node is given a RuleSet that should comprehensively define what to do as local events occur (entanglement success, timeout, etc.) and as messages arrive.

This RuleSet-based operation is the heart of our work these days, and allows for explicit reasoning about how to achieve the maximum asynchrony in the network (rather than waiting for explicit instructions at every operation or attempts to make everything proceed in lockstep).  The best reference to date on RuleSets is Takaaki's master's thesis or our PRA paper on the topic.
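To make that concrete, here's a cartoon of the condition/action idea in C. This is emphatically not QuISP's actual data structures or API, just the shape of the thing:

    #include <stdio.h>

    typedef struct {
        int have_pair_with_alice;    /* link-level entanglement status */
        int have_pair_with_charlie;
        double fidelity_alice;       /* estimated fidelity of the Alice pair */
    } NodeState;

    typedef struct {
        int  (*condition)(const NodeState *);  /* condition clause */
        void (*action)(NodeState *);           /* action clause */
    } Rule;

    static int ready_to_swap(const NodeState *s)
    {
        return s->have_pair_with_alice && s->have_pair_with_charlie;
    }

    static void do_swap(NodeState *s)
    {
        printf("entanglement swap; send Pauli frame correction to Charlie,\n"
               "notice-of-entanglement-transfer to Alice\n");
        s->have_pair_with_alice = s->have_pair_with_charlie = 0;
    }

    int main(void)
    {
        Rule ruleset[] = { { ready_to_swap, do_swap } };
        NodeState state = { 1, 1, 0.95 };

        /* As local events occur, scan the RuleSet and fire matching rules. */
        for (unsigned i = 0; i < sizeof ruleset / sizeof ruleset[0]; i++)
            if (ruleset[i].condition(&state))
                ruleset[i].action(&state);
        return 0;
    }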

I believe this RuleSet-based approach meshes well with the vision of QRNA.  Indeed, when combined with a rewrite engine at network borders, as described above, it should serve well as an instantiation of QRNA.

All right, that actually handles point five, as well; RuleSets and any qubits at the nodes that are currently assigned to a particular connection, well, that's connection state at each repeater/router. The scalability of this needs to be assessed, but I don't see a way around it right now.

(n.b.: as a network architecture aside, for the foreseeable future, there won't be any high-performance links; they'll all be low performance.  Therefore, there won't really be backbone, long-haul, high-bandwidth links, either; the network topology is going to have to be richer.  So there might not be huge numbers of connections passing through individual routers, anyway.)

So, point six, how do we set up connections...our approach is to use a two-pass system.  On the outbound leg (starting, appropriately enough, at the Initiator), you collect information about links and available resources.  The connection request eventually reaches the Responder, which takes that information and builds RuleSets for everyone along the path.  Those RuleSets are then distributed in a return pass, and then operation of the connection begins.

This has a few interesting points: a. it limits the amount of information each node has to have on hand about the entire network, b. it allows Responders to innovate (within the bounds of the RuleSet architecture), and c. it will work well with the border rewrites necessary for QRNA.
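As a sketch (with message formats and names invented for illustration), the two passes look something like this:

    #include <stdio.h>

    #define HOPS 4   /* links on the path; HOPS+1 nodes, Initiator..Responder */

    typedef struct { double seconds_per_pair; } LinkInfo;
    typedef struct { char description[64];   } RuleSet;

    int main(void)
    {
        LinkInfo links[HOPS];
        RuleSet  rulesets[HOPS + 1];

        /* Outbound pass (Initiator -> Responder): collect link characteristics. */
        for (int hop = 0; hop < HOPS; hop++)
            links[hop].seconds_per_pair = 0.01 * (hop + 1);   /* stand-in numbers */

        /* The Responder builds a RuleSet for every node on the path... */
        for (int node = 0; node <= HOPS; node++)
            snprintf(rulesets[node].description, sizeof rulesets[node].description,
                     "node %d (upstream link: %.2f s/pair)",
                     node, node > 0 ? links[node - 1].seconds_per_pair : 0.0);

        /* ...and the return pass (Responder -> Initiator) distributes them;
           once a node has its RuleSet, it can begin operation. */
        for (int node = HOPS; node >= 0; node--)
            printf("deliver RuleSet for %s\n", rulesets[node].description);
        return 0;
    }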

I have presented this approach in any number of talks (see, oh, for example, my 2020 virtual talk at Caltech -- there is a video file there as well as a PDF of my slides), but so far it's only written down in an expired Internet Draft that I hope to revive, and in some of the documentation on our Quantum Internet Simulation Package (QuISP).

Which brings us to point seven...how to pick a route.  Quite some time ago, we investigated a variant of Dijkstra's algorithm, which we inventively call qDijkstra.  We define link cost as "seconds per Bell pair of some index fidelity," e.g., seconds to make a Bell pair of F=0.98.  Including fidelity in the metric makes a lot of sense; a high data rate with poor fidelity may be less useful than a lower rate with higher fidelity.  Thus, if your base fidelity is poor, you have to take purification into account, which automatically cuts throughput by at least half, and more likely by 3/4 or more.  We compared (via simulation) the throughput of various paths with heterogeneous links, and found a good correlation with our calculated path cost.  Fidelity alone is actually too simple a metric, so the correlation isn't perfect, but we think it's good enough.
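As a toy illustration of why the metric works this way, here is a back-of-the-envelope link cost function. The purification model (each round doubles the raw pairs consumed and closes half of the remaining fidelity gap) is a crude stand-in of my own, not our real calculation:

    #include <stdio.h>

    /* Toy link cost: seconds per delivered Bell pair at an index fidelity. */
    static double link_cost(double seconds_per_raw_pair, double raw_fidelity,
                            double index_fidelity)
    {
        double pairs_needed = 1.0;
        double f = raw_fidelity;
        while (f < index_fidelity) {
            pairs_needed *= 2.0;            /* purification consumes pairs... */
            f += (1.0 - f) * 0.5;           /* ...in exchange for fidelity */
        }
        return seconds_per_raw_pair * pairs_needed;
    }

    int main(void)
    {
        /* The fast-but-noisy link ends up costing more per delivered pair. */
        printf("fast, noisy link: %.3f s/pair\n", link_cost(0.010, 0.75, 0.98));
        printf("slow, clean link: %.3f s/pair\n", link_cost(0.050, 0.97, 0.98));
        return 0;
    }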

The biggest open question here -- and one of the things we are investigating -- is how to combine path selection with multiplexing/resource reservation.

Whew...let's see, we covered multipath above when talking about connections, so we're set there.

Bringing us to point nine, network security: in fact, we are the only group in the world looking at security of network operations for quantum repeaters. But this doesn't mean at all that we have a complete plan for secure operation of the Quantum Internet.  Fairly obviously, all of the protocols we've talked about above need authentication and tamper resistance; whether privacy is also required or useful is an open question.  Given the previous Internet (and, to a lesser extent, telephone network) experiences with lack of security in routing, accounting, DoS, etc., and the likely high cost of quantum connections, it seems pretty imperative to have a solid framework in place very early in the Quantum Internet, basically well before we have a truly operational network.  And, this ties into the muxing decisions as outlined above -- you can't have accounting and authorization without authentication.

Whew, that's a lot of decisions, and lays out a lot of work to do. And we haven't even addressed some important topics, like naming.

If you can't carry all that in your head, just remember the three critical points: a recursive architecture for internetworking, RuleSet-based connection operation, and a two-pass connection setup routine (outbound info collection, inbound RuleSet distribution).

And although there is solid work on routing and multiplexing, designing a system that will be robust at scale and that will serve us well for decades is a big, open issue.