Saturday, October 16, 2021

Quantum Ponzi

I have seen bubbles and winters(*) in AI, Internet, computer architecture, and even the PC. (And I've seen attack ships on fire off the shoulder of Orion.) Now, some argue, we are in a quantum bubble that will inevitably pop and lead to a prolonged (and deserved) quantum winter. I agree that there is excessive hype making the rounds, but some of the current reaction to that hype has its roots in the basic academic/industrial culture clash. There are always those in academia who dislike the charlatanism and shamanism of capitalism, but the involvement of capitalism is inevitable. Our job as academics is first, of course, to advance the science itself, and (also first) to nurture the next generation of talent, but second, to convey the scientific results to colleagues, funders and investors in a way that tempers expectations to minimize the depth of the winter when it comes.

In July 2021, Prof. Victor Galitski of the University of Maryland's Joint Quantum Institute posted a long anti-quantum hype piece on LinkedIn. I disagree with the title not at all. I disagree with some elements of the contents just as emphatically as I agree with the title. Usually I just let such criticism slide (or post a vision for the future), but this time I felt I should respond, both because this one has gotten some airplay and because Galitski's own JQI position seems to lend it some weight of authority. Plus, he brings up some good points worth responding to thoughtfully, and some naive points that I'll endeavor to respond to politely. And finally, some people I admire have also made positive comments about this, so I wanted to counterbalance their thoughts. Although this is structured around Galitski's criticism, I hope the points I make will resonate farther than just the single article.

Let me attempt to summarize his points, first the major ones, then the smaller ones, rather than in the order he presents them.

  1. There is a lot of unjustified hype around QC these days.
  2. The hype comes from unsubstantiated claims about a. the present and near future of quantum hardware, and b. the long term potential of quantum to change the world.
  3. These claims are coming from a. pure at heart researchers who have been corrupted by capitalism, and b. already corrupt capitalists with no clue, who are only in this to make a buck. The latter group includes VCs and classical computing people.
  4. These claims are resulting in an investment bubble that will inevitably pop.
  5. That bubble is draining away talent, both seasoned and potential, that rightly belongs in academia.
  6. As a corollary, the work done in industry (both large and small) is less important than the work done in academia.
  7. The popping of that bubble will poison investment for some time to come.
  8. Only physicists are qualified to talk about quantum.
  9. The barrier to entering the software industry is low, therefore software is easy and never gets harder, and therefore is unworthy of respect.
  10. Quantum systems are vulnerable to security problems.
  11. Today's quantum systems can be fully simulated, therefore today's cloud accessible quantum systems may be fraudulent.
Put bluntly, this is more than legitimate skepticism of quantum hype; it is a pinched, narrow view of the world.
Let me try to go through these points roughly in order:

1. hype exists. Hoo, boy, I don't think anyone would disagree with this.

2. hardware and software aren't good. In his second paragraph, Galitski manages to both diss the fundamental importance of quantum algorithms and pooh-pooh the state of hardware. Despite being at JQI (which I suppose has a broader remit than just quantum information), he states, rather bluntly, that none of the existing algorithms will truly revolutionize our world, and by implication that such a revolution is unlikely to ever be forthcoming. I disagree. It is no longer anything more than obstinacy to refuse to recognize the profound shift that quantum information represents at the theoretical level. It is fully as fundamental as the shift from analog to digital information. When and how that will affect daily practice is the question at hand.

It is true that very few algorithms have been rigorously evaluated to determine what machines would be required to execute them on problems of commercial or scientific interest. But that number is not zero; it's perhaps ten or twenty, depending on how you count, and yes, the fidelity and resource demands often come out far higher than we initially, naïvely hope. Shor's algorithm was among the first so evaluated; recently, chemistry and finance have been getting the treatment. Ultimately, we need to go through the entire Quantum Algorithm Zoo, line by line, and identify the smallest problem that's infeasible classically, and therefore where QCs need to be in technical development to generate truly new results (as well as figure out which of those algorithms have real-world impact, and which are only of theoretical interest). However, the existence of hybrid algorithms complicates that picture; we may well reach the point where quantum computers do sub-problems for us in a useful fashion before they truly, definitively exceed classical supercomputers.

Data centers today consume about 1% of the world's generated electricity (and still growing), and Haber-Bosch process manufacture of agricultural fertilizer another 1% of all energy. The logistics and transportation industries consume even larger amounts of energy, and optimizing them is an enormous computational task. Both specific computational results and the general deployment of quantum computers may impact this energy landscape, but it is incumbent upon us to make that story increasingly concrete. This is very much an engineering problem, and requires incorporating a lot of details about the machines to be used; it's much more than a $O(\cdot)$ problem.

Classical supercomputers are, in fact, an interesting point of comparison. The fabrication of quantum computers benefits from classical VLSI technology, and their operation requires a lot of supporting classical computation. More importantly, the success of classical digital computers is so tremendous that quantum computers have a very tall hill to climb before surpassing them. Conversely, classical computers are facing two truly fundamental problems: working at the atomic scale, and dealing with heat. The former is a consequence of Moore's Law, the latter of the end of Dennard scaling. Current transistors are only a few tens of atoms across, and we don't know how to make transistors out of anything smaller than an atom. The heat problem has a solution, but one that will require major reengineering. (See my paper, Q's C problem, C's Q problem.) Quantum computing offers partial solutions to these problems, both with its physical technological contributions and its potential to attack certain classes of computational problems, especially those with modest amounts of state but exponential growth in the interesting state space. So, quantum computers still have a long way to go, but they are both desirable and necessary.

3. hype is corrupted, unqualified or both. Wow, the nose-in-the-air ivory tower attitude here is high. "the researchers are forced to quit activities, they are actually good at and where they could have made real impact, and join the QC hype", we are told. For more on this, see points 5 and 8, below.

4. this is a bubble. This is perhaps Galitski's most important point. Growth in investment is absolutely necessary for the field to expand beyond its academic roots and create an industry, but the media hype and current ready availability of VC funds means that not all investment is wise. The way to improve the quality of investment is experience and education. Some of these startups will fail, for sure; some should never be invested in in the first place. Want to make a difference here? Engage with VCs and help them learn and make wise decisions.

5. brain drain.  Eating our seed corn is definitely a problem, but one that is largely self-correcting, and one recognized since at least the 1990s. There are plenty of reasons to dislike Silicon Valley, but overall the balance (and sometimes tension) between government labs, government-funded university research, corporate-funded research both at universities and in its own labs, industrial development and startups is fundamentally healthy. (Though the US, EU, JP, KR, CN, SG, and AU models differ rather dramatically.) There is a valid and serious issue of how to best manage this (or whether to let it operate without intervention), but Galitski isn't making an argument on that topic, just lamenting the movement of people into other areas.

6. industrial work isn't good, and is mostly less than fundamental. I think this is so blatantly wrong (or at least elitist: "my problems are the only ones worth solving") that it hardly needs refuting. As I noted above, we need to go through the available algorithms and figure out which are industrially relevant; that is an engineering activity sitting right where people are poised to leap into industry, making their systems, algorithms and talent useful outside the confines of their own laboratories. That tech transfer is among the most important activities of all, even if there is a lot of skepticism about how well it really works out for universities.

The whiff of success, and with it the possibility of strategic advantage and riches (including university IP licensing) is leading to increased friction within the system, imposed by governments and corporate agreements, impeding flow of people and ideas through restrictive agreements and import/export paperwork and restrictions. Of the two, the government-imposed limits worry me more, because they can't be gotten around.

7. popping bubbles are bad. Galitski enumerates his two main points: that the current investment scene is a Ponzi scheme, and that the lure of money is drawing the best people out of academia and into the nascent industry. (He lists a third point, that hype is bad for science, but here he seems to primarily mean as a consequence of the first two points.)

In a true Ponzi scheme, investors have a responsibility to pay those who recruited them, which they fulfill by attracting others to pay them. This pyramid or tree structure depends on continued exponential growth in those willing to invest, and so collapses when the potential investor pool dries up, with the last round of investors left holding the bag.
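To see why that collapse is a mathematical certainty rather than just bad luck, here is a toy model (the numbers and recruitment factor are made up for illustration, not a claim about any real market): if each participant must recruit k new investors to get paid, the rounds grow as k^n and any finite pool runs dry quickly.

```python
# Toy pyramid-scheme model: each participant must recruit k new
# investors to get paid, so round n requires k**n fresh recruits.
# A finite pool is exhausted after only a few dozen rounds.

def rounds_until_collapse(pool_size: int, k: int = 2) -> int:
    """Count the full recruitment rounds before the investor pool runs dry."""
    recruited, round_size, rounds = 1, 1, 0   # start with one founder
    while True:
        round_size *= k                        # each member recruits k more
        if recruited + round_size > pool_size:
            return rounds                      # next round can't be filled
        recruited += round_size
        rounds += 1

# Even with roughly every adult on Earth (~5 billion) in the pool,
# a doubling scheme collapses after 31 full rounds, since 2**33 > 5e9.
print(rounds_until_collapse(5_000_000_000))   # → 31
```

Exponential recruitment exhausts even a planet-sized pool in about a month of weekly rounds, which is why the last cohort is always left holding the bag.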

I don't think a bubble is the same thing as a Ponzi scheme. Moreover, if we manage it well, investment in quantum computing will grow wiser and more rational. In that sense, criticism and discussion of irrational investment and building realistic expectations is welcome.

It does puzzle me why Galitski cares at all, since he apparently thinks there is little of value in quantum computing altogether. "To be sure, there are gems," he says, but there is little if anything positive in his take.

8 & 9. quantum belongs to the physicists.

To really see Galitski's opinion of the tech industry as a whole, it's worth quoting him:

A successful company in the "quantum technology space" can not pop up like Facebook or TikTok or a similar dumbed down platform, based on a code written by a college drop out. What's needed is years of education, work, and dedication. But what's going on is that there is an army of "quantum evangelists," who can't write the Schrödinger equation[.]

"You can't QC if you don't Schrödinger" smacks of elitism, but I suppose that's a point of view with moderately broad support in the community. (Heck, of course an author like Galitski thinks you should do a lot of QM before you do QC.) Personally, I can say

$i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle = H|\psi(t)\rangle$

with the best of them, but -- and this will elicit gasps -- I don't think you need to do that in order to do QC. In fact, I think it misses the point if you want to develop software; the skills you need are very different. (See my quantum computer engineer's bookshelf.) I'd be more inclined to say you can't QC if you don't sashay the Fourier. Finding the interference patterns that drive interesting quantum algorithms will require creativity, math, and perhaps geometric thinking; one-dimensional wells, the ultraviolet catastrophe and perturbation theory can be left for (much) later.
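The "sashay the Fourier" point can be made concrete without writing down a Hamiltonian. A toy sketch (plain NumPy, no quantum SDK; the dimension and period here are arbitrary choices for illustration): prepare a state vector with a hidden period, apply the discrete Fourier transform, and the amplitudes interfere constructively only at multiples of N/r. That interference pattern is the core of quantum period finding, as in Shor's algorithm.

```python
import numpy as np

# Interference at the heart of quantum period finding, done with a
# plain discrete Fourier transform on a state vector. No hardware,
# no Schrödinger equation -- just linear algebra.

N = 16   # dimension of the register (number of basis states)
r = 4    # hidden period we want to detect

# Uniform superposition over basis states {0, r, 2r, ...}, the kind of
# period-r structure the modular-exponentiation step of Shor's
# algorithm leaves behind.
state = np.zeros(N, dtype=complex)
state[::r] = 1.0
state /= np.linalg.norm(state)

# The quantum Fourier transform acts on amplitudes as a unitary DFT
# (up to sign convention); 1/sqrt(N) makes NumPy's FFT unitary.
transformed = np.fft.fft(state) / np.sqrt(N)
probs = np.abs(transformed) ** 2

# Constructive interference: all probability lands on multiples of N/r.
peaks = np.nonzero(probs > 1e-9)[0]
print(peaks)   # indices 0, 4, 8, 12 -- i.e., multiples of N/r = 4
```

Everything destructive cancels; measuring the register then reveals the period. That's Fourier thinking, and you can get quite far into algorithm design on it before you ever need a one-dimensional well.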

It's not clear which tech industry college dropout he has in mind; certainly there are a lot to choose from. There are even a lot to choose from if you restrict your list to those whose products have a mixed effect on society as a whole. It is true that it is possible to begin a large classical project with almost no investment; the barrier to entry is low. That is largely seen as a plus, rather than a minus, across the industry. But being dismissive of the amount of investment of time and brainpower, and the actual intellectual innovation and research it takes to reach the scale of global impact is foolish.

Fundamentally, it is important to recognize that there are a lot of really smart people in the world who aren't physicists, and some of them are trying to figure out how to deploy quantum computers (and quantum networks) outside of the physics laboratory. There are hardware engineers, software engineers, and business people who are learning. They need the room, time, respect and support to make this happen.

I have spent quite a bit of time with people in Japan, the U.S., and other countries who started with zero clue about quantum but are starting companies. Some of them start out roll-your-eyes clueless, and yes, most of those will go down in flames. Others, however, will surprise you. Through hard work and a willingness to study, they are in fact learning. Ultimately, they will build or buy a clue, or go out of business.

Yes, it would be better if they weren't a drain on resources (money and people) and reputation while acquiring or failing to acquire their clue. But over time, those doing the evaluation (VCs and the general public) will themselves become more knowledgeable and sophisticated. Personally, I would rather they did that with our blessing and our support rather than without.

10. insecure systems. I have no doubt that today's cloud-accessible quantum systems have security vulnerabilities. All computer systems have them. It's a tenet of our industry. I don't understand why this is relevant to Galitski's larger point.

11. fraud! Because today's systems could be fully simulated, there might be fraudulent companies out there, some Quantum Theranos. Yeah, I suppose that's possible. "Fake it 'til you make it." Faking quantum computers is easy; faking quantum computer development is hard. You think investors aren't going to come look into the labs? You think they aren't going to expect to see dilution fridges, racks of FPGA boxes, even lines of FPGA source code? And, over the next few years, results of calculations that can't be simulated? Especially in a post-Theranos atmosphere? Due diligence is always necessary (and I have seen it go wrong), but I don't think this is a valid point for criticizing the nascent industry.
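He's right that today's few-qubit machines can be emulated exactly; in fairness, here is roughly how little code that takes. A toy brute-force statevector simulator (illustrative only; no real product or vendor implied) whose Bell-pair measurement statistics are indistinguishable, output-wise, from a tiny quantum processor:

```python
import numpy as np

# Minimal brute-force statevector simulator: enough to reproduce the
# measurement statistics of a 2-qubit Bell pair exactly. Memory grows
# as 2**n, which is why this trick stops working at scale.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def apply_1q(state, gate, target, n):
    """Apply a single-qubit gate to qubit `target` of an n-qubit state."""
    state = state.reshape([2] * n)
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)
    return state.reshape(-1)

def apply_cnot(state, control, target, n):
    """Apply CNOT by permuting amplitudes where the control bit is 1."""
    new = state.copy()
    for i in range(2 ** n):
        if (i >> (n - 1 - control)) & 1:
            new[i] = state[i ^ (1 << (n - 1 - target))]
    return new

n = 2
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                       # |00>
state = apply_1q(state, H, 0, n)     # Hadamard on qubit 0
state = apply_cnot(state, 0, 1, n)   # entangle: (|00> + |11>)/sqrt(2)

probs = np.abs(state) ** 2
print(probs.real)                    # probabilities 0.5, 0, 0, 0.5
```

The catch, of course, is that exponential 2**n scaling: a fake like this runs out of RAM somewhere in the 30-40 qubit range on commodity hardware, which is exactly why "results of calculations that can't be simulated" are the due-diligence test that matters.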


Overall, I find Galitski's criticism to have a few valid points; we all agree that hype will result in negative effects for the community as a whole as a "reality correction" sets in. But -- and perhaps I'm being too sensitive here -- I read his criticism as coming from a deep misunderstanding and dislike of the tech industry, and skepticism not just about the current quantum frenzy but more deeply of the value of quantum computing itself. I disagree.

We want to avoid the sheer silliness of the dot com bubble, its worst excesses on domain names and business models. At the same time, we want to avoid the prolonged AI winters in which too few smart people and too few research dollars entered the field. (Keeping in mind that, despite its demonstrated, thrilling successes, we might be in a time of over-exuberance for machine learning, the current favored model of AI; studying its successes and excesses carefully would be instructive for the future of quantum computing.) Let's all be responsible and realistic about the amount of work to be done, but maintain our optimism and faith in the long-term vision.

To quote myself, we are in the time of Babbage trying to foresee what Knuth, Lampson and Torvalds will do with these machines as they mature. Let's do it.

Onward and upward!

(*) What a mixed metaphor! Can we have springs and renaissances, too? Or at least some explanation of how a bubble popping results in winter?

Wednesday, September 22, 2021

Astrophotography: Kanto Dark Spots


My wife and I have been going places to shoot night skies off and on for the last couple of years. We live in Kamakura, which is suburban and within the "light dome" of Yokohama itself, second largest metropolis in Japan. So we've got to go somewhere in order to get decent skies. There are one or two spots within an hour's drive, but many of the places we have gone are 3-5 hours each way. (Less late at night, but can be hellishly bad late afternoon on a Sunday, trying to get back toward the population centers of Kanto.) The screenshot above (from https://www.lightpollutionmap.info/) shows the challenge we're up against. The blue areas deep in the mountains to the west would be 5 hours' drive without traffic, and the even darker blue areas well to the north of Tokyo would be closer to 6 hours' drive.

Turns out my wife and I have kind of different goals; I am getting into deep sky photography, wanting several hours of perfectly clear skies and unobstructed views. My wife wants nice foregrounds in front of dramatic skies; Milky Way is good, but some clouds at sunset or sunrise are even better. She also likes shooting at the beach, even with her tripod standing in the surf. (Yeah, she's hard on equipment; she sends her DSLR bodies and occasionally lenses for professional cleaning when needed.) Naturally, anyone with telescope optics and mechanics will be horrified at the thought of taking them anywhere near salt, sand and moisture. Some of these spots have both beach access and a good spot on a high bluff, fairly safe from such concerns. A few of these are well up in the mountains.

This post lists sites in Chiba, Kanagawa, Shizuoka, Yamanashi, Ibaraki, and "other", in order. At the bottom, you'll also find a list of other tools & websites I use.

This posting is progressively updated. Check back occasionally for new sites & new info about old sites.

Monday, September 13, 2021

Ranking the Star Trek: The Original Series episodes

Inspired by the 55th anniversary of the first broadcast, I'm going back and watching Kirk and company, more or less in order but with a little bit of skipping around. Watching them from the perspective of 2021, the most egregious thing is not the effects (some of which have been upgraded anyway, in the Netflix version) or simplified plots or retro future tech, or even race relations; it's the gender roles and outright sexism. I'm sure having women Starfleet officers was very progressive for 1966, and it is true that there will be a certain amount of sexual tension in any crew (even a single-gender one), but it's pretty blatant.

On the other hand, if you like looking at 1960s style beauty in stunning costumes, it's definitely a bonanza.

Let's divide the original three seasons up into half-season blocks and rank them separately, see what we get.

I'm just going to post this and update it ad hoc as I watch more episodes.

Season 1, First Half

In ranked order, with broadcast order in parentheses (following the Netflix counting of "The Cage" as #1 and the first regularly scheduled broadcast of "The Man Trap" as #2). The first half ends with "Balance of Terror". The top three here are classics, IMO; after that, it drops off kind of quickly, but only "Mudd's Women" would I call actively bad. On the whole, the writers, production team, directors and actors really hit the ground running in this first half year, but I suppose two years of gestation helped.
  1. "The Menagerie" (12 & 13): Wow, this is better than I had remembered, one of the best episodes of all, in my current judgment. It's better as "The Menagerie" than as "The Cage", with the wrapper meta-story, but hard to believe the studio execs didn't just fall all over themselves getting this launched after the first pilot. Loyalty on trial, and important questions about what drives us as humans. Will we lose our will when illusion takes over?  (Today, there are those who claim that the Internet and smartphones are "robbing us of our boredom," and that's a solid concern, IMO.)
    Really glad the Enterprise tech got a facelift from its 1950s look to the 1960s look of the series in full gear, but interesting that the transporter is 100% the same. Of course, there is a gratuitously good looking officer on the starbase for Kirk to ogle (complete with seductive music, the only adjective here is "lovely"), entirely aside from Pike's green alien dancer. As much as you gotta love Kirk, Pike would have made a great captain and Number One should have stayed. Pike's wheelchair and communication tech were surpassed for Hawking and others with little more communication capability than moving their eyes, but the point stands. Also glad they dumped running a starship with paper and clipboards!
  2. "What Are Little Girls Made Of?" (8): An episode I had largely overlooked before. Are our petty jealousies and flaws a product of our organic bodies, or would they be the same in an android? More than a little iffy on what "programming" an android imprinted from a sentient being means, but asks interesting questions. Christine made the tough choice to break off an engagement to pursue a Starfleet career, a pretty progressive move for the day. And who doesn't love Lurch? Not the first and certainly not the last dying/dead civilization to be explored by a guest star, then left behind without a further thought as the Enterprise warps off to another adventure, though.
  3. "Balance of Terror" (15): Peace through strength, very Cold War. Honorable people fulfilling their duty on both sides of a conflict can still result in waste of life, and war. Prejudice based on appearance is, well, a bad thing. And love, and loss, happen under many circumstances. This is by far the most space opera-y episode of the first half season, with "Run Silent, Run Deep"-style cat and mouse starship-to-starship hunting. Electromagnetic signals, surely, but I'm a bit dubious about the need to work quietly! A great episode, even if the ending is inconclusive. Going in, I was expecting this to be my top episode for this half year, but the ending robs it of first place.
  4. "Charlie X" (3): Teenage angst and self control, Uhura ad libbing a funny song about Spock, 3-D chess, what more can you ask? The first time, but not the last, we encounter an apparently superior race who then inscrutably leaves without us even getting a chance to ask their names -- and we seem totally unworried about that. Not wild about the ending, this one leaves me uneasy, which is a good thing.
  5. "The Enemy Within" (6): The dubious plot device of the transporter dividing based on personality aside, a solid episode. We need our yin and our yang to be whole.
  6. "The Man Trap" (2): Not as chauvinistic as the title suggests. One of several in this first half season where illusions, mind control, ESP, or telekinesis plays a big role. What is it that makes us happy? First redshirt to die, in the very first regular broadcast episode, and we have established a paradigm.
  7. "Miri" (9): A solid episode. A human attempt to live forever has intergenerational consequences, and nearly takes out Kirk, McCoy, Rand and Spock, too. This one (as with many of the episodes, both good and bad) doesn't really need a starship; it's SF, but could take place anywhere. But the timeline doesn't really make sense -- how did they get there three centuries ago? And once again we warp away, leaving behind a live community who could really use our help.
  8. "The Naked Time" (5): It takes a contrived plot device, but we get to learn about the innermost thoughts of the crew. Sulu's stripped-to-the-waist swashbuckling is the most memorable bit, but Christine's love of Spock and Spock's sometimes wobbly control of his emotions advance the characters the most. Kirk's iron will, sense of duty and love of the ship get him through it.
  9. "The Corbomite Maneuver" (11): My brain had this listed as dreadful, but it's not as bad as I remembered/feared. The first time we meet a (possibly) technologically superior species, get over an initial misunderstanding, and leave on mutually agreeable terms.
  10. "The Conscience of the King" (14): A pretty good human drama about how hard conditions and impossible choices can incite horrible, inhumane actions. This one doesn't need starships.
  11. "Where No Man Has Gone Before" (4): Its biggest gift, of course, is the title. Another telekinesis episode, with muddled reasoning for the sudden growth in powers of a character or two, but an interesting question about how we will deal with ourselves when we start to outgrow these bodies -- from both sides of that issue. Also, a barrier at the edge of the galaxy? Really?
  12. "Dagger of the Mind" (10): The first of many geniuses who advance Mankind, then go wrong later in life. Establishes a precedent of Kirk not asking anyone else to do something he wouldn't try first, but is sitting down in a brain ray chair you suspect damages minds really a good idea?
  13. "Mudd's Women" (7): All the good stuff is in the last two minutes. Otherwise, c'mon, man, smuggling brides to male-only mining outposts in the 23rd century and controlling women by controlling their access to a "Venus drug" beauty enhancer? And a lot of "hubba! hubba!" from the crew. Umph. Is this our first reference to Kirk being married to the Enterprise?

Season 1, Second Half

Lots to look forward to. Through my rose-tinted glasses, "A Taste of Armageddon", "Devil in the Dark", and "Arena" are all great episodes, leading up to "City on the Edge of Forever" (not only inarguably the best episode, it's definitely got the best title). Hoping they have aged well.
  1. "City on the Edge of Forever" (29): Accept no substitutes. The finest episode in all the ST universe. And only nine weeks earlier, "Tomorrow is Yesterday" showed that time travel could be treated both relatively rigorously and interestingly, and yet here CotEoF blows it out of the water.
  2. "Errand of Mercy" (27): Are we really as different from the Klingons as we think? Non-corporeal, powerful aliens solve the ultimate plot dilemma for the episode.
  3. "Devil in the Dark" (26): One of my favorite episodes: will we recognize other life, and other intelligence, when we find it? (We'll leave aside the Class M Planet bipedal species, 1.5-2m tall, with eyes, ears, a mouth, favoring N2-O2 atmosphere, that seem to keep popping up in ST:TOS.) How will we communicate with it? (Well, that one is kind of finessed in this episode.) Will we be able to establish (in Kirk's own words) a modus vivendi? Cheesy 1960s "monster"/alien "effects" aside, this one would be fun to revisit later, to learn about the Horta's society. And man, for something made out of silicon, the body part that gets phasered off the Horta is awfully light!
  4. "Tomorrow is Yesterday" (20): Solid time travel paradox. Established that gravity + warp = time travel, a device we will use again in movies and other series.
  5. "Space Seed" (23): An iconic episode, this gave us a look at 21st century history and it gave us the great Khan, the best human villain we get in TOS (and the movies). Hurt only by its innate chauvinism.
  6. "Arena" (19): A personal favorite, but would have been better with Fredric Brown's original (but probably unfilmable in 1966 and certainly not a sympathetic character) alien. The watered-down ending compared to Brown's original short story hurts a bit.
  7. "A Taste of Armageddon" (24): One of the best episodes. If you sanitize it, is it still war? Aren't we supposed to be horrified, repulsed by war? Some pretty blatant ignoring of the Prime Directive, if you consider them to be the kind of civilization not to be interfered with. Also, the issue of the U.S.S. Valiant's disappearance 50 years ago gets referred to, but just dropped as an issue.
  8. "This Side of Paradise" (25): Another episode with a bad rep in my memory, but it turned out to be pretty good. The spores and the Berthold rays are a bit contrived (especially the "we can fix your health" bit), but asking whether humans must strive in order to be whole, to be human, is an eternal question. Answered in favor of striving rather than paradise here (spoiler alert! But did you expect different?); nothing super-original in the thinking, but well plotted and executed. Far better than "Archons" (below), and an interesting comparison to "The Enemy Within" (above) in what makes us human.
  9. "Court Martial" (21): Solid. Can we trust data just because it's recorded? Will a person really hold a grudge serious enough to fake their own death to sabotage another's career? There is a hint that Finney's daughter learns he is still alive, but that's never pursued. Perhaps it's continuity issues, but it feels to me like this one (and several other episodes) had scenes that were written and either never filmed or cut from the final episode for time or other reasons.
  10. "The Alternative Factor" (28): The idea of alternate universes in and of itself was probably a fresh concept, but this episode has some holes and doesn't really address the core issues of the multiverse very clearly. And that's a heckuva...UFOy spaceship. Not bad, not good.
  11. "The Galileo Seven" (17): This one seems to be ranked highly in a lot of polls, but I found it awfully blunt. A test of Spock's logic as a method of command could be really interesting, but it's such a contrived plot, including unseen giant natives with Earth-like simple spears. And would you really have three of the top four officers on one shuttle that is nominally out on a data-gathering mission? To me, this feels like a script written by a young fan, rather than a mature writer in the full swing of Trek. (n.b.: Some of the fan fiction exceeds the original in depth, originality and maturity!)
  12. "The Squire of Gothos" (18): One of the more memorable "encounter with a god-like entity" episodes, but in this case a petulant child with a silly view of Earth and humanity. Don't think too hard about this one.
  13. "Shore Leave" (16): An occasional light episode is fine, but this is just silly. Not a good start for the 2nd half of Season One. Another corporeal species apparently advanced compared to us, but not interested in conquest. Leave them and warp away, without trying to establish an embassy!
  14. "Operation -- Annihilate!" (30): Encounter with perhaps the most alien species in Season One, but we just kill it then get outta there. Also sets the record for cheesiest practical effects.
  15. "Return of the Archons" (22): This one's just a muddled mess. Too many things going on. The "Festival" is never really explained, nor is anything about the 6,000 year old technology. Why the town looks like the late 19th century in the U.S. is baffling, and we don't get any sort of justification even for how the Enterprise crew knew how to appear in period costume. The weapon tubes used by the lawgivers are examined once and shown to be nothing but empty tubes, but that's never pursued. How people not of "The Body" are detected isn't discussed. If everyone is part of The Body, why are the lawgivers needed at all? And, most of all, all signs point to the planet's residents being human. If so, how did they get there 6,000 years ago, and why would there be any parallel at all with Earth civilizations? Yet another episode in which the Enterprise is investigating a missing starship, then just warps away without really completing that investigation.

Season 2, First Half

  1. "Journey to Babel" (10): One of the very best episodes, thanks to D.C. Fontana's rigorous and compassionate writing. I might place this behind "City on the Edge of Forever" as second-best episode overall. Diplomacy and intrigue, this one could be a Mediterranean or European council just as easily as Federation.
  2. "Amok Time" (1): Even better than I remembered. Makes up for Sturgeon's silliness in "Shore Leave". I wonder how Spock later explained to T'Pau that she had been snookered, though? A couple of things are...illogical, but the look at Vulcan is great, even if the culture does kind of resemble a mishmash of Asian tropes.
  3. "Mirror, Mirror" (4): an iconic episode, using parallel universes to ask if we are really as pacifist and advanced as we think. Echoes episodes from Season 1, examining our inner selves, but perhaps done best here. Don't think too hard about the parallel universes, though.
  4. "The Doomsday Machine" (6): Real drama, and an interesting take on how we will react when we run into a mindless machine that has only finding more energy for itself as a goal.
  5. "The Changeling" (3): A largely forgettable episode, but it planted the seed for ST:TMP, and so is logically necessary.
  6. "Metamorphosis" (9): Love comes in many forms. Once again an alien without a true body but many powers.
  7. "I, Mudd" (8): Way better than "Mudd's Women", but still borderline silly. So many improbable or implausible plot elements, and terrible system design in the android distributed control and logic systems.
  8. "Catspaw" (7): Just silly. Nudibranch-like aliens manage to stop the Enterprise, take human form, and find our cultural spooky memories by accident (why are they purely European tropes such as iron maidens and black cats?). And once again we endanger the entire executive leadership of the Enterprise. The limits to the powers of the aliens are, as almost always, unclear.
  9. "Who Mourns for Adonais?" (2): This is dreadful, which is really a shame since the core ideas are interesting. What if the ancient gods were space travellers? Do gods exist without people to worship them? (Shades of American Gods?) Should one episode really be trying to answer both questions?  They seem like pretty separate incidents/questions to me. At least this time Lieutenant Palamas, who falls for the hunky space god, gets to have a spine and do her duty for her ship, unlike Lt. McGivers, who falls for the hunky fascist in "Space Seed". But there's still a lot of 1960s gender roles baked into this one.
    This "advanced aliens can control anything with their minds" trope certainly wears thin. And does this dude have a real body, or not?
    It's a little too much "Squire of Gothos meets Space Seed", though. A few lines of dialog are thought-provoking, but I'm guessing NBC's Mike Pences on their censorship committee forced the addition of "The one we have is enough," with respect to gods. Almost got away with saying, "We no longer have need of gods," without any qualifications!
  10. "The Apple" (5): Among the worst episodes, with white/orange primitives who bow down to a local machine god that controls them entirely but also keeps them completely healthy. Very little about this makes sense, and it is essentially a white-people-save-the-natives-from-their-own-superstitions schtick.

Season 2, Second Half

  1. "The Trouble with Tribbles": Pure fun. Tribbles, Klingons, and a bar fight over an engineering insult, what more can you ask?
  2. "The Ultimate Computer": Themes that will echo for time to come. Can our technology replace us? Should we risk human lives if we can risk a machine instead? Are we doomed to transmit our own flaws to our technological offspring? To me, this is a great episode.
  3. "Wolf in the Fold": Meh. But Scotty always deserves more screen time, and he gets it here, even if it's not in Engineering.

Season 3, First Half

Season 3, Second Half

  1. "Elaan of Troyius": Kirk plays diplomat and disciplinarian, while using his love of the ship as an antidote to an aphrodisiac.

Wednesday, July 21, 2021

Spelunking CACM, vol. 5 (1962)

For 1962, I considered choices such as an early Knuth paper, on tricks for making evaluation of polynomials more efficient, an early (but not the first) paper on theorem proving machines, a description of an event for high schoolers that points out that there were already 8,000 computers and 30,000 professionals in the country (sadly, the article has no demographic info on attendees), and especially an early paper on multiprogramming from NASA (what's not to love about a paper that says, "Some educationally valuable mistakes were made"? It's instructive that it refers to "the interrupt feature", indicating its newness, but the modern term "interrupt service routine" was already in use.). In the end I settled on a notice about ACM's policy toward standardization. CACM already had a section on "Standards", edited by S. Gorn, but this notice is otherwise unsigned.

Early on, the paper makes three binary divisions: users v. "professional computer people", industrial v. theoretical (interestingly, not academic), and hardware v. software. This divides those with an interest into eight categories.

It points out the risks of too-early standardization. As a vendor, if you tie yourself to a standard too early, an innovative competitor can introduce something new, and your hands are tied.

I found this table intriguing. Likely you're vaguely aware of some of these organizations, but you may not realize how early and dynamic they were.  Keep in mind that this is a mere 17 years after the end of World War II, and yet Germany, Italy, the Netherlands, France, and Japan, who collectively suffered some of the worst devastation, are represented. (Russia, China, Poland and Belgium are listed, too, but don't seem to have entries, so I'm baffled as to why they are included.) (In the most breathtaking post-WWII recovery, just two years later Tokyo would host the Olympics and the first shinkansen line would open.)

It's also interesting that the table focuses on language; today, the UN's list of official languages notwithstanding, almost all international standardization work takes place in English, with a vestige still of French. I admit to being lucky to have been born in an English-speaking family at a time when it is the de facto language of science, technology and international commerce. Not so long ago, the choices would have been French (esp. for diplomacy) and German (science and technology).



Quoting from the paper:

The policy of ACM toward standardization is therefore the following:

1. It is extremely conservative as far as the development and promulgation of standards is concerned.

2. It is resistant toward precipitate standardization, specially in any area in which not enough is known to make such standardization theoretically sensible or stable.

3. It tends to be neutral in those areas where standardization is a matter of arbitrary selection, in spite of its recognition of the usefulness of such selection. That part of its membership which is vitally interested in such arbitrary selection is already represented in the industrial side of the activity.

On the positive side, the society is vitally interested in maintaining wide open channels of communications.

4. Thus it takes a positive interest in the stabilization of terminology, whether by reporting common usage or by declaring preferred usage (the normative function).

5. It is interested in the development of appropriate fundamental concepts, the establishment of the relationships among them, and in the quick dissemination of such developments.

6. Finally, it is interested in the development of standard methods of specification of processors, whether they be computers, programs or systems, of languages for such processors and of translation processors for such languages. Included in the methods of specification are methods of documentation for each type of audience or interest in the computer area.

Overall, the policy expresses some interest in standardization of systems, esp. programming languages, it seems, but little else. Almost sixty years later, we can see that indeed ACM, despite its importance in the computing ecosystem, has largely remained aloof from the issues of standardization, leaving that to ANSI, IEEE, FIPS, ISO, IETF, NBS/NIST et al.

Tuesday, July 06, 2021

Building a Raspberry Pi 4 MPI Cluster in 2021

 


We are using Ubuntu on Raspberry Pi 4 boards with 8GB RAM, 64GB flash drives, coupled using a cheap gigabit Ethernet switch. I wanted to use PoE (power over Ethernet), but that requires a "hat", an additional daughter board to extract the power, and it's moderately expensive compared to the cost of ordinary power supplies. Moreover, PoE-capable switches are more expensive and a shade harder to get ahold of. (Apologies for the mess in the photo, we should straighten that out. We also don't yet have a permanent location for this. Another group in our lab 3-D printed holders and a 19" rack mount frame for theirs, but we haven't gotten that far yet.)

Getting it all running was rather a pain. Here were our pain points and some advice:

  • We accidentally installed ARM7 Ubuntu on some machines and ARM64 on others. This problem won't become apparent until you compile and run your own code on multiple nodes via MPI, at which point it will tell you "Exec format error," and you'll have to go back and make all the nodes agree on architecture & chipset support. This was the last major problem we had to solve, but I mention it first since it's one you want to get right up front. All things being equal, unless you're creating a mixed cluster with older hardware, you probably want the 64-bit installation.
  • My first mistake was mixing installs of MPICH and OpenMPI. They are two separate implementations of MPI. Either is apparently fine, but don't mix them. If you just do
    sudo apt install mpi
    you will get OpenMPI. It doesn't include headers and the development tools, so you won't be able to compile. You also need the package mpi-default-dev.
  • You need openssh-server, but that's usually included in a default Ubuntu setup. Likely, you'll also need to install gcc, make, git and gdb.
  • We're still tinkering with the best way to share setup info, including username databases and SSH keys for students and the like, but what we've settled on for the moment is Ansible, a popular networked systems management tool.
  • We set things up to share the executable via NFS. (We're not doing data-intensive stuff, just introductory programming exercises for now, so we're not sharing some major data farm.) Getting permissions right here took a little bit of work.
  • Our biggest pain point, which took the longest to solve, was getting the firewall settings right. Even though ompi_info tells me it's not compiled for IPv6, in fact the basic ssh that is used to initiate communications apparently runs over IPv6 anyway, if v6 is configured on our systems. Took us a couple of hours to figure this one out. Even when we briefly turned off the firewall entirely for debugging purposes, we were getting timeouts that baffled us. (ss was a big help here in figuring out what connections are trying to happen, but it takes a little grepping to sort the wheat from the chaff.) (And random, 35-year-long rant: what is it with UNIX folks and short commands/tool names? "ss"? What is that?!? At least "netstat" has some mnemonic relationship to what it does.)
    Also, the default setting for Ubuntu firewall is "all outbound traffic allowed, no inbound traffic allowed," so even if you think you have the firewall entirely off, that might not mean what you think it means!
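Before anything else, it's worth confirming the architecture issue from the first bullet directly. A minimal check might look like the following (a sketch assuming Ubuntu's dpkg is present, with `uname -m` as a fallback); run it on every node and compare the answers:

```shell
# Print this node's userland architecture (e.g. arm64 vs armhf).
# Run the same command on every node; if the answers don't all match,
# MPI jobs built on one node will die on another with "Exec format error".
arch=$(dpkg --print-architecture 2>/dev/null || uname -m)
echo "architecture: $arch"
```

With passwordless SSH in place, something like `for h in raspi1 raspi2; do ssh "$h" dpkg --print-architecture; done` (hostnames are placeholders for your own nodes) compares the whole cluster at once.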
When your setup is close to working right, 

mpirun -np 2 --host raspi1,raspi2 hostname 

should print out the names of your hosts.  (Replace raspi1 and raspi2 with DNS names or IP addresses for your machines.) That just executes the command hostname on the remote host, showing that your communication is working. Since each machine has that command on it, it won't reveal the first problem above, the ARM7/64 issue.
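Since hostname exists on every node regardless of architecture, a tiny MPI program of your own makes a better smoke test. Here's a sketch; `raspi1` and `raspi2` are the same placeholder hostnames as above:

```shell
# Write a minimal MPI "hello" program. Unlike `hostname`, this exercises
# a binary you compiled yourself, so a mixed arm64/armhf cluster fails
# immediately with "Exec format error" instead of looking healthy.
cat > mpi_hello.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("rank %d of %d says hello\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF
echo "wrote mpi_hello.c"
```

Compile with `mpicc -o mpi_hello mpi_hello.c` (mpicc comes with the mpi-default-dev package mentioned earlier), make the binary visible to every node (e.g. on the NFS share), then `mpirun -np 2 --host raspi1,raspi2 ./mpi_hello` should print one line per rank.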
 
That's just some quick notes in case you're running into similar problems. I'll try to flesh this out later.

Sunday, June 20, 2021

Spelunking CACM, vol. 4 (1961): Soviet cybernetics and computer sciences, 1960

The January 1961 issue is dedicated mostly to compiler-related issues, especially ALGOL, though there are some articles on arithmetic and one on digital computers in universities; other issues from the same year include work on error correcting codes. One intriguing article discusses mathematical models for documentation and search. Algorithms are published en masse, with little commentary. Most are short subroutines for calculating mathematical functions. The July issue includes what might be the first publication of quicksort; I'm not enough of a historian on algorithms to say whether it's new here, or just published for the record. But the description is, um, terse:

That's it, that's the whole thing. No, I can't read it, either, and I think I know how quicksort works.

CACM, by now, features black and white photos on the cover. By 1961, we can say that CS research and the operation of ACM are in full swing. There is even a letters to the editor section; one February letter discusses an earlier article on multi-processing (contrasted, correctly, with multi-programming). There are a few women authors; Lynn could be either man or woman, but Joyce, Judith, Mary and Patty are unlikely to be men. (A noticeable number of authors use only initials, as well.) The names Wilkes, Hoare, Dijkstra flit past; and, for the first time, I spot Knuth's name as an author.

But one article in particular caught my eye.

Edward A. Feigenbaum, one of the founders of GOFAI, already a Berkeley professor at the time, visited the Soviet Union in 1960, and had some things to say about the state of their computing (and their ability or willingness to run an interesting conference). Interestingly, in 1982, Ed would be one of the prominent senior foreign guests at the first Fifth Generation computer conference held by ICOT in Japan.

In the article, "Soviet Cybernetics and Computer Sciences, 1960", Feigenbaum does quite a bit of complaining about the Soviets as hosts. The report is long and detailed (14 pages of 3-column text, no figures), covering his attendance as a delegate to the First International Congress of the International Federation of Automatic Control.  If you read Russian (or use a translator), you can find a report at http://www.mathnet.ru/.

Feigenbaum objected to the style of the conference, referring to the "tedium" of each paper being followed by an extensive "discussion" that amounted to a further clarification or rebuttal of the paper. Again, here, he complained about the erratic performance of translators.

For Feigenbaum, and probably for his audience, the most interesting part was not the 400 papers presented by 1,200 delegates, but the individual visits he managed to make, seeing some Russians he already knew by name. However, he was stymied in his attempts to see others, and some of the ones he did get to meet offered him no interesting information. But he did manage to find some people working on speech, automated translation, brain simulation, and other AIish topics as well as mathematical computation.

He actually learned quite a bit about some of the computers themselves, including which ones were mature enough to handle a true compiler for a language.

He described the chess machine (by which I think he means JOHNNIAC running the Newell-Shaw-Simon program; he refers to the machine as "antediluvian") and geometry machine (Gelernter at IBM) then under development. Apparently, the optimists at the time believed that a chess machine would be (world?) champion by 1970, and that machines would prove new mathematical theorems by then as well. The Soviets seemed to concur with that as a timeline, but were amazed that such impractical research was "allowed" in the U.S., and might even be conducted by capitalist corporations. Feigenbaum explained how foundations, corporations and the government support research, and the Soviets were reportedly impressed.

In the end, Feigenbaum concluded that 

I concur with the opinion of most U. S. computer scientists who have visited Russia that at present the United States has a definite lead over the Soviet Union in the design and production of computing machines, but that there is no gap in fundamental ideas, with the possible exception of the production of reliable transistors. With the importance of computers to modern science and technology, there is no doubt that fairly soon the Soviet Union will be producing as many computers as we do. To what extent they will utilize these computers effectively, and in what new ways, I have no immediate answer[.]

Of course, we now know that reliable translation has taken a further sixty years already, and performance is still spotty. I sometimes wonder how much of the complexity of such problems people like Feigenbaum accurately anticipated.

Monday, May 31, 2021

Spelunking CACM, vol. 3 (1960): Automatic Graders for Programming Classes

This one boggles my mind. In the October 1960 issue of Communications of the ACM, Jack Hollingsworth of the Rensselaer Polytechnic Institute Computer Laboratory published a paper titled, "Automatic Graders for Programming Classes".

This was on an IBM 650 computer. The 650 has a drum for temporary storage, and input and output are via punched cards. The grader program itself functioned as an executive of sorts, loading a student's already-compiled program from a card deck, setting up input data, running the program, comparing the output to an expected value, and aborting by punching a card indicating the error if the output doesn't match. The grader is remarkably sophisticated; it can handle multiple independent assignments in a single run, by using different card decks for the input and output expected values.

They used this grader in a class of over 80 programming students. The article doesn't say if any of the students were women, but RPI already had a handful of women students at the time, so it's possible. Two machine operators are mentioned by name in the acknowledgments, both women; it's likely that they had a very high degree of technical skill in operating the machine and possibly in programming it.

"In general only an eighth as much computer time is required when the grader is used as is required when each student is expected to run his own program, probably less than a third as much staff time, and considerably less student time." That was very important in the 1950s, as machine time was an expensive and prized commodity.

The writing of the paper is a little rough; there's not much in the way of introduction, and it just dives straight into some of the details of using the program. We do learn that the grader was first used fifteen months before the paper was written, so presumably in 1959, perhaps as early as 1958. Pseudocode is included.

Given that I still grade student programs by hand, I should probably take a lesson from some of the pioneers from before I was born, and learn to save myself some work!