Thursday, July 31, 2025
If you're a history buff, a collection of writings on the history of the Internet in Asia:
https://sites.google.com/site/internethistoryasia/home?authuser=0
Monday, July 28, 2025
Bollocks
Professor Peter Gutmann, of the University of Auckland, a well-known computer security researcher, has made something of a name for himself as a quantum skeptic, at least with respect to quantum cryptanalysis and post-quantum cryptography (PQC). His argument is roughly two-pronged:
- Quantum computers aren't making progress on factoring at all; and
- even if they did, computer security people and cryptographers have much larger problems to worry about.
- Germany had a massive weapon systems boondoggle during WWII.
- OWASP lists a lot (tens of thousands!) of threats to the security of computer systems, and the highest-ranked attack on the mathematics of encryption was #17,245 (not a typo). Roughly, the argument is:
- Mathematical attacks are high-effort and have a low probability of success;
- even if you succeed, you recover a few bits of the contents of one message; and
- with the top ten high-priority problems, when you succeed you win big -- you get "the plaintext of all of the messages."
- (And holy cow, not directly in Peter's talk, but there are always examples of how human stupidity is the number one threat!)
- NSA can already factor 1,024-bit RSA keys, if they're willing to commit leading-edge supercomputer time in allocation chunks of a year at a time.
- Quantum computer-based factoring has grown from 4 bits to 5 bits over the last 20+ years.
- Quantum computers are physics experiments, not computers.
- (Brings up poorly-regarded D-Wave factoring.)
- PQC is very hard to implement well, and costs a lot in engineering time and network/execution time.
- (Various disparaging of academics and standards organizations as becoming self-justifying.)
- "Quantum cryptanalysis is the string theory of security" and my dog can factor just as well.
- Yes, and while cautionary tales are good, cautionary tales are not the same thing as prediction or even valid analogy.
- Okay, but I think the implied message here is off: if you can crack one Diffie-Hellman key exchange, you gain the ability to read everything in that conversation (n.b.: it's harder than just factoring or discrete log; there are other mechanisms involved), but the bigger catch would be the RSA private key of an important individual, which would allow you to impersonate them across a range of systems; certainly there are organizations that would pay a lot of money for that. Of course, I'd argue that truly high-value targets connecting to high-value systems are likely secured via shared private key, so hacking RSA is lower value. Peter is definitely more knowledgeable than I am in this area.
- Okay, but is that relevant to the anti-quantum argument? Is the argument just that people won't really commit big resources to factoring? I'd like to hear the oral argument that accompanies this point.
- This is the big one: he's saying progress in development of quantum computers is so poor that we can effectively discount them as any sort of threat. Ooh...okay. It's a fair point that reported successes in the literature on the task of cryptanalysis are advancing at a glacial pace. (We have worked on this topic.) But projecting from past progress to future progress is dangerous in this field. We have known since the beginning of this field that error correction would be necessary. Until we hit the threshold that allows quantum error correction to be executed effectively, progress on hardware did not translate into equivalent algorithmic successes.
Well, guess what? The relentless improvement in hardware means we have passed that basic threshold on at least two very different hardware platforms in the last two years. At least two other companies have released roadmaps that take us to large-scale, fault-tolerant systems within five years. At that level, that means they think they know how to solve all of the problems standing in their way. Even if they are off by a factor of two, that still means we're there within a decade, I'd bet sooner.
So my opinion is that pooh-poohing the likelihood of the advent of cryptographically relevant quantum computers (CRQCs) seems unwise. I think it's bordering on irresponsible to assume the technology won't happen; the argument instead needs to be about how much to prioritize countermeasures.
- In today's environment, strongly agreed. Dave Farber said to me several years ago (perhaps as far back as 2019, though I think it was a little more recently than that), when I showed him some Qiskit code, "This isn't an application, it's an experiment." I think we as a community need to think very hard about how to deliver hardware, runtime systems and tools, and applications to customers.
- (Pass.)
- Cost of PQC is high -- oh, yes, definitely. I attend IETF meetings and listen to people moan about how hard it is. I'm not an expert here, though.
- (Pass.)
- Funny! (I need a dog. I love dogs. But I'm allergic to dogs, work too much and travel too much, and I think dogs in Japan don't have good lives, but all that's for a different posting, some other day...)
Monday, July 07, 2025
I Believe in Web3...Just Not That Web3
(Note: This was originally written in summer and fall 2022, and for various reasons I decided not to publish it then, despite the obviously enormous amount of work I put into it. One reason was that I wasn't completely satisfied with it, so I still consider it to be a work in progress. Given the enormous volumes of writing out there dedicated to the broad topic of web3, it would also be rather bold of me to think that I have much new to offer. This is, instead, a way to organize my thoughts, which I am willing to share with you. If you are kind, I will be happy to carry on a dialog that might improve my understanding, as best my time permits.)
I believe in the re-decentralization of the web. I want creators to be paid and I believe in the idea of micro-payments, and it would be nice if that meant something other than advertising. I love Larry Lessig's idea of managing the incentives that web2 companies have to create addictive, outrage-driven products. Although the Russian aggression in Ukraine brings the whole liberal project into question, I still think increased flow of goods, people, knowledge and principles across borders leads us toward a better world. And Joi Ito's description of web3 as being about community brings it into focus, gives it a direction.
What I don't believe in is most of the technologies being touted as transformative and necessary to web3.
What I don't believe in is blockchain as a currency, a store of value, or a speculative investment. I do believe it potentially has value as a public record of certain communications. It is also a brilliant technical innovation, still searching for the right way to be applied.
I very definitely don't believe in NFTs. I can't see what value they add at all.
And the metaverse...ah, the metaverse.
Backing up for just a second, blockchain, NFTs, and virtual reality/metaverse are the rather disparate technologies that are getting welded together and touted as the cure for everything that ails today's World Wide Web (and there is a lot that ails the web). Collectively called web3, the purported win is that they enable DeFi (decentralized finance, in contrast to CeFi or the "fiat economy", which is never used as a compliment), DAOs (distributed autonomous organizations), and more.
Supposedly.
And supposedly, in at least some tellings, not only do these technologies solve what ails the web, they solve some fraction of all the world's problems.
Which means, I suppose, we need to begin with a quick look at what some of those problems are and how we got here before we look more closely at the technologies on offer.
The Vision
Among quite a number of things I read, the longest and most coherent was Joi Ito's book in Japanese, 「テクノロジーが予測する未来」(tekunorojii ga yosoku suru mirai, or The Future that Technology Predicts, more or less), which I read most of. It partly inspired this posting, so I will be referring to it quite a bit, but I don't want this to be just an analysis of Joi's arguments. The book is very focused on the notion of community and the ability to quickly create and scale up new communities. Joi is well known for being quick to "try on" new ideas, always looking for something to remake the world, so his thoughts are interesting even if not always as grounded in technological or human feasibility as we might wish.
Joi describes web 1.0 as read, web 2.0 as write, and web3 as join (or maybe "participate"). This takes us from 1.0's monolithic web servers, whose installation, maintenance and publication required significant technical expertise and capital, through blogs (and the early days of online shopping), to 2.0's Facebook-dominated world of SNSes, where anyone can write or share photos far more easily and a handful of hypergiant e-commerce corporations control what we search for, buy, read and use. Today, the buzzword web3 is supposed to help us build large-scale, global, autonomous communities, with as little regard for existing prejudices, practices, rules and laws as we can get away with.
Joi describes project-based organizations that come and go like movie productions. Given the chaos, stress, grift and uneven distribution of rewards in moviemaking, I'm not so sure that's an attractive description. It also sounds like a macro-scale gig economy, where many people have to hustle for every dollar, and I am certain that is not the right model for everyone (though it may be for some).
The word efficiency comes up repeatedly. To Joi, this seems to be the heart of what these new technologies bring, and it is a seductive Siren. After all, the worldwide web itself is "only" a more efficient way of publishing and sharing information. If a new efficiency really takes hold, it can transform the world.
Ownership of not only the things you create but the things you buy is frustratingly difficult to even understand, let alone manage, today. Amazon can delete things from your Kindle, and prevents you from selling them on to others. HP can remotely disable the printer you bought and paid for if the credit card they require you to keep on file expires. John Deere has...a complicated relationship with the right to repair something you own. (Okay, now we're getting a little far from web3.) John Deere also lays claim to the data that their Internet-connected farm equipment collects. One of the principles of web3 is to return ownership of data to those who generate it.
The Internet is sometimes touted for its "permissionless innovation" (which, of course, is how we got the web itself -- no one had to give Tim permission to deploy the first web server). I would call DAOs "deferred legality". Spin up a quick bulletin board or server or Github project for the community to meet, invite others in, establish a handful of rules on how decisions are made (By humans? By an algorithmic "smart contract"? How do you allocate weight in voting?), how work is distributed and how people are paid (in tokens, presumably, whose utility for buying real-world groceries may vary), and you're in business -- maybe literally selling something, maybe just collectively creating something fun.
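Just to make that "how do you allocate weight in voting?" question concrete, here's a toy sketch (entirely hypothetical, in Python; a real DAO would do this in an on-chain smart contract, and all the names and numbers here are invented). The mechanism itself is trivial; everything hard lives in the rules around it:

```python
# Hypothetical token-weighted DAO vote tally -- a minimal sketch, not any
# real DAO's governance contract.
def tally(holdings, votes, quorum=0.5):
    """holdings: member -> token count; votes: member -> 'yes'/'no'."""
    total = sum(holdings.values())
    yes = sum(holdings[m] for m, v in votes.items() if v == "yes")
    no = sum(holdings[m] for m, v in votes.items() if v == "no")
    if (yes + no) / total < quorum:   # not enough tokens voted at all
        return "no quorum"
    return "passed" if yes > no else "failed"

holdings = {"alice": 600, "bob": 300, "carol": 100}
print(tally(holdings, {"alice": "yes", "bob": "no"}))  # 'passed'
```

Note that alice outvotes bob and carol combined all by herself; token-weighted governance concentrates power exactly as fast as the tokens do, a worry I come back to below.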
The Technology
The Legal Matters
We actually just covered some of the legal issues of NFTs, sort of, so let's look at DAOs in the real world.
Joi touts the innovative, token-based governance of DAOs, but it's really not clear how contract disputes, labor disputes, legal liability, taxation, and adherence to international norms should be handled. Environmental and workplace regulations (not to mention rent, equipment and insurance of all forms) are pushed to the individual participants. Given that the vast majority of DAOs will remain too small to bother with, it can be argued that the deferral of resolving legal status is the right approach, but it's worth noting that this even includes the matter of legal jurisdiction.
Of course, I am not the first to think of any of this. In fact, Wyoming already has a law on the books on how to incorporate an LLC for a DAO. Tennessee also has such a law, signed into law in April 2022, and aims to be the "Delaware of DAOs". Services to help you set up a Wyoming DAO abound, and they encourage you to put in as much real money as possible and to clarify these issues as completely as you can. Of course, the more completely you specify these things up front, the more it looks like a conventional small company with employee ownership, but there do seem to be differences. Consider, for example, a smart contract used to make decisions with impact outside the immediate group, such as buying or selling something. What if the transaction happens to be illegal in one or more jurisdictions? Who can be held liable, and how can the DAO be modified to make sure it doesn't happen again?
The Fit: Does the Tech Do the Job for Web3?
Transactions
Education
I don't see much overlap between web3 goals and education.
DAOs and Human Organizations
Instability, Scams and Ponzi Schemes, Burning up the Earth, and Other Such Minor Issues
Surely there is nothing more to be said by this point; a lot of people have pointed out problems with both blockchain and NFTs. Most importantly (in line with #1 in the list in the next section), I think it is absolutely unconscionable the amount of energy expended every day for such a small number of transactions. Proponents have talked for years about shifting from proof of work to proof of stake, but a) it doesn't seem to be happening, and b) proof of stake appears to exacerbate some governance and consensus problems.
Arvind Narayanan had a nice thread on blockchain, quoting a blog posting by Bruce Schneier. Bruce and the others he links to cover the core arguments pretty well, so I am not going to reiterate them all here. Bruce is one of the signers of the letter to Congress urging regulation of crypto finance. Quoting just a couple of Arvind's tweets,
[B]lockchain has so far proven useless. Worse, it's proven a costly distraction to people and communities who are trying to solve real problems...I can't tell you how many times I've talked to energetic students with great ideas about what's wrong with our institutions who, in a sane world, would be working on fixing our institutions but instead have been seduced by the idea that you can replace them with smart contracts.
Let me address just a few of the issues that I think haven't gotten as much attention as they deserve, at least in the set of things I have been reading.
Getting Squeezed
Miners join a network voluntarily, with the idea that they provide a service that others will pay for. The single biggest problem in this libertarian paradise, from what I can tell, is that very aspect. With only a handful of miners worldwide, miners could make a comfortable living and the environmental impact would be low. But as long as there is profit to be made, new miners will join, driving up the collective mining rate and increasing competition, such that the probability of an individual miner receiving the payout for successful mining goes down. Thinner margins will mean that only those who can efficiently run large-scale operations can afford to stay in business. As a result, there is a massive explosion in worldwide mining capability that today damages the environment and distorts the market for semiconductors, but that can't hold indefinitely. I figure that ultimately there will be a crash or consolidation of miners such that we end up with the Walmart of miners, with everyone else driven out of business. (We may be seeing this already.) Or, perhaps the better analogy is Subway franchisees.
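A toy model of that squeeze (all numbers invented, purely to illustrate the dynamic, not actual Bitcoin data): with a fixed daily reward pool split in proportion to hashrate, every entrant dilutes everyone's expected revenue, so entry continues until the marginal miner sits at break-even and anyone with higher costs gets pushed out:

```python
# Toy miner-economics model: fixed reward pool, entrants dilute everyone.
REWARD_PER_DAY = 1000.0   # total payout available per day (arbitrary units)
COST_PER_MINER = 40.0     # daily operating cost of one identical miner

def expected_profit(n_miners):
    """Expected daily profit per miner when rewards split by hashrate share."""
    return REWARD_PER_DAY / n_miners - COST_PER_MINER

for n in (5, 10, 25, 50):
    print(n, round(expected_profit(n), 1))
# 5 -> 160.0, 10 -> 60.0, 25 -> 0.0, 50 -> -20.0
# Entry only stops at ~25 identical miners (zero profit); anyone with
# lower costs can keep joining and push the rest below break-even.
```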
Instability
"Pump and dump" and other scams abound, but an even bigger issue, if possible, is whether the system itself actually works and can be trusted to always work when we need it. As I write this in summer 2022, the last several months has seen a lot of instability in the cryptocurrency markets, even in the so-called "stable coins" that are supposed to be pegged to a currency such as the dollar, but in reality are still vulnerable to the fundamental issues of liquidity (one of the key concerns expressed by Paul Blustein) and whether or not someone actual wants to buy what you are holding at a time when you have little choice but to sell.
Lately I have been seeing ads for automated crypto trading accounts. Even in the highly regulated world of stock trading, algorithmic trading is potentially the most destabilizing technology introduced since the stock ticker itself. Lots of agencies oversee the large operators, and economists in every major bank, government and trading house must be scrutinizing the situation and looking for positive feedback loops that can cause markets to gyrate out of control. They have also instituted "circuit breakers" in case major problems develop. The financial crisis of 2008 may have shown that small investors can and will still lose their entire investment, but the existence of the big houses and the regulators reduces the number of scams and provides some (emphasis on "some") recourse when troubles occur. The cryptocurrency market has none of this.
Scrip and Taxes
A question: are DAO tokens just company scrip? I don't think so, but there are enough similarities to be disquieting. In the early 20th century U.S., with human mobility rising rapidly but not yet easy and the megalopolises not yet a majority of the population, many remote, small towns were essentially one-industry, even one-employer, towns. Employers often paid employees in scrip instead of U.S. dollars. Scrip could buy goods, at inflated prices, at the company store, or be exchanged for dollars at disadvantageous rates.
DAO tokens feel a little like scrip, a little like getting paid in stock. Tokens can be exchanged for goods, but only within a limited community. Because you aren't physically limited, you can shop anywhere that will take your tokens/scrip, but to participate in the broader economy you have to find someone who will change your tokens for money that works in your local economy.
Of course I understand that there are different kinds of tokens (some say as many as six), some of which are closer to currency and some of which are closer to voting stock. But if part of the design is to maximize liquidity and the velocity of the economy, won't even the stock-like tokens be traded rapidly and likely wind up concentrated in a few hands? I'm pretty unclear on how the dynamics of all of this are supposed to work out, how they are likely to work out, and what the failure modes are. But I'm pretty doubtful that DAO tokens and cryptocurrencies with little value behind them are ultimately stable.
Speaking of trading and the economy, if you are paid for work in some DAO's tokens, which you can trade for goods, when does it become taxable income? When it gets exchanged for local fiat currency? What if that is never?
Vision, Revisited
- Climate change and sustainable development. Without solving these things there is no community, no human security, no meeting of basic human needs.
- Data and information systems security. As IT people, this has to be Job One. Add in general systems stability (CIA = confidentiality, integrity and availability), and this is far more important than some random, clever new feature.
- Establishing personal autonomy, privacy and empowerment. Note that, to liberal me, this does not imply leaving people out there on their own, with no support.
- DEI. (To the extent to which it's different from the above.)
- The next billion. Kilnam Chon has a talk with the title Future Internet for the Other Billions. Ever since its inception, the Internet has faced the challenge that adding the next group of connected people means reaching groups that are less technologically literate, perhaps poorer, and facing greater environmental and infrastructure challenges of all sorts.
- Technologically, the end of Moore's Law. Estimates of data center energy consumption range from 70 TWh/year to three times that, or around 1% of the global total electric power generation. Although our efficiency has increased dramatically, we, as an industry, still have work to do.
Final Thoughts and Notes
...wait, I've gotten all the way to the end here without mentioning "the metaverse", or virtual reality worlds. Maybe that means...it's really a separate thing? FWIW, I was intrigued by VRML all the way back in the mid-1990s. It seemed like a good idea at the time. Whether the tech could keep up was another matter. I suppose, eventually, we will have Snow Crash-style virtual reality, but not yet. People -- including many who are unhappy with their real-world circumstances -- will probably like it. But other than a place to hang your virtual art you spent a lot of tokens acquiring, it's not clear to me that it's either necessary or sufficient for web3. (There have been lots of novels and short stories set in virtual worlds in the last four decades or so. One recent one I read is A Beautifully Foolish Endeavor, with a villain I swear is modeled on Jordan Peterson.)
Since I have invested two decades in working on a technical area or two whose real-world value has yet to be proven, I'm sensitive to criticisms of blockchain that it's a hammer in search of a thumb to hit. I have worked in large companies, in startups, and in academia in both Japan and the U.S., but people like Joi and many, many serial entrepreneurs have far more experience than I do, so you should probably trust them more than me on what works well and what doesn't.
One friend of mine said, "It's so complex it's hard to see how stupid it is." This friend has spent far more time studying all of this than I have. Until recently, I had not invested much effort in studying web3, and I am sure it shows. I will endeavor to keep learning, and may update this posting. If so, I will add a change log at the bottom.
A final note: I recently had a conversation with one of our students, Shaimay Shah, who is due to graduate momentarily. He said (quoted with permission),
I think the web3 tech is my generation's way to maybe make a difference...I want to look back 20-30 years down the line and tell my kids that I made a difference to society.
References
- Nick Weaver, "The Web3 Fraud", my favorite technical criticism on the technical costs of running a web3 site.
- Dave Farber and Dan Gillmor, Cryptocurrencies Remain a Gamble Best Avoided.
- https://www.msn.com/en-us/news/technology/bored-ape-yacht-club-the-nft-collection-that-s-becoming-a-real-offline-brand/ar-AAQT8yv
- https://www.vice.com/amp/en/article/y3v3ny/all-my-apes-gone-nft-theft-victims-beg-for-centralized-saviors
- https://moxie.org/2022/01/07/web3-first-impressions.html
- NFT Mona Lisa
- https://internetcomputer.org/
- https://dfinity.org/
- https://www.technologyreview.com/2020/07/01/1004725/redesign-internet-apps-no-one-controls-data-privacy-innovation-cloud/
- https://www.stephendiehl.com/blog/against-crypto.html
- NFTs being stolen:
- https://twitter.com/arvalis/status/1468814628276695041
- https://docseuss.medium.com/look-what-you-made-me-do-a-lot-of-people-have-asked-me-to-make-nft-games-and-i-wont-because-i-m-29c7cfdbbb79
- Inequality in crypto:
- Ashish Rajendra Sai, Jim Buckley, and Andrew Le Gear, "Characterizing Wealth Inequality in Cryptocurrencies".
- Khristopher Brooks, "Bitcoin has its own 1% who control outsized share of wealth".
- David DSHR Rosenthal, "Can We Mitigate Cryptocurrencies' Externalities?"
- https://twitter.com/doctorow/status/1493288001107021826
- Irving’s blog posts:
- https://blog.irvingwb.com/blog/2022/04/what-is-web3.html
- https://blog.irvingwb.com/blog/2021/12/the-metaverse-the-next-major-phase-of-the-internet.html
- https://www.mollywhite.net/annotations/latecomers-guide-to-crypto
- Links from Mike:
- https://www.eurasiagroup.net/live-post/the-geopolitics-of-the-metaverse -- useful background reading about the future of the metaverse.
- https://moxie.org/2022/01/07/web3-first-impressions.html -- especially the first and last sections.
- https://conversationalist.org/2020/03/05/the-prodigal-techbro/
- https://decrypt.co/100687/ethereum-creator-vitalik-buterin-contradictions-web3-values
- https://twitter.com/VitalikButerin/status/1526378787855736832
- Joi Ito 伊藤穰一 on web3 town Yamakoshimura 旧山古志村
- Another article on web3 town Shiwa-cho, a town of 33,000 people in Iwate-ken.
- https://ssir.org/articles/entry/the_good_web#
- https://ethereum.org/en/developers/docs/web2-vs-web3/
- https://inrupt.com/solid/ -- frustratingly short on details, but Tim Berners-Lee's vision sounds pretty good.
- Ethereum consortium, Web2 v. Web3. From Ethereum's point of view, of course, but pretty level headed. What's missing is whether Ethereum is either necessary or sufficient, or even a way, to achieve the goals laid out.
- https://www.zenbusiness.com/how-to-start-a-dao/
- Cointelegraph, "Deconstructing sidechains -- The future of Web3 scalability": https://cointelegraph.com/news/deconstructing-sidechains-the-future-of-web3-scalability
- https://asia.nikkei.com/Opinion/Bitcoin-will-be-remembered-as-a-historically-insignificant-fallacy
- https://doctorow.medium.com/moneylike-d20f8279a72e
- https://blog.makerdao.com/the-different-types-of-cryptocurrency-tokens-explained/
- https://www.fsa.go.jp/en/policy/bgin/ResearchPaper_qunie_en.pdf
- https://www.nasdaq.com/articles/what-is-ethereum-name-service-and-how-do-you-get-a-.eth-web-3.0-domain
- https://www.researchgate.net/publication/260438995_Trends_in_worldwide_ICT_electricity_consumption_from_2007_to_2012
- https://www.nature.com/articles/d41586-018-06610-y
- Morgan Ames, "Laptops Alone Can't Bridge the Divide," a must-read on the failure of technology alone to solve problems in education.
- Folding Ideas, "Line Goes Up -- The Problem with NFTs", a two-hour, fast-paced dissection of NFTs and blockchain both. Mostly clear and mostly calm, but occasionally heavy on the jargon and slips into occasional fits of outrage. Surprisingly watchable, despite the length and topic.
- https://nymag.com/intelligencer/article/three-arrows-capital-kyle-davies-su-zhu-crash.html
- https://web3isgoinggreat.com/
- https://www.nature.com/articles/s41598-022-18686-8
Change History
Thursday, May 15, 2025
Modern-Day Optical Network Physical Signal Encoding
This blog posting is still in editing, and is posted just so I could talk to some students about elements of the contents.
Recently, I posted about how SDH optical networks encode bits at the physical level. My interest in the topic stems from a desire to know how to multiplex classical and quantum information on different channels/wavelengths, and part of that involves a basic understanding of the classical signals on the fiber. In both the demonstration network we are building and in the longer term as experimental specifications develop into standards, we may choose to put classical synchronization signals and the like for the quantum signals into the same fiber, or we may decide to carry full-on classical data traffic.
Of course, SDH is rather old now, going back to the 1990s, and optical networking has advanced considerably, especially for data center and local area networks. The most obvious place to look for newer developments is Ethernet, so here we are. First, let's look at almost-but-not-quite leading edge networks, where the technological decisions are more settled than in, say, Ultra Ethernet, which is still under development. (I do hope to come back to Ultra soon, but the draft specs are currently closed to the public.)
[tl;dr: PAM-4, with four distinct signal amplitudes, is common. Development of Ethernet using 16QAM was suspended after a draft specification was developed but not approved. As far as I can tell there is no standardized use of quadrature amplitude modulation in the optical regime, though it's common in RF.]
Many of the Ethernet specifications are available for free from IEEE, including 802.3db-2022 - IEEE Standard for Ethernet - Amendment 3: Physical Layer Specifications and Management Parameters for 100 Gb/s, 200 Gb/s, and 400 Gb/s Operation over Optical Fiber using 100 Gb/s Signaling, and 802.3df-2024 - IEEE Standard for Ethernet Amendment 9: Media Access Control Parameters for 800 Gb/s and Physical Layers and Management Parameters for 400 Gb/s and 800 Gb/s Operation, which mention PAM but not QAM.
400Gbps Ethernet has 11 separate physical layers that run over fiber (one still in development), two twisted-pair copper and one backplane form. Let's focus on the fiber variants, since our interest here is photons in fibers (sometimes many of them, sometimes only one). Five of the variants listed at Wikipedia use PAM-4 (sometimes written PAM4 in the page), one is listed as 16QAM (but more on that below), and the rest don't say; perhaps they are simply on/off NRZ keying, as in earlier optical networks such as SONET/SDH. (It's nice that this information was much easier to find than the original SONET/SDH stuff! Partly because I better understand what I'm looking for this time, I suppose.)
...So what are PAM-4 and 16QAM?
PAM, or Pulse Amplitude Modulation, is pretty straightforward: instead of using signal ON at full power and OFF to represent a single bit, if you use several voltages (or RF or optical signal strengths), then each symbol can represent more than one bit. A related term is Amplitude Shift Keying; I'm not sure exactly why the Ethernet folks stick to PAM instead of ASK. The picture at the top of this posting is PAM-3: one level at the bottom, one in the middle, and one at the top. The sloping lines are transitions from one level to another; the cleaner those lines are, the bigger the "eye" is, indicating that your circuit is very stable. (If you don't know how to read an eye diagram, you should learn.)
PAM-4 uses four signal levels, carrying two bits per symbol. Prolabs and Samtec have nice explanations of PAM-4, including eye diagrams. I mentioned above that there are many different physical layers for Ethernet; PAM-4 is used in the 400GBASE-DR4, 400GBASE-FR8, 400GBASE-LR8, 400GBASE-FR4, and 400GBASE-LR4-6 variants, all running over four or eight single-mode optical fibers working together. (More on that multi-fiber concept some other time.)
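To make the bits-to-levels mapping concrete, here's a minimal sketch (my own illustration; the levels are normalized amplitudes, not the actual launch powers from the 802.3 specs) of Gray-coded PAM-4, where adjacent levels differ in only one bit so that a one-level slicing error costs only one bit:

```python
# Toy Gray-coded PAM-4 encoder/decoder. Levels are illustrative
# normalized amplitudes, not launch powers from any IEEE 802.3 spec.
GRAY_PAM4 = {(0, 0): 0.0, (0, 1): 1/3, (1, 1): 2/3, (1, 0): 1.0}
LEVELS = sorted(GRAY_PAM4.values())
INV = {v: k for k, v in GRAY_PAM4.items()}

def pam4_encode(bits):
    """Map a bit sequence (even length) to one amplitude per 2-bit symbol."""
    assert len(bits) % 2 == 0
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def pam4_decode(amplitudes):
    """Slice each (possibly noisy) amplitude to the nearest level."""
    out = []
    for a in amplitudes:
        nearest = min(LEVELS, key=lambda lv: abs(lv - a))
        out.extend(INV[nearest])
    return out

bits = [1, 0, 0, 0, 1, 1, 0, 1]
symbols = pam4_encode(bits)        # 4 symbols for 8 bits: 2 bits/symbol
assert pam4_decode(symbols) == bits
print(symbols)                     # [1.0, 0.0, 0.666..., 0.333...]
```

The eye diagram for a signal like this has three stacked eyes instead of one, which is why PAM-4 demands so much more of the circuit than on-off keying.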
That was easy. Whew! ...but what about 16QAM? Gotta mind your Ps and Qs...same thing? Nope! QAM is a lot more complicated, involving a lot of signal processing theory we're not going to get into in this blog, but let's take a quick look anyway.
QAM is Quadrature Amplitude Modulation. In QAM, we modulate the signal using two separate waves, one sine term and one cosine term. When using on-off keying or PAM on optical fiber, our carrier signal is just the laser light amplitude, and the system is relatively insensitive to the phase of the light. With QAM, however, the phase of the carrier is critical.
One of the most basic types of QAM is 16QAM. When first getting oriented, I found this web page to be helpful, but keep in mind that it's talking about the use of 16QAM for radio signals, not optical. In 16QAM, we use two signals, I and Q, that are both sinusoids, 90 degrees out of phase. Each of those two signals is modulated to carry two bits, then the two are combined, so that I+Q is the signal transmitted. The modulation involves two choices of amplitude, and two choices of phase -- either 0 or 180 degrees added to the already 0 or 90. Since there are four possible choices for each of I and Q, we have a total of 16 possible waveforms, hence 16QAM. The 16 waveforms are shown below.
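If you want to play with them, here's a rough numpy sketch (mine, with arbitrary normalized levels and carrier frequency, nothing from any spec) that generates all 16 waveforms as I*cos(wt) + Q*sin(wt):

```python
import numpy as np

# Toy 16QAM modulator: each 4-bit symbol picks one of four levels for I
# (cosine) and four for Q (sine); the transmitted wave is I*cos + Q*sin.
# The +/-1, +/-3 levels encode two amplitudes times two phases (0/180).
LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}  # Gray-coded

def qam16_waveform(bits4, cycles_per_symbol=2, samples=200):
    """Return the time-domain waveform for one 4-bit 16QAM symbol."""
    i_amp = LEVELS[tuple(bits4[:2])]   # first two bits -> I amplitude
    q_amp = LEVELS[tuple(bits4[2:])]   # last two bits  -> Q amplitude
    t = np.linspace(0, cycles_per_symbol, samples, endpoint=False)
    return i_amp * np.cos(2 * np.pi * t) + q_amp * np.sin(2 * np.pi * t)

# All 16 waveforms, one per 4-bit pattern:
for n in range(16):
    bits = [(n >> k) & 1 for k in (3, 2, 1, 0)]
    w = qam16_waveform(bits)
    print(bits, round(float(np.max(np.abs(w))), 2))  # peak = sqrt(I^2+Q^2)
```

Each of the 16 results differs only in overall amplitude (sqrt(I^2 + Q^2)) and phase, which is why the usual way to draw this is a 4x4 constellation diagram in the I-Q plane.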
A helpful, if not super-high quality, video:
https://www.youtube.com/watch?v=6BIqEWEe5-I
The 400Gbps Ethernet variant 400GBASE-ZR was supposed to use 16QAM. BUT:
The 802.3cw website says, "The work of the IEEE P802.3cw 400 Gb/s over DWDM Systems Task Force concluded with the withdrawal of IEEE P802.3cw PAR on 22 May 2024." Apparently, they published P802.3cw/D3.0, Dec 2023 - IEEE Draft Standard for Ethernet Amendment: Physical Layers and Management Parameters for 400 Gb/s Operation over DWDM (dense wavelength division multiplexing) systems as "Active - Draft" in Dec. 2023. It's not available in the freely accessible documents yet (which only include approved standards at least six months old, as I understand it), and even my university account seems to be unable to reach it. Too bad, that's definitely where I wanted to be looking! https://www.ieee802.org/3/dj/public/24_03/motions_3cwdj_2403.pdf says the PAR (Project Authorization Request) was withdrawn by unanimous consent on 13 March 2024. This was foreshadowed a month earlier in an email from John D'Ambrosia, chair of the TF.
According to the Wikipedia page on terabit Ethernet, it was proposed in 802.3cw to use dual-polarization 16QAM, which might add an extra bit but sounds even more complicated to me.
I don't know yet where the carrier for reconstructing the signal comes from... if it's just the laser itself, that's about 200 THz for 1.5 µm light, and we need some reference to find the right phase for the carrier. One research paper on carrier recovery:
https://opg.optica.org/jlt/abstract.cfm?uri=jlt-27-15-3042
No idea if that's what's used...
And another survey paper, heavily cited in papers and in (at least) 20 patents:
https://ieeexplore.ieee.org/abstract/document/5464309
As a companion to this posting, I am developing a Jupyter notebook on 16QAM that made the plot above.
Tuesday, May 06, 2025
Julia Parsons, "Code Girl", 1921-2025
The New York Times reported last week that Julia Parsons passed away. (The Seattle Times has a copy not behind a paywall.) She was probably the last living member of the WWII Naval Communications Annex team responsible for deciphering Enigma messages sent to and from German U-boats. She joined the WAVES in 1942, right after graduating from Carnegie Tech (now Carnegie Mellon University), and was assigned to work in the unit from 1943 through the end of the war.
I don't recall if she was mentioned by name in Liza Mundy's Code Girls, but she was definitely part of that crew. If you haven't read that book, you really should.
As one of the youngest members of the group, her initial task was to work directly on the deciphering of the messages from the U-boats. She worked with the US Navy Bombe, feeding it possible plaintext and ciphertext. The Bombe would then produce a "menu" of possible Enigma wheel settings that had to be checked to determine which (if any) of them would correctly decrypt the message. The Wikipedia article has an excellent description of the workflow.
Because the work she did was classified, she didn't talk to anyone about it until 1997, when she discovered it had been declassified in the 1970s. We probably lost a lot of history that way, as even by the 1970s many of the senior people involved had doubtless passed away.
Thank you, Ms. Parsons, for what you did for democracy and freedom. I know it came with a cost.
Wednesday, April 23, 2025
Basic Signal Modulation in SDH Optical Networks
Last night, I had an extremely basic question on how signals get onto an optical fiber, so I started looking for information on it, including the standards themselves. Despite starting from what I think of as a reasonable base of knowledge, it took me over an hour to find the answer, so I figured I'd write it down here, for both myself and posterity.
[tl;dr: At least for the set of specs I looked at, it's simple on-off keying with NRZ (non-return to zero) encoding, though RZ is also possible; that is, the light being on is a 1, and light being off is a 0. The ones I looked at don't use exotic things like phase-shift keying or the like.]
I already knew that I wanted to look at the standards for SONET or SDH (Synchronous Digital Hierarchy), which are essentially the same, and I knew that ITU-T is the organization that handles SDH. ITU-T is a standards organization, part of the ITU, which is an agency of the UN (although ITU itself is much older than the UN, go figure). It handles a lot of aspects of communication, but the part of interest here is Series G: Transmission systems and media, digital systems and networks. (Simply determining that this is the set of specs to look at took me a long time. Google doesn't really index into the ITU-T pages very well, and I wasn't familiar with the "Series" structure of the ITU-T standards. Moreover, when Google does give you a direct link to the file, it's often an older version instead of the correct, up-to-date one.) (It's also worth noting here that these are published as "Recommendations", and don't have the force of law unless adopted into a law by some country.)
It was pretty easy to find information on frame formats for STM-1, which is the key transmission format for SDH. Getting from there down to the physical layer encoding was the next step, where I stalled. Here's what I eventually found, most of them in the G.95x Digital line systems series:
- G.691 : Optical interfaces for single channel STM-64 and other SDH systems with optical amplifiers. This spec isn't part of the G.95x series, but it has more on the physical layer: eye mask, rise times, power levels, dispersion accommodation, etc.
- G.955 : Digital line systems based on the 1544 kbit/s and the 2048 kbit/s hierarchy on optical fibre cables. This is actually pretty old (1996), but hasn't been withdrawn. There are lots of values for allowable attenuation, etc. that were simply listed as "under study".
- G.957 : Optical interfaces for equipments and systems relating to the synchronous digital hierarchy. One of the most important, but be careful; it has been updated, so there are multiple versions floating around and the older one doesn't say "superseded". You want the 200603 version, dated 03/2006. This one also talks about dispersion, if that's your gig.
- G.959.1 : Optical transport network physical layer interfaces. Ah, finally, here's the money!!! A few simple lines in Sec. 8.2.2.13: "The convention adopted for optical logic levels is: − emission of light for a logical '1'; − no emission for a logical '0'."
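For concreteness, a tiny sketch (my own, purely illustrative; real transceivers shape pulses, scramble, add FEC, etc.) of the difference between NRZ and RZ on-off keying:

```python
# Toy on-off keying: expand each bit into samples of "laser on/off".
def ook_nrz(bits, samples_per_bit=4):
    """NRZ: light stays on for the full bit period of each 1."""
    return [b for bit in bits for b in [bit] * samples_per_bit]

def ook_rz(bits, samples_per_bit=4):
    """RZ: a 1 is light on for the first half of the period, then off."""
    half = samples_per_bit // 2
    return [b for bit in bits
            for b in [bit] * half + [0] * (samples_per_bit - half)]

bits = [1, 1, 0, 1]
print(ook_nrz(bits))  # [1,1,1,1, 1,1,1,1, 0,0,0,0, 1,1,1,1]
print(ook_rz(bits))   # [1,1,0,0, 1,1,0,0, 0,0,0,0, 1,1,0,0]
```

Note that the NRZ run of 1s produces no transitions at all, which is exactly why my first question below is about clock recovery.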
Some questions I still have:
- How is clock recovery done in the NRZ system?
- How are frames demarcated?
- How do you turn a laser diode on and off that fast in an electrical circuit?
- Definitely need to study up more on filtering, DWDM (especially how close channels are allowed to be), and add/drop devices.
Saturday, April 05, 2025
Spelunking CACM, Vol. 23 (1980): Pilot and Medusa
The structure of the magazine is changing, but not the covers, which are still primarily black and blue, with some abstract design. The articles are longer and more in-depth, and each issue has only a few articles. Some of the older types of notices and articles, such as short algorithm descriptions, have been retired. And then rather suddenly around August or perhaps a little earlier, the article format itself changed and became more modern, with a nicer header and a three-column, large-magazine format.
Apparently there was a thing called the IBM Programmer Aptitude Test, which I have never heard of. A quick check didn't find a copy of it online, but some discussion seems to indicate that it was a lot of math word problems. A couple of researchers tested it rigorously, and found, surprise, very weak correlation between the test result and grades in an introductory FORTRAN programming course. They also found an even weaker correlation between gender and performance.
Harold Abelson and Peter Andreae wrote about tradeoffs in VLSI design. Interestingly, this is the first time I recall seeing the term "VLSI", though maybe it just didn't catch my eye before. The term itself should have been only about three years old at the time (according to Lynn Conway's reminiscences), and yet the article doesn't bother to expand the acronym.
Not only was there a chess tournament featuring a dozen programs at a 1979 ACM conference, there was also an early attempt at a man-machine team, what we might now call "centaur" or "advanced" chess, playing against (in this case) a lone human. The article authors were relieved that the lone human won.
Xerox Business Systems (not PARC?) authors published about Pilot (the figure below), an OS for personal computers complete with a single 32-bit virtual address space. (In fact, we might call it a 33-bit address space today, since addresses pointed to 16-bit words.) Pilot used a flat (non-hierarchical) namespace for files, each of which had a 64-bit unique identifier they refer to as a capability. The capability is supposed to be unique in space and time, across all machines. Frustratingly, the article doesn't contain much on the hardware required to run Pilot, but it's implemented in Mesa and very closely tied to that language. Inter-process communication can be either via shared memory or the communication libraries provided, which primarily focused on the PUP protocol suite, though they describe similarities to the ARPANET protocol suite. Like TCP/IP, PUP includes internetworking concepts.
As long as we're talking operating systems, maybe the more interesting one technically is Medusa, which ran on the Cm* multiprocessor (the figure at the top of this posting), developed at Carnegie Mellon University. The article by Ousterhout et al. describes an extremely sophisticated OS running on a distributed shared memory system. The hardware, like a NUMA system today, can directly access local or remote memory, with up to about a 10x latency penalty. The processors are LSI-11s, a version of the workhorse PDP-11 that was used for so many things over two decades and several hardware and OS iterations.
A task force includes activities, with the former roughly resembling a modern process and the latter corresponding to threads, where each activity has a specific role in the overall program/utility -- except that each activity is bound to a processor, but a task force can apparently consist of activities on many processors. One approach, shown below, is to have the many utilities (daemons, in modern terms) of the OS each running on a separate processor.
Arguably Medusa echoes some aspects of Farber's DCS and in turn influences things like VAXclusters, though neither of those systems had direct hardware access to remote memory, as far as I know/recall.
Guy Steele and Gerald Sussman described a LISP microprocessor. Interestingly, despite the fame of that pair, this article hasn't been cited much; perhaps the commercial LISP machines of just a few years later don't really owe much to it?
Also, I hadn't realized that there were formal attempts to verify security in an OS that far back -- and using capabilities, to boot.
Enough for now. Once again, this is turning into a catalog rather than a dive into one or two pleasing papers, but it's intriguing to see so much on distributed OSes showing up. What a time it was, and I was still too young to participate at all, even though some of this major work was taking place driving distance from my parents' house, in Pittsburgh. If I had known then about that work, and that I wanted to do computing systems, I might very well have gone to CMU instead of Caltech, and how different my life would have been. Then again, my life later converged with many good people from CMU!
Cross-validating Quantum Network Simulators
New paper on the arXiv. I'll be presenting this one at an INFOCOM workshop in London next month.
During this cross-validation process, we not only fixed bugs in both simulators, but we gained a deeper understanding of the performance differences caused by protocol design differences.
Cross-Validating Quantum Network Simulators
Joaquin Chung, Michal Hajdušek, Naphan Benchasattabuse, Alexander Kolar, Ansh Singal, Kento Samuel Soon, Kentaro Teramoto, Allen Zang, Raj Kettimuthu, Rodney Van Meter
We present a first cross-validation of two open-source quantum network simulators, QuISP and SeQUeNCe, focusing on basic networking tasks to ensure consistency and accuracy in simulation outputs. Despite very similar design objectives of both simulators, their differing underlying assumptions can lead to variations in simulation results. We highlight the discrepancies in how the two simulators handle connections, internal network node processing time, and classical communication, resulting in significant differences in the time required to perform basic network tasks such as elementary link generation and entanglement swapping. We devise common ground scenarios to compare both the time to complete resource distribution and the fidelity of the distributed resources. Our findings indicate that while the simulators differ in the time required to complete network tasks, a constant factor difference attributable to their respective connection models, they agree on the fidelity of the distributed resources under identical error parameters. This work demonstrates a crucial first step towards enhancing the reliability and reproducibility of quantum network simulations, as well as leading to full protocol development. Furthermore, our benchmarking methodology establishes a foundational set of tasks for the cross-validation of simulators to study future quantum networks.
Monday, March 03, 2025
Open Faculty Positions at Keio's Shonan Fujisawa Campus
We have five, count 'em, five, open faculty positions in Keio's Faculty of Environment and Information Studies. If you are a researcher looking for a tenure-track position, please consider applying.
This call is very open; people of all stripes are encouraged to apply. As chair of the Cyber-Informatics Program, of course, I am hoping we will have the opportunity to hire some first-rate people in computing (defined very, very broadly).
Applications are due at the end of March, I believe.
https://www.sfc.keio.ac.jp/en/employment/
Saturday, March 01, 2025
Africa and Foreign Aid Today
Monday, February 03, 2025
Research is Winning the War Against Cancer
Do I have any friends whose lives have not been touched by cancer?
If your life has been touched by cancer -- your own, a family member's, a friend's -- then you are probably aware of how difficult a disease it is, but you may or may not know this: "The country’s cancer death rate has declined 33 percent since 1991, thanks largely to cancer research that has led to new treatments, gains in early cancer detection and, most significantly, a sharp decline in tobacco use, according to a new American Cancer Society (ACS) report."
In addition, five-year survival rates for many types of cancer have improved, and the treatments themselves have fewer side effects.
The research that leads to these improvements is a global effort, but the biggest funder of research (across all fields, from astronomy to zoology) in the world is the United States National Institutes of Health.
This kind of work takes DECADES. It is not work that can be started and stopped on the whims of individuals; an unplanned-for pause can literally destroy decades of work. It takes dedicated researchers, clinicians, and hospital staff; drug development science and chemical engineering; complicated distribution and testing systems; work to understand and mitigate tradeoffs in the effects on the body, work to sort out differences in effectiveness for people with different body types, current health and genetic backgrounds; and above all a system that is built to create AND SUSTAIN the workforce we need, including universities.
When we work for the common good, without thought of profit or politics, the U.S. makes the world a better place.
https://www.cancercenter.com/community/blog/2023/01/cancer-survival-rates-are-improving
Thursday, January 02, 2025
Spelunking CACM, Vol. 22 (1979): Cheriton and Denning
Maybe it's just me, but after the huge set of world-changing things in 1978's CACM, it seems like not so much in 1979. Also, in minor grumbles, I didn't find a picture I liked in either of the papers I decided to cover.
One that inevitably caught my eye because of the first author is David Cheriton's Thoth. It's a complete, though fairly simple, OS, designed and built with the goal of being portable across machine architectures. It includes (and depends on) a compiler for a custom language. The language is apparently named "Eh", but they refer to it as "the base language" throughout the paper. It's a descendant of B & BCPL, which should put it closer to C than not, but the syntax is actually rather different. It assumes that integer pointers are themselves consecutive integers; the language doesn't support the notion of a byte pointer, because at least one of the machines they wanted to target didn't support byte pointers.
The file system is a tree that supports UNIX-like mount points, cleverly called "grafts", but otherwise the file naming seems like something from a parallel universe, when viewed from 2025. Among other things, the root of the FS is called "*", so "*/src" refers to what in UNIX terms would be "/src".
There is a function for creating a "process", but its first argument is a pointer to a function. Their "processes" are actually closer to modern threads, and share an address space. The OS includes support for multiple address spaces; the set of "processes" in one address space is a "team". Support for multiple teams can be compiled in or out, assuming the hardware has an MMU and virtual memory.
There is no support for multiprocessors, though they claim and I believe that it wouldn't be too hard in the OS itself. A process runs until it blocks, so multitasking is cooperative.
They achieved the goal of portability of software and the OS; Thoth ran on both the Data General Nova 2 (released in 1973, already not a new machine by the time of this paper) and the TI 990/10, both 16-bit machines, and so Thoth was designed around those limitations despite the goal of portability. I think it's arguable whether the system would meet the goal of portability to a later architecture; I would say it was focused on that pair of specific systems rather than truly working toward a system with open-ended future portability.
Overall, maybe not so much to write home about, though Cheriton's later work on V is claimed to be a "successor" to Thoth, and is of huge importance in the history of distributed systems.
Of note for similar reasons is Dorothy Denning's proposal for a hardware gadget that does RSA encryption and allows a PC to encrypt data it sends to a centralized file server (CF), as well as to securely exchange files with another PC via the CF. Some of the functions appear to be transparent to the PC, others don't. Ultimately using RSA as the only encryption mechanism is computationally intensive, although at the time, according to Denning, Rivest himself claimed there would soon be high-performance hardware implementations.
There were a few other things here and there in 1979, but nothing that really compels me to write about it. As always, of course, my primary strength is the systems work; I can't comment as fluidly or recognize the importance as clearly when dealing with algorithms and theory.
(On a personal note, I think 1979 was the year I got my TRS-80 Model 1, using a Zilog Z80 8-bit CPU. A simple and interesting machine.)
Happy New Year!
[Photo: Sunset over fishing boats, Zaimokuza, Kamakura, Dec. 29, 2024]
I also did a thing or two for my soul; the most important was a nine-day road trip with my wife through western Honshu. Longest two-person vacation since our first kid was born more than a quarter-century ago.
However, I don't think I saw any live music, opera, theater or performing arts last year, except for an hour of Irish music in a pub in Dublin. That needs to be corrected in 2025!
I think I read thirty books this year, which is my goal each year, a very achievable number. (How many books you read doesn't really matter; there's no way to read any detectable fraction of the good books published in a year. Just read and enjoy.) In 2023, Kettle Bottom and Demon Copperhead (both coincidentally set in Appalachia) hit me hard. But I don't think any of this year's readings will leave a mark on my soul.
In movies, "Past Lives", "Perfect Days" and "Dune Part 2" are very different films, and each will stay with me for a long time. The first two were officially released in 2023, but I think I saw them both in 2024. "Past Lives" is about the ache for the paths not taken, while still acknowledging that the path taken is who we are. I sobbed all the way through it. "Perfect Days" likewise is about the choices we make and the life we make for ourselves. "Dune" is, well, Dune.
And of course we can't ignore the cataclysmic event of November 5. It's incumbent on us all to do what we can to help keep the world together.
On to 2025. Hoping I can find (well, make myself find) better work/life balance, while achieving the things I want. Let's roll!