So, a while back, I blogged about a teapot that keeps track of how much tea is drunk, and emails the data out over a wireless link. The idea is to allow adults to keep track of elderly parents without having to call every day, the theory being that someone drinking a lot of tea is healthy (and the converse). (Reportedly about 3,000 people have signed up for the service so far.)
Now comes word that the Tokyo metropolitan waterworks is going to do the same thing for the amount of water that gets used. If Dad didn't get off the futon to go take a bath last night, maybe we should give him a call...
This is also viewed as a prototype for real-time reporting of water usage and the elimination of the meter reader trekking house to house.
Friday, September 15, 2006
Tuesday, September 12, 2006
Fiftieth Anniversary of the Hard Drive
Jack Cole points out that Wednesday, September 13, 2006, is the fiftieth anniversary of the shipment of the first hard drive. The IBM 305 RAMAC (Random Access Method of Accounting and Control) held five megabytes, was about the size of two large refrigerators, and leased for about a quarter of a million dollars a year, in today's dollars.
I wonder what the MB/watt figure is for that puppy? That measure would see a steeper decline than most other measures of growth, I think.
Wikipedia has a photo. You should also check out the Magnetic Disk Heritage Center, run by Al Hoagland, one of the founders of the field.
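Out of curiosity, here's a back-of-the-envelope sketch of that comparison. The power figures are my own rough assumptions (reported numbers for the RAMAC are in the kilowatts; the "modern" drive is a round 2006-era approximation), so treat this as illustration, not data:

```python
# Back-of-the-envelope MB/watt comparison.
# CAUTION: the power figures below are rough assumptions, not measured data.

ramac_mb = 5.0          # IBM 305 RAMAC capacity: 5 MB
ramac_watts = 10_000.0  # assumed draw; reported figures are in the kilowatts

modern_mb = 500_000.0   # a 2006-era 500 GB desktop drive
modern_watts = 10.0     # typical active power for such a drive

ramac_eff = ramac_mb / ramac_watts      # MB per watt
modern_eff = modern_mb / modern_watts

print(f"RAMAC:  {ramac_eff:.4f} MB/W")
print(f"Modern: {modern_eff:,.0f} MB/W")
print(f"Improvement: roughly {modern_eff / ramac_eff:.0e}x")
```

Even with generous error bars on the assumptions, the improvement is on the order of a hundred million times, which does indeed make capacity growth alone look modest.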
Books at Narita, and Japanese Currency Exchange
I discovered last week that the new Tsutaya video/book store in Terminal 1 at Narita Airport has a good English-language section of books on Japan. One of the best I've seen, in fact. I picked up You Gotta Have Wa, by Robert Whiting; there were also books on Japanese swords, art, and architecture, to go with guide books and introductory language books (no advanced ones, though). The store is on the check-in level in the recently remodeled mall between the north and south wings, so if you want to check it out as you're coming in to the country, you'll have to work your way upstairs.
By the way, if you're travelling to Japan from the U.S. (say, for QCMC, just to give this posting a dash of quantum computation), change money once you get here, not in the U.S. I brought back some money on this last trip. At SFO (San Francisco), the exchange booths were offering 104 yen/dollar. At Narita, I got 115 yen/dollar, which is as good as you'll get at any bank. Traveller's checks are 2 yen/dollar better than cash. Remember, for the most part, using credit cards (especially American ones, with ever-changing security features) is very iffy outside of major tourist hotels. Mostly you still want to carry cash. Most Post Office ATMs here will accept U.S. ATM cards. I have heard that the exchange rate is good, but I don't have evidence one way or the other.
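The difference those rates make adds up quickly. A quick sketch (the $500 figure is just an example amount):

```python
# How much the exchange rate matters for a typical amount of cash.
dollars = 500          # example amount to exchange

rate_sfo = 104         # yen per dollar offered at SFO
rate_narita = 115      # yen per dollar offered at Narita

yen_sfo = dollars * rate_sfo
yen_narita = dollars * rate_narita

print(f"At SFO:     {yen_sfo:,} yen")      # 52,000 yen
print(f"At Narita:  {yen_narita:,} yen")   # 57,500 yen
print(f"Difference: {yen_narita - yen_sfo:,} yen "
      f"(about ${(yen_narita - yen_sfo) / rate_narita:.0f})")
```

That's 5,500 yen, or about $48, lost on a $500 exchange simply by changing money on the wrong side of the Pacific.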
Wednesday, September 06, 2006
It's a Boy!
Princess Kiko, wife of the Emperor's younger son, has had a boy. This, at least for the moment, ends the imperial succession crisis. No male eligible to ascend the throne had been born into the imperial family since 1965; now there is one.
The New York Times says "[t]he birth may also end the psychological drama surrounding the royal family," but in my opinion, it just makes the intrigue worse. If the child had been (another) girl, everyone would have been forced to face the idea of a woman emperor; there was no other choice. Instead, with a boy, there is the potential for the argument to continue, with some supporting the crown prince's daughter, Princess Aiko, and others arguing for the new boy (third child of the emperor's second son; he has two older sisters).
I don't have any particular insight into the royal family's internal loves, likes, and frictions, but I wonder what is going through the heads of the emperor, his sons, and their wives.
At any rate, congratulations to Princess Kiko and Prince Akishino on the birth of a healthy baby boy!
(I've blogged about this before, here and here.)
Japanese Home Construction
The Daily Yomiuri reports that the government will study measures aimed at increasing the popularity of constructing more durable houses. At first glance this is a head-scratcher; why isn't this just determined by the market? But cheaply constructed housing is so pervasive here that we're probably in some sort of metastable state where durable housing can't get a foothold in the market.
In the U.S., it's not uncommon for people to live in houses that are a hundred years old; you need to update the electrical system and maybe the plumbing, and if you want central heat/AC that's an expensive upgrade, but it's all doable. Twenty- or thirty-year-old houses are the heart of the U.S. real estate market (maybe the avocado green appliances get looked at askance if you haven't upgraded, though). Although scrape-and-bake McMansions have been the trend for a decade or so, fundamentally U.S. houses are built to last (though I have my doubts about the recent trends in big-chip particle board and some of the materials in the new large-house movement).
In Japan, it's different. I'm not old enough (forty) to associate Japan with cheap, flimsy goods, but that stereotype is really in evidence in the housing industry. Anything over ten years old is "old", and by the time it's twenty you probably don't want to live in it. When I first arrived here in 1992, I thought it was a legacy of the post-war poverty and housing shortage, but Japan has now been prosperous for several decades, and I see no changing trend. Perhaps it goes back several hundred years, to when the working assumption was that a fire would roar through Edo every thirty years or so and obliterate everything, so there was no point in overbuilding.
The average lifetime of a house, according to the article, is thirty years. The government hopes to raise the lifespan to forty years within the next decade, and up to 200 within the next fifty years. It's an admirable goal, but it would cut into the construction industry's business, so I'm guessing it will be opposed.
In a Japanese house or apartment, interior and exterior doors and cabinet doors are often made of flimsy material. Within a decade, they are chipped, the paint is flaking, the hinges are iffy, they may even be delaminating. Flooring is cheap and dents easily, carpets are thin and wear and stain (even without shoes on them). Countertops are either cheap vinyl or unattractive stainless (which does have its benefits).
One of the biggest reasons things age so quickly, IMHO, is the lack of central heating/AC. It's still very rare in houses and nonexistent in apartments. When you move in, you buy a wall-mount unit for each major room. This leaves hallways and entryways unregulated. In our current apartment, for example, they are on the north side and never get sunlight, so they are damp all the time. Cardboard boxes stored under the stairs disintegrate, papers go limp and mold or mildew, and shoes mold. It's startling and disgusting to pull out a pair of shoes you haven't worn lately and find them covered with a fine fuzz, even if you last wore them on a sunny day. This inevitably has to affect the structure of the house itself, and you can see it as wallpaper peels.
Japan is incredibly humid, and hot for much of the year, but this isn't the tropics. People in the southeast U.S. have dealt with this fairly successfully. I wonder how they manage in, say, Singapore, which is fairly wealthy and very tropical?
The most recent houses are sided with some sort of artificial material, often pressed and painted to look like brick or stone. I have no idea how long that will last, but I'm not optimistic. The only positive trends I see are that walls often have some insulation (usually 35mm of polystyrene foam), and double-pane windows are very gradually gaining popularity.
As long as I'm ranting about house construction, it's worth noting that few Japanese people have a clothes dryer. Hanging your clothes out works many days, but many days it doesn't, and your clothes are left damp, with a "sour" smell. We acquired a dryer, which we use probably a third of the time, but it takes hours to do a decent job. The dryer isn't vented to the outside, so it's really just heating up the water in the clothes rather than extracting it. In the U.S. it has been common to plan your laundry room on an outside wall so the dryer can be vented, but here in Japan dryers haven't yet become popular enough to make venting a standard design consideration, so the machines are stuck in this inefficient mode, which of course hurts their desirability and adoption.
Enough ranting for today...
Reproductive Ethics in Japan: Frozen Sperm
There have been three recent court cases here in Japan concerning control and use of sperm of men who have died. In the most recent case, a woman conceived a child via in vitro fertilization after her husband died, and asked the courts to recognize her deceased husband as the father. The courts refused. The three cases differ in length of time since the death, the details of the actual sperm storage contract, and whether the husband had agreed to a posthumous birth.
This is a complex issue. But it is hardly the only thorny reproductive issue. Who has the right to terminate a pregnancy? Is it a decision of the woman alone? What rights do sperm donors have with respect to their children, and what responsibilities? Genetic testing is opening all sorts of ethical issues, from gender selection to genetic illnesses; before long it may be possible to take a stab at the baby's adult height and other characteristics, allowing the parents access to a very crude form of genetic engineering. Science recently ran an article on the questionable success of prenatal surgery.
The world is changing rapidly, and the laws and mores of society aren't keeping pace. We should all do a lot of reading and thinking about these topics.
Tuesday, September 05, 2006
Origami Conference
Just heard about the Fourth International Conference on Origami in Science, Mathematics, and Education (4OSME) taking place at Caltech this week. Seventy papers, and an exhibition session. Worth checking into if you're in the Pasadena area (walk-in registration available), and even if you're not...
Implementing Shor's Algorithm
Two papers, one brand new and one I had missed earlier in the year:
- Sandy Kutin's "Shor's algorithm on a nearest-neighbor machine" may clear up some of the lingering details of composing adders into the full modular exponentiation needed for Shor, and offers an elegant, efficient overall algorithm. It also appears to call into question (rightfully) some of the simplifying assumptions I made about data movement to combine intermediate results when you do multiple additions in parallel on a very large nearest-neighbor (what I call NTC) machine. Nearest-neighbor is the worst possible architecture, as far as communication costs are concerned. Analyzing it is important in that it sets a floor for performance, but we all hope that for large machines we aren't really limited to NN. The question is, how much richer can we really make the topology, and what effect will that have on performance?
- Christof Zalka's "Shor's algorithm with fewer (pure) qubits" shows how to do Shor for an n-bit number using only 1.5n qubits. I haven't read anything but the first page, but it looks intriguing.
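For a sense of scale, here's a quick comparison of Zalka's 1.5n figure against the commonly cited 2n+3-qubit circuit of Beauregard. The numbers below are just those stated formulas evaluated at typical RSA modulus sizes, nothing deeper:

```python
# Qubit counts for factoring an n-bit number with Shor's algorithm:
# Zalka's 1.5n figure vs. Beauregard's 2n+3-qubit circuit.

def zalka_qubits(n):
    return int(1.5 * n)

def beauregard_qubits(n):
    return 2 * n + 3

for n in (512, 1024, 2048):  # typical RSA modulus sizes
    print(f"n={n}: Zalka {zalka_qubits(n)}, Beauregard {beauregard_qubits(n)}")
```

At n=2048 that's 3072 qubits versus 4099, a savings of roughly a quarter of the machine, which matters enormously when every qubit is precious.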
Both papers are potentially important advances toward realistic, efficient implementations of all quantum algorithms using binary integer arithmetic, not just Shor's algorithm. Without this kind of work, we will never know what kind of machine we actually need to build to meet some given performance goal.
I haven't yet read either paper in detail; both are for tomorrow's plane ride.
Monday, September 04, 2006
Origami
Geek Press recently had a link to a Discover article about "extreme origami", mostly focusing on Robert Lang and Satoshi Kamiya. I'll throw in a couple of links: Devin Balkcom's origami-folding robot and the New York Times' article on David Huffman's curved-crease origami (which appears to still be freely available, though registration might be required).
Sunday, September 03, 2006
"Gedo Senki" Review
Warning: Plot spoilers
This morning we went to see "Gedo Senki", Studio Ghibli's anime film based on the third and fourth books of Ursula K. Le Guin's Earthsea series. (See my earlier post for some useful links.) There are things to like about the film, but on the whole it's a bit disappointing, and I suspect that Le Guin won't really be happy with the result. (Indeed, after seeing the film, I agree pretty much completely with her assessment.) It is a beautiful film, and has many positive features, but in some ways fails to capture the spirit of the books.
First off, it's important to note that the film is more violent than the books. If you're thinking about showing it to your young children, think again. This isn't Totoro, it's more along the lines of Princess Mononoke, which certainly has its rough spots. My seven-year-old daughter was rather taken aback, and I missed much of the last half hour of the film taking her out to the lobby and calming her down. It has always struck me as a bit odd that she (and others her age) can completely slough off the violence of "Pokemon" and various robot/mechanical anime TV shows. Of course, the big screen is a different experience, and it didn't help that the showing started with ads for "X-Men" and some black comedy about a dead guy, but still. I think the main reason is that this film has a true emotional heart; she became involved with the characters, and cared about their fate, which isn't true for many of the violent superhero cartoons.
Mayumi, who hasn't read any of the books (though their Japanese translations are reportedly popular), found the film confusing. I was handicapped by language (I can still catch maybe only seventy-five percent of movie/TV Japanese, and the missing quarter is important), but somewhat helped by loving the books. However, it has been many years since I read the earlier books. Some of the sky scenes of a hawk, for example, are probably intended to represent Ged taking a hawk's form and spying on the evil sorceress, but are never explained. Likewise, the fact that Teru is in fact a dragon essentially turns up at the end, with little justification; we are vaguely given to understand early in the film that she is an unusual child, but Ged never pursues her nature and it is never explained. Most mystifyingly to non-initiates, the importance of true names is not adequately detailed, and the distinction between "Haitaka" and "Ged" is obscure.
The biggest problem with the film is that it turns Ged's quest to right what is wrong with the world, caused by misuse of magic, into a wizard-versus-wizard battle. In classic kid's film fashion, the kids have to rescue the adults in the end. Although the film has its quiet moments, the undoubtedly hard-to-film parts of the pursuit of knowledge suffer, and the thoughtful, emotional core of the books is hard to find in the film. Fixing the world, it seems to me, would allow for some great visuals, so I think they passed up a good opportunity there and misconstrued the spirit of the books at the same time. In this sense, it's like some of the things I disliked about David Lynch's version of Dune, where an important practice or mystical power was reduced to a high-tech hand weapon.
Among the things I would have done differently, but don't necessarily dislike, are the color palette and the city design. Le Guin declines to comment on what race the various characters appear to be, citing uncertainty about how they appear to Japanese. My wife (who is Japanese), when I asked her what race Ged was, said, "Hmm, Norwegian? The city feels Roman..." The differences among characters are awfully subtle, and none appear to equate to Earthly dark-skinned islanders or Africans. Instead they correspond to what seem to be conventional anime types, rather than a stereotype-bending set of choices. The colors of the city and its design are not what I have in my mind's eye, but not necessarily wrong.
I think this film, standing alone, is reasonably successful, but I don't think it lives up to the source material. Like Le Guin, I would have been happier to see the father (Hayao) direct, rather than the son (Goro). It's a good first film from Goro, but a better script would have helped. Like all Ghibli films, it's visually rich and the music is powerful, but around Tokyo it's possible to be over-exposed to Teru's theme song.
Le Guin says it's tied up in rights trouble and won't be seen in the U.S. until 2009. Too bad, because it is worth seeing, despite the flaws. A subtitled version exists; it was shown to Le Guin. It wouldn't surprise me if that made it out to the gray market somehow before a fully-authorized version does.
Headin' My Way?
News on Super Typhoon Ioke is a little sparse at the moment, but it may be heading toward Tokyo. It reached Cat 5 for a while, but it will inevitably have weakened by the time it arrives here.
It apparently flattened Wake Island yesterday, and should be pounding Minami Torishima about now, way southeast of Tokyo. Yesterday's path prediction showed it likely to pass through the Ogasawara Islands, toward Shikoku & Kyushu. As of 6:00a.m. this morning, though, the projected path has shifted far to the north, with a probable passage just east of Tokyo and landfall in northern Honshu, according to the most recent typhoon map from the Japan Weather Agency.
This might interfere with my plans to fly to the U.S. on Wednesday...
An article in today's Daily Yomiuri says that monitoring of hurricanes crossing the International Date Line to become typhoons began in 1951. Only six were recorded up to 1990, and ten have been recorded since then. The article quotes a meteorologist from the University of the Air, which is a wonderful name for a school.
Thursday, August 31, 2006
World Go Oza
The Toyota & Denso World Go Oza, a single-elimination tourney featuring 32 of the world's top players, is taking place right now in Tokyo. Japan started with ten players, but only three made it to the second round. The two North American representatives, three European representatives, and one Central/South American representative were all eliminated in the first round. The eight quarterfinalists were three Korean, three Chinese, and two Japanese. The Asia-at-large player (from Hong Kong) and Taiwanese player were both eliminated in the second round. Three of the four remaining players are (South) Korean, with one Japanese player. The semifinals are tomorrow, ending a week of intense play, but the finals won't be played until January 6-8.
Lee Chang Ho is one of the remaining four; many people seem to consider him the top player in the world.
My confusion is compounded by being unable to read the Chinese characters for many of the non-Japanese names (including the remaining nominally Japanese player, Chang Hsu, who is probably Chinese-born, if I guess right), and the fact that the Japanese assign a different phonetic reading to all of those names than their native readings, which makes it hard to match up the romanized version of their home-country names with the Japanese pronunciation.
Tuesday, August 29, 2006
Rose on Analog
Gordie Rose, of D-Wave Systems, has a new blog, and in a recent post talks about hating the gate model, and his strong preference for adiabatic quantum computation. Interestingly, he refers to the gate model as a useful theoretical construct but impractical to implement, whereas cluster state and adiabatic are practical.
My opinion, admittedly less informed than Rose's, is exactly the opposite. This comes, no doubt, from my background as a classical digital computer guy. I know how to create languages and compilers for them, and how to program them, how to build them, how to make them fault tolerant. Despite having taken a class from Carver Mead, I understand almost none of those critical topics for analog (whether quantum or classical).
Moreover, many quantum algorithms are essentially digital in nature (though the interference in the QFT can be described as an analog phenomenon). The best use of cluster-state computing we know of at the moment is as a substrate for the circuit model. And, at least for, say, digital arithmetic (something I know well), the cluster-state model uses one hundred times the resources of a more straightforward implementation. Michael Nielsen is working on some intriguing things, combining cluster-state with error correction in ways that may be more robust or more efficient, but that 100x penalty is a big one to overcome as we struggle to get even a few gates working properly.
Don't get me wrong -- I'm a big fan of analog computing. But Rose's opinion I find a little startling, and it will prod me into some more reading on adiabatic quantum computation, which has been on my list of things to do anyway...
Thursday, August 24, 2006
What He Said
Bruce Schneier is one of the sanest, smartest security people around, as he demonstrates once again with an essay on terror.
Friday, July 28, 2006
Now Available: "Arithmetic on a Distributed-Memory Quantum Multicomputer"
Available as quant-ph/0607160. This is an extended version of our ISCA paper, submitted to ACM's Journal of Emerging Technologies in Computing Systems.
Tuesday, July 25, 2006
Physics at Work and Play
A friend of mine sent me this, and wondered if it's fast enough to read your email on the surface of the pool...
The End of Moore's Law?
At ISCA this year, much of the talk in the halls was about the end of Moore's Law. Not down the road, when we get to atomic structures too small for lithography -- now. Moore's Law ended last year (2005). The CPU manufacturers can't keep increasing clock speeds enough to deliver the 2x performance improvement we have come to expect every (pick your doubling time).
The biggest problem is heat. Smaller transistors have higher leakage current, meaning more heat even when the system is idle. Raise the clock speed, and heat goes up. The maximum that can be extracted from a silicon die is about 100 watts per square centimeter. We're already there.
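The clock-speed/heat relationship follows from the classic CMOS dynamic-power formula, P ≈ αCV²f. Here's a back-of-the-envelope sketch; every number in it is an illustrative assumption, not a measured figure for any real chip:

```python
# Back-of-the-envelope CMOS dynamic power, P = alpha * C * V^2 * f.
# All values below are illustrative assumptions, not measurements.

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Switching power: activity factor * switched capacitance * V^2 * f."""
    return alpha * c_farads * v_volts ** 2 * f_hz

# A hypothetical mid-2000s CPU: 100 nF effective switched capacitance,
# 1.2 V supply, 3 GHz clock, 20% of the gates switching each cycle.
p = dynamic_power(alpha=0.2, c_farads=100e-9, v_volts=1.2, f_hz=3e9)
print(f"{p:.0f} W")  # roughly 86 W -- near the ~100 W/cm^2 ceiling for a 1 cm^2 die
```

Note that power is linear in f but quadratic in V -- and since higher clocks tend to require higher voltage, pushing the clock hits you twice.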
So, what next? Well, it is well known that Intel and AMD are both shipping dual-core processors -- two microprocessors on a single chip, sharing an L2 cache. Both companies have also promised quad-core chips by mid-2007. What is happening is that each core will now gain perhaps only 10-20% in performance each year, while the number of cores on a chip doubles every (pick your doubling time) until lithography really does run out of gas, or quantum effects come into play, in fifteen years or so.
What does this mean to Joe Programmer? It means that if you have code whose performance you care about -- whether it's physics simulations or evolutionary algorithms or graphics rendering -- and your code isn't parallel, you have a problem. Not tomorrow, today. The handwriting has been on the wall for years, with multithreaded processors and known heat problems and whatnot. Now it's really here. Of course, if you have very large computations, you have probably already made that work in a distributed-memory compute cluster.
Haven't you?
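For anyone who hasn't made the jump, here's a minimal sketch of what "making your code parallel" looks like at its simplest: a CPU-bound loop moved onto a process pool. I'm using Python's standard multiprocessing module; the toy workload and the core count are purely illustrative stand-ins for the physics or rendering kernels mentioned above:

```python
# Minimal sketch: a serial, CPU-bound loop vs. the same work spread
# across cores with multiprocessing.Pool. The kernel is a toy stand-in.
from multiprocessing import Pool

def work(n):
    """CPU-bound toy kernel: sum of squares below n."""
    return sum(i * i for i in range(n))

def run_serial(jobs):
    return [work(n) for n in jobs]

def run_parallel(jobs, procs=4):
    with Pool(processes=procs) as pool:
        return pool.map(work, jobs)  # same results, spread across processes

if __name__ == "__main__":
    jobs = [200_000] * 8
    assert run_serial(jobs) == run_parallel(jobs)
```

The catch, of course, is that most interesting programs don't decompose this cleanly -- which is exactly why the free lunch ending is a problem.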
Monday, July 24, 2006
Mental Soroban
I recently mused about my daughter's soroban (abacus) lessons. Yesterday we had a nice moment over dinner. She and a boy from her class (both second graders) were sitting at the table in a restaurant, and I was challenging them with doubling addition: "What's 1+1? 2+2? 4+4?" and on up. They got as far as 4096+4096, but couldn't do 8192+8192 in their heads.
We got to a certain point (1024+1024, maybe?) and my daughter thought for a moment, then produced the right answer. When I praised her, she said, "I just imagined a soroban in my head and used that." Yes!!! That's the way it's supposed to work! She's actually learning math.
(I once TAed in a gifted program for junior high schoolers, and one of them had memorized a log table and could do not just large multiplications but even exponentiations in his head. That seems a little extreme...)
(Back In The Day when I was doing a lot of hexadecimal debugging on hardware, if I got stuck on a train without something to read, I would run through the hex multiplication tables in my head. I got good enough at it to be useful for work, but I haven't used it much in a long time, so it's mostly gone now...)
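If you want to run the same drill (on paper rather than on a train), the table is easy to generate; this little sketch is just one plausible layout:

```python
# Generate the hexadecimal multiplication table mentioned above --
# the kind of drill you might run mentally during low-level debugging.
def hex_times_table(n=16):
    """Return the 1..n-1 times table as rows of hex strings."""
    rows = []
    for a in range(1, n):
        rows.append(" ".join(f"{a * b:3X}" for b in range(1, n)))
    return rows

print(hex_times_table(16)[11])  # the "C" row: C, 18, 24, ..., B4
```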
Thursday, July 20, 2006
Food With Mojo: the Molecular Tapas Bar
This is food with mojo. I'm not talking Texas side-of-steer barbecue mojo, nor am I talking habanero-induced neuronal apoptosis mojo. I'm talking about a chef with I'm-in-complete-control-of-your-sensory-experience mojo: the Molecular Tapas Bar at the Mandarin Oriental Hotel in Tokyo.
You'll find pictures on Mayumi's blog.
This was way more money than we needed to be spending on dinner as we stare at a life of postdoc penury, but it was a combination belated fortieth birthday, thesis defense, and upcoming tenth wedding anniversary dinner, and it was worth it (we had originally hoped to spend part of this summer in Europe, but things haven't all gone according to plan). We probably don't get to go back to the Molecular Tapas Bar until one of you with a large expense account comes to visit, in which case you need to let me know at least a month in advance, so I can get reservations.
The evening started with a stroll through the Ginza, including a stop for sangria (for Mayumi), iced tea (for me), olives and cheese at an open-air Spanish bar. The bar is just a couple of doors from a cheese shop we like, but we were just browsing, since there was a long evening and no refrigeration in front of us. They have fantastic cheese, with prices to match, starting at about eight hundred yen per hundred grams (about thirty bucks a pound) and going to several times that, with photos and biographies of the cheesemakers posted alongside fantastic blues and various wonderful, stinky cheeses imported from obscure corners of Europe.
The evening air was perfect, and clouds provided a spectacular sunset as we wandered down the street, ogling the people who can afford to shop in the Ginza's Prada, Mikimoto, Louis Vuitton, Apple Computer, and other emporia. Mikimoto has opened a new building with oblong, rounded windows at a cant, and an exterior color that seems to have been chosen by a committee determined to find just the right shade of elegant pink that would appeal to the largest number of unapologetically feminine shoppers. Louis Vuitton is redoing the exterior of their building, and went to a great deal of trouble to create a scaffolding that doesn't give the appearance of a construction zone. It's white-painted I-beams, with a high ceiling and lighting better than the interior of many stores, giving the sidewalk out front the feel of a mall.
When we arrived at the hotel, we had a little trouble finding it. The hotel is in the top half of a 38-story building, and not well marked at street level. We entered the elevator, and were joined by a group of men in black suits and white ties, and a woman in a formal kimono: guests at a wedding. One white-haired gentleman looked up at me from a stature that started some twenty centimeters below mine and was further compromised by his state of inebriation. "Where you from?" he asked in thickly-accented English. "California," I replied. "Ah, Kariforunia, hai, hai," he mused. He gestured vaguely at his companions. "Wedding." "Hey, are you speaking English?" someone asked him, to much tittering from the rest of the group. He waved his hand in front of his face as if shooing away flies, in that Japanese no-no-no gesture. "Katakana da yo. [I'm speaking in katakana.]" (Katakana is the Japanese syllabary used to spell out foreign words, and pronunciation probably bears as much resemblance to real English as our approximation of Chinese does to the real thing.)
The lobby of the Mandarin Oriental is on the 37th and 38th floors, with floor-to-ceiling windows and somewhat vertiginous views. The lobby is well done, elegant and modern without being intimidating. First stop was the facilities, and in the men's, you perform your business looking out and down, which is a tad disconcerting.
The Molecular Tapas Bar is through the cigar bar. They ask you to be early, since all six people at each of the evening's two sittings will eat the same food at the same time. Dinner was scheduled for 8:30, and ran two hours. You sit at a combination sushi and liquor bar, with two chefs in front of you, a bartender in the background, numerous waitresses hovering nearby, and at least one sous chef schlepping material to and from a kitchen when ordered by the chef with the wireless headset.
The meal started with a kampai (toast) of a small shot of beer topped with whipped Yakult, a viscous yogurt-based drink. (I had ordered the alcohol-free meal when I made our reservations, but that must have gotten lost somewhere.) Then, the food began: twenty-two courses approximating what was printed on the fixed-course menu, with a couple of surprising and delightful deviations. Sato-san, our lead chef for the evening, explained everything, and fancies himself a comedian. He looked at me and asked (in Japanese), "Are you okay with Japanese?" "Yes," I replied. He turned to the rest of the diners and said, "I'm okay with Japanese, too."
The experience is two parts Tokyo, one part "Iron Chef", one part Harold McGee, and one part Terry Gilliam. We were served various gases in several forms, and food from test tubes, syringes, and pressure vessels. Ingredients were mostly identifiable, including numerous vegetables, a couple of types of fish and some beef, but combined in startling ways.
The first courses, some form of extruded and deep-fried risotto and crispy floss of beet, could conceivably show up in a California restaurant. After that, as they say, when the going gets weird, the weird turn pro. Next up was a bottomless test tube with orange material inside: ikura (salmon roe) and passionfruit. Tip it back, suck on the tube, and a burst of flavor hits you, salty and sweet and unexpected. This course was fantastic, my favorite of the meal.
Throughout the entire meal, Mayumi took pictures of every dish, the utensils, the staff. At first I was vaguely concerned that it might annoy some of the other diners, but no, four of the six of us were taking photos constantly -- two using cell phones, one with a small digital camera, and Mayumi with her digital SLR. And the chef/comedian tried to get into every picture, with a "V" sign up. Japanese people are born with some gene that makes them tilt their head to the side and make a "V" (peace sign) whenever someone points a camera at them. You can take pictures of practically anything in this country and no one will blink, and these days a decent fraction land on someone's blog.
Up next was cotton candy. Well, cotton candy foam, which is even more like eating air than regular cotton candy (the chef would prove to be fond of his Williams-Sonoma hand whipper). Then, glass noodles topped with parmesan cheese. Correction: glass noodles made from parmesan cheese. How on earth did they make them transparent? They certainly retained that characteristic parmesan flavor. This was followed by probably my least favorite course of the meal, the uni (sea urchin) with apple, matcha (green tea strong enough to take the enamel off your teeth, the kind they use in the tea ceremony) and twenty-five-year-old balsamic vinegar. The uni had a strong flavor, and I'm sure it was good uni, it's just not tops on my list.
(The staff wasn't actually in complete control of our sensory experience; our elevator acquaintance was seated about three meters behind us at a bar table, and was still riffing on the America theme.)
One of the most fascinating dishes was pink gazpacho soup, served over a chunk of watermelon and garnished with...something. He's dipping it out of a styrofoam cooler with clouds of vapor coming out. It looks like very fine bread crumbs, and it creates clouds of vapor when it's in the soup. Explanation: olive oil frozen using liquid nitrogen! Now there's an easy way to spiff up a dish! Oh, and tasty, too.
The unagi (eel) with pineapple and whipped avocado was nothing spectacular. Good thought, though. A couple of well-done but mundane dishes (including miso soup mousse and beet ravioli that Mayumi referred to as the first time she had ever had a positive experience with a beet) were followed by more air, a palate cleanser of sorts. Large tumblers are placed on the bar, with metal straws in them. The bartender comes over and shakes a cocktail shaker, which rattles, runs through her whole mixing routine, then pours air into each of the tumblers, which are then passed out and we are encouraged to drink them. I suck on the straw. A burst of cold sherbet with enough liqueur in it to give it a real punch blasts into my mouth. (This was the one time I regretted not having pushed the non-alcohol thing, but even so, it was an experience.) (Part of the cleverness of this one is the surprising delivery, but I assume the menu rotates often enough that, even if you get the chance to try the restaurant, something else will have taken its place.)
The carrot caviar was created using numerous syringes, dribbling drops of carrot juice into an acrylic container filled with some liquid (which I think he said included calcium) that caused the juice to congeal and take on the texture of caviar. Vegan caviar!
The sizzling beef was beyond amazing. The chef brought out a pressure vessel something like a stainless steel seltzer bottle. He bled off the pressure with a blast, retrieved a roll of beef, sliced and served it. The beef sizzled! Not because it was hot, but because it was outgassing -- a cow with a MAJOR case of the bends. Put a bite in your mouth, and you can feel it fizzing in your mouth. The meat is spongy and moist, cooked just right and very tasty.
The desserts are like something from Willy Wonka. First up is one titled "Blue Hawaii" on the English side of the menu and "kuuki-gori" on the Japanese side. It's a pun on "kaki-gori", or shaved ice; "kuuki" means "air". It is blue, and it is indeed air: dry ice apparently misted with some blue flavoring. Clouds roll out of the dish. Put a spoonful in your mouth, and clouds billow out of your mouth! I remember as undergrads we drank mad scientist-like glasses of Coke foaming and steaming and boiling thanks to chunks of dry ice, but the thought of shaving it and eating it directly apparently didn't occur to us, I'm ashamed to admit.
Several more desserts follow, including one that looks like a sunny-side up egg and bacon but isn't, a candy wrapped in cellophane that you eat, cellophane and all, and a microscopic cake in a glass bubble. Finally, it's time for the fruit course.
A tray with two strawberries, and two slices each of grapefruit, orange and lemon is placed before me. I'm suspicious. Just fruit, from these guys? We're instructed to eat one strawberry, then a slice of lemon, which of course is sour. Then we are each given a magic fruit, a berry of some sort with a large pit, which has a mild, fruity flavor. We are told to keep that in our mouths for a full minute (he uses an egg timer), then return to the next slice of lemon. It's sweet! The magic fruit has changed our perception of taste (I told you, these guys manage the entire sensory experience). Return to the strawberry, and it's almost too sweet to eat. Better eating through chemistry.
I found out about this place through a column in the Daily Yomiuri in which the writer used it as evidence that Tokyo has gotten its mojo back, like in the days of the Bubble. An evening in the Ginza and dinner at the Molecular Tapas Bar would certainly convince you that you're in a wealthy, bold country. The food was very good, but it's the presentation and delivery that really makes it a unique, almost science fiction-esque experience. All in all, we got our money's worth. We'll use these stories for a very long time. And if any of you come visit, I'll be happy to serve as interpreter while you add to your own personal stock of food stories :-).
I know I promised you a report a while ago, sorry to be so long getting it written up!
You'll find pictures on Mayumi's blog.
This was way more money that we needed to be spending on dinner as we stare at a life of postdoc penury, but it was a combination belated fortieth birthday, thesis defense, and upcoming tenth wedding anniversary dinner, and it was worth it (we had originally hoped to spend part of this summer in Europe, but things haven't all gone according to plan). We probably don't get to go back to the Molecular Tapas Bar until one of you with a large expense account comes to visit, in which case you need to let me know at least a month in advance, so I can get reservations.
The evening started with a stroll through the Ginza, including a stop for sangria (for Mayumi), ice tea (for me), olives and cheese at an open-air Spanish bar. The bar is just a couple of doors from a cheese shop we like, but we were just browsing, since there was a long evening and no refrigeration in front of us. They have fantasic cheese, with prices to match, starting at about eight hundred yen per hundred grams (about thirty bucks a pound) and going to several times that, with photos and biographies of the cheesemakers posted alongside fantastic blues and various wonderful, stinky cheeses imported from obscure corners of Europe.
The evening air was perfect, and clouds provided a spectacular sunset as we wandered down the street, ogling the people who can afford to shop in the Ginza's Prada, Mikimoto, Louis Vuitton, Apple Computer, and other emporia. Mikimoto has opened a new building with oblong, rounded windows at a cant, and an exterior color that seems to have been chosen by a committee determined to find just the right shade of elegant pink that would appeal to the largest number of unapologetically feminine shoppers. Louis Vuitton is redoing the exterior of their building, and went to a great deal of trouble to create a scaffolding that doesn't give the appearance of a construction zone. It's white-painted I-beams, with a high ceiling and lighting better than the interior of many stores, giving the sidewalk out front the feel of a mall.
When we arrived at the hotel, we had a little trouble finding it. The hotel is in the top half of a 38-story building, and not well marked at street level. We entered the elevator, and were joined by a group of men in black suits and white ties, and a woman in a formal kimono: guests at a wedding. One white-haired gentleman looked up at me from a stature that started some twenty centimeters below mine and was further compromised by his state of inebriation. "Where you from?" he asked in thickly-accented English. "California," I replied. "Ah, Kariforunia, hai, hai," he mused. He gestured vaguely at his companions. "Wedding." "Hey, are you speaking English?" someone asked him, to much tittering from the rest of the group. He waved his hand in front of his face as if shooing away flies, in that Japanese no-no-no gesture. "Katakana da yo. [I'm speaking in katakana.]" (Katakana is the Japanese syllabary used to spell out foreign words, and pronunciation probably bears as much resemblance to real English as our approximation of Chinese does to the real thing.)
The lobby of the Mandarin Oriental is on the 37th and 38th floors, with floor-to-ceiling windows and somewhat vertiginous views. The lobby is well done, elegant and modern without being intimidating. First stop was the facilities, and in the men's, you perform your business looking out and down, which is a tad disconcerting.
The Molecular Tapas Bar is through the cigar bar. They ask you to be early, since all six people at each of the evening's two sittings will eat the same food at the same time. Dinner was scheduled for 8:30, and ran two hours. You sit at a combination sushi and liquor bar, with two chefs in front of you, a bartender in the background, numerous waitresses hovering nearby, and at least one sous chef schlepping material to and from a kitchen when ordered by the chef with the wireless headset.
The meal started with a kampai (toast) of a small shot of beer topped with whipped Yakult, a viscous yogurt-based drink. (I had ordered the alcohol-free meal when I made our reservations, but that must have gotten lost somewhere.) Then, the food began: twenty-two courses approximating what was printed on the fixed-course menu, with a couple of surprising and delightful deviations. Sato-san, our lead chef for the evening, explained everything, and fancies himself a comedian. He looked at me and asked (in Japanese), "Are you okay with Japanese?" "Yes," I replied. He turned to the rest of the diners and said, "I'm okay with Japanese, too."
The experience is two parts Tokyo, one part "Iron Chef", one part Harold McGee, and one part Terry Gilliam. We were served various gases in several forms, and food from test tubes, syringes, and pressure vessels. Ingredients were mostly identifiable, including numerous vegetables, a couple of types of fish and some beef, but combined in startling ways.
The first courses, some form of extruded and deep-fried risotto and crispy floss of beet, could conceivably show up in a California restaurant. After that, as they say, when the going gets weird, the weird turn pro. Next up was a bottomless test tube with orange material inside: ikura (salmon roe) and passionfruit. Tip it back, suck on the tube, and a burst of flavor hits you, salty and sweet and unexpected. This course was fantastic, my favorite of the meal.
Throughout the entire meal, Mayumi took pictures of every dish, the utensils, the staff. At first I was vaguely concerned that it might annoy some of the other diners, but no, four of the six of us were taking photos constantly -- two using cell phones, one with a small digital camera, and Mayumi with her digital SLR. And the chef/comedian tried to get into every picture, with a "V" sign up. Japanese people are born with some gene that makes them tilt their head to the side and make a "V" (peace sign) whenever someone points a camera at them. You can take pictures of practically anything in this country and no one will blink, and these days a decent fraction land on someone's blog.
Up next was cotton candy. Well, cotton candy foam, which is even more like eating air than regular cotton candy (the chef would prove to be fond of his Williams-Sonoma hand whipper). Then, glass noodles topped with parmesan cheese. Correction: glass noodles made from parmesan cheese. How on earth did they make them transparent? They certainly retained that characteristic parmesan flavor. This was followed by probably my least favorite course of the meal, the uni (sea urchin) with apple, maccha (green tea strong enough to take the enamel off your teeth, the kind they use in the tea ceremony) and twenty five-year old balsamic vinegar. The uni had a strong flavor, and I'm sure it was good uni, it's just not tops on my list.
(The staff wasn't actually in complete control of our sensory experience; our elevator acquaintance was seated about three meters behind us at a bar table, and was still riffing on the America theme.)
One of the most fascinating dishes was pink gazpacho soup, served over a chunk of watermelon and garnished with...something. He's dipping it out of a styrofoam cooler with clouds of vapor coming out. It looks like very fine bread crumbs, and it creates clouds of vapor when it's in the soup. Explanation: olive oil frozen using liquid nitrogen! Now there's an easy way to spiff up a dish! Oh, and tasty, too.
The unagi (eel) with pineapple and whipped avocado was nothing spectacular. Good thought, though. A couple of well-done but mundane dishes (including miso soup mousse and beet ravioli that Mayumi referred to as the first time she had ever had a positive experience with a beet) were followed by more air, a palate cleanser of sorts. Large tumblers are placed on the bar, with metal straws in them. The bartender comes over and shakes a cocktail shaker, which rattles, runs through her whole mixing routine, then pours air into each of the tumblers, which are then passed out and we are encouraged to drink them. I suck on the straw. A burst of cold sherbert with enough liqueur in it to give it a real punch blasts into my mouth. (This was the one time I regretted not having pushed the non-alcohol thing, but even so, it was an experience.) (Part of the cleverness of this one is the surprising delivery, but I assume the menu rotates often enough that, even if you get the chance to try the restaurant, something else will have taken its place.)
The carrot caviar was created using numerous syringes, dribbling drops of carrot juice into an acrylic container filled with some liquid (which I think he said included calcium) that caused the juice to congeal and take on the texture of caviar. Vegan caviar!
The sizzling beef was beyond amazing. The chef brought out a pressure vessel something like a stainless steel seltzer bottle. He bled off the pressure with a blast, retrieved a roll of beef, sliced and served it. The beef sizzled! Not because it was hot, but because it was outgassing -- a cow with a MAJOR case of the bends. Put a bite in your mouth, and you can feel it fizzing in your mouth. The meat is spongy and moist, cooked just right and very tasty.
The desserts are like something from Willie Wonka. First up is one titled "Blue Hawaii" on the English side of the menu and "kuuki-gori" on the Japanese side. It's a pun on "kake-gori", or shave ice; "kuuki" means "air". It is blue, and it is indeed air: dry ice apparently misted with some blue flavoring. Clouds roll out of the dish. Put a spoonful in your mouth, and clouds billow out of your mouth! I remember as undergrads we drank mad scientist-like glasses of Coke foaming and steaming and boiling thanks to chunks of dry ice, but the thought of shaving it and eating it directly apparently didn't occur to us, I'm ashamed to admit.
Several more desserts follow, including one that looks like a sunny-side up egg and bacon but isn't, one candy wrapped in cellophane that you eat cellophane and all, and a microscopic cake in a glass bubble. Finally, it's time for the fruit course.
A tray with two strawberries, and two slices each of grapefruit, orange and lemon is placed before me. I'm suspicious. Just fruit, from these guys? We're instructed to eat one strawberry, then a slice of lemon, which of course is sour. Then we are each given a magic fruit, a berry of some sort with a large pit, which has a mild, fruity flavor. We are told to keep that in our mouths for a full minute (he uses an egg timer), then return to the next slice of lemon. It's sweet! The magic fruit has changed our perception of taste (I told you, these guys manage the entire sensory experience). Return to the strawberry, and it's almost too sweet to eat. Better eating through chemistry.
I found out about this place through a column in the Daily Yomiuri in which the writer used it as evidence that Tokyo has gotten its mojo back, like in the days of the Bubble. An evening in the Ginza and dinner at the Molecular Tapas Bar would certainly convince you that you're in a wealthy, bold country. The food was very good, but it's the presentation and delivery that really makes it a unique, almost science fiction-esque experience. All in all, we got our money's worth. We'll use these stories for a very long time. And if any of you come visit, I'll be happy to serve as interpreter while you add to your own personal stock of food stories :-).
I know I promised you this report a while ago; sorry to be so long getting it written up!
Tuesday, July 18, 2006
The Age of Spiritual Machines, Part II
[Part II of a long review. Part I is here.]
One area where I am deeply skeptical of Kurzweil is in the idea of scanning a living person's mind and reinstantiating them on the Net somewhere. While I think the principles of intelligence and consciousness are fairly robust, the instantaneous state of an individual's mind does seem to be ephemeral and delicate, dependent on the Brownian motion and diffusion of neurotransmitter molecules and the electrical state of individual neurons. Samuel Braunstein (a well-known quantum computing researcher) gave a casual talk a few years ago in which he estimated that teleporting a human being would require 10^32 bits of information. Even lopping off a couple of zeroes and doing just the brain, that's still eight or nine decimal orders of magnitude larger than the biggest data archives I know of. Braunstein is estimating at the atomic level, but the molecular level might be adequate for much of the data; Kurzweil would abstract away even more of it, but the scanning process would probably still require that level of detail to capture the state of mind of a person, and it would have to be done in some small time slice. I'm not convinced that this will ultimately prove to be within the bounds of the laws of physics, let alone be technically practical. I certainly don't foresee that we'll have a brain scanner whose data rate is, say, 10^33 bits/second within fifty years.
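For the curious, here's the back-of-envelope arithmetic in Python. The 10^32-bit figure is Braunstein's; the archive size is an assumption I've picked to match the "eight or nine orders of magnitude" gap above.

```python
import math

body_bits = 10**32                # Braunstein's full-body teleportation estimate
brain_bits = body_bits // 10**2   # "lopping off a couple of zeroes" for just the brain

# Assumed size of the biggest data archives, in bits (my number, not Braunstein's),
# chosen to be consistent with the gap described above.
archive_bits = 10**21

gap = math.log10(brain_bits / archive_bits)
print(f"brain scan exceeds the biggest archives by ~10^{gap:.0f}")  # ~10^9

# Capturing 10^30 bits in, say, a one-millisecond time slice implies the
# scanner data rate mentioned above:
print(f"required scan rate: {brain_bits / 1e-3:.0e} bits/second")   # 1e+33
```

None of this changes the argument; it just makes the orders of magnitude concrete.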
But Kurzweil talks about an interesting "back door" to getting your mind on the Net that borrows from the cyberpunks. We already have retinal and cochlear implants, pacemakers, and experimental systems (both invasive and not) that transform neuronal impulses into mechanical movements. It seems that brain implants to help control seizures are on the horizon. It's certainly plausible that we will eventually develop implants with other capabilities -- the direct network (and starship piloting) links of science fiction, maybe trackers and immobilizers for violent felons, spinal cord bridges for those with damage. Maybe some sort of memory augmentation for those with a particular form of dementia? Then what's to stop an aging chess master with strong reasoning but fading memory from getting a memory implant that happens to have Modern Chess Openings in ROM? Kurzweil seems to hit a very important point that defining who remains human (and who becomes human -- ask Andrew of Isaac Asimov's "The Bicentennial Man") will become an interesting task over the next century or so. Baseball players can use contact lenses, weight machines and special diets but not steroids. As wearable computing becomes more unobtrusive and blends into implants, what is acceptable for professional go players, and for that matter what is detectable?
The creation of these technologies will take a long time, in my opinion; I very much doubt that any time during this century some human will abandon a physical body and move completely onto the Net. Instead, I foresee a century full of gradually more useful and interactive tools, some of which will begin to exhibit enough "intelligence" that they appear to think both strategically and spontaneously. Kurzweil talks about these personal virtual assistants, and how people will become more and more attached to them. I saw a remarkable example of this with Sony's Aibo robot dog. A friend bought one and brought it to the office. It had no real facial expressions, but when it tilted its head to one side and cocked an ear, everyone within sight said, "Oh, how cute!" projecting a personality onto a bit of clever programming. That phenomenon will undoubtedly accelerate as the behavior of computer systems becomes more complex; we will become emotionally attached to those that are helpful or friendly or clever or meet our biologically and culturally preconceived notions of cute. (Kurzweil doesn't really seem to consider deeply the possibility that the artificial intelligences we create will be *different* from us. If computer chess has taught us anything, it's that there's more than one way to be good at chess; what if the same turns out to be true of many other intellectual endeavors?)
Finally, but critically, Kurzweil seems to happily ignore the real world. Luddites make a cameo or two, mostly to serve as foils for explaining his technical notions more thoroughly, but society as anything other than a substrate for the growth of the technology itself seems to play no role in his thinking. We will all adopt the technology as soon as is feasible, and eventually we'll all move onto the Net (it'll be wonderful there). In particular, the developing world never appears in Spiritual. What will become of people in Bhutan and Tibet? Will this accelerate the disparity in birth rates between the developed and developing worlds, and what will that do to the world economy? Who will mine the coal that keeps the machines running? Does a world of physical isolation and idleness await us, and who will fix it when it breaks (as in E.M. Forster's "The Machine Stops")? Will this inflame ethnic, cultural, and religious tensions? What will Muslim and Catholic societies think of artificial intelligences that demand rights, let alone implants that mess around with any bodily function viewed as something core to being human? The Dalai Lama recently had some interesting speculations on the possibility that a true AI would have a soul, which must be drawn from the pre-existing, fixed pool of souls -- hmm, a grasshopper reincarnated as an AI?
In the end, even the predictions I agree with I find simplistic, and likely optimistic by a factor of two or more in time frame. I'm not sure if his unbridled optimism does more good than harm to the important ideas of AI, and the future of technology.
This is recommended reading, especially if you haven't spent any time thinking about the topics involved -- but fill your salt shaker first.
You'll find Kurzweil's timeline (created circa 1998, and continuing seamlessly from the birth of the Universe to the end of the twenty-first century) here.
The Age of Spiritual Machines, Part I
[Part I of a long review. Part II is here.]
Ray Kurzweil is a salesman, and a True Believer. I just finished reading his The Age of Spiritual Machines, in which he shares his faith in neural networks, evolutionary algorithms, Drexlerian nanotechnology, and Moore's Law, which leads him to conclude that a "strong" AI (a true intelligence, more than just a program capable of passing the Turing Test) will emerge around 2019 (indeed, will be runnable on a single PC), and that progress will continue to accelerate toward the point where human and machine intelligences merge on the Net before the end of the twenty-first century (an event which he calls the Singularity).
I have many problems with the book, though there are broad areas of agreement on fundamental principles -- I'm a believer in strong AI. If carbon-based matter can think, I see no reason why silicon-based matter can't think -- and no reason to believe that we can't build it, and that it will improve over time. But that's a very far cry from agreeing with the major themes, let alone details, of Kurzweil's book.
The first, and biggest, problem is his Law of Accelerating Returns. While Henry Adams was mulling this concept for human society a hundred years ago, Kurzweil goes far beyond Adams (whom he doesn't appear to cite, though maybe I missed it; in general, the footnoting in the book is good, but the prior literature including science fiction is certainly vast) and asserts that the evolution of the Universe itself has as a goal the creation of intelligence, and that evolution runs at an ever-accelerating pace, unstoppably. He treats this as some sort of vaguely-defined physical law, which I find implausible and poorly supported, at best. (Perhaps he has a more technical argument in a paper somewhere? After all, this is a pop "science" book.) He pays a bit of lip service to punctuated equilibria (misreading Gould, in my opinion) and the possibility of catastrophic societal meltdowns, but doesn't really put much stock in them. He doesn't deal with the fact that dinosaurs seemed quite comfortably in control of the planet until catastrophe befell them -- without any archeological or paleontological evidence that dinosaurs needed intelligence to maintain their dominance, or indeed that their evolution over much of their dominant period truly constituted "progress" as we would define it.
Likewise, Kurzweil extends Moore's Law into some sort of supernatural phenomenon, arguing that computational power, from mechanical calculators through the end of the nominal VLSI-era Moore's Law in 20 to 30 years and on into some ill-defined nanotech computational substrate, will continue to accelerate. Not just stay on Moore's Law, but that the performance-doubling time will continue to shrink! While his twentieth-century chart is fascinating, I doubt very much that some sort of fundamental principle is in evidence, or that the rate of computation will continue to advance until we are computing with individual quarks. Kurzweil mentions S-curves and the end of exponential growth, but simply has faith that we will find some way around it -- that as each individual S-curve begins to tail off, there is another waiting in the wings to pick up the baton and run.
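The shrinking-doubling-time claim is easy to play with numerically. The parameters below are purely illustrative, not Kurzweil's own numbers; the point is just the shape of the curve.

```python
def doublings_within(years, first_doubling=2.0, shrink=0.9, cap=1000):
    """Count how many doublings fit in `years` if each successive doubling
    takes `shrink` times as long as the one before (illustrative parameters)."""
    t, n, dt = 0.0, 0, first_doubling
    while t + dt <= years and n < cap:
        t += dt
        n += 1
        dt *= shrink
    return n

# A constant two-year doubling fits 7 full doublings into 15 years;
# the shrinking version fits noticeably more.
print(doublings_within(15))  # 13

# Note the geometric series 2 + 1.8 + 1.62 + ... sums to 2/(1 - 0.9) = 20:
# infinitely many doublings fit in a finite span -- a literal singularity.
```

That last observation is the mathematical heart of the claim: any fixed shrink factor implies unbounded computation in finite time, which is exactly the part I find hard to swallow.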
Kurzweil spends a few pages discussing quantum computing, and while it's not very good, it's also not terrible for a layman's understanding circa 1998. He does conclude (correctly, IMHO) that quantum computing is likely to be a special-purpose tool, rather than a true replacement for all computation.
Kurzweil has worked on voice recognition. I don't dispute that he dictated the bulk of Spiritual to a voice recognition system, but the assertion that keyboards would practically disappear by 2009 must have seemed a reach even in 1999. Likewise, it seems to me that he has substantially oversold the capabilities -- both contemporary impact and future breadth of applicability -- of neural nets and evolutionary algorithms. I have a little experience (more as a user than developer, in collaboration with another researcher) with both evolutionary algorithms and neural nets, and in my experience, they take a lot of care and feeding, and getting them to scale reasonably with the problem size is difficult; they tend to need fairly structured guidance, rather than simply turning them on and letting them go. Let me hasten to add that I'm a believer in the value of these technologies -- but they certainly are not yet some silver bullet that allows us to dispense with understanding problems ourselves before instructing a computer how to solve them for us.
Kurzweil believes that simple (ultra-)Moore's Law growth in computation will allow us to scale up these two technologies to the point where it's possible for us to just turn them loose (maybe with a dash or two of learning about the human brain's structure) and we'll get intelligent beings; we already have abstracted the neuron adequately, and only need to evolve large enough and correctly connected neural nets and the structures themselves will take over from there. While it's a beguiling scenario, my opinion is that we are likely to actually need new insights and will have to actively guide their development. Simply creating some sort of neuronal evolutionary soup leaves us in a combinatorial space beyond all comprehending in size -- waiting for a human brain to evolve in that environment is going to require eons, in my opinion.
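To put a very rough number on that combinatorial space: the figures below are standard order-of-magnitude estimates for the brain, not anything from the book, and the one-bit-per-synapse assumption is absurdly generous.

```python
import math

neurons = 10**11       # ~10^11 neurons in a human brain (textbook order of magnitude)
synapses = 10**4       # ~10^4 synapses per neuron (likewise)

bits = neurons * synapses             # one bit per synapse, a gross underestimate
log10_configs = bits * math.log10(2)  # distinct wirings = 2^bits
print(f"log10(number of configurations) ~ {log10_configs:.1e}")  # ~3.0e+14
```

In other words, the number of candidate configurations has on the order of 10^14 *digits*; blind search in a space that size isn't going to stumble on a brain any time soon.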
Kurzweil takes on Roger Penrose's The Emperor's New Mind, which was already old news when he was writing but is still an influential book. I read TENM shortly after it came out, and while the details have long since faded, I was unconvinced by Penrose's arguments, which seemed to amount to the assertion that intelligence (or consciousness?) requires some non-physical phenomenon -- or at least new physics that we don't yet understand. In the end he comes to the suggestion that intelligence is derived at bottom from quantum processes. Let me stress that my IQ is probably half of Penrose's, and finding my accomplishments if stacked up next to his would require a microscope. I'm also not a consciousness researcher (and neither is Penrose). But I don't yet see any reason to invoke new physics (beyond possibly deepening our understanding of nonlinear dynamics and complexity). There is still a lot of wiggle room for well-understood physics to generate poorly-understood macroscopic phenomena.
So here, at least, I agree with Kurzweil: I'm not convinced by Penrose's anti-strong AI arguments (many of which, according to John McCarthy, were already well refuted before TENM was published). If intelligence is a property exhibited by matter, I see no particular reason to believe that we will always be unable to create matter that thinks.
On to part II
Monday, July 17, 2006
Inflatable Space Station
This article talks about the successful flight of an inflatable scale model of an inflatable space station. The full-scale thing, funded by a dude named Bigelow, is supposed to be taking guests as an orbiting hotel by 2015.
Friday, July 14, 2006
Now Available: My Ph.D. Thesis! "Architecture of a Quantum Multicomputer..."
My Ph.D. thesis, "Architecture of a Quantum Multicomputer Optimized for Shor's Factoring Algorithm," is now available from my publications page or from the arXiv as quant-ph/0607065. The arXiv version uses three slightly modified figures to dramatically reduce the size of the PostScript file. As a result, the pagination also changed. Otherwise, there are no differences.
Tuesday, July 11, 2006
Soroban v. Barbie
My older daughter's in second grade, and takes soroban lessons twice a week after school. That's right, abacus. People (well, Americans, anyway) look at me like I'm insane when I tell them that. "Do they teach her how to chip flint, too?" seems to be the thought running through their heads.
A toy she has mostly outgrown is her Barbie laptop computer (she's not really into pink and flowery, but the computer has some cool games, and talks). But lately she's been playing an addition game and some of the other math games. One of them is timed -- the faster you are, the more points you get. Recently, she was playing and getting frustrated with her ability to keep up -- so she grabbed her abacus! She's faster and more accurate at two- and three-digit addition with the abacus than in her head, and seemed to do better at the computer game with her abacus by her side!
She's also learning to multiply using the soroban, ahead of learning it in her actual second-grade class. (She can also ride a unicycle, a common hobby for grade-school girls here, and read and write several hundred kanji (characters Japan borrowed from China) already. But her English is almost non-existent at this point.)
The company that I worked for here in Japan in the early 1990s still kept its books on paper, and much of the arithmetic was done by clerks with abacuses (abaci? okay, sorobans). We also had rotary-dial telephones. And we built some fantastic technology that way.
I'm sure Japanese see just as many idiosyncrasies when they move to the U.S...
PGP CTO on Crypto
Jon Callas, who is CTO (and CSO) at PGP Corp., posted a great message to the IP (Interesting People) mailing list today on the difficulty of cracking crypto. It's brief but thorough, eye-opening, and very grounded in reality.
Saturday, July 08, 2006
Seventy Cents per Megabit per Second (or Less)
There was a tidbit in the paper yesterday that said that Japan has the lowest average price for broadband of anywhere in the world, at an average of about seventy cents per megabit per second. We are actually paying less -- I think about fifty bucks for 100 Mbps. This is the "gigabit family type", which means that the second hop is gigabit, which I think is shared among a maximum of 32 houses, if I remember right.
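The arithmetic behind those numbers, for the curious (the dollar figures are the rough ones from this post, and the worst-case sharing calculation is my own back-of-envelope, not from the article):

```python
# Our connection: roughly fifty dollars a month for 100 Mbps.
our_monthly_usd = 50
our_mbps = 100
print(our_monthly_usd / our_mbps)  # 0.5 -> $0.50 per Mbps, below the $0.70 average

# Shared gigabit second hop: worst case, with all 32 houses saturating at once,
# each house still gets a guaranteed floor of
print(1000 / 32)                   # 31.25 Mbps
```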
The article also said that South Korea has the best availability -- the largest percentage of households could get broadband if they wanted to.
Thursday, July 06, 2006
Asian Conference on Quantum Information Science
The deadline for submissions to AQIS is July 15, so better get it done soon if you're submitting. I'm afraid I don't have anything ready at the moment...
AQIS is the successor conference to the EQIS series, which has been very successful here in Japan, with a strong program committee and excellent speakers over the three years I've been following it. This year it's in Beijing, so get your visa lined up, too.
Wednesday, July 05, 2006
Launches
Discovery is up, and so are the North Koreans (sort of). CNN says that the Taepodong, which is supposed to be capable of reaching the U.S. from North Korea, failed after forty seconds and fell in the Sea of Japan, outside of Japan's exclusive economic zone.
With the Yasukuni Shrine visits still causing friction between Japan and Korea and China, relationships in East Asia are tense at the moment. It seems possible that Koizumi will visit Yasukuni on the anniversary of Japan's surrender on August 15, which falls a few weeks before the end of his term as LDP president. Much of the jockeying to succeed him as president (and presumably prime minister) centers around the opinion of the contestants about the Yasukuni issue.
Sunday, July 02, 2006
From the Ministry of Irony
As long as I'm cribbing from the Sunday morning Daily Yomiuri, I can't resist this tidbit: a man was trapped in a Schindler elevator in the Sendai (northern Japan) building that houses Schindler's own branch office. He was only stuck for forty minutes or so, but the fire department had to pry the doors open to get him out. The building contains residences and other offices besides the Schindler office; the man apparently was not a Schindler employee.
Teen Mobile Phone Use in Japan
The Daily Yomiuri has another blurb citing an MHLW study. According to this one, 92 percent of high schoolers, 48 percent of middle school students, and 24 percent of fifth and sixth graders have mobile phones. A prior survey in 2001 found 27 percent and 9 percent for the latter two categories, but the blurb in the paper doesn't give the earlier figure for high school students.
It also says that more than 30 percent of high schoolers use their keitai (mobile phone) more than two hours a day. I suspect this includes voice, email, i-mode browsing and games, all rolled together.
This report ought to be on MHLW's news page (in Japanese), but I'm not seeing it; it may be rolled into another report...
Japan Molecular (Nanotech?) Health Study
Today's Daily Yomiuri has a blurb from Kyodo News saying that the Health, Labor, and Welfare Ministry (Kouseiroudoushou, or MHLW) has started researching the safety of "molecular substances" used in IT, cosmetics, and more. I suspect this is a nanotechnology survey, but I'm not sure; I certainly can't imagine that no one has bothered to investigate the safety of chemicals used in chip making or makeup. There's no matching news release in Japanese or English on the MHLW website; perhaps one will appear on Monday.
Hubble Camera Okay?
I had missed the news that the Hubble Space Telescope's Advanced Camera for Surveys went offline two weeks ago with some sort of power supply problem, but the ops staff apparently managed to fix it from the ground, by switching to a backup supply. Whew!
Friday, June 30, 2006
CO2: Train v. Car
A couple of weeks ago I was on a crowded Tokyo train, and there was an ad at the far end of the car with some interesting data. It showed a family of four travelling by car, and producing 880 liters of CO2 emissions. In contrast, travelling by train produced 92 liters.
I couldn't get close enough to the ad to read the details, and haven't seen it again. The first question is what kind of assumptions they are making -- is this packed-to-the-gills subway versus stuck-in-traffic Hummer, or uncrowded shinkansen green car (first class) versus K car (sub-sub-compact)? My guess would be that the comparison is intended to be favorable to the train.
I'd also like to know how many kilometers, whether luggage is involved, etc. And how is the electricity to drive the train assumed to be produced? What losses are included? Does the gasoline figure include exploring, pumping, refining, transporting, and delivering the petroleum?
This comparison is worth what you're paying for it, but it's a start...anybody got pointers to a detailed analysis, ideally including shinkansen v. airplane?
Thursday, June 22, 2006
Now Available: Distributed Arithmetic on a Quantum Multicomputer
Our ISCA paper, "Distributed Arithmetic on a Quantum Multicomputer", and our JETC paper, "Architectural Implications of Quantum Computing Technologies", are now both available on the Aqua Project publications page.
500GHz Transistor
John Cressler's research group at Georgia Tech and IBM have created an SiGe chip that runs at 500GHz. They achieved this by cooling the device to 4.5K. But it wasn't a slow technology to start with, having a room-temperature switching speed of 350GHz.
If I did this right, Landauer's limit says that at 350 GHz of bit erasures at 300 K, each transistor would dissipate 1.5 nanowatts. So, a billion-transistor chip would have a physical, fundamental limit of at least 1.5 watts, unless reversible computing is used. Still, we're clearly orders of magnitude from that limit... (I may have dropped a small constant there, but I think I'm within a factor of two.)
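If you want to check the arithmetic yourself, here's a minimal sketch. The per-bit bound k_B T ln 2 is standard; the assumption that every transistor erases exactly one bit per clock cycle is mine, for the sake of the estimate:

```python
import math

# Landauer bound: erasing one bit at temperature T dissipates at least
# k_B * T * ln(2) joules. Assume (my assumption, matching the post's
# framing) one bit erased per transistor per clock cycle.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_power(freq_hz, temp_k):
    """Minimum dissipation, in watts, for one bit erasure per cycle."""
    return freq_hz * K_B * temp_k * math.log(2)

per_transistor = landauer_power(350e9, 300)  # watts per transistor
per_chip = 1e9 * per_transistor              # a billion-transistor chip
print(f"{per_transistor * 1e9:.2f} nW/transistor, {per_chip:.2f} W/chip")
```

Running this I get about 1.0 nW per transistor and 1.0 W for the chip, which agrees with the 1.5 figure above to within the stated factor-of-two caveat.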
Computer Architecture Letters & TC
In my list of fast turnaround architecture-related journals, I should have mentioned Computer Architecture Letters. Four-page letters, acceptance rate currently 22%, an official IEEE Computer Society journal, a breathtakingly strong editorial board, and good papers.
It's also worth mentioning, for those of you in quantum computing, that IEEE Transactions on Computers now has Todd Brun on its editorial board; I think this makes it the first journal with a significant architecture component (though not focus) to have someone strong in quantum computing on board. Todd also seems to be very conscientious about getting papers reviewed in a timely fashion.
Friday, June 16, 2006
ISCA and Importance of Conferences
I'm leaving momentarily for the International Symposium on Computer Architecture, in Boston this year. Although Mark Oskin holds the honor of authoring the first ISCA quantum computing paper, I believe, in 2003, this will be the first time there is a full session on the topic. Three papers, one from Berkeley, one from a mixed group including Davis, Santa Barbara, and MIT, and ours (Keio, HP Labs, and NII), will be presented on the last day of the conference (Wednesday). A good performance by the presenters will go a long way to convincing the architecture community that there is important and interesting work to be done, and that direct involvement of architecture folks will accelerate the arrival of viable quantum computers.
For you physicists who are still learning about the CS conference circuit, the top CS conferences review and publish full papers only, and are very competitive (I think ISCA's acceptance rate this year was 14%). Journals are important, too, but in many ways less so. Of the top ten most-cited venues in 2003, eight are conferences and two are journals, according to CiteSeer. I think CiteSeer's publication delay list is very suspect, but it gives you an idea -- conferences are much shorter. Many ACM and IEEE journals and transactions average more than a year to return of first reviews, let alone final publication. Recognizing that this is a problem, many of the newer journals, such as JETC, JILP (well, at seven years, JILP might not count as "new"), and TACO are working hard to keep turnaround time for reviews down. But the dialog on open access is, I think, further advanced in physics than in CS, which is not what should have happened -- IMHO, CS should have been the leader on this topic.
Anyway, I'll try to blog from ISCA. The program this year looks exciting, though interestingly, there are no storage papers this year. Perhaps FAST and the new Transactions on Storage are getting the best storage papers these days?
Gedo no Senki
If you're a fan of Ursula K. Le Guin's Earthsea (I am) and the Studio Ghibli anime films (I am), you're probably excited by the upcoming "Gedo no Senki". Out next month, directed by Miyazaki Goro, son of the great Miyazaki Hayao.
The trailer is out on Google Video, and Le Guin's website has a synopsis. Le Guin says she hasn't seen the film and won't comment until she does, but that's such an exceedingly cautious and neutral comment that it makes me wonder if she has doubts.
The official web page requires Flash, and for some reason fails to detect that I have it installed.
Pharmaceuticals in Japan
A blurb in the Daily Yomiuri this morning led me to a report by the Office of Pharmaceutical Industry Research that says that 28 of the top 88 best-selling drugs in the world are not currently available in Japan.
The report is in Japanese, and there's a lot of specialized vocabulary I don't grok at a glance, but if I'm reading it right, Japan lags an average of 2.5 years behind the U.S. in approving drugs, with an average of 1,400 days to approval, compared to 500 for the U.S. (I have absolutely no idea what starts the clock on that approval, and I consider it quite possible that a difference in bureaucratic procedures means the reality is either better or worse.) Okay, wait, if I read this right, that's time from approval in the drug's home country to approval in the local country. That is, a European drug would appear on the U.S. market 500 days after appearing in Europe, and it would appear 1,400 days later in Japan than in Europe.
Of the top 95 drugs worldwide, 38 originated in the U.S., 14 in the U.K., 13 in Japan, 12 in Switzerland, 5 in France, 3 each in Germany, Sweden, and Denmark, and 1 each in Belgium, Israel, and Croatia.
In an unrelated article in the paper, a survey found that 35% of obstetrics departments in Japan have actually stopped delivering babies. More than a third of obstetricians are aged 60 or older, which is nominally retirement age here.
Thursday, June 15, 2006
Where Will You Be When the Big One Hits?
Last Sunday, the Daily Yomiuri had an article based on a report developed by the Tokyo metropolitan government. They estimate that, in the event of a large earthquake during the day, 3.9 million people will be stranded in central Tokyo, too far from home to walk and with no trains or other transport running. That's out of 11.44 million people away from home during the average day in Tokyo. If you include three neighboring prefectures, the number stranded could rise to 6.5 million people. That's a lot of people looking for dinner, some water, a place to lay their head, a toilet, a phone to borrow, and maybe some warm, dry clothes -- not to mention medical assistance.
Among statistical tidbits, it mentions that Shinjuku Station handles four million passengers a day. I've heard various figures for different stations, including (if I recall) 900,000 for Tokyo Station, which sounds low. I think the numbers vary depending on whether you're talking about people starting or terminating journeys, changing trains, or just sitting on a train passing through. With fourteen main platforms in the JR part of the station alone, and several connecting private lines (Odakyu, Keio (no relation to my university), and three subway lines), each with an average of several hundred trains a day and as many as fifteen cars with several hundred people each -- well, you do the math. Wikipedia's Shinjuku Station article (in English) says the number is 3.22 million.
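If you actually do that math (every input below is one of the ballpark figures above, and the fraction of riders who board or alight at Shinjuku is pure guesswork on my part):

```python
# Upper bound: everyone riding every train that touches the JR platforms.
platforms = 14
trains_per_platform_per_day = 300  # "several hundred trains a day"
cars_per_train = 15                # "as many as fifteen cars"
people_per_car = 300               # "several hundred people each"

riders = (platforms * trains_per_platform_per_day
          * cars_per_train * people_per_car)
print(f"{riders:,} people pass through on trains")  # about 19 million

# Most of those riders stay on the train; if roughly one in five boards
# or alights at Shinjuku (a guess), you land near the quoted figures:
print(f"{riders // 5:,} actually using the station")  # about 3.8 million
```

The gross total is an overcount by design; the point is just that the three-to-four-million figure is plausible on the back of an envelope.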
The article says that 167,197 people will initially be stranded in Shinjuku (someone needs to teach them the difference between accuracy and precision -- or do they have a list of names already?), of whom more than 90,000 will be stranded until trains resume, which could be days. Designated emergency gathering points include major parks, but if it's raining and cold, you're not going to convince people to stay there for long.
Number of people initially expected to be stranded:
Station | People |
---|---|
Tokyo | 198,309 |
Shibuya | 182,858 |
Shinjuku | 167,197 |
Ikebukuro | 165,733 |
Shinagawa | 127,864 |
Machida | 125,512 |
Ueno | 89,894 |
Hachioji | 84,528 |
The number of people expected to remain stranded until train service resumes, by distance from home:
Station | 10-20km from home | 20+km from home | total |
---|---|---|---|
Tokyo | 31,282 | 111,146 | 142,428 |
Shibuya | 28,132 | 75,463 | 103,595 |
Shinjuku | 21,493 | 69,101 | 90,594 |
Shinagawa | 17,602 | 71,528 | 89,130 |
Ikebukuro | 22,101 | 62,663 | 84,764 |
Ueno | 11,522 | 32,712 | 44,234 |
Machida | 10,376 | 17,920 | 28,296 |
Hachioji | 6,590 | 10,768 | 17,358 |
The article also says that on the busiest streets, the density of people may peak at 11 people per square meter 3-4 hours after the quake. That compares to 10 people per square meter on a very crowded train. Seems unlikely to me, but the point that it's going to be crowded when everyone comes out of their offices and tries to walk home at the same time is quite valid.
I have some doubts about the numbers, but bottom line: when I get back from ISCA, an earthquake kit for the office is high on my priority list. It's way overdue already.
Random Numbers
The NIST Computer Security Resource Center has just published a report on how to generate good random numbers. This is critical for computer security, but if you're in any sort of physical simulation or use Monte Carlo methods, and you don't know how good your pseudo-random number generator (PRNG) is (or isn't), you should check this out. Bad random numbers will do weird things to you.
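As a concrete illustration (my example, not something from the NIST report): in Python the distinction between a statistical PRNG and a cryptographic source looks like this --

```python
import random
import secrets

# A seeded Mersenne Twister is fine for Monte Carlo work: reproducible,
# fast, good statistical properties -- but predictable once an attacker
# has seen enough output, so never use it for anything security-related.
mc = random.Random(2006)
draws = [mc.random() for _ in range(5)]

# For keys, tokens, and nonces, draw from the OS entropy pool instead.
token = secrets.token_hex(16)  # 128 bits, rendered as 32 hex characters

print(draws)
print(token)
```

The deeper point of the report, of course, is that even the "secure" path is only as good as the entropy and the generator construction behind it.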
Cribbed from Bruce Schneier's blog.
Tuesday, June 13, 2006
Happy 20th Anniversary
Today, June 13, 2006, is the twentieth anniversary of our graduation from a small technical school in Pasadena.
Tiger, Yosufi, Roseytoes, Min, Bobo, Harold, Myles, Ross, Michelle, Ken, Takako, Hod, and the rest, What a Long Strange Trip It's Been.
I'm sorry I missed our reunion a few weeks ago, but as you can guess from yesterday's post, I've been busy. Love to you all, and hope to see you all soon...
Just Call Me "Doc Shorts"
I passed my thesis defense today, so I'm now Doctor Van Meter. Or, at least, will be once the final paperwork is done; it has to be printed, bound, and submitted to an all-faculty meeting next month. My committee has some suggested changes, but they left them as recommendations, so the final tweaks are up to my adviser (Prof. Teraoka) and me, as I understand it.
My thesis is, as I've mentioned, "Architecture of a Quantum Multicomputer Optimized for Shor's Factoring Algorithm". After I make the final revisions, I should be posting it on the arxiv, so stay tuned. (If you're dying for a preview, or want to put in your two cents before it's final, email me and I'll drop you a copy.)
After a lot of work and worry, the defense itself was actually remarkably uneventful. I didn't count, but there were about twenty people there, counting the professors, which is a pretty good crowd. My committee asked a number of intelligent questions, but the grilling was moderate, compared to the first presentation a month ago, when I got shelled pretty good. Thanks to the help of my advisers, the presentation got a lot better in the interim.
My committee was five professors, plus one unofficial member. That's a crowd, but in such an interdisciplinary topic it seemed kind of necessary; I had a theoretical physicist (Prof. Kae Nemoto, who will be my boss in my postdoc), an experimentalist (Prof. Kohei Itoh), a computer networking guy (Prof. Teraoka, my official adviser), one parallel systems expert (Prof. Amano), and another architecture guy (Prof. Yamasaki). The unofficial member is Prof. Jun Murai, who is now Keio's vice president in charge of research -- provost, more or less. I'm very flattered that he took the time to attend. Thanks to all of you for the hard work in mentoring me and evaluating my thesis.
Ah, the one real event of the day, which will no doubt go down in history as part of the Rod Legend. Defenses, in this country, are always given wearing a suit. But I hate suits, and the weather is now hot, sticky, and rainy, so I took my suit, still in the dry-cleaning plastic, and shoved it in a bag, and commuted the two hours to campus in shorts and a t-shirt.
Last Friday, Kae said, "On Monday, remind me to..." I interrupted her and said, "Huh-huh, I'm not taking responsibility for that. On Monday, if I remember to show up carrying my laptop and wearing pants, I'll be happy."
Do you see the punchline yet?
I got to campus, opened my bag, and... no pants. I had grabbed the suit coat, but the pants came from the dry cleaners packaged separately. Oops.
Fortunately, that was still early in the morning, Mayumi was coming to hear my defense, and she hadn't left the house yet. I called her, and she brought them with her, and everything worked out.
So, you can call me "Doc Shorts" or "Doctor No Pants" (or just "Doctor No"?). (Beats some of the nicknames Wook has had for me over the years, but I suppose I've reinforced rather than outgrown "Bean Dip".)
Thanks to all of you who have given me so much support over the years. I couldn't have done it without you.
Friday, June 02, 2006
Dang, I Forgot to Set the VCR!
So, you meant to record that World Cup match taking place while you're at work, but you forgot to set up the recorder. No problem, if you have a Sony-Ericsson SO902iS keitai (cell phone) and one of several Sony models of high-definition video recorders (hard disk or DVD, I think). Just call up the G-guide website on your cell phone, find the right program, and click a couple of buttons; your phone contacts your video recorder via the Internet and sets it to record for you.
You have to set up the service beforehand, but it's free. Connect to the program guide website, do some setup there, set a password on your DVR (digital video recorder), then have your phone connect to the DVR via infrared the first time. Presumably they exchange enough information then to find and identify each other via the Internet when you're away from the house.
Obviously, this means your DVR has to be connected to the Internet. Details of that are vague, but my first question is how NAT is dealt with. Does your DVR have to have a permanent, non-NATed, global address? Or does it get around this by polling some shared server frequently? Or, do you have to convince your ISP to set up a passthrough in the NAT so that you can connect to your DVR? Ugh. Just switch to IPv6, man (of course, your home or ISP firewall would still have to be smart enough to let the right traffic through...).
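My bet would be on the polling approach. A toy sketch of how that could work -- entirely my speculation, with made-up names and message formats; nothing here is Sony's actual protocol:

```python
# All connections are outbound from the DVR, so the home NAT never needs
# an inbound port mapping and the DVR never needs a global address.

class RendezvousServer:
    """Stand-in for the shared server both phone and DVR talk to."""
    def __init__(self):
        self.queues = {}

    def post(self, dvr_id, command):
        # Phone side: queue a command for a registered DVR.
        self.queues.setdefault(dvr_id, []).append(command)

    def poll(self, dvr_id):
        # DVR side: fetch and clear any pending commands.
        return self.queues.pop(dvr_id, [])

def dvr_check(server, dvr_id, handle):
    """One polling round; a real DVR would repeat this every few minutes."""
    for cmd in server.poll(dvr_id):
        handle(cmd)

server = RendezvousServer()
server.post("dvr-42", {"action": "record", "channel": 4, "start": "19:00"})

recorded = []
dvr_check(server, "dvr-42", recorded.append)
print(recorded)  # the command reached the DVR with no inbound connection
```

The cost is latency (commands wait until the next poll) and constant background chatter from every DVR; the benefit is that it works behind any NAT with zero configuration.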
HSDPA Handsets
While digging for something else I'll blog about shortly, I ran across an announcement of the N902iX, an NTT DoCoMo FOMA 3G/HSDPA handset. What does that mean to you? If your 3G cell phone network is built out for it, and increasing numbers of them are, you can, in theory, get up to 3.6 Mbps to your cell phone, compared to the nominal 384kbps of straight WCDMA. Mostly this has been available as PCMCIA cards for laptops; the claim is that this is the first true HSDPA handset.
Speaking of which, apparently a good fraction of the FOMA handsets now available include triband GSM. My N900iG, bought in January 2005, was the first such handset (and suitably buggy and power-hungry), but now DoCoMo is saying that within two years they will require all of the FOMA handsets to support GSM. The announcement I saw didn't say which frequencies will be mandated, but I think the ones in use are still proliferating worldwide...
Computing Frontiers: Why Study Quantum?
I'm just about ten days from my final defense, and while I had considered tapping the Net for people interested in commenting on my thesis, I admit I never got around to it. Here's a preview, though: the prolegomenon justifying why I think it's worthwhile for computer systems folks to be involved in quantum computing now. Comments welcome!
"We are just started on a great venture." -- Dwight Eisenhower
"The designer usually finds himself floundering in a sea of possibilities, unclear about how one choice will limit his freedom to make other choices, or affect the size and performance of the entire system. There probably isn't a best way to build the system, or even any major part of it; much more important is to avoid choosing a terrible way, and to have clear division of responsibilities among the parts. I have designed and built a number of computer systems, some that worked and some that didn't." -- Butler Lampson, "Hints for Computer System Design" [187]
As VLSI features continue to shrink, computers that depend on quantum mechanical effects to operate are inevitable [221, 211, 47, 143, 104]. The fundamental architectural issue in these future systems is whether they will attempt to hide this quantum substrate beneath a veneer of classical digital logic, or will expose quantum effects to the programmer, opening up the possibilities of dramatic increases in computational power [108, 89, 88, 35, 38, 279, 127, 3, 196, 233].
Small and unreliable they are, but quantum computers of up to a dozen nuclear spins [79] and eight ions [131] exist. The three most famous quantum algorithms are Deutsch-Jozsa [89], Grover's search [127], and Shor's factoring [279]. All three of these algorithms have been experimentally implemented for small-scale problems [151, 70, 68, 163, 310, 319, 320, 130]. A further extremely broad range of experiments has demonstrated numerous building blocks [326, 29, 296, 169, 224, 64, 251] based on the one- and two-qubit technology demonstrations we will see in Chapter 7. Although many theoretical and practical questions remain open, it seems reasonable to assert that implementation of quantum computation is on the verge of moving from a scientific problem to an engineering one. It is now time to ask what we can build, and what we should build. Various computer architecture researchers have begun investigating the former question, working from the bottom up [78, 146, 241, 240, 305, 145]; this dissertation and the related papers address the latter question, working from the top down [314, 317, 316, 312, 313, 315].
Why should computer engineers study quantum computation, and why now? Certainly the field of classical computer architecture is not moribund, and offers far more immediate impact for much less intellectual risk. Work that increases parallelism, reduces power consumption, improves I/O performance, increases gate speed or reduces data propagation delays is much more likely to be used in the real world, and far sooner than quantum technologies. Intel began sampling a billion-transistor microprocessor chip in October 2005, a 580 square-millimeter chip built in a 90 nanometer process. Some researchers consider integration levels of a trillion transistors per silicon chip possible [213], though we are hardly done digesting the implications of a billion transistors on a chip [246, 178, 56]. Clearly there is room on-chip for many architectural advances. Ubiquitous computing, sensor networks, augmented reality, and mobile systems will no doubt be among the most transformative technologies of the coming decades, relegating today's 3G Internet-connected mobile phones to the status of Neolithic stone adzes [261]. In âback endâ systems, continued research on computational grids and storage are critical. Among computing exotica, electrical circuits fabricated with nanotechnology [344, 32, 205, 304, 267], DNA computing [8], and amorphous computing are all other possible fields of pursuit [5]. So, why quantum?
Different researchers have different reasons for studying quantum computing. Physicists are learning fundamental facts about the quantum behavior of both individual particles and mesoscopic systems. Theoretical computer scientists are finding many fascinating new questions (and answering some of them). But to a computer systems person, quantum computation is about one thing: the pursuit of performance. If practical largescale quantum computers can be built, we may be able to solve important problems that are classically intractable. Potential applications include cryptographically important functions such as factoring, which appears to offer a superpolynomial speedup, and scientifically important problems such as simulations of many-body quantum systems, which may offer exponential speedup. Quantum computers therefore hold out the possibility of not just Moore's Law increases in speed, but a change in computational complexity class and consequent acceleration on these, and possibly other, problems.
I will not directly address criticisms of the possibility of quantum computation [98, 158], except to note that my response is different from that of Aaronson, who is excited by the inherent beauty and theoretical importance of quantum mechanics while searching for the ultimate limits to computation [3]. I, too, admire these factors, but more importantly I believe it is inevitable, as silicon devices continue to scale down in size, that we will have to deal with quantum effects. Many researchers are directing their efforts at mitigating these effects; in my opinion, we will do better by embracing them, even if "quantum computing" ultimately proves to have no computational advantage over classical.
Quantum effects are also being explored for direct exploitation as classical logic, such as recent work on magnetic quantum dot cellular automata [144]. Plasmonics, the study of electromagnetic waves propagating in the surface of a material, is developing rapidly, and might offer improvements in how we move data within classical chips [243]. More broadly, the whole area called spintronics, directly or indirectly manipulating the spin of small numbers of electrons, is already having an impact through the creation of technologies such as magnetic RAM (MRAM) [309, 330]. It has been suggested that classical computers must employ reversible logic to exceed 1022 floating point operations per second (10 zettaFLOPS) [86]. Quantum computation serves as an excellent training ground for engineers destined to work in these areas, as well as providing both fundamental and practical results that influence the technological development of these areas.
My analogy is to the field of robotics. It has been more than eighty years since the original use of the term robot to mean an autonomous, mechanical humanoid (though the idea goes back to antiquity) [322], and several decades since the debut of robotics as a respectable field of inquiry. Yet the humanoid robots of science fiction do not roam the streets of Tokyo in the first decade of the twenty-first century. This does not mean that robotics as a field has been barren; indeed, robots dominate many forms of manufacturing, and the related technologies spun off from robotics research are nearly ubiquitous. Robotics depends on, and serves as an impetus for, research as diverse as computer vision, speech recognition, fuzzy logic, virtual reality, and many mechanical advances. The road to development has been long, and the results to date look nothing like what mid-twentieth century science fiction writers such as Isaac Asimov anticipated, but the results have been extremely valuable nonetheless. So I expect it to be with quantum computing.
Chapter 1 Introduction
We are just started on a great venture. Dwight Eisenhower
The designer usually finds himself floundering in a sea of possibilities, unclear about how one choice will limit his freedom to make other choices, or affect the size and performance of the entire system. There probably isn't a best way to build the system, or even any major part of it; much more important is to avoid choosing a terrible way, and to have clear division of responsibilities among the parts. I have designed and built a number of computer systems, some that worked and some that didn't. Butler Lampson, "Hints for Computer System Design" [187]
As VLSI features continue to shrink, computers that depend on quantum mechanical effects to operate are inevitable [221, 211, 47, 143, 104]. The fundamental architectural issue in these future systems is whether they will attempt to hide this quantum substrate beneath a veneer of classical digital logic, or will expose quantum effects to the programmer, opening up the possibilities of dramatic increases in computational power [108, 89, 88, 35, 38, 279, 127, 3, 196, 233].
Small and unreliable they are, but quantum computers of up to a dozen nuclear spins [79] and eight ions [131] exist. The three most famous quantum algorithms are Deutsch-Jozsa [89], Grover's search [127], and Shor's factoring [279]. All three of these algorithms have been experimentally implemented for small-scale problems [151, 70, 68, 163, 310, 319, 320, 130]. A further extremely broad range of experiments has demonstrated numerous building blocks [326, 29, 296, 169, 224, 64, 251] based on the one- and two-qubit technology demonstrations we will see in Chapter 7. Although many theoretical and practical questions remain open, it seems reasonable to assert that implementation of quantum computation is on the verge of moving from a scientific problem to an engineering one. It is now time to ask what we can build, and what we should build. Various computer architecture researchers have begun investigating the former question, working from the bottom up [78, 146, 241, 240, 305, 145]; this dissertation and the related papers address the latter question, working from the top down [314, 317, 316, 312, 313, 315].
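As a toy illustration of one of the algorithms named above, here is a minimal NumPy simulation of Grover's search on two qubits. This is my own sketch, not anything from the cited experiments; the choice of marked item is an arbitrary assumption, and for N = 4 items a single Grover iteration happens to find the marked item with certainty.

```python
import numpy as np

# Grover's search on n = 2 qubits (N = 4 items); the marked index is arbitrary.
n = 2
N = 2 ** n
marked = 2

# Start in the uniform superposition over all N basis states.
state = np.full(N, 1 / np.sqrt(N))

# Oracle: flip the sign of the marked item's amplitude.
oracle = np.eye(N)
oracle[marked, marked] = -1

# Diffusion operator: inversion about the mean, 2|s><s| - I.
s = np.full((N, 1), 1 / np.sqrt(N))
diffusion = 2 * (s @ s.T) - np.eye(N)

# One Grover iteration = oracle followed by diffusion.
state = diffusion @ (oracle @ state)
probs = np.abs(state) ** 2
print(np.argmax(probs), round(probs[marked], 6))
```

For larger N, roughly (pi/4)*sqrt(N) iterations are needed, which is the source of Grover's quadratic speedup over classical search.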
1.1 Computing Frontiers: Why Study Quantum?
Why should computer engineers study quantum computation, and why now? Certainly the field of classical computer architecture is not moribund, and offers far more immediate impact for much less intellectual risk. Work that increases parallelism, reduces power consumption, improves I/O performance, increases gate speed or reduces data propagation delays is much more likely to be used in the real world, and far sooner than quantum technologies. Intel began sampling a billion-transistor microprocessor chip in October 2005, a 580 square-millimeter chip built in a 90 nanometer process. Some researchers consider integration levels of a trillion transistors per silicon chip possible [213], though we are hardly done digesting the implications of a billion transistors on a chip [246, 178, 56]. Clearly there is room on-chip for many architectural advances. Ubiquitous computing, sensor networks, augmented reality, and mobile systems will no doubt be among the most transformative technologies of the coming decades, relegating today's 3G Internet-connected mobile phones to the status of Neolithic stone adzes [261]. In "back end" systems, continued research on computational grids and storage remains critical. Among computing exotica, electrical circuits fabricated with nanotechnology [344, 32, 205, 304, 267], DNA computing [8], and amorphous computing [5] are other possible fields of pursuit. So, why quantum?
Different researchers have different reasons for studying quantum computing. Physicists are learning fundamental facts about the quantum behavior of both individual particles and mesoscopic systems. Theoretical computer scientists are finding many fascinating new questions (and answering some of them). But to a computer systems person, quantum computation is about one thing: the pursuit of performance. If practical large-scale quantum computers can be built, we may be able to solve important problems that are classically intractable. Potential applications include cryptographically important functions such as factoring, which appears to offer a superpolynomial speedup, and scientifically important problems such as simulations of many-body quantum systems, which may offer exponential speedup. Quantum computers therefore hold out the possibility of not just Moore's Law increases in speed, but a change in computational complexity class and consequent acceleration on these, and possibly other, problems.
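The factoring speedup can be made concrete with a back-of-the-envelope comparison: the best known classical algorithm, the general number field sieve, is sub-exponential in the bit length of the number, while Shor's algorithm is polynomial. The sketch below plugs in the standard asymptotic forms with constants omitted, so only the growth comparison is meaningful, not the absolute operation counts.

```python
import math

# Rough operation-count scaling for factoring an n-bit number.
# Constants and lower-order terms are dropped; only growth rates matter here.

def gnfs_ops(bits):
    # General number field sieve: exp((64/9)^(1/3) (ln N)^(1/3) (ln ln N)^(2/3))
    ln_n = bits * math.log(2)  # ln N for an n-bit number N
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

def shor_ops(bits):
    # Shor's algorithm: roughly O((log N)^3) gates
    return bits ** 3

for bits in (512, 1024, 2048):
    print(bits, f"{gnfs_ops(bits):.2e}", f"{shor_ops(bits):.2e}")
```

Doubling the key size multiplies Shor's cost by a small constant but multiplies the sieve's cost by many orders of magnitude, which is the sense in which the speedup is a change in complexity class rather than a constant-factor win.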
I will not directly address criticisms of the possibility of quantum computation [98, 158], except to note that my response is different from that of Aaronson, who is excited by the inherent beauty and theoretical importance of quantum mechanics while searching for the ultimate limits to computation [3]. I, too, admire these factors, but more importantly I believe it is inevitable, as silicon devices continue to scale down in size, that we will have to deal with quantum effects. Many researchers are directing their efforts at mitigating these effects; in my opinion, we will do better by embracing them, even if "quantum computing" ultimately proves to have no computational advantage over classical.
Quantum effects are also being explored for direct exploitation as classical logic, such as recent work on magnetic quantum dot cellular automata [144]. Plasmonics, the study of electromagnetic waves propagating in the surface of a material, is developing rapidly, and might offer improvements in how we move data within classical chips [243]. More broadly, the whole area called spintronics, directly or indirectly manipulating the spin of small numbers of electrons, is already having an impact through the creation of technologies such as magnetic RAM (MRAM) [309, 330]. It has been suggested that classical computers must employ reversible logic to exceed 10^22 floating point operations per second (10 zettaFLOPS) [86]. Quantum computation serves as an excellent training ground for engineers destined to work in these areas, as well as providing both fundamental and practical results that influence the technological development of these areas.
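Reversible logic, mentioned above, means every gate is a bijection on its input space, so no information is erased and, in principle, no Landauer heat of kT ln 2 per bit need be dissipated. A minimal sketch (my own illustration, not from the cited work) verifying that the Toffoli gate, the standard universal reversible gate, is bijective and self-inverse:

```python
from itertools import product

# Toffoli (CCNOT) gate: flips the target bit c iff both controls a, b are 1.
def toffoli(a, b, c):
    return (a, b, c ^ (a & b))

inputs = list(product((0, 1), repeat=3))
outputs = [toffoli(*bits) for bits in inputs]

# Bijective: every 3-bit pattern appears exactly once among the outputs.
assert sorted(outputs) == sorted(inputs)
# Self-inverse: applying the gate twice recovers the original input.
assert all(toffoli(*toffoli(*bits)) == bits for bits in inputs)
print("Toffoli is reversible on all", len(inputs), "inputs")
```

An irreversible gate such as AND maps 4 inputs onto 3 outputs, so the input cannot be recovered; that erasure is exactly what reversible designs avoid.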
My analogy is to the field of robotics. It has been more than eighty years since the original use of the term robot to mean an autonomous, mechanical humanoid (though the idea goes back to antiquity) [322], and several decades since the debut of robotics as a respectable field of inquiry. Yet the humanoid robots of science fiction do not roam the streets of Tokyo in the first decade of the twenty-first century. This does not mean that robotics as a field has been barren; indeed, robots dominate many forms of manufacturing, and the related technologies spun off from robotics research are nearly ubiquitous. Robotics depends on, and serves as an impetus for, research as diverse as computer vision, speech recognition, fuzzy logic, virtual reality, and many mechanical advances. The road to development has been long, and the results to date look nothing like what mid-twentieth century science fiction writers such as Isaac Asimov anticipated, but the results have been extremely valuable nonetheless. So I expect it to be with quantum computing.
Tuesday, May 30, 2006
Equal Time
In a recent post I dissed academic.live.com. It's now working for me, more or less, though there are some strange things that might either be intentional and bad, or just problems with Firefox on Linux. In particular, scrolling in the internal panes is erratic, and tends to take very large jumps. I do like the multi-pane interface that pops up abstracts and other info, but I think the screen real estate isn't used very efficiently, and it means you wind up having to scroll in multiple windows, which is clumsy. Moreover, the database itself is a tad erratic; sometimes, journal names or volumes or even years are missing, which would be a fatal flaw without good links to the documents. Still annoying, as I'm often looking for either recent or old papers, and it's hard to know which a given paper is without downloading it and opening it.
And I still want to know how they actually rank things.
Since I'm a big fan of Linux and free software in general, I have to rant: OpenOffice 2.0 on Fedora Core 4 (running in Japanese mode) still crashes on me on a regular basis. Often without a message or warning at all, it just blips and disappears. Recently I was working on a presentation, and it would drop out on me every 20-30 minutes. Given that file saves can take a long time, and restarts are slow, this means I lost a lot of time. Finally, things deteriorated to the point where it crashed every time I started it, without a message at all. I spent an hour trying everything, digging through the OO registry XML files and everything...then discovered that I was out of disk space. Geez. Stupid situation, to be sure, but OO can't figure that out and give me a warning?
Monday, May 29, 2006
Setting the Standard
Digging for something else in my bibliography file, I ran across my entry for C.A.R. Hoare's "Communicating Sequential Processes", CACM Aug. 1978, and I wondered, "How many times has that been cited?" Well, according to scholar.google.com, 6027 times, so far.
That's the standard, then: a world-changing paper in CS gets cited about six thousand times in a little under thirty years.
I'm idly wondering what might be cited more times. Van Jacobson's "Congestion Avoidance and Control" might be the top networking paper, I'm not sure; but it exists in several forms, complicating things a bit. Scholar cites it as 1995, when it appeared in a collection, and claims a bit under 3,000 times for that.
Anybody either know what the top-cited paper is, or want to take a shot at one that beats CSP?
Thursday, May 25, 2006
Thought-Controlled Robot
Now you can lie in your nice, comfortable MRI machine and control your robot. Or at least its hand. According to an article in the Japan Times, Honda and ATR have developed a system that can read MRI signals well enough to distinguish among a fist, an open hand, and a V sign in a few seconds, and order a robotic hand to perform a similar movement. (Yes, that's "gu, choki, pa", or "rock, scissors, paper".)
This is an improvement over other techniques for thought-controlled devices, which often require electrodes actually implanted in the brain, and/or require significant training of the user. The drawback is that it requires an entire MRI machine :-). I have no idea what this would do for Stephen Hawking; it depends on which of his neural functions have deteriorated.
Tuesday, May 23, 2006
ITRS Emerging Research Devices
I was reading George Bourianoff's The Future of Nanocomputing, from IEEE Computer, and it led me to a section of one of my favorite references, the 2005 edition of International Technology Roadmap for Semiconductors, titled "Emerging Research Devices". 200+ references on carbon nanotubes, quantum cellular automata, etc. -- numerous alternatives to standard CMOS for achieving classical computation. On quantum computing, which it calls "coherence quantum computing" to differentiate it from classical computing performed with quantum effects, it defers to the ARDA Quantum Information Science and Technology Roadmap. All worth a look.
Sunday, May 21, 2006
Molecular Tapas Bar
I sometimes learn about things in Tokyo via circuitous routes. The Daily Yomiuri has a regular section on Sundays that's produced by the Times of London. Today's issue had a column by Richard Lloyd Parry. His rhetorical goal is to show that Tokyo is getting its mojo back, like the days of the Bubble. Parry mentions people eating sushi off the bodies of naked beauty queens during the height of the Bubble. Personally, I suspect that's some sort of urban legend morphed from a scene involving live shrimp in Itami Juzo's Tampopo, though I could be wrong.
Parry's column is primarily about dinner at the Molecular Tapas Bar at the Mandarin Oriental. Food from test tubes, by chef Jeff Ramsey. If Tokyo can support food this challenging and decadent, it's back, right?
Food from test tubes? I am so there. While it doesn't sound quite as avant-garde as Homaro Cantu's Moto in Chicago, which I haven't had the pleasure of trying yet, this will definitely be an interesting experience.
Our reservations are for July 2nd (they weren't as hard to get as Parry would have you believe). Stay tuned for a report.
P.S. Parry mentions longing for a MOSburger, and compares it to McDonald's, but in fact MOSburger was started after the owner-to-be visited Tommy's.
Saturday, May 20, 2006
4.8 in Southern Chiba
No big deal, at least here in Abiko. Not quite all the way to the bottom end of the Boso Peninsula.
Wednesday, May 17, 2006
Tommy Turns Sixty
While we're on the subject of food, Tommy's turned sixty, according to an article in yesterday's L.A. Times. Events were complete with Elvis impersonators and lots of people with personal stories of arduous treks and extraordinary effort to reach the Land of Milk and Honey (or, chili burgers and tamales).
My flight to Japan the first time I moved here (March 1992) was on a Saturday morning. A group of about twenty of my friends (including at least one vegetarian) met at Tommy's for breakfast before seeing me off to the airport. Somewhere, there are photos...
And somewhere, I still have my Tommy's Fortieth Anniversary t-shirt.
Space Kimchi
Oh, I love this. Kimchi in space! Man, I'm drooling already. Sign me up - I love kimchi.
But...must...resist...making bad joke...willpower...noooo...I give: ISS astronauts will now be rocket powered!
From the article: "Space kimchi is expected to be of great help in stimulating astronauts’ appetite with its zest and spices. In addition, it is effective in promoting the intestinal functions, which tend to be somewhat sluggish in space, with abundant fiber." You got that right!
Irradiated kimchi...wow...
Monday, May 15, 2006
Academic
Those Microsoft guys, what pranksters! Always out for a laugh.
There was a short blurb in Science recently about academic.live.com, a new search engine Microsoft has created for academic journals. Since I'm such a fan of scholar.google.com, I figured I'd give academic a whirl.
But I don't use Windows, only Linux (cue "Jaws" theme). I went to the site, and it popped up a nice search window, with FAQ below and other tidbits. I typed in something, hit search, and it moved to "Loading..." and never left that state.
I thought, "A website that doesn't work with my Linux box! From Microsoft! How unusual. Let's see if it works on Windows..." so I turned to my neighbor, who does use Windows (the Japanese version of XP), and got him to try it. Enter something, hit search, and...it hung his browser! Not even the kill button in the toolbar works. Can't iconify or background it. At least the task manager managed to kill it, though.
Just made my day...
P.S. From their FAQ:
How do you determine relevance? Are you using citation counts in the relevance ranking?
We are determining relevance based on the following two areas, as determined by a Microsoft algorithm:
1. Quality of match of the search term with the content of the paper
2. Authoritativeness of the paper.
I can kind of imagine what "quality of match" means, but what's "authoritativeness of the paper"? They explicitly rule out citation count (claiming that an iffy cite count is worse than no cite count), but never explain what they do use. Maybe a journal impact factor, like the one at CiteSeer? Inquiring minds want to know...
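Whatever Microsoft's actual algorithm is, a two-factor ranking like the one their FAQ describes might plausibly look something like the sketch below: a weighted combination of a text-match score and an "authoritativeness" prior (here, a made-up journal impact factor). Every number, weight, and field name here is an illustrative assumption; nothing in the FAQ discloses how the factors are combined.

```python
# Hypothetical papers with a text-match score and a journal impact factor.
papers = [
    {"title": "CSP", "match": 0.9, "impact": 3.5},
    {"title": "Obscure note", "match": 0.95, "impact": 0.2},
    {"title": "Survey", "match": 0.6, "impact": 2.0},
]

def relevance(paper, w_match=0.7, w_auth=0.3, max_impact=5.0):
    # Normalize impact to [0, 1] and take a weighted sum; the weights are
    # illustrative assumptions, not anything Microsoft has published.
    return w_match * paper["match"] + w_auth * paper["impact"] / max_impact

ranked = sorted(papers, key=relevance, reverse=True)
print([p["title"] for p in ranked])
```

Note how even a modest authority weight lets a well-matched but obscure note be outranked by a slightly worse-matching paper from a high-impact venue, which is presumably the behavior they're after.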
Saturday, May 13, 2006
Carbon Nanotube Origami
The Daily Yomiuri has a short article (essentially cribbed from a Japanese AIST press release) about progress in manufacturing single-wall carbon nanotubes (SWCNTs or SWNTs). The article is actually a little hard to read, but it seems they've increased the efficiency of production of high-quality nanotubes by a factor of 100, by some measure. They also claim yield is up from 50% to 97.5% for defect-free tubes, though I don't see anything about the length of the tubes...wait, apparently 0.4 to 50 nm in diameter, and 1 um to several tens of microns long.
But what's cool enough to warrant a blog posting here is that they made a sheet of SWCNT material, 9 microns thick, big enough to fold into an origami crane (tsuru). The print edition of DY has a photo of it sitting on someone's hand, but the photo isn't in the online article. The press release has a different picture, but no scale; it's actually roughly "normal" size -- probably folded from a sheet at least 10cm by 10cm.
Friday, May 12, 2006
Now Available: Architectural Implications of Quantum Computing Technologies
My paper with Mark Oskin, "Architectural Implications of Quantum Computing Technologies," is now available from the archives of ACM Journal on Emerging Technologies in Computing Systems (JETC). It appears in the Jan. 2006 issue.
Quantum computing researchers are familiar with the DiVincenzo criteria which a technology must meet in order to be a candidate for functional quantum computing devices, and with the notion of an error threshold below which a technology meets the basic mathematical notion of scalability. This paper is the view of two computer systems engineers/researchers on the practical issues that will determine how easy or hard it is to build a large-scale, high-performance quantum computing system.
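The "error threshold" notion mentioned above can be made concrete with a standard back-of-the-envelope formula: when the physical error rate p is below the threshold p_th, the logical error rate of a distance-d code falls exponentially as d grows. The sketch below uses the generic threshold-theorem scaling with illustrative numbers; it is not a calculation from the paper.

```python
# Generic threshold-theorem scaling: with physical error rate p below a
# threshold p_th, a distance-d code gives a logical error rate roughly
#   p_L ~ A * (p / p_th) ** ((d + 1) // 2)
# The prefactor A and threshold value here are illustrative assumptions.
def logical_error(p, p_th=0.01, d=3, A=0.1):
    return A * (p / p_th) ** ((d + 1) // 2)

p = 0.001  # a physical error rate one order of magnitude below threshold
for d in (3, 5, 7):
    print(d, f"{logical_error(p, d=d):.1e}")
```

Each increase in code distance buys another factor of p/p_th in reliability, which is why being safely below threshold, rather than marginally below it, matters so much in practice.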
There is more news about my personal research, but that will have to wait for later postings, as I still have many thoughts to organize...
Alex @ Caltech?
Alex Doonesbury is considering going to Caltech. We're all waiting on pins and needles to see what her final decision is. Surely there are some pranksters at Tech who are capable of, um, influencing her decision?
Thursday, April 20, 2006
4.8 in Tochigi-ken
It's so incredibly windy here in Abiko right now, and our apartment building rattles and sways so at the slightest provocation, that at first I wasn't sure, but the TV and websites tell me 4.8 in southern Tochigi-ken.
I haven't been blogging them all, but we've had a couple of 4.0+ in the general neighborhood in the last few weeks, after several months of relative quiet.
Wednesday, April 12, 2006
CFP: International Workshop on Quantum Programming Languages
At Oxford in July. I haven't been to one of the workshops, but the papers from the first three were intriguing. Unfortunately, I'm probably too busy finishing my thesis to prepare a paper on my Aqua toolkit, which includes an assembler and circuit compiler.
See the call for papers; submission deadline is May 10.