Monday, February 26, 2024

Nerd Nite Videos Online!

 

The full set of videos of Nerd Nite Tokyo talks, going back to June 2016, is now available on YouTube!

I have talked twice at NN:

Thanks to Andrew Todd for making the entire archive available. Join us at upcoming events!




Thursday, February 22, 2024

Spelunking CACM (1975): Phong Shading



Have you heard of the Phong shader? A shader is a small function or program that determines the color of each individual pixel in a computer-generated image. Gouraud’s 1971 shader is one of the simplest, both to understand and to implement. In 1975, along comes Bui Tuong Phong and completely changes the game. (Well, Wikipedia says he first published his algorithm in 1973, in his Ph.D. thesis.)

The figure on the left above uses a flat polygon shader by Newell, Newell and Sancha: each polygon is drawn with a single, uniform brightness, a technique known as flat shading. The figure on the right uses Phong's new shader. The difference is incredible, isn't it? For the one on the left, you would think either that the vase is itself faceted, or that it's reflecting rectangles such as windows. The one on the right is much more natural.

Gouraud's shader doesn't flat-shade each polygon.  It begins with the normal vector for each polygon, then finds a normal at each vertex by averaging the normals of the surrounding polygons, a clever idea.  The vertex normal gives an intensity at each vertex, which is then interpolated linearly along each edge.  Finally, when creating the pixels in each scan line, it looks left and right from each pixel to find the nearest edge in each direction, and interpolates linearly between those two edge values. I'm a little surprised this works with polygons that are rather skewed, but if you think about it, it's generally fine.
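
To make the scanline step concrete, here is a toy sketch in Python (my own illustration, with made-up edge intensities, not code from Gouraud's paper):

```python
# A toy sketch of Gouraud-style interpolation (illustrative only; a real
# rasterizer tracks edges incrementally rather than recomputing per pixel).

def lerp(a, b, t):
    """Linear interpolation between a and b, with t in [0, 1]."""
    return a + (b - a) * t

def gouraud_scanline(x_left, i_left, x_right, i_right):
    """Intensity at each pixel of one scan line, interpolated between the
    intensities already computed at the left and right edge crossings."""
    pixels = []
    for x in range(x_left, x_right + 1):
        t = (x - x_left) / max(x_right - x_left, 1)
        pixels.append(lerp(i_left, i_right, t))
    return pixels

# The edge intensities would themselves be interpolated from vertex
# intensities (vertex normal dotted with the light direction); here we just
# pick two values to show the idea.
print(gouraud_scanline(10, 0.2, 20, 0.9))
```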

The problem arises because the interpolation is smooth within the polygon, but the slope of that interpolation has a discontinuity where you cross boundaries. This results in an effect known as Mach bands, where the human visual system assigns a ridge of brightness just to one side of that discontinuity, visible in the figure on the left below (or in the first figure if your browser shows them stacked).



To really see the Mach band effect, zoom way in on the figure on the left above:



If you concentrate only on the region on one side of the boundary, you will see that the shading doesn't have the bright line that seems to be there in the zoomed out picture. It's just an artifact of your brain's image processing.

Most importantly, in Gouraud shading, it is the pixel intensity values that are linearly interpolated. With Phong shading, instead, the normal vectors are interpolated. Once you have the normal vector for the point, you can calculate the appropriate projections to determine the specular reflection as well as the diffuse brightness.  This means calculating cosines, or approximations of them, and taking square roots. It's a lot more math than Gouraud, but the quality of the results in the figure on the right above is obvious. That quality comes largely from the first derivative of the shading now being continuous as you cross polygon boundaries.
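
Again as a toy illustration (my own sketch, with invented normals, light and material parameters; I use the Blinn-style halfway vector for the specular term, whereas Phong's original reflects the light vector), per-pixel Phong shading looks roughly like this:

```python
import math

def normalize(v):
    mag = math.sqrt(sum(c * c for c in v))      # the square root Phong pays for
    return tuple(c / mag for c in v)

def phong_pixel(n_left, n_right, t, light, view,
                ambient=0.1, kd=0.7, ks=0.4, shininess=16):
    """Shade one pixel: interpolate the *normal*, renormalize, then light it."""
    n = normalize(tuple(a + (b - a) * t for a, b in zip(n_left, n_right)))
    diffuse = max(0.0, sum(a * b for a, b in zip(n, light)))   # cos(theta)
    # Blinn-style halfway vector; Phong's original reflects the light vector.
    h = normalize(tuple(a + b for a, b in zip(light, view)))
    specular = max(0.0, sum(a * b for a, b in zip(n, h))) ** shininess
    return ambient + kd * diffuse + ks * specular

print(phong_pixel((0.0, 0.0, 1.0), (0.6, 0.0, 0.8), 0.5,
                  light=normalize((1.0, 1.0, 1.0)), view=(0.0, 0.0, 1.0)))
```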

Oh, and a historical note: I mentioned in the 1974 spelunking that Utah was (and is) an important proving ground in computer graphics. Guess what? Both Gouraud and Phong were at the University of Utah. And, sadly, Phong died very young, of leukemia, shortly after publishing this paper.

Shifting gears, Weber and Gilchrist (presumably both men, based on their given names) reported on the progress of women in the field of computing, mostly from the point of view of wages in comparable jobs rather than job opportunities and hiring. Unfortunately they didn't have good numbers on scientific programming jobs and the like, but they did find modest growth among business analysts -- up to 13% women! Depressingly, the advances their conclusion hoped for have been only partially realized:
In looking to the future, it will be interesting to observe whether or not the number of women graduates increases in the fields related to computer design and use. It will be even more interesting to see if the percentages of women in the higher levels of programming and systems analysis increase as the women now in the lower levels gain experience and become eligible for promotion... For the present, the conclusion must be reached that there is still a long way to go before the statistics will indicate that women enjoy complete equality with men in the computer field.

Guy Steele himself wrote a long paper on garbage collection using a shared-memory multiprocessor. Very forward-thinking. I'm actually a little surprised at the complexity of it.

Overall, to me, 1975 was a much quieter year than the explosive year of 1974.


Monday, February 19, 2024

Maebashi: the Second Cyber Civilization Research Center

I should have commented on this when the news came out a while ago: Professor Jiro Kokuryo will be retiring from Keio in a year, after which he will spread the gospel of Cyber Civilization by creating a new CCRC at Kyoai Gakuen University in Maebashi. Great news for him personally, and the new center will be the first outpost in our plan for global domination. Congratulations!

(Research) Distributed Systems that Influenced Me

 Recently I wrote about some heavily-used, commercial distributed systems that influenced me. They were the obvious ones: NFS, VAXclusters, X Windows. It's important to note that those systems, in turn, were influenced by the seminal work of the Xerox PARC team that did the Alto, Ethernet, the laser printer, the mouse and GUI, and object-oriented programming. I read about them but I didn't have the privilege of working with them.  The PARC outputs became commercial systems in the 1970s, but they were so radical that it is almost fair to call them research instead.

And, of course, before PARC there was the legendary Mother of All Demos. I'm not sure when I first heard about it; not until after much of my study, I think. Now you can watch it on YouTube, but in my formative years it wasn't available and so was myth more than actual influence.

My basic story goes something like this: I arrived at Caltech wanting to be an astronomer or paleontologist, but wound up drifting, studying CS, EE, and optics (Applied Physics). (The optics study pays big dividends now in my quantum research, especially Quantum Internet.) I learned to program a little, including in CS1 taught by Carver Mead and a great lab by my own advisor, Chuck Seitz, on 8-bit and 16-bit microprocessors. (If memory serves, we did assembly for 8088, 8086, Z-80, 6502 and 6800.) Thanks to Ron Ayres, who taught me silicon compilation and who worked for the MOSIS Service, I wound up with a job at USC/ISI in the Computer Center as VMS sysadmin supporting MOSIS, and discovered that I loved working on computer systems (a field I have never really left since).

At some point, I was wandering the ISI library, and pulled the (paper) proceedings of ASPLOS-III off the shelf, and basically devoured it cover to cover. (I was particularly intrigued by a paper showing how to balance vector and scalar performance in an architecture.) I decided I was interested in computer systems research, and applied to but wasn't accepted into Ph.D. programs at Caltech, MIT and UT-Austin. Instead, I entered the USC master's program part time while continuing to work at ISI. For some reason I picked EE-Systems (computer engineering) rather than CS. My advisor, I believe, was Alice Parker, but my true mentors were Kim Korner and Deborah Estrin, with a little influence from Peter Danzig, all from the CS department. It was with them that I studied distributed systems.

In one seminar class taught by Deborah and Kim, we read about sixty papers, including a few on AI and other broad topics that Kim felt like we should study. Kim's graduate-level OS class was essentially reading the important literature. I also took a summer class on federated databases by Peter, and a couple of parallel architecture classes, one that was more lecture style and one that was focused on the literature. Over those four classes focused on the computing systems research literature, I probably read in excess of two hundred papers. I should dig up those reading lists.  (I did the non-thesis option because I knew there were a lot of classes I wanted to take; in addition to those four and the architecture lecture, I took a compiler class, an OS lab class (which I would teach a few years later), probability, and queueing theory, for a total of nine classes, each of which was a lot of work.)

In that seminar-style class taught by Deborah and Kim, we studied these systems and concepts, and quite a few more (in no particular order):

  • Lamport’s clocks: To me, this was a revelation. It's one of those ideas that changes the way you think, which of course is one of the goals of education, right? (For the flavor of it, see the little sketch after this list.)
  • Byzantine generals (Lamport again): Since I was a VMS sysadmin, this one also helped me to understand what I was seeing when the VAXcluster hung. Not that it really helped me debug the cluster; usually, we just rebooted nodes until we found the one that was the problem, then the rest of the cluster would spring back to life.
  • Virtual time (USC and/or JPL): WOW! Speaking of ideas that changed my perception of the world, virtual time showed how optimistic computation can be used to create a system that computes on an effectively globally synchronous model. Jefferson wrote in terms of simulations that have a notion of time, but the techniques are far more general than just physical simulations. The paper lists Jefferson's affiliation as USC, but I think he was at JPL by the time I was reading this. I have never met him.
  • Amoeba (Vrije University, Amsterdam): Among the systems on this list, Amoeba probably influenced my thinking more than the others here, though it had important drawbacks that meant it was never competitive as a production system. It was just too different from UNIX at a time when that meant death. But its true distributed object model, with the expectation that objects could execute seamlessly on VAX, 68000-family or other processors, was a critical, and radical, goal. In fact, I tried to follow that model myself in a project I did in one of my grad school classes, shipping code from node to node across a socket for recompilation and execution at the far end.
    Among other features, it was more or less what we would call a microkernel, with services focused on supporting individual program objects rather than a full process as it is understood today. Communication between objects was a form of RPC.
    The 1990 CACM paper linked to above (which I think was the paper we read) doesn't mention the Amoeba project's largest ongoing contribution to computing: the Python programming language.
  • Sprite (Berkeley): my primary memory of Sprite is its process migration, which was implemented early on and extended and studied into the mid-1990s. Its single-system-image approach is important and powerful, but not easy to make robust and high performance. Much later, I was impressed and intrigued by Mor Harchol-Balter's observation that process lifetimes follow a long-tailed distribution, making process migration more valuable than it might be otherwise. It is, however, difficult to pull off; you have to maintain open files, and any names or services that the process was accessing before migration have to remain transparently accessible after the move. Once the process has moved, it also becomes more fragile; it is now dependent not only on its current machine environment, but also on the continued accessibility of the server from which it migrated, which must continue to provide some of these services. Making this robust is hard.
    And so, today, that migration isn't a feature of any major OS: instead, we migrate entire virtual machines in order to achieve load balance within a data center! This VM migration is often engineered so that, post-migration, the VM is no longer dependent on the machine it just left, which improves robustness and allows that machine to be taken down for maintenance (or shut down altogether). This depends, of course, on any external devices -- most especially storage, in block or file servers -- being independently accessible.
  • V (Stanford): Across the bay from Berkeley, a separate distributed OS was being developed. One of the most interesting aspects of V was VMTP, a transaction-oriented network transport protocol that supported reliable multicast and was useful for RPCs to multiple servers at the same time. VMTP is one of the few protocols to arise from OS research of that era that resulted in an RFC. We considered adopting VMTP or elements of its design when I worked at Quantum on a network-attached storage system. I am intrigued that the only author listed on the RFC is David Cheriton himself.
  • LOCUS (UCLA): I was not particularly a USC partisan, so the cross-town rivalry didn't really figure into my thinking, but for some reason LOCUS didn't make a particularly strong impression on me. Although its goal was a single system image (SSI), neither the mechanisms nor the results seemed particularly radical to me at the time. Later I came to realize how valuable the remote I/O mechanisms were and how important Gerry Popek was; I still have my students read his formal description of virtualization from the early 1970s. Popek also was Ph.D. advisor to my friend John Heidemann, who has gone on to become one of the most-cited computer systems people of all time.
  • Grapevine (Xerox PARC): The granddaddy of them all. PARC is famous as the birthplace of the modern office LAN environment, with GUI workstations, file servers and network-connected laser printers. The Alto was the machine, and Grapevine was the distributed naming and messaging system that made it all work together.  Grapevine had a naming system which, to me, has echoes in LDAP; it was focused on naming of services rather than just machines, and so could be used for discovery as well as simple lookup. On top of this naming service was built a messaging (email, more or less) system as well as a method of finding file servers. It included authentication of users in ways that were left out of contemporary systems, a weakness that echoes down through today. It even ran in something of an inter-office fashion, though I think the federation of systems was probably a little wonky.  Less than a decade later, Grapevine would be displaced by Sun’s machines, NFS, RPC, LPD, and SMTP, all built on top of DNS, TCP, UDP and IP running over the Ethernet created at PARC. Arguably, Grapevine would have been a better platform for building those early services, though I don’t think the implementation language(s), APIs and libraries were well matched to the move to UNIX and C. I wish I had had some direct experience with Grapevine and Altos; I believe there were still a few around ISI when I arrived in 1985, though I don’t think they were in active use.
  • Condor (Wisconsin): the hunter of idle workstations. A great idea, scavenging CPU cycles from idle machines to use in an office-wide batch queueing system. Doing this well requires sharing file namespaces across machines.  I remain baffled that UNIX has never had a true distributed batch system for ordinary users to submit compute-heavy jobs.
  • Andrew File System (AFS) (CMU): When I first encountered it, AFS seemed to me like a "better NFS".  It doesn't have actual locking, but it caches files locally, allowing offline writes and working better in a wide-area environment. That wide-area ability comes partly from the reduced latency of operating on a nearby copy, partly from its security model, and partly from its use of TCP by default instead of UDP. This local-copy approach also improves global performance and scalability, but it is not designed to support a truly shared write workload to the same file; it depends on a callback approach to resolving write conflicts, including when reconnecting after a network partition. One nice feature is that the AFS file namespace was essentially global because it incorporated the origin of the file in the name, almost but not quite a precursor to URLs.  AFS never quite took off as a one-for-one replacement for NFS, though, perhaps because NFS worked "well enough" for most installations most of the time, perhaps because AFS was initially free but later controlled by a company that wanted to commercialize it. Most likely it was just NFS's first-mover advantage.
    A little more than a decade after studying it, while working at Nokia, I would propose using AFS in mobile phones as a file sharing system and automatic backup system for photos and documents, years before the iPhone, Google Drive and other cloud storage. Unfortunately, I didn't sell the vision well enough, and the idea didn't go anywhere. Sigh. That's one of my biggest regrets in my career. I think we could have changed the way people thought of mobile terminals well before the iPhone became a thing.
    Around Christmas one year, I did prototype IPv6 code for AFS; I think my code was partially or completely rewritten before becoming part of the main AFS distribution, though.
  • DNS (USC/ISI) (I think; we might have done this in another class.): I also played bridge and poker with Paul Mockapetris, who was at ISI, so it seemed a bit of a lark to be studying his work. I didn't grasp at the time how important DNS was, nor how important Paul would become. Unlike most of the other things listed here, DNS isn't a programmable platform in and of itself, but the depth of its relationship to the key ideas we all study in distributed systems work cannot be overstated. The widely distributed, federated, high-availability, soft-failure, eventual-consistency model is crucial to global-scale systems.
  • Plan 9 (Bell Labs) (I think; we might not have done this in class. It’s even possible that I didn’t really study it until returning to ISI in 1995.): Plan 9 has an extensive Wikipedia entry, so quite a number of people were apparently influenced by its "everything is a file" model, where processes, IPC, sockets, and everything else appeared somewhere in the file tree under /.  I think Plan 9 might have been the origin of /proc. Each process could also have its own namespace, an idea that now shows up in virtual root directories used as a common means of isolating processes. (chroot existed long before this, of course.) (Edit: Just two days after I published this, The Register published a very thorough description of Plan9, including its current state. The article was apparently based on a talk given by the author at the FOSDEM conference.)
  • Closure in Prospero (Washington): speaking of naming, this short paper by Cliff Neumann (who would shortly come to ISI, where he remains) accompanies each name with a closure, a program that resolves that name. I liked this idea.
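
Since Lamport's clocks came up above, here is a minimal sketch of the rule itself (my own toy illustration, not code from any of these systems): each process keeps a counter, ticks it on every local event and send, stamps outgoing messages with it, and on receipt takes the maximum of its own counter and the message's timestamp, plus one.

```python
class LamportClock:
    """Toy logical clock: one counter per process, max-plus-one on receive."""
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self):
        self.time += 1
        return self.time            # timestamp carried on the message

    def receive(self, msg_time):
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
a.local_event()                      # a: 1
t = a.send()                         # a: 2, message stamped 2
b.local_event()                      # b: 1
print(b.receive(t))                  # b: max(1, 2) + 1 = 3
```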

I think logical clocks, Byzantine generals, and virtual time were probably in Kim's advanced operating systems class, rather than the class with Deborah; in Kim's class we also read about Multics, VM (the IBM OS), and other already-classic systems that seemed a little alien to me and rather opened my eyes. And, of course, we learned about Mach, the epitome of the microkernel system and also a topic of research at ISI.
I mentioned that deviating from the UNIX model meant death for your system in the 1990s. This was, of course, the Killer Micro era. Microprocessors were coming out at a furious pace, with clock speeds and overall performance climbing to breathtaking levels, as the CISC architectures (x86, 680x0) were displaced by RISC (MIPS, SPARC, PowerPC, PA-RISC). Performance was moving so fast that there was no need to deploy multiple processors on ordinary programming tasks; by the time you got your code working, there was a processor with twice the performance available.

A glance at the origin institutions of the list above shows clearly why California was the place to be. Some versions of Silicon Valley’s history focus on the literal silicon (chips), and perhaps rust (disk drives). Some focus on PARC's outsized role. But it's important to be aware of the overall fecundity of systems work in general and California in particular in the 1970s, 80s and 90s.

Oof, more than enough for now. This entry has gotten a lot longer than I intended, even though it still only scratches the surface of ideas that have influenced my own thinking. Thanks for reading this far!

Sunday, February 11, 2024

Spelunking CACM, Vol. 17 (1974): Clipping Polygons, Parallelism, and UNIX

Well, we are finally within half a century of the present! And what a year 1974 was...

Up to this point, we have not had a lot of graphics-related articles, but of course Ivan Sutherland is the father of it all. He and Gary Hodgman described clipping algorithms for polygons, in a well-cited paper. This paper is fairly long (11 pages) for this era in CACM, though a good fraction is figures and pseudocode. The authors were at Evans & Sutherland, spun out of the University of Utah in 1968, perhaps the first computer graphics company and the training ground for some of the most important people in the field, some of whom went on to found other companies. I love that the authors even consider polygons that surround the observer (as in the figure above). Also gotta like the sentence, "We are somewhat chagrined that the obvious extension of work on line clipping with which we have been involved [11] kept us so long from seeing the simplicity of the present approach."

Leslie Lamport showed how to parallelize an in-place computation on an n-dimensional array with nearest-neighbor dependencies by working along the diagonal from one corner toward the opposite corner, in a technique he called the hyperplane method. Every entry on a line (2D), plane (3D) or hyperplane (>3D) perpendicular to a line extending along the diagonal can be updated at once, safely, with the amount of parallelism in each time step growing as you get farther from the corner. It’s mathematically elegant, and might still be useful when each entry is some complex object, but today’s practice is to cache the previous value long enough to allow entire arrays to update at once, where “cache” here could be “send your current value to all neighbors in a message”.
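
As a toy illustration of the hyperplane method (my own 2-D sketch, run sequentially here rather than in parallel): for an in-place update where each cell depends on its already-updated north and west neighbors, every cell on an anti-diagonal i + j = k is independent of the others on that anti-diagonal, so they could all be updated at once.

```python
def hyperplane_sweep(a, f):
    """In-place update a[i][j] = f(a[i-1][j], a[i][j-1], a[i][j]).
    All cells on the anti-diagonal i + j = k depend only on earlier
    anti-diagonals, so the inner loop could run fully in parallel
    (here it is just sequential)."""
    n, m = len(a), len(a[0])
    for k in range(2, n + m - 1):          # sweep anti-diagonals away from the corner
        for i in range(max(1, k - m + 1), min(k, n)):
            j = k - i
            a[i][j] = f(a[i - 1][j], a[i][j - 1], a[i][j])
    return a

grid = [[1.0] * 5 for _ in range(4)]
print(hyperplane_sweep(grid, lambda north, west, old: north + west + old))
```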

Since I worked on hierarchical storage management (HSM) systems, there is a soft spot in my heart for a paper proposing a heuristic for planning file placement on removable disk packs. Howard Lee Morgan's affiliation in this paper is listed as Caltech, which also intrigues me, since his Wikipedia page doesn't list Caltech. (At least, I assume it's the same guy; how many Howard Lee Morgans would have been in CS in 1974? And I can't find any other mention of the name associated with Caltech.)

Definitely have to cheer on my friend Mary Shaw with an early (her first?) CACM article, on efficient compilation. I have only known her since the COVID-19 pandemic and only remotely, but in this article she writes as clearly as she speaks today. (Edit: Mary tells me this was her thesis work.)

Ah, Hydra. Back in 1970 I talked about Per Brinch Hansen's Nucleus; this one is another important paper in the history of operating systems. It's the first full multiprocessing kernel of which I am aware, and talks about resources and objects in modern ways. Protection is provided via capabilities, and there is clear talk of separation of policy and mechanism. Capabilities can only be modified by the kernel. At the time of this paper, they did not yet have a file system, nor apparently users. The machine itself, C.mmp, was a set of PDP-11s modified to share memory through a crossbar switch. They also had a form of virtual memory, though I am not clear about whether VM was a protection mechanism and whether each process had its own address space. I don't think so; I think VM was used only to map the shared memory into the processor's address space.

Yes, C.mmp and Hydra were from CMU, where Mary Shaw worked then and still works today; CMU has been important in computing for this entire half century.

And then, of course, there is a paper about another PDP-11 operating system, this one named UNIX. I'm shocked to see that, by ACM counting, it has still been cited less than a thousand times. C'mon, people, what are we doing? Maybe the counter has rolled over?

Interestingly, the UNIX paper is part of a special issue on the fourth SOSP, but the Hydra paper is not. Nor, for that matter, is Hoare's paper on monitors.

And this is without even talking about Dijkstra and Knuth. Heavens, what a year! If all of the years from this point forward are going to be this rich, I'm going to have to change the way I go about this -- there is too much to ingest, and too much for a single blog posting!

Sunday, January 28, 2024

Code Monkey and the Magnificent Seven

This is something I wrote back in 2007, slightly adapted; the original is available on the IP list archives.

I really was going to keep my mouth shut on this topic, but I just can't help myself; the topic is stuck in my brain and I have to get it out through my fingers.

This discussion has ranged from how CS is taught to the grand structure of society, touching on whether or not a college education is worthwhile, the place of the programmer in society and the company, choice of career, and the issues of globalization. Let me offer first a link that I think is an actual contribution to the discussion; the rest I'll let you judge for yourself :-). First the CS education, then the larger issue, including a few embryonic thoughts about apprenticeship.

On how to teach CS, let me point out that this issue has received attention at the highest levels. The ACM issues guidelines on curricula for:
  • computer science
  • computer engineering
  • information systems
  • software engineering
  • associate degree programs
  • K-12
and most have been revised since 2000. The CS undergrad curriculum recommends, for example, a core of only 18 lecture hours on operating systems, out of a total of 280 or so core hours. In a U.S. research university, that would roughly represent the sophomore-junior content of a CS degree, and would probably be roughly doubled in the senior year, to a total of perhaps 560 hours of lecture and maybe three times that in homework -- a couple of thousand hours on CS alone, ignoring math, other sciences and engineering, humanities and social sciences. In other types of advanced learning institutions, the amount could be substantially less or quite a bit more.

Craig Partridge mentioned a workload for students that amounted to about 600 lines of code a week for the term. I would hope that master's students in CS would be able to work at that rate, but that's clearly too much to expect of first-year programming students. The ACM curriculum above includes a several-page discussion of the importance of teaching programming v. principles, which I won't go into here, but I highly recommend reading it.
(If you're keeping score on the anecdotal topics mentioned so far in this conversation, version control probably makes the core curriculum, while automata and shared libraries are in the supplementary curriculum. A couple of lectures on privacy, intellectual property and other aspects of computers in society seem like a positive thing.)
I'm not involved in the curriculum effort, but I suspect the curricula are under more or less constant revision. In recent years, some universities have been adding programs in game design, bioinformatics, and business/technology dual majors. One can certainly argue that the overall CS curriculum is in flux and varies across the country (and around the world), but beyond that it's difficult to generalize.
The above I'm fairly confident is a positive contribution to the discussion. From here down, I'll let you form your own opinion. Much of what I would write Karl Auerbach has already written, and far more succinctly and eloquently than I seem to be capable of. But let me offer a twist or two, and go a step farther.
Zach and other writers made several major points:

  1. The undergrad CS curriculum at many colleges/universities is inadequate.
  2. The undergrad CS curriculum at many colleges/universities is poorly taught.
  3. Students arrive with differing backgrounds and ability levels, and consequently come away with different learning experiences and knowledge.
There are several corollary points:
  • A. It's possible to pick up "on the street" what you need to know to do most jobs in a Silicon Valley startup.
  • B. Promotion at major research universities forces a focus on research rather than teaching.
  • C. CS as a major is not very attractive right now, for a variety of reasons, culminating in "programmers are treated like dirt in companies, and with all the jobs being exported to India, it's a bad career choice anyway".

I think all of these points except #3 are debatable. Certainly the discussion so far has not made a distinction between #1 & #2. I'm not really going to discuss most of these points directly, but they form the undercurrent in what follows.

For millennia, kids were apprenticed to a profession at an early age. We still have this, after a fashion. LeBron James, Michelle Wie, the Magnificent Seven (remember them?), and all pro go players and musicians took up their vocations essentially full time well before what we consider "working age". Many of us started hacking at a similar age, but maybe we need formal extracurricular mentoring and a similar support system? A way for kids to "go pro" earlier and still get the coaching they need? Virtuosos almost all spend 20-40 hours a week at it from the age of fourteen or earlier; why should programming be any different?

I've heard people say, "Oh, who needs college?" But I'm a fan of the modern apprenticeship -- the Western liberal arts college degree. It's your chance to read Wilde, Plath, Joyce, Thoreau, the Federalist Papers; learn a foreign language (okay, I flunked German), play on the basketball team, take up scuba diving; study how the encroachment of the modern world is impacting the Mbuti pygmies and the Navajo; maybe do a year abroad (I wish I had). Personally -- call me a science chauvinist -- I think anyone who claims to have a well-rounded education should be able to talk intelligently about relativity (special, at least) and quantum theory, arguably the two most important (and misunderstood) discoveries of the twentieth century (fans of the nature of the chemical bond, the structure of DNA, the Big Bang and plate tectonics all have a case, though). (On a tangent, let me recommend Alan Lightman's The Discoveries and Richard Muller's course titled "Physics for Future Presidents".)

For most people, college will be the broadest point of your life. For many -- including a particular skinny sixteen-year-old from southern West Virginia -- it's your first extended contact with people from other parts of the country and the world, and with the breadth and wealth of the world of ideas. Take time to hang out with the music majors (okay, Caltech doesn't have any of those, but it does have English majors). Consider a career in astronomy or geology instead of hacking. Trust me, once you're in a company and writing code, the world gets a whole lot narrower, even if you're working shoulder to shoulder with people from China, India and Finland.

If you're of pragmatic bent and recoil in horror at the thought of whiling away your days trying to decipher Joyce and Eliot rather than starting that get-rich-quick Internet business, well, I have trouble imagining that anyone can be effective at building large-scale computer systems without some background in probability. Probability requires at least basic calculus, and if we can get you as far as queueing theory, it will change the way you think about systems problems. If you are interested in image or audio processing, you need linear algebra and Fourier transforms, and the basics of amplifiers and RC circuits won't hurt; if you're in communications, antenna design and wave guides help; if you write firmware, you gotta understand what the funny lines on a signal analyzer mean. And I have yet to meet the engineer (myself included) whose writing and presentation skills need no polishing. Note that none of that is included in the 560 lecture hours mentioned above, but I wouldn't trade any of it for another hour hacking code.
Is it possible to learn all of this without going to college? Of course! Two of the brightest people I know are extremely well-rounded, and neither finished college. Did it hurt their careers? Probably in a minor way. Are they successful anyway? Very. (And no, neither is named Gates or Jobs). If you're smart, disciplined, get a good break and a good mentor, you can do it. But for most people, the direction and semi-structure (compared to high school) of a university will bring them needed breadth, depth, and maturity -- and a higher probability of success in life. (No guarantees, though!)

It's also true that the university environment as currently constructed is not perfect for everyone. But for many people, bailing out with the excuse, "I wasn't learning anything anyway," is a true statement that says as much about them as it does about the system.

And lest you think I'm in favor of stuffing people's heads with facts but not teaching them to solve problems, let me reassure you: I tell my students that the only thing they *must* learn in college is how to think; everything else is optional. While I do believe there is a core curriculum I wish they would learn, four years is a short time, and a focus on any given technology is extraordinarily short-sighted when planning for a forty- to sixty-year career.

By now I've no doubt convinced you that I'm an unrepentant ivory-tower academic. However, I've spent about half my career to date in industry. Was I well-prepared when I came out of school? Let's say I was still a work in progress, and I owe more to a mentor and friend named Wook than to any professor. Certainly I have regrets, the largest of which is not taking Jim Blinn's graphics class, but most of them are due to my own shortcomings. But because I was taught how to learn and how to work hard, I have followed up with enormous amounts of learning, and at every step of the learning process I felt that my time at Caltech helped.

From here, I was going to springboard into the place of the programmer in society and globalization, but this is way too long already. Let me thank you for reading this far, and finish with four thoughts:
  • The Western liberal arts CS degree, properly taught and learned, has long-lasting value. The key question is, is it worth a thousand bucks a week for four years? 
  • Engineers are paid to solve problems, not to write code. Demonstrate an ability to learn about the problem domain in which you are working, and your prospects improve; breadth and depth help. 
  • What is it that makes Code Monkey such a pathetic figure? The fact that he's not in control of his own destiny. If you want that control (and, IMO, if you want a date), being articulate and well-rounded is the path to leadership.
  • Finally, keying on that last word, the job of a university is to produce productive members of society. You should graduate knowing not just a handful of technical facts, but how you can find the best role for yourself, where "best" makes you happy, earns you a buck or two, and allows you to contribute to the greater good.
Do we accomplish all of this in four years? Nah. Do we start students on the right path? Some of them. Should we figure out how to do our best for more of them? You betcha. Are professors perfect, giants striding the Earth before whom all should bow? No way. They're taught to swagger, but if they haven't been humbled deep down by their own learning experience, they haven't been paying attention.

One thing's for sure, though: the development of the next generation of leaders, both in technology and society, is far too important to be left to what disaffected teenagers pick up surfing the blogs of disaffected, semi-literate, technical and political ideologues.

(Heavily Used) Distributed Systems that Influenced Me

 I had my first job, as a VMS sysadmin, and did my master's degree at USC, during the heyday of distributed operating systems research and implementation. Here are a few of the ones that influenced me:

  • VAXclusters: My pal Wook, a former DEC employee himself, was rather disparaging of VAXclusters and the VMS operating system in general, but the distributed file system, distributed block-level access to network-attached disk drives mediated by a distributed lock manager, the ability to share a single set of accounts across the cluster, and especially the distributed processing queue for batch jobs became my expectations for what a system should be. Why did a good distributed batch system never become a standard feature of a UNIX house? (You should definitely read Nancy Kronenberg's paper on VAXclusters.)
  • NFS: Of course, one of the two indispensable tools of a CS research organization in the late 1980s was a Network File System (NFS) server; you could access it from almost any flavor of UNIX and many non-UNIX OSes. Naturally, this built on Sun RPC (remote procedure call), which among other things included mechanisms for marshaling data types for exchange across the network between different kinds of CPUs and OSes (there's a small sketch of the flavor of that encoding after this list), though RPC itself as a concept was older.
  • X Windows (from Project Athena): The other indispensable tool. Of course we first used workstation GUIs on Suns, MicroVAXen, Macs and Amigas, and then later Microsoft Windows, but the most important innovation of the mid-1980s was arguably X Windows, which allowed a program with heavy computational, memory or I/O requirements to run on a remote server while presenting its GUI -- mouse, keyboard and graphics -- on the desktop. Yeah, it was hard to configure and get working, and architecturally the split of functionality between the desktop and the server might not have been right, but this set the standard, pretty literally.
    I had forgotten how big Project Athena itself actually was; IBM and DEC contributed several thousand machines, from PC-class to large minicomputers and servers, and about ten staff to MIT.
    The other big thing to come out of Athena, of course, was Kerberos. Both X Windows and Kerberos are still in use today.
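
For a flavor of what that marshaling involved, here is a rough sketch in the spirit of XDR, the encoding underneath Sun RPC (an illustrative approximation of mine, not the real xdr library): integers go on the wire big-endian, and variable-length data is length-prefixed and padded to four-byte boundaries, so a VAX and a 68000-based machine decode the same bytes the same way.

```python
import struct

def xdr_int(value):
    """Encode a signed 32-bit integer, big-endian ("network order")."""
    return struct.pack(">i", value)

def xdr_string(s):
    """Encode a string as a 4-byte length followed by the bytes,
    padded out to a multiple of 4, roughly as XDR does."""
    data = s.encode("ascii")
    pad = (4 - len(data) % 4) % 4
    return struct.pack(">I", len(data)) + data + b"\x00" * pad

# A hypothetical RPC argument record: (uid, filename)
wire = xdr_int(1001) + xdr_string("thesis.txt")
print(wire.hex())
```
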
Those were the ones that permeated the local atmosphere at USC/ISI, where I worked 1986-1992, then again 1995-1997. There were probably other systems that I was using at the time that I have forgotten were radical or influential; to us, this was just the ocean we swam in, unaware of the water except when something broke. (My first job, as a sysadmin primarily responsible for the VAXen but with secondary responsibility for pretty much everything else, was to fix things when they broke; that was often enough to keep a pretty good sized group of us busy full time.)

I'll do a separate post on some of the distributed OS research systems and papers that influenced me. There are a bunch of those, too...

Saturday, January 06, 2024

Spelunking CACM, Vol. 16 (1973)

James R. Bell of DEC (Digital Equipment Corporation) (any relationship to Gordon Bell or to the James Bell I went to Caltech with?) wrote a highly-cited article on threaded code that, to my eyes, looks like byte-code compiled code for an interpreted language, as in the figure above. (The figure doesn't quite convey the distinction between the left and right figures, but it's there.) On the CPUs of the 1990s this many jumps would totally trash your pipeline and possibly your cache; today's CPUs invest a lot of effort in branch prediction and the like, so that some of those jumps could cost as little as zero cycles, although the indirect jump through a register makes that harder.
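
For a rough feel of the idea -- this is a loose Python analogy of my own, not Bell's PDP-11 code -- in threaded code the compiled "program" is just a table of routine addresses (plus inline operands), and the inner loop does nothing but fetch the next address and jump through it:

```python
# A loose analogy to direct threaded code: the compiled "program" is a list
# of routine addresses (here, Python function references) plus inline
# operands, and NEXT is an indirect fetch-and-call through that list.

stack = []

def push(thread, ip):
    stack.append(thread[ip]); return ip + 1     # operand is inline in the thread

def add(thread, ip):
    b, a = stack.pop(), stack.pop(); stack.append(a + b); return ip

def emit(thread, ip):
    print(stack.pop()); return ip

def halt(thread, ip):
    return None

def run(thread):
    ip = 0
    while ip is not None:                       # the NEXT loop
        op = thread[ip]
        ip = op(thread, ip + 1)

run([push, 2, push, 3, add, emit, halt])        # prints 5
```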

Before the late-80s trend toward language-agnostic RISC systems that put the onus of efficiency on compilers and run-time systems (and ignited the most ferocious era of Moore's Law-enabled competition), it was common in systems research to consider hardware architectures that were tuned toward execution of a specific language. A team from IBM microprogrammed an IBM 360 model 25 to accept a baroque machine language specifically for APL. I rather miss those days; I was a fan of the Lisp machines of the 1980s, though they were among the first and most thorough victims of the Killer Micros.

 It's interesting to note that people have been concerned that we might have too many PhDs for half a century now. Such opinions seem to alternate (during boom times) with the fear that we don't have enough, that we are eating our seed corn (to leap ahead to 1981 in CACM for a minute).

We have done a little work on clique finding on quantum computers, so I probably should be more familiar with the work of Bron and Kerbosch. I am rather deliberately not looking just for articles that are highly cited; that would be boring and wouldn't really help anything. Instead I am looking for trends as well as things that just catch my fancy, whether clever or prescient or sometimes extra-far off base. But this one might be the single most-cited paper in CACM in calendar 1973, and so is interesting for that reason as well as for the overlap with my own work.

Overall, my impression of 1973 is that it was a year of "settling in", of solid but mostly not groundbreaking work, at least as represented in CACM. Soon enough, though, we will be into the Xerox PARC, home PC and UNIX workstation eras...looking forward to seeing how CACM changes. Next stop: the current half-century!

Tuesday, January 02, 2024

Spelunking CACM: Some Thoughts on Operating Systems

Reading through these early papers, it's astounding how much the Titans understood, and what they could see. This is perhaps most evident in the field of operating systems.
I am known for saying, "Hardware has been ahead of software for my entire life," but it's important to make the distinction between vision and implementation in both. Also, for OSes, hardware support is often key, so the two develop hand in hand, and given the difficulty of effective simulation, it may be natural for software to lag hardware. But here I am not talking about a few months; I think it takes us YEARS to fully exploit the bounty of computational capabilities that the hardware gods provide. See, for example, Larus's Spending Moore's Dividend.
And obviously, OSes continue to be developed, hopefully on a largely monotonic upward trend.
But here I want to talk about the profound ideas in operating systems. And, despite my general optimism and respect for the younger generations, this is going to sound pessimistic...
I don't think we have had a Truly Important idea since the late 1960s or early 1970s.
By that time, the basic structure was in place: the program and the process; multiprogramming and multitasking; files and hierarchical file systems; virtualization of memory and entire systems; inter-process communication; shared-memory and distributed-memory parallel processing; networks; even the rudiments of distributed file systems and distributed systems existed in Farber's DCS. It was all there.
So...
What was the last "this changes everything" idea in operating systems?