Ada Lovelace Day Spotlight: Radia Perlman

The next time you visit Facebook, do a search on Google, waste time on YouTube, or do anything online, stop and thank Radia Perlman.  It’s her algorithm that makes it possible for data to traverse the Internet, thus earning her the nickname “Mother of the Internet”.

Dr. Perlman is a Fellow at Intel, and has spent her career working on computer networking and computer security.  She earned bachelor’s and master’s degrees in mathematics from MIT in the 1970s and a PhD in computer science from MIT in 1988.  She’s worked for many big-name companies throughout her career, from BBN to DEC to Novell to Sun.  She has nearly 100 patents in networking and security technology and has authored two widely-used and innovative textbooks.

(On a personal note, I saw Dr. Perlman speak at Grace Hopper in San Diego in 2006—she presented some of her more recent security work.  The talk was very interesting and thought-provoking, and afterwards I got to speak to her for a few minutes one on one—one of the highlights of that conference for me!)

Dr. Perlman’s most famous contribution—the one that earned her her nickname—solved a particularly thorny problem in the early days of computer networks:  how do we ensure that data gets to its intended destination without getting hopelessly lost, or going in circles?  How can we determine the path that data should take when going from one place to another?  (If you consider for a second just how many millions of computers, smart phones, appliances, and other devices exchange information over computer networks—well, you get a sense for just how important this problem is not only to solve, but to get right!)

Unfortunately, her first attempts met with… well, I’ll let this snippet from a 2006 article in Investor’s Business Daily tell the story:

Radia Perlman had a solution for an information routing problem. Unfortunately, no one was listening.

It was the mid-1970s, and Perlman was a software designer for computer network communication systems — and one of the few women in the field.

At a vendor meeting where engineers were asked to help with the routing problem, Perlman spent 30 minutes illustrating her solution with an overhead projector. But the event organizers ignored her findings. Why? Because she was a woman. What did women know about computers?

“At the end of the meeting, the organizers still called for a solution after I had just given them one, which really irked me,” she said.

She persisted, though, because she believed in herself and she also believed in her solution.  Eventually she made her way to DEC, where her solution met with more receptive ears… and the rest is history.

In a nutshell, her solution is what we call a spanning tree.  Let’s say that you have a bunch of computers that are connected together, and that you want to send data from Computer A to Computer B.  There are most likely multiple routes the data could take, because there’s more than one way to get from A to B.  That redundancy is great for reliability, but it’s also a trap:  data can end up looping around the same circle of computers forever.  A spanning tree ensures two things:  (1) that there is exactly one way for data to get from Computer A to every other computer in the network in a “straight line” (i.e., without doubling back to a computer it’s already visited on its journey), and (2) that the links kept in the tree are low-cost ones, so the paths that survive are efficient.  So not only does a spanning tree ensure that my data can get from A to B, but that it will do so without ever going in circles.
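
(To make the idea concrete:  Perlman’s actual algorithm is a distributed protocol that the network devices run among themselves, but the spanning-tree idea itself can be sketched in a few lines of centralized Python.  The function name, the breadth-first approach, and the toy topology below are my own illustrative choices, not part of her protocol.)

```python
# A minimal, centralized sketch of the spanning-tree idea: NOT Perlman's
# actual distributed protocol, which each network bridge runs on its own.
# The names and the toy topology here are illustrative assumptions.
from collections import deque

def spanning_tree(links, root):
    """Given bidirectional links, return a subset of them forming a
    loop-free tree that reaches every computer connected to root."""
    neighbors = {}
    for a, b in links:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)

    tree, visited, queue = set(), {root}, deque([root])
    while queue:
        node = queue.popleft()
        for nxt in neighbors.get(node, ()):
            if nxt not in visited:        # first time we reach nxt:
                visited.add(nxt)          # keep this one link to it,
                tree.add((node, nxt))     # so no loop can ever form
                queue.append(nxt)
    return tree

# A toy network with a redundant loop (A-B-C-A) plus a spur to D.
links = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")}
print(spanning_tree(links, root="A"))
# One of the three loop links is left out, so data can never circle.
```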

The really cool thing about this algorithm is that the computers in the network figure this all out themselves, without human intervention.  This means that if something happens to the network—say, a computer crashes—the network can figure out a new spanning tree all on its own, as soon as it detects that there’s a problem.  This sort of self-healing behavior is what makes our modern-day, large, sprawling Internet possible—without it, things would surely grind to a halt all the time, because computers are always failing and crashing and otherwise misbehaving.
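
(Continuing the sketch above:  if a computer dies, we can simply recompute the tree over whatever links survive.  In the real protocol the devices detect the failure and reconverge on their own; this centralized version just illustrates the outcome.)

```python
# Reusing spanning_tree() and links from the sketch above: suppose
# computer "C" crashes.  The survivors rebuild a tree over the links
# that remain; D is cut off entirely, since C was its only connection.
alive = {(a, b) for a, b in links if "C" not in (a, b)}
print(spanning_tree(alive, root="A"))   # leaves just {("A", "B")}
```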

Dr. Perlman has made many other fascinating contributions throughout her career—for instance, I just learned that as an undergraduate, she worked on ways to teach computing to very young kids using LOGO and turtle graphics.  Her more recent work is more heavily security-focused, studying new ways to encrypt and decrypt data, make files “disappear reliably”, and do distributed authorization. In 2005, she won a Woman of Vision Award for Innovation from the Anita Borg Institute for Women and Technology.

Throughout her career, her work has been ahead of its time, and I look forward to continuing to follow her career and her contributions.

(Post for Ada Lovelace Day—thanks, Suw Charman-Anderson, for organizing it again this year!)


Apparently I am a trendsetter

So, last week was the annual SIGCSE conference (SIGCSE = Special Interest Group on Computer Science Education), and I don’t know/remember the whole story, but it had something to do with something the keynote speaker said that became a running riff at the conference…

Anyway, that’s not important.  What is important is that I, apparently, am a trendsetter!

Check out the home page for the conference

… you’ll see, right there front and center, a link to an order page for this:

[Image:  a “this is what a computer scientist looks like” t-shirt]

Sound familiar, anyone?

(Personally, I think mine looks better!)

[Photo:  my own “No, THIS is what a computer scientist looks like!” t-shirt]

The original! And from the same company, too.

Man, if I had only known, I could have started a lucrative side business or something!

UI design: the Rodney Dangerfield of CS?

I met with my Comps* group yesterday to discuss extending their project into the spring as an independent study, in the hope that the extra 10 weeks would improve the software to the point where it could be released to the public.  I asked them to brainstorm and prioritize what they would do with the extra 10 weeks.  They suggested adding features, redesigning some of the logic, changing the language, etc.—the usual suspects.  I waited for a lull in the conversation before asking the following question:

“What about user testing?  Focus groups?  Making sure that the program you’ve designed actually works for your chosen population?”

Silence.  Uncomfortable silence.  Then:  “Well, we have a pretty good idea of what our target demographic is, and we [insert lots of assumptions about what the population looks like, acts like, can do, etc], so we don’t really need to do user testing.”

I wish I could say that this is an isolated incident, but it’s not.  One of my Comps groups last year wrote an educational game for 6- to 8-year-olds.  Any guesses as to how many 6- to 8-year-olds were surveyed over the course of the project?  (Hint:  it rhymes with “hero”.)  And I do hear things from students implying that user interfaces, and user interface design, are not “real” computer science—that the “hard” and “important” work is the backend stuff, and that as long as you can make the interface functional enough (for a computer scientist), that’s good enough.

This mentality makes me very, very angry.

I would argue that often the hardest part of any project is the human part.  It is difficult to figure out what your population really wants and needs, and then to translate those wants and needs into well-functioning, intuitive components.  And this is not just a “soft” people skill, either—it takes real technical and design chops to be able to do this well.  Hell, it’s sometimes hard to work in project teams, too, with personalities and philosophies and work ethics that differ from your own.

But.  Good software and good technology are not developed in a box, in isolation.  Good software and good technology are designed to be used.  By real and actual people.  Few of whom are computer scientists, or think/act/react like computer scientists.  So basing your design on how you, the computer scientist, think/act/react is faulty.  For instance, early voice recognition systems literally did not register women’s voices—because the designers were men, and the designers built the system based on their own experience and characteristics (and thus for lower-pitched voices)**.  More recently, HP’s face recognition software failed to recognize faces of color.  If our design teams are largely white and male and geeky, then we get software and technology that is unusable for part of the population.

User interface design gets no respect, and/or is “ghettoized”, and I’m not sure how to change this.  But it troubles me.  I want to continue to have my Comps students work on interesting and technically challenging problems that are also service- or people-oriented, problems that take CS out of the lab and into the realm of real, societal problems.  And yet, I don’t want to have to spend every year arguing with my students about why solving societal problems means getting out of their tech bubble and considering the real world around them, why such time is not wasted time, and why they neglect it at their own peril.

* Comps = basically, our senior capstone experience, where teams of 4-6 students work for 2 terms on interesting and difficult CS problems.

** Interestingly, I tried to Google up a link for this, but what I found was a link to a very recent study at the University of Edinburgh indicating that telephone voice recognition software actually has a harder time with male voices than female voices (mainly due to use of “filler” words like “umms”).  So we’re still not getting voice recognition software right!

Writing as research

I’m running a bunch of experiments this week—sort of a bigger run of a set of proof-of-concept experiments I did late last summer.  Basically, I’m trying to determine how much changing the transport mechanism of video data over a network affects the measured characteristics of that data.  This may have a big effect on the measurement system I’m trying to design:  we need to know what we’re measuring before we can build a system to measure it!

But that’s not really the point of the post.

What strikes me as I run these experiments is how much I rely on writing as both a research tool and an idea generator.

In the past, my research followed more of a linear approach:  define the problem, come up with a hypothesis, design the experiment to test the hypothesis (or, let’s face it, find the hypothesis in some cases), run the experiments, (rerun the experiments because something got screwed up), analyze the resulting data, refine the problem/hypothesis, redesign the experiment, etc.  Only after this cycle was complete did writing even appear on the scene.  Writing up the results, experiments, hypothesis, etc. was the last stage of the process, and signaled that the research process was winding down for the project in question.

Now, my research is definitely more circular, and writing appears multiple times in the research cycle.  Sometimes I’ll write right away, as soon as I define the problem, as a way of thinking through my ideas.  (A lit review is a good way to do this.)  More often, though, writing occurs as I run the experiments and collect and analyze the data.

It turns out that writing is very useful at this stage in the game.  Practically speaking, writing up results as you generate them really comes in handy for future papers:  you have a stable of already-packaged results that can be inserted into various papers, you can assemble conference papers and journal papers with shorter lead times, etc.  But I find it also helps refine my thinking.  By writing up results as I analyze them, I focus more clearly on my intended audience:  What would my peers think about my results?  What questions might they have about what I’m discussing, or about these results?  What are the most likely criticisms that a peer reviewer would have?  This helps me clarify my own arguments and sometimes even guides the analysis I end up doing.  It helps me uncover holes in my own logic and points I’m glossing over.  It also helps me figure out, more quickly, what new questions this data raises, and helps me enter the experiment/analysis refinement stage sooner (and also serves as a better guide to the refinement process).

It also helps with the “research amnesia” problem.  Often, I’ll get a great idea for a new project, or a new aspect of a question to ask.  It turns out, though, that I will often have the same great idea more than once.  So the first thing I do when inspiration strikes is to see if I’ve tried to answer that question before.  90% of the time, I have.  This saves me time, because I don’t end up repeatedly going down the same dead-end path.  But it works the other way too:  sometimes revisiting these dead ends means I notice something that escaped my attention the first time, and which changes the whole analysis, so that a dead end becomes a new fruitful line of research.  (This is how my 2 most recent conference papers came to be!)

So this week, I’m multitasking:  running experiments, analyzing the data as it becomes available, and writing as soon as an analysis is complete.  I definitely won’t completely answer the questions I set out to answer—I’ve already found some holes in my logic!—but I know that I’ll be much, much closer to the eventual answer as a result.

I just wish I had learned this valuable research skill sooner in my career!