UI design: the Rodney Dangerfield of CS?

I met with my Comps* group yesterday to discuss extending their project into the spring as an independent study, with the hope that the extra 10 weeks would improve the software to the point where it could be released to the public.  I asked them to brainstorm and prioritize what they would do with the extra 10 weeks.  They suggested adding features, redesigning some of the logic, changing the language, etc.: the usual suspects.  I waited for a lull in the conversation before asking the following question:

“What about user testing?  Focus groups?  Making sure that the program you’ve designed actually works for your chosen population?”

Silence.  Uncomfortable silence.  Then:  “Well, we have a pretty good idea of what our target demographic is, and we [insert lots of assumptions about what the population looks like, acts like, can do, etc], so we don’t really need to do user testing.”

I wish I could say that this is an isolated incident, but it’s not.  One of my Comps groups last year wrote an educational game for 6-8 year olds.  Any guesses as to how many 6-8 year olds were surveyed over the course of the project?  (Hint:  it rhymes with “hero”.)  And I do hear things from students implying that user interfaces, and user interface design, are not “real” computer science: the “hard” and “important” work is the backend stuff, and as long as you can make the interface functional enough (for a computer scientist), that’s good enough.

This mentality makes me very, very angry.

I would argue that the hardest part of any project is often the human part.  It is difficult to figure out what your population really wants and needs, and then to translate those wants and needs into well-functioning, intuitive components.  And this is not just a “soft” people skill, either; it takes real technical and design chops to do this well.  Hell, it’s sometimes hard to work in project teams, too, with personalities and philosophies and work ethics that differ from your own.

But.  Good software and good technology is not developed in a box, in isolation.  Good software and good technology is designed to be used.  By real and actual people.  Few of whom are computer scientists, or think/act/react like computer scientists.  So basing your design on how you, the computer scientist, think/act/react is faulty.  For instance, early voice recognition systems literally did not register women’s voices, because the designers were men, and the designers built the system based on their own experience and characteristics (and thus for lower-pitched voices)**.  More recently, HP’s face recognition software failed to recognize faces of color.  If our design teams are largely white and male and geeky, then we get software and technology that is unusable for part of the population.

User interface design gets no respect, and/or is “ghettoized”, and I’m not sure how to change this.  But it troubles me.  I want to continue to have my Comps students work on interesting and technically challenging problems that are also service- or people-oriented, problems that take CS out of the lab and into the realm of real, societal concerns.  And yet, I don’t want to have to spend every year arguing with my students about why solving societal problems means getting out of their tech bubble and considering the real world around them, why such time is not wasted time, and why they fail to do so at their own peril.

* Comps = basically, our senior capstone experience, where teams of 4-6 students work for 2 terms on interesting and difficult CS problems.

** Interestingly, I tried to Google up a link for this, but what I found instead was a link to a very recent study at the University of Edinburgh indicating that telephone voice recognition software actually has a harder time with male voices than female voices (mainly due to men’s greater use of “filler” words like “umm”).  So we’re still not getting voice recognition software right!