Trip report: GHC 2016

I’m writing this post on the plane ride home from Grace Hopper on Friday afternoon. Unlike previous years, I escaped the conference early: a compromise with my kids since I was at a conference last month and will be at a workshop next month. Still, it feels like I managed to squeeze about 10 days into 3, so while on one level I’m sad to be missing the last keynote and tonight’s party (and dinner with Carleton folks present and past!), on another level I’m just done with conferencing.

It’s been several years since I’ve done a proper conference trip report — I used to do them semi-regularly (see here, for example), but in the past few years life’s gotten in the way. But I wanted to honor my time at the conference this year, so I’m resurrecting the trip report tradition.

I should come clean first, though: After last year’s conference, I swore up and down that I wasn’t going to attend this year. I’m not a fan of Houston (sorry, Houston!), and logistically last year was kind of a pain. Plus I knew I wanted to attend Tapia and wasn’t thrilled about going to 2 conferences so close together. But then I was tapped to be Posters Co-Chair, and it sounded like too good of an opportunity to pass up. And then since I was going anyway, I agreed to speak in the CRA-W early faculty career track and volunteer in the Student Opportunity Lab. On top of that, I had my LACAFI booth organizing/setup/wrangling duties.

Apparently my conference motto is: If you’re going to attend, be busy!

The days and weeks leading up to the conference were busy: working with my co-chair to select and assign ACM Student Research Competition (SRC) poster and finals judges; working with my co-presenter on our slides and role-play scenarios for our CRA-W session; stressing over whether we had enough people to cover the booth during the Expo hours. (This year a lot of our usual booth-staffing suspects took GHC off, because they were at Tapia and/or are going to SIGCSE. We missed you, intrepid volunteers!) Then there were receptions and breakfasts and meetups to keep track of. I actually had to put everything on my calendar and set multiple alarms so that I knew exactly where I had to be and when. It was looking very likely that I wouldn’t make it to any sessions I wasn’t leading or speaking at, so I didn’t even bother to look at the program.

To add more excitement to the mix, I realized a few days before the conference that my constant desire to sleep and my low-level, ever-present funk were not due to recovering from the marathon I had just run, but were in fact my depression flaring up. Good times. I was worried, because I knew I’d have to be “on” a lot of the time I was at GHC, and was starting to dread going. I decided to give myself permission to skip out on anything that was not absolutely necessary if need be, to be a hermit when I needed to, and to escape the conference when possible, to recharge and try to keep the depression at bay. I’m happy to say that my strategy worked, and I was able to cope and function at a decent level. The knowledge that I was leaving the conference early also helped. This meant that I didn’t seek out people I knew to the extent that I normally do, but it was worth it for self-preservation.

I arrived in Houston on Tuesday afternoon, along with what felt like half of the 15,000 attendees of the conference. I was hoping to have some time to relax before attending the HP Inc reception as an NCWIT member that evening, but a longish wait for my luggage and a taxi meant I had only enough time to quickly unpack and then head to the reception. The reception was at a really cool place, and I spent a lot of time chatting with someone I hadn’t caught up with in a while. It was weird to be at an HP reception, given my former life as an HP Labs post-doc, but it was neat to hear about what HP’s up to now and to share stories about my time there. All of the HP women there were so friendly and welcoming, and it was a lot more fun than I expected.

I skipped the keynote on Wednesday morning, sadly, to set up our LACAFI booth. I had to get more creative than I intended with our limited space, but I made do. Once the Expo opened, our swag disappeared quickly, so we’ll definitely have to bring more next year.

I knew that the afternoon/evening would be crazy full, so I escaped the conference for a while to recharge and grab some cheap Tex-Mex food. Once the Expo opened, I came back to check on our booth, then wandered around the Expo. I kept running into alums, which was awesome. I promised some of them I’d find them later, a promise I did not keep. (Sorry, alums! Nice to see you briefly, anyway!) I also randomly ran into my posters co-chair, whom I’d never met in person, so we chatted for a bit. She is awesome, and I hope I get to work with her again someday.

Wednesday afternoon was the poster session and the first part of the SRC. Hilarity ensued (only hilarious now in hindsight) when the first poster judges came back to tell us that they could not find the poster numbers we assigned them — turns out we had posters listed by submission IDs, but they were actually numbered by position in the hall, and there was no easy mapping between them. Whoops! Luckily our judges did not revolt, and were super patient as we figured things out. (We joked that we gave them an encryption problem to solve before they could judge the posters.) Judging took way longer than we expected, but we finally figured out the finalists from the judges’ scores and got that info to our awesome ABI contact. At this point, my co-presenter for the CRA-W talk showed up so we could go over our slides and plan for our talk the next morning, after which we headed to a reception for CRA-W scholars. The reception was a great end to the day, but I was totally wiped afterwards, and collapsed into bed as soon as I got back to my room.

Thursday morning began with our CRA-W talk on balancing teaching, research, and service in academia. The talk was way better attended than I expected given the early hour and intended audience. And the role-plays we planned (my co-presenter’s idea) were a hit! The audience was game to participate, asked great questions, and offered great tips and advice to each other.

Afterwards, I met up with my colleague David, who was wrangling the students this year, and chit-chatted about sabbatical and department stuff. While I’m really enjoying sabbatical, I do miss the day-to-day encounters and conversations with my colleagues, so it was nice to reconnect. I then escaped for a bit to recharge, then headed back to the Expo to snap up some swag for my kiddos and chat up some people at the booths.

Thursday afternoon was as tightly packed as the previous day. We had the undergraduate and graduate SRC finals back-to-back, one of my duties as posters co-chair. The talks were fabulous and our judges were simply amazing and thoughtful. (One of my regrets for missing the Friday keynote is that I was not able to see these six incredible finalists receive their awards.) My co-chair and I then headed to one of my favorite annual events, the NCWIT reception. I met new people and caught up with some colleagues from liberal arts schools, took a picture with the rest of the CRA-W speakers, and got to hear a surprise speech from Megan Smith, the US CTO, who stopped by the reception. I always love what Megan has to say, so that was a fabulous treat. By this point, I was exhausted and my brain was mush, so I again collapsed (after stopping for gelato on the walk back to my hotel — priorities!) as soon as I got back to my room.

Friday started early with the CRA-W scholars breakfast. I sat with my posters co-chair; a colleague I see every year at GHC, SIGCSE, and NCWIT’s Summit; and some very enthusiastic students. If I have to be at something that early, it’s worth it when the conversation is that fabulous. I then went to an actual conference session (on motherhood in academia), then volunteered at the Student Opportunity Lab talking to students about how to get into undergraduate research, in somewhat of a speed-dating format. One last check of the LACAFI booth and the handoff of exhibitor’s credentials and I was on my way to the airport and back towards home.

My relationship with GHC has definitely changed over the years. While I think the conference is now way too big and way too career-fair focused (detrimental changes, in my view), I’m still surprised by the ways in which the conference rejuvenates me. What I get out of the conference now is very different from what I used to get out of it, and changes every year. This year, I definitely felt like my role was to mentor and give back to the community, but in giving to others in this way I was immensely fulfilled. I networked less, but felt more fulfilled by the interactions I chose to have. This year’s conference reaffirmed that GHC is still relevant to my professional life — maybe not every year anymore, but definitely within a rotation of conferences.

Reuniting with an old familiar course after a long layoff

As you could probably tell from the radio silence, things have been crazy around here. December and the first part of January were a blur of grant writing (and frantically finishing up simulations/analysis to generate data for the grant proposal) and job applications, and oh yeah, some holidays and travel. And in the midst of this craziness, class prep for a course I last taught in Spring Term 2012 (almost 3 years ago!): Intro to Computer Science.

Intro CS used to be my bread-and-butter course. I taught at least one, and typically two, sections of intro each year through most of my time here. Intro is probably one of the most challenging courses to teach, partly because students come in with wildly varying backgrounds and partly because there’s so much to learn and grasp early on—the learning curve can be steep, and trying to keep track of all the syntax while also learning to think in a completely different way about problem solving is tricky and can be daunting. But it’s precisely because of the challenge, and because the students learn and grow so much over the course of the term, that it’s one of my favorite courses to teach.

Recently, we’ve handed over much of the teaching of intro to our visiting faculty. Part of this is because we often haven’t hired our visitors by the time we have to craft the next year’s schedule, so it’s easy to assume that whoever we eventually hire can teach intro. Part of this is also to give our new and visiting faculty a break—by teaching multiple sections of a course over the year, they are doing fewer new-to-them preps, which eases their burden. And our visitors tend to do a nice job with the course. The price of this, unfortunately, is that old fogies like me don’t get the pleasure and the privilege of introducing students to the discipline like we used to.

Last year, when I was making the schedule for this year (one of the “perks”(?) of being chair), and weighing everyone’s teaching preferences, I saw that I had an opportunity to teach a section of intro, so I scheduled myself for one of the sections.

The re-entry has been a bit rough. Fortunately, much of what I used to do and much of my old intuition about how to approach various topics have come back as I’ve reviewed my old class notes and my sample code. We’ve switched from Python 2 to Python 3 since I last taught the course, which I’ve taken as an opportunity to rewrite most of my sample code (which also helps with the recall). However, I tend to over- or underestimate what we can get done in the course of a 70-minute class (mostly overestimating at this point), and I’ve forgotten just how much trouble students have with a few key concepts early on in the course. My timing is off, too—I feel like I’m spending too much time explaining things and not leaving enough time for coding and practice in class—but I think I’m starting to get a better handle on that mix of “talk” and “do”.
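To give a flavor of the Python 2 to 3 rewrite, here’s a hypothetical intro-course-style snippet of the sort that needs updating. (The example itself is invented for illustration, not from my actual course materials; the Python 2 originals appear in the comments.)

```python
# A small intro-course-style example, updated from Python 2 to Python 3.
# Python 2 versions of each changed line are shown in comments.

def average(scores):
    """Return the mean of a list of numbers."""
    # Python 2: sum(scores) / len(scores) silently truncated for ints.
    # Python 3: / is always true division, so no float() cast is needed.
    return sum(scores) / len(scores)

# Python 2: print "Average:", average([90, 85, 77])
# Python 3: print is a function, so it needs parentheses.
print("Average:", average([90, 85, 77]))

# Python 2: name = raw_input("Your name? ")
# Python 3: raw_input was renamed to input (and the old input() is gone).
```

Small as these differences look, they show up on nearly every slide and sample file, which is part of why the rewrite doubles as a memory refresher.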

There have been some benefits to the long layoff, though. I have some new ideas that I’ve been trying out—for instance, starting class by having students work on a problem by hand for 10-15 minutes, to get the intuition behind whatever we’re coding up in class that day—that I might not have considered if I were teaching intro more consistently. I’m reading the textbook more carefully (because none of the readings are familiar anymore and I’ve switched textbook editions), so I have a better sense of the level of preparation students have when they come into class after completing the daily targeted readings and practice problems. I’ve done more live-coding in class, because as I’ve been re-working my code examples I’ve noticed places where it would benefit students to see me code and think out loud in real time, rather than just walking them through pre-written code. Basically, I get to see the course with fresh eyes, without all the stress of it being a completely new prep.

So I’m immensely enjoying the intro experience again, and while the layoff turned out to have its benefits, I hope that I don’t go quite such a long time between teaching intro sections again.

The art and science of choosing textbooks

Even after 11+ years of teaching, I find that selecting textbooks for a class is much more of an art than a science. Sure, I can apply “scientific” principles to selecting a textbook—evaluating how certain key topics are covered, weighing the ratio of code to theory, vetting the explanations and examples—but in the end, I never quite know how my choice will go over. In the best case scenario, I identify a book that both my students and I like and find effective. But honestly, if I can at least identify a book that we can both tolerate, I count that as a win. Sometimes, even with my careful selection process, I get things wrong. And sometimes, a book that’s worked wonderfully in the past falls flat with the next group of students.

I found myself in that last situation last spring, when I taught Software Design. In the past, I’ve used a particular textbook for the design patterns portion of the course. I like this particular book a great deal: it’s irreverent and clever, but most importantly, it presents each design pattern through the lens of an extended example, coded up in Java. In the process of developing the example, the students see a lot of code, and are also introduced to the pitfalls and common variations of each pattern. The students also see examples of each pattern in practice, giving them context for the usage of each pattern.
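To give a sense of what that example-driven approach looks like, here is a minimal sketch of the Strategy pattern, a pattern commonly taught through extended examples. (I’m sketching it in Python rather than the book’s Java, and the class names here are illustrative, not taken from the book.)

```python
# Minimal Strategy-pattern sketch: behavior is encapsulated in small,
# interchangeable strategy objects rather than hard-coded in the client.

class Quack:
    def sound(self):
        return "Quack!"

class Squeak:
    def sound(self):
        return "Squeak!"

class MuteQuack:
    def sound(self):
        return "..."

class Duck:
    """The client delegates its quacking behavior to a strategy object."""
    def __init__(self, quack_behavior):
        self.quack_behavior = quack_behavior

    def quack(self):
        return self.quack_behavior.sound()

mallard = Duck(Quack())
rubber = Duck(Squeak())
print(mallard.quack())

# Because behavior lives in a separate object, it can be swapped at runtime:
rubber.quack_behavior = MuteQuack()
print(rubber.quack())
```

An extended example in a textbook would grow this scenario over many pages, adding new behaviors and showing where naive inheritance breaks down, which is exactly the scaffolding students lean on when first meeting patterns.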

Last year, for the first time, my Software Design students complained about this textbook choice. Loudly. They hated this book. It’s too long, they said. It’s too condescending, they said. It lacks gravitas, they said. (OK, they didn’t say exactly that last one, but that was the gist of the criticism.) Basically, they wanted a textbook that was more textbook-y and less over the top.

I’ve refrained from assigning the Gang of Four design patterns book in the past. Not because I don’t like it—on the contrary, it’s a fabulous reference book, and I do use it often in my own work. I don’t assign it because the usage examples are rather sparse and there isn’t a lot of code in the text. I just wasn’t convinced that it would work in my class. But, since the criticism was so strong last time around, I decided to give it a go this time around and see if I had better success.

The verdict? This text hasn’t worked well at all for this group of students. As I suspected, the students are having a lot of trouble figuring out when or why to use each pattern. The sparse examples and lack of code are negatively affecting their comprehension. (And the few examples in the Gang of Four? My students don’t find them convincing.) I’m having to do a lot more PR work in class to get them on board with the concept of design patterns. I’ve spent almost all my class time in this part of the course lecturing (lots and lots of lecturing, which I typically try to avoid) and walking through examples, when normally I’d set up an example, walk through part of it, and leave the rest to my students to develop, so that they get some much-needed practice with the patterns.

Next time I teach this course (which, sadly, won’t be for at least 2 years), I will go back to the Head First book. I do think some of the criticisms from last year’s students are valid, but if anything this term’s experience has underscored just how important good, extensive code examples are in learning a difficult concept or set of concepts. This experience has also helped solidify in my mind why I choose to use this book, and I’ll be able to make a stronger case for it with the next batch of students.

Now, if I could just find a Data Structures textbook that I don’t absolutely abhor….

 

Musings on learning a new (to me) subfield

The other day, a student left my office, beaming. She and I had just finished discussing some project ideas she might pursue, with me serving as a technical mentor (and mentor in general). I was also beaming—the projects and the potential collaborations sound exciting! There’s lots of stuff of interest to pursue!

Then I put my head on my desk and groaned, as the Impostor Syndrome and doubts started to creep in.

For much of my career, I have been a Networking Person. Not as in “someone who schmoozes and hands out business cards” or “one who is always on Facebook” (ok, maybe that latter one is true), but as in someone working in the field of computer/communications networks. I was a Networking Person when I was still an electrical engineer. I was a Networking Person in my Master’s project and PhD dissertation. I was a Networking Person in my post-doc. I am “the” Networking Person at Carleton. I do research in the broad area of computer networks and publish most of the time in networks-related conferences, journals, and workshops. I always have enjoyed, and continue to enjoy, networks as a field (other than the dismal percentage of women)—I find the field fascinating, the possibilities endless. I geek out on RFCs and traceroute; an afternoon playing with Wireshark is my idea of fun.

However, lately I’m finding that I have a new passion in an entirely different subfield. It started off as mainly a teaching interest: a module in a Software Design course, a dyad, and eventually an A&I (freshman) seminar. At some point I realized I was actually doing some of this stuff in my research. And then I started working on a project where this other subfield is as much a part of the project as the Networking part. And started getting excited about other projects—like the one at the start of this post—that are clearly and firmly in this subfield.

I think this means that I’m not just a Networking Person anymore. I’m well on my way to being a Human-Computer Interaction (HCI) Person.

When I was just “dabbling” in the field, or just teaching it, I felt more comfortable with this dual role—perhaps because Networking Persona was still the dominant persona. But as time goes on, that’s becoming less true. It’s definitely not half-and-half, but it’s getting close. HCI Persona is here to stay, and is growing. I’m just as fascinated, and sometimes more fascinated, with the HCI research questions in this project (and in general) as I am with the networking questions.

And this has me somewhat panicked. Philosophically, I’m thinking about the point of the PhD. Is the point more to make you an expert in a tiny corner of your chosen subfield? Or is it more to teach you the skills you need to become an expert in a tiny corner of any subfield? Some skills obviously do transfer—how to do a literature search, how to evaluate sources and conferences and journals, how to learn something quickly, how to envision further extensions and applications of a concept. But a lot of your time in graduate school is learning a particular piece of a particular subfield: what are the seminal works and ideas? what is the main corpus of knowledge and main skill set that everyone in this subfield should have and know? what are you going to be the expert in?

To what extent are you “stuck” in your subfield post-PhD? And how far afield can you go, successfully?

In some respects, this whole internal discussion and line of questioning is moot, because I’ve clearly already headed down the HCI Persona road and don’t particularly want to turn back. But it is something I continue to reflect on, as I work hard to come up to speed on something I have never, ever formally studied.*

Have you gone far a(sub-)field of your dissertation subfield, or discipline? If so, I’d love to hear your experiences in the comments!

* The irony is not lost on me that I’m choosing to stress about switching subfields within the same discipline, when in fact I switched an entire FIELD and have never formally studied the field in which I now work.

The annual obligatory pre-Grace Hopper Conference Post

Eight.

That’s how many Grace Hoppers I have attended, counting this year.

I joke every year that the more Hoppers I attend, the fewer Hopper sessions I attend. That is certainly true this year. While I’ve done a somewhat decent job in my work life and my personal life this year in terms of curating my commitments, I’ve apparently not done the same thing with this conference. I’m definitely overcommitted, although each of the things I’m committed to are worthy and fun in their own right:

  • I’m on a panel! (Hence the badge.) I get to talk about what it’s like to be a faculty member at a liberal arts college, with some powerhouse women as my fellow panelists. (Seriously, I got impostor syndrome just reading their bios!) The presentation looks like it will be a lot of fun, and we’ll hopefully have plenty of time for Q&A. (11:45am Thursday, MCC 200 H-J.)
  • I am also on the posters committee and will be judging the student research competition on Wednesday evening (6:30-9, MCC Halls B-C). I love the poster session and I love talking to up-and-coming researchers, so this will be a lot of fun.
  • My NCWIT Academic Alliance duties continue on the recruitment and engagement front. We’ll have our usual reception for faculty (Thursday evening, 6:15, MCC 205 C-D). In addition, my co-team leader Doug and I will be in the NCWIT lounge Thursday and Friday afternoon demoing something new that we’ll be rolling out to Academic Alliance members soon (if you were at the Summit, you saw an early version of this). Look for the lounge and look for us there!
  • I’m helping staff the LACAFI booth. This year LACAFI’s a silver sponsor, which hopefully means our booth will be somewhat easy to find. Stop by and say hi when you find us!

This year’s conference is special. It’s practically in our backyard! We have the biggest group in Carleton history going: 13 students, 3 faculty, and at last count at least 4 alums. (I’m sure there are plenty more, so Carl alums: if you’re reading this and will be at Hopper, email, tweet, or DM me! We’d love to get together with you, and we even have something planned.) It will admittedly be a bit weird to have the conference in our fair city, but not losing a day to travel is definitely a nice bonus.

Timing-wise, the conference is ideal this year. Recently I’ve had a series of borderline-demoralizing, unbloggable encounters that alternately have me feeling like I’m shouting into the wind, or I’m a fish way out of water. I need to recharge my batteries, and Grace Hopper always does that for me. I don’t know if it’s the energy or the speakers’ passion or seeing colleagues from other institutions or just the sheer joy of not being the only or one of few women in the room. Probably all of the above. Whatever it is, I need it this year, badly.

If you’ll be there, I hope to see you! If not, you can vicariously experience the conference through my tweets—assuming I can find some time to tweet. Grace Hopper, here we come!

 

(Guest post) A Call to Action: A Student’s Perspective on Gender Diversity in the Carleton College Computer Science Department

Note: This is a guest post by Alex Voorhees ’13, a Computer Science major and Educational Studies concentrator at Carleton. The post is an assignment for EDUC 395: Senior Seminar. For this assignment, the students write and publish an editorial on some aspect of the seminar’s topic, which this year is Gender, Sexuality, and Schooling. When Alex approached me about writing a guest post, I enthusiastically agreed, because I thought it would be interesting to get a student’s perspective on our departmental culture, something the CS faculty here spend a great deal of time discussing. I encourage you to chime in with your thoughts in the comments, and hopefully Alex will engage back here in the comments as well. Now, without further ado….

Sixth-term sophomores just declared their majors at Carleton, and the Computer Science (CS) department saw huge gains. Not only did it become the second most popular major on campus, but it also garnered 55 new majors. The percentage of women among the new CS majors is at an all-time high, 30%, up from 20% and 18% in the previous two years. Carleton has been tremendously successful in increasing the fraction of women CS majors, yet that fraction remains far below the percentage of women at Carleton as a whole. Thus it is clear that new initiatives are needed to encourage more women to enter the field. I am calling for more action.

I have noticed many positive changes during my past four years as a major in Carleton’s Computer Science department. Most notable has been the faculty. When I took my first CS class as a freshman, there was only one female professor in the department. Now, in my senior year, there are three, making up a third of the department. While this may seem small, compared to other small liberal arts schools in Minnesota it is actually quite large. This larger number of female faculty has certainly helped attract women to the department. However, this is far from the only positive change. For example, a subtle change recently caught my eye: the backgrounds of the computer screens in the computer lab show pictures of famous male and female computer scientists. I think this is a great way to show every student in the lab that women have played an integral role in the development of the field, despite being outnumbered by their male counterparts.

While these changes have been positive, there is still much work to be done. I have witnessed instances of women in the department experiencing various kinds of bias. Most of these are micro-aggressions, ranging from comments made in passing to actions. For example, I was in a course taught by a female professor, and certain male students acted in a way that I am sure they never would have in a class taught by a male professor. At the end of her lecture, one of these male students walked out of the classroom in a clearly disrespectful manner. I felt horrible about how the student acted, and about the fact that I did nothing in response. Overall, I think the CS department does an excellent job creating a positive culture for women, but we need to not only encourage women to take CS classes, but also work to change the attitudes of some male students in the department. Moreover, male students cannot act as bystanders when they witness micro-aggressions. When you see or hear something that might be considered a micro-aggression, do not be afraid to say something! I also think it would be a great idea for the CS department to offer a class on the history of computer science, to illustrate the important role of women in the development of the field. With women serving as the CEO of Yahoo and the COO of Facebook, such a class is a no-brainer.

Research minute: The practical aspects of evaluating video quality in real time

Whenever we present our work on evaluating users’ quality of experience (QoE) with online streamed video (like watching YouTube), people ask us, “So, how would this work in a real-world system?”

We’ve come up with some neat algorithms for determining how a user would evaluate video quality, based solely on measurements we can easily obtain like bandwidth and received packets, but our work has largely remained in the proof-of-concept, measurement and analysis tool realm up to this point. As a former engineer, though, I’m interested in building things. My vision for this project has always been to build a functioning system, one that can predict and evaluate video QoE in real time. So a couple of years ago, we set out to answer some of those questions.

This week, I’ll be presenting the first results of that study, “Systems Considerations in Real Time Video QoE Assessment”, at Globecom—specifically, at the Workshop on Quality of Experience for Multimedia Communications. In this study, we attempted to answer the following questions: How frequently can we generate video QoE ratings with some degree of accuracy? How often should we sample the measurement data? How do we weigh the need to consolidate data collection (arguing for fewer, less frequent data points) against the need to monitor video quality in real time (arguing for more frequent data points)? What are the timing requirements for such a system, both in training the system and in assigning ratings to videos?

To answer these questions, we used the data we collected in the summer of 2010 for this paper, developed a mechanism to play back the data in pseudo-real time, and then sliced and diced the data in various ways. We played with the sampling rate: how many seconds should pass between measurements? We played with the amount of data to process at once: would ten seconds be enough to give us an accurate video QoE rating? would a minute be too long? Which video measurements should we use: all of them, some of them, one of them at a time? While trying all of these combinations, we kept our eye on the clock, literally: if this system is going to deliver results in real time, then we need to make sure that “teaching” the system how to evaluate videos does not take too long—otherwise, our system is not very adaptable, and thus not very useful.

In the middle of the study, we realized that our assumptions about the video delivery system itself might also impact how the system is designed. For instance, our system could look like Netflix: Netflix controls which videos are available to viewers, and the content is fairly stable (new videos are added on predictable schedules). This looks very different from something like YouTube, where the available content changes rapidly because people are constantly uploading videos.  In the former case, we can “teach” our system using videos that people will be watching. In the latter case, there are no guarantees that we have videos to teach our system that look anything like what people will be watching. So we considered both of these scenarios as well.

So what did we learn from this study?

  • Taking data samples frequently and processing smaller amounts yields the best results, most of the time. Except for the shortest video in our study (a two-minute clip of dialog from a movie), we were best able to predict video quality by sampling data every second and evaluating the data in 20-second chunks. (For the shortest video, going 50-60 seconds between evaluations worked better. We think this is because the scene changes in this clip happened about every 40-50 seconds.)
  • More is not better, when it comes to what type of data to use. While we had four different categories of data available from the videos—bandwidth, frame rate, received packets, and the number of times the clip buffered—we found that concentrating on just bandwidth and received packets gave us the most accurate picture (no pun intended) of video quality, in general. Again, our shortest video was an outlier: here, frame rate did a better job of judging video quality.
  • The system can operate in real time. Assessing a video’s rating takes less than a second, and training the system takes at most 10 minutes, and training happens offline anyway. The “less than 10 minutes to train” part is key, because it means we can continually re-train our system as new videos come online, if we choose to do so.
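Putting the first two findings together, the real-time evaluation loop for a typical video might look like the sketch below. (The `predict` function stands in for a trained model; the measurement names and data layout are assumptions for the example, not the paper’s actual code.)

```python
def chunked_scores(samples, predict, window=20):
    """Score per-second measurement samples in 20-second chunks.

    samples: list of dicts with per-second measurements for one video.
    predict: a trained model's scoring function (assumed here); we feed
             it only bandwidth and received packets, the two measurement
             types that mattered most for accuracy in our study.
    """
    scores = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        features = [(s["bandwidth"], s["received_packets"]) for s in chunk]
        scores.append(predict(features))
    return scores
```

Because each chunk is only 20 samples and scoring takes under a second, the quality ratings keep pace with the video as it plays.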

Clearly this paper doesn’t definitively answer the question of how such a system would work, but it’s a step in the right direction. We are actively considering some related questions, specifically what other infrastructure would be required to collect, analyze, and process such measurements and feed them back into the system, so that the system could fix itself when video quality goes south. There’s also the question of how dependent our results are on the particular videos selected. We actually found support for “one configuration to rule them all”: a sampling rate, evaluation window, and set of measurements that worked well for all videos and for both video system scenarios. That’s promising in terms of the generalizability of our solution, but further study with additional videos would definitely help.

Acknowledgements: Two of my research students, Tung Phan ’13 and Robert Guo ’13, did the initial studies and analysis of the data, in 2011. At that point, we actually got stuck and put the project aside for a bit. The insights we gained from what didn’t work informed the approach in this paper, and definitely made this paper possible! The infrastructure for collecting and analyzing the data, and the data used in the paper, came out of work in the summer of 2010 by Guo, Anya Johnson ’12, Andy Bouchard ’12, and Sara Cantor ’11.

You can’t go home again

Next week I’ll be in Chicago for the NCWIT summit*. Carleton’s an Academic Alliance member so I’ll be representing us there. I always look forward to the summit—this is my third one—but I’m especially excited about this one because Chicago is my grad school home.

I’ve been back to Chicago a number of times since I graduated, but I’ve never made it back to campus. My advisor left right before I finished, so that’s one big tie back to campus that’s not there anymore. But I still do have a number of ties there, and so this year I decided that I’d make the time to go back and visit my home of 5 1/2 years.

I’ve spent a good part of this week setting up meetings and letting people know I’ll be on campus. It’s awesome (and a bit weird) how many people remember me. Especially since it’s been…let’s say, a number of years…since I graduated. Of course I’m eager to discuss research with like-minded people, so I dutifully did some poking around on the various labs’ and groups’ sites.

In the process of poking around, I learned two things that really shouldn’t shock me anymore, but still did:

  1. I have zero research interests in common with my old lab. Zero.
  2. My research interests are much more aligned with the CS faculty than the ECE faculty.

Now, you are probably thinking “well, DUH! You are a computer scientist, after all!” And yes, you’re absolutely right. I’ve identified as a computer scientist for at least 9 years now, and probably longer, since the switch to CS really happened during my postdoc days. But part of me still identifies more as an electrical engineer. That was my undergrad identity. That was my grad school identity. That’s what I thought I was going to be when I grew up. Identities are hard to shake, apparently, even if they don’t quite fit anymore.

The thing is, shaking that identity, and taking some risks to do so, opened up a world of possibilities that wouldn’t have existed had I stayed the course. My postdoc, my current position, and all the research opportunities of the past…bunch of…years: none of that would be possible if I hadn’t decided to assume a new identity as a computer scientist. And of course, it was my time at my grad school alma mater that put me in the position to make that identity switch in the first place. That’s where I gained confidence in myself, built a support network, and worked on the right research projects, all of which allowed me to ultimately explore and eventually assume the computer scientist identity.

So I’ll visit my old lab and my thesis committee and reminisce a bit about my engineer-self. And I’ll make some new acquaintances as my computer scientist-self. And I’ll feel equally comfortable in both worlds, even if I can’t exactly talk research with my old lab anymore.

* If you are a reader of this blog and will be at the NCWIT summit next week, please introduce yourself and say hi!