On overwork and taking a pause

There are the plans you painstakingly make at the start of each academic term. The list of projects you’ll complete by March. The daily research/writing time you slot in on your calendar like any other meeting. The schedule of when you’ll send things to collaborators, getting them off your plate so you can move on to the next project. The time you set aside for class prep and class administration. The downtime on weekends for catching up with your family and friends and doing things that restore your soul.

And then reality hits, and there are the plans (or lack thereof) that you actually follow.

So far, Winter term has been an exercise in rescheduling and pivoting. From the polar vortex bringing near-record cold and wind chills (and 4 straight days of school cancellations for the kiddos — but none for Carleton, of course <eyeroll>), to service obligations that are all taking 10 times longer than expected (and thus still occupying valuable space on the to-do list), to the realities of teaching a course for the first time in 6 years, to collaborators and students and colleagues who are similarly overwhelmed by life and the dumpster fire that is our world these days….well, suffice it to say it’s been a challenging term.

The biggest unanticipated challenge for me? Course prep. I expected that course prep would take up a bigger chunk of my time than it normally does, given that I last taught this course in Fall 2012. But OH MY WORD, some days course prep and course administration feel all-consuming. Having 39 students in a class that requires a lot of hands-on time from me is overwhelming. And for reasons I won’t go into here (*cough* backups that weren’t really backups *cough*), I am creating about 80% of my course materials from scratch. Problem sets. Reading quizzes. Reading assignments. In-class exercises. Mini-lectures. The good news is that I am thoroughly enjoying the process, and redoing almost everything gives me the freedom to reimagine the course from how I taught it previously. That’s a tremendous gift. And chances are good that I’ll be teaching this course several times next year, so putting in the work now will make Future Amy’s life much, much easier. BUT. It is still very, very time-consuming. And most of this time is coming at the expense of my research time and my weekend fun time, which means I haven’t taken an entire weekend day off since the first of the year.

For someone who’s worked very hard to give herself permission to take time for self-care and restoration, working every weekend has taken a huge toll on me. Before the polar vortex hit, I realized that I was heading quickly into burnout land. The polar vortex gave me permission to hibernate in my house and cancel anything that required me to physically be anywhere else but my house (including class and office hours). While I spent much of that time working, it was at my own pace and not the panicked, breakneck pace I’d gotten used to. I caught up, sort of. I didn’t have to be constantly “on”, something that’s draining for an introvert like me. And not having to be anywhere meant that I could take breaks to do things that restore me, like craft, color, and work on puzzles.

There’s still way too much on my plate, but I’m at the point where I feel like I can manage it better, and where several things are close to finished. I think I can actually take the majority of this weekend off, for a change! And — dare I say it? — I should be able, starting next week, to get back into my daily research/writing practice, and make progress on something other than advising my research students on their projects.

Introducing CS 1 students to algorithmic bias via the Ethical Engine lab

There’s a lot of recent interest around the ethics of technology. From popular press books like Algorithms of Oppression, Automating Inequality, and Technically Wrong*, to news stories about algorithmic bias, it seems like everyone is grappling with the ethical impacts of technology. In the computer science education community, we’re having our own discussions (and have been for some time, although there seems to be an uptick in interest lately) on where ethics “belongs” in the curriculum, and how we can incorporate ethics across the curriculum — including in introductory courses.

One initiative aimed at touching on ethical issues in CS 1 particularly caught my attention. In July 2017, Evan Peck, at Bucknell University, posted about a programming project he and Gabbi LaBorwit developed based on MIT’s Moral Machine, a reworking of the classic Trolley Problem for self-driving cars. This project, the Ethical Engine, had students design and implement an algorithm for the “brains” of a self-driving car: specifically, how the car decides whether to save its passengers or the pedestrians in its path when it cannot save both. After implementing and testing their own algorithms, students audited the algorithms other students in the class designed.

Justin Li at Occidental College built upon this lab, making some changes to the code and formalizing the reflection questions and analysis. He wrote about his experiences here. In particular, Justin’s edits focused more on student self-reflection, having students compare their algorithm’s decisions against their manual decisions and reflect on the extent to which their algorithm’s decisions matched (or didn’t match) their priorities.

I was intrigued by the idea of this lab, and Justin’s version seemed like it would fit well with Carleton students and with my learning goals for my intro course. I decided to integrate it into my fall term section of intro CS.

Like Evan and Justin, I’ve made my code and lab writeup freely available on GitHub. Here are links to all three code repositories:

Framework

Based on Justin’s and Evan’s writeups, I made several modifications to the code.

  • In the Person class, I added “nonbinary” as a third gender option. I went back and forth for a bit on how I wanted to phrase this option, and whether “nonbinary” captured enough of the nuance without getting us into the weeds, but ultimately decided this would be appropriate enough.
  • Also in the Person class, I removed “homeless” and “criminal” as occupations, since they didn’t really fit in that category, and made them boolean attributes, similar to “pregnant”. Any human could be homeless, but only adults could have the “criminal” attribute associated with them.
  • In the Scenario class, I removed the “crossing is illegal” and “pedestrians are in your lane” messages from the screen output, since in this version of the code these things are always true.

I also made it a bit clearer in the code where the students should make changes and add their implementation of the decision-making algorithm they designed.
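
To give a concrete picture of what the framework looks like after these changes, here is a minimal sketch of the two classes and the spot where students plug in their algorithm. The attribute names and structure are my own simplification for illustration (and the adult cutoff of 18 is an assumption); the actual code in the GitHub repositories differs in its details.

```python
class Person:
    """A passenger or pedestrian in a scenario (illustrative sketch only)."""

    # "nonbinary" added as a third gender option
    GENDERS = ["male", "female", "nonbinary"]

    def __init__(self, age, gender, occupation,
                 pregnant=False, homeless=False, criminal=False):
        self.age = age
        self.gender = gender
        self.occupation = occupation        # "homeless"/"criminal" no longer appear here
        self.pregnant = pregnant
        # homeless and criminal are now boolean attributes, like pregnant;
        # only adults can be criminals (18 as the adult cutoff is my assumption)
        self.homeless = homeless
        self.criminal = criminal and age >= 18


class Scenario:
    """One crash: the car can save either its passengers or the pedestrians."""

    def __init__(self, passengers, pedestrians):
        self.passengers = passengers        # list of Person objects
        self.pedestrians = pedestrians      # list of Person objects
        # "crossing is illegal" and "pedestrians are in your lane" are always
        # true in this version, so they are no longer printed with the scenario


# ----- STUDENTS: add your decision algorithm here -----
def decide(scenario):
    """Return 'passengers' or 'pedestrians', whichever group the car should save."""
    raise NotImplementedError("Replace this with the algorithm you designed on paper")
```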

Execution

I scheduled the lab during Week 8 of our 10-week course, just after completing our unit on writing classes. We take a modified “objects-early” approach in CS 1 at Carleton, meaning students use objects of predefined classes starting almost immediately and learn to write their own classes later in the term. The lab mainly required students to use classes written by others, accessing the data and calling the methods in those classes, which conceivably they could have done earlier in the term. However, I found that slotting the lab in at this point in the term meant that students had a deeper understanding of the structure of the Person and Scenario classes and could engage with them more meaningfully.

I spread the lab over two class periods, which seemed appropriate in terms of lab length. (In fact, one of the class periods was shortened because I gave a quiz that day, and the majority of the students had not finished the lab by the end of class, which leads me to believe that 2 whole class meeting periods at Carleton, or 140 minutes, would be appropriate for this lab.) As they do in all our class activities, students worked in assigned pairs using pair programming.

On the first day, students made their manual choices and designed their algorithm on paper. To ensure they designed on paper before diving into the code, I required them to show their paper design to either my prefect (course TA) or me. A few pairs were able to start implementing the code at the end of Day 1. On the second day, students implemented and tested their algorithms, and started working through the lab questions for their writeups. Most groups did not complete the lab in class and had to finish it on their own outside of class.

At the end of the first day, students submitted their manual log files. To complete the lab, students submitted their algorithm implementation, the manual and automatic logs, and a lab writeup.

Observations

Unexpectedly, students struggled the most with figuring out how to access the attributes of individual passengers and pedestrians. I quickly realized this was because I instruct students to access instance variables using accessor and mutator methods, but the code I gave them did not contain any accessor/mutator methods. This is a change I plan to make in the code before I use this lab again. I also plan to look a bit more closely at the description of the Person and Scenario classes in the lab, since students sometimes got confused about which attributes belonged to Scenarios and which belonged to Persons.
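
Concretely, the planned fix is to add accessor (and a few mutator) methods so students can work with the classes the way they’ve practiced all term. Continuing the hypothetical Person sketch from above, it might look something like this:

```python
class Person:
    def __init__(self, age, gender, occupation, pregnant=False):
        self._age = age
        self._gender = gender
        self._occupation = occupation
        self._pregnant = pregnant

    # Accessor methods, so students don't reach into instance variables directly
    def get_age(self):
        return self._age

    def get_gender(self):
        return self._gender

    def get_occupation(self):
        return self._occupation

    def is_pregnant(self):
        return self._pregnant

    # A mutator, for completeness
    def set_occupation(self, occupation):
        self._occupation = occupation
```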

Students exhibited a clear bias towards younger people, often coding this into their algorithms explicitly. One pair mentioned that while their algorithm explicitly favored younger people over the elderly, in their manual decisions they did “think of our grandmas”, which led to differences in their manual and automatic decisions in some places. A fair number of students in this class came from cultures where elders traditionally hold higher status than in the US, so the fact that this bias appeared so strongly surprised me somewhat. Pregnant women also got a boost in many students’ algorithms, which then had the effect of overfavoring women in the decisions — which many students noted in their writeups. While nearly all pairs explicitly favored humans over pets, a few pairs did give a small boost to dogs over cats, while no one gave any boost to cats. I’m not sure why this class was so biased against cats.
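
To make the pattern concrete, many of the algorithms boiled down to something like the following scoring heuristic. This is my own illustrative paraphrase, not any particular pair’s code, and the species attribute used to distinguish pets from humans is an assumption: score each group, weight young and pregnant people more heavily, count humans far above pets, and save whichever group scores higher.

```python
def score_group(people):
    """Score one group of Persons (hypothetical heuristic, not actual student code)."""
    score = 0
    for p in people:
        # assume a hypothetical species attribute to distinguish pets from humans
        if getattr(p, "species", "human") != "human":
            score += 1                      # pets count for very little
            continue
        score += 10                         # base value for every human
        if p.age < 30:
            score += 5                      # explicit preference for younger people
        if p.pregnant:
            score += 5                      # pregnancy boost (indirectly favors women)
    return score


def decide(scenario):
    """Save whichever group scores higher (ties go to the passengers here)."""
    if score_group(scenario.passengers) >= score_group(scenario.pedestrians):
        return "passengers"
    return "pedestrians"
```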

I was impressed by the thoughtfulness and nuance in many of the lab writeups. Most students were able to identify unexpected biases and reason appropriately about them. Many thoughtfully weighed in on differences in their algorithm’s choices versus the choices of their classmates’ algorithms, one pair even going so far as to reason about which type of self-driving car would be more marketable.

In the reflection question about the challenges of programming ethical self-driving cars, many students got hung up on the feasibility of a car “knowing” your gender, age, profession, etc., not to mention the same characteristics of random pedestrians, and being able to use all of this to make a split-second decision about whom to save. This is a fair point, and in the future I’ll do a better job framing this (although to be honest I’m not 100% sure what that framing will end up looking like).

One of the lab questions asked students to reflect on whether the use of attributes in the decision process is ethical, moral, or fair. Two separate pairs pointed out that the selection of attributes can make the decision fair, but not ethical; one pair pointed out the converse, that a decision could be ethical but not necessarily fair. I was impressed to see this recognition in student answers. Students who favored simpler decision-making processes also provided some interesting thoughts about the limitations of both “simpler is better” and more nuanced approaches, both of which may show unexpected bias in different ways.

Conclusions and takeaway points

Ten weeks is a very limited time for a course, so for any activity I add or contemplate in any course I teach, I weigh whether the learning outcomes are worth the time spent on the activity. In this case, they are. From a course concept perspective, the lab gave the students additional practice utilizing objects and developing and testing algorithms, using a real-world problem as context. This alone is worth the time spent. But the addition of the ethical analysis portion was also completely worth it. While I have yet to read my evaluations for the course, students informally commented during and after the exercise that they found the lab interesting and thought-provoking, and that it challenged their thinking in ways they did not expect going into an intro course. I worried a bit about students not taking the exercise seriously, and while I think that was true in a few cases, by and large the students engaged seriously with the lab and in discussions with their classmates.

I teach intro again in spring term, and I’m eager to try this lab again. The lab has already sparked some interest among my colleagues, and I’m hoping we can experiment with using this lab more broadly in our intro course sections, as a way to introduce ethics in computing early in our curriculum.

*all of which are excellent books, which you should definitely read if you haven’t done so already!

Theme for 2019: Foundation

Each year (or at least the years I remember to do so), instead of making resolutions at the start of the year, I pick a theme for the year. I prefer themes to resolutions because themes serve as overarching, guiding principles. They help focus me on what’s important, at least in theory. If an opportunity arises, or I need to make a choice about something, I see if it fits with my theme. If it doesn’t, it’s usually a sign that I need to pass on the opportunity, or make a different choice.

Some of my past themes include “healthy”, in 2017, and “defining”, in 2010.

2018 was a challenging year in many ways. There were some highs — completing my first triathlon and my first open taekwondo tournament, getting accepted to and starting the HERS Institute, teaching Intro again after a long hiatus — but many lows as well — breaking my elbow, and having to cancel our highly-anticipated camping trip; injuring myself AGAIN last month in taekwondo class. As much as I hate to admit it, injury dominated my year. My year was a year of can’ts, of carefuls, of wariness, of modifying. Many times, I felt like I’d be injured forever.

This year, I want to focus on strengthening my core: my core skills, my core values, my resilience in general, and my overall physical strength.

  • Work-wise, 2019 is a transitional year. I’ll be taking on a new and exciting opportunity (more on that in a later post) where I’ll get to learn and practice new skills. But it’s also time consuming. So I’ll need to be clear on my priorities and how I choose to spend my time, to make sure I’m spending it on the right things.
  • From a health perspective, I’d really like to STOP being injured in 2019. To do so, I need to make sure I have a strong base: strong muscles, a strong core, a solid cardio foundation. And I need to be smart about gaining strength and ramping up my cardio. I eat pretty healthfully already, but my body is changing as I age, so making sure the food I eat fuels me well is also important, so that my body can stay in one piece for a change.
  • My kids and my spouse are the most important people in my life, and the ones who usually get the worst of me, the end-of-the-day-I-have-no-reserves-left me. As my kids get older, they need me differently than they did when they were babies and toddlers — bigger kids, bigger problems, as they say. And my spouse and I are like two ships passing in the night lately, which is not really conducive to a healthy marriage. My relationships with them, and my friends, are important to me, and I need to start treating them as such.

So my word for 2019 is FOUNDATION. My focus this year is on building and strengthening my foundation. Clarifying my priorities. Building up my physical strength and health. Focusing on relationships. Acquiring new skills and practicing weaker skills. Preparing myself for future challenges.

Do you have a word or theme for 2019? Please share it in the comments!