5 Pandemic Teaching Practices I Plan to Keep

As Spring Term wound down and hints about the structure of Fall Term (and the summer) emerged over the past few weeks, I found myself reflecting on the past year+ of pandemic teaching. I talked in my last post about returning to some of the normalcy of interacting in person, and how much I’m looking forward to little things I used to take for granted. At the same time, I recognize how much grief and trauma we carry forward, individually and collectively, and wonder / worry about what that will look like and how we will deal with it, next year and beyond.

Somewhere in the middle of those two spaces lies pedagogy — what it was in the Before Times, what it became in Pandemic Times, and what it will look like henceforth. I’ve spent a lot of time thinking about the “henceforth” part. What have I learned in the last 4 terms of teaching online? How will this carry over into my future courses? What’s going to “stick”?

This, of course, is not a one-time reflection — I’m sure my thoughts will morph as we settle into whatever becomes our normal. But at this point in time, I keep settling on the same 5 things, which makes me think these will be the most likely to “stick”.

1. Weekly grid

I first saw this idea in a Resilient Pedagogy workshop last summer (and blogged about it here). The grid communicates expectations to students about what’s going on in class in a particular week, in what mode, and how long each item should take them to complete. I’ve found the grid invaluable for planning out each week. It shows me whether activities are balanced across modalities or whether I have to switch things around (e.g., do I have too many asynchronous team activities planned?). It keeps me honest in my expectations of students — I can quickly see if what I’ve planned will take 8 hours or 16 hours, and adjust accordingly. And I find it much, much easier to parse than looking at a list of activities on a Moodle page. So much so that I embedded the grid for the week into that week’s Moodle page.

Screen shot of Moodle page for Week 6 in Software Design, highlighting the activity grid.

I’ll continue this because: It makes planning easier for me! And it has all of the key information for the week in one place for the students, including when office hours are and how to access the lab assistants.

2. Sunday night videos

I’m pretty sure I got this idea from Small Teaching Online, by Flower Darby and James Lang. The Sunday Night Video, so named because I often ended up recording and/or posting the video on Sunday night, is a short, 5-10 minute video that presents a high-level review of what we did in class last week and a preview of what’s coming up this week. The review and preview focus on how the course activities, concepts, skills, etc. fit into the learning goals and the larger arc of the course. Like the grid, it provides orientation and context within the course — why are we doing this set of activities now? How will this get us closer to achieving the learning goals in the course?

Screen shot of the weekly review/preview video, a.k.a. the "Sunday Night Video"

I’ll continue this because: It’s a quick and accessible way to remind students of how all of the pieces fit together. It shows students how we’re progressing towards the learning goals for the course. It helps them connect the dots.

I might modify this by: Instead of recording a video, I could start off the first class meeting of the week with this content. I don’t know if that’s the best use of limited class time, but I could probably do a variation of this in a shorter amount of time. I may experiment with this in the fall, when I’m teaching a first year seminar.

3. Collaboratively annotated readings

I’ve posted previously about my use of Hypothes.is in my Computer Networks course (also written up here), and I’ve also used Hypothes.is in Software Design. When I first experimented with it, I thought of it exclusively as an asynchronous team tool, for students to label and highlight course concepts together. (For instance, in Software Design I have students apply Steve Krug’s Trunk Test to a web site, finding and highlighting answers to each of the Trunk Test questions.) The more I used it, the more I realized how I could use it to focus students’ attention on key concepts in particularly dense readings, guide students through reading a recent paper related to course concepts, or (in the case of Computer Networks) walk students through a protocol specification. The example below shows my annotations in our online textbook for a particularly tricky topic.

Annotated text using Hypothes.is, explaining the finer points of TCP Congestion Control in a Computer Networks course.

In turn, students can add their own highlights, comment on my annotations, and so on — which leads to a dialog about the material before we even get to class!

I’ll continue this because: It’s an effective way for me to communicate how students should read a particular selection and what to focus on, and to help them become more effective readers of technical content. It allows students to communicate with me as they are reading so that I can get a clear sense of what’s confusing and what’s piqued their interest. The act of annotating a reading also serves as a valuable check for me — I can home in on what’s really important, and cut out sections that I may have assigned in the past but that don’t carry much weight in terms of student comprehension of a particular concept.

4. Using Google Docs during small group activities

When we moved to online teaching, I lamented the loss of in-person group work and of teaching in my favorite classroom space, a large room with tables and walls of whiteboards. How would I reproduce the collaborative brainstorming, the collective question-answering, the creation of communal artifacts, and my walking around the room to answer questions and redirect the wayward group?

Answer: collaborative editing of Google Docs.

Example of a collaboratively edited Google Doc from Software Design, where teams analyzed different websites.

Collaboratively edited Google Docs allowed me to reproduce the spirit of all of those things. Student teams either had their own document to edit, pre-populated with the discussion questions and prompts, or had a section of a shared document to edit, also pre-populated with the questions and prompts (shown in the example above). I’d send student teams to breakout rooms after setting up the activity. Sometimes I’d travel from room to room, but because I found this more disruptive than helpful, I’d usually just monitor the activity on the document(s). If I wasn’t seeing any typing for a while, I’d stop by the room. If someone in a team wrote a particularly interesting, insightful, or good point, I’d add a comment. I also used comments to ask guiding questions if a group seemed to be heading off-track. The document(s) provided a record of class discussion, which students could revisit or, if they’d missed class for whatever reason, use to catch up. (This was particularly valuable when I had students literally on the other side of the world, for whom class met in the middle of the night and who rarely attended synchronous class meetings because of that.)

I’ll continue this because: In addition to providing students with a record of what each group produced, this provides me with a record of what each group produced. Even when I walk around the room, I miss things.

I might modify this by: having students take pictures of the whiteboards and post those to Google Drive, when we use the whiteboards in class. (It might also be an interesting learning activity to have teams annotate the pictures after the fact, as a way to consolidate their learning from a particular class session!)

5. Instructional videos / walkthroughs

I tried, as much as possible, to avoid lecturing in synchronous class meetings, instead opting to record shorter lectures and post those along with targeted readings. As the pandemic wore on, I found other valuable uses for instructional videos:

  • Walking through worked examples of problems.
  • Providing feedback on things that many students missed on an assignment or exam, to help students who wanted to revise figure out how to approach the revisions.
  • Walking students through the steps of a lab activity — showing them how to do something, and then asking them to stop the video and do a particular section of the lab (shown in the picture below).
  • Providing feedback to individual students and/or teams on an assignment, when it was easier to show them where they went astray instead of trying to put it into words.
Screen shot of a video walking students through a lab on Flask.

I’ll continue this because: Not every student is going to catch everything in a lecture or demonstration the first time around. Allowing students the opportunity to review and rewatch things at their own pace provides more opportunities for real learning — particularly if the students work the example, step through the problem, etc. along with the video. And the students who received video feedback indicated that they found this form of feedback particularly helpful, because they could see what part of the assignment particular pieces of feedback matched.

I might modify this by: finding ways to record the lecture / problem examples portion of class, maybe not all the time, but when I’m teaching a particularly difficult concept.


In reading over this list, I’m struck by the fact that all of these pedagogical practices increase transparency. They expose how students approach and apply the course concepts, and the work of small teams. They give students a glimpse into how I think about the pieces of the course and my expectations for their learning and engagement. They make more of the construction of the learning process visible. And hopefully, by being more transparent and not assuming students know why I’m doing what I’m doing, I’m also being more inclusive.

If you’ve taught during the pandemic, what new practices do you plan to continue?

Midterm … ish … update

We’re currently in Week 7 of 10 of Spring Term, and the only good thing I can say about this state of affairs is THANK GOD the administration moved fall term registration, and advising, to the summer, because if I had to meet with all of my advisees on top of everything else going on this week, I would probably run away to join the circus.

No one is ever at their best at this point in our academic year. Every other institution in the universe (it seems) is out for summer, and we’re all sick of each other and exhausted and cursing our calendar. This year those feelings are amplified. I poll my class every Wednesday (anonymously and when I remember) to see how they’re doing, and this week over half the class responded with some level of “not great”. A good number of my students are dealing with some pretty serious stuff. The other day one of my colleagues said “I wish we could just give everyone an A and send them home at this point.” Which, to be honest, sounds like an excellent strategy.

I have to say that I’ve mostly struggled through the term, too. Work continues to be a firehose, and I continue to work more hours on weekends than I’d like. There are difficult growing pains connected to my leadership role. My course grader went MIA for a good chunk of the term. Both kiddos are really struggling. I’m dealing with a level of exhaustion I haven’t experienced since I-don’t-know-when.

And yet.

I’m fully vaccinated, as is my partner, as are many of my close friends here. I’ve hugged people I don’t live with, for the first time in over a year! And one of my kids is now vaccine-eligible, and is hounding us to schedule their appointment ASAP.

I’ll be hosting students IN MY RESEARCH LAB, PHYSICALLY in a few short weeks.

My Software Design students are awesome and a lot of fun to teach. I am having a blast.

I submitted an article to a journal earlier this month! Something I’d been thinking about writing for a while and then struggling to complete for months. I convinced one of my favorite staff people to coauthor, and writing with her was one of the high points of this academic year. And I’m currently working on another paper, on work I did with students a couple of years ago, which I hope to get out for review by mid-summer.

I was elected to the college’s tenure-and-promotion committee, a 3-year stint. This is super important (and hard!) work, particularly as we figure out what faculty reviews and evaluation look like post-COVID. I’m humbled that my colleagues trust me to be a thoughtful voice in these discussions and deliberations.

Most importantly, despite everything else going on, I feel a rare sense of … calm. A sense that all of the important stuff will get done, maybe not quite on the timeline I’d like, but still, done. That the stuff that doesn’t get done wasn’t really important in the first place. That the current state of affairs, no matter how frustrating or difficult, is temporary. This is a rare state for me in normal circumstances, but especially during the spring, where my depression and anxiety are typically at their worst. Perhaps all that hard work in therapy is starting to pay off.

I hope this week, despite whatever else is on your plate, that you are able to find some small bit of calm among the chaos.

Extending the “central question” experiment to Software Design

In my Fall Term course, Computer Networks, I experimented with designing the course around a central question: should the Internet be considered a public utility or a private good? Students produced reflections on this question at the start and end of the term: the initial reflection focused on how they understood the question at that point, while the final reflection used evidence from the course to demonstrate how their answer did (or did not) evolve.*

The experiment proved so successful that I now plan to do this in as many courses as I can. I like how it focuses the students, succinctly, on the core of what we’re learning and reminds them that what they learn in this course has broader impacts beyond the content. At the same time, the central question helps me focus on what’s really important in the course, which helps me make decisions about what content to include or exclude, what to triage, and how to structure course activities.

This spring, I’m teaching Software Design — the same course I taught a year ago. Software Design is an interesting course in that we cover a lot of ground and a number of seemingly disparate topics loosely united by the theme of “these are things we think our majors should know about how to write effective software”. Things like: how to work effectively in teams. Best practices in function and class design. Code style and commenting. A bit of user interface design and accessibility. How to shepherd a project from idea to deployment. Iterative design. Ethics. Design patterns.

Over the past couple of years, I’ve worked with the colleagues who also regularly teach this course to streamline the story the course tells. I came up with a “layer cake” diagram to show the students how the topics unite and where various concepts fall in these layers. In my “week in review/weekly preview” videos last spring, I included this diagram to indicate what layer(s) we’d hit the previous week and would hit in the coming week. I think this went a long way towards making the course feel less disjointed to the students.

Three layers of Software Design topics: Professionalism at the top, Design and Architecture in the middle, and tools at the bottom.
The “layer cake” model of Software Design topics. In retrospect, I missed an opportunity to make this look more like a cake.

And yet…this didn’t quite get us all the way to where I wanted to be. There’s still a lot going on in that model. Plus, I feel strongly that a huge part of “writing effective software” involves ethical and social reflection. How might this software cause harm, intentional or otherwise? Whom does this leave out? When, and why, might we choose not to bring a piece of software into the world? Do our teams embrace and interweave diverse perspectives and life experiences? In what ways can software development be an act or a practice of social justice?

So, back to the central question. Given all that’s going on in this course, what should that central question be?

The answer I settled on:

What are our responsibilities, as software developers, when putting software out into the world?

I’d like to use the same initial/final reflection assignment I used in the fall, which means I’ll need to shuffle around the first week’s deliverables (not a huge deal) and modify the current reflection I have students do at the end of this course (where they reflect on the process of software development they experienced over the term). And I think this question lends itself well to a first-day-of-the-course activity framing the course for students. (It’s more holistic than the exercise I’ve used forever, which has students reflect on examples of good and poor design in software and systems they use and work through the design outline of a mythical system.)

More importantly, I believe the question tightly ties in those ethical and social justice issues that I want students to grapple with. It reinforces the idea that software design and development is not a neutral activity. We don’t have the luxury of NOT critically examining ALL the things we bring to the process: our biases, life experiences, world views, identities, and beliefs. And this critical examination is as much a part of the software development process as using GitHub effectively, or writing solid unit tests, or constructing tightly cohesive functions, or gathering requirements.

I’m interested, and eager, to see how this experiment plays out — and I’m already looking forward to my students’ initial and final reflections on this central question.

*In a nod to universal design and flexibility, students chose the modality for this reflection. Many wrote a classic essay, some recorded videos, and others produced and narrated slide decks. My rubric accounted for these various modalities.

5 Lessons from Fall Term

Winter Term is underway (more on that next week)! Yet I still find myself processing and attempting to make sense of Fall Term. To be honest, I find myself dealing with what I can only describe as lingering and persistent trauma — not just over Fall Term, but over the state of the world more generally. It’s hard to process and analyze when everything feels so uncertain and impossibly hard.

On balance, Fall Term went…surprisingly well, given the circumstances. I had a small, engaged class of 15 students. No one unexpectedly disappeared, and everyone passed the course. My course revisions mostly worked, save for a project that went off the rails due to undocumented conflicts in different minor versions of Python. And I managed to make some forward progress on research and various other projects.

In thinking about the term, I found myself returning to five lessons I learned, or re-learned, over the course of the term.

Lesson 1: Everyone is struggling. And it’s ok to acknowledge that publicly.

Fall Term was hard for lots of us, for a variety of reasons. Time zone differences. Health issues, including mental health. Worries over the election. Concern over the risk-taking behavior of other students. Racial trauma. Isolation and loneliness. Caregiving responsibilities. And while Carleton was not fully online, many of its courses were at least partially online, which meant everyone (students, faculty, and staff) spent much of their days interacting online — difficult even in the best of circumstances. In short, no one was at their best.

I made checking in with my students a priority. I borrowed an idea from a staff colleague and started each synchronous class meeting with the same anonymous poll, asking them how they were doing. Originally I just summarized the responses, but as the term went on I started displaying the results as a percentage of respondents. I commented briefly, adding (truthfully) where I fell among the options, acknowledging the mindspace we collectively occupied that day, and reminding those who were struggling of the various ways to reach out and seek help. Students indicated that they found this helpful — both to see that they were not alone wherever they fell on the continuum that week, and to see that I was honest about my own struggles. This is definitely something I will continue, including whenever we return to in-person instruction.

Poll window asking "how are you doing today?" with multiple choice options
Zoom editor view of the check-in poll I used at each synchronous class meeting.

Lesson 2: Teaching online is easier the second time around

Don’t get me wrong: Teaching online still feels unnatural, weird, and hard. But it felt way less so than it did in the spring. I was able to tap into the lessons I learned about organizing a week, a lesson, a class meeting, an explanation, and apply them to a very different course. Everything seemed to flow much better — even the project that went off the rails. It also helped that students had a term of online learning under their belts, and knew what to expect — from the modality, from each other, and from me.

I also appreciated, even more, all of the pedagogical work I put in over the summer and the pedagogical workshops I attended. It was time very well spent, and it definitely made a huge difference in how the class ran and how well it worked.

Lesson 3: Specifications grading helped…a lot

Based on my reading of Specifications Grading and Grading for Equity, I completely revamped my course grading. Did it work? Hell yes!

I found this new-to-me style of grading freeing. Rather than agonizing over “is this exam answer worth 4 points or 5 points?”, I only had to ask “does this answer meet the expectations for the learning objective or concept?” Turns out, in most cases that’s a much easier question to answer. And knowing that students could revise and resubmit any summative work, I found it easier to make these judgment calls. Weirdly, I actually kind of enjoyed grading!

Most students took advantage of the revision opportunities — some multiple times. I found that a subset of the students were really invested in improving their learning through the revision process — and that this freed up some of them to take risks they might not have normally taken. Which, of course, is exactly what I want to happen in my courses! That said, from a grading management perspective, in the future I will likely limit the number of revisions, probably through some kind of token system, to prevent my workload from spiraling out of control.

I never quite figured out how to get Moodle to play nicely with this grading system. I ended up converting the expectations scale to a 4.0 scale and averaging things within categories to calculate the course grades. It was hard for students to figure out their own course grades because the averaging was somewhat opaque and was done outside of Moodle. In the future, I will invest the time to bake this into Moodle so that students have a better sense of how they’re doing in the course.
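The conversion I describe amounts to map-then-average: translate each expectations-scale mark to a number, average within each category, then average the category means. A minimal sketch, where the numeric values and category structure are illustrative assumptions on my part (the actual conversion lived outside Moodle):

```python
# Hypothetical mapping from the expectations scale to a 4.0 scale.
# These numbers are illustrative assumptions, not the actual
# conversion I used.
SCALE = {
    "exceeds expectations": 4.0,
    "meets expectations": 3.0,
    "partially meets expectations": 2.0,
    "does not meet expectations": 0.0,
}

def course_average(categories):
    """Average marks within each category, then average the category means.

    `categories` maps a category name (e.g. "exams") to the list of
    expectations-scale marks earned in that category.
    """
    category_means = [
        sum(SCALE[mark] for mark in marks) / len(marks)
        for marks in categories.values()
    ]
    return sum(category_means) / len(category_means)
```

The two-stage averaging is exactly what made grades opaque to students: a mark’s weight depends on how many other marks share its category, which is hard to see from a flat Moodle gradebook.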

Lesson 4: Online pedagogy allows for some new collaborative learning opportunities

Computer Networks (the course I taught this fall) is conceptually tricky and often dense. In an in-person class, I make heavy use of office hours and class time to help students extract the important points of a concept, technique, protocol specification, or algorithm from the seemingly overwhelming details. After some success using Hypothes.is, an online annotation tool, in the spring, I experimented with Hypothes.is for some of the denser readings in the course. For a few of the daily targeted readings, I had students answer the reading questions in their small groups by annotating the reading with their answers. For a few others, I pre-annotated the reading to focus their attention on the main points, and had students comment on the annotations and/or add their own. I really liked how this worked out overall, and I think the students got more out of those readings. I plan to continue this practice in the spring and likely beyond. I could see it working really effectively for Intro CS and for Data Structures (our CS 2), where it’s really easy for students to get lost in the details of a reading.

Moving things online wasn’t always neat and contained, but sometimes that’s ok. I usually run an in-class simulation of Internet routing, where students act as autonomous systems in small teams: creating routing tables, entering into peering agreements with each other, and ultimately attempting to “route” data. What normally takes one class period in person stretched over several days online. It was messy and chaotic — and probably taught students about the messiness of real-world Internet routing more effectively and deeply than anything else I have attempted in my 17 years of teaching this topic.

Lesson 5: It’s really, really hard to troubleshoot virtually

When I can’t figure something out, I need to sit down and play around with it. When teaching in person, I spend a lot of time running to the computer lab so that I can see what the students see, in their coding environment, with the same tools and version of Python and all that good stuff.

When the rogue project went off the rails, I found myself flying blind. What version of Python were the students using? Were they all using the same version? Was it the same as mine? Why did the code sometimes work when we sshed into one of the servers, but not consistently? How do I help Windows users — most of my students — when I have a Mac? Ultimately, I was limited in how much troubleshooting I could do. I’m still not sure what I could have done better, other than perhaps requiring the students to develop and run the code within a virtual machine of some sort. But I continue to reflect on how to improve — particularly, how to better support Windows users in my courses.
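One lightweight mitigation, in hindsight, might have been asking every student to run a short environment-report script at the start of the project and paste the output somewhere shared. This is a hypothetical sketch, not something from the course; it uses only the standard library’s `platform` and `sys` modules:

```python
import platform
import sys

def environment_report():
    """Collect the basics needed to reproduce a student's setup."""
    return {
        "python": platform.python_version(),          # e.g. "3.9.5"
        "implementation": platform.python_implementation(),
        "os": platform.system(),                      # "Windows", "Darwin", "Linux"
        "os_release": platform.release(),
        "executable": sys.executable,                 # which interpreter is running
    }

if __name__ == "__main__":
    for key, value in environment_report().items():
        print(f"{key}: {value}")
```

Even a report this small answers the first three questions above (which Python, whose Python, where it lives) before any debugging starts.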


I have a lighter teaching load this term — just our capstone, to make room for the heavier workload of my administrative role this term. But I’m already thinking ahead to how I can capture, consolidate, and integrate these lessons into my spring term course — the same course I taught last spring. I look forward to seeing how much better and more effective I can make that course based on what I learned this term.

Planning for Fall (a story in pictures)

I’m overwhelmed.

Fall term classes don’t start for another week and a half, and I’m already at the stage where I’m semi-catatonic by the end of the work day. (Yesterday I gave up and took a nap. At 4:30pm.)

This time of year is usually full to the brim anyway — the mad rush to finalize the syllabi, helping advisees navigate changes in their schedules, setting priorities for the year for STEM at Carleton, meetings meetings meetings (and, hey, more meetings!), … the list goes on. This year, it’s that … times a thousand.

Yesterday as I navigated through various windows and apps on my laptop, I marveled at the juxtaposition between my “normal” workflow of preparing for the term and the additional preparations for a pandemic term. At the end of the day, I took some screenshots of some of the apps and sites I used throughout the day, to put together a mini photo-essay highlighting a “day in the life” of a professor preparing for the upcoming COVID-influenced term.

  • Checklist containing items to complete for preparing a course for the start of the term.
  • Mind map of a computer networks course.
  • Backward design worksheet with learning outcomes and evidence.
  • Class meeting times schedule.
  • Moodle landing page with course listings
  • Screen shot of part of the faculty COVID-19 FAQs
  • Teaching toolkit, pandemic edition: iPad, pencil, headset

(I did, however, spare you the screenshots of the multiple Zoom meetings I’ve participated in over the past few days. And of the firehose of emails. And of the various ways my family interrupted me mid-meetings. You’re welcome.)

Looking at these pictures, it strikes me that even though everything seems completely out of whack, the basic things I do to prepare for a term — wrangle with Moodle, finalize my learning outcomes, assemble my teaching toolkit — remain largely unchanged. The details may look different, but the broader strokes resemble what used to pass for normal. And that provides me with a teeny bit of comfort as I head into what promises to be a strange and stressful term.

How does preparing for the upcoming term/semester look for you? What new things are juxtaposed into your normal workflow?

Rethinking assessment

When we moved online and moved to mandatory S/Cr/NC grading (basically, Carleton’s version of pass-fail) this spring, I vastly simplified the way I graded projects and other major course assessments. Here’s what I wrote in my syllabus:

Each assessment that you hand in will be evaluated against a checklist related to one or more of the course learning objectives. I will rank each learning objective, and the overall submission, according to a three-point scale: Does not meet expectations; Meets expectations; Exceeds expectations. If an assignment does not meet expectations overall, you (and your team, where applicable) will have the opportunity to revise and resubmit it to be re-evaluated.

… You will earn an S in the course if at least 70% of your evaluated work (after revision, if applicable) is marked as “Meets expectations” or “Exceeds expectations.” You will earn a Cr in the course if between 60 and 70% of your evaluated work (after revision, if applicable) is marked as “Meets expectations.” You will earn an NC in the course if less than 60% of your evaluated work is marked as “Meets expectations.”

CS 257, Spring 2020 syllabus
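The scheme in that syllabus excerpt reduces to a single fraction check over the evaluated work. A minimal sketch; the behavior at exactly 60% and 70% is my assumption, since the excerpt doesn’t pin down the boundaries:

```python
def s_cr_nc_grade(marks):
    """Compute an S/Cr/NC grade from per-assessment evaluations.

    Each mark is the post-revision overall evaluation of one assessment:
    "exceeds", "meets", or "does not meet". Thresholds follow the
    syllabus excerpt: >= 70% meets/exceeds -> S, 60-70% -> Cr,
    below 60% -> NC. Boundary handling here is an assumption.
    """
    if not marks:
        raise ValueError("no evaluated work yet")
    acceptable = sum(1 for m in marks if m in ("meets", "exceeds"))
    fraction = acceptable / len(marks)
    if fraction >= 0.70:
        return "S"
    if fraction >= 0.60:
        return "Cr"
    return "NC"
```

Seen this way, the grace the system extended at term’s end is obvious: once a student’s fraction clears 0.70 with the work already done, skipping a remaining assessment can only lower the denominator’s headroom, not the earned grade band they’ve already secured.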

I’d heard the term “specifications grading” and I knew that what I’d be doing in the spring was in the general spirit of specifications grading (if you squint hard enough). And it worked surprisingly well. Students knew exactly what they had to do to earn a particular grade in the course, and on individual assignments thanks to targeted rubrics. Allowing revision on any major assessment meant that students could recover from the inevitable hiccups during a pandemic term (and a term marked by grief, loss, and protests over George Floyd’s murder). And at the end of the term, when some students could just not give any more to their studies due to all that was happening around them, the system extended some much-needed grace — if they’d already met the threshold for an S, they could bow out or step back from the final assessment, assuming their teams were on board with their decisions.

I wondered: what would it take to do something like this during a graded academic term? I wanted some more guidance.

And so, I did what I always do when I want to learn more about something: I hit the books. Two books, in particular: Specifications Grading: Restoring Rigor, Motivating Students, and Saving Faculty Time, by Linda B. Nilson (2014); and Grading for Equity: What It Is, Why It Matters, and How It Can Transform Schools and Classrooms, by Joe Feldman (2018).

Both books tackle the same general problem: grades and grading are imperfect, biased, and measure lots of things other than how well students achieved learning outcomes. Nilson solves the problem with a straightforward up-or-out approach: work is either acceptable, or it’s not. Feldman’s solution is a bit more nuanced: work is somewhere on a (short!) continuum between “insufficient evidence” and “exceeds learning targets”, but nothing resembling a formative assessment and/or “life skills” gets a grade.

Specifications Grading is, at its heart, faculty-centered in its approach. A key goal of specifications grading is to save faculty time while still providing high-quality feedback to students. The approach is two-pronged. First, all individual assessment grades are pass-fail. The assignment either meets the standard of acceptability, or it does not. No partial credit, no wrangling over how many points something is worth. The standards of acceptability are spelled out in a detailed rubric, or checklist, so that students know exactly what constitutes an acceptable submission. Second, a student achieves a particular course grade (A, B, etc.) by either completing a specified set of activities (“bundles” or “modules”) or demonstrating more advanced mastery of the course learning outcomes (“jumping higher hurdles”).

The bundles/modules/hurdles are spelled out in detail in the syllabus, so that it’s crystal clear how a student earns a particular grade. In my spring course, for example, the bundles were simply percentages of course assessments acceptably completed. In a graded course, a bundle is often more complex: extra assessments, for instance, or more challenging assignments. While setting up the bundles/modules/hurdles seems like a really time-consuming process, the work is front-loaded, done before the course starts, so that the grading itself during the term is more streamlined. Basically, the instructor decides what constitutes meeting the learning outcomes, and constructs the assessments and bundles/modules/hurdles accordingly. At the end of the course, then, the grade more closely indicates the level of mastery of the course learning outcomes than a traditional partial-credit-focused grade does.

Grading for Equity is, I would say, more student-focused. (And more K-12 focused, although I certainly found enough in the book worthwhile to consider for the college context.) Grading for equity is based on three pillars. First, grades should accurately reflect student achievement towards learning outcomes. This means grades should not include things like formative assessments (homework, in-class activities), extra credit, behavior, or “soft skills” — they should only reflect the results of summative assessments, and only the most recent result of a summative assessment. Feldman also cautions against using the typical 100-point scale, which is skewed towards failure, in favor of more compact scales (a four-point scale, for instance, or a minimum score). Second, grades should be bias-resistant. They should not reflect a teacher’s impression of student behavior, which is flawed for many reasons, nor a student’s life circumstances (for instance, their ability to complete homework outside of school hours). Third, grades should be motivational. It should be transparent to students what counts as mastery of a learning objective and how to achieve a particular grade. Formative feedback should not penalize mistakes, because this promotes a fixed mindset rather than a growth mindset. For transparency, Feldman is a fan of detailed rubrics and the four-point scale (something like “Exceeds expectations”, “Meets expectations”, “Partially meets expectations”, “Insufficient evidence”).
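The “skewed towards failure” point is easy to see with a little arithmetic: on the 0–100 scale, F occupies 60 of the 100 points, so a single zero drags an average down far more than it does on a scale where every grade band has equal width. A quick sketch, with invented scores:

```python
# Why the 0-100 scale is "skewed toward failure": F spans 60 of its 100
# points, while on a 0-4 scale each letter grade gets an equal share.
# One missing assignment therefore distorts a 100-point average far more.
# Scores below are illustrative only.

def average(scores):
    return sum(scores) / len(scores)

# Three A-level results plus one zero (a missed assignment), on each scale:
hundred_point = [95, 95, 95, 0]
four_point = [4, 4, 4, 0]

print(average(hundred_point))  # 71.25 -- roughly C-level, despite three A's
print(average(four_point))     # 3.0   -- B-level, a gentler penalty
```

Same student, same work, very different stories — which is part of why Feldman argues for the more compact scale (or, alternatively, a minimum score on the 100-point scale).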

Both books agree that students learn at different rates, and any summative assessment should take this into account. Both systems, thus, allow for retakes and redos. Specifications grading puts some limits around redos to make things easier on the professor, recommending some kind of “token” system where students have a limited number of redos/late passes over the course of the term/semester. Grading for equity favors as many retakes (up to the end of the term/semester) as a student needs or wants in order to meet learning targets. (Theoretically, anyway; the book acknowledges that there could be a snowball effect, particularly when later work depends on earlier work, and suggests that time limits on retakes would be appropriate in this context.) Grading for equity argues that later assessments should replace earlier grades rather than, say, being averaged with them. It gets into the weeds a bit on the freedom of faculty to count anything that demonstrates a learning objective as an appropriate assessment of that objective, including things like discussions in office hours. I get the spirit of this, but it seems ripe for bias.
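The replace-vs-average distinction is worth spelling out, since it’s the whole argument in miniature. A toy sketch, with an invented sequence of attempts on a 0–4 scale:

```python
# "Later grades replace earlier grades": only the most recent attempt at a
# summative assessment counts, rather than averaging across attempts.
# The attempt history below is invented for illustration.

def replacement_score(attempts):
    """Grading-for-equity style: the most recent attempt is the grade."""
    return attempts[-1]

def averaged_score(attempts):
    """Traditional averaging of attempts, for contrast."""
    return sum(attempts) / len(attempts)

attempts = [1.0, 2.5, 4.0]  # a student improving across retakes

print(replacement_score(attempts))  # 4.0 -- reflects eventual mastery
print(averaged_score(attempts))     # 2.5 -- permanently penalizes early mistakes
```

Under replacement, the early struggles become formative rather than punitive; under averaging, a rocky start caps the grade no matter how much the student eventually learns.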

So, how am I using what I learned from these books about assessment as I plan my fall course, a CS elective? And what am I struggling with?

Plan: Retain the meets/exceeds expectations scales, with minor changes. I really liked the ease and clarity of the three-point scale in the spring. Grading for equity makes a compelling argument for the inclusion of a “not yet demonstrated” category, allowing teachers to differentiate between “handed in and not sufficient” and “not handed in”. So I may move to a four-point scale for some assessments (“insufficient evidence”, “partially meets expectations”, “meets expectations”, “exceeds expectations”). Roughly, in my head, “partially meets” equates to C-level work, “meets” to B-level work, and “exceeds” to A-level work. Moodle likes to convert everything to percentages, which is not as useful for this type of grading. I need to figure out how to hack Moodle to show students something closer to this scale rather than “you’ve met expectations so you’ve earned 50% on this assignment”.
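One workaround I’m considering, sketched below: since Moodle stores everything as a percentage anyway, translate that percentage back into a scale label anywhere I report grades to students. The cutoffs here are hypothetical — they’d need to match however Moodle maps the scale items to percentages in a given setup:

```python
# Translate a Moodle-stored percentage back into a four-point scale label.
# The percentage cutoffs are hypothetical placeholders, chosen as midpoints
# between evenly spaced scale items; real cutoffs depend on the Moodle config.

SCALE = [
    (87.5, "Exceeds expectations"),
    (62.5, "Meets expectations"),
    (37.5, "Partially meets expectations"),
    (0.0, "Insufficient evidence"),
]

def label_for(percent):
    """Map a stored percentage to the nearest scale label."""
    for cutoff, label in SCALE:
        if percent >= cutoff:
            return label
    return SCALE[-1][1]

print(label_for(75.0))  # Meets expectations
```

This doesn’t fix Moodle’s own gradebook display, of course — it’s just a way to keep the communication to students in the language of the scale rather than in misleading percentages.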

Plan: Allow for revisions and be flexible with deadlines. We’re still in the midst of a pandemic. We still live in a white supremacist society. And the 2020 elections….well, need I say more? Fall will be tough emotionally and mentally for many of us. Extending flexibility and grace to my students, being willing to meet them where they are, is the least I can do. And while I’ve used revisions on exams previously with pretty good results, I’m eager to extend that to all major assessments, as I did in the spring, with later grades replacing earlier grades. I still need to figure out what revision looks like for my “deconstructed exam questions”, though.

Struggle: Not grading homework. While specifications grading allows forms of preparing for class to count towards a bundle, grading for equity adamantly opposes the idea — even just giving points for students completing the homework before class, regardless of correctness. (Which has been my policy for years.) Again, the argument is compelling — homework is formative, not summative; grading homework for correctness penalizes making mistakes in the learning process; there are many good reasons students can’t complete homework outside of class. But I still want students to take preparing for class seriously, so that once we get into class we can be an effective learning community, ready to engage with ideas. Feldman indicates late in the book that keeping track of homework submission can be valuable in pointing out effective and less effective learning strategies to students, and that awarding a small percentage of the overall grade to homework is not horrible. So I think I will compromise and continue to grade homework for completion only, but reduce the percentage it’s worth (maybe from 10% to 5%?).

Struggle: Bundles and modules and objectives, oh my! I’ll admit that I had to put specifications grading down for a bit and come back to it later once it got into specific examples of bundles. Ditto when I tried to wrap my head around deconstructing my usual assessments around course learning outcomes, as grading for equity describes. I got stuck on how I’d translate this to my elective. On reflection, it will take a lot of up-front work, but the increased transparency will be worth it. I plan to use the “higher hurdles” approach from specifications grading, measuring hurdle height with the four-point scale from grading for equity. I’m still not sure I can achieve complete separation of learning objectives in the grade book when an assessment covers several of them, so I may have to let that go for now and try separating them out more cleanly in a future term.

There are other struggles, of course — grading for equity has me puzzling over my approach to grading group work, for instance — but these are the key ones on my mind as I piece my course together. I’m eager to continue my experiments with assessment and curious to apply what I’ve learned from my spring experiences and from these two books.

Course design for resilience

Last week, I participated in two simultaneous online workshops around the same topic: resilient course design. One workshop was part of an ongoing series of online workshops around rethinking course design put on by the Associated Colleges of the Midwest (ACM, of which Carleton is a member). The other was a Carleton-specific “design challenge” sponsored by our Perlman Learning and Teaching Center (LTC).

The ACM workshop followed the same format as the others in the series: a Monday webinar, with content presentation and a bit of small-group discussion in breakout rooms, and smaller Friday discussion groups around a more specific subtopic. For instance, last week’s discussion groups focused on lecture courses, discussion courses, lab courses, research seminars, and arts/performance courses. (I participated in the lecture group since that seemed to be the closest fit. It turns out most people in the group favored a similar lecture/activity split to mine, so it was indeed a good fit.) Sometimes there is homework assigned Monday for the Friday discussions, as there was last week: we designed a typical week in our course, paying attention to a set of guiding questions about student participation in various modes.

The LTC challenge included participation in the ACM workshop, or at least viewing the Monday webinar/recording, and asked us to do the same homework as in the ACM workshop. In addition, the challenge included discussion forum postings (some in the larger group, many in smaller assigned teams), a couple of synchronous discussions, an entire day of drop-in sessions with various staff and faculty on specific aspects of course design (Moodle, Panopto, thinking through learning goals and activities, etc.), and a final reflection. The challenge setup mimicked a mini-course setup, allowing us to experience aspects of an online course from a student’s perspective.

I took a LOT away from this experience, but I want to highlight a few areas in particular: rethinking engagement; weekly structure and flow; and the student experience.

(Note: The worksheets in the images in this post all come from the two challenges, and were used across both challenges.)

Rethinking engagement

One of our first activities had us remap our “typical”, in-person course activities to activities more amenable to multiple modes of participation — fully or partially online, across time zones, taking into account student illness/quarantine/family circumstances, etc. My matrix, for the elective I’m teaching this fall, is pictured below. Entries in purple indicate what I traditionally do in this course; green entries show changes for Fall Term. The entries with purple text in green boxes indicate things I did pre-Fall 2020 that I plan to continue in the fall.

CS 331 course matrix, mapping in-person activities to more online-friendly activities
My course resilience matrix. Which is not resilient from an accessibility standpoint, as it uses color to convey meaning. Ack!

The x-axis moves (left to right) from content delivery to content application/practice; the y-axis moves (top to bottom) from face-to-face engagement to online engagement. Thus, the matrix gives us the opportunity to think through where course activities fall on each of these continuums. The red box includes activities that must be completed in person, the yellow box indicates online synchronous activities, and the white space at the bottom indicates asynchronous online activities.

Since Carleton students don’t register until August, I literally don’t know where in the world my students will be this fall. In designing my matrix, I assumed that students occupied a wide range of time zones, thus the prevalence of activities in the asynchronous zone. For the team activities, I plan to group students roughly by time zone and preferred time of day to work, as I did in the spring. This should help a bit with the time zone issues.

I still have a couple of thorny problems to work through. I’m still not sure how to replace the in-class, physical simulations of network phenomena (routing, protocol specifications, access control, etc.) that I rely on heavily in this course, although I now have some ideas to pursue. And I’m still playing around with course projects, so I can’t decide on the development platform until I finalize the projects. Otherwise, I found it easier than I expected to map my activities to their online counterparts.

Structure and flow

Later in the week, we thought through “a week in the life” of our course, using two different formats: a week-at-a-glance, and a more detailed accounting of the work itself.

Here’s my week-at-a-glance:

Table showing the "flow" of a week, in terms of formal and informal course activities each day.
Overview of a week in the course, showing the modes of engagement and what’s happening each day.

To be honest, I was worried about how well I’d be able to complete this set of activities, since I’m still trying to revise the learning goals for the course. I found instead that these exercises really clarified my thinking about the course as a whole. Specifically, it helped me think through how to spend our scheduled class time — and figuring that out helped other pieces, like asynchronous work, fall into place. In particular, I’m thinking of Wednesdays as the main days for synchronous engagement, with Fridays reserved for drilling down a bit more on applications of the content and Mondays for open-ended Q&A, on either the previous week’s content or the current week’s content.

The second part of this assignment asked us to define the set of activities in a particular week. I picked a random week in the middle of the course and came up with this plan:

A week's worth of course activities for CS 331, including prep, assignments, and support.
Course activities for a week in the middle of the term, including time estimates. Which may not be accurate.

(Of course, after I completed this worksheet, we moved the scheduled course time, so now I have to revamp the due dates. Readings will now be due the night before class, instead of the day of class.)

This was perhaps the most eye-opening activity of the week. It’s one thing to say “yeah, I’ll have them watch some videos and take a reading quiz and maybe do a worksheet or two” and another to sit down and figure out how much time everything will take and why we want students to do these things in the first place. I settled on a rough pattern of preparing for class with targeted readings, reading quizzes to ensure comprehension, and mini-lecture videos. (Since the challenge, I’ve rethought this a bit — I may give students more flexibility in allowing them to read and watch content videos before attempting the reading quizzes.) Class and class-adjacent activities include engaging with the content formally (Wednesdays) and informally (Fridays), a team asynchronous activity (which might be the same one we tackle in class, expanding on the in-class work), and project work. I still need to figure out the appropriate mix here.

In our small and larger group challenge discussions, we agreed that students may find these charts useful, too. I’m thinking of ways I can incorporate these views (a week in the life and the detailed accounting) into my course Moodle page. (This was a question I’d hoped to ask during the Moodle drop-in hours, but I had to leave before I could ask my question. More on this below.)

The student experience

Experiencing the design challenge as a student made me more sympathetic to the student experience, and I’m rethinking aspects of my course design as a result.

The first challenge: figuring out the challenge structure. What was happening each day? How do I submit my homework? What is the homework for today? How do I post/respond to just my small group? What is my small group? Why is this activity not marked complete if my teammate handed it in? Where’s the Zoom link for today’s discussion? If I, a seasoned educator and self-proclaimed Moodle power user, had trouble figuring some of these things out, then surely some subset of our students will, too! Lesson learned: I need to be even clearer than I think I need to be, when conveying the hows and whats to my students.

The second challenge: getting help! “Yay, drop-in hours!” I thought, as I skimmed the schedule at the start of the week. Come Thursday morning, I found myself in an internal dialog, which I tried to capture in this Twitter thread:

Today I found myself waffling over whether to attend virtual drop-in hours on fall term course design. Was my question “worthy” of stopping by to ask it? When should I show up — at the start, towards the middle, at the end? Am I wasting everyone’s time? 1/2

And I realized OH MY GOD my students were likely having the SAME conversations about attending virtual office hours this spring! Now, the low attendance makes complete sense — and I need to think how to make attending office hours less scary/fraught.

Originally tweeted by Dr. Amy Csizmar Dalal (@drcsiz) on July 16, 2020.

I finally got over myself and hopped onto my first drop-in session, and had a lovely conversation with our outgoing LTC director on in-class simulations in an online environment. And commiserated with the faculty member who jumped on as we were finishing up, who, it turns out, conducted a similar internal dialog before joining the call. I need to make seeking help, and participating in office hours, less scary and more natural this fall.

Emboldened by my new-found confidence, I jumped onto a second drop-in session, on Moodle. There were already several people on the call, asking questions about assessment. I listened in and learned a few things that I made a note to try. But I had to jump off of the call to head to another meeting before the facilitator could answer my question on reproducing the spirit of the weekly plan (discussed in the previous section) on my course Moodle page. And it was not clear how I could seek out help on my question after the drop-in hours and/or after the challenge. I need to think through how to accommodate multiple student questions during drop-in hours, and how to direct students to seek help outside of these hours too.

Concluding thoughts

The challenge might be over, but planning for resilience continues. I find myself thinking through the intersection of resilient design with things like anti-racist pedagogy, time management (my students’ and my own), assessment/grading, and maintaining boundaries while providing emotional support for students. I still need to do the hard work of translating my “week in the life of the course” for each week in my course, even as I wrestle with learning goals and the like. This challenge laid a strong foundation for this continued work.

Participating in this challenge, and in the other online ACM workshops this summer, brought an unexpected benefit, too: Confidence. I feel a lot more confident in, and capable of, pulling off a strong and worthwhile online course this fall — and beyond, if it comes to that. Things I’ve learned translate directly into in-person offerings, too — the importance of clarity and structure, the value of providing choices to students to direct their own learning, the compassion of flexibility to accommodate student circumstances and acknowledge their struggles. The deep and prolonged reflection on my pedagogy is making me a more effective and more present educator.

Big questions for fall

On Friday, Carleton released their plans for Fall Term 2020, ending a long period of discussion and speculation.

The parameters of the plan are what I expected. Our calendar works in our favor here (start in mid-September, end by Thanksgiving), so I wasn’t surprised to see that hadn’t changed. I’d expected we’d move to a hybrid model, with a mix of in person and partially to fully online courses. I was pleased to see that neither students nor faculty would be required to be physically on campus if they chose to stay home/teach online. Though, of course, some faculty (in the arts, lab sciences, teaching first year seminars), and staff who support faculty, may find themselves weighing their preferences against other factors and pressures. The decision to bring back 85% of the student body to campus surprised me, as I’d read that as an upper bound, if-everything-goes-perfectly threshold. But, here we are.

So, we have answers. But the plan, as extensive as it is, still leaves many questions unanswered.

The big question left unanswered, of course, is what happens if I get sick? (The cynical part of me wants to phrase that as “what happens WHEN I get sick?”, but I’ll try to be optimistic here.) The faculty FAQ is vague on this point. The policy outlined in the link no doubt works well enough during “normal” times, when a faculty member falling seriously ill during the term is an exceptional circumstance. But in a pandemic? When it’s likely that a nontrivial number of faculty are out for an extended period, either due to quarantine, their own illness, or care for a loved one? (Or extended child care WHEN schools close down again, assuming they open at all?) We should PLAN on faculty stepping down from their courses as NORMAL this term, not hope fervently that it doesn’t happen.

As department Comps Czar this year (i.e., the person in charge of all capstone projects), I’m drafting a plan for how to step in if/when a Comps advisor falls ill or needs to step away for part or all of a term. For my own course, I’m aiming to get as much of the course as possible up and posted by Day 1, so that it’s easier for someone else to step in if need be. This is especially important since I’m teaching an elective course, meaning there are only 1-2 others in my department with the expertise to step in and take over. But it shouldn’t fall to individuals, and individual departments, to decide that making contingency plans is necessary. And, it’s important to note that this extensive advance planning happens at a cost — I can’t afford to take much time off this summer for a break, and I’ll be spending less time on my research projects.

(We have not held a department-wide conversation about “who takes over which class”, but I’m already thinking ahead to what I could take on if need be.)

Similarly, what happens if a student gets sick? The student FAQ contains some guidance about testing, contact tracing, and isolation. But what about academically? Is the Dean of Students’ office planning for mass student absences, streamlining processes for extensions and leaves, ??? I would love to know this info so that I can more effectively advise students and plan my course to be as flexible as possible. And I’m particularly thinking about my own Comps project groups — Comps is a graduation requirement, so what happens if a student can’t finish out a project? We can’t handle these as exceptional cases, because they WILL be the norm.

As a parent of a teen and a special needs tween (thoughts and prayers, please), the big unknown is how the school district’s plans will impact my family’s day-to-day. That’s right, our school district has yet to announce its plans for the fall. If they’re partially or fully online, how do we supervise their schooling while doing our own jobs? What will an online school day look like? Will all 4 of us be on Zoom at the same time? If they’re back to in-person…well, does our family want to take that exposure risk? (Particularly since cases are on the rise in my county.) Should we be looking at school alternatives this year? My head hurts just thinking about all of the planning and decision making ahead of us.

Finally: what plans are in place if (when) we need to pivot back to online-only? Do we have plans? If so, will these be shared at any point with faculty, staff, and students? Similarly, how are we planning for Winter and Spring 2021? And when will these plans be communicated?

There are, of course, a zillion smaller questions as well, enough questions to feed my insomnia for weeks. And a trillion things to do, big and small, to prepare for the term ahead, as I wait and hope for those larger questions to be answered.

What big questions haunt you about the fall….and beyond? How are you coping with the uncertainty?

Virtual spring term wrap up

With all non-senior grades due yesterday morning (senior grades were due last week), spring term is finally and officially in the books!

Given that my research students started this week, and looking ahead to my schedule for the week (so! many! meetings!), I completed and submitted all of my grades at the senior grade deadline. I’ve now had a week to regroup and reflect on the term, and figure out what lessons I’ll take away from it.

In this post I’ll talk a bit about what went (surprisingly) well, what fell flat, and what I’d do differently next time, whether we’re in person, online, or some combination of the two.

What went well

Specificity. I tend to be very specific in my assignment prompts and in my assignment rubrics. Over time I’ve recognized that this is good and inclusive pedagogical practice, but honestly it was born out of necessity — as junior faculty in a male-dominated department, it was a defense mechanism against students who questioned my pedagogy and my right to be in the classroom, and/or felt I wasn’t qualified to “appropriately” assess their work. Specificity, it turns out, really helps students focus on what’s expected of them, particularly when they’re already feeling overwhelmed in an unfamiliar learning environment.

Organization. When most of your course delivery is asynchronous, Moodle gets unwieldy quickly. Even in non-virtual terms, Moodle gets unwieldy quickly. I’ve developed a template of sorts over the years for organizing my course weeks on Moodle with judicious use of labels and the assignment module, and I modified that to fit the flow of our virtual class. I also discovered that Google Calendar, when you embed it in Moodle, provides a much easier interface for students to figure out what’s due when than the built-in Moodle calendar, so I relied on that quite heavily. Finally, I’d never used Activity Completion before, but I leaned heavily on that to both control access to material (“complete this to unlock that”) and to give the students a way to keep track of what they’d completed and what they had left to do.

Consistent, stable teams. I decided early on to make the term-long project teams “collaboration teams” for the entire term. While a few teams struggled to connect with each other, the majority connected effectively and formed mini-communities within the course, and these teams worked pretty much how I intended them to work. I suspect this “saved” a few students who might otherwise have fallen off the wagon completely. (Unfortunately, it did not save all of them.) For the most part, teams provided much-needed community within the larger course, and helped replicate some of the “table culture” that emerges in the face-to-face version of this course.

Team meetings. Boy, I wish I had done this earlier in the term! I met with each team over Zoom in the second-to-last week of the term, after peer and self evals came in and around the time the penultimate deliverable was due. I spend about half a class period doing this during in-person terms, and use it as a way to help teams figure out what to focus on for the final version of their project (are there features to jettison? are they focusing on user goals? what will get them closest to the vision they had at the start of the term?). This also provides a way for me to talk with students about team dynamics that emerged in the peer and self evaluations. I think the students were relieved to get some one-on-one focused time with me, something more personal than the weekly Zoom class meetings. And I made sure to do a quick non-course check-in with the students during this time, to focus on their well-being. This was a huge success.

Class discussions. It took me a while to figure out how to conduct these in a more authentic and engaged way. I used stable breakout rooms so that students were always paired up with the same students (their project teams), with some minor shuffling if only one student on a team showed up to class. I quickly realized that me jumping from room to room to check in did not work at all — it was clunky and immediately halted conversation. I started using Google Docs, placing all of the discussion prompts and spaces for notes there. (Sometimes I used a single document for the whole class; other times, each team had their own Google Doc.) Instead of room-hopping, I monitored the Google Doc(s), jumping in with comments in the Doc to redirect discussion or elevate a point, and popping into breakout rooms when it appeared a group was heading off-track or clearly lost. As an added bonus, the Google Docs provide a record of the discussion, so that students who could not participate in real time could still reference the notes and take-away points — and even students who participated could go back and review the take-away points. I definitely plan to use this strategy much more in the future, even in face-to-face courses! (And this might be the only effective way to conduct discussion in a socially-distanced, mask-wearing classroom.)

What fell flat

Building community. I had modest hopes at the start of the term that Slack could provide an acceptable way to build community asynchronously. I seriously underestimated the amount of work that building community online takes. Despite my best efforts (which were pretty lousy, I’ll admit!), interactions on Slack were pretty much one way between the students and me. My attempt to wrangle together a virtual project demonstration/feedback session (as several smaller combination showcase/office hours during Reading Days and Finals) failed to yield a single participant. I delve a little more deeply into the issues (and how I attempted to use badges to salvage the community) in this post on Carleton’s Learning and Teaching Center blog.

Office hours. Try as I might, I could not get students to utilize office hours on any sort of regular basis. I had both drop-in hours (no advance appointment necessary) and office hours by appointment, and both went over like lead balloons. I am not sure why students did not utilize these — fear? lack of a pre-existing relationship? (although even students I knew before this course failed to take advantage of office hours.) branding? I wonder if requiring students to make an office hours appointment with me in the first couple of weeks in the term would help fix this. At least it would remove the barrier to setting up the meeting and then showing up to the meeting.

Timing of deliverables. There was a mismatch between how I imagined each week would flow and when I actually had readings and reading activities due. Often this meant that our Wednesday synchronous class meetings hit on things that technically weren’t due until Friday. Luckily, this is easily fixable moving forward, particularly now that I have a slightly better sense of how to balance flexibility with keeping everyone roughly on the same track.

Flexibility in deadlines. Let me preface this by saying that flexibility was absolutely the right call, given all of the things students faced this spring and continue to face. For many, this flexibility allowed them to successfully complete and pass my course. Every major deliverable in the course had a de facto 48-hour no-fault extension built in. Once students realized this, however, many of them treated that 48-hour buffer as the actual deadline. This made it more difficult for my grader and me to keep up with assessments, and to manage deliverables that built upon other deliverables. When most work revolves around team deliverables, there’s also the tricky balancing act of weighing one student’s need for flexibility against the needs of the rest of the team — how does everyone navigate this minefield while meeting everyone’s individual learning goals and needs? I don’t have any great answers, but I did learn a lot this term and will use those lessons to craft better ways of handling this in the future.

Concrete takeaways and homework for the summer

I realized as I compiled my thoughts for this section that all of the takeaways revolve around one theme: providing students more autonomy and independence over their learning. Moving online forced us more into that mode, but continuing this mode offline is a good idea pedagogically anyway. So, what can I do to help students become more independent learners?

First, I want to explore the ideas of specifications and/or contract grading. I did a light version of specifications grading this past term, with my rubrics following a “does not meet expectations”/“meets expectations”/“exceeds expectations” format, but I want to expand this idea further. I also want to look into working with students on what they want to get out of a course, and how to structure the pieces of the course to help them craft that for themselves.

Second, I want to continue integrating activity completion into my courses, to help students keep better track of expectations and due dates, and to control the release of information better (“do this before you can unlock that”). Ideally, this will also help students connect the dots of how all of the pieces relate and inter-relate.

Third, I suspect we’ll be in pandemic mode for quite some time, and that things may never quite get back to “normal”. Students may continue to engage in courses in multiple ways: in person, completely virtually, or some fluid combination of the two. How accessible are my courses, really? I learned a lot this term about the barriers to learning in my own materials — unclear labs, dense readings, opaque tool documentation, etc. I want students to grapple with the course concepts, not the how-tos.

Finally, I want to empower students to seek help and form connections with me and others in the course. How can I be more explicit in how I connect students to each other, and ensure that all students feel like they belong and are valued? How can I foster more back-and-forth interactions in both face-to-face and asynchronous conversations?


While this was not at all the term I was supposed to have, the struggles, the constant adjusting and readjusting on the fly, the dealing with students’ (and my own) trauma, have all made me a more reflective, more compassionate, and more effective educator. I am eager to apply these lessons moving forward, in making my courses more inclusive and accessible to all.

Week 8: The hard stuff

We’re heading swiftly toward the end of the term: next Wednesday is the last day of classes, and June 8 is the last day of finals. While at many times this term seemed like a colossal slog, now that we’re finishing up it seems to be moving at warp speed.

At this point in a normal term, I tend to ease up a bit. I know my students are stressed and tired (heck, I’m stressed and tired), so I refrain from assigning new, heavy things. The key focus now in Software Design is on finishing their term-long website projects, which keeps them plenty busy anyway. Integration is hard, so I want to give them the space and time to grapple with those tricky integration issues and annoying well-this-worked-before-why-is-it-crashing-now? bugs.

I’ve eased up this term, too, and perhaps it’s even more important now, as we’re all worn down by the fatigue of uncertainty, the struggles of learning in community online, and too much Zoom.

There are two course activities I typically do in the last couple of weeks of the term: a code review, and project presentations. Both, it turns out, are challenging to rework in a fully virtual environment. I still haven’t figured out how to pull off the project presentation piece, to be honest, although I need to come up with something ASAP! But I did find a way to pull off the code review, and I’m eager to see how it goes.

Code review should be easy to pull off if you do it the way it’s normally done — as a way to review/test/try to break code for which you’ve submitted a pull request. This assumes access to a common repository and that you’re all working on the same codebase, which is not true here. My version of code review in Software Design resembles peer writing workshops. I divide students into “feedback groups” (usually two development teams per group), have them exchange code (usually a specific class and any helper classes necessary to understand that class), and have them review code more like you’d review writing. Groups project code onto one of the many monitors in the classroom and gather around tables to discuss it.

I considered several different ways to do this virtually and asynchronously, and weighed using various tools. In the end, I decided the costs of throwing yet another unfamiliar tool at my students outweighed any small benefits they’d derive from learning that tool. So students will use Google Docs, copying and pasting the relevant code into a Google Doc and optionally applying syntax highlighting with one of the myriad tools out there. (I suggested Code Blocks, and I just found an online highlighter that actually connects to a website, unlike the others I tried.) Students will use the commenting feature to highlight and provide feedback on specific aspects of the code, and set up Slack channels in our course workspace for longer discussions about the code under review.

This way, I can dip in and monitor the level, type, and content of feedback that teams exchange with each other. That’s actually a net benefit, because I’ll get much more information on how students review each other’s code than I normally do when I’m flitting around the room trying to listen in on each group’s feedback! (And I can provide a better post-mortem after the fact, using specific comments and examples.) In our synchronous class meeting today, before they start code review in earnest, I’ll have them work in teams to review a short snippet of code, so that they can practice the workflow and get feedback from me on how they’re reviewing code. I’m really eager to see how this goes.

The hard stuff, as I alluded to in the title of this post, leveled up this week, not just in my course, but in pretty much every aspect of my job:

  • I received unexpectedly bad news about my planned research project with students for the summer, and scrambled to work up a replacement project (after panicking, swearing, and throwing things, of course).
  • In a similar vein, I’m working with others on how to create a community of student researchers when they are all isolated from each other and remote. It’s not impossible, but it’s new to us and tricky to do well.
  • Students who struggled all term continue to struggle, meaning I’ve had some difficult conversations about what it will take to pass the course (even with our version of pass-fail grading, which is basically “pass with a C- or above”, “pass with a D”, and “fail”).
  • Discussions about next year continue, maybe not as fruitfully as I’d like, and with fewer answers and many more questions/unknowns than I’d like. It leads me to wonder whether the right conversations aren’t happening at all, or whether they are happening behind closed doors and the failure lies in communicating the results to faculty and staff. Regardless, the end result is the same — more angst, more uncertainty, more anxiety about the future.
  • I have some big tasks on my plate — making sure research students get paid this summer (no small feat when it seems like everyone’s projects keep changing), assigning Comps (capstone) groups for next year, and conducting oral exams for this year’s Comps students — tasks that take plenty of time and mental/emotional energy, both of which are in short supply lately.

The best I can do is manage my time and energy levels, do what I can when I can, and take time for self-care so that I can be present, mentally and emotionally, for all of the hard stuff on my plate. And remember that now, especially, good enough is good enough. This will let me get to the end of the term in one piece, and leave me with enough emotional and mental space once it concludes to reflect on the term and apply what I’ve learned to… whatever fall term and beyond look like.