Hard to believe, but it’s the end of Week 7 of the summer in my lab. We’ve been very busy recently: earlier this week my students ran some experiments (which they designed completely by themselves!) with human subjects, and they are now busily analyzing the data. They’re giving a talk, all by themselves, next week. And I’ve had them start writing up their results, with a conference paper as the eventual end goal. So things move ever onward.
Brigindo at Dirt and Rocks has a really interesting post from earlier this week about helping (doctoral) students “find the story” in their research: learning how to frame their data, how to tell the right story, and above all how to start asking better questions. This post really resonated with me. My students are now knowledgeable enough to know what they are doing and why, but they are still learning how to ask the right questions.
Here’s an example. We have a testbed network set up on which we run our experiments. The network has a router running netem, software that essentially lets us emulate the Internet: we tell it how much data to lose, how long to delay data, how much data to let through, and so on. As part of the experiments we just ran, we set netem to lose certain percentages of data at different times. At the same time, at each test computer we measured the actual packet loss using a program called ping.
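(For anyone who hasn’t played with netem: a single round of the experiment boils down to two steps, one on the router and one on a test machine. The sketch below is illustrative rather than our actual scripts; the interface name, target address, and probe count are placeholders.)

```python
# Sketch of one experiment round: apply a netem loss rate on the router,
# then measure the observed loss with ping from a test machine.
# Interface, target address, and counts are placeholders, not our real setup.
import re
import subprocess

def set_netem_loss(interface: str, loss_pct: float) -> None:
    """On the router: tell netem to drop roughly loss_pct% of packets."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", interface, "root",
         "netem", "loss", f"{loss_pct}%"],
        check=True,
    )

def measure_loss_with_ping(target: str, count: int = 200) -> float:
    """On a test machine: ping the target and parse the reported loss (%)."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-i", "0.2", target],
        capture_output=True, text=True,
    )
    match = re.search(r"([\d.]+)% packet loss", result.stdout)
    return float(match.group(1))

# set_netem_loss("eth0", 5.0)                 # run on the router (needs root)
# print(measure_loss_with_ping("10.0.0.2"))   # run on a test machine
```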
When we looked at the results, my students noted that the measured loss differed from the loss percentage we had configured. Their immediate reaction was to assume that either the ping data was wrong or netem was wrong.
This highlights a couple of thorny aspects of moving undergraduate researchers from “classroom thinking” to “research thinking”. First, students are used to a black-and-white view of the universe: if one answer is right, then all other answers are wrong. That, of course, is not how the universe works, and in fact research questions may have many right answers, or none at all. In this case, both netem and ping may be “right”, or both may be “wrong”, and what my students are probably seeing is an artifact of the probabilistic nature of computer networks: netem drops each packet independently with the configured probability, so the loss rate measured over a finite number of ping probes will rarely match that probability exactly.
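Before I get to the second point, here’s what I mean by a probabilistic artifact. A toy simulation (plain Python; the 5% loss rate and 200 probes are made-up numbers, not our data) shows how much the measured loss can wander even when netem is doing exactly what it was told:

```python
# Simulate what ping "sees" when each packet is dropped independently with
# probability p. The configured rate and probe count are illustrative.
import random

def simulated_measured_loss(p: float, n_probes: int) -> float:
    """Measured loss (%) over n_probes pings when the true drop probability is p."""
    lost = sum(random.random() < p for _ in range(n_probes))
    return 100.0 * lost / n_probes

random.seed(1)
p, n = 0.05, 200            # "5% loss" configured in netem, 200 ping probes
trials = [simulated_measured_loss(p, n) for _ in range(10)]
print(", ".join(f"{t:.1f}%" for t in trials))
# Typical output: values spread a couple of percentage points either side of 5%,
# even though nothing is "wrong" with either tool.
```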
Second, students are not used to context switching between the big picture (what problem are we trying to solve, and why?) and the smaller details (run this experiment, collect this data, do this analysis). Again, what are students more familiar with? Doing smaller tasks covering one skill or maybe a small handful of them. In research, you need to keep an eye on both the big and small pictures, on the details and the trends. To do so, you have to be able to take a step back from the data, or from your observations and analysis of it, and ask the right framing questions.
The latter is probably the most difficult skill for new researchers to learn (heck, even the most experienced researchers fall into that kind of tunnel vision from time to time). I end up teaching by example in this case: my students present their observations and analysis, and I ask them leading questions to get them to think more deeply and broadly about the data/results/whatever. I ask them to consider what the results could mean, to get them to explore different explanations and hypotheses. My hope is that by seeing me do this, they will eventually learn how to move beyond their initial reactions to and assumptions about the data and be willing to explore it in more depth or through a different lens.
In this case, I directly challenged their assumptions and asked them to think a bit about the possible reasons for the discrepancies. I also asked them to consider what an “acceptable” amount of error would be: if the measured and applied losses differ by only 1%, is that a deal breaker, or is it within the realm of plausibility? I could have just as easily told them “nah, this is normal” and been done with it, but I’d rather have them spend the extra time and come out with a deeper understanding of what’s happening than have them blindly trust what I tell them. Because critical thinking, and the willingness to consider multiple possible explanations, is a vital skill for any researcher, and one that requires practice, practice, practice.
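There’s also a way to make the “acceptable error” question concrete. Under the same independent-drop assumption, the measured loss rate has a standard deviation of sqrt(p(1-p)/n), which gives a quick back-of-the-envelope band for what counts as unsurprising (again, illustrative numbers rather than our experimental values):

```python
# Back-of-the-envelope "plausibility band" for a measured loss rate, assuming
# independent per-packet drops (binomial sampling). Illustrative numbers only.
from math import sqrt

def loss_band(p: float, n_probes: int, k_sigma: float = 2.0) -> tuple[float, float]:
    """Range (in %) where a measured loss rate would be unsurprising."""
    sigma = sqrt(p * (1 - p) / n_probes)
    return 100 * (p - k_sigma * sigma), 100 * (p + k_sigma * sigma)

low, high = loss_band(p=0.05, n_probes=200)
print(f"configured 5% loss over 200 probes: roughly {low:.1f}%-{high:.1f}% is plausible")
# ~1.9%-8.1%: a measurement that's 1% away from the configured value is not,
# by itself, evidence that netem or ping is broken.
```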