Liveblog: CALRG reading group – ‘Some guidance on conducting and reporting qualitative studies.’

This week’s CALRG reading group discussed the paper: ‘Some guidance on conducting and reporting qualitative studies’ by Peter Twining, Rachelle S. Heller, Miguel Nussbaum, and Chin-Chung Tsai. Below is a liveblog summary of our discussion. Each bullet represents one point made in the discussion (which does not necessarily represent my own views). As always, please excuse any typos or errors as it was typed on the fly.

  • We’ve been talking in CALRG recently about issues around the quality of quantitative work, but it’s also worth thinking about the quality of our qualitative work, including how to write and report findings. I’ve been shocked by the lack of attention in some work to how trustworthy the data analysis is. There’s a much bigger problem in determining whether qualitative work is rigorous.
  • The authors highlight that one of the issues is how to write up qualitative work, especially considering that it takes a lot of space. Researchers often have to think about whether the journal will give them enough space to justify the work they’ve been doing.
  • The word limit is an absolute killer, as qualitative work can easily become a mammoth of a text and there are few proper journals that allow for a high word count while still expecting rigour and high-quality work. It’s as if qualitative work cannot fit within that bracket.
  • You could argue that being succinct makes for a better paper. But if you look at the list in this paper of what is needed for proper qualitative reporting, then you aren’t left with enough word count for everything else.
  • Some journals are looking at snappy, short papers, but rich high-quality qualitative research probably doesn’t need to be in those journals
  • One additional question is whether the editors and reviewers at top journals understand qualitative research, which can be demonstrated in what they publish.
  • Is that also then a plea to get more qualitative reviewers signed up to these hardcore quantitative journals?
  • I read this paper and wondered if this was simply the viewpoint of the editors, a set of instructions for how to publish qualitative work in Computers & Education.
  • But this work is important for us not just as writers, but also for reviewers. If you don’t know on what grounds to review something, then you will struggle to review it properly. Journals are often very vague on the grounds upon which to review things. Something like this paper to refer to is a useful tool.
  • But bear in mind that you can’t do all that the paper is suggesting within a 7,000-word limit. So we have to think more broadly about questions like: Is the work credible? Is it valid? Do I believe it? Those seem to be the important keys.
  • One thing I’m interested in in this article is the idea of saturation and the point at which we stop collecting data. In previous trainings, one critique of that method was that saturation may occur when you have a biased sample.
  • In reality, data collection has to finish at some point and, of course, you could have gone on finding something new for eternity. In this article, all of the readings were at some point saying the same things, so it becomes about diminishing returns.
  • If you do your sampling properly, such as by using mixed methods tools, you know at what point you have a representative sample of the population you are interviewing. If you have a bias in your sample, then you will find saturation without the wider perspective of those with other opinions. You have to be aware of the broader sample within which you’re trying to make claims.
  • It also depends on your question, though, and what spectrum of people you need to be reviewing. On the other hand, you can have a 4* paper that analyses only one person. What’s interesting about this paper is that it was insightful and can challenge our way of thinking. One person was enough to answer the research question in that sense. It might not have been representative, but it raised interesting questions that we need to think about.
  • Interesting connection there with how the OU follows up on their student surveys and the bias that those who are already engaged in their courses are the ones who answer the survey in the first place. We also only survey those who complete courses and not those who withdraw. We only capture part of what is happening in teaching and learning.
  • Another OU researcher is now linking this data with student characteristics to have a better understanding of who is satisfied, who is doing well, etc. Another thing to consider is that there is no correlation between satisfaction and retention. Therefore, how relevant is this data to our students in the first place?
  • A critical question about this paper is about the lens of researchers. Good qualitative researchers would have at least 2 or 3 researchers look at the data and triangulate their views, as well as be open in their reporting about their own worldview and how this might have affected the data collection.
  • Shouldn’t that be the case with quantitative research as well? Even quantitative work is subjective.
  • But with quantitative work, you can make the data and your syntax available to a wider audience, whereas you don’t see that happening with interview data.
  • More discussion is needed to understand how we contrast these world lenses. Shouldn’t we say there are certain standards that all research should adhere to, for instance that we have at least 2 coders?
  • I think that’s too prescriptive. If you are doing an ethnographic study, for instance, you are the lens. If you’re coding, then you want to make sure it is consistent and others can agree with it. You’d want to follow more of a protocol, including statistical analysis of the reliability. But qualitative work is a very broad category and that approach might not be appropriate in some contexts.
  • You also have to consider that when you interview people from other backgrounds than your own, the way people respond to you may differ. There’s often a power dynamic between researcher and participant.
  • That’s one reason why we can’t follow protocols such as all asking the same question in interviews, because of course people will respond in different ways because we are all different people. Therefore, we need to be more transparent about our positions and how we interact with the world from our own backgrounds and perspectives.
  • That also applies to transcriptions. For instance, someone was transcribing an interview with a child. I looked at the photograph and she had a pad over her eye: the transcript said ‘iPads,’ but she was actually discussing a literal ‘eye pad.’ Understanding the context of the interviewee, then, and thinking about what you’re doing when transcribing is absolutely critical. Words mean different things in different cultures and the nuances of things like sarcasm and tone are important.
  • What you said about data sets is interesting. In terms of research data management and data archives, I wonder how many data archives actually hold large amounts of qualitative data. I suspect not many.
  • Well, most of our data is video. Sharing videos of kids isn’t going to happen. Or audio with interviews. The amount of editing you’d have to do to make it unidentifiable and sanitised beyond anonymizing would be really time-consuming.
  • That also raises questions around how some journals require an open data set. When that isn’t possible with your data for privacy reasons, it effectively restricts where you can publish qualitative research.
  • The counter argument could be that you could share the transcripts rather than the video, or at least make available a representative portion so you can be transparent with how you analysed and coded the data.
  • What I was interested in was the part of the article that discussed mixed methods, because the paper comes down rather strongly on mixed methods…
  • The criticism was on mixing methodologies, not methods
  • I liked that part, though, as it’s important to have your methodology and philosophical stances solid before you start working on a research project, as they can be at odds with one another
  • Well, rephrasing then: I can see that within one study you want one solid methodology, but consider, for instance, a European project with six work packages and intersections with other projects. At which stage can two methodologies meet? Or are they always separate?
  • I think it’s about what the data is telling you and whether it gives something worth exploring later. You can’t choose your worldview. I can’t change the fact that I see it through a particular lens, based on my underlying research ideologies. That’s not to say I can’t look at other work, interpret it from my perspective, and see how it is useful and how it can inform my work. It’s not that work from different methodologies isn’t useful, but I will always look at it through my particular lens.
  • I have one criticism of the paper’s division between numeric and non-numeric data: it misses the notion of numeric estimation (i.e. a solid number that is interpreted as an estimate). After all, you can use numeric information in a way that’s more subjective than hard facts.
  • I would come back and say that all numbers are subjective, not just those functioning as estimates
  • When you mention researchers’ worldviews being different, would you be able to come to the same conclusion as someone who’s coming from a quantitative perspective?
  • No, because quantitative researchers might think their work is generalizable to a larger population, but I feel that context is so important. We can’t say what happens with these 3,000 students will happen similarly with another 3,000 students in an entirely different context and set of circumstances.
  • One of the largest criticisms I have of qualitative work is that every qualitative researcher thinks it’s fine to create their own methods of collecting and analysing data, and no one is trying to replicate these instruments and adapt them locally. Qualitative researchers always seem to suffer from a ‘not invented here’ syndrome, and that means we will never be able to compare work between different contexts in the way we can with quantitative work. To increase the quality of qualitative research, why not decide to stick to well-established instruments?
  • As a counterpoint, one area where I see this issue come up is in emotional reporting. For example, saying the exact same word doesn’t mean the exact same thing to different people. I might say I’m angry, but that means something different in someone else’s interpretation. So you need to be really sensitive to the fact that the same measures don’t measure the same things.
  • The reality in qualitative research is that you can go in with the same general guidelines, but in the field it can get buggered up and you end up with a mishmash of data and methods because the world is just a messy place. Even when you have the intention of using the same methodology, the reality is that it just won’t work practically.
  • Sure, you can always say that there are subtle differences in different local environments. But why not make sure that from the materials section onwards, we all follow some kind of well-established guidelines and instruments and then adjust it to the local context?
  • Do you mean why aren’t researchers asking the same interview questions? Because then your research questions would have to be the same. That works for comparison studies, but when the research and the context are vastly different, then the questions from previous work wouldn’t apply to this study.
  • Every researcher thinks that the questions they’re raising are unique, but there are always links between work. The way you analyse the data, use particular methods, code or not code – these are all techniques that can be established protocols. The more you can make clear what method you’ve followed, the more rigorous your work becomes.
  • Isn’t that what grounded theory started as? But it all went in different directions.
  • There’s something to be said about building on what other people have done, and if you can make those connections, then your work will be more impactful and credible. But questions have to fit your context.
  • I think the theoretical perspective of the research changes how applicable other tools are to your circumstances.
  • At the same time, very often the tools aren’t shared in qualitative research. What I do now is make an effort to publish things like our interview schedule and interview questions, even if just on a blog, exactly for that reason: to allow researchers to make these comparisons more easily.
  • Sometimes existing tools are inadequate for what you want to find. You often have to draw from multiple tools and find your way a bit by being flexible with current tools
  • Ultimately, it’s about whether what you’re doing contributes new knowledge. If you’re truly doing new and original research, then no one has done what you’ve done before anyway. Unless you’re building on previous related projects, it’s hard to reuse these kinds of tools.
  • But the drawback is that if you don’t draw on recent work, then reviewers will not understand where it comes from. You can go wild and do something no one has ever done, but someone eventually has to review your work and demonstrate that it adds something to an existing field.
  • You also run the danger, when leveraging existing tools, of proving exactly what you’ve decided to prove. If you don’t think of problems in novel ways, then it’s easy to bias your results by just confirming what you’re looking for.
  • Yes, but innovation is risky in that people won’t be able to understand where the idea comes from and how it builds on the field, so using existing protocols and methods can help build that bridge
  • It’s not just about getting it published, though. It’s also about getting the findings out there to the general public and making a difference in people’s thinking. A big problem is how people and members of the public judge what is credible. We increasingly see that people don’t have the tools to make that judgment.
  • Are the ideas in this paper something anyone has been struggling with in their current work?
  • Sure, I’ve done 28 interviews and am now analysing, but we were discussing that we might need to give the coding to someone else to see if they will reach the same conclusions. The problem is that I said in ethics that no one else besides my supervisors will see the data. When I give it to someone else, it has to be my supervisors. Other questions are things like: how many do I need to give to someone else to determine it is reliable?
  • I’m just stuck sometimes on how to report rigorously. Do I say ‘some people said this’ or ’20 out of 25 said this’?
  • If you’re going down a coding route, I would say 20 said this and 5 said the opposite. You’ve come down a route of counting because that’s what coding lends itself to. That’s what makes it credible. Another option is telling stories about each individual rather than counting the comments.
  • But by counting, the work becomes a bit quantitative
  • But that’s exactly what coding lends itself to: counting
  • Some things in your data are going to be quite black and white, but there are also going to be many small subtleties and grey areas. The more you can ensure that other people agree with your definitions, the more you can rely on your own interpretation of the data.
  • I would argue 3 people should always look at your codes and calculate inter-rater reliability statistics such as Cohen’s kappa. Even if you’re telling stories, someone should review and confirm, or even the interviewees themselves should confirm that what you are interpreting is right.
  • It’s about interrogating whether what you view in the data is actually happening. You need to be actively challenging what you find, and doing so overtly so that you can write about this process in your papers.
  • It’s an endless process, I feel. I transcribed my own data, for instance, and when I went back to the recording I found that the way I had written the comments didn’t match the sentiment of the interviewee.
  • That’s a good point – after all, transcription is a form of analysis in itself
  • I recommend always doing transcription yourself. Until you do transcription, you don’t understand how the process can change and shape the data.
  • I feel it was important for me to transcribe my own data because then I knew that I got the sentiments right. Something as small as a short laugh can completely change the meaning of what someone has said.
  • So what do you do when you’ve already paid someone to do the transcription? How can you make sure that what they’ve done is correct?
  • You should listen to some of the interviews and compare them with the transcription. Also, if something is interesting in the transcription, you should go back to the recording and check whether it comes across the way it reads.
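A note on the inter-rater reliability raised in the discussion above: one standard statistic for quantifying agreement between two coders is Cohen’s kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch in Python, using made-up labels purely for illustration:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of items the two coders labelled identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: computed from each coder's marginal label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical labels from two coders over ten interview excerpts
a = ["pos", "pos", "neg", "neg", "pos", "neu", "neg", "pos", "neu", "pos"]
b = ["pos", "neg", "neg", "neg", "pos", "neu", "pos", "pos", "neu", "pos"]
print(round(cohens_kappa(a, b), 3))  # → 0.677
```

Values near 1 indicate strong agreement beyond chance. For more than two coders, or for partial credit between similar codes, Fleiss’ kappa or Krippendorff’s alpha are common alternatives.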

– End of reading group time –


Should we boycott American conferences?

In response to the recent travel bans imposed by Trump’s executive order, a common question I’ve seen academics grappling with is whether those outside the US who have the privilege of free travel should boycott American conferences. As one of the few Americans in my department in the UK, this is, of course, a rather awkward conversation to join. After all, boycotting travel to America isn’t possible for me, considering many of my friends and family members live there (although I do note my privilege in present circumstances that my passport allows me to visit freely). But what about my non-American peers? If they have the agency to decide whether they will attend American conferences, should they opt not to?

I’ll preface this argument by noting that I have been absolutely appalled by these recent executive orders (along with so much transpiring in America at the moment). I’ve had the pleasure of working with immigrants and refugees from all seven of the banned countries. They’ve enriched my life and my communities, and I am proud to know them. Like many others, I am angry and I want to be vocal in opposition to these horrible developments.

It’s that very same anger that fans the flames of protest against attending American conferences. But while I agree wholeheartedly with the sentiment behind the desire to boycott travel to America, I personally feel that it is ultimately misdirected.

For many, protesting attendance at American conferences is meant to make a statement to the US government in solidarity with banned peers. Yet, it’s important to remember that the number of foreign visitors to the United States specifically for the purpose of conference attendance is minuscule compared to total yearly immigration. For citizens who have the privilege of a visa waiver, the purpose of the visit is not even recorded. The problem with boycotting as a statement to the American government, then, is that it will simply go unrecorded. There will be no report that measures the impact of researchers refusing to travel.

But suppose there were? Let’s not pretend that the Trump administration would be anything less than overjoyed at a decrease in foreign scientists visiting the country.

Instead, lack of attendance risks damage to researchers themselves and the universities that sponsor academic conferences. The university loses revenue and reputation due to poor attendance. Members of the academic community lose the opportunity to read and reflect on the work of those who boycott. Researchers (both in attendance and in boycott) lose valuable dissemination of their work, damaging impact. PhD students lose the opportunity to make important connections for their future careers. In a worst case scenario, the advancement of entire fields of research might slow.

And what of those US-based academics banned from traveling outside America? For immigrants and refugees from the seven banned countries who already reside in America, this executive order has essentially stripped them of the ability to leave the country in fear they will not be allowed to return. By choosing to relocate all major conferences outside of the US in the future, we are essentially blacklisting these valued researchers from our scientific communities.

A second common argument for boycotting is from an economic perspective. Again, it’s important to consider who is losing revenue when we boycott conferences. The most obvious are universities and professional organisations when we no longer pay conference fees, spend money at campus cafeterias, or buy postcards from campus bookstores. Airline fares are another large expense, but we can get around this by flying with non-American companies, should we feel inclined. Hotels and restaurants can be chosen ethically by supporting corporations that work with integrity or local businesses, perhaps even those owned by the immigrants and refugees with whom we stand in solidarity. Boycotting also stands to punish the many sanctuary cities that are fighting the current administration with an outlook towards immigrant rights.

Beyond these notions, I argue that academics have a moral obligation to support our peers in America through our common value in the importance of knowledge. After all, what the Trump administration fears most is the dissemination of scientific fact and rational ideas. By choosing not to participate in academic conferences, we are supporting this cycle of ignorance and politics of cynicism. We are emboldening “alternative facts.” We are choosing to curb the advancement of science when we opt not to engage with our peers. This mindset is dangerous, particularly because the moment when discriminatory policies advance is precisely when the dissemination of knowledge is most imperative.

Rather than boycotting scientific collaboration, how can we instead use our energies to lift up our peers who are unable to travel? What efforts can we make to ensure that, despite the inability to be physically present, their work and ideas can still be heard? How can we ensure that they have the opportunity to participate and absorb ideas in the absence of physical attendance? In what ways can we make their experiences more visible to American lawmakers and citizens in order to sway policy? How can we engage with the general public to spread knowledge and facts during our conference programming?

These are the areas in which academics have real agency to make tangible changes for our disenfranchised peers, our scientific communities, and the greater public good. By working to ensure that our peers who cannot travel can still contribute and participate, we can send a strong message in defiance of artificial (or in some cases, physical) walls. By engaging with the public, we step outside the fishbowl of academia to fight for the value of fact in communities who need that message most.

Ultimately the collective good of scientific advancement is too powerful and too important to blindly forego, especially in times as troubling as these. The free flow of ideas is, after all, a form of resistance and protest itself. What better act of defiance is there than to connect research communities under a collective reverence for facts? What stronger resistance can there be against the Trump administration than to stop these heinous acts from interfering with international collaboration and the spread of ideas? We can make a stronger statement by embracing our scientific communities and being outspoken, collectively, in our dissent.

Liveblog: CALRG reading group – ‘We feel, therefore we learn.’

This week’s CALRG reading group discussed the paper: ‘We feel, therefore we learn: The relevance of affective and social neuroscience to education’ by Mary Helen Immordino-Yang and Antonio Damasio.

We did something a little different for the reading group this week, by starting out with a video introduction by the author. We then split into two groups to discuss two separate prompts (you can read through those on the CALRG blog) before coming together for a full group discussion. Unfortunately, this means I can’t provide a full summary of the conversations that came from the discussions, but below is my liveblog summary of what I heard.

Each bullet represents one point made in the discussion (which does not necessarily represent my own views). As always, please excuse any typos or errors as it was written on the fly.


  • The term “mind, brain and education” comes from the debate about the ability of neuroscience to influence education. There’s a question about whether that’s a bridge too far. Is it too much of a reach to say you understand how the brain works, and apply that to education? In the “bridge too far” criticism, the argument is that what is needed is, first, a connection between the brain and cognitive psychology, and then a connection from cognitive psychology to education, in a two-step process. This paper is in volume one, edition one of this new journal. In this discussion, we’ll be having discussion groups considering those two bridges separately: (1) Instruction to cognition, and (2) Cognition to neural circuitry.

Small group discussion

Our prompt: “Sharon Griffin, Robbie Case, and Bob Siegler applied the methods of cognitive psychology to analyze the cognitive skills and knowledge children must have to succeed in learning elementary arithmetic (Griffin, Case, & Siegler, 1994). They found that the ability to do numerical comparisons – which is bigger, 5 or 7? – is one such skill. They also found that some children from low-SES homes may not acquire this skill before entering school, but with appropriate instruction, they can acquire it. Their work is but one example of a bridge that exists between cognitive psychology and instruction.”

After reading “We Feel, Therefore we Learn”, does the Instruction to Cognition bridge describe the work outlined? Would it be useful in pursuing further research based on this paper? How would you apply this concept in testing ideas presented in this paper?

  • An obvious question is whether they are talking about learning, instruction, or education in this article. They seem to use these concepts somewhat interchangeably. Where is any evidence about how to do education research or instruction?
  • The examples they provide are about how people learn in a reductive way. How does that then connect to how you instruct people?
  • Maybe that’s one argument they are trying to make, that they’ve accomplished this much so far in neuroscience, but more research is needed to connect that with what we’ve done in education
  • Cognition needs to come into play when discussing instructional design, but I don’t know if there are frameworks existing in that sense yet
  • The paper talks a lot about emotions and social embodiment (but I’m not sure she used that word), which links to a ton of sociological research that isn’t mentioned. There’s a huge number of philosophers and educationalists that think we need to think beyond the brain to the outside social world. But I have difficulties seeing how they would connect emotions and the social world.
  • It’s difficult to draw a simple line between these points – there’s a lack of clear definition of these terms in the paper.
  • The connection is there but there is little discussion about the direction. Is our brain influenced by the social world around us, or is the social world impacted by what is going on in our brains? Or both?
  • There’s also a lack of examples or thorough experiments about how this would play out in an actual classroom – how do we take an idea from neuroscience and then use that knowledge to change learning? We’re supposed to talk about the link from instruction to cognition, but we are maybe more interested in cognition to instruction in education research.
  • I think I can’t make the connection between her theory of emotion and cognition with how it can be connected with instructional design or learning materials. Does it have an effect?
  • I can certainly get on board with the idea that emotions are important to learning, but I’m not really sure what to do with it in my own practice or my own research.
  • Can we think of any examples of our own lives about emotions and learning?
  • I personally feel you have to have some element of stress in order to learn.
  • Perhaps there must be research out there already on what motivates us to learn in terms of emotion.
  • Is there an ethical implication when it comes to emotions in learning? What if we find that people learn best when they are angry or stressed? What do we do with that?
  • We already measure satisfaction of students and have found that it doesn’t necessarily link to their success. So maybe we are aware of this already on some level.
  • The paper almost ignores the things that we are already doing in education about emotions – there’s plenty of research on things like satisfaction and motivation, which are surely emotional, but they aren’t mentioned in this article. Maybe the point is that we don’t think specifically about emotion and it might be more useful to think in those explicit terms.
  • Maybe it’s connecting emotions with related concepts that we do actually spend a lot of time on in education.
  • We actually do research on things like social integration or psychological/mental health and the learning experience, so we are doing research on emotions but perhaps not calling them emotions.
  • Perhaps we are looking at this in a more shallow way rather than digging as deep as neuroscience does.
  • I’m not sure it’s shallow. We could give people a validated scale for emotions, and we don’t generally use them. But the research we do might bring out the same elements. For example, there was a project called Xdenia that we did here that looked at stress and learning.
  • It also might depend on the person and how they manage stress. I think that’s what the author was saying in the video, that we are very influenced by our social experiences in how we learn.
  • The more we can understand about neuroscience, the more we can explain these sorts of phenomena we see in the classroom.
  • If you look at the citations in this article, though, none of them come from education. So she’s making an argument that we need to think more about emotions in our research, when the work she cites isn’t thinking about education at all.
  • Learning theories are missing from this article, as are the different forms of evidence that are valid. Perhaps emotions are a link to this, and to the huge amount of research on children: how can these known stages of development impact education and how we teach?
  • It’s not clear to me how they perceive education’s role in this and it isn’t grounded in current work in our field, so it’s difficult to make that connection
  • How can we link things like the research on maths anxiety with neuroscientific research? Maybe some part of her model, the emotion or cognition, is different based on past experiences or social connections.
  • The paper would be much better if it was more balanced between the fields and used more explicit connections to examples in education
  • All fields look at education from a different perspective. Neuroscience says it’s about emotions, sociology says it’s about social relationships, etc. But in reality what we need is a combination of all of these findings in different fields to make up a more complex picture of learning
  • You’re always coming from a perspective, but what you need is the ability to step outside your own field and think about education from these different aspects. I wonder if they even talked to educators or education researchers.
  • It’s a complexity thing, isn’t it? There’s a reason why we understand young children better than adults: perhaps there’s less prior experience and variation. There are more areas to consider as people have more experiences, and things become different as you look at higher levels of education.
  • She seems to be making the argument that researchers have no understanding about emotions, but that seems too narrow. To build a bridge, you have to spend some time on the other side of it.

Whole group discussion

  • Research in education is not taking into account what is happening in the brain. Perhaps we should be testing educational theories more explicitly, which could help make results more rigorous.
  • We were grasping for examples but not really finding good examples of where this works through and how it helps us understand learning and its link to instruction. We could use a really nice example of where this would work in practice in research.
  • It’s as if she started on one end, could see there’s an education community over there, and is saying ‘I’ve gone this far and need someone coming from the other end.’
  • We have to keep in mind that this is volume one, issue one. It’s an appealing anecdote to get us interested without an evidence base, but maybe that was the point in this first issue and more work has been done since then
  • It very colourfully describes the concepts that the author wants you to connect with in an emotional way. That might be part of the volume one, issue one – a need to get people on board.
  • We saw this as a perspective of her discipline, but maybe she could have done more to build that bridge.
  • Group 1 talked about: the point when they discussed young damaged brains that had long-term effects that didn’t change over time. This was an important clue, because in many cases adults seem to recover, but that wasn’t true with children. Children’s brains might have some different attributes, which we thought were interesting. This led us to talk about distinguishing between damaged brains and autism, emotional trauma and dementia. There’s a need to understand different types of brains and their connection to learning. We questioned what this means for our own research. We talked briefly about children in care and how there were strong indications that they would not achieve as well, and have a very different type of emotional process for learning. We felt the paper didn’t cover the actual learning process well enough to start making connections to transfer of knowledge outside the classroom. We also discussed whether understanding the brain better would lead to more ability to create artificial intelligence, and the question of whether emotions are beneficial to people specifically or to thinking and learning more generally. Would an AI need to be emotional to learn? Or would it need a new process of learning?
  • The most interesting thing is where the two groups came up with the same ideas. We both talked about maths anxiety, and that’s something we could actually incorporate into our research
  • From a student experience, it might be helpful to use evidence from neuroscience to show to them that they learn better in certain circumstances.
  • We as researchers struggle to understand the role of emotions in learning, but students probably have varying degrees of insight into the role of emotions in their own learning processes. One of the things that should be pursued is helping people understand the role that emotions play in their own learning. It’s an ethical necessity.
  • Connection to learning analytics: One of the problems that some experts foresaw was that we shouldn’t give work that is too challenging or frustrating, but from an education point of view you need to encounter challenge and failure and those need to be built in.
  • There are all these concepts that we know about already in education that connect with emotions. How do we translate between the two fields, rather than starting from scratch or just the neuroscience point of view?
  • That came up in Innovating Pedagogy discussions – learning through failure
  • Connection to parenting skills – I never ask my children to try something again; I ask them to try something and see what happens. I think failure without learning from the failure is the worst. You need to build a reflection aspect into it to learn from your failures.
  • It also reminded me of Small World Theory, that you will only learn or take information from people you trust.
  • We see that play out in the news… (Note: a good ending point!)

Math-Free Intro to Statistics: Normality

A few months ago I wrote my first math-free intro to statistics, where I looked at the basic descriptive statistics of mean, median, mode and standard deviation. In this post, I’ll discuss the very scary concept of NORMALITY (in a statistical sense, not a philosophical one). As a reminder, this series of posts is meant for a theoretical understanding of statistical concepts sans math and (hopefully) without over-complicating things with technical terms. As last time, I’ll share some links at the end of the post so that you can better familiarize yourself with the more technical aspects, should you wish.

Why should I care whether my data is normal in the first place?
As you’ve probably figured out already, there are dozens upon dozens of statistical tests that you can use with your data. Each of these tests relies on a set of assumptions about the data in order to calculate correctly and reliably. One major assumption of many statistical tests is that the data is “normal.” Therefore, it is important to know whether your data is normal before moving forward and subjecting it to a bunch of statistical tests that might not be right for it. Without considering normality, you might accidentally use the wrong test or compromise the reliability of your findings.

Beyond that, looking at the normality of your data can also tell you a lot about it. Testing for normality is a good reason to start depicting your data graphically, which is one of the best ways to start exploring the trends found within it.

What does it mean to be “normal”? (I wish I knew)
We can say data is “normal” if it follows a normal distribution. When you think of data distribution, it’s best to picture it graphically (i.e. visually). A normal distribution is often called a ‘bell curve’ and can be graphically depicted like this:



How do we know this is a normal distribution? It has a few important qualities:

  1. It has a mean and median that are the same. That’s the line down the middle.
  2. It has a peak or bump in the middle, and tapers down towards the left and right.
  3. The graph is symmetric, meaning there is just as much data below the mean/median as there is above it.
  4. See those numbers on the bottom? Those are standard deviations (check out my last post for a refresher). In a normal distribution, 68% of the data fits within one standard deviation of the mean. 95% of the data fits within two standard deviations, and 99.7% of the data fits within three standard deviations of the mean.
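If you’d like to see the 68/95/99.7 rule in action, here’s an optional aside for the code-curious (a quick sketch, assuming you have Python with NumPy available — the height numbers are made up for illustration). We simulate a large normal sample and count how much of it falls within each band of standard deviations:

```python
import numpy as np

# Simulate 100,000 "heights" drawn from a normal distribution
rng = np.random.default_rng(42)
heights = rng.normal(loc=170, scale=10, size=100_000)  # mean 170 cm, SD 10 cm

mean, sd = heights.mean(), heights.std()
for k in (1, 2, 3):
    # Share of the sample within k standard deviations of the mean
    share = np.mean(np.abs(heights - mean) <= k * sd)
    print(f"within {k} SD: {share:.1%}")  # roughly 68%, 95%, 99.7%
```

The counts will never be exactly 68/95/99.7 because the sample is finite, but with 100,000 points they come very close.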

Examples of normal distributions that are often given are things like human height, blood pressure or IQ. In a perfect world and under normal, stable conditions, these data would be depicted graphically much like the normal distribution photograph above.

But, we know the world isn’t perfect and there are plenty of factors that influence data to be ‘biased’ (meaning it leans in one direction). Indeed, almost no collected data is perfectly normal. Some reasons for this could be sampling biases. In the example of human height, we can’t measure all humans on the planet and might instead choose 100 people to represent the population. However, we might have accidentally chosen 50 extremely short and 50 extremely tall people, making our data look graphically like an inverted bell curve.

Most of the time, our data is just more complicated than this idealized depiction of normality. For example, data can be influenced by environmental or cultural factors. Some data collection processes may also rely on ‘messy’ things like human emotions or reflections. I mean, the last American presidential election probably demonstrated that human behaviour does not always follow logical paths.

So when we put data into a graph, it can take on a lot of other “non-normal” shapes. Here are a few good examples of data that is NOT normal:


And this matters when it comes to analyzing it (as mentioned at the start of this post).

What kind of things tell us if data is normal?
When talking about data normality, there are two important properties: skewness and kurtosis.

1. Skewness looks at how symmetrical data is on either side of the mean. More specifically, it considers the size and length of the ‘tails’ on a graph, and whether they are symmetrical on each side, or if they stretch out longer to the left or the right (i.e. are biased). Here are some examples of data with different skewness:


We can measure skewness as ‘positive’ or ‘negative.’ The first photo depicts a negative skew, where the tail reaches out to the left and the peak is on the right. The middle picture is a normal distribution. The third photo is a positive skew, where the tail reaches out to the right and the peak is on the left.

What does this tell us? Let’s go back to the example of human height. In the first photo (the negative skew), the mean is smaller than the median: most of our sample is tall, and the tail of very short people pulls the mean down. In the second photo, the mean and median are the same, demonstrating we have a fairly normal sample of heights represented. In the photo on the right (the positive skew), the mean is larger than the median: most of our sample is short, and the tail of very tall people pulls the mean up.
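You can check this mean-versus-median relationship yourself. As an optional aside (a sketch assuming Python with NumPy and SciPy installed), here we generate a deliberately skewed sample and watch the mean get pulled toward the tail:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A positively skewed sample: lots of small values plus a long right tail
right_skewed = rng.exponential(scale=10.0, size=10_000)

print(stats.skew(right_skewed) > 0)                   # True: positive skew
print(right_skewed.mean() > np.median(right_skewed))  # True: tail pulls the mean up

# Flipping the sample flips the skew, and the mean/median relationship with it
left_skewed = -right_skewed
print(stats.skew(left_skewed) < 0)                    # True: negative skew
print(left_skewed.mean() < np.median(left_skewed))    # True: tail pulls the mean down
```

This is the same logic as the height example: the long tail drags the mean along with it, while the median stays put in the middle of the data.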

2. Kurtosis considers the “peak” and how tall or flat our graph is (i.e. ‘how big is the bump?’). It is also concerned with the tails and how long or short they are. Here are some examples:


A negative kurtosis has a flatter peak (or no peak) and short, thin tails. On the other hand, a positive kurtosis has a sharper peak with longer, heavier tails. In this regard, kurtosis tells you how much of your data sits in the peak versus out in the tails.

So what does kurtosis tell us if we think about height again? A positive kurtosis means that our dataset is more concentrated on the median, meaning we have an overwhelmingly large population of participants who are of average height in comparison to short or tall people (and is, therefore, biased). A negative kurtosis means that our sample is more ‘spread out’ with more even numbers of short, average and tall people. This is also biased, as short or tall people do not occur in nature in equal numbers to those of average height. The normal distribution in the middle would demonstrate that we have participants at various heights in good proportion: a majority in the average range, with smaller and equal numbers of short and tall outliers.
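Here’s a companion sketch for kurtosis (again optional, and again assuming NumPy and SciPy). One thing worth knowing if you try this: SciPy reports “excess” kurtosis, which is defined so that a normal distribution scores 0:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

normal_ish  = rng.normal(size=10_000)            # baseline bell curve
heavy_tails = rng.standard_t(df=5, size=10_000)  # sharper peak, heavier tails
flat_top    = rng.uniform(-1, 1, size=10_000)    # flat top, thin tails

# Excess kurtosis: ~0 for normal, positive for heavy tails, negative for flat
print(stats.kurtosis(normal_ish))   # near 0
print(stats.kurtosis(heavy_tails))  # positive
print(stats.kurtosis(flat_top))     # negative (about -1.2 for a uniform)
```

The Student’s t and uniform distributions here are just convenient stand-ins for “peaky with heavy tails” and “flat with thin tails” data.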

How do we measure normality in a statistical sense?
You can tell a lot about the normality of your data just by graphing it, but we need a more definite answer about whether data is normal in order to move forward. Luckily, there are a few statistical tests that you can do to measure this. The most commonly used is the Shapiro-Wilk test (although there are a handful of others, depending on special circumstances surrounding your data). For a few good resources on how to calculate and interpret normality in SPSS using these tests, please see the links at the bottom of this post.
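As a rough illustration of what the Shapiro-Wilk test does (optional, and a sketch assuming SciPy rather than SPSS): it returns a p-value, and its null hypothesis is that the data IS normal, so a small p-value (commonly below .05) is taken as evidence that the data is not normal:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
normal_sample = rng.normal(loc=170, scale=10, size=200)  # simulated heights
skewed_sample = rng.exponential(scale=10.0, size=200)    # clearly non-normal

# Shapiro-Wilk: null hypothesis is "the data is normal",
# so a SMALL p-value suggests the data is NOT normal
_, p_normal = stats.shapiro(normal_sample)
_, p_skewed = stats.shapiro(skewed_sample)

print(p_normal > p_skewed)  # True: the normal sample looks far more normal
print(p_skewed < 0.05)      # True: normality is rejected for the skewed sample
```

The SPSS procedures in the links below report the same statistic and p-value; this just shows the logic of reading them.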

So now that I know if my data is normal or not….what now?
If you’ve determined that your data is normal, you’re on your way to being able to use statistical tests that assume normality (EDIT: my colleague Quan made a good point to me about normality assumptions considering residuals, not the variables themselves. Here’s a website about that for now, and I’ll dive more into this in another post). Don’t jump into these tests just yet, though, as there are other assumptions you need to consider first. That’s the subject of my next post in this series, but check out this link for a quick summary in the meantime.

If your data is not normal, don’t despair. There could be a number of reasons for this with easy solutions to ‘normalize’ your data. Check out this website for a good summary of how to move forward with non-normal data.
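One of the most common of those ‘normalizing’ fixes is a simple transformation. As a hedged sketch (assuming NumPy and SciPy; the lognormal data here is made up to mimic right-skewed variables like incomes or reaction times), a log transform often tames a long right tail:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Right-skewed data, as incomes and reaction times often are
skewed = rng.lognormal(mean=3.0, sigma=0.5, size=5_000)
logged = np.log(skewed)  # log transform: compresses the long right tail

print(stats.skew(skewed) > 1)         # True: heavily skewed before
print(abs(stats.skew(logged)) < 0.2)  # True: close to symmetric after
```

Whether a log, square-root, or other transform is appropriate depends on your data and your analysis, so treat this as one option among several rather than a default move.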

Want to get more technical? Here are a few good resources about normal distributions:
Understanding normal distributions with MATH
How to calculate skew and kurtosis with MATH

Testing for normality using SPSS
Normality tests in SPSS

Thank you for reading my second post in this series. My next post will talk about parametric versus non-parametric tests (i.e. WHAT ARE THOSE? And what assumptions do different types of statistical tests rely on?). I’ll make sure to get this next one out a little faster 🙂


Liveblog: CALRG reading group – MOOCs, International Information and Education Phenomenon?

This week’s CALRG reading group discussed the paper: ‘MOOCS — International Information and Education Phenomenon?’ by Lee Wilson and Anatoliy Gruzd. Below is a liveblog summary of our discussion. Each bullet represents one point made in the discussion (which does not necessarily represent my own views). As always, please excuse any typos or errors as it was written on the fly.

  • Every time I see these articles on MOOCs and we talk about quality versus completion, it feels like a mismatch. Are we comparing apples and oranges?
  • I’ll give a counter position: If you say it’s a MOOC, then it’s a course. If you claim it’s a course, then you claim it has a beginning or an end, and therefore it is of interest if people don’t get to the end. If they don’t finish, then as a teacher you are failing.
  • The paper is from June/July 2014, and it’s a bulletin rather than a journal article. It feels very outdated in some of its claims, particularly things like the challenges MOOCs face. It talks about not a lot of MOOCs being in English, for example, but a lot of that has changed now. The field is moving so quickly. I think what we’re seeing is that MOOCs are a course for people who want it to be a course, but for others it’s about the ability to take a pick of the mix. People get what they want from it and aren’t fussed with the rest. Some want a broad overview of a subject and will want to do the whole thing, but others will engage in different ways and just want to pick and choose what is relevant to them.
  • But why do we treat it differently than any other OER resource? MOOCs are the only OER where we are obsessed with people finishing it. We don’t look at how many people watched to the end of a video or how many people read through an entire ebook, but we’re obsessed with people doing every step in a MOOC. And we only do that because we call it a “course.”
  • Then why do we bother calling it a course?
  • In that regard, I would be careful about not calling MOOC users “students,” because that’s why edX got in trouble for not having accessibility features. If we don’t call MOOC users students, then they aren’t getting the services owed to them by law or by institutions.
  • For us, we have to call them “learners” to differentiate between formal and informal learning. “Students” are formally registered for a degree program.
  • There is a difference between “learners” and “students,” and the largest difference is payment. Accessibility is an expensive service that we offer students at formal universities. MOOC learners aren’t “students,” and they don’t have access to many other services available to students in formal institutions, like libraries, IT services, student groups, etc.
  • From an accessibility perspective, though, we have to think about inclusion for all, not just those who are paying students. What if people decide to jump ship from formal universities and use MOOCs instead to gain these skills? In that scenario, learners with disabilities would lose access to essential services to advance their education.
  • You can also think about accreditation in this scenario. I agree people’s journey from formal to informal learning is important. For instance, at some point someone might be a learner on a MOOC and then become a formal student. It’s confusing to think about what services are due to them if we consider MOOCs as a pathway to accreditation.
  • We have a suite of 12 MOOCs, of which you have to take 8 and pass by FutureLearn standards, then do a formal assessment and become a formal OU student. What we’re seeing is that the engagement is still not there in those MOOCs; it’s actually lower than what it would be on a standard MOOC. There’s an argument to be made that MOOCs could be a gateway to formal education, but what they are actually doing is what open learners have been doing for years: helping people take a look and get an idea of the content, then make an informed decision about whether they want to move forward
  • There was a good piece about edX and people wanting to do MOOCs for professional development, and how people with these motivations were less engaged than people who signed up because they needed to teach the activity. It’s worth considering whether people who are already attached to formal education structures are going to be better suited to follow along with that sort of paradigm.
  • I wonder, for those who are doing it for professional development, if they are choosing to do it or if their employer is saying “do this and bring me the certificate.” If it’s just “I should know about this,” then you aren’t as committed. It really depends on what the motivation is to take the MOOC.
  • I think what the paper is talking about is related to short-term goals, but we as academics have set those goals and targets for completion. We didn’t set goals for other OERs, like how many downloads we get, but we set completion goals for MOOCs. At the same time, we didn’t think about what the learners’ goals are and how they fit in with our own goals as academics. We’ve created our own goals that we’ve fallen short on.
  • The pick and mix model undermines the value of instructional design. It leads to bubble-world mentalities, where people pay attention to what they believe to be true. Education, though, should be a little more concerned with saying there are seminal pieces of info in this area of study that you should spend some time getting familiar with.
  • But the issue with buying into that is the openness of this system. If you sign up for and pay for a degree, then we can tell you what to study and have a fair sense that you’ll stick with it. In this open environment of MOOCs, we can make the structure and then people ignore it or just go to YouTube to get their information instead. MOOC users don’t want to go through a structure. There’s an interesting tension there, because as you say, education should push people to think outside what they expect to find.
  • That’s why we have to label MOOCs a “course” – it’s a different beast. It has a structure and a narrative. I still feel there is something in completion because of that.
  • It could be a module as part of a course. When I first did my OU undergrad, I could pick and mix what courses I wanted to take. That was the beauty of it, that you didn’t need to follow a strict structure.
  • But as a student, if you have a TMA over block 2, you would choose to read block 2 rather than “I’m interested in this and this and this.” Assessment is geared towards making sure you cover every bit of the course. If you weren’t assessed, would you have bothered?
  • There’s also a sense that shorter is better, that people want a shorter MOOC rather than a longer one.
  • But we only think that because we think completion matters. People say to make it as short as possible to increase completion, but it doesn’t work. We had related MOOCs running 2, 6 and 8 weeks long. We reworked those three into four different four-week MOOCs. The completion rate is the same to within a percentage point. Why is that? They’re shorter, they’re for credit. Completion should be better, but it’s not.
  • But if it was 4 weeks versus nine months, though…
  • It’s about content at the end of the day and the learner will do what they are interested in. They will learn what they need to and get the gist of it. People take MOOCs because they just want to be able to have a better conversation with their friends, not make world peace with it. We put too much emphasis on what it can do for the learner, but we don’t consider what the learner wants out of it.
  • When you make it 4 or 8 weeks, the group of people taking it is the same. Unless you change the demographics of the population taking the MOOC, you aren’t going to see a change based on how long it is.
  • If it’s for credit, is the demographic different? That’s what we need to know.
  • Would a demographic difference be enough to change things, though? What are the support systems that learners have to complete in the first place? When you’re in a physical space you have more support of those around you. Online, it’s a very different context. You’re surrounded by people in your life who maybe don’t even know why you’d want to take a MOOC. I did a bunch of MOOCs after my master’s degree, and my wife would be like “why would you do that? Why spend your weekend on this?” And we both already value education. But in the same context, it’s about what the end game is. You need networks of support around you that acknowledge your goals or else you’re working against your context.
  • I’m working on a paper now that considers time management versus time value. Lots of MOOC papers say time is the issue with completion. Time is a fallacy, though, because if you want to finish it badly enough, then you will make the time. The real issue is that the time spent using a MOOC is not valued enough. Not that the content isn’t great or it isn’t well written or not engaging, but that your personal value puts it at a lower rate than things like spending time with your family.
  • If we say completion does matter for us, then I think you need to take an alternative model like a tv series, which encourages people to come back again and again each week. If we design MOOCs so they encourage learners to come back every week… but we don’t see a lot of that in MOOCs. There’s no narrative that unfolds that makes you really have to come back. If we really care about completion, and people don’t have to come back in the current structure, are we then thinking about redesigning them in a way that makes learners really want to come back?
  • Narratives don’t work in every context, however. I can’t write a narrative about doing your taxes. Also, not all TV series push a narrative. For instance, the people who wrote Friends, do they care if you watched every episode? No, because the episodes are stand-alone. MOOCs are more like recipe books. They are written sequentially, but with the knowledge that people come and go within them. Recipe book writers don’t care if you’ve made every single recipe.
  • There’s a tension in the MOOC system, because they were designed with the idea of personal learning networks in mind, and as a democratisation of education by getting people away from formal learning. “Learners” are feral and “students” are those we’ve already captured and are making money off. But the actual learning is important to both of those people. We’re just taking the concept of a MOOC and trying to stick it back into a formal education box, but it was never designed to work in that format to begin with.
  • Right, they didn’t care about completion at the beginning with cMOOCs. That narrative only came along after Coursera. The learning was the important emphasis before. Stephen Downes never wrote a single paper about completion.
  • We have 2 camps now: (1) those trying to use MOOCs to expand education and learning for public good, and (2) freemium models that are trying to domesticate learners into formal students at their institutions.
  • Education is a difficult endeavour and maybe a better metaphor is going to the gym – you often go a few times and then give it up. There is a perceived value in participating in education, but there are also costs involved. It isn’t a form of entertainment. It’s not bad to include entertaining elements in education, but that’s not what is going to build affinity with education. It makes learners expect a hook at the end, and makes them think a course is bad if it doesn’t dramatically rope them in.
  • It all goes back to motivation and investment. You’re more likely to go to the gym if you’ve paid for it. You can run in the open air and it doesn’t cost you anything. If you’re signed up for a formal gym, though, you’re making more of a commitment, whether that lasts or not. But it’s also about whether you know someone else who goes to the gym, and whether you can go together; then you’re a lot more committed. When students are paying £5,000 a year for university, they are given the equipment for learning, but they still have to put in the effort.
  • I have a different view. You talk about how paying a fee makes students more motivated. From an economic point of view, it’s already a sunk cost and that payment doesn’t impact your future decisions. On the gym metaphor, there was a study that compared a one-off gym membership with a pay as you go model, and the pay as you go members actually went more than the ones who paid the yearly sum.
  • It’s the same with MOOC presentations. The theory is that the more you present, the better it will be, the more learners you get and the higher the completion rate will be. But the more you make a MOOC available, the worse these things actually get. Scarcity drives up the value of the MOOC. You get an “I’ll do it on Monday” effect otherwise. We have MOOCs that run four times a year, and their completion rates drop faster than if we put them on just twice a year.
  • I worked for the police service and we used to put on courses, and we found if we put a lot of courses on at one time, people would sign up for them and never show up. If we didn’t charge for a course, people also felt they didn’t have a commitment and didn’t show. We put on fewer courses and charged a small amount for them, and the turn-up rate was better.
  • But who is showing up? Are we supporting privilege by putting up a pay wall so that only those who can afford or have disposable income can attend?
  • There’s sort of this assumption that MOOCs are a social good – is it really true that free MOOCs are benefitting those who can’t afford to pay for formal education? We know that most MOOC users are already educated and already have degrees.
  • But having a degree doesn’t mean you have money.
  • If someone is disadvantaged, they aren’t going to take a MOOC from MIT. Time is a resource that becomes more finite the poorer you get. What they need is education that directly relates to making their life better.
  • I worked on a project using Batched Open Courses. We threw away the idea of learning subjects, and said we would instead teach people to learn to learn first. They have almost double the completion rate. This was more public good, because we know people from disadvantaged backgrounds struggle with knowing how to learn. There’s no point in doing a great MOOC on curing cancer if you don’t know how to learn in the first place.
  • In formal higher education settings, you have this rigid 12 week structure. For people who don’t like this weird structure we’ve created, we can’t expect to just recreate it in a MOOC, put it into their home and then fail them there instead. There are barriers to the structure of the learning process and that’s an interesting question to consider.
  • With the BOCs, the value was higher because they needed those skills.
  • What’s the difference between BOCs and MOOCs?
  • No cohort, no social features, no weekly emails. We’ve stripped it all bare and just given them content. It’s also completely incentivised by the badge, and works on things like maths skills, English skills, or writing skills.
  • So is it the badge or the relevance of the content that motivates learners? I would bet it’s about the development.
  • I think we have to talk about the disadvantaged person, regarding availability of devices to study etc. But mostly it’s about awareness. You say MOOC to the general public and people are confused, but if you say online course, they understand. People have to be thinking to look for the MOOC in the first place. It’s also about relevance of the content. If we had a MOOC on basic English and followed that up with more advanced English, we’d get millions of people completing.
  • You think that, but the British Council did them and they….
  • But they didn’t do them very well.
  • It wasn’t what they thought it would be. From half a million participants, they had only 24,000 completing. That’s a lower percentage than what I’m getting on my MOOCs
  • But you’re also competing with English-learning apps like Babbel.
  • It’s a new generation. People will look for things on YouTube, rather than digging through a course if they needed something relevant.
  • But language learning is a social thing anyway. Plugging into a MOOC isn’t going to work.
  • Tying this back to Friends and not having to watch every episode — you still need some sort of social commitment and buy-in, because people around you are talking about it and watching it. It’s not about locking us in a room together for 12 weeks and interacting in a formal setting, but there’s an ebb and flow of people coming into a known conversation.
  • In my research, I asked people what they liked most about MOOCs, and they said articles and videos. Learners love them. Things they hate about MOOCs? Discussion, peer review, social interaction, peer assessment.
  • Maybe because they don’t want to do the whole college thing.
  • I think it’s because it’s overwhelming: you’ve got thousands of comments. I’ve found the fewer learners in a MOOC, the more comments they’ve made, and the better the comments. What’s gained if no one sees your post? There’s no decent conversation.
  • Do you know about the study groups in FutureLearn?
  • Yes, they’ve started to do study groups and it works like a first person shooter online game, where you queue up and join a group and go on. We haven’t actually run any on our MOOCs, but they have been discussing it a lot at FLAN meetings.
  • We used to do something like this on the photography course from the OU. It was a non-accredited learning course that did that dynamic grouping. Students loved that course. Because the new improved version doesn’t do those groupings, we have to redo the VLE-like grouping for the whole course. It’s a very different dynamic in the groups now. Because we work on a regional basis to cluster them, they’ve also started to do meet ups in real life.
  • Collaborative group work in schooling is probably the thing most hated by students. At an exam level, it’s scary. What we’ve built culturally into our assembly-line school system is counterproductive to MOOC learning. MOOC learners just don’t see the point of it. It’s their journey, and that’s what we’ve trained them to think.
  • Is that a failure of assessment?
  • More a failure of trust. We’ve built a system where they don’t trust collaborative learning. In a large group, you also don’t get to know anyone in the course. You don’t get comfortable with dealing with them. Why throw out a comment to thousands that will invite hecklers?
  • If you thought of it more as social media, then loads of people like that. It goes back to how similar are MOOCs to formal education versus entertainment.
  • I think it depends on the culture of the platform. For instance, students will talk about their issues with a course on the course’s Facebook group but not on the VLE because they don’t want to look stupid. We need to make people feel they can engage and not clam up. That’s the problem with a large population.
  • But was the Facebook group with no tutors?
  • Tutors could go in if they wanted to, it was available to everyone. Because of the culture of Facebook, though, they felt they could share. If you go into the VLE, it’s a classroom. There’s an educator and a facilitator there and you have to say something smart.
  • Also at play is the mindset of the students. At Khan Academy, they put a growth mindset quote at the top of the page and saw an improvement in performance. Walking into a context like this, you need a mindset that helps you engage with the circumstances.
  • But how do you do that with 10,000 people?
  • At Khan Academy it was just one quote at the top of the page that helped.
  • When I first started teaching, in my cohort of students there were some that were elderly and got a lot of fun out of doing the course and weren’t doing it for a qualification. With the change in funding, those sort of people stopped appearing. They may have moved on to MOOCs to fill that void.
  • Is there any knowledge that those types of learners have gone on into MOOCs to avoid the fee structure?
  • I did a survey to ask about motivations to participate in our MOOC, and didn’t mention anything about fees. At the bottom, I left a box open for comment and everyone said they were doing it because it was free. But at the same time, that means they don’t want to finish it because it’s free. Instead, they’re learning a bit through leisure learning and going on.
  • Have they done explicit research on this? People not completing because it was free? In my country formal education is free and most people still finish.
  • But you’re getting a degree out of it. You aren’t getting a pay raise because you did a MOOC.
  • But you’re learning something that might help you get there.
  • With MOOCs, the motivation doesn’t have to be professional. It can be social and personal
  • There’s also a cost side to formal education, not just money. It’s about the investment you put in to attend. If you go to university, even if you aren’t paying for it, you’re investing time and there’s an incentive to finish.
  • But the completion rates of 16-week MOOCs are the same as those of shorter ones. There’s no logic to it.
  • There is a logic. If it’s a long course, you may end up investing more time in it and that makes you want to finish to make that time worth something.
  • Or you might just get frustrated and drop out.
  • Shortening it reduces the commitment.
  • When the learners start to drop off is important. My assumption is that it’s after the first week or two, so it doesn’t matter how long the course is.
  • They have a click around and say “No, it’s not for me.” One big problem is that you can sign up too far in advance, and you get to the day and you aren’t feeling it anymore. People want to know about something NOW, not in three weeks’ time. Why would we expect the Google generation to do anything different?
  • I’ve signed up for lots of MOOCs in order to build my knowledge outside my knowledge area. But I’ve never completed any of them, so it makes me look like a 100% failure according to the data.
  • But do you consider yourself a failure on the courses?
  • No, but if you read my statistics, I joined and didn’t stick around. But I was happy with what I got out of it. Regardless, I’m a failure in the data.
  • Learners just see it as “I’ll get back to it.” I interviewed learners from MOOCs 2 years ago, and none of them actually finished. I asked why they failed (purposefully using negative terms like that), and they said they didn’t. They said they wanted to learn a little bit more about something they already knew. They weren’t failures in their minds.
  • That’s where we get too hung up on the data with MOOCs.
  • It’s interesting because there’s a general thread that said “MOOCs are amazing” and all that Gartner Hype Cycle, then moved on to say they’re horrible and useless. From this conversation, people seem to feel learners are getting something out of it at a leisure level, and that completion doesn’t matter. If that’s the case, then there are others, like education institutions, that will question whether MOOCs are worth it.
  • Because of their own emphasis they’ve put on them, and the beliefs they’ve developed on the purpose of MOOCs.
  • We’re generalizing the purpose of MOOCs, when it seems like there are so many different populations that use them for different reasons. MOOCs as a social good seem to perform poorly, but as leisure learning, they seem to perform great.



PhD Writing Camp: Summary of Our Experience

A few weeks ago I had the pleasure of joining our Leverhulme-funded Open World Learning programme on a writing retreat in the Derbyshire countryside. It was overall deemed by participants as an extremely productive and useful retreat, and we hope to make this a yearly event for all IET PhD students. Below is an account of how we structured the week and some common discussion themes, as well as participant evaluations and plans for moving forward next year.

Writing camp format
Altogether seven students participated, along with four supervisors. We initially met at the Open University before departing and discussed our goals for the week. The idea for the writing camp was that each person could have space to work on whatever was currently on their plate. For most of the first year students, this was their upcoming first year probation report. For second year students and supervisors, it was often working on data analysis, writing articles or designing upcoming studies. By meeting together initially, we were able to understand what each person was working on, as well as make connections between those who might be working on similar tasks. Afterwards, we left for our retreat house in Derbyshire, near the town of Matlock, for the week.

The format for the week was a morning meeting to discuss daily goals, followed by a day of free work time. Some chose to work in their rooms, while others worked in the main common area. As we were all working on different projects, this gave us the opportunity to organize our own productivity (something that seemed to work well). We also met informally with supervisors (sometimes our own and sometimes those outside our supervision team), but this was organized on a case-by-case basis between individuals. At the end of the day, we all met to discuss what we had achieved that day and vent any frustrations or stumbling blocks. This was a good chance to exchange pieces of advice, and to hear feedback from participating supervisors.

One important goal of the writing camp was community building, so we were sure to also include social events throughout the day. For example, we met for lunch each day, and organised teams to cook a sit-down dinner each night for everyone. For evening activities, we brought along board games, video games, and played badminton or tennis. We also managed to fit in walks through the surrounding countryside. Overall, the retreat was a good opportunity to build a strong community of PhD students, as well as form bridges between staff and students.

Common discussion themes

Throughout the week, our informal meetings brought up the following themes and pieces of advice:

  • The importance of taking a break and letting others read your work, and the value of receiving feedback from people outside your topic area and supervision team. After all, your writing may go from ‘A’ to ‘C’, and an outsider can more easily point out that you’ve missed ‘B’.
  •  An important thing to think about in your PhD is finding your own ways to be productive. Some people use research journals, some draw things out, etc – but no one method will work for every person. It takes experimentation to find what works for you.
  • In multi-disciplinary work, it’s important to highlight which field’s perspectives you are taking and how the other disciplines are informing your work, in order to situate your reader on your background and viewpoints.
  • It’s important to invest time in your supervisory relationship and to read your supervisors’ work in order to understand where their viewpoints come from
  • When writing about vague concepts, it is useful to explain what it is, as well as what it is not
  • In the later stages of your PhD, it’s useful to take a few days to go back to the literature to read new things, as well as reread some of the old favourites. Sometimes it is easy to get in a citation cycle, and forget the true meaning of an article.

Participant evaluations
We asked for reflective feedback from participants after returning home. Overall evaluations of the writing camp were positive. Here are a few of the things participants liked:

The writing camp helped me structure my day in a more productive way. As we had set meetings in the morning, afternoon and (informally) at coffee breaks, the times in between these meetings I worked a lot more productively than I work at home at the OU.

It was a very good experience to better meet the other students and supervisors, and especially the opportunity to receive feedback from our work.

The writing camp helped give me some much-needed renewed mental focus.

Participants also highlighted that the writing camp helped them achieve their goals:

I feel more confident in the quality of my probation report. It was nice to be able to talk to my supervisor (or other supervisors) for longer than in a normal situation.

It gave me the space to actually focus on something for an extended period of time.

I could improve my methodology and analyse the data from my interviews, and it helped me a lot to work on my probation report.

Finally, a key experience from the writing camp was the social connections made:

I loved getting to know other students and staff on a more personal level both academically and socially. I’m very much of the work-hard/play-hard philosophy, and there’s no better way to get to know people than to socialise outside of formal contexts.

I enjoyed the chance to get to know each other better, especially the supervisors on a more informal basis.

Suggested improvements
Moving forward, there were several suggestions for improving the writing camp experience, including:

  • More participation by staff members and more of a supervisor presence (this was perhaps the most common suggestion)
  • Built-in opportunities to share work and receive feedback on writing
  • More reliable wifi
  • A specific team-building activity to encourage more connections between people who may not know each other well

All in all, it was a very successful week and, hopefully, it will become a regular addition to the IET PhD student experience.

Math-Free Intro to Statistics: Descriptive Statistics

I’ve recently had some questions from more qualitative-minded researchers about resources for understanding the foundations of quantitative methods. In attempting to compile a few resources to share, I found it a bit frustrating that there were relatively few ‘theoretical,’ easy-to-understand materials for beginners in this area (read: math-free). While I feel it’s also important to understand the technical details of statistical tools, I hope that a series of ‘math-free intros’ can ease some fear and pique the interest of those considering incorporating quantitative analyses.

So let’s start with the foundation: descriptive statistics.

What are descriptive statistics?
Descriptive statistics are pretty accurately named: they describe basic features of your data. They don’t make inferences or draw conclusions. If you compare it to literature, descriptive statistics would be a simple narrative tale. In art, they would be the initial sketch or outline. The good thing about descriptive statistics is that they don’t require any fancy analytics programs — you can even do them in Excel.

When do I need descriptive statistics?
Descriptive statistics can help you take a big set of data and condense it to a more manageable form. They help identify simple trends in your data. Incorporating descriptive statistics can also help convey a lot of information in a limited amount of space. If you’re working in mixed methods or research with a qualitative focus, they can add more rigour and ‘back up’ your qualitative analysis or highlight complementing trends on a larger scale. If you’re interested in incorporating further or more advanced quantitative tools, descriptive statistics are the first step in understanding your data. They are the training wheels to your bicycle.

Alright, alright. So what are these descriptive statistics thingys?
Descriptive statistics can help you understand three concepts or trends in your data: (1) distribution, (2) central tendency, and (3) dispersion. Don’t panic yet! I promised to keep this math-free, so here we go:

1.) Distribution: the easy part
When we talk about distribution, think ‘frequencies.’ Simply put, this is when we do things like count or calculate percentages, and start using things like bar charts or pie graphs. Let’s think about classroom grades. A frequency distribution would tell you how many and what percentage of students received scores in each grade category:
50 – 59% (F):      3 (6.5%)
60 – 69% (D):      6 (13.0%)
70 – 79% (C):      11 (23.9%)
80 – 89% (B):      16 (34.8%)
90 – 100% (A):     10 (21.7%)
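As a quick sketch of how this works in practice (the score list below is invented purely for illustration), a frequency distribution like the one above can be built in a few lines of Python:

```python
from collections import Counter

# Hypothetical test scores (percentages), for illustration only
scores = [95, 82, 71, 64, 88, 55, 91, 76, 83, 68]

def grade(score):
    """Map a percentage score to a letter-grade band."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"

# Count how many scores fall into each band
counts = Counter(grade(s) for s in scores)
total = len(scores)

for band in "FDCBA":
    n = counts.get(band, 0)
    print(f"{band}: {n} ({100 * n / total:.1f}%)")
```

The same counting logic works just as well in Excel or SPSS; the point is simply tallying how often each category occurs and converting counts to percentages.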

This can be interesting and, in some cases, very relevant to your research questions. In some cases, you might want to demonstrate frequencies with micro-level details. However, oftentimes a broader, more macro perspective is needed, and frequency distributions don’t necessarily demonstrate trends in your data. They also take up a lot of space and can be information overload. Luckily, we have tools to simplify this data:

2.) Central tendency: looking more into data trends
Central tendency measures reduce information in a frequency distribution into a much more manageable, quick and dirty form. It answers: What is the most ‘typical’ value in your data? How do we sum it up quickly? You can think of central tendencies as demonstrating ‘stereotypes’ in your data in a few different ways:

  • Mean or ‘average’: You know this term from every news article ever written about research. The mean is when you add up all the values and then divide that sum by how many values you have. In many cases, the mean is sufficient to highlight the ‘typical’ value.
  • Median: The median can be thought of as the ‘middle.’ When we list out 1, 2, 3, 4, 5 — ‘3’ is the median because it’s the number in the exact middle of the values. Why would you use this instead of (or in addition to) a mean? Let’s say you have ‘outliers’ in your data (i.e. values that have some distance outside of most of the rest of your data). In this case, a median can give a more ‘fair’ typical value.
    • Example: We have three houses on a street with estimated values of £300,000, £400,000 and £2,000,000. The mean housing price is £900,000. This makes the neighborhood seem more ‘well-to-do’ than it actually is. In reality, it’s just 2 ‘normal’ houses and one really fancy one (not three pretty darn nice ones). The median, however, is only £400,000, which gives a much more accurate description of the neighborhood.
  • Mode: The mode is the number that occurs the most often. This can show you the ‘most popular’ or ‘highest frequency’ score or choice. It is possible to have multiple (or many) modes (example: when asking about what pet people own, the most popular answers — cat and dog — might have the same number of responses). You can also have no mode (example: no participants have the exact same weight).
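All three measures are available in Python’s built-in `statistics` module. Here is a minimal sketch using the house prices from the example above (the pet list is invented for illustration):

```python
import statistics

# The three house prices from the example above
prices = [300_000, 400_000, 2_000_000]

mean = statistics.mean(prices)      # pulled upward by the £2m outlier
median = statistics.median(prices)  # the 'middle' value, robust to the outlier

print(f"Mean: £{mean:,}")      # Mean: £900,000
print(f"Median: £{median:,}")  # Median: £400,000

# Mode: the most frequently occurring value (works on categories too)
pets = ["cat", "dog", "dog", "fish", "cat", "dog"]
mode = statistics.mode(pets)  # 'dog' occurs most often
```

Note how the mean and median tell two quite different stories about the same street, which is exactly the point made in the example above.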

Including one or several of these scores can very quickly highlight and condense data trends in a limited amount of space. HOWEVER –> At the same time, central tendencies don’t provide very much detail about how spread out the values are. Have no fear, there’s an easy fix for that:

3.) Dispersion: understanding ‘spread’
The same central tendency can arise in several different ways. Think back to the example of classroom grades. If the class test score mean (‘average’) is 75%, one explanation could be that everyone in the class scored a ‘C’ (70-79%) on the test. However, another explanation could be that half the class scored 50% and half the class scored 100%. As a teacher, this gives two very different perspectives on your classroom. In the first scenario, everyone is performing at a decent level. In the second scenario, half the class is failing and half the class is bored. Thankfully, we have a few tools that can demonstrate how we can ‘read into’ the story told by the central tendency.

  • Minimum and maximum: The most primitive way to show the ‘spread’ in the data is to simply state the ‘highest’ and ‘lowest’ scores recorded. In the test score example, we might see (Min = 43%, Max = 100%). This can help demonstrate how far apart observed values are. However, it doesn’t tell us if these minimums or maximums are outliers or how common the lowest and highest scores are.
  • Range: A quicker way to demonstrate this is through the range, which is the largest value minus the smallest value. If the range of test scores is small, this means everyone scored mostly the same. If the range is large, it means there are wide variations between scores. However, the range can also be misleading due to outliers. If only one student scored 100% and the rest scored 75%, this means that the range is 25, which gives an inaccurate picture of what is actually happening.
  • Standard deviation (SD): A more accurate way to describe the spread is the standard deviation. Standard deviation shows us how close or far the overall data is from the mean (average). A small standard deviation means most of the data is close to the average (i.e. the classroom where everyone got a ‘C’). A high standard deviation means there is high variation in the data (i.e. the class where half failed and half aced it). A standard deviation of zero is very rare, but means that everyone scored or responded exactly the same. Because I promised no math, I won’t go into details about how this is calculated, but I will provide some resources at the bottom of this post to point you in the right direction of understanding the mechanics.
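A sketch of the two classroom scenarios described earlier shows why these measures matter. The score lists below are invented so that both classes have a mean of exactly 75%, and `statistics.pstdev` treats each list as the whole population:

```python
import statistics

# Two hypothetical classes, both with a mean of 75%
steady_class = [74, 75, 76, 75, 74, 76]     # everyone scored around a 'C'
split_class = [50, 100, 50, 100, 50, 100]   # half failed, half aced it

for name, scores in [("steady", steady_class), ("split", split_class)]:
    print(name,
          "min:", min(scores),
          "max:", max(scores),
          "range:", max(scores) - min(scores),
          "SD:", round(statistics.pstdev(scores), 1))
```

Both classes report the same mean, but the standard deviation (under 1 for the ‘steady’ class, 25 for the ‘split’ class) immediately reveals the very different stories underneath.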


That’s a lot of stuff. Do I have to include all of that in my work?
Not necessarily. I personally find it useful to calculate all of these in my initial analysis phases, just to get a ‘feel’ for the data and familiarize myself with what I’ve collected. However, what you decide to write up, publish or disperse heavily depends on your data set and your research questions. Some of these calculations may just not make sense in your context. That’s where understanding the theoretical background and meaning of statistical tools comes in handy.

Want to get more technical? Here are a few good resources for descriptive statistics:
Descriptive statistics in presentation format
How to calculate these using MATH
Making these calculations in Excel
Making these calculations in SPSS

Thanks for reading the first part in this series! My next post will dive into normal distributions and kurtosis/skew (i.e. Does my data have ‘bias’? Is it ‘symmetrical’? And why do I care?).