Liveblog: CALRG reading group – ‘Some guidance on conducting and reporting qualitative studies.’

This week’s CALRG reading group discussed the paper: ‘Some guidance on conducting and reporting qualitative studies’ by Peter Twining, Rachelle S. Heller, Miguel Nussbaum, and Chin-Chung Tsai. Below is a liveblog summary of our discussion. Each bullet represents one point made in the discussion (which does not necessarily represent my own views). As always, please excuse any typos or errors as it was typed on the fly.

  • We’ve been talking in CALRG recently about issues around the quality of quantitative work, but it’s also worth thinking about the quality of our qualitative work, including how we write up and report findings. I’ve been shocked in some work by the lack of attention to how trustworthy the data analysis is. There’s a much bigger problem in determining whether qualitative work is rigorous.
  • The authors highlight that one of the issues is how to write up qualitative work, especially considering how much space it takes. Researchers often have to think about whether a journal will give them room to justify the work they’ve done in such a short amount of space.
  • The word limit is an absolute killer, as qualitative work can easily become a mammoth of a text, and few serious journals allow a high word count while still expecting rigour and high-quality work. It’s as if qualitative work cannot fit within that bracket.
  • You could argue that being succinct makes for a better paper. But if you look at the list in this paper of what is needed for proper qualitative reporting, you’re left with hardly any word count for everything else.
  • Some journals are looking for snappy, short papers, but rich, high-quality qualitative research probably doesn’t need to be in those journals.
  • One additional question is whether the editors and reviewers at top journals understand qualitative research, which is reflected in what they publish.
  • Is that also then a plea to get more qualitative reviewers signed up to these hardcore quantitative journals?
  • I read this paper and wondered if it was simply the viewpoint of the editors: a set of instructions for how to publish qualitative work in Computers & Education.
  • But this work is important for us not just as writers, but also as reviewers. If you don’t know on what grounds to review something, you will struggle to review it properly, and journals are often very vague about those grounds. Having something like this paper to refer to is a useful tool.
  • But bear in mind that you can’t do everything the paper suggests within a 7,000-word limit. So we have to think more broadly about questions like: Is the work credible? Is it valid? Do I believe it? Those seem to be the important keys.
  • One thing that interests me in this article is the idea of saturation and the point at which we stop collecting data. In previous training, one critique of that method was that saturation may occur simply because you have a biased sample.
  • In reality, data collection has to finish at some point; of course, you could go on finding something new for eternity. In the case this article describes, at some point all of the readings were saying the same things, so it becomes a matter of diminishing returns.
  • If you do your sampling properly, such as by using mixed-methods tools, you know at what point you have a representative sample of the population you are interviewing. If there is a bias in your sample, you will reach saturation without the wider perspective of those with other opinions. You have to be aware of the broader population about which you’re trying to make claims.
  • It also depends on your question, though, and what spectrum of people you need to be interviewing. On the other hand, you can have a 4* paper that analyses only one person. What’s interesting about such a paper is that it is insightful and can challenge our way of thinking. One person was enough to answer the research question in that sense. It might not have been representative, but it raised interesting questions that we need to think about.
  • Interesting connection there with how the OU follows up on their student surveys and the bias that those who are already engaged in their courses are the ones who answer the survey in the first place. We also only survey those who complete courses and not those who withdraw. We only capture part of what is happening in teaching and learning.
  • Another OU researcher is now linking this data with student characteristics to get a better understanding of who is satisfied, who is doing well, and so on. Another thing to consider is that there is no correlation between satisfaction and retention. How relevant, then, is this data to our students in the first place?
  • A critical question about this paper concerns the lens of the researcher. Good qualitative researchers would have at least two or three people look at the data and triangulate their readings, as well as being open in their reporting about their own worldview and how it might have affected the data collection.
  • Shouldn’t that be the case with quantitative research as well? Even quantitative work is subjective.
  • But with quantitative work, you can make the data and your syntax available to a wider audience, whereas you don’t see that happening with interview data.
  • More discussion is needed to understand how we reconcile these different worldviews. Shouldn’t we say there are certain standards that all research should adhere to, for instance that we have at least two coders?
  • I think that’s too prescriptive. If you are doing an ethnographic study, for instance, you are the lens. If you’re coding, then you want to make sure it is consistent and others can agree with it. You’d want to follow more of a protocol, including statistical analysis of the reliability. But qualitative work is a very broad category and that approach might not be appropriate in some contexts.
  • You also have to consider that when you interview people from other backgrounds than your own, the way people respond to you may differ. There’s often a power dynamic between researcher and participant.
  • That’s one reason why we can’t follow protocols such as everyone asking the same questions in interviews: people will of course respond in different ways, because we are all different people. Therefore, we need to be more transparent about our positions and how we interact with the world from our own backgrounds and perspectives.
  • That also applies to transcription. For instance, someone was transcribing an interview with a child. Looking at the photograph, I saw the child had a pad over her eye: what had been written up as ‘iPads’ in the transcript was actually her talking about a literal ‘eye pad.’ Understanding the context of the interviewee, and thinking about what you’re doing when transcribing, is absolutely critical. Words mean different things in different cultures, and the nuances of things like sarcasm and tone are important.
  • What you said about data sets is interesting. In terms of research data management and data archives, I wonder how many archives actually hold large amounts of qualitative data. I suspect not many.
  • Well, most of our data is video, and sharing videos of kids isn’t going to happen. The same goes for interview audio. The amount of editing you’d have to do to make it unidentifiable and sanitised, beyond anonymising, would be really time-consuming.
  • That also raises questions around journals that require an open data set. When that isn’t possible for privacy reasons, it places limits on where you can publish qualitative work.
  • The counter-argument could be that you share the transcripts rather than the video, or at least make a representative portion available so you can be transparent about how you analysed and coded the data.
  • What I was interested in was the part of the article that discussed mixed methods, because the paper comes down rather strongly on mixed methods…
  • The criticism was of mixing methodologies, not methods.
  • I liked that part, though, as it’s important to have your methodology and philosophical stance solid before you start working on a research project, as they can be at odds with one another.
  • Well, rephrasing then: I can see that within one study you want one solid methodology, but consider, for instance, a European project with six work packages and intersections with other projects. At what stage can two methodologies meet? Or are they always separate?
  • I think it’s about what the data is telling you and whether it gives you something worth exploring later. You can’t choose your worldview: I can’t change the fact that I see things through a particular lens, based on my underlying research ideologies. I can still look at work from different methodologies, interpret it from my perspective, and see how it can inform my own work, but I will always view it through my particular lens.
  • I have one criticism of the paper’s division between numeric and non-numeric data: it misses the notion of numeric estimation (i.e. a solid number that is interpreted as an estimate). After all, you can use numeric information in a way that’s more subjective than hard fact.
  • I would come back and say that all numbers are subjective, not just those functioning as estimates.
  • When you mention researchers’ worldviews being different: would you be able to come to the same conclusion as someone coming from a quantitative perspective?
  • No, because quantitative researchers might think their work is generalizable to a larger population, but I feel that context is so important. We can’t say that what happens with these 3,000 students will happen similarly with another 3,000 students in an entirely different context and set of circumstances.
  • One of the largest criticisms I have of qualitative work is that every qualitative researcher thinks it’s fine to create their own methods of collecting and analysing data, and no one tries to replicate these instruments and adapt them locally. Qualitative researchers always seem to suffer from a ‘not invented here’ syndrome, which means we will never be able to compare work between different contexts the way we can with quantitative work. To increase the quality of qualitative research, why not decide to stick to well-established instruments?
  • As a counterpoint, one area where I see this issue come up is in emotional reporting. Saying the exact same word doesn’t mean the exact same thing to different people: I might say I’m angry, but that means something different in someone else’s interpretation. So you need to be really sensitive to the fact that the same measures don’t measure the same things.
  • The reality of qualitative research is that you can go in with the same general guidelines, but in the field things get buggered up and you end up with a mishmash of data and methods, because the world is just a messy place. Even when you intend to use the same methodology, in practice it just won’t work.
  • Sure, you can always say that there are subtle differences between local environments. But why not make sure that, from the materials section onwards, we all follow some kind of well-established guidelines and instruments and then adjust them to the local context?
  • Do you mean why aren’t researchers asking the same interview questions? Because then your research questions would have to be the same. That works for comparison studies, but when the research and the context are vastly different, the questions from previous work wouldn’t apply to the new study.
  • Every researcher thinks the questions they’re raising are unique, but there are always links between work. The way you analyse the data, use particular methods, code or don’t code – these are all techniques that can follow established protocols. The more clearly you can state what method you’ve followed, the more rigorous your work becomes.
  • Isn’t that what grounded theory started as? But it all went in different directions.
  • There’s something to be said about building on what other people have done, and if you can make those connections, then your work will be more impactful and credible. But questions have to fit your context.
  • I think the theoretical perspective of the research changes how applicable other tools are to your circumstances.
  • At the same time, tools very often aren’t shared in qualitative research. What I do now is make an effort to publish things like our interview schedule and interview questions, even if just on a blog, exactly for that reason: to allow researchers to make these comparisons more easily.
  • Sometimes existing tools are inadequate for what you want to find. You often have to draw from multiple tools and find your own way a bit by being flexible with them.
  • Ultimately, what matters is whether what you’re doing contributes new knowledge. If you’re truly doing new and original research, then no one has done what you’ve done before anyway. Unless you’re building on previous related projects, it’s hard to reuse these kinds of tools.
  • But the drawback is that if you don’t draw on recent work, reviewers will not understand where it comes from. You can go wild and do something no one has ever done, but someone eventually has to review your work and see that it adds something to an existing field.
  • You also run the danger, when leveraging existing tools, of proving exactly what you’ve already decided to prove. If you don’t think about problems in novel ways, it’s easy to bias your results by simply confirming what you’re looking for.
  • Yes, but innovation is risky in that people won’t be able to understand where the idea comes from and how it builds on the field, so using existing protocols and methods can help build that bridge.
  • It’s not just about getting it published, though. It’s also about getting the findings out to the general public and making a difference in people’s thinking. A big problem is how members of the public judge what is credible; we increasingly see that people don’t have the tools to make that judgement.
  • Are the ideas in this paper something anyone has been struggling with in their current work?
  • Sure, I’ve done 28 interviews and am now analysing them, but we were discussing that we might need to give the coding to someone else to see if they reach the same conclusions. The problem is that I stated in my ethics application that no one besides my supervisors would see the data, so if I give it to anyone, it has to be my supervisors. Another question is: how many interviews do I need to give to someone else to determine that the coding is reliable?
  • I sometimes get stuck on how to report rigorously. Do I say ‘some people said this’ or ’20 out of 25 said this’?
  • If you’re going down a coding route, I would say ’20 said this and 5 said the opposite.’ You’ve gone down a route of counting because that’s what coding lends itself to, and that’s what makes it credible. Another option is telling stories about each individual rather than counting the comments.
  • But by counting, the work becomes a bit quantitative
  • But that’s exactly what coding lends itself to: counting
  • Some things in your data are going to be quite black and white, but there will also be many small subtleties and grey areas. The more you can ensure that other people agree with your definitions, the more you can rely on your own interpretation of the data.
  • I would argue that three people should always look at your codes, and that you should calculate an inter-rater reliability statistic such as Cohen’s kappa (a minimal sketch of that calculation follows the liveblog). Even if you’re telling stories, someone should review and confirm, or even the interviewees themselves should confirm, that your interpretation is right.
  • It’s about interrogating whether what you see in the data is actually happening. You need to actively challenge what you find, and do so overtly, so that you can write about the process in your papers.
  • It’s an endless process, I feel. I transcribed my own data, for instance, and when I went back to the recording I found that the way I had written up the comments didn’t match the sentiment of the interviewee.
  • That’s a good point – after all, transcription is a form of analysis in itself
  • I recommend always doing transcription yourself. Until you’ve done it, you don’t understand how the process can change and shape the data.
  • I felt it was important to transcribe my own data because then I knew I had got the sentiments right. Something as small as a short laugh can completely change the meaning of what someone has said.
  • So what do you do when you’ve already paid someone to do the transcription? How can you make sure that what they’ve done is correct?
  • You should listen to some of the interviews and compare them with the transcription. Also, if something in the transcription looks interesting, you should go back to the recording and check whether it comes across the same way as it does on the page.

– End of reading group time –
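On the inter-rater reliability point above: Cohen’s kappa is one common way to check whether two coders agree beyond what chance would produce. Below is a minimal sketch in Python; the codes (‘engaged’, ‘frustrated’, ‘neutral’) and the data are entirely hypothetical, purely to illustrate the calculation.

from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters coding the same items."""
    n = len(coder_a)
    # Observed agreement: proportion of items where the two coders match.
    p_obs = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    p_exp = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical codes assigned by two coders to ten interview segments.
a = ["engaged", "engaged", "frustrated", "neutral", "engaged",
     "frustrated", "neutral", "engaged", "engaged", "neutral"]
b = ["engaged", "neutral", "frustrated", "neutral", "engaged",
     "frustrated", "neutral", "engaged", "frustrated", "neutral"]

print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.70: raw agreement 80%, chance 33%

Kappa corrects raw percentage agreement (80% in this made-up example) for the agreement the coders would reach by chance given how often each uses each code. Values above roughly 0.6 are conventionally read as substantial agreement, though the thresholds are debated.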
