Liveblog: CALRG reading group – ‘Assumptions and Limitations of TEL Research’

This week’s CALRG reading group discussed the paper: ‘Examining some assumptions and limitations of research on the effects of emerging technologies for teaching and learning in higher education’ by Adrian Kirkwood and Linda Price. Below is a liveblog summary of our discussion. Each bullet represents one point made in the discussion (which does not necessarily represent my own views). As always, please excuse any typos or errors as it was written on the fly.

  • Why paper chosen: It fits nicely with a few CALRG themes. First, how do you choose your method and your methodology? Is it right to aspire to ‘robust’ tools such as randomised controlled trials or A/B experiments? Or are there other considerations to take into account? Second, how do we define ‘technology enhanced learning?’ What do we mean by ‘enhanced?’
  • This paper was written by two former IET researchers who used to share an office. You can see the close collaboration in their papers, which is hard to accomplish between two researchers. The authors have written related papers on this topic, which are also worth reading.
  • This paper is about methodology, which is something I struggled with in the first year of my PhD, especially understanding the difference between ‘methods’ and ‘methodology.’ This paper pulls that distinction out quite well and highlights the assumptions you take on when you engage with certain methodologies. The way you set up your experiments reveals your assumptions about learning. It’s good to make this explicit in your PhD thesis and explain what you think teaching and learning involve.
  • They raise problems with certain methodologies in this paper, and I expected to see a way forward in the conclusion, but they don’t offer one. Reading this paper is a good way to open up the discussion about how we address this problem and how we approach the question of whether technology makes a difference in education.
  • This journal (BJET) has a tight page count, which has influenced what they could or couldn’t include in the article. This is something we have to think about when writing journal articles in general.
  • This paper highlights just how difficult it is to establish that ‘changes’ in behaviour have occurred due to technology.
  • In talking about assumptions, this paper makes a few assumptions of its own about TEL and educational research. The authors assume that research in this area should be about learning gains, when research questions may be addressing other factors: engagement, motivation, social influences, etc. Not all education research is explicitly about learning gains. My own research is about social elements and I struggled to relate it to their descriptions.
  • When I read this, I was thinking about Popi’s research a lot, and how she is looking at whether it is the technology influencing the learning or the teaching that is influencing the learning.
  • When I read the paper, I was thinking about my master’s dissertation and my digital instruction tool. I used technology to see whether students’ understanding improved, but I didn’t compare groups with a questionnaire. I tried to get a more in-depth understanding of their knowledge, such as by examining how they could explain concepts to others, rather than assessing knowledge only in quantitative terms. I saw small qualitative differences between those who had access to technologies such as animations and students who received the concepts through a more traditional approach.
  • There’s sometimes pressure, for example by the government or funders, to compare education research with research in areas like medicine.
  • Other findings come from teachers’ intuition, without necessarily having research backing. For instance, when the Game Boy came out with brain-training games, some schools started incorporating them first thing in the morning. They found that maths levels had gone up, but they didn’t have a control group.
  • In my previous studies, we struggled with comparing a group that used technology and a group that didn’t. Even if the technology shows better results, you don’t know what it is about the technology that led to the change. We had to study the tool in steps, adding new features one by one, to understand which aspects of the technology helped. There’s so much difference between, for instance, physical books and an online library that they aren’t very comparable.
  • Then there are challenges in the assessment of what we call ‘results.’ Do we care about the short term or the long term? Do we want quantitative assessment of knowledge (questionnaire) or a more qualitative assessment?
  • Currently, we need more research that considers the long term. Most research focuses on short-term knowledge gains.
  • It’s often difficult for researchers to consider the long term. As a PhD student, 3 years sounds like a long time, but it isn’t very long at all. By the time you’ve set up a study, you might get two cohorts, if you’re lucky. The same happens with most funded projects. You don’t really have the time or resources to come back to people to see if it’s made a difference in the long term.
  • Teachers also lose access to the technology after the study. It would be more useful to let them keep using it to see whether the benefits persist in the long term.
  • The costs of investment for schools also means that most administrators or teachers don’t bother to assess the utility of what they’ve purchased. They’ve already made the investment by buying the technologies. They don’t have the time, money or inclination to assess its worth afterwards.
  • Also, most schools don’t have metrics in place for long-term results outside of standardised testing. If you’re only looking at learning gains from a testing perspective, that isn’t a very holistic way to evaluate the effectiveness of what you’re doing.
  • And technology changes so fast: PDAs were at the forefront of technology in education 10 years ago, but the field moved on almost as soon as the papers about them came out. No one was interested in the results anymore.
  • I think that drives you to think about the core of what you’re asking. What does a PDA offer conceptually that is core to the research being conducted? What value does it offer? A smartphone offers similar value to a PDA, which means PDA research can remain relevant in the future if these notions are addressed. That’s part of the challenge in our research field: getting to the core value or benefit of what that piece of technology offers.
  • That’s a major challenge for people researching MOOCs as well. Do we assume that we will still be talking about them in 3 years? What are the core qualities that make them valuable: that they are massive? open? the networking opportunities? How do we make this research relevant to the future, when MOOCs are no longer a hot topic?
  • What about people who are using mixed methods? What assumptions are you making when you adopt this? Are you a positivist or a constructivist?
  • It’s more of a pragmatist view. You take the best of both worlds, and understand that each has its own flaws.
  • At least, you try to take the best of both worlds.
  • Connecting that ‘core value’ notion to my own research: my interest is in learner experience, in what is unique to a MOOC that makes the learner experience different from other methods, and in the role these courses are taking in the developing world. Initially I was concerned about whether MOOCs would still be around by the end of my PhD, but the idea of content being available online for free is going to remain. So the ‘open’ aspect is the key area of the technology I’m focusing on.
  • At the OER 2015 conference, they showed a graph with trend lines for both open resources and MOOCs: the open line was a more stable climb, while the MOOC line was a sudden spike. It’s worth considering: is the open education element what was so interesting and intriguing about MOOCs?
  • I think the concept of ‘open’ has been taken up by MOOCs and then abandoned, as content has now been hidden behind paywalls.
  • It depends on how you define ‘open.’ If you define it as accessible with no pre-requisites, then it is open.
  • One thing I’ve been grappling with is getting the balance between qualitative and quantitative when using mixed methods. How strong a claim can I make on one side or the other, especially if my data is skewed towards one side?
  • I think it depends on the audience. I’ve found that I focus on different areas of my research depending on who I’m talking to. When I go to a learning analytics conference, for instance, I tend to downplay the qualitative side of my work. Likewise if I go to a more qualitative or practitioners conference, I have to downplay the statistics.
  • The words we use are also different for different people: ‘case study’ or ‘mixed method’ can mean different things to different researchers.
  • Combining methods can be challenging. We have to consider what methods we can combine, as well as why we want to combine them. We have to consider how methods can work for and with each other.
  • Going back to an earlier comment about the article assuming that research is always about learning outcomes: outcomes that illustrate behaviour change are also useful in certain circumstances. On one project we worked on, we didn’t necessarily care whether students learned more; we cared about their ability to make intelligent choices and to seek help when they needed it. Our research questions did not even address learning gains.
  • There are large bodies of work about education and the education environment that aren’t necessarily about cognitive gains. When the authors say that ‘such methods reveal nothing about whether students achieve longer lasting gains,’ maybe that’s because it wasn’t the point of the research in the first place.
  • One reason for this omission could be that it’s a very compact paper. Set in the wider context of what they’ve written, I think they take this on board in other papers. When reading journal articles today, we tend to dip in and dip out, so we don’t see how the authors’ views have changed over time or consider one person’s coherent body of work. Ideas continue to be developed, sometimes over 10-20 years, and they mature over time.
  • With some researchers, I find their earlier work more appealing because it is rougher.
  • I find it interesting when an author starts an idea and lets others carry it forward. Think about George Siemens and the LAK community. He was one of the founding researchers of learning analytics, and many people still use his definitions and ideas today, but George himself has taken a step back and offered critiques of where others have taken his ideas. The same goes for the Community of Inquiry framework, where a huge body of research has attempted to add to or edit the original theory, and the original authors sometimes write blog posts giving their opinion on the way the framework has taken shape over the years.