Liveblog: CALRG reading group – ‘Assumptions and Limitations of TEL Research’

This week’s CALRG reading group discussed the paper: ‘Examining some assumptions and limitations of research on the effects of emerging technologies for teaching and learning in higher education’ by Adrian Kirkwood and Linda Price. Below is a liveblog summary of our discussion. Each bullet represents one point made in the discussion (which does not necessarily represent my own views). As always, please excuse any typos or errors as it was written on the fly.

  • Why paper chosen: It fits nicely with a few CALRG themes. First, how do you choose your method and your methodology? Is it right to aspire to ‘robust’ tools such as randomised controlled trials or A/B experiments? Or are there other considerations to take into account? Second, how do we define ‘technology enhanced learning?’ What do we mean by ‘enhanced?’
  • This paper was written by two former IET colleagues who used to share an office. You can see the close collaboration in their papers, which is hard to accomplish between two researchers. The authors also have related papers on this topic, which are also worth reading.
  • This paper is about methodology, which is something I struggled with in the first year of my PhD, especially understanding the difference between ‘methods’ and ‘methodology.’ This paper pulls that out quite well and highlights the assumptions you take on when you engage with certain methodologies. The way you set up your experiments highlights your assumptions about learning. It’s good to make this explicit in your PhD thesis and explain what you think teaching and learning involves.
  • They raise problems with certain methodologies in this paper, and I expected to see a way forward in the conclusion, but they don’t. Reading this paper is a good way to open up the discussion about how we address this problem and how we approach technology making a difference in education.
  • This journal (BJET) has a tight page count, which has influenced what they could or couldn’t include in the article. This is something we have to think about when writing journal articles in general.
  • This paper highlights just how difficult it is to establish that ‘changes’ in behaviour have occurred due to technology.
  • In talking about assumptions, this paper makes a few assumptions of its own about TEL and educational research. The authors assume that research in this area should be about learning gains, when perhaps research questions are addressing other factors: engagement, motivation, social influences, etc. Not all education research is explicitly about learning gains. My own research is about social elements and I struggled to relate that to their descriptions.
  • When I read this, I was thinking about Popi’s research a lot, and how she is looking at whether it is the technology influencing the learning or the teaching that is influencing the learning.
  • When I read the paper, I was thinking about my masters dissertation and my digital instruction tool — I used technology on one hand to see if students’ understanding improved, but I didn’t compare with a questionnaire. I tried to get a more in-depth understanding of their knowledge, such as by examining how they could explain concepts to others, rather than just assessing knowledge in quantitative terms. I saw small qualitative differences in those who had access to technologies such as animations, compared to students who were taught the concepts in a more traditional way.
  • There’s sometimes pressure, for example by the government or funders, to compare education research with research in areas like medicine.
  • Other findings come from teachers’ intuition, without necessarily having research backing. For instance, when the Gameboy came out with games for brain training, some schools started incorporating them first thing in the morning. They found that maths levels had gone up, but they didn’t have a control group.
  • In my previous studies, we struggled comparing a group that used technology and a group that didn’t. Even if the technology shows better results, you don’t know what it is about the technology that led to the change. We had to study the tool in steps by adding new features one by one to understand which aspects of the technology helped. There’s so much difference between, for instance, physical books and an online library that they aren’t very comparable.
  • Then there are challenges in the assessment of what we call ‘results.’ Do we care about the short term or the long term? Do we want quantitative assessment of knowledge (questionnaire) or a more qualitative assessment?
  • Currently, we need more research that considers the long term. Most research focusses on the short term knowledge gain.
  • It’s often difficult for researchers to consider the long term. As a PhD student, 3 years sounds like a long time, but it isn’t very long at all. By the time you’ve set up a study, you might get two cohorts, if you’re lucky. The same happens with most funded projects. You don’t really have the time or resources to come back to people to see if it’s made a difference in the long term.
  • Teachers also lose access to the technology after the study. It would be more useful to let them keep using it to see if there are consistent benefits in the long term.
  • The costs of investment for schools also means that most administrators or teachers don’t bother to assess the utility of what they’ve purchased. They’ve already made the investment by buying the technologies. They don’t have the time, money or inclination to assess its worth afterwards.
  • Also, most schools don’t have metrics in place for long-term results outside of standardised testing. If you’re only looking at learning gains from a testing perspective, that isn’t a very holistic way to evaluate the effectiveness of what you’re doing.
  • And technology changes so fast — PDAs were the forefront of technology in education 10 years ago, but that moved on just as soon as the papers about them came out. No one was interested in the results anymore.
  • I think that drives you to think about the core of what you’re asking. What conceptually does a PDA offer that is core to the research that is being conducted? What is the value offer that it has? A smart phone has a similar value offer as a PDA, which means that PDA research can be relevant in the future if these notions are addressed. That’s part of the challenge in our research field: getting to that core value or benefit of what that piece of technology is offering.
  • That’s a major challenge for people researching MOOCs as well. Do we assume that we will still be talking about these in 3 years? What are the core values that make them valuable? That they are massive? Open? The networking opportunities? How do we make this research relevant to the future, when MOOCs are no longer a hot topic?
  • What about people who are using mixed methods? What assumptions are you making when you adopt this? Are you a positivist or a constructivist?
  • It’s more of a pragmatist view. You take the best of both worlds, and understand that each have their own flaws.
  • At least, you try to take the best of both worlds.
  • Connecting that ‘core value’ notion to my own research: my interest is in learner experience, what is unique to a MOOC that makes the learner experience different from other formats, and the role these courses are taking in the developing world. Initially I was concerned about whether MOOCs would still be around by the end of my PhD, but the idea of content being available online for free is going to remain. So the ‘open’ aspect is the key area of the technology I’m focusing on.
  • At the OER 2015 conference, they showed a graph of the trend line of both open resources and MOOCs, and the open line was a more stable climb, while the MOOCs line was a sudden, hot flash. It’s worth considering: is the open education element what was so interesting and intriguing about MOOCs?
  • I think the concept of ‘open’ has been used by MOOCs and then discarded, as things have now been hidden behind paywalls.
  • It depends on how you define ‘open.’ If you define it as accessible with no pre-requisites, then it is open.
  • One thing I’ve been grappling with is getting the balance between qualitative and quantitative when using mixed methods. How strong a claim can I make on one side or the other, especially if my data is skewed towards one side?
  • I think it depends on the audience. I’ve found that I focus on different areas of my research depending on who I’m talking to. When I go to a learning analytics conference, for instance, I tend to downplay the qualitative side of my work. Likewise if I go to a more qualitative or practitioners conference, I have to downplay the statistics.
  • The words we use are also different for different people: ‘case study’ or ‘mixed method’ can mean different things to different researchers.
  • Combining methods can be challenging. We have to consider what methods we can combine, as well as why we want to combine them. We have to consider how methods can work for and with each other.
  • Going back to an earlier comment about the article assuming that research is always about learning outcomes: outcomes that illustrate behaviour change are also useful in certain circumstances. On one project we worked on, we didn’t necessarily care if students learned more, but we cared about their ability to learn how to make intelligent choices and how to seek help when they needed it. Our research questions did not even address learning gains.
  • There are large bodies of work about education and the education environment that aren’t necessarily about cognitive gains. When the authors say that ‘such methods reveal nothing about whether students achieve longer lasting gains,’ maybe it’s because that wasn’t the point of the research in the first place.
  • One reason for this omission could be that it’s a very compact paper. Set in the wider context of what they’ve written, I think they take this on board in other papers. When reading journal articles today, we tend to dip in and dip out, so we don’t see how authors’ views have changed over time or consider one person’s coherent body of work. Ideas continue to be developed, sometimes over 10-20 years, and their ideas have matured over time.
  • With some researchers, I find their earlier work more appealing because it is rougher.
  • I find it interesting when an author starts an idea, and lets others carry it forward. Think about George Siemens and the LAK community. He was one of the founding researchers of learning analytics, and many people still use his definitions and ideas today, but George himself has taken a step back and offered critiques about where others have taken his ideas. The same with the Community of Inquiry framework, where a huge body of research has attempted to add to or edit the original theory, and the original authors sometimes write blog posts to give their opinion on the way the framework has taken shape over the years.

LAK 16 Doctoral Consortium Summary

Today I attended the LAK (Learning Analytics and Knowledge) 2016 doctoral consortium, where there was a wide range of topics covered by a diverse group of nine PhD students. You can find more detailed abstracts and links to papers/posters online here, but below are my very brief summaries of each project, as well as a summary of overall advice shared throughout the day:

Angelique Kritzinger, University of Pretoria, South Africa
“Exploring student engagement in a blended learning environment for first year biology”
Angelique’s work focusses on analysing first-year student engagement patterns in a blended course. Her research uses demographic data, prior learning data and engagement data to consider their relationships with outcome variables (i.e. marks). A CHAID analysis found that semester 1 test results could be predicted by factors such as home language, gender, ethnicity and prior learning variables.
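(Aside: for readers unfamiliar with CHAID, the heart of the method is repeatedly choosing the categorical predictor most strongly associated with the outcome by a chi-square test. The sketch below is a minimal, hypothetical illustration of that splitting criterion in Python — the column names and data are invented, and this is not Angelique’s actual analysis.)

```python
# Minimal, hypothetical illustration of the CHAID splitting criterion:
# pick the categorical predictor whose association with the outcome
# has the smallest chi-square p-value. Invented data, not study data.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "home_language": ["Eng", "Afr", "Eng", "Other", "Afr", "Eng"],
    "gender":        ["F", "M", "F", "M", "F", "M"],
    "passed_sem1":   [1, 0, 1, 0, 1, 0],
})

def best_chaid_split(data, predictors, outcome):
    """Return the predictor with the strongest chi-square association."""
    p_values = {}
    for col in predictors:
        table = pd.crosstab(data[col], data[outcome])
        _, p, _, _ = chi2_contingency(table)
        p_values[col] = p
    best = min(p_values, key=p_values.get)
    return best, p_values[best]

print(best_chaid_split(df, ["home_language", "gender"], "passed_sem1"))
```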

Elle Wang, Columbia University, USA
“Bridging skill sets gap: MOOCs and student career development in STEM”
Elle is interested in understanding learner motivation, achievement and interaction in MOOCs for career development in the STEM field. Her research analyses data from Coursera and edX iterations of a  ‘Big Data in Education’ course.

Korinn Ostrow, Worcester Polytechnic Institute, USA
“Toward a sound environment for robust learning analytics”
Korinn asks: how can we use big data in education to improve the products we are presenting to teachers and administrators? Her research aims to establish a framework for developing and evaluating online tutoring systems for K-12 education. (Interestingly – a need for this was specifically highlighted in the recent LAEP expert workshop.)

Héctor Pijeira-Díaz, University of Oulu, Finland
“Using learning analytics with biosensor data for individual and collaborative learning success”
Héctor’s PhD focusses on the SLAM project. Its aim: strategic regulation of learning through learning analytics and mobile clouds for individual and collaborative learning success. He uses biosensors (wristbands/eye trackers), VLE data, videos and pre-post test data in hopes of creating a biofeedback dashboard for student emotions to support their learning processes.

Garron Hillaire, The Open University, UK
“Self-regulation using emotion and cognition learning analytics”
Garron’s research highlights a need for understanding the relationship between emotions and cognition. In the face of self-report and observation challenges, he plans to use a combination of self-report, sentiment analysis and physiological measurements to understand what emotions can tell us about learning, how emotions are displayed in data, and how students can use their own emotional data for self-regulated learning.

Michael Brown, University of Michigan, USA
“Contact forces: studying interaction in a large lecture hall using social network learning analytics”
Michael’s research focuses on tools and artefacts in social learning in STEM higher education classrooms. Using VLE data, his work highlights how instructors and students use tools in social learning in large classrooms and which factors influence peer interactions. He also aims to use actor-oriented modelling to consider the co-evolution of behaviour and the structure of relationships.

Jenna Mittelmeier, The Open University, UK
“Understanding evidence-based interventions for cross-cultural group work: a learning analytics perspective”
My own research considers social tensions in cross-cultural group work. Social network analysis combined with learning analytics data has highlighted that diverse social networks lead to a higher quantity of group work contributions, and qualitative interviews have demonstrated differences in student perceptions by academic achievement level. In the future, a randomised controlled trial will consider evidence-based interventions that encourage collaboration.
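(As a side note on method, the sketch below is a minimal, hypothetical illustration of the kind of combination described here: computing a simple tie-diversity measure from a group-work network with networkx and correlating it with contribution counts. All names, backgrounds and numbers are invented; this is not the actual study data or code.)

```python
# Hypothetical sketch: relate the diversity of each student's group-work ties
# to how much they contributed. Invented data, not the actual study.
import networkx as nx
from scipy.stats import pearsonr

background = {"A": "home", "B": "international", "C": "home",
              "D": "international", "E": "home"}
contributions = {"A": 12, "B": 20, "C": 8, "D": 25, "E": 10}

G = nx.Graph()
G.add_edges_from([("A", "B"), ("A", "C"), ("B", "D"),
                  ("C", "E"), ("D", "E"), ("B", "E")])

def tie_diversity(graph, node):
    """Share of a student's ties that cross cultural backgrounds."""
    neighbours = list(graph.neighbors(node))
    if not neighbours:
        return 0.0
    cross = sum(background[n] != background[node] for n in neighbours)
    return cross / len(neighbours)

nodes = sorted(G.nodes)
diversity = [tie_diversity(G, n) for n in nodes]
contrib = [contributions[n] for n in nodes]
r, p = pearsonr(diversity, contrib)
print(f"tie diversity vs contributions: r={r:.2f}, p={p:.2f}")
```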

Caitlin Holman, University of Michigan, USA
“Designing success: using data-driven personas to build engaging assignment pathways”
Caitlin’s research incorporates gameful learning, and uses learning analytics in combination with qualitative interviews to build ‘personas’ (detail-rich characters that represent the target audience). Personas can then be made available to teachers to prepare them for incorporating gameful learning. Her research asks: How do different types of students make choices? What can we know about students at the beginning to design better courses for them?

Hazel Jones, University of Southern Queensland, Australia
“What are the impacts of adopting learning analytics in different higher education academic micro cultures?”
Hazel’s work focuses on adoption of learning analytics at an institutional level. She asks: How do academic micro-cultures adopt learning analytics methods for informing their teaching? Which data strategies and learning analytics approaches are most effective for informing teaching between different discipline groups? The process of adopting learning analytics is analysed along with staff engagement by the use of longitudinal surveys in combination with LMS data.


Summary of overall advice for PhD students from the LAK 2016 doctoral consortium:

  • Embracing what you don’t know. One key suggestion was that researchers shouldn’t be afraid of learning to do a new kind of analysis. Just because we might not be sure of how to do a certain kind of analysis doesn’t mean we should exclude it as an option in our work.
  • PhD students as future supervisors. As PhD students will in a few short years begin supervising their own students, they should begin to think now about wording criticism in a helpful way and encouraging peers to deepen their thought process.
  • Moving from interesting research to powerful research. Research should have meaning and relevance to the field, and should push knowledge into new domains. Doing research for research’s sake is the wrong way to approach your work.
  • Multi-disciplinary backgrounds are an asset. Oftentimes PhD students straddle several fields or methods, making it difficult to feel ‘at home’ with any particular approach. However, this flexibility should be seen as an asset rather than a burden.
  • Soft skills are just as important as technical skills. Successful academics focus not just on what they say, but also how they say it. There’s a strong need for researchers who are charismatic and can communicate their work in effective ways.
  • Searching for a job can be a self-reflection tool. The job search can help you assess your own skills and reflect on your strengths and weaknesses. Although it can be a frustrating process, one positive aspect is the self-awareness that builds through the process.
  • Good supervisors encourage their students to gain valuable experiences. In today’s competitive job market, simply completing a PhD is not enough. Employers are looking for graduates who have done more, and good supervisors are the ones who encourage their students to diversify their experiences.

Sidenote: One aspect that I found particularly interesting in today’s doctoral consortium was its juxtaposition to the recent LAEP learning analytics expert workshop I attended in Amsterdam (see my summary of Day 1 and Day 2). In the workshop, several “action points” were highlighted by learning analytics experts as essential next steps for progressing the field. Although in the workshop, expert attendees often discussed barriers to incorporating these notions into their own work, many of the PhD students attending today’s LAK event are indeed incorporating these elements into their research. Thus, perhaps there is a logical connection between PhD work going on in the learning analytics field and the needs of experts.

Liveblog — CALRG reading group,’Intelligent Tutoring Systems by and for the Developing World’

This week’s CALRG reading group discussed the paper: ‘Intelligent Tutoring Systems by and for the Developing World: A Review of Trends and Approaches for Educational Technology in a Global Context’ by Benjamin D. Nye. Below is a liveblog summary of our discussion. Each bullet represents one point made in the discussion (which does not necessarily represent my own views). As always, please excuse any typos or errors as it was written on the fly.

  • I found several issues with the actual research methodology of this paper, and some of that is cultural — it is written by a single American author who ignores any research not in English. There might have been relevant research in other languages, but this is ignored in this paper. In some ways, the paper lends itself to being part of the problem.
  • Sure, this paper could have been much stronger with international collaboration and inclusion of non-English languages. The author doesn’t really even highlight this as a limitation of the paper.
  • The proposed barriers are also not unique to ITS, but rather apply to the use of all technologies in the developing world  — if you have no electricity, you have no technology.
  • One problem I found in this paper is that he never really defines what he means by ITS
  • This paper is mapping the geography of ITS research, but there’s no comparison to other fields. There is no reflection of whether the distribution of papers across the world is normal. You’d probably see the same patterns at any conference, with more of a focus on the US and Europe. You could replace his map with total papers in any discipline and it would look quite similar
  • Some of the barriers he describes are fundamental barriers that need to be developed before you can even consider bringing ITS technologies. If there is no infrastructure in place to support technology, then it is hopeless to try to make it a sustainable option.
  • ITS is not a simple technology to make work in the developing world either
  • Fundamentally the paper makes an assumption that ITS is beneficial to the developing world in the first place
  • Other than brochures from the companies marketing the systems, I’ve never seen anything about how ITS leads to learning gains
  • Link to Project Ceibal (Uruguay, one laptop per child). Their research highlights the need for infrastructure to benefit productivity. For example, simply giving schools good internet connections led to better problem solving.
  • And that research wouldn’t have been picked up in this paper because the author only looked at English publications
  • But maybe what you need is just access to the internet to encourage problem solving? Maybe it’s not related to ITS at all. What is the relationship between the two?
  • Another important question: how does that sustain itself over time? Did the good internet connection lead to better problem solving five years down the line? Or was it about the novelty of it?
  • A recent EDM conference posed an interesting question. ITS researchers have specific questions that they go after, and their findings keep showing small incremental increases based on these models. The argument has always been that it’s a small effect, but it adds up to a lot of people if you bring it to scale. One question asked at the conference: are we at a point in the community where we need to change what questions we are exploring?
  • Well, the questions you ask are limited by the data you can collect.
  • There was a computer-enhanced learning video from the 70s [edit: anyone have a source for this?], and the same things they said then about new technologies are being said now. If they’re saying the same thing in the 70s, then maybe we need to start asking different questions.
  • Our research is often very tech-led; it’s about what we can do, rather than what we need.
  • In a way that’s privileged thinking. We in the UK can be tech-led because we have the infrastructure to support that kind of thinking. This isn’t necessarily possible in some developing countries.
  • Link to Zoran Popović and his work in the ITS domain on math education. He’s considered the environment of question banks, and asked: what would traditional curriculums look like, and what are those pathways? Example: Singapore vs US curriculum styles are two potential paths through these materials. That’s a good use of how ITS informs learning pathways for design.
  • One big question is whether an ITS can mark or understand what it is teaching. If it can, then maybe it isn’t teaching the higher-level skills students need to gain, as those sorts of skills would not be understandable by a computer.
  • From a tutoring perspective, I’ve been playing the Guess the Correlation game, which gives you a scatterplot and you guess the correlation. That feels like a tutoring experience, as it’s low level and grade-able. I view that as focussing on beefing up my fundamentals and small skills/components. This will contribute to fractional gains that allow you to spend more time on the things you need more time for.
  • It has to be a pretty rich system. Example: at a military school teaching officers who are going to be generals, they found that within six months the staff officers had gamed the system for fantastic marks. They had to scrap a 1.5 million system after the officers figured it out in one semester.
  • The same is true about the GRE (standardised test for postgraduate level study in the US). When the computer marks your written essays, you just have to learn what phrases and structures it views as “good writing” in order to score well.
  • This is another part of the problem of spending money on ITS for the developing world — there are inherent problems in the systems themselves, so isn’t it better to use the money for other basic needs (sanitation, etc.)?
  • These products are shipped in from the west and there is a financial incentive in that, which means there isn’t a motivation to work with the local environment and stimulate their economy from the ground up.
  • One example is Pearson, which is developing common core competencies around the world by getting governments to offload their problems onto the company. It’s a business model.
  • I feel that the best part of this paper was buried: at the end he talks about different models of how these sort of things are created (transferred, homegrown, combo). He begins to make distinctions that the homegrown technologies had different issues or tackled different issues. However, this is relatively hidden at the end of the paper.
  • This paper also highlights the bigger issue of writing culture in research. Obviously things are going on in Russia or China, but it probably is never written up in English. There is a huge barrier to access to research between countries that is obvious in this paper, and the author should reflect on it more.
  • Can someone explain what is the difference between ITS and adaptive learning systems?
  • ITS versus adaptive learning systems: an ITS is typically a bank of questions that are mapped to key components of knowledge, and the ITS gives ‘hints.’ There is no scaffolding, there are hints. Adaptive learning systems try to have a model of student understanding (i.e. Bayesian knowledge tracing). They try to understand what you know and make a choice about what you go to next. [edit: a minimal sketch of a Bayesian knowledge tracing update is included at the end of this post]
  • Interestingly, in both of these systems there is a notion that if I find the answer in a way the system didn’t expect, it’s cheating.
  • But it’s not too different from being in an actual classroom.
  • It seems to deal with the perceived relevancy of what you feel is being taught. When will you ever use it? This will motivate you to “follow the system” or not.
  • One other thing about this paper: how do you determine if the system you’ve exported is actually helping? It is very tricky. Example: I’ve been trying to see how open data is being used in teaching, but no one is publishing on this topic. There are informal accounts, but working with non-published reflections in a rigorous way is tricky.
  • There’s not an internal reflection of what we’re doing either. Often, no one is thinking about ‘what is the point?’ We are reluctant to question the broader social picture and how our research or technology development benefits society.
  • It’s hard to get away from a simple ideology in education: that what we are doing is a ‘good thing’
  • And what needs to be considered is what framework we are using to establish ‘good,’ especially in an international context.
  • There are hardly any papers that say “this was not a good idea” especially if it was funded
  • One good example of this is Doug Clow and Rebecca Ferguson’s [edit: and Leah Macfadyen and Paul Prinsloo’s] Failathon workshop at the upcoming LAK conference. They are asking academics to come together to talk about what we did wrong and how we can learn from it.
  • The Games Learning and Society Conference also does something similar to this, by getting all-stars in the field to show examples of when they had the best set-up and every sign of success, but it still went wrong.
  • The military requires advanced degrees to reach certain levels in the hierarchy, but none of the officers will write a paper that’s critical about what they are doing because the guy grading you is the one above you giving you a promotion. The education that they get becomes corrupted and standardises their thinking.
  • But there is a question about providing self-determination. There needs to be some level within our subcultures at which we can build effective practice.
  • Professionalism means you need to self-police and self-discipline. Unless you critique your own work, you lose the edge of the profession.
  • It’s difficult in most environments to embrace failures
  • Going back to the idea of ITS, how much can they do for areas that are not necessarily fact-based, such as classic literature? How can it support interpretation?
  • Most of these systems are built for STEM people by STEM people.
  • One example of this was from my recent CALRG talk. The computer scientists were trying to find the best location on the map, but the arts people were talking about the process. There was a clear tension. Computer scientists talked about data, art people talked about getting people engaged in the process of interpretation.
  • ITS optimises time on task, getting people through it in a fast and efficient way. How do you get someone through an art gallery in a fast and efficient way? The boundaries of failure and where the models break down is an area that is rich in meaning and exploration. You’re commoditising the systematic mechanical aspect of learning. Where are we fostering the beautiful work that could be going on?
  • Example of going into a museum: watch people with the headsets. They follow the sound, and when the headphones tell them to move on, their experience of that painting is over because the system wants them to go to the next one.
  • Has anyone looked at ITS or ADL to copy aspects of what a teacher would do? Rather than posing a solution that just gets people through the material, actually copy the other aspects that are less obvious. Your starting point should be a teacher — observe them, break down what they do in the classroom.
  • Art Graesser used 10,000 hours of video of tutors to inform his intelligent tutor design. He gave a good presentation at LAK two years ago where he said we have wonderful theories, but after watching 10,000 hours of one-on-one teaching, it just isn’t there. If you start with the hypothesis that one-on-one is the best model for learning, looking at 10,000 hours of that experience is good practice to inform these designs.
  • Going back to ITS not supporting things like literature or art — that’s where our culture lies. If we want to transport these to developing cultures, we need to be sure that our systems are not erasing their cultures from education.
  • There is a need to be careful of cultural superiority and imperialism.
  • One of the things that is bothersome of ITS is that it has a decision tree and it decides what the answers are. If I come up with a new answer, it does not accept it, but a professor would engage with my thoughts. There is no incentive in these systems to make intellectual leaps. It is forcing people to fit a mold and that stifles debate.
  • Liz wrote a piece in The Conversation against personalised learning — her argument was that the idea behind these systems is making things efficient, but learning is never linear or smooth. You always have roadblocks.
  • Right, these systems are just finding the path of least resistance.
  • My undergrad math experience: there is a clear hierarchy in math classes — the guy second in the program was asked ‘how shall we use computers to better teach the TNB plane?’ His response was to grab the computer by the power cords and swing it as if to throw it, and as the cords are sticking out, that is the plane. I loved that response. It wasn’t expected, using a physical object for a physics experience, and so was novel and brilliant and I still remember it today. I still think of TNB planes as a computer swinging in the sky.
  • It comes back to is there anything more efficient than someone just talking to you to help you understand? Just by waiting for a computer to turn on, you could have understood it by then in a conversation.
  • What is the expressive capacity of my intelligent tutoring system? Could I identify cultural groupings from an intelligent tutoring system? If so, then you are landing in the imperialistic approach to education. But if the answer is yes, you are also getting to the point where you can connect people who are thinking in a way that is not the dominant discourse in their culture. Support and facilitate diverse expression to connect people with those who can help them down their path to understanding.
  • We think we have to fit in with the culture of who we are trying to help. There is this idea that you can learn from something exotic. You don’t have to remove another culture from it, but you can’t just say it’s all about understanding England. Some sort of balancing act there.
  • It boils down to education being about social engineering — a reflection of the government in power. The conservative government won’t want answers in that databank that don’t support their values. As you move that platform along to different countries, that can be restrictive.
  • We tend to put money into things that are measurable. If you put half of EdTech money into teacher development instead, there would be good results.
  • Both sides have short-comings though (putting all money in teacher training or all money into technology) — need to have a balancing act
  • There is often an ‘ITS-like’ approach to face-to-face instruction  – simply follow the worksheets, and if students do those, they will understand.
  • Professional development programs are often based on hierarchies, that the person at the top of the room has the best ideas.
  • There was a Washington Post article about professional development in Chicago schools. One teacher took a video and the teacher in the front of the room had everyone repeat after him “I will provide students with choice.”
  • The content needs to match the method of delivery. I don’t think we should cut ITS loose, but we should consider its utility.
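[edit: as promised above, here is a minimal sketch of a Bayesian knowledge tracing update, purely to illustrate how an adaptive learning system’s student model revises its estimate of mastery after each answer. The parameter values are invented and this is not any particular system’s implementation.]

```python
# Minimal sketch of a Bayesian knowledge tracing (BKT) update.
# Parameter values are illustrative, not taken from any real system.

def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    """Revise the probability of mastery after one observed answer."""
    if correct:
        # P(mastered | correct answer) via Bayes' rule
        posterior = (p_know * (1 - slip)) / (
            p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        # P(mastered | incorrect answer)
        posterior = (p_know * slip) / (
            p_know * slip + (1 - p_know) * (1 - guess))
    # Allow for the chance of learning the skill on this practice opportunity.
    return posterior + (1 - posterior) * learn

p = 0.3  # prior probability that the skill is mastered
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
    print(f"after answer correct={answer}: P(mastery) = {p:.2f}")
```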

LAEP/LACE expert workshop: Summary of Day 2

Yesterday’s LAEP/LACE expert workshop on policy and learning analytics focused on the future. Where is learning analytics going? What will be the barriers for moving forward? Below is a summary of the day’s discussions and activities:


The first activity involved considering the year 2019 (3 years ahead), and discussing in small groups what barriers and complications will be faced in learning analytics’ immediate future. Here’s what attendees came up with:

Group 1
An important consideration is the General Data Protection Regulation (GDPR), which will be enforced in the next two years. This will impact the learning analytics field in many ways, many of which are largely unknown at this point. Europe has taken a standpoint that individual privacy is important, and that changes to current practices in general analytics are needed. Moving forward, the definition of personal data is going to be larger and more complex, and these legal changes will move universities from the role of processors of data to being containers of data. This will lead to an increased need to help parents and students understand what their data is being used for.

One other issue is that organisations/schools/companies that are privacy sensitive will be cautious and slow about adopting learning analytics, while those that are not privacy conscious will be first on the market. There is a notion (or catch-22, perhaps) that we need to make sure that our regulations are not pushing us too far behind. We are potentially at a competitive disadvantage when we follow the book ‘to a T.’

Group 2
Three main ideas from this group: (1) there is a need to bring people and stakeholders on board by reaching out to teachers, students and staff in institutions. Currently our notion of ‘stakeholder’ is too narrow and means that important elements in the problem are ignored. (2) Legislation is needed to create new rules and guidelines from an EU perspective, as well as guidelines to help students/teachers understand when it is safe to use and how to best harness benefits from learning analytics. (3) Resources: we need to be developing and creating opportunities from the resources we already have, as well as creating new resources to serve the needs we still have.

Group 3
Three main points from Group 3 as well. (1) The notion of institutional support — teachers need support to scale up their use of data, and more time needs to be provided for teachers to spend on these issues. (2) More empirical evidence will be needed moving forward. Teachers also need support to scale up and experiment, and more time to spend on these issues. (3) Grant proposals should explicitly include an evaluation phase, not just the design of new tools. Too often the goal is simply to make a tool, not to push that tool forward into use. Grants should also include 1 or 2 people who can spend a significant amount of time on them (i.e. more than 10%).

Group 4
One consideration is that 3 years is one generation of students, but the lifecycle of the teacher is much longer. The teachers of tomorrow are already on campus, and we must work with what we have. We should also keep in mind what views and sentiments are coming from our societies.

Transparency is also needed, as no one truly knows what is happening with algorithms. There are discussions about ‘safe spaces’ in education and student aversion to being ‘shocked’ by educational materials –> how will that impact their views towards learning analytics? There is a need to distinguish ourselves from the negative portrayals of ‘big data’ in the media (examples: Facebook, Google), otherwise we will fail because student sentiments will lead in the end.

Group 5
Group 5 highlighted two primary concerns: (1) There needs to be a focus on quality and resource creation. Too often learning analytics projects are funded for a finite amount of time. What happens when the funding ends? How do we continue to push forward initiatives when projects are finished? (2) In the end, we must consider: what do we want from learning analytics? In this workshop there has been a focus on cognitive gains, but we should also consider emotional and social elements.

Group 6
There are often problems because motivations and policy initiatives have effects at different levels. There is a problem when IT departments have control of data due to institutional policies, and this leads to the faculties being unable to access the complete set of data that is available about students. Institutional policies about continuing assessment enable experimental analytics to come into play. Government and agency policy items, such as fee structures, affect learning at all levels. There is a need for quality control, but there is also an issue of who would be responsible for managing and controlling it. Finally, there is an important question of how to drive innovation and push it in the direction it needs to go when there is resistance on multiple levels. We simply don’t have enough empirical evidence at this point to counter those notions.

Group 7
One issue that needs attention is restraint. When thinking of technology and the future, we need to consider the danger of getting carried away and doing too much too soon. There are many fun and interesting things we can do with data, but at the same time we must keep in mind that the analytics we do must help students succeed and foster education. Perhaps we should focus on a shallower depth across many institutions rather than just a small few doing learning analytics at a high level. At a low level, the maturity in the field is already there to see results, but institutions must take care not to be distracted by technological possibilities.

Foresight exercise 
The workshop next looked further into the future, to the year 2025. A series of fictional case studies from the future was considered by each group. If you’re interested in a play-by-play, my colleague Doug Clow kept an excellent live blog (here and here). As a brief summary, here are a few key themes and considerations put forth throughout the activity (in no particular order):
  1. Bringing the data back to the learner — there was repeated reference to the role of the student in the learning analytics process. It was noted that more resources are needed to help students make sense of learning analytics and findings from their own educational data in order to form meaningful conclusions about their studies. One key aspect of this is the creation of resources and ‘wise advice’ at an administrative level within the university. It was further noted that students are not necessarily on a linear or standard career path (as many of those in the room could attest from their own educational backgrounds).
  2. It is our duty to act upon the data we have. Learning analytics has now progressed to a level that many universities are able to collect and analyse data to make predictions of individual student success. Many in the workshop felt strongly that universities who can predict that a student is at risk for failure or leaving the university have a duty or obligation to act on that knowledge. There was much discussion about how knowing is not enough, and more resources are needed to act on the ‘red flags’ that learning analytics highlights.
  3. Do we want learning analytics to change or reinforce the status quo? There seems to be a debate about whether we should look at adopting learning analytics policies and practices from the supply or demand perspective. Should we create learning analytics practices that change the way we do education? Or should the focus be on enhancing the education systems we already have (perhaps by ‘making life easier’ for the teacher)?
  4. Intelligent systems need human and cultural awareness. Today’s workshop marked perhaps the first time that I’d seen culture and learning analytics being discussed in depth (yay!). There was mention that human-written algorithms can reinforce institutional and personal biases. Similarly, there was the notion that a multi-cultural lens is needed to interpret data outputs. There are dangers in homogenising students and a need for human interpretation to make sense of data in a realistic, real world setting.
  5. Desirable learning outcomes must be identified. It’s not enough to simply collect data and analyse it. We need to understand how to interpret data and make sense of it in the context of the curriculum. Important questions include: What do we want students to know? What data can demonstrate this knowledge?  
  6. Learning analytics should enhance teaching, not replace it. There were discussions about dystopian fears of education becoming a machine-like process and learning analytics eliminating the role of the teacher. That’s not the learning analytics we want to move towards — we want teaching practices to be aided by learning analytics, not eliminated.
  7. Individual achievements are more important than interpersonal comparisons. There were comments about normative ranking of students being problematic. Learning analytics should give insights about the individual student learning process, rather than pitting students against one another in competition for the best classroom ranking.
  8. What we measure is as important as how we measure it. Many comments warned against ‘easy data’ and interpreting based on the data we have, rather than the data we need. There was a fear that it is easier to measure the ‘wrong things’ than it is the ‘right things.’ There is a need to consider what we want to know about students and what data we need to get to that point, rather than just playing with data until we reach some semblance of a conclusion.
  9. We need to determine what analysis is too little versus what is too much. Many of the scenarios were considered undesirable as they leaned too strictly towards a rigid notion of behaviourism. There were many comments about the need to include (innovative) pedagogy in the equation, which can help determine the appropriateness of the ‘level’ of learning analytics for learning needs.
  10. We need more than just ‘flashy’ data. Big data is charming and many tools can create pretty graphs, but what is truly needed is a translation from ‘flashy’ to results. Important to achieving this is a focus on teachers and learners in realistic classroom settings. There is also a need for continued evidence of the benefits of learning analytics and evaluation of tools.

The final segment of the workshop considered actionable changes in policy. This was divided into two categories: “EU policy: Action NOW!” and a ‘Wish List’ for the future. Below is a summary of the discussion.

EU Policy: Action NOW!
  1. Innovative pedagogy: There is a need for novel, innovative pedagogy. We need to flip the equation and bring pedagogy in to drive the use of data to solve practical problems.
  2. Data privacy: a clear statement is needed from privacy commissioners about  controls to protect learners, teachers and society
  3. Orchestration of grants: The current system does not benefit learning analytics research and more efficiency is needed to move progress forward. Considerations include: a need to focus on evidence over tools, making steps to counter duplication of work, shorter tenders to eliminate the administrative process
  4. LACE evidence hub funding: the funding for the LACE evidence hub is due to expire soon, but this project serves a vital, necessary function that drives the field forward. Continued funding is needed to make sure a repository for evidence and transparency continues to exist.
  5. 21st century skills: There is a risk that learning analytics will take us down an objective view of education, as it promotes primarily measurable things. One key missing area of measurement is 21st century skills, and their relationship to learning analytics, so that a particular view of education is not favoured.
  6. Focus on process rather than outcomes: Policy focuses should be not just on measurable outcomes, but also about the holistic process of learning
  7. Crowd sourced funding support: One consideration would be a crowdsource-like funding of tools that teachers actually need, perhaps with EU top-up funding when crowdsourcing is successful
  8. Ambassadors of the cause: In the outside world, very few people are actually aware of what is going on in learning analytics. We need more outreach: conferences, active local communities, building learning analytics into national curriculums, etc
  9. Open access standards: There is currently no standardisation of open standards in Europe. There are now small profiles for standards (JISC work in the UK is a good example), but these need to be put into practice for analytics.

Wish list

  1. Teacher education: media competencies and learning analytics knowledge needs to be built into the education for both new and existing teachers
  2. Orchestration of grants (see above)
  3. Decide which problems we want to solve: We can’t move the field forward until we have collective discussions on what direction we want to go
  4. Openness and transparency (see above)
  5. Analytics for 21st century skills (see above)
  6. Learning analytics for the process of learning (see above)
  7. Identify success cases that give us a solid foundation: We need more examples of successful analytics (such as those compiled by the LACE evidence hub!), in order to learn from those who are doing it well
  8. Supporting teachers, standards and technology architecture: There needs to be a cost-benefit analysis of investing in learning analytics. Tools and technology are needed, as well as IT workers in the school to manage it. However, we need pedagogy driving that innovation and teacher involvement in developing systems.
  9. Identify successful methodologies: (similar to #7)
  10. Facilitate data amalgamation: More consideration is needed of how to combine data sources to gain a multi-faceted insight into the problems we seek to solve. Another important problem is the ability to exchange data and findings between institutions.
  11. Consider where we want to go next: Learning analytics is not just about numbers and statistics. We can use our skills to tackle big problems at the university, such as gender and diversity issues. We need to consider how learning analytics and the combination of data sources can fit in with the wider university missions.

Finally, there was a vote to determine the top needs of the field moving forward. The following won in each category:

EU Policy: Action NOW!
1.) Innovative pedagogy
2.) More evidence and funding for the LACE evidence hub
3.) Ethics policies

Wish list
1. Teacher education training
2. Deciding which problems we are solving and how we are solving them


LAEP/LACE expert workshop: Summary of Day 1

Today’s LAEP/LACE expert workshop in Amsterdam brought together nearly 50 learning analytics experts from approximately 14 countries to discuss the role of policy in learning analytics. The culminating activity of Day 1 was a group exercise to suggest key areas that should be considered in future practice and policy creation. So what happens when you put a bunch of learning analytics experts in one room and ask them to make suggestions about moving the field forward? Here is what the groups came up with:

Group 1
Learning analytics needs rich data, but learning management system data can simply give us activity data, such as clicks and time stamps. What is needed is data from other sources, such as rich data sources that are complementary to activity data. Two examples: data from formative assessment (assessment for learning, rather than of learning) and student disposition data. (One example of a university using this method is Maastricht University School of Business and Economics.)

Group 2
We’ve had a lot of discussions about how to get rich data, but one important question is: how do we get teachers to actually use this data? Many teachers are conservative in their way of teaching and are rather qualitatively-oriented. Thus, how do we get them to work with quantitative data, with which they may feel uncomfortable? We need to be able to first convince teachers that using learning analytics is a good idea.

Group 3
We have seen long lists of tools, but an important question is: when we give this exhaustive list of resources to teachers, what do they do with it? In reality, most are probably unsure about how to approach which tools to use to fit their specific needs. Group 3’s idea is to create an evaluation framework for teachers to help them make tool selections. Rather than focusing on building more tools, we should instead focus on helping schools and teachers find the right ones for their specific needs.

Group 4
Much of the discussion so far today has been about data, projects and models, but there hasn’t been much discussion about how these connect with education. There is a tendency to think about learning analytics from the supply side (i.e. the IT side). However, you also have to consider the ways in which teachers make the change to work with the technology. Group 4’s suggestion is to look instead at the demand side, putting the teacher in the spotlight to understand what they want and need. The system should work for the teacher, not the other way around.

Group 5
Group 5 related their ideas to ‘fun.’ They argued for including a human side to learning analytics. Much of the learning analytics discussion has been focussed on performance metrics and how these affect teachers/learners/policy makers. However, there is delight and motivation inherently involved in education and in gaining information about students’ learning from learning analytics systems. Learning analytics should empower learners and teachers to make the right decisions for their needs. We should work on that empowerment, keeping in mind that it’s not about activity data, but rather about rich data and the human side of learning.

Group 6
Group 6 also focussed on the teacher in learning analytics, with a message similar to Group 4. They argued that there is a plethora of tools available on the supply side of the learning analytics equation, but the demand side hasn’t yet caught up. Perhaps it is time to flip the thinking and focus on teacher wants and needs.

Group 7
Group 7 considered a way to bridge the gap between educational research and institutional use of learning analytics. They argued that a bottom-up approach is needed, in coordination with a top-down approach. On one hand, it is important for management to say learning analytics is important and why we need it. From the bottom, another need is enthusiastic people who want to do educational experiments with data. These two groups need to start talking together, which is often not the case.

Group 8
The analytic tools are plentiful, and have the capability to give nice pictures and graphs about student data to anyone interested. However, teachers and managers who are buying into these systems need more information about which are the most useful for their specific needs. There should be more resources to help schools and teachers decide which practices will work for them in a realistic, real-world setting.


Several key themes could be seen in this activity, as well as discussions throughout Day 1:

  1. There was a focus throughout the day’s discussions on the teacher. Partly this was in the context of incorporating teachers as key stakeholders in the adoption of learning analytics practices. After all, for learning analytics to be sustainable, it must work for and with existing education practices. In other cases, there was a notion that teachers must be convinced of the merits of using learning analytics, and (in a way) their hesitancy or analytic skills gap demonstrated a barrier to wide-scale adoption. Key questions for moving forward: Do teachers have time for learning analytics? How do we train teachers to use quantitative data? How does learning analytics enrich the classroom rather than distract from it? What merits do we use to ‘sell’ the use of analytics to teachers?
  2. Learning analytics needs to be more than just numbers. As education is inherently a complex and qualitative process, it is important to not just quantify learning, but rather to consider and incorporate real, authentic human elements into the discussion. It is important to keep in mind that learning analytics is meant to serve and embellish education. However, to preserve this notion, it is important to consider what we mean by ‘education.’ What goals and values does education serve in our society, and how can learning analytics enhance that?
  3. The field cannot mature without further considering data ethics and privacy. Points were made today that this is not just about student ownership of their own data, but also about the privacy of the teacher. This is particularly important as learning analytics becomes increasingly tied to student performance and can, thus, be linked with teacher performance.
  4. There were several ideas put forth today that student data should go beyond simple activity data. Rather, analysis should include rich data sources such as disposition or assessment data to create a more rounded picture of student performance. At the same time, there was pushback that activity data can be useful as a ‘quick and dirty’ understanding of students’ understanding and engagement. Key questions to consider here: What do we want to know about students? What data do we need to get there?

However, just as important to what we did discuss are the things we didn’t discuss. Here are some notable absences in today’s discussion, which may need more attention in Day 2 of the workshop (and in moving forward the field in general).

  1. Much of today’s discussions centred on institutions, teachers and tools. One critical aspect that was not explicitly addressed was the student. How is learning analytics serving students, and what tangible benefits do they gain from the adoption of learning analytics practices and policies?
  2. Today’s talks mostly considered learning analytics in the higher education environment, whereas more discussion is needed on how this translates to other domains: schools, workplace, informal learning, etc.
  3. It’s easy to make wide suggestions for moving forward, such as ‘include more teachers’ or ‘use more data.’ However, little discussion has centred on how to address these good ideas in practice. What is explicitly needed in policy and practice to push forth these initiatives? What realistic objectives can we work towards in the creation of new policies to support our goals in the field?
  4. Here’s me being a little biased and tying in my own research: there was plenty of discussion about keeping in mind the diversity of needs from a teacher perspective. However, there was little talk about the diversity of the students themselves. When we consider the ‘human’ elements of learning analytics, we must also consider that the human beings whom the data serves are multi-faceted themselves and come from a wide variety of cultures and ethnicities. How do diversity and culture play a role in (1) the data we collect about student dispositions and activities and (2) students’ views of the ethical use of their data? I personally feel that there is an urgent need in learning analytics not to homogenise students and their backgrounds, and to consider how things such as ethnicity and culture affect student behaviours (as this is also key to the transferability of findings between institutions internationally). Similarly, we must remember that this ‘human element’ is not a synonym for, or a ‘one size fits all’ description of, all those served by education systems.

 

 

LAEP project: What has been accomplished so far?

Today’s LAEP/LACE Expert Workshop highlighted findings thus far from the LAEP project. You can find most of the resources online for comment. Here’s what was presented from the project today:


 

LAEP research aims:

  • What is the current state of the art?
  • What are the prospects for the implementation of learning analytics?
  • What is the potential for European policy to be used to guide and support the take-up and adaptation of learning analytics to enhance education in Europe?

What do we want in the future from learning analytics (looking 10-15 years on), and how can policies influence this?

This was a nine-month study, which delivers its final report in June. It was made up of several key areas: a literature review, a glossary, an inventory, case studies, an expert workshop, and a final report.


 

LAEP glossary:

A list aimed at someone brand new to learning analytics, so they can look at the glossary and see the main terms. It was developed using frequently used keywords from the LAK dataset.
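
As an aside, here is a purely illustrative sketch of how a glossary might be seeded from keyword frequencies in a corpus of abstracts. The file name, JSON structure and stopword list are hypothetical; this is not the LAEP team’s actual method, just a sketch of the general idea.

```python
# Minimal, illustrative sketch: pick out candidate glossary terms by counting
# frequently used words in a corpus of abstracts. The file name and JSON
# structure are hypothetical.
import json
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "in", "to", "for", "on", "with", "is", "are", "this", "that"}

def candidate_terms(abstracts, top_n=50):
    """Return the top_n most frequent non-stopword tokens across the abstracts."""
    counts = Counter()
    for text in abstracts:
        tokens = re.findall(r"[a-z]+", text.lower())
        counts.update(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return counts.most_common(top_n)

if __name__ == "__main__":
    with open("lak_abstracts.json") as f:   # hypothetical export of the dataset
        records = json.load(f)
    for term, freq in candidate_terms([r["abstract"] for r in records], top_n=20):
        print(f"{term}\t{freq}")
```

In practice one would probably also look at multi-word phrases and domain-specific stopwords, but the principle of ranking terms by frequency is the same.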


Literature review:

This literature review focused on implementation, which is a relatively new focus for the field. A few key areas were highlighted that need to be addressed in order to move the field forward:

  1. Underpinning technology
    Systems need to be technically and semantically interoperable. We don’t want everyone across Europe reinventing the wheel and not sharing expertise. We need systems that can talk to each other and can link and build on each other (a minimal sketch of what an interoperable activity record might look like follows this list). We also need clarity and context in the data sets: do they describe school children or university students? We need to think about the quality of the data and whether our data sets are complete. Do they have holes or gaps? Are they out of date? Are people giving us the correct data? Who owns the data? Is it owned by the individuals who generated it, or the institution, or groups? At the moment we don’t really know the answers to these questions. We need to know who should have access to data and why, and when that access is needed or should be removed. More research is also needed on data warehousing and storage.
  2. Policy, codes of practice and governance
    A lot of data has been gathered, without people necessarily knowing it has been collected. How can we tell them about what is going on? How do we involve people in the creation of policy? How do we preserve privacy and adapt practices as things change? There is a need to keep people updated with what we are doing, otherwise their consent is meaningless as they do not know what they are consenting to.
  3. Skills and data literacies 
    Current capability to develop and deploy analytics is still low. There is a skills gap among those who can do the analysis, but also among those who need to use the analytics once they are produced (teachers). Data may mislead teachers, or they may simply set it aside because it is meaningless to them. We need an open and shared analytics curriculum covering both technology and pedagogy. Research is needed on the use of visualisation and on helping it make sense to others.
  4. Culture, values and professional practice
    We need to relate analytics to the purpose of education. To some extent we agree on that, but it can also vary by our context or institution. We need to think about what we are trying to achieve with education and how we can link analytics to that so it really works for people. If we want people to use analytics, how do we build that into training, so they can be confident with using it? Educators need to be able to use the tools and make informed decisions.
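
To make the interoperability point above a little more concrete, here is a minimal sketch of a learner-activity record loosely modelled on the actor/verb/object shape of an Experience API (xAPI) statement. The names, URIs and course identifiers are invented, and this is not taken from any LAEP system; the point is simply that shared, agreed vocabularies are what let systems talk to each other.

```python
# Illustrative only: a simplified learner-activity record loosely following the
# actor / verb / object shape of an Experience API (xAPI) statement.
# All names, URIs and course identifiers below are hypothetical.
import json
from datetime import datetime, timezone

statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",                       # hypothetical learner
        "mbox": "mailto:learner@example.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",  # drawn from a shared verb vocabulary
        "display": {"en-GB": "completed"},
    },
    "object": {
        "id": "https://example.org/courses/stats-101/quiz-3",  # hypothetical activity
        "definition": {"name": {"en-GB": "Quiz 3: Descriptive statistics"}},
    },
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# Any receiving system that understands the same vocabulary can interpret this record.
print(json.dumps(statement, indent=2))
```

This also connects to the ‘shared data dictionary’ point in the case studies below: without agreed verbs and activity definitions, two systems can exchange records without actually understanding them.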

Inventory

The purpose of the inventory was to show the state of the art of the practical adoption of learning analytics. It was a “broad but shallow” collection of informative examples in three areas:

  1. Policy documents (14 entries)
  2. Practices (13 entries)
  3. Tools (19 entries)

Option to look at these online and add to them — additions welcome.


Case Studies

LAEP also did in-depth case studies of six organisations:

  1. BlueCanary as a commercial provider of data science expertise
  2. Kennisnet’s work in raising sector awareness, knowledge and skills
  3. University of Technology Sydney’s use of learning analytics as part of a data-intensive strategy
  4. Apereo, and their creation of an open-source software stack for learning analytics
  5. Norway’s recent government initiatives and funding of a national centre for learning analytics (SLATE)
  6. the Open University’s institutional initiatives to create an ethics policy specific to learning analytics

As part of the case study process, LAEP also considered the role of policy for each. It is important to note that each comes from a different perspective, with very individual motivations. Nevertheless, several key policy considerations were identified in the case studies:

  • A need to holistically include stakeholders in policy discussions, including students, teachers and industry. It is important for stakeholders to be defined and actively incorporated in the process.
  • Testing and evaluation of schools/students may lead schools/teachers to shun the use of new technologies. There is a perception that evaluation methods may favour outdated practices.
  • A need for more explicit ethics policies specific to education at institutional, national and international levels.
  • A shared data dictionary and the ability to ethically share data between institutions.

 

Liveblog — CALRG reading group, ‘Learner Performance in Multimedia Learning Arrangements’

This week’s CALRG reading group discussed the paper: ‘Learner Performance in Multimedia Learning Arrangements: An Analysis Across Instructional Approaches,’ by Tessa H.S. Eysink, Ton de Jong, Kirsten Berthold, Bas Kolloffel, Maria Opfermann, and Pieter Wouters.  Below is a liveblog summary of our discussion. Each bullet represents one point made in the discussion (which does not necessarily represent my own views). As always, please excuse any typos or errors as it was written on the fly.

  • Why we chose this paper for CALRG: This is firstly a good paper for PhD students as a basis for their literature reviews, as it defines concepts well and uses good references. The bibliography is interesting as it didn’t have the ‘usual suspects’ (i.e. papers typically used in UK research). There is a tendency to only cite things in your own area, so this paper is a good grouping of the literature that we don’t see very often, and it references well-respected international journals. This paper also pulls together four substantial contributions to the field, which is something that needs to be done in a PhD thesis literature review.
  • This paper also had me thinking about how I evaluate quantitative papers. Things that demonstrate rigorous quantitative research in this paper: a clear write-up of the methods (often if authors don’t do that, it’s because they don’t really know the rationale for what they’ve done), pilot tests to check the reliability of their instruments, and the use of statistical validations such as Cronbach’s alpha (a minimal worked sketch of this statistic appears at the end of this post).
  • This paper made me think: could we do this work here in the OU or in IET? After all, we have access to large groups of students working online, such as OU students and over 3 million FutureLearn students
  • This paper comes from the learning sciences field, which tends to take a parallel track to educational technology. In these fields, there is a tendency to do quantitative research in a rigorous way.
  • This paper is a response to the Kirschner, Sweller & Clark (2006) article, which found that guided instruction is superior to the unguided/minimally guided approach. The Eysink et al article works to highlight that actually it’s more complicated than that. It readdresses the notion that unguided approaches and the task of reviewing examples take longer, but are more effective.
  • What they’ve done in this paper is ask four world leading institutions to design a learning approach and a learning design based on their expertise, and then tested to see which was the best. This is something we can do in our own research as well. What makes this paper so interesting is that each institution probably thought their approach was best, but the analysis found there were subtle differences.
  • Previous research has looked at the sequencing of exploratory learning: should it be instruction followed by exploratory learning, or the other way around? This is a more interesting question to ask than whether you need to learn on your own or be taught — maybe they both are important, but the sequence is the key.
  • This paper is 7 or 8 years old, and the top education journals are now moving towards more of these types of studies.  If we want to play in these big journals, what can we do to follow those kind of approaches?
  • That depends on the journal, as some are still focused on more traditional educational research, but if you want to publish in Computers & Education, you have to do a quantitative or mixed method study
  • I’m not sure about that. I think that Computers & Education, for example, is keen to publish more qualitative work, but they just don’t tend to get qualitative submissions of a good standard
  • You could certainly do this study from a qualitative perspective about the user experiences, and it would still be a good study as long as the methods were rigorous
  • A profound variable in learning is the teacher, so they made a conscious decision in this research to go with technology enhanced learning because that removed the instructor. At the OU, we have a lead in these types of studies because we can do things that are entirely online, which don’t have the variable of different instructors
  • We had this conversation this week in the Leverhulme PhD student training.  Maybe the results you are getting in your research are because of the positive presence of the researcher or the teacher. To do rich qualitative work, you need to control for interaction somehow.
  • In this sense, Tessa Eysink is not necessarily tech-oriented, but adopting educational technology allowed her to better answer her research questions in this study
  • This research design has me thinking about what could be built upon this study. For instance, the research was undertaken in two countries, but there was no cross-cultural evaluation. It also would be interesting to see how these approaches led to learning gains over time. The post-test in this paper was taken right after the activity, but it would be interesting to do it later to determine retention.
  • Question: This research and its findings are interesting, but to what extent has this been practically implemented?
  • That’s a real problem in educational research that needs more attention. There are some efforts –> for example, UCL has been running a series on what research says about addressing problems in education (note: anyone have a source for this?). We have to consider how we can take the best research findings into the classroom. People often say we don’t have good research about education, but that’s entirely wrong now. We have 10-15 years of high quality research, but it isn’t being filtered down to practice. Yes, more research is needed, but what we know now needs to be translated into the classroom. One reason for this gap is teacher training, which needs to incorporate research. Some teachers were trained 30 years ago, when our understanding of education was different.
  • That’s not entirely the case, because you do see dodgy research flooding through schools, like learning styles or benefits of hydration.
  • You’re right. Why does dodgy research take off?
  • A lot of it is very appealing and seems intuitive. These ideas are seductive to the broader audience because they make sense, and are easy to engage with.
  • A lot of it is in the marketing of research findings
  • There isn’t really an easy route from research into commercialisation. You can come up with a good idea, but there’s no route towards marketing it or getting your research out there.
  • You have to push to get your findings out there to the general public, and few people do that.
  • Don’t underestimate the impact of things like our Innovative Pedagogy reports
  • As a follow-up to this study, the authors have done 3 or 4 more studies to consider changing the sequence or intensity, and all of those papers also got published. Then their funding ran out, the follow-up funding fell through due to changes in the Dutch government, and the researchers moved in a different direction.
  • Moolenaar et al (2010) –> even if schools find method X is better than method Y, it’s the head and the informal head of schools that determine the practices adopted
  • There’s always a 5 or 6 year delay from research to practice. The only way to move research forward is to build on other people’s work. Even if you’re more of a qualitative researcher, you should still read these kinds of quantitative articles (and vice versa), in order to understand the groundbreaking results in the field and see how to connect them to your own work, regardless of methods
  • One criticism of this paper is that it is quite long; it could have done with a few more headlines and summaries to make it more user-friendly
  • It’s interesting to see that it took a year for corrections to get it to a published state. It was a long process.
  • It’s also interesting how they looked at learning gains by splitting it into four areas, as one common problem in research is understanding how to define learning gains. However, their different types of learning gains are not defined too well in this article.
  • I’m also interested in their translation process between German and Dutch, and how rigorous this was. Who did the translations? What was the source language? There could be differences in understanding based on subtle translation differences.
  • It also seems interesting that they opted to do the study in two countries, especially as they have taken steps to account for so many other variables (subject, age, teacher impact, etc). This isn’t really justified in the paper, and the opportunity to do a cross-cultural comparison isn’t used.
  • One problem with conducting this in two different countries: In Dutch education you have statistics in high school, but statistics are not as emphasised in Germany. This means that students may have had different understandings or skills needed to accomplish this task. However, the difficulty in any research is that there are always mitigating circumstances. There could be millions of reasons why students engaged in these tasks in certain ways, leading to different results. In a PhD thesis, you’d have to write about these confounding factors.
  • This paper does highlight that good research has international connections, and it’s really exciting that we can connect across countries to study different ideas. We should be using these relations more.
  • Yes, but those international connections were understated, and even ignored, in this paper. They had an opportunity for a really rich cross-cultural comparison, but they didn’t use it.
  • This article was a good addition to CALRG, as it takes on the Kirschner, Sweller and Clark article, which we had talked about in a session last year. At that time, we said their findings wouldn’t happen in real life, and those are the things that Eysink et al specifically addressed. It shows that from this sort of discussion we are having today, you can go on and build a foundation for your own research, and decide how to investigate ideas properly.
  • Educational technology is a paradigm war in a way, but it’s good to have this debate and see where the tipping point is, to understand where inquiry learning works and under which conditions.
  • This would be really irritating for a teacher, though: they don’t or can’t read all the research, but they see these conflicts between researchers.
  • Another interesting thing in the research is that it tends to look at one design over another. Why does it have to be one method? Why is more research not looking at combinations of different styles?
  • This paper highlights well that what gurus in the field say is a good starting point for your own papers. Then you build on that previous research, even if your findings are different, by verifying what you find with your analysis.
  • Things we need to think about: How can we build on what we currently know, and how do we make people aware of our findings? How do we bring people together to do the kind of research that this paper highlights?
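
Finally, as a footnote to the point above about statistical validation: below is a minimal worked sketch of Cronbach’s alpha, the internal-consistency statistic mentioned in the discussion. The scores are invented for illustration and have nothing to do with the Eysink et al data.

```python
# Minimal, illustrative sketch of Cronbach's alpha. The scores below are
# invented; this is not data or code from the paper discussed above.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores)
    """
    k = scores.shape[1]                           # number of items
    item_vars = scores.var(axis=0, ddof=1)        # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 4 test items, scored 1-5.
scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```

Values around 0.7 and above are commonly (if somewhat arbitrarily) treated as acceptable internal consistency, which is why reviewers look for the statistic in quantitative write-ups.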