Reframe: Episode 75

Learning What Works in Online Learning

Online and virtual learning is becoming an important part of education, especially as more schools now strive to adapt and evolve in a post-pandemic world.

In this episode, Dr. Jason Abbitt, a Miami University associate professor who specializes in educational technology, online learning, and distance education, talks about new research that has implications for designing the kinds of educational experiences that can help students succeed in online and virtual learning environments.

Additional music: Lee Rosevere “As I was Saying,” “Thought Bubbles,” and “Curiosity.”

James Loy:

This is Reframe, the podcast from the College of Education, Health and Society on the campus of Miami University in Oxford, Ohio.

The way we now approach teaching and learning, and even schooling in general, is changing, and technology is driving a great deal of that change.

Using technology to create engaging online and virtual learning experiences is no longer just an idealistic aspiration among innovative educators trying to find new ways to engage learners.

These tools and techniques are here. Right now. And, in fact, they are quickly becoming more important than ever, especially as more schools and universities continue to try and adapt and evolve in our post-pandemic world.

So while online and virtual learning experiences may now be making more headlines than ever, the phenomenon has actually been studied extensively, especially by researchers like Dr. Jason Abbitt.

Jason Abbitt:

Online learning is one of those things I could talk about pretty much all day long. It's something I find interesting in that I think there's ways to do it well. I think it opens up educational opportunities where they may not be elsewhere. So how do we do it the best that we can?

James Loy:

Dr. Abbitt is a Miami University associate professor of educational psychology, who specializes in studying educational technology, online learning, and distance education.

His latest study, conducted alongside his colleague and co-author Miami associate professor Sarah Watt, combines online learning and learning analytics with virtual field experiences for aspiring teachers.

It’s a study that not only takes a unique approach to online learning in teacher education, but it also has wider implications for a variety of contexts -- including how we improve online learning, how to potentially better monitor student achievement in online courses, as well as how we can start to understand what works, what doesn’t, and why.

These are some of the concerns that are becoming increasingly pressing, especially in a world that is turning more and more attention and urgency to online learning, remote delivery, and virtual teaching experiences in general.

Jason Abbitt:

I think they are becoming more prevalent. In fact, with the current COVID-19 environment, I think we're rapidly going to see a lot more of them.

(Transition MUSIC)

James Loy:

In their new study, Dr. Jason Abbitt and Dr. Sarah Watt used a learning management system, or LMS, to measure the relationship between user activity and student performance through a series of online learning modules based around virtual field experiences.

In the field of teacher education, field experiences are how teachers in training start to get experience working in real classrooms with real students. Most people might recognize it by another name: it’s often called student-teaching.

And a virtual field experience is just what it sounds like. It adds an internet-based or multimedia component that can augment aspects of traditional in-person student-teaching.

Jason Abbitt:

There's always been a preference towards that live experience in a school. That's almost irreplaceable. So that's been important. And I think we're seeing more of these virtual field experiences be a little bit more targeted, trying to make sure, when we really want students to have an experience with a particular teaching strategy, for example, that we can select the media content, and really have a little more targeted control in this vicarious field experience, or virtual field experience.

James Loy:

Now since this particular study took place in a class of college students learning how to become special educators, Dr. Abbitt and Dr. Watt created a virtual field experience around a series of video modules that highlighted 22 different high leverage teaching practices.

These were videos that featured real teachers actually teaching real students. And the college students' job, for their assignment, was to watch these videos and correctly identify which of these teaching practices they observed.

The college students completed this assignment 3 different times across 3 different modules. And throughout the entire process, the researchers were able to track how accurate students were in identifying the teaching practices they observed, and their improvement over time.

But they also tracked how students navigated each module, what they clicked on, how often they clicked on it, how often they viewed certain pages, and how long each module took to complete.

Jason Abbitt:

One thing that we were primarily interested in, I think, was: What was the timeframe? Like, we designed these activities with the expectation that they … that the complete experience would require somewhere between, I would say, four to six hours of work. And we know that our students are working asynchronously. So they might do part of it one day, part of it another day.

And what we were interested in, in this study, was: When they started, and when they finished, and did it fit within the time frame we were expecting?

So this was a week of a class that would have had a start and end point. And we also wanted to know, within that week, how many days were they active? So then we started looking at some different characteristics. We were able to pull the data out of the LMS about those navigational records, and analyze it, so that it gave us some information.
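
To make the kind of navigational-record analysis Dr. Abbitt describes concrete, here is a minimal sketch in Python. It assumes hypothetical clickstream records of (student, timestamp) pairs exported from an LMS; the identifiers and values are illustrative placeholders, not the study's actual data.

```python
from datetime import datetime

# Hypothetical clickstream records: (student_id, ISO timestamp), one per LMS click.
# These values are illustrative placeholders, not data from the study.
events = [
    ("s01", "2020-02-03T09:15:00"), ("s01", "2020-02-05T20:40:00"),
    ("s01", "2020-02-07T18:05:00"),
    ("s02", "2020-02-07T10:00:00"), ("s02", "2020-02-07T22:30:00"),
]

def activity_summary(events):
    """Per student: total clicks, start-to-finish span in hours, and days active."""
    by_student = {}
    for sid, ts in events:
        by_student.setdefault(sid, []).append(datetime.fromisoformat(ts))
    summary = {}
    for sid, stamps in by_student.items():
        span_hours = (max(stamps) - min(stamps)).total_seconds() / 3600
        days_active = len({t.date() for t in stamps})  # distinct calendar days
        summary[sid] = {"clicks": len(stamps),
                        "span_hours": round(span_hours, 1),
                        "days_active": days_active}
    return summary

print(activity_summary(events))
# {'s01': {'clicks': 3, 'span_hours': 104.8, 'days_active': 3},
#  's02': {'clicks': 2, 'span_hours': 12.5, 'days_active': 1}}
```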

James Loy:

So it seems to me like this research was designed and intended to get at two different kinds of problems, and hopefully provide solutions to each.

On one side, you have the concept of virtual field experiences. So I know in teacher education, like, student-teaching, getting experience in the field can be a challenge because you have to transport students to the school districts they’ll be working in, and sometimes that can be a problem. And also, of course, if you have an online program, your students could potentially be from all over the country. And so how do they get this field experience, or this real classroom experience? So it seems like using these videos would be a way to test if that actually is a viable way to do virtual field experiences.

But then on the other side, the second problem you talk about is the idea of learning analytics, which that’s a big issue today, right? What’s effective? What’s not effective? What works and what doesn’t work? So when designing an online class or online experience, you’re also looking at how students interact with it and what actually is an effective way to design learning experiences in general, which could apply to almost any kind of class.

Is that accurate and correct to say about this research?

Jason Abbitt:

It is. Yes. And really, as an initial study into this activity, it really just gives us a baseline to compare future results to. We did see some things that … learning analytics is a pretty broad field. But often, we look at data … and we're trying to figure out, like, out of all of the data that we see -- so we see things like how many times a person views a page, and how long ahead of a due date they submit an assignment -- we can do all of the calculations. And our goal, initially, I think, with this study, was to find out, well, out of all of that, what measures actually matter? What relates to the outcome for pre-service teachers? In this case, what relates to their ability to observe effectively, and identify and notice specific teaching practices?

So that’s the skill we were trying to cultivate with this. And what we wanted to try to identify is: what's the evidence within what's generally called “clickstream data” within an LMS. So things that monitor every click. And also, we can add some calculations on that to characterize the student behavior.

But out of all of what we could calculate, what actually matters? What relates to that performance? And also, what was the level of activity throughout that?

So, I think what we generally saw was, out of the set of three virtual field experiences that were within this class, we saw a lot more activity -- so more clicks, more page views -- within that first virtual field experience than we did in the second or third. Which, as a faculty member, as a teacher, you look at that and think, “Oh, wait. Does that mean they weren't working hard on the second or third ones?”

And if I were just looking at the frequency analytic data, then I might believe that. That could almost make sense. But when we look at the actual accuracy, to what degree the students were able to identify high leverage teaching practices, we saw an increase in that over time. So we're able to see that they were getting a higher level of achievement with a lower level of clickstream activity within the LMS. Which, in general, is a good thing. That means they were able to get better at this over time. In fact, we interpret that data as seeing that they were able to get more efficient with how they were doing this. And we interpreted that as a positive outcome.

They were getting more accurate over time with less activity. So, that generally is an indicator of an increasing mastery of that skill.
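
Screening which clickstream measures actually relate to performance can be illustrated with a simple correlation. The sketch below is a hypothetical illustration, not the study's analysis: it pairs one derived measure (days active) with accuracy scores on the observation task, using made-up values.

```python
import math

# Hypothetical per-student pairs: a clickstream-derived measure (days active
# during the module week) and accuracy on the observation task. Illustrative only.
days_active = [1, 1, 2, 3, 4, 5]
accuracy    = [0.45, 0.40, 0.55, 0.60, 0.70, 0.80]

def pearson(xs, ys):
    """Pearson correlation between a behavior measure and an outcome measure."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Repeating this across many candidate measures is the screening step:
# measures that correlate with the outcome are the ones that "matter."
print(f"r = {pearson(days_active, accuracy):.2f}")  # strongly positive by construction
```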

James Loy:

So the first assignment, or the first module, it took them longer to complete. But that’s most likely because there's a learning curve, right? They’re just learning how to interact with the assignment, even the interface itself.

So it’s like when you drive somewhere. The first time you go to a place it might take you longer because you don’t actually know where you’re going. But the second time you drive, it’s much quicker because you know how to get from point A to point B.

Jason Abbitt:

Exactly. And that’s one aspect that's generally thought of as a positive. A best practice within online learning is that you have assignments and activities that are similar, so that there's less of a cognitive load on knowing, like, how to do that.

Another analogy would be, like, think about when your grocery store was last reorganized. Suddenly it would take you twice as long to get around the grocery store. Whereas if you know how something's going to work, then you can get through it more efficiently. So there's some benefit to repeating a similar type of activity like this, in that … what we hope is that the students won’t have to spend as much time figuring out what they have to do for this activity, and can better focus on getting at what they need to learn from that activity, and really enhancing that skill. And generally we did see that overall trend.

We also found some results that were not surprising in some cases. Like, I mentioned we were interested in the timing of when someone was working. And that may lead to, in the future, some teaching strategies that we could implement within our online courses.

And what we saw was -- I often referred to it as the most unsurprising result ever -- which is students who worked on this activity over a longer period of time did better than those who worked on it in one or two days.

So, that short duration -- trying to do a lot of work in a short amount of time -- was not as effective. They were not as accurate in what they produced as those who spent a little bit more time engaged with this activity. Again, I think of that as very unsurprising. Because as a teacher you think, “Well, of course. Spend more time on something, and you're gonna do better than if you try to cram it all into a short duration.”

But, in this case, we can now see that: One, we can detect that. So now that we can detect it, in the future, we could leverage that with some type of messaging about … reminders to students, or other motivational interventions that would prompt them to get in and get engaged with that. There are some other things that we might also be able to do. But it was interesting to actually be able to see that within the data, and now have evidence that we can move forward with.

(Transition MUSIC)

James Loy:

So by analyzing the user activity data from the learning management system, Dr. Abbitt and Dr. Watt uncovered two key findings. The first one was that as students worked through each online module, they used fewer and fewer clicks and page views to complete each assignment.

Secondly, they also found that students who spaced their work out across a larger span of time tended to perform better than students who completed everything all at once.

So Dr. Abbitt, with these two key findings, what are some of the biggest implications for educators? Maybe not just for teacher educators who hope to use these virtual field experiences to help teachers in training become professional educators. But also, maybe in general. What does this mean for educators who hope to design effective online learning experiences of their own? What are the implications there?

Jason Abbitt:

So I think there really are two sides of this story.

One is the virtual field experience. We did see an increasing mastery of the skill that we were targeting. So that gives us a lot of confidence that we're on to something that’s promoting the skill that we're trying to target with this, and trying to help develop within our pre-service teacher candidates. So that was a positive. So the one big takeaway is simply that this model of virtual field experience can produce positive outcomes very clearly.

And then the other side of this is thinking about the learning analytics and what that tells us about the design of these modules. And we saw value in that repetition of the same type of module repeated throughout a class. So, one of the key parts of this virtual field experience is the observation that takes place, which is a student watching a video and trying to monitor for specific things. And what we're trying to do is, again, increase really two types of observational accuracy.

One is how many times did they see something that was actually there? But we also measured how many times they noted something that wasn't present in the video. So that's the false positive. And given a list of 22 high leverage practices, a lot of pre-service teachers are going to feel compelled to notice all of them at least once. Just because that seems like what the activity would be asking you to do: find all of these within this video. And, really, we wanted to increase their accuracy, and their ability to really deconstruct what they're seeing, as opposed to filling out a list of 22 practices.

So out of 22, there are only a certain number present. And so, what we're trying to, I think, do is change that activity so that we still see the same increase in accuracy over time, but also a decrease in that inaccurate observation, that false positive. So part of that is as simple as improving our directions in how we introduce the activity.

So now that we have seen that behavior of trying to identify things that really aren't there, we can modify our instructional design a bit, to reshape the instructions, set the expectations a little bit differently. And it will be interesting to see how that changes some of the LMS analytics.

So the next step with this is really to, again, operationalize what we've seen as far as the student online behavior, and how it relates to their accuracy. So that we can take action on these learning analytics, and make an intelligent instructional decision based on the data that we see.
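
The two types of observational accuracy described here, correct identifications versus false positives, can be expressed with simple set operations. The sketch below is a hypothetical illustration; the practice IDs are placeholders, not the actual list of 22 high-leverage practices.

```python
# Hypothetical scoring of one observation: out of a checklist of practices,
# which did the student mark, and which were actually present in the video?
present = {"HLP3", "HLP7", "HLP12", "HLP18"}          # actually in the video
marked  = {"HLP3", "HLP7", "HLP12", "HLP5", "HLP20"}  # the student's observations

hits = marked & present             # correctly identified practices
false_positives = marked - present  # marked, but not actually present
misses = present - marked           # present, but not noticed

print(f"hits: {len(hits)}/{len(present)}, "
      f"false positives: {len(false_positives)}, misses: {len(misses)}")
# hits: 3/4, false positives: 2, misses: 1
```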

James Loy:

And since learning analytics was a big part of this study, and what you hoped to look at, what can you tell me about the state of learning analytics today? Because finally understanding what works and what doesn’t, or what’s effective and what’s ineffective, especially with regard to teaching and learning, and student success – it’s like those insights seem to be almost like the Holy Grail that we’re searching for in a lot of ways.

So how close do you think this new study gets to chipping away at that Holy Grail of, yes, these are effective measures? This actually works? And how much further do we have to go before learning analytics is not such a big question mark for people?

Jason Abbitt:

That is a good question. As I've moved more towards research in learning analytics, and also participated in the faculty learning community on learning analytics at Miami, we all take a lot of different approaches to how we try to understand what data is important. And there are many different approaches. And the Holy Grail would be that we're able to see things that … see those patterns that we couldn't see otherwise, that would impact student achievement.

And I do think we are still a ways away from understanding exactly what measures are important. So I end up always back at square one, which is: We think this is important based on relevant learning theory, and some other things.

But we're also dealing in imperfect circumstances in that our technology systems weren't meant to track these things, specifically. By that I mean that Canvas tracks every click, but every click may not be important. So the signal-to-noise ratio is a little out of whack in this case. And we're trying to refine things to get more signal out of that noise, and do so in an efficient way, and that's a pretty big challenge.

We also have some more practical challenges of … One thing we noted in this study was: To really make sense of this data, it would be helpful to have large samples -- a lot of students doing the same thing. However, best practice in online learning would be smaller class sizes. So, we're at a bit of a conundrum there in that we can't generate enough data, while also adhering to good instructional practice. So we have to take a slower approach to this, and I think that's where we are.

And there are some other very unique circumstances, like the massive open online courses, where you can get a lot of data, and that's what a lot of the learning analytics research is based on -- those things that have very large samples.

I think what we were trying to do with this study is instead of looking at that big data picture, think about small-scale learning analytics. How can it apply to a smaller course setting in a way that's still meaningful?

And we might not find every correlation that we might find if we had bigger data sets. But I think as long as we can get a few actionable pieces, that we can then continue to test for, and see how we can take analytics, and move it to an actionable item, then I think we're in the right spot.

But the big picture is learning analytics is all over the place. But where I don't see it looking a lot is on this small scale. In other words, where learning analytics is about big data, and big data means you have a lot of people, a lot of subjects and a lot of variables, a lot of things measured. And, practically speaking, in our program, where I'm most interested in having an impact, we're not gonna see that level of big data. So we have to take a slightly different approach, and try to identify things that are meaningful in a smaller scale.

And that's where I hope to continue working: we want to find small things that are actionable, and that then lead to the next improvement that we want to make with the virtual field experience, and try to still make forward progress, while focusing on smaller sample sizes.

(Transition MUSIC)

James Loy:

So what are some of the best practices when it comes to designing an online learning experience? Maybe for any educator who hopes to do more with online teaching and learning, or maybe someone who may be forced to do more with this soon?

I know we mentioned a couple already with learning analytics, you know, separating the signal from the noise to learn what’s working and what’s not working. Also, smaller class sizes I believe is one you just mentioned as well. But could you talk a bit more about some of the best practices in general when designing an online learning experience?

Jason Abbitt:

Yeah. So one thing that I mentioned here was the similarity in the structure of the activity, and I think the one challenge that many instructors face when initially shifting to online learning is solely thinking about the subject matter, and how to organize, deconstruct, break down, sequence the learning around the subject matter. And I don't know that as much attention is paid to the student learning experience. Like, what it looks like from a student's view in order to be successful in an online class.

So one thing that we did here was make sure that, again, these modules happen at three different points in class. So, at three different points within an eight-week class, they were doing a similar experience. So that meant by the second time they knew how to do it. And then by the third time, they really knew how to do it. So they're able to do something similar, and I think oftentimes we want to…  in designing online classes, we need to think about the structure and similarity from one week to the next, and how students can do assignments, activities that are similar, but maybe focused on different aspects of the subject matter that we want them to learn about, or the skills we want them to develop. But the similarity in that structure is helpful to get there. In other words, the less time a student has to spend figuring out what they have to do, the more of that cognitive load they can devote towards actually learning the subject matter.

So, a good structure that can be … that's similar and can be familiar. So that as a student goes through a course, or perhaps even an entire set of courses that are similar in some respect, allows them to learn more efficiently, and hopefully more effectively as well.

James Loy:

I’m also just curious to get your take on how COVID has fundamentally changed how we’re talking about or thinking about online and virtual teaching. Because things changed so quickly. In the spring they made that switch to remote delivery and online learning, like, so quickly. And were many schools caught off guard? Did they wish they were further along than they were and had to play catch-up? Or how do you think -- as we look forward to what could potentially happen going forward now -- how is this going to change how we either approach it or deal with it or how it evolves from here?

Jason Abbitt:

Yeah, I think what we do run the risk of is believing that remote delivery and online learning are the same. In other words, we had to shift to remote delivery in a week. So what started on a Monday is, like, by Friday every campus was … classes were no longer meeting on campus.

So, what we're doing in online learning is a much longer process, a bigger process, of intentional course design, and careful attention to student experience. And there's really a bigger picture than what we did to, essentially, put on a spare tire for remote delivery, which is: We had a problem. We adjusted.

And now, there have been some, I think, phenomenal things that faculty and students have done to really step up to this challenge, and learned a lot in a very short amount of time.

But I think in the future, most places, probably across the board, wished that they had been a little bit further along in online learning. So I do think most colleges, most universities in general, wished they had been a little differently prepared.

But I think also there are many that should be proud of what they're able to do. So many faculty members, I think, adapted very quickly to get something that was successful in a remote delivery fashion.

What I hope the next steps are is that that gives them a basis to reflect on how to create an actual online learning experience a little bit differently. If it's necessary, and if it makes sense.

Because we've always had this approach with online learning that it has to serve some purpose. There has to be a reason for teaching online, whether that's to increase accessibility of our courses geographically, or temporally, like for people who may be working at different times. There's a reason for moving online that I think is … Something we need to think about moving forward is how we do that, and also use this as a way of preparing for disasters, whether they be pandemics, or snowstorms, or something in between?

Yeah. I hope to goodness we don't see quite this experience forever. But I am eager for the conversation to shift from keeping the wheels on the bus, to how do we do things most effectively, and intentionally, and carefully, and plan for online instruction? As opposed to the remote delivery, spare tire that we just put on, which, again, worked fantastic, and there were a lot of successes. So, nothing to complain about there. But I think we need to now shift into thinking, okay, now, how do we take that experience, and learn from it, and plan differently in the future.

James Loy:

Dr. Jason Abbitt is a Miami University associate professor of educational psychology, and his research study with Dr. Sarah Watt is set to be published in the Journal of Technology and Teacher Education.

This is the Reframe Podcast. We have many more episodes available, right now, wherever podcasts are found.