As an instructor, how can I make the most out of limited contact hours with students? A semester seems like a long time… between 28 and 30 hour-and-twenty-minute meetings. But in reality, that time goes quickly, and when the end of the semester rolls around, I often ask myself, “did my students actually learn anything meaningful?”
To make learning experiences meaningful, I wrestle with a basic question that most collegiate educators face daily. Do I go broad, and attempt to “cover” lots of material? Or do I choose core concepts and go into depth, giving students the time and scaffolding to ask deeper questions about the knowledge itself? This deep thinking is the gold mine that all instructors are trying to find, but sometimes students need some “surface knowledge” before they can start digging deeper.
But I don’t want to spend eighty minutes per class delivering surface knowledge through a stale PowerPoint. And judging by the looks on my students’ faces when I lecture for only thirty minutes, they can only take so much surface knowledge, which suits me, because I would rather have more interaction. Still, we need to cover some important topics. This is where Blended Learning in Higher Education (Jossey-Bass) offers direction. By mixing online instruction with face-to-face instruction in “blended learning” environments (sometimes called hybrid courses), instructors can leave broad knowledge acquisition to the computer, and use face-to-face instruction time to explain complicated topics like only an expert can. Authors Randy Garrison and Norman Vaughan pair theoretical backing with practical strategies for designing blended learning classes. They also present six case studies of redesigns of existing face-to-face courses.
I took interest in a case study that redesigned a Foundations of Chemistry course — a 500+ student lecture that met twice per week, with recitation sessions led by graduate teaching assistants. On average, nearly thirty percent of students failed the course. Critics pointed out that a lack of student/professor engagement led to the high failure rate, but the university had no intention of making the class smaller.
The authors consulted with the instructor to deliver a hybrid course that used online modules to provide instruction to students, followed by a brief assessment to test acquisition and retention of content. One of the lecture sessions was replaced with an additional recitation section. During the lecture, the professor went into greater depth on difficult concepts, but spent little (if any) time on easier concepts that students had mastered through the online modules and assessments.
I have struggled with the broad-knowledge-versus-deep-learning dichotomy, and I think this case provides an elegant solution for any instructor wrestling with the same issue. The authors used the underlying theory of “just in time teaching” (JiTT) (Novak et al., 1999) to restructure the course — and indeed, the entire teaching/learning paradigm.
“Just in time teaching”
One of the issues teachers face is the tension between “covering” lots of information and going deeper into knowledge. I will call this the “breadth vs. depth debate.” Giving students a chance to explore knowledge and ask meaningful questions promotes stronger retention and a stronger ability to apply knowledge. But this approach takes much longer than a broad lecture. Broad lectures do little for students who can learn the basic content from a self-directed online module or textbook reading. Given ten competencies on a given lecture topic, if teachers could pinpoint the two or three competencies that needed their full attention, they could make better use of class time. But how do you pinpoint these competencies?
The answer, according to Garrison &amp; Vaughan, is “just in time teaching” (JiTT). JiTT takes a mastery approach to teaching and learning. Students complete an online quiz prior to the lesson. The instructor performs an item analysis to determine which questions were the most difficult (the questions with the highest number of incorrect responses). These topics receive in-depth attention during the lecture. The students then have a chance to take the quiz a second time, after the lecture session.
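The item-analysis step is simple enough to sketch in code. Here is a minimal, hypothetical Python example (the function name and data layout are my own inventions, not part of any course management system) that counts incorrect responses per question and returns the most-missed question IDs:

```python
from collections import Counter

def hardest_questions(responses, answer_key, top_n=3):
    """Count incorrect responses per question and return the
    top_n most-missed question IDs, most misses first."""
    misses = Counter()
    for student_answers in responses:          # one dict per student
        for qid, correct in answer_key.items():
            if student_answers.get(qid) != correct:
                misses[qid] += 1
    return [qid for qid, _ in misses.most_common(top_n)]

# Three students, four questions: Q3 is missed by everyone, Q2 by two.
answer_key = {"Q1": "a", "Q2": "b", "Q3": "c", "Q4": "d"}
responses = [
    {"Q1": "a", "Q2": "b", "Q3": "a", "Q4": "d"},
    {"Q1": "a", "Q2": "c", "Q3": "b", "Q4": "d"},
    {"Q1": "a", "Q2": "d", "Q3": "d", "Q4": "d"},
]
print(hardest_questions(responses, answer_key, top_n=2))  # ['Q3', 'Q2']
```

In this toy run, the lecture would zero in on the competencies behind Q3 and Q2 and gloss over Q1 and Q4.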
The clear advantage for JiTT is targeted instruction. Weak points get the teacher’s attention, while easier content is largely glossed over. A second advantage is that after taking an assessment, students develop a stronger sense of what they do understand, and what they don’t understand. This gives their questions much more direction. If you know that you don’t know a topic, you are more likely to ask a question. The assessments also give the instructor individualized feedback on each student, helping to identify students who might struggle without added attention. The assessments give students extra test-taking practice as well. Finally, although not foolproof, the approach helps to solve the age-old problem of students failing to read assigned course material.
Two drawbacks for JiTT exist. First, building online assessments takes time. Teachers have to decide which competencies to test, then write psychometrically sound items to assess those competencies. If you dislike writing multiple-choice items for exams, this presents a more formidable obstacle. The labor is front-loaded, however. Once you have assessments developed, you can reuse them in future courses. The second drawback is that some students may object to the idea of being assessed on their ability to read and retain knowledge from a textbook chapter or an online learning module. However, two grading options can ease this problem. Instructors can use mastery settings (take the higher of the pre-test and post-test grades) or average the pre-test and post-test grades (providing an incentive for students to read and give their best effort on the pre-test).
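The two grading options each fit in one line. A quick sketch, using hypothetical function names of my own:

```python
def mastery_grade(pre, post):
    """Mastery setting: keep the higher of the two attempts."""
    return max(pre, post)

def averaged_grade(pre, post):
    """Averaging setting: rewards a real effort on the pre-quiz."""
    return (pre + post) / 2

print(mastery_grade(60, 90))    # 90
print(averaged_grade(60, 90))   # 75.0
```

Note the trade-off visible even in this toy example: under the mastery setting, a weak pre-quiz costs the student nothing, while averaging makes the pre-quiz effort count.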
My experience making it happen
I am teaching a 300-level course this semester, and I decided to implement this strategy over the weekend. So far, the response has been positive, in large part, I believe, because students are quite cued in during assessments. They are the testing generation, so if there is one skill they have refined, it’s the ability to take tests. I’ll discuss my implementation results in the next few paragraphs and offer some thoughts for future improvements.
Figuring out the mastery settings for online quizzes. The Angel course management system we use at Michigan State has a generic mastery option that allows students to retake an assessment an instructor-specified number of times. The grade can be set to the average of the two scores, or to the higher of the two scores. For JiTT to work, students have to take the pre-quiz. If students get a high mark on the pre-quiz, I see no need to force them to take the post-quiz. The simple way to do things would be to make one quiz with mastery settings that take the higher of the two grades. But the problem here is that students might be tempted to take the quiz only once, after the lecture, so that they could conceivably earn a higher grade.
Ensuring that all students take the pre-quiz means creating two separate assessments, one pre-quiz and one post-quiz, and then manually importing the pre-quiz grade for students who elect not to take the post-quiz. I will trade a little extra work for a more focused face-to-face class session. Fortunately, items copy easily from one assessment to another, so creating the post-quiz adds only a minute amount of labor. I have resolved that computer-based education can never be fully automated. So be it!
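The manual-import rule I follow boils down to a simple fallback. Since Angel cannot automate it, here is a hypothetical sketch (my own naming, not an Angel feature) of the logic I apply by hand:

```python
def final_quiz_grade(pre, post=None):
    """If the student skipped the optional post-quiz, fall back to
    the pre-quiz grade; otherwise keep the higher of the two."""
    return pre if post is None else max(pre, post)

print(final_quiz_grade(85))       # 85  (post-quiz skipped)
print(final_quiz_grade(60, 88))   # 88
```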
More focus and direction for students. Overall, student response has been positive. The feedback I got during class suggested that students liked the quick feedback, and it cued them in with more specific questions for class. Our first quiz covered growth during adolescence, and most students struggled with the concept of the disproportional growth that occurs in the trunk during the growth spurt. We were able to spend a good amount of time discussing peak height velocity, differences between early-maturing and late-maturing individuals, and inter-sex differences in growth rates and timing.
The questions from students were more direct. In some cases, they referenced specific questions from the quiz, which I worried might mean I was falling into the trap of teaching to the test… but I think that concern is a little misleading. Because the questions are tied to an assessment, they make an impression. Because the content “counts” — albeit for only a small quiz grade — the brain seems to have stronger recall. Teaching to the test? Maybe. But now I feel like I have their attention.
Ways to make it work better
Better modules. I can’t base online quizzes on nothing but textbook readings. Eventually I will need to provide more engaging material for student consumption. An online presentation built with Adobe Presenter would be an ideal way to deliver the broad base of knowledge; afterwards, students would take the quiz. This would also make the knowledge more personalized, since I don’t think everything in the textbook is useful.
Deeper item banks. Ideally, instructors teaching the same class could collaborate to develop a shared bank of assessment items. A bank of fifty items adds flexibility to assessments, reduces practice effects from repeated test-taking, and makes the grade primarily a reflection of content knowledge.
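With a deep enough bank, each attempt can draw a different random subset of items, which is what blunts the practice effect. A hypothetical sketch of that draw (not a feature of any particular course management system):

```python
import random

def draw_quiz(item_bank, n_items, seed=None):
    """Draw n_items at random, without repeats, from a shared item
    bank, so repeated attempts see different questions."""
    rng = random.Random(seed)
    return rng.sample(item_bank, n_items)

bank = [f"item_{i:02d}" for i in range(50)]   # a 50-item shared bank
quiz_a = draw_quiz(bank, 10, seed=1)
quiz_b = draw_quiz(bank, 10, seed=2)
print(len(quiz_a), len(set(quiz_a)))          # 10 10 (no repeats within a quiz)
```

Two attempts seeded differently will usually overlap on only a few items, so a retake measures content knowledge rather than memory of the first attempt.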
Scaffold the process. If you’re going to change the traditional learning paradigm, students will need to learn the new routine. Fortunately, in my course, they have seemed quite adept at navigating the online quiz. But understanding the pre-quiz/post-quiz rules and the handling of grades may prove a more difficult venture. In future classes, I will build more drills on the process into the early weeks, so that students can learn the routine without heavy penalties for failure.
All in all, Just in Time Teaching has provided me with an effective tool to address the age-old problem of breadth of knowledge versus depth of understanding. I will pursue it in future semesters and will continue to provide commentary on my findings.
- Garrison, D. R., & Vaughan, N. D. (2008). Blended learning in higher education. San Francisco, CA: Jossey-Bass.
- Novak, G. M., Patterson, E. T., Gavrin, A. D., &amp; Christian, W. (1999). Just-in-time teaching: Blending active learning with web technology. New Jersey: Prentice Hall.
3 thoughts on “Book Review: Blended Learning in Higher Education”
Awesome stuff! Are the questions on the post-quiz the same as on the pre-quiz?
Yes, the post-quiz uses the same questions, and it is optional. If a student is satisfied with the pre-quiz grade, they can simply skip the post-quiz, and I will enter the pre-quiz grade. The Angel system does not have a way to make this process completely automated, but fortunately, copying assessments takes only a few keystrokes.