Between Two Worlds – MOOCs and Assessment

Questions regarding what role MOOCs and other free learning resources will play in the wider educational universe may ultimately come down not to how well they teach but how well they can prove what students have learned.

And the way learning has traditionally been proven is through graded assignments that I’ll be lumping together under the category of “assessments” in a week-long discussion on where we are and where we need to go if open courses (massive or otherwise) are going to fulfill their potential as supplements or even alternatives to traditional brick-and-mortar education.

Having spent a number of years developing professional assessments for the employment and education industries, I wanted to make a case for testing being both an art and a science.

In traditional classrooms, artistry alone is often enough to ensure a successful teaching and assessment experience, largely because the classroom has an ultimate authority (the instructor) who gets to determine what knowledge and skills define mastery and what measurement techniques will be used to determine whether that mastery has been achieved.

Yes, I know that standards at the state and (increasingly) national level play a major role in determining what will be included in a curriculum (especially in K-12).  And “teaching to the test” (i.e., aligning instruction with a standardized curriculum with the ultimate goal of high scores on state-level exams) is an issue of ongoing controversy.

I don’t plan to get into that debate here, but I do want to point out that even with such constraints, teachers have the flexibility to use their own creativity and experience to mix things up in the classroom (which is why no two teachers or classes are the same), including developing their own tools to measure successful learning.

While there are risks (such as arbitrariness or favoritism) inherent in grading authority being placed in the hands of a single individual, giving teachers the flexibility to experiment with different testing techniques (quizzes, exams, graded papers and projects, classroom participation, etc.) is generally a good thing.  And if everyone gets a test question wrong or a student wants a second chance on a paper, instructors (even those teaching large survey classes) usually have the ability to make judgment calls as situations and facts warrant.

But when testing becomes standardized (as it is with statewide educational assessments, college entrance exams, and professional licensure and certification exams), the goal is no longer flexibility but comparability.  For the purpose of a standardized exam is to generate data that allow students who have experienced a wide variety of learning, testing and grading processes in different classrooms to be accurately compared in terms of knowledge learned or skills mastered.

This is where testing as a science comes into play.  For professional testing leverages a range of statistical techniques to ensure that (1) all students have the same testing experience, so that scores can be accurately compared without the interference of outside “noise”; and (2) the results of testing provide a fair and valid measurement of student achievement.

This is not to say that professional testing is devoid of artistry.  For as we’ll discuss over the coming days, the difference between a good test item and a poor one usually comes down to the talent of the item writer.  But unlike professors writing test questions for their own classes, test developers can usually identify clunker questions before a test is used in the field, which means testing statistics can provide powerful quality control over the item-writing process.
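To give a flavor of what that quality control looks like, here is a minimal sketch (in Python, with made-up data) of two classic item statistics: difficulty (the proportion of test-takers answering an item correctly) and point-biserial discrimination (how well performance on one item correlates with performance on the rest of the test).  An item with near-zero or negative discrimination is a candidate “clunker.”  The function names and the data are my own illustration, not any testing vendor’s actual method.

```python
import math

def point_biserial(item, rest):
    """Correlation between a 0/1 item score and the rest-of-test score."""
    n = len(item)
    mean_i = sum(item) / n
    mean_r = sum(rest) / n
    cov = sum((item[k] - mean_i) * (rest[k] - mean_r) for k in range(n)) / n
    sd_i = math.sqrt(sum((x - mean_i) ** 2 for x in item) / n)
    sd_r = math.sqrt(sum((x - mean_r) ** 2 for x in rest) / n)
    if sd_i == 0 or sd_r == 0:
        return 0.0  # item (or rest score) has no variance; undefined
    return cov / (sd_i * sd_r)

def item_statistics(responses):
    """responses: one list of 0/1 item scores per student.
    Returns (difficulty, discrimination) for each item."""
    n_students = len(responses)
    n_items = len(responses[0])
    totals = [sum(student) for student in responses]
    stats = []
    for i in range(n_items):
        item = [student[i] for student in responses]
        difficulty = sum(item) / n_students  # proportion correct
        # "rest score" = total minus this item, to avoid self-correlation
        rest = [totals[s] - item[s] for s in range(n_students)]
        stats.append((difficulty, point_biserial(item, rest)))
    return stats
```

Running this on a field-test response matrix flags items whose difficulty is extreme (nearly everyone right or wrong) or whose discrimination is low, which is exactly the kind of screen a classroom quiz never gets before it counts toward a grade.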

In one way, MOOCs resemble the classroom in that course materials (lectures, assignments, assessments) are still primarily generated by a single instructor (or small team of teachers and support staff).  But this material might be accessed by thousands or tens of thousands of students, which means testing within a MOOC environment is at the scale of many standardized assessments.

From what I’ve seen so far of different MOOCs from different professors/vendors, graded assessments (presuming they’re used at all) look suspiciously like repurposed classroom quizzes.  And while some of them have been quite challenging, I don’t think I’ve encountered any online test yet that would withstand scrutiny if judged against the standards of professional test design.

And what are those standards?  Tune in tomorrow to find out.

Next – Assessment Planning


2 Responses to Between Two Worlds – MOOCs and Assessment

  1. Robert McGuire April 9, 2013 at 2:19 pm #

    I’m noticing something similar about most of the material in general, Jon — that there is a tendency to cram the square peg of traditional classroom materials (quizzes and everything else) into the round hole of the MOOC classroom. It’s early days, still, so growing pains are inevitable, but hopefully the individual faculty will get the support they need to re-think the course design for the particular environment.

    • DegreeofFreedom April 10, 2013 at 3:29 pm #

You have a point. I’ve been evaluating courses based on the complete “package” of material (lectures, reading assignments, homework, quizzes, etc.) associated with the course, although I’ve tried not to judge courses based entirely on which has more of these materials in one location. My general perspective is that if you are going to decide any of this material is critical to the course, you should do your best to ensure it is of the highest quality possible (which is why I’m focusing so much on testing this week, since that seems to be one of the weakest elements of most MOOCs). But I do agree that the final result of the MOOC experiment needs to make room for the possible complete re-imagining of what a “course” should look like (something we’re just seeing small signs of as of now).
