MOOCs and Grading – Interpreting Obviousity Results

November 1, 2013 · By DegreeofFreedom · Filed Under: Testing and Assessment

Getting back to the Obviousity scores we looked at a couple of days ago: the lessons to be drawn even from my simple experiment go beyond reinforcing the need to follow professional item-writing principles, like those I recommended a few months back.

Yes, MOOC developers should avoid true/false questions and do a better job of not telegraphing the right answers to multiple-choice questions (ideally as a first step toward paying more attention to assessment quality and polish overall).

But I think these relatively simple Obviousity calculations also suggest answers to other questions about MOOCs. For example, why are MOOC classes focused on science and technology subjects still generally considered better (or at least more challenging) than massive courses in the social sciences or humanities?

It may be that MOOCs on topics like computer science have just been around longer, which means they have benefited from more time in the field, where student input and teacher experience contributed to their ongoing improvement. Or perhaps the teachers and students drawn to a technology-based educational platform are simply more inclined to put energy into a class covering a scientific or technical subject.

But I would make the case that courses whose content intersects with mathematics lend themselves to open-ended test items requiring calculation, and such items are intrinsically more challenging than formats like four-response multiple choice, where the baseline score for random guessing is 25% (vs. roughly 0% for open-ended items, which reward almost nothing for guesswork). The low Obviousity score for my statistics class, for example (vs. the higher scores for the two humanities classes I analyzed), likely derives from the nature of the material as much as from decisions by course developers about how to assess learning of that material.
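To put numbers on that baseline, here is a minimal Python sketch (my own illustration, not part of the original experiment) of the chance of clearing a pass mark by pure guessing, modeling the quiz score as a binomial random variable. The 20-item quiz and 60% pass mark are assumptions chosen for illustration:

```python
from math import comb, ceil

def pass_probability(n_items: int, p_guess: float, pass_mark: float) -> float:
    """Chance of scoring at least pass_mark (a fraction) on a quiz of
    n_items independent items, each guessed right with probability p_guess."""
    needed = ceil(pass_mark * n_items)
    return sum(comb(n_items, k) * p_guess**k * (1 - p_guess)**(n_items - k)
               for k in range(needed, n_items + 1))

# Hypothetical 20-item quiz with a 60% pass mark
print(pass_probability(20, 0.50, 0.60))  # all true/false:              ~0.25
print(pass_probability(20, 0.25, 0.60))  # four-option multiple choice: ~0.001
print(pass_probability(20, 0.00, 0.60))  # open-ended (guessing earns nothing): 0.0
```

Even on an all multiple-choice quiz the dart-thrower almost never passes outright, but the inflated 25% floor still compresses the range of scores in which real learning has to show up.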

The degree to which including true/false items in a test drives up Obviousity scores helps explain why professional test developers eschew this item type. And given how much mixing item styles (not just multiple choice but also multiple response and matching) decreases the chance of getting a question right by throwing darts at the screen, why not use as many item variants as possible to make assessments more interesting (and, again, more challenging)?
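For a rough sense of how fast mixing formats shrinks the dart-throwing baseline, the per-item guess probabilities are easy to derive (the four-option item and the subset convention below are generic assumptions, not figures from any particular MOOC):

```python
from math import factorial

n = 4  # options per item (an assumed, typical count)

# Probability a blind guess lands on the right answer, by item format:
print("true/false:        ", 1 / 2)              # 0.5
print("multiple choice:   ", 1 / n)              # 0.25
print("multiple response: ", 1 / (2**n - 1))     # ~0.067, any non-empty subset of the options
print("matching:          ", 1 / factorial(n))   # ~0.042, one of n! ways to pair n prompts
```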

While peer-reviewed writing assignments present their own problems, a course that bases final grades on a mix of automated quizzes and peer-scored essays also stands a better chance of letting students put their learning to work than less balanced classes that base grades entirely on test scores. And some variants of rubric-based self-scoring (especially in my Science and Cooking class, where you evaluate your own short write-ups of kitchen experiments) demonstrate interesting options for open-ended assessment of non-mathematical material.

I understand that many MOOC professors, already concerned about high drop-out rates, might not care if it is too easy to pass their courses (with harder testing perceived as yet another barrier that might cause more students to quit). But for those of us who finish our classes, an 82% final grade in a course that really challenged me was far more satisfying than the 97% I got in a course that didn't. So who's to say whether more difficult assessment would increase or decrease engagement?

At the very least, the time it would take course developers to put their own tests through the type of Obviousity Index calculations I performed is probably 5-10 minutes per test, tops (it actually took me far less than that, even counting the time needed to retrieve the quarter I was flipping, which kept rolling under my desk). And while such an effort is no substitute for a more rigorous item analysis that would flag problems based on things other than coin flips and text length, any procedure that can stop crappy questions from reaching field use is a step in the right direction.
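For anyone who would rather skip the coin, here is a minimal sketch of the sort of naive-guesser pass I take the Obviousity check to be: flip a coin on true/false items and pick the longest option on multiple choice, then see what score falls out. The quiz data and scoring convention here are my own assumptions, not the author's actual procedure:

```python
import random

# Each item is ("tf", correct_bool) or ("mc", [options], correct_index).
# A toy quiz, purely for illustration.
quiz = [
    ("tf", True),
    ("mc", ["Paris", "A mid-sized city in northern France", "Rome", "Oslo"], 1),
    ("mc", ["1492", "1776", "The year the conflict finally ended", "1066"], 2),
]

def obviousity_style_score(items, trials=10_000):
    """Average score of a 'dart-throwing' test-taker: coin flip on
    true/false items, longest option on multiple-choice items."""
    total = 0.0
    for _ in range(trials):
        correct = 0
        for item in items:
            if item[0] == "tf":
                correct += random.choice([True, False]) == item[1]
            else:
                _, options, answer = item
                longest = max(range(len(options)), key=lambda i: len(options[i]))
                correct += longest == answer
        total += correct / len(items)
    return total / trials

print(f"naive-guesser score: {obviousity_style_score(quiz):.2f}")  # ~0.83 on this toy quiz
```

On this toy quiz the naive guesser scores around 83%, which is exactly the kind of number that should send an item writer back to the drawing board.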


Comments

  1. Elizabeth says

    November 1, 2013 at 2:33 pm

    I just got my certificate from the Animal Behavior class on Coursera, and I think they did a pretty good job of developing the test questions. Although you got three tries, each try would vary the questions a bit, and would vary the answers for the questions that did remain the same.

    They had one peer review assignment – we had to take a scientific paper and write a popular science press version of it (modeled on The Conversation website).

    I thought it was a good mix of testing, and especially now that you’ve posted these articles, I think they followed a lot of these guidelines. I honestly don’t remember if there were any true/false questions, and you definitely couldn’t choose the longest answer – they were all long!

    This certificate (and my cert from the Holocaust class, which was based solely on three papers) are the ones I’m most proud of.

