Degree of Freedom

an adventure in online learning


MOOCs and Peer Grading – Part 1

April 24, 2013 · By DegreeofFreedom · Filed Under: Testing and Assessment

During a recent series on MOOCs and testing, the only subject I didn’t get to was peer-grading, the mechanism some massive classes are using to allow students to submit assignments that cannot be machine scored (such as written papers or other “artifacts” whose grading still requires the subtlety of the human mind).

We’ll put aside attempts to computerize the grading of essays for another time, and instead focus on the primary tools used to add written assignments and similar human-graded projects to a MOOC.

Keep in mind that this piece is informed by my experience with a grand total of one course that includes peer-grading.  But it’s also informed by experience in the testing industry that exposed me to the core technique MOOCs are using to scale subjective assignments: assessment rubrics.

While the term “rubric” can be used generically to describe any set of instructions for evaluation, in the professional testing world a rubric is a formal set of criteria that assigns a specific number of points to different aspects of a graded project.

Rubrics I’ve created or reviewed in the past used 0-3, 0-4 and 0-10 scales (with each point level being associated with a specific set of features the grader should be looking at when assigning scores).  And most rubrics ask graders to assign these point levels to multiple components of an assignment.  For example, the rubric below was used to score a student’s written answer to a question involving copyright, trademark and other intellectual property issues:

[Image: rubric used to score a student’s written answer to a question involving copyright, trademark and other intellectual property issues]

You’ll notice that each numerical score includes a detailed description of what type of answer qualifies for that grade.
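
To make that structure concrete, here is a minimal sketch of a rubric represented as data, in Python.  The criteria and level descriptions below are invented placeholders (the actual copyright rubric’s wording is not reproduced here), but the shape is the same: each criterion maps each point level to the description a grader matches the response against.

```python
# A hypothetical rubric, sketched as plain data. The criteria and level
# descriptions are invented for illustration, not taken from the actual
# copyright/trademark rubric pictured above.
rubric = {
    "Accuracy of legal concepts": {
        0: "No correct use of copyright or trademark concepts",
        1: "Mentions relevant concepts but applies them incorrectly",
        2: "Applies most relevant concepts correctly, with minor errors",
        3: "Applies all relevant concepts correctly and precisely",
    },
    "Quality of reasoning": {
        0: "No argument offered",
        1: "Assertions offered without support",
        2: "Mostly supported argument, with gaps",
        3: "Clear, fully supported argument",
    },
}

def score_response(assigned_levels: dict) -> int:
    """Total a grader's per-criterion point assignments, rejecting
    any level the rubric does not define."""
    for criterion, level in assigned_levels.items():
        if level not in rubric[criterion]:
            raise ValueError(f"{level} is not a defined level for {criterion!r}")
    return sum(assigned_levels.values())

print(score_response({"Accuracy of legal concepts": 2,
                      "Quality of reasoning": 3}))  # prints 5
```

Representing the rubric as data rather than prose is what lets the same criteria be handed unchanged to thousands of graders, human or otherwise.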

Essay scoring has long used rubrics to make grading more efficient and cost-effective.  In fact, the efficiency of rubric-based scoring (coupled with tools that automate the entire review and grading process) is why it’s been possible to add essays to the SAT and ACT without doubling or tripling their price.

My Coursera Modernism and Post Modernism class leverages the time-tested technique of rubric scoring to support a course whose grade is based entirely on how well students do on eight assigned papers.  And to get around the need for the teacher or his staff to grade tens of thousands of assignments, the job of grading has been handed to the students themselves, with each student who submits a paper asked to pass judgment on the work of three others.
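
How papers get matched to graders isn’t spelled out in the course, but the general idea is easy to sketch.  The Python below is one simple way to do it, assuming only that every submitter grades three peers; it is an illustration of the idea, not Coursera’s actual matching algorithm.

```python
import random

def assign_reviewers(submission_ids: list, reviews_per_student: int = 3) -> dict:
    """Hand each submitter the next k submissions in a shuffled circular
    order, so no one grades their own paper and every paper receives
    exactly k reviews (requires more submitters than reviews_per_student)."""
    order = submission_ids[:]
    random.shuffle(order)
    n = len(order)
    return {
        order[i]: [order[(i + j) % n] for j in range(1, reviews_per_student + 1)]
        for i in range(n)
    }

# Example: five hypothetical submitters, each grading three peers.
for reviewer, papers in assign_reviewers(["alice", "bob", "carol", "dave", "erin"]).items():
    print(reviewer, "grades", papers)
```

The circular structure keeps the workload perfectly balanced: every student grades exactly three papers, and every paper is graded exactly three times.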

The rubric is the same for every assignment and consists of three criteria (Quality of Argument, Quality of Evidence and Quality of Exposition), each of which can be assigned one of four scores.  For instance, Quality of Argument is graded on this scale:

[Image: the Coursera rubric’s four-point scale for Quality of Argument]

Graders are also given the opportunity to leave comments on why they graded each of these criteria as they did, as well as provide feedback on the essay as a whole.
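
The course doesn’t say how the three peer grades for a paper get combined into a single score, but taking the median per criterion is one common, outlier-resistant choice.  Here is a sketch under that assumption:

```python
from statistics import median

def combine_peer_grades(peer_grades: list) -> dict:
    """peer_grades: one {criterion: score} dict per peer grader.
    Returns the per-criterion median across graders. (Assumes the
    median is the combining rule, which the course does not confirm.)"""
    criteria = peer_grades[0].keys()
    return {c: median(g[c] for g in peer_grades) for c in criteria}

# Three hypothetical peer graders scoring the same paper:
peers = [
    {"Quality of Argument": 3, "Quality of Evidence": 2, "Quality of Exposition": 3},
    {"Quality of Argument": 2, "Quality of Evidence": 2, "Quality of Exposition": 3},
    {"Quality of Argument": 3, "Quality of Evidence": 1, "Quality of Exposition": 2},
]
print(combine_peer_grades(peers))
# {'Quality of Argument': 3, 'Quality of Evidence': 2, 'Quality of Exposition': 3}
```

A median (rather than a mean) keeps a single careless or hostile grader from dragging a score far from what the other two peers assigned.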

Having received grades on several assignments (as well as having graded many more), I don’t think I’ve gotten (or given) a grade that could be described as grossly unfair, which suggests the system is working reasonably well.  This conclusion is supported by reports I’ve heard of studies that found high correlations between peer-assigned grades and the grades instructors would have given the same essays.

While that’s all well and good, the professional testing geek in me has a few points that need to be included in any discussion of the quality and effectiveness of this technique, a set of critiques I will provide in detail tomorrow.

Onward to Part 2
