One of the obstacles to acceptance of massive open online courses (MOOCs) is the potential for widespread cheating. Two University of Virginia researchers are offering a solution — but it may require MOOC instructors to do a little homework themselves.
In a special issue of the journal Research & Practice in Assessment on "MOOCs & Technology," an article entitled "Fair and Equitable Measurement of Student Learning in MOOCs: An Introduction to Item Response Theory Scale Linking and Score Equating," by assistant professor J. Patrick Meyer of the Curry School of Education and doctoral student Shi Zhu, looks at ways to address cheating in MOOCs.
As in any course, the goal for professors teaching MOOCs is for their students to learn without cutting corners. But with thousands of students potentially enrolled in a single course, instructors must use more sophisticated methods to combat cheating, Meyer and Zhu write.
The authors suggest that one strategy to reduce cheating in MOOCs is to use multiple test forms covering the same content.
"Cheating by obtaining test items or answer keys in advance of the test can be countered by the use of multiple test forms," Meyer says. "However, this practice comes with its own complications. In order for the course to be fair, one version of the test cannot be more difficult than another. They all must have the same level of difficulty. Every test must measure the same level of learning.
"Principles of fair and equitable measurement require that all of the test forms have a common scale so that scores have the same meaning and interpretation," he says.
In their article, Meyer and Zhu discuss how "item response theory" helps counter cheating and ensure fair and equitable measurement of student learning.
"Item response theory is a type of measurement that is more complicated than methods instructors use for [standard] classroom tests," Meyer says. "This type of measurement is used heavily in large-scale testing, such as the high-stakes testing in K-12 education."
What makes this type of testing difficult to apply in MOOCs is that large-scale testing is usually managed by companies that employ professionals with specialized knowledge of item response theory. MOOC instructors typically do not have this level of expertise.
In an effort to bring item response theory to a larger audience, the article introduces readers to the concept and explains methods for placing test forms on a common scale. It describes the underlying theory and demonstrates how an analysis is conducted.
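To give a rough sense of what such an analysis involves, here is a minimal Python sketch of one standard linking approach, the mean/sigma method. The difficulty estimates and function names are our own hypothetical illustration, not code or data from the article:

```python
# Illustrative sketch of mean/sigma IRT scale linking (hypothetical example).
# Two test forms share a set of common "anchor" items; the anchor items'
# difficulty estimates from each form's separate calibration determine the
# linear transformation that places Form X parameters on Form Y's scale.

import statistics

# Hypothetical difficulty (b) estimates for the same anchor items,
# calibrated separately on each form.
b_form_x = [-1.20, -0.35, 0.10, 0.85, 1.40]
b_form_y = [-1.05, -0.20, 0.30, 1.00, 1.55]

# Mean/sigma method: slope A and intercept B of the linking line.
A = statistics.stdev(b_form_y) / statistics.stdev(b_form_x)
B = statistics.mean(b_form_y) - A * statistics.mean(b_form_x)

def to_form_y_scale(theta_x: float) -> float:
    """Transform a Form X ability estimate onto the Form Y scale."""
    return A * theta_x + B

# A student's ability estimated from Form X, expressed on Form Y's scale,
# so scores from the two forms carry the same meaning.
print(f"A = {A:.3f}, B = {B:.3f}, linked theta = {to_form_y_scale(0.50):.3f}")
```

In a full analysis, item discriminations would be rescaled as well (dividing by the slope A), and more robust criteria such as Stocking-Lord linking are often preferred; the article covers these steps in detail.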
Research & Practice in Assessment's special issue offers some of the first analyses of actual MOOC data, and showcases the scholarship of faculty from the American Council on Education, Massachusetts Institute of Technology, Harvard University, the University of Virginia, Texas A&M University, New York University, James Madison University, and Tulane University.