
Communications of the ACM

ACM TechNews

Faculty Research Reveals Software Improving in Ability to Analyze, Score Writing


Image: a screen of text for analysis (Credit: blog.aylien.com)

Pennsylvania State University (PSU) researchers have demonstrated that software for evaluating human writing, such as ALA-Reader, is improving and expanding in use.

The research compared ALA-Reader's analysis of 90 essays with human-rated scores of the same essays, with the goal of improving the software. "This investigation allowed us to test our pattern-matching software with narrative text for the first time," says PSU professor Roy Clariana.

He notes that text possesses structure, so different texts can be compared as patterns. "Previously, we have only considered the patterns in expository texts, such as comparing students' essays about management theories to an expert's essay," Clariana says. "The better essays 'look' more like the expert."
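The article does not describe ALA-Reader's actual algorithm, but a minimal sketch can illustrate the general idea of comparing texts as structural patterns. The sketch below is an assumption for illustration only, not Clariana's method: it represents each essay as a set of adjacent-term links and scores a student essay by how much its link pattern overlaps the expert's. The names term_links and structural_similarity are hypothetical.

# Minimal sketch (assumed approach, not ALA-Reader's actual method):
# represent each essay as a set of adjacent-term links and score a student
# essay by how much its link pattern overlaps the expert's.

import re

def term_links(text, window=2):
    """Extract undirected term-pair links from words within a small window."""
    words = re.findall(r"[a-z]+", text.lower())
    links = set()
    for i, w in enumerate(words):
        for other in words[i + 1:i + window + 1]:
            if other != w:
                links.add(frozenset((w, other)))
    return links

def structural_similarity(student_text, expert_text):
    """Dice coefficient between the two link sets: 1.0 means identical patterns."""
    s, e = term_links(student_text), term_links(expert_text)
    if not s and not e:
        return 0.0
    return 2 * len(s & e) / (len(s) + len(e))

# Hypothetical usage: a higher score means the student's essay 'looks' more
# like the expert's, in the spirit of the comparison described above.
expert = "Management theory links motivation and performance through incentives."
student = "Incentives link motivation to performance in management theory."
print(round(structural_similarity(student, expert), 2))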

He says the researchers are using the software to examine the concept of "knowing" in new ways, an approach known as knowledge structure (KS) theory. He notes that KS theory applies directly to reading comprehension and reading-comprehension research, an area of great importance in education research.

Clariana says the software could lead to a wide range of future applications, such as adding its capabilities to online text-visualization software.

From Penn State News

Abstracts Copyright © 2014 Information Inc., Bethesda, Maryland, USA


 
