A researcher from Disney Research Pittsburgh is working to advance automated action recognition in video.
Leonid Sigal has developed a method that focuses on expressing each human body movement as a series of space-time patterns. "Modeling actions as collections of such patterns, instead of a single pattern, allows us to account for variations in action execution," he says.
Sigal collaborated with Boston University Ph.D. student Shugao Ma on technology that can discover such patterns directly, without fine-level annotation. "Knowing that a video contains 'walking' or 'diving,' we automatically discover which elements are most important and discriminative and with which relationships they need to be strung together into patterns to improve recognition," Ma says. The resulting representation of actions is also concise and efficient, they found.
In testing, the researchers say their algorithm outperformed other approaches, identifying elements with a richer set of relations among them. They say the technology could be used in video search and retrieval, video analysis, and human-computer interaction research.
From Phys.org
Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA