Researchers at the Massachusetts Institute of Technology (MIT) and Harvard University have used a four-year-old computer vision system to quantify the physical improvement or deterioration of neighborhoods in five U.S. cities.
The system was originally developed to analyze street-level photos of urban neighborhoods and measure how safe those neighborhoods would appear to human observers.
The new system compared 1.6 million pairs of photos taken seven years apart.
The researchers used those comparisons to determine that the density of highly educated residents, the proximity to central business districts and other physically attractive neighborhoods, and the initial safety score assigned by the system all correlate strongly with improvement in physical condition.
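The correlation analysis can be illustrated with a brief sketch. The column names, sample values, and weights below are invented for illustration only and are not drawn from the study's data.

```python
# Minimal sketch (not the study's analysis): correlating hypothetical neighborhood
# attributes with a change-in-appearance score derived from photo pairs.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1_000  # hypothetical sample of neighborhood blocks

df = pd.DataFrame({
    "pct_college_educated": rng.uniform(0.05, 0.8, n),   # assumed attribute
    "km_to_cbd": rng.uniform(0.5, 25.0, n),               # assumed attribute
    "initial_safety_score": rng.uniform(0.0, 10.0, n),    # assumed attribute
})

# Synthetic improvement score loosely driven by the three factors plus noise.
df["improvement"] = (
    1.5 * df["pct_college_educated"]
    - 0.05 * df["km_to_cbd"]
    + 0.2 * df["initial_safety_score"]
    + rng.normal(0.0, 0.5, n)
)

# Pearson correlation of each factor with the improvement score.
print(df.corr(numeric_only=True)["improvement"].drop("improvement"))
```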
The machine-learning system that assigned the safety ratings was trained on hundreds of thousands of example images rated by human volunteers.
The researchers validated the system's analyses by presenting it with 15,000 randomly selected pairs of images and comparing its results with judgments from workers on Amazon's Mechanical Turk; the system and the humans agreed 72% of the time.
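The validation step amounts to measuring how often the system and the Mechanical Turk workers pick the same photo from a pair. A minimal sketch, using simulated labels rather than the study's actual data:

```python
# Minimal sketch (not the researchers' code): agreement between the system's
# pairwise judgments and human judgments. Labels here are simulated.
import random

def agreement_rate(system_labels, human_labels):
    """Fraction of image pairs where the system and the human chose the same photo."""
    assert len(system_labels) == len(human_labels)
    matches = sum(s == h for s, h in zip(system_labels, human_labels))
    return matches / len(system_labels)

# Hypothetical example: 15,000 pairs, each label indicates which photo was chosen.
random.seed(0)
system = [random.randint(0, 1) for _ in range(15_000)]
# Simulate humans who agree with the system roughly 72% of the time.
humans = [s if random.random() < 0.72 else 1 - s for s in system]
print(f"agreement: {agreement_rate(system, humans):.2%}")
```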
From MIT News
Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA