Scientists at the Massachusetts Institute of Technology (MIT) and Toyota subsidiary Woven Planet found that computer vision models can be trained to produce the kind of stable, predictable visual representations that humans form through perceptual straightening.
The researchers trained the models on millions of examples using adversarial training, which improved their perceptual straightness while reducing their sensitivity to small errors in images.
They found that models with more perceptually straight representations classified objects in videos more consistently.
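Adversarial training, in broad strokes, optimizes a model on inputs that have been perturbed to maximize its loss. The sketch below is a minimal illustration of one such training step in PyTorch, using the fast gradient sign method as the perturbation; the model, data, and hyperparameters are placeholders for illustration and do not reflect the setup used in the MIT/Woven Planet study.

```python
# Minimal adversarial-training sketch (illustrative only, not the authors' pipeline).
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.03):
    """Craft a small adversarial perturbation with the fast gradient sign method."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    return (images + epsilon * images.grad.sign()).detach()

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """Run one parameter update on adversarially perturbed inputs."""
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy example: a tiny linear classifier on random data, just to show the loop shape.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    images = torch.rand(8, 3, 32, 32)
    labels = torch.randint(0, 10, (8,))
    print(adversarial_training_step(model, optimizer, images, labels))
```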
MIT's Vasha DuTell said, "One of the take-home messages here is that taking inspiration from biological systems, such as human vision, can both give you insight about why certain things work the way that they do and also inspire ideas to improve neural networks."
From MIT News