Sensory perception in artificial intelligence (AI) is advancing toward humanlike capabilities through customized sensors, machine learning (ML), and neural networks.
Carnegie Mellon University (CMU) engineers have melded infrared depth sensors with color cameras to enable robots to more skillfully handle transparent objects.
CMU researchers also devised an ML model, trained on a vast dataset of sounds, that can correctly identify previously unseen objects with 75% accuracy.
Meanwhile, software maker OpenAI has developed AI-driven applications that use a neural network to generate images from a massive image/text database, which could be employed to produce visual versions of textbooks or photorealistic movies from a script.
Massachusetts Institute of Technology scientists are also training an AI system to predict the feel of seen objects and the appearance of felt objects using VisGel, a dataset of tactile-visual pairings.
From The Wall Street Journal
Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA