An international team led by researchers at Stanford University and China's Shanghai Qi Zhi Institute developed a vision-based algorithm that enabled low-cost, off-the-shelf quadruped robots to traverse obstacles autonomously and with agility.
The robodogs used images from a mounted depth camera and machine learning to determine whether to move over, under, or around the obstacles.
Without any real-world reference data, the robodogs learned how to navigate obstacles using reinforcement learning, with a simple reward system based on the distance they moved forward and the amount of effort exerted.
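The description above amounts to a very simple reward signal: reward forward progress, penalize exertion. The sketch below is only an illustration of that idea; the function name, arguments, and effort weight are assumptions for clarity and are not drawn from the researchers' actual implementation.

```python
import numpy as np

def parkour_reward(base_pos, prev_base_pos, joint_torques,
                   forward_axis=np.array([1.0, 0.0, 0.0]),
                   effort_weight=1e-4):
    """Illustrative reward: forward progress minus an effort penalty.

    All names and the weighting are hypothetical; they only mirror the
    two components mentioned in the article.
    """
    # Forward progress: displacement of the robot's base along the travel direction.
    forward_progress = float(np.dot(base_pos - prev_base_pos, forward_axis))

    # Effort penalty: sum of squared joint torques as a simple proxy for exertion.
    effort = float(np.sum(np.square(joint_torques)))

    return forward_progress - effort_weight * effort
```

With only these two terms, the learner is not told how to clear any particular obstacle; jumping, crawling, and squeezing emerge because they are the behaviors that keep the robot moving forward at acceptable cost.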
Stanford's Chelsea Finn explained, "Eventually, the robot learns more complex motor skills that allow it to get ahead."
Tests showed the robodogs could scale obstacles 1.5 times their height, jump over gaps 1.5 times their length, crawl under thresholds that were three-quarters of their height, and slip through crevices thinner than their width.
From Stanford News