Artificial intelligence (AI) systems have imbued robots with the ability to grasp and manipulate objects with humanlike dexterity, and now researchers say they’ve developed an algorithm through which machines might learn to walk on their own. In a preprint paper published on arXiv.org (“Learning to Walk via Deep Reinforcement Learning”), scientists from the University of California, Berkeley and Google Brain, one of Google’s AI research divisions, describe a system that “taught” a quadrupedal robot to traverse terrain both familiar and unfamiliar.
“Deep reinforcement learning can be used to automate the acquisition of controllers for a range of robotic tasks, enabling end-to-end learning of policies that map sensory inputs to low-level actions,” the paper’s authors explain. “If we can learn locomotion gaits from scratch directly in the real world, we can in principle acquire controllers that are ideally adapted to each robot and even to individual terrains, potentially achieving better agility, energy efficiency, and robustness.”
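To make that idea concrete, here is a minimal sketch (not the authors’ implementation) of what such an end-to-end policy looks like: a neural network that maps raw sensory observations (joint angles, IMU readings, and so on) directly to low-level motor commands, with experience gathered on the robot fed back into a reinforcement learning update. The `env` interface, dimensions, and network sizes below are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """Maps an observation vector to a distribution over low-level motor commands."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, act_dim)              # center of the action distribution
        self.log_std = nn.Parameter(torch.zeros(act_dim))   # learned exploration noise

    def forward(self, obs: torch.Tensor) -> torch.distributions.Normal:
        h = self.net(obs)
        return torch.distributions.Normal(self.mean(h), self.log_std.exp())


def collect_episode(env, policy, horizon=1000):
    """Roll out the policy on the robot (or a simulator) and record transitions.

    `env` is a hypothetical stand-in for whatever hardware or simulation
    interface the robot exposes; it is not an API from the paper.
    """
    obs = env.reset()
    transitions = []
    for _ in range(horizon):
        action = policy(torch.as_tensor(obs, dtype=torch.float32)).sample()
        next_obs, reward, done = env.step(action.numpy())
        transitions.append((obs, action, reward, next_obs))
        obs = next_obs
        if done:
            break
    return transitions  # fed to a reinforcement learning update (e.g., an actor-critic step)
```

In a setup like this, the reward would encode desirable walking behavior (forward progress, low energy use, staying upright), and repeated cycles of rollout and update are what let the gait emerge from scratch rather than being hand-engineered.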