Google research makes for an effortless robotic dog trot

As capable as robots are, the original animals after which they tend to be designed are always much, much better. That’s partly because it’s difficult to learn how to walk like a dog directly from a dog — but this research from Google’s AI labs makes it considerably easier.
The goal of this research, a collaboration with UC Berkeley, was to find a way to efficiently and automatically transfer “agile behaviors” like a light-footed trot or spin from their source (a good dog) to a quadrupedal robot. This sort of thing has been done before, but as the researchers’ blog post points out, the established training process can often “require a great deal of expert insight, and often involves a lengthy reward tuning process for each desired skill.”
That doesn’t scale well, naturally, but that manual tuning is necessary to make sure the animal’s movements are approximated well by the robot. Even a very doglike robot isn’t actually a dog, and the way a dog moves may not be exactly the way the robot should move, leading the latter to fall down, lock up, or otherwise fail.
The Google AI project addresses this by adding a bit of controlled chaos to the normal order of things. Ordinarily, the dog’s motions would be captured and key points like feet and joints would be carefully tracked. These points would then be mapped onto corresponding points on the robot in a digital simulation, where a virtual version of the robot attempts to imitate the motions of the dog with its own, learning as it goes.
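The imitation step can be pictured as a per-timestep reward: the closer the simulated robot’s tracked points are to the retargeted dog motion, the higher the score the learning algorithm receives. This is a minimal illustrative sketch, not the researchers’ actual code; the function name and the scale parameter are assumptions.

```python
import numpy as np

def imitation_reward(robot_keypoints, dog_keypoints, sigma=0.5):
    """Hypothetical per-step imitation reward.

    Both arguments are (n_points, 3) arrays: the 3D positions of tracked
    points (feet, joints) on the simulated robot and on the retargeted
    dog motion at the same timestep.
    """
    # Total squared distance between matching points
    err = np.sum((robot_keypoints - dog_keypoints) ** 2)
    # Exponentiated negative error: 1.0 for a perfect match,
    # falling toward 0 as the robot's pose drifts from the dog's
    return float(np.exp(-err / sigma**2))
```

A reinforcement-learning algorithm then adjusts the virtual robot’s controller to maximize this reward over the whole motion clip.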
So far, so good, but the real problem comes when you try to use the results of that simulation to control an actual robot. The real world isn’t a 2D plane with idealized friction rules and all that. Unfortunately, that means that uncorrected simulation-based gaits tend to walk a robot right into the ground.
To prevent this, the researchers introduced an element of randomness to the physical parameters used in the simulation, making the virtual robot weigh more, or have weaker motors, or experience greater friction with the ground. This made the machine learning model describing how to walk have to account for all kinds of small variances and the complications they create down the line — and how to counteract them.
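This technique is commonly known as domain randomization: before each training episode, the simulator’s physical parameters are resampled so the policy never gets to overfit to one idealized robot. A minimal sketch of the idea follows; the parameter names and ranges here are illustrative assumptions, not the paper’s actual values.

```python
import random

def randomize_sim_params(base):
    """Sample one randomized variant of the simulator's physical
    parameters (illustrative sketch of domain randomization)."""
    return {
        # Heavier or lighter body than the nominal robot
        "mass_kg": base["mass_kg"] * random.uniform(0.8, 1.2),
        # Motors that are only 70-100% as strong as expected
        "motor_strength": base["motor_strength"] * random.uniform(0.7, 1.0),
        # More or less grip against the ground
        "friction": base["friction"] * random.uniform(0.5, 1.5),
        # Added sensing/actuation delay, in milliseconds
        "latency_ms": base["latency_ms"] + random.uniform(0.0, 40.0),
    }

# At the start of every episode the policy faces a slightly different
# robot, so the gait it learns must work across all of the variants.
base = {"mass_kg": 12.0, "motor_strength": 1.0, "friction": 1.0, "latency_ms": 0.0}
params = randomize_sim_params(base)
```

Because no single parameter setting is ever guaranteed, the learned controller cannot rely on, say, exact motor strength, which is precisely what makes it survive the transfer to real hardware.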
Learning to accommodate that randomness made the learned walking method far more robust in the real world, leading to a passable imitation of the target dog walk, and even more complicated moves like turns and spins, without any manual intervention and with only a little extra virtual training.
Naturally, manual tweaking could still be added to the mix if desired, but as it stands this is a large improvement over what could previously be done totally automatically.
In another research project described in the same post, a second team describes a robot teaching itself to walk on its own, but imbued with the intelligence to avoid walking outside its designated area and to pick itself up when it falls. With those basic skills baked in, the robot was able to amble around its training area continuously with no human intervention, learning quite respectable locomotion skills.
The paper on learning agile behaviors from animals can be read here, while the one on robots learning to walk on their own (a collaboration with Berkeley and the Georgia Institute of Technology) is here.