Forget chess, DeepMind is training its new AI to play soccer

Researchers at DeepMind, the British AI laboratory, have abandoned the noble games of chess and Go for a more plebeian pastime: soccer.

The Google sister company yesterday published a research paper and an accompanying blog post detailing its new neural probabilistic motor primitives (NPMP) – a method by which artificial intelligence agents can learn to operate physical bodies.

According to the blog post:

An NPMP is a general-purpose motor control module that translates short-horizon motor intentions into low-level control signals, and is trained offline or via RL by mimicking motion capture (MoCap) data recorded with trackers on humans or animals performing movements of interest.
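
To make that description concrete, here's a minimal sketch of what an NPMP-style module could look like in PyTorch: an encoder compresses a short-horizon motor intention into a latent "primitive," and a decoder maps that latent plus the current body state to low-level control signals. The dimensions, layer sizes, and training target below are placeholders of ours, not DeepMind's actual architecture.

```python
import torch
import torch.nn as nn

class NPMP(nn.Module):
    """Minimal NPMP-style module: an encoder compresses a short-horizon
    motor intention (e.g. the next few reference poses) into a latent
    motor primitive z, and a decoder turns z plus the current body
    state into low-level control signals (joint torques)."""

    def __init__(self, intention_dim, state_dim, latent_dim, action_dim):
        super().__init__()
        # Encoder: short-horizon motor intention -> latent primitive z.
        self.encoder = nn.Sequential(
            nn.Linear(intention_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Decoder: (current body state, z) -> low-level controls.
        self.decoder = nn.Sequential(
            nn.Linear(state_dim + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),  # torques in [-1, 1]
        )

    def forward(self, intention, state):
        z = self.encoder(intention)
        return self.decoder(torch.cat([state, z], dim=-1))

# Offline training step: imitate MoCap clips by regressing onto the
# actions that reproduce the recorded motion. All tensors here are
# random stand-ins for batches drawn from a MoCap dataset.
model = NPMP(intention_dim=60, state_dim=100, latent_dim=32, action_dim=21)
opt = torch.optim.Adam(model.parameters(), lr=3e-4)

intention = torch.randn(64, 60)
state = torch.randn(64, 100)
target_action = torch.rand(64, 21) * 2 - 1

loss = nn.functional.mse_loss(model(intention, state), target_action)
opt.zero_grad()
loss.backward()
opt.step()
```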

Up front: Essentially, the DeepMind team built an AI system that can learn how to do things in a physics simulator by watching videos of other agents performing those tasks.

And of course, when you have a huge physics engine and an endless supply of curious robots, the only sensible thing to do is teach them how to dribble and shoot:

According to the team’s research:

We optimized teams of agents to play simulated soccer through reinforcement learning and restricted the solution space to that of plausible movements learned with human motion capture data.
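
The "restricted the solution space" part is the interesting trick. One common way to realize it, sketched below under our own assumptions rather than the paper's exact setup, is to freeze the MoCap-trained decoder and let the reinforcement-learned soccer policy act only through its latent space, so the agent can't produce movements the motor module never learned.

```python
import torch
import torch.nn as nn

latent_dim, state_dim, action_dim = 32, 100, 21

# Assume `decoder` was trained offline on MoCap (see the earlier sketch).
# It is frozen here, so RL cannot distort the learned motor skills.
decoder = nn.Sequential(
    nn.Linear(state_dim + latent_dim, 256), nn.ReLU(),
    nn.Linear(256, action_dim), nn.Tanh(),
)
for p in decoder.parameters():
    p.requires_grad_(False)

# High-level soccer policy: game observation -> latent motor intention.
# Only this network is updated by the reinforcement learner.
policy = nn.Sequential(
    nn.Linear(state_dim, 256), nn.ReLU(),
    nn.Linear(256, latent_dim),
)

state = torch.randn(1, state_dim)
z = policy(state)                                 # "what movement to make"
torques = decoder(torch.cat([state, z], dim=-1))  # "how to make it"
```

Freezing the decoder is what keeps the learned soccer behavior looking human: the policy can choose what to do, but it can't invent new ways of moving.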

Background: In order to train AI to operate and control robots in the real world, researchers must prepare the machines for reality. And outside of simulations, anything can happen. Agents must contend with gravity, unexpectedly slippery surfaces, and unplanned interference from other agents.

The purpose of the exercise isn’t to build a better footballer – Cristiano Ronaldo has nothing to fear from the robots for now – but instead to help the AI and its developers figure out how to optimize the agents’ ability to predict outcomes.

As the AI begins training, it is barely able to move its physics-based humanoid avatar across the field. But by rewarding an agent every time its team scores, the model is able to get the players up and running in about 50 hours. After several days of training, the AI begins to predict where the ball will fly and how the other agents will react to its movements.
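
That reward signal is sparse: a whole team gets credit only when a goal goes in. The toy loop below illustrates the bookkeeping; `SoccerSim` and the agent names are stand-ins of ours, not DeepMind's environment or API.

```python
import random

class SoccerSim:
    """Stub environment: step() occasionally reports which team scored."""
    def step(self, actions):
        # Most timesteps nobody scores; occasionally one team does.
        return random.choices([None, "blue", "red"], weights=[98, 1, 1])[0]

env = SoccerSim()
agents = {"blue": ["blue_0", "blue_1"], "red": ["red_0", "red_1"]}
returns = {name: 0.0 for team in agents.values() for name in team}

for t in range(10_000):
    actions = {name: [0.0] for team in agents.values() for name in team}
    scorer = env.step(actions)
    # Sparse team reward: the whole scoring team is credited, the
    # conceding team penalized; every other timestep gives nothing.
    if scorer is not None:
        conceder = "red" if scorer == "blue" else "blue"
        for name in agents[scorer]:
            returns[name] += 1.0
        for name in agents[conceder]:
            returns[name] -= 1.0
```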

According to the paper:

The result is a team of coordinated humanoid soccer players that demonstrate complex behavior at different levels, quantified through a range of analyses and statistics, including those used in real sports analysis. Our work represents a full demonstration of learned multilevel integrated decision making in a multiagent setting.

Quick take: This work is pretty awesome. But we’re not so sure it’s a “full demonstration” of anything. The model is obviously capable of running an embodied agent. But, based on the obviously cherry-picked GIFs in the blog post, this work is still deep in the simulation phase.

The bottom line is that the AI doesn’t “learn” how to play soccer. It brute-forces movement within the confines of its simulation. This might seem like a bit of a quibble, but the results are pretty obvious:

Photo credit: DeepMind

The above AI agent looks absolutely terrified. I don’t know what it’s running from, but I’m sure it’s the scariest thing there is.

It moves like an alien wearing a human suit for the first time because, unlike humans, AI can’t learn by watching. Systems like the one trained by DeepMind parse thousands of hours of video footage, essentially extracting motion data about the subject they are trying to “learn” from.
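
In practice, "extracting motion data" usually ends up as a tracking objective: the simulated body is scored on how closely its joint angles match a reference MoCap frame. The exponential pose-matching reward below follows common practice in imitation learning (DeepMimic-style tracking) and is our illustration, not DeepMind's exact objective.

```python
import numpy as np

def tracking_reward(sim_pose: np.ndarray, mocap_pose: np.ndarray,
                    scale: float = 2.0) -> float:
    """Reward in (0, 1]: 1 when the simulated pose matches the MoCap
    frame exactly, decaying exponentially with squared joint-angle error."""
    err = np.sum((sim_pose - mocap_pose) ** 2)
    return float(np.exp(-scale * err))

# Random stand-ins: a 100-frame clip of 21 joint angles, and a simulated
# pose that tracks the first frame with a little noise.
mocap_clip = np.random.uniform(-1, 1, size=(100, 21))
sim_pose = mocap_clip[0] + np.random.normal(0, 0.05, size=21)
print(tracking_reward(sim_pose, mocap_clip[0]))  # close to 1 for a good match
```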

However, it is almost certain that these models will become more robust over time. We’ve seen what Boston Dynamics can do with machine learning algorithms and pre-programmed choreographies.

It will be interesting to see how more adaptive models, such as those developed by DeepMind, will perform once they move beyond the lab environment into actual robotics applications.
