In what has been described as a significant step towards true artificial intelligence, Google has created an AI that can learn how to play video games and devise new winning strategies.
The program was given 49 retro video games to play. Through practice it learned new and more efficient ways to win.
"This is the first significant rung of the ladder towards proving a general learning system can work. It can work on a challenging task that even humans find difficult," project lead Dennis Hassabis told the Guardian. "It’s the very first baby step towards that grander goal ... but an important one."
The program, or "agent" as the team of scientists call it, was given no instructions, just the pixels of an Atari game and a running score. To begin, the agent watches the screen and then taps the buttons to see what happens.
Eventually, the agent begins to learn the rules of each game and starts to devise strategies - some of which even the researchers hadn't considered - to achieve higher scores.
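For readers curious about the mechanics, DeepMind's published approach is a form of reinforcement learning known as deep Q-learning, in which a neural network estimates the long-term score expected from each action. The sketch below is a minimal, tabular illustration of that same trial-and-error loop, using a hypothetical stand-in game ("ToyGame") rather than Atari pixels; the environment, the action set and the hyperparameters are illustrative assumptions, not DeepMind's actual system.

```python
import random

class ToyGame:
    """Hypothetical 5-step game: action 1 scores a point, action 0 does not."""
    def __init__(self):
        self.step_count = 0

    def reset(self):
        self.step_count = 0
        return self.step_count  # the "screen" here is just the step index

    def step(self, action):
        reward = 1 if action == 1 else 0  # the running-score signal
        self.step_count += 1
        done = self.step_count >= 5
        return self.step_count, reward, done


def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    game = ToyGame()
    q = {}  # Q[(state, action)] -> estimated future score

    for _ in range(episodes):
        state, done = game.reset(), False
        while not done:
            # Explore occasionally; otherwise exploit the best-known action.
            if random.random() < epsilon:
                action = random.choice([0, 1])
            else:
                action = max([0, 1], key=lambda a: q.get((state, a), 0.0))

            next_state, reward, done = game.step(action)

            # Q-learning update: move the estimate toward the reward received
            # plus the discounted value of the best follow-up action.
            best_next = max(q.get((next_state, a), 0.0) for a in [0, 1])
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = next_state
    return q


if __name__ == "__main__":
    q_table = train()
    # After training, the agent prefers the scoring action in every state.
    print({s: max([0, 1], key=lambda a: q_table.get((s, a), 0.0)) for s in range(5)})
```

In DeepMind's system the lookup table is replaced by a deep neural network reading raw screen pixels, which is what lets the same loop scale from this toy example to 49 different Atari games with no game-specific instructions.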
"It’s definitely fun to see computers discover things you haven’t figured out yourself," said team member Vlad Mnih, apparently without even a hint of concern.
Voicing those concerns was Elon Musk, an early investor in Hassabis' company DeepMind, which Google acquired in 2014.
The entrepreneur behind SpaceX and Tesla Motors said AI poses a significant threat to human existence.
“Unless you have direct exposure to groups like DeepMind, you have no idea how fast [AI] is growing,” he said. “The risk of something seriously dangerous happening is in the five year timeframe. Ten years at most.”
The team at Google disagrees.
"We agree with him there are risks that need to be borne in mind, but we’re decades away from any sort of technology that we need to worry about,” said Hassabis.
