Researchers at the University of Bern in Switzerland have taught a machine to "hallucinate" its idea of what the video game "Doom" looks like, and then got a virtual agent to play its own dream version of the game so it could learn how to play the real thing.
The process first used a deep-learning model to create a compressed version of the game environment from a snapshot; a second model then took that compressed representation and output a probability distribution over what the next frame might look like. Together, these models constituted the virtual agent's abstract model of its "world," and a controller model with access to the game's reward functions selected the next course of action.
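The three-part loop described above can be sketched in miniature. The sketch below is a toy illustration, not the researchers' actual system: the components are stand-ins (simple linear maps instead of the trained networks), and all dimensions and names are hypothetical. It shows the flow of information only: a vision model compresses a frame into a latent code, a memory model outputs a probability distribution over the next latent state, and a controller picks an action, with the agent rolled forward inside its own learned model rather than the real game.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the real system used trained deep networks.
FRAME_DIM, LATENT_DIM, ACTION_DIM = 64, 8, 3

# 1) Compression model: squeeze a frame snapshot into a small latent code.
W_enc = rng.normal(scale=0.1, size=(LATENT_DIM, FRAME_DIM))

def encode(frame):
    return np.tanh(W_enc @ frame)

# 2) Prediction model: output a probability distribution (here a diagonal
#    Gaussian) over what the next latent state might look like.
W_mu = rng.normal(scale=0.1, size=(LATENT_DIM, LATENT_DIM + ACTION_DIM))

def predict_next(z, action):
    mu = W_mu @ np.concatenate([z, action])
    sigma = np.full(LATENT_DIM, 0.1)
    return mu, sigma

# 3) Controller: map the latent state to an action; in the real system
#    this component is trained against the game's reward signal.
W_ctrl = rng.normal(scale=0.1, size=(ACTION_DIM, LATENT_DIM))

def act(z):
    return np.tanh(W_ctrl @ z)

# "Dreaming": roll the agent forward inside its own hallucinated model
# of the game, never touching the real environment.
frame = rng.normal(size=FRAME_DIM)
z = encode(frame)
for _ in range(10):
    a = act(z)
    mu, sigma = predict_next(z, a)
    z = rng.normal(mu, sigma)  # sample the imagined next latent state

print(z.shape)  # (8,)
```

The key design point the abstract describes is that learning happens inside the sampled rollout: because the prediction model emits a distribution rather than a single frame, the agent can practice against many plausible futures cheaply.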
The researchers think this strategy could enable video game engines to rapidly calculate complicated physics in the background.
From Motherboard
Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA