NIPS conference: Google releases open-source AI 3D game-development project

DeepMind Lab was released to accelerate deep reinforcement learning R&D in the AI community

Today, on the opening day of the marquee AI conference Neural Information Processing Systems (NIPS) in Barcelona, Google announced in a blog post that it is releasing its DeepMind Lab project to the AI community under open source licensing terms.

Artificial intelligence (AI) and virtual reality (VR) are the next two computing platforms. DeepMind Lab is a 3D AI platform for building virtual games that brings the two together. DeepMind Lab uses machine learning (ML), a particular approach to AI, and within ML it uses an advanced form called deep reinforcement learning (DeepRL).

Most ML projects are based on supervised learning, in which the model is trained on labeled data, much as a human learns to recognize objects from flash cards annotated with the name of each image, except that the data set of flash cards used to train an ML model is usually huge.
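
Here, as a rough sketch rather than anything from DeepMind Lab, is what supervised learning looks like in code; the toy data set and the scikit-learn LogisticRegression model are stand-ins chosen only for illustration:

  # Supervised learning in miniature: fit a model on labeled examples
  # ("flash cards"), then ask it to classify a new, unseen example.
  import numpy as np
  from sklearn.linear_model import LogisticRegression

  X = np.array([[0.1, 0.2, 0.9, 0.8],    # tiny made-up "images"
                [0.9, 0.8, 0.1, 0.2],
                [0.2, 0.1, 0.8, 0.9],
                [0.8, 0.9, 0.2, 0.1]])
  y = np.array([1, 0, 1, 0])             # human-provided labels

  model = LogisticRegression().fit(X, y)             # learn from labeled data
  print(model.predict([[0.15, 0.25, 0.85, 0.75]]))   # classify a new example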

In the case of DeepMind Lab, Google is building on its earlier successful implementations of DeepRL. Google created an agent that defeated the Korean Go grandmaster Lee Sedol last March, and earlier announced an agent that learned to play classic Atari 2600 games with superhuman skill.

Google’s stated goals for the project:

  1. Design ever-more intelligent agents capable of more sophisticated cognitive skills
  2. Build increasingly complex environments where agents can be trained and evaluated

The implicit goal is to accelerate results in the research and development of DeepRL within the community at large.

Games built with DeepMind Lab learn from experience

DeepRL agents learn by generalizing their past experience and applying it to new situations. DeepRL is based on observations of the psychological and neuroscientific conditions that control animal behavior. Creating a machine learning system that learns without supervision is the next step.

Supervised machine learning is limited to domains where labeled data sets exist either naturally, such as images of food tagged by humans, or through labor-intensive identification and tagging. DeepRL instead uses a system of rewards, or reinforcement, to learn the way animals and humans do.

Games are an ideal framework for DeepRL because the built-in score serves as the reward, and past successes and failures to score serve as the experience from which the agent learns. The agent interprets the bitmap image of its position in the game and estimates which next move is most likely to produce a reward, using a neural network that takes the agent's past experience as input.
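
To make that reward-driven loop concrete, here is a minimal tabular Q-learning sketch; DeepRL replaces the lookup table with a deep neural network fed raw game pixels, and every name and constant below is an illustrative assumption rather than DeepMind's code:

  # The agent improves its action-value estimates from the rewards it
  # receives, not from labeled answers supplied by a human.
  import random

  ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1   # learning rate, discount, exploration
  Q = {}                                   # (state, action) -> estimated value

  def choose_action(state, actions):
      # Epsilon-greedy: mostly exploit past experience, occasionally explore.
      if random.random() < EPSILON:
          return random.choice(actions)
      return max(actions, key=lambda a: Q.get((state, a), 0.0))

  def update(state, action, reward, next_state, actions):
      # Nudge the estimate toward the observed reward plus discounted future value.
      best_next = max(Q.get((next_state, a), 0.0) for a in actions)
      old = Q.get((state, action), 0.0)
      Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)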

DeepRL will not replace the many rich applications of supervised machine learning, such as medical diagnosis built on enormous data sets of medical images labeled with expert diagnoses, which can be retrained as experts add new images and refine the labels. But it solves, at least in part, one open problem in applying machine learning to the real world: coping with the unpredictability of real environments and choosing the most accurate response, which is essential to creating general-purpose agents.

Depth-sensing cameras such as those in Google's Tango devices can map 3D space, but it takes a lot of procedural software to understand that space, and even then only in narrow domains, not with the general understanding a human has. DeepRL models trained in 3D VR games could be applied to a general understanding of 3D reality.

DeepMind Lab is intended to accelerate DeepRL R&D

Google said DeepMind Lab will be released as a GitHub repository under an open source license for the AI community to enhance, with the expectation that the community will create games using agents and contribute extended and improved software, models and assets. Assets are 3D game components, such as characters, and objects, such as furniture and tools.
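
For developers who pull the repository, the example agents shipped with the release suggest an interaction loop along these lines; the level name, observation key and seven-element action layout below follow those examples and should be treated as assumptions that may change as the project evolves:

  # Sketch of an agent stepping through a DeepMind Lab level via its
  # Python bindings, collecting rewards from raw pixel observations.
  import numpy as np
  import deepmind_lab

  env = deepmind_lab.Lab('seekavoid_arena_01',        # bundled example level
                         ['RGB_INTERLACED'],          # raw pixel observation
                         config={'width': '84', 'height': '84'})
  env.reset()

  noop = np.zeros([7], dtype=np.intc)                 # one value per action axis
  total_reward = 0.0
  while env.is_running():
      pixels = env.observations()['RGB_INTERLACED']   # bitmap the agent "sees"
      action = noop                                   # a real agent would choose from pixels
      total_reward += env.step(action, num_steps=4)   # reward drives learning
  print('Episode reward:', total_reward)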

More shared code, models and assets will increase the number and quality of intelligent agents with sophisticated game-playing skills that can interact with new, complex 3D environments. The results will be evaluated by Google and the AI community to further the research community's understanding of how DeepRL can be applied.

DeepRL could create unpredictable VR games that remain fresh

It is not clear from Google's blog post whether DeepMind Lab will be immediately valuable to game developers building next year's games. If, at some point, DeepMind Lab can be used to author new games, it will address a gap in game development.

Game developers use authoring tools such as Unity3D and Unreal Engine to create the 3D assets used in games and to program how those assets interact to create a compelling user experience. Developers program the interactions with scripts written in languages such as C# and JavaScript, chosen for productivity and speed. The only alternative is native programming in C++, which is more powerful but adds complexity that hinders productivity.

Procedural programming languages are limited in handling situations where many possible actions could lead to the desired result, whereas AI is well suited to handling, and perhaps creating, unpredictability. If DeepMind Lab can be applied to game development, that unpredictability will give players multiple paths through a game, making it harder to master and making each new round of play a fresh experience, thereby extending the life of the game.

Machine learning removes the boundaries of 6DoF VR experiences

Oculus demonstrated its Santa Cruz prototype headset at its Connect developer conference last October. Santa Cruz is unique because it delivers six degrees of freedom (6DoF) with inside-out tracking. Top-tier VR systems such as the HTC Vive also have 6DoF, which maps the user's physical movements in real space to the VR experienced within the headset.

For example, with 6DoF VR, a person walking across a real space and bending over will experience the same visual sense of movement inside the headset. But today 6DoF tracking is accomplished by combining data from the headset's inertial measurement unit (IMU), a combination of gyroscope, accelerometer and other sensors, with external laser beacons set on tripods or mounted on walls.
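
As a rough illustration of the IMU side of that fusion, and emphatically not any headset vendor's tracking code, a simple complementary filter blends fast but drifting gyroscope integration with slow but stable accelerometer tilt:

  import math

  def fuse_pitch(prev_pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
      # Integrate angular velocity for responsiveness, then correct the
      # slow drift using the tilt angle implied by gravity.
      gyro_pitch = prev_pitch + gyro_rate * dt
      accel_pitch = math.atan2(accel_x, accel_z)
      return alpha * gyro_pitch + (1 - alpha) * accel_pitch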

The inside-out tracking demonstrated with Santa Cruz did not require beacons, and Oculus said it had been built using ML. Removing the boundaries set by fixed beacons will enhance and simplify VR experiences.

Possibly the two next-generation platforms, VR and AI, will meet in new VR experiences built with DeepRL that combine reality and virtual reality through inside-out tracking.
