
SEED Applies Machine Learning Research to the Growing Demands of AAA Game Testing

Electronic Arts’ applied research group, SEED, presented five research papers and a keynote speech at CoG, IEEE’s annual conference for technical, scientific, and engineering work in games.

This year at the IEEE Conference on Games (CoG), SEED researchers presented five papers and a keynote presentation exploring how imitation learning, reinforcement learning, and other AI techniques can significantly improve current game testing methods.

Game testing is a growing challenge

As modern games grow in size and complexity, so too do the demands on Quality Assurance (QA) teams. Levels, systems, and sequences must be tested and re-tested throughout the development cycle, and it’s not just a game’s final features that go through QA — all of the previous versions of those features, including scrapped ones, require their own tests and checks. It can be difficult to understand the sheer volume of work required to test a modern AAA game, but in Technical Challenges of Deploying Reinforcement Learning Agents for Game Testing in AAA Games, SEED researchers do the math for us:

“A case study [for automated testing] is Battlefield V, which requires testing of 601 different features amounting to around 0.5M hours of testing if done manually. This corresponds to ~300 work years.”
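To make the conversion explicit: at a typical full-time work year of roughly 1,700 hours (our assumption; the paper only states the totals), half a million hours of manual testing does indeed come out to about 300 work years.

```python
# Back-of-the-envelope check of the quoted figures. The hours-per-work-year
# value is our own assumption of a typical full-time year; the paper only
# states the 0.5M-hour and ~300-work-year totals.
manual_testing_hours = 500_000
hours_per_work_year = 1_700   # assumed
print(manual_testing_hours / hours_per_work_year)  # ~294, i.e. roughly 300 work years
```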

Not all testing is done manually, of course. Bots — automated AI agents — have been used to test games for years, and scripted bots — bots that get their instructions from hand-coded scripts — still do a lot of work in AAA game testing. Even so, scripting each test takes time, and each time a feature change breaks an existing test, a new one must be built. That’s why SEED has been researching ways to make game testing more adaptable, effective, and resilient.

EA takes on testing with machine learning research

Each of the five papers below takes a different approach to a game-testing problem, exploring innovative combinations of AI, reinforcement learning, and imitation learning that push the limits of current machine learning methods.

Technical Challenges of Deploying Reinforcement Learning Agents for Game Testing in AAA Games

Authors: Jonas Gillberg, Joakim Bergdahl, Alessandro Sestini, Andrew Eakins, Linus Gisslén

This paper describes the limitations of current scripted bot testing methods and explores two real-life implementations of reinforcement learning (RL) for game tests in Battlefield V and Dead Space (2023). The “Lessons Learned and Discussion” section offers a fantastic set of guidelines and considerations for others looking to integrate RL into their game testing plans.

Read the full paper here.

Efficient Ground Vehicle Path Following in Game AI

Authors: Rodrigue de Schaetzen, Alessandro Sestini

This paper proposes a method for using quadratic Bézier curves to improve AI path-following behavior during game tests. When implemented in Battlefield 2042 using AI-driven vehicles, the method achieved faster average vehicle speeds and significantly reduced the number of times that vehicles got stuck, resulting in an impressive 39% decrease in the mean total time it took for a vehicle to navigate its path.
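As a rough illustration of the underlying geometry (not the paper's implementation), the sketch below evaluates a quadratic Bézier segment that rounds the corner between two straight path segments. The waypoint values and sampling density are placeholders.

```python
# Illustrative sketch only: a quadratic Bezier segment used to smooth the
# corner between two straight path segments, the kind of rounding a
# path-following helper might apply. Waypoints here are made up.
from dataclasses import dataclass

@dataclass
class Vec2:
    x: float
    y: float

    def lerp(self, other: "Vec2", t: float) -> "Vec2":
        return Vec2(self.x + (other.x - self.x) * t,
                    self.y + (other.y - self.y) * t)

def quadratic_bezier(p0: Vec2, p1: Vec2, p2: Vec2, t: float) -> Vec2:
    """De Casteljau evaluation of a quadratic Bezier curve at t in [0, 1]."""
    a = p0.lerp(p1, t)
    b = p1.lerp(p2, t)
    return a.lerp(b, t)

# Round the 90-degree corner at waypoint p1.
p0, p1, p2 = Vec2(0.0, 0.0), Vec2(10.0, 0.0), Vec2(10.0, 10.0)
for i in range(11):
    pt = quadratic_bezier(p0, p1, p2, i / 10)
    print(f"({pt.x:.2f}, {pt.y:.2f})")
```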

Read the full paper here.

Towards Informed Design and Validation Assistance in Computer Games Using Imitation Learning

Authors: Alessandro Sestini, Joakim Bergdahl, Konrad Tollmar, Andrew Bagdanov, Linus Gisslén

Reinforcement learning techniques for testing aren’t always the best or most practical solution for game teams: implementation requires deep machine learning knowledge, and training can take days to complete. This paper proposes an alternative, data-driven imitation learning (IL) technique that lets game developers train AI agents through gameplay, using skills they already have. In an experiment comparing this method to two RL-based methods, the IL-based method proved significantly faster to train (in one case, 20 minutes versus 5 hours), was better at imitating demonstrated gameplay, and achieved success at about the same rate as the RL baselines.
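As a rough illustration of the data-driven idea (the paper's pipeline is considerably more involved), here is a minimal behavioral-cloning loop that fits a policy network to recorded observation-action pairs. The dimensions, network size, and stand-in "demonstrations" are placeholders.

```python
# Minimal behavioral-cloning sketch, not the paper's system: fit a policy
# network to recorded (observation, action) pairs from human demonstrations.
# Observation/action sizes and the random "demonstration" data are placeholders.
import torch
import torch.nn as nn

obs_dim, n_actions = 32, 8                            # assumed sizes
demo_obs = torch.randn(1024, obs_dim)                 # stand-in for logged observations
demo_actions = torch.randint(0, n_actions, (1024,))   # stand-in for logged actions

policy = nn.Sequential(
    nn.Linear(obs_dim, 128), nn.ReLU(),
    nn.Linear(128, n_actions),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    logits = policy(demo_obs)
    loss = loss_fn(logits, demo_actions)  # match the demonstrator's actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```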

Read the full paper here.

Automatic Gameplay Testing and Validation with Curiosity-Conditioned Proximal Trajectories

Authors: Alessandro Sestini, Linus Gisslén, Joakim Bergdahl, Konrad Tollmar, Andrew D. Bagdanov

The Curiosity-Conditioned Proximal Trajectories (CCPT) method proposed in this paper combines curiosity and imitation learning to train AI agents that can navigate and test the kind of complex 3D environments found in many AAA games. CCPT’s strength lies in its ability to explore in the vicinity of a demonstrated path, discovering glitches and oversights that other methods miss. In an experiment described in the paper, CCPT clearly outperformed baseline methods not only in finding bugs, but also in highlighting them for developers.
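To give a flavor of the idea (this is not the CCPT implementation), the sketch below shapes a reward by combining a simple count-based novelty bonus with a term that rewards staying near a demonstrated path. Every weight, distance, and state encoding here is a placeholder.

```python
# Illustrative reward shaping only, not CCPT: combine a count-based novelty
# bonus (a crude stand-in for a learned curiosity signal) with a term that
# keeps the agent near a demonstrated trajectory. All values are placeholders.
import math

def novelty_bonus(state, visit_counts):
    """Reward visiting rarely seen states."""
    visit_counts[state] = visit_counts.get(state, 0) + 1
    return 1.0 / math.sqrt(visit_counts[state])

def proximity_term(position, demo_path, max_dist=5.0):
    """Reward staying within max_dist of the nearest demonstrated waypoint."""
    nearest = min(math.dist(position, wp) for wp in demo_path)
    return max(0.0, 1.0 - nearest / max_dist)

def shaped_reward(state, position, demo_path, visit_counts,
                  w_curiosity=0.5, w_proximity=0.5):
    return (w_curiosity * novelty_bonus(state, visit_counts)
            + w_proximity * proximity_term(position, demo_path))

# Toy usage: a coarse grid cell as the "state", a straight demonstrated path.
demo_path = [(float(x), 0.0) for x in range(20)]
counts = {}
print(shaped_reward(state=(3, 0), position=(3.0, 1.5),
                    demo_path=demo_path, visit_counts=counts))
```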

Read the full paper here.

Generating Personas for Games with Multimodal Adversarial Imitation Learning

Authors: William Ahlberg, Alessandro Sestini, Konrad Tollmar, Linus Gisslén

This paper builds on past methods for creating AI agents with unique “personas” — behavioral styles based on real-life human skill and playstyle combinations. Using a novel imitation learning approach called MultiGAIL (Multimodal Adversarial Imitation Learning), SEED researchers successfully trained an agent that could execute, switch between, and even blend multiple personas. For example, in a racing game experiment, an agent trained with both “careful” and “reckless” personas could be set to act according to a single persona or a customized blend of the two. MultiGAIL also improved on past persona-training methods by requiring only a single model (rather than one model per persona) and by making it easier for developers to train and adjust agent personas without RL expertise.
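As a simplified illustration of persona conditioning (not MultiGAIL itself), the sketch below shows a single policy network that takes a persona weight vector alongside the observation, so one model can act out a pure persona or any blend. The dimensions and persona names are made up.

```python
# Sketch of persona conditioning only, not MultiGAIL: one policy network fed a
# persona weight vector alongside the observation, so a single model can act
# "careful", "reckless", or any blend of the two. All sizes are illustrative.
import torch
import torch.nn as nn

obs_dim, n_personas, n_actions = 32, 2, 8

policy = nn.Sequential(
    nn.Linear(obs_dim + n_personas, 128), nn.ReLU(),
    nn.Linear(128, n_actions),
)

def act(observation: torch.Tensor, persona_weights: torch.Tensor) -> int:
    """Pick an action for the given observation and persona blend (weights sum to 1)."""
    logits = policy(torch.cat([observation, persona_weights], dim=-1))
    return int(torch.argmax(logits).item())

obs = torch.randn(obs_dim)
print(act(obs, torch.tensor([1.0, 0.0])))   # pure "careful" persona
print(act(obs, torch.tensor([0.3, 0.7])))   # 30/70 careful/reckless blend
```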

Read the full paper here.

SEED Technical Director Linus Gisslén presents on machine learning at CoG

In addition to co-authoring four of the five papers presented at CoG 2023, Linus Gisslén delivered a keynote presentation on Machine Learning for Game Production. The talk examined machine learning’s potential to revolutionize the way games are produced and played, with supporting examples of successful ML implementations at Electronic Arts.
