The journal Nature recently published Microsoft's latest research introducing what it calls Muse, the first World and Human Action Model (WHAM), which can generate game visuals, controller actions, or both. This generative AI model of a video game can produce minutes of cohesive gameplay from just a single second of frames and controller actions.
Unlike many AI models that focus on text or image generation, Muse was designed to understand 3D game environments and respond dynamically to in-game actions. This opens up new possibilities for developers, who can use it to quickly prototype ideas, refine game mechanics, and more.
Developed by the Microsoft Research Game Intelligence and Teachable AI Experiences teams in collaboration with Xbox Game Studios' Ninja Theory, Muse was trained on more than a billion images and controller actions.