After delivering a new “open” AI model with better performance on a single GPU, Google has now introduced an update to the AI models powering its products with Gemini 2.5, which combines “a significantly enhanced base model with improved post-training” for better overall performance. Google claims that the first release, Gemini 2.5 Pro Experimental, leads rivals from OpenAI, Anthropic, xAI, and DeepSeek on common AI benchmarks measuring understanding, mathematics, coding, and other capabilities. The new model is available in Google AI Studio, or for Gemini Advanced subscribers via the app’s model dropdown menu.
The company is also touting Gemini’s native multimodality as an advantage, as it’s able to interpret not just text, but also audio, still images, video, and code, and says that a 2 million token context window is “coming soon” to help it process more data. Google DeepMind CEO Demis Hassabis called Gemini 2.5 Pro “an awesome state-of-the-art model, no.1 on LMArena by a whopping +39 ELO points, with significant improvements across the board in multimodal reasoning, coding & STEM,” in a post on X.
Google says the quality jump comes from Gemini models now being “reasoning” models that process tasks step-by-step and make more informed decisions, which the company says produces better responses to complex prompts. Now, the blog post reads, “…we’re building these thinking capabilities directly into all of our models, so they can handle more complex problems and support even more capable, context-aware agents.”
One demo video shows 2.5 Pro using those reasoning capabilities to program a video game from a single prompt: