
Multi-agent orchestration, maker controls, and more: Microsoft Copilot Studio announcements at Microsoft Build 2025
Microsoft Build 2025 is here—our annual showcase of the most exciting innovations shaping the future of development and AI. For engineers, makers, and subject matter experts, it’s the moment to see what’s next across the Microsoft ecosystem. This year, Microsoft Copilot Studio has a number of powerful new agent-related features to show you. From multi-agent orchestration to more maker controls, and from computer use in agents to code interpreter, read on for a first look at Copilot Studio’s announcements.

Recap of major Microsoft Build 2025 announcements

Jared Spataro, Corporate Vice President at Microsoft, announced some of the biggest news from the Copilot Studio and Microsoft 365 Copilot agents teams. In case you missed it, here’s an overview of a few features we’re particularly excited about.

Multi-agent orchestration

Rather than relying on a single agent to do everything—or managing disconnected agents in silos—organizations can now build multi-agent systems in Copilot Studio (preview), where agents delegate tasks to one another. This includes agents built with the Microsoft 365 agent builder, Microsoft Azure AI Agents Service, and Microsoft Fabric. These agents can all work together toward a shared goal: completing complex, business-critical tasks that span systems, teams, and workflows. This evolution reflects a broader shift in how organizations are scaling their use of agents across Microsoft.

Imagine a Copilot Studio agent pulling sales data from a customer relationship management (CRM) system, handing it off to a Microsoft 365 agent to draft a proposal in Word, and then triggering another to schedule follow-ups in Outlook. Or envision agents coordinating across IT, communications, and vendor systems to manage an incident from detection to resolution. Whether it’s executive briefings, customer onboarding, or product launches, agents can now operate in sync—bringing greater connectedness, intelligence, and scale to every step.
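The CRM-to-Word-to-Outlook scenario above can be sketched as a simple delegation pipeline. This is a minimal illustration of the multi-agent pattern only; the `Agent` and `Orchestrator` classes and the three toy agents are hypothetical stand-ins, not the Copilot Studio API.

```python
# Minimal sketch of the multi-agent delegation pattern described above.
# All names here (Agent, Orchestrator, the sample agents) are hypothetical
# illustrations, not the Copilot Studio programming surface.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    skills: set[str]
    handle: Callable[[str, dict], dict]  # takes a task and shared context

class Orchestrator:
    """Routes each task to the first agent that advertises the needed skill."""
    def __init__(self, agents: list[Agent]):
        self.agents = agents

    def run(self, tasks: list[tuple[str, str]]) -> dict:
        context: dict = {}
        for skill, task in tasks:
            agent = next(a for a in self.agents if skill in a.skills)
            context = agent.handle(task, context)  # hand off shared state
        return context

# Three toy agents standing in for the CRM, Word, and Outlook agents above.
crm = Agent("crm", {"fetch_sales"},
            lambda task, ctx: {**ctx, "sales": [1200, 900]})
writer = Agent("writer", {"draft_proposal"},
               lambda task, ctx: {**ctx, "proposal": f"Total: {sum(ctx['sales'])}"})
scheduler = Agent("scheduler", {"schedule"},
                  lambda task, ctx: {**ctx, "followup": "booked"})

pipeline = Orchestrator([crm, writer, scheduler])
result = pipeline.run([("fetch_sales", "pull Q2 numbers"),
                       ("draft_proposal", "draft the proposal"),
                       ("schedule", "book follow-ups")])
print(result["proposal"])  # Total: 2100
```

The key design point is the shared context dictionary: each agent reads what earlier agents produced and adds its own output, which is what lets independent agents cooperate on one business task.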
This feature is currently in private preview, with a public preview coming soon. See how different organizations are employing their agent ecosystems, and get inspired for how you could connect yours, in Microsoft Corporate Vice President Srini Raghavan’s blog post.

Computer use in Copilot Studio agents

Computer use moves us closer to a more connected, intelligent world where agents collaborate seamlessly with people and systems. Agents can now interact with desktop apps and websites the way a person would—clicking buttons, navigating menus, typing in fields, and adapting automatically as the interface changes. This opens the door to automating complex, user interface (UI)-based tasks like data entry, invoice processing, and market research, with built-in reasoning and full visibility into every step. Computer use is currently available through the Microsoft 365 Copilot Frontier program for eligible customers with at least 500,000 Copilot Studio messages and an environment in the United States.

Bring your own model and model fine-tuning

Copilot Studio continues to integrate deeply with Azure AI Foundry, and now you can bring your own model for prompts and generative answers. Makers can access more than 11,000 models in Azure AI Foundry, including the latest models from OpenAI (such as GPT-4.1), Llama, DeepSeek, and custom models, and fine-tune them using enterprise data. This fine-tuning helps agents generate even more domain-specific, high-value responses.

Model Context Protocol

Now generally available, Model Context Protocol (MCP) makes it easier to connect Copilot Studio to your enterprise knowledge systems. With growing connector support, better tool rendering, evolving scalability, and faster troubleshooting, it’s never been simpler to bring external knowledge into agent conversations.
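Under the hood, MCP is built on JSON-RPC 2.0, and tool invocations use the protocol’s `tools/call` method. The sketch below shows that message shape; the tool name `search_knowledge_base` and its arguments are invented examples, not part of any real server.

```python
# Sketch of the JSON-RPC 2.0 message shape MCP uses for tool calls.
# The "tools/call" method comes from the MCP specification; the tool
# name and arguments below are hypothetical examples.
import json

def make_tool_call(call_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP-style tools/call request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

request = make_tool_call(1, "search_knowledge_base", {"query": "refund policy"})
parsed = json.loads(request)
print(parsed["method"])  # tools/call
```

Because the envelope is plain JSON-RPC, any MCP-capable client can discover a server’s tools and call them the same way, which is what makes it practical to surface external knowledge systems inside agent conversations.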
Developer tools to build agents your way

Microsoft empowers developers to build agents with the tools they prefer—Copilot Studio, GitHub, Visual Studio, and more. With the Microsoft 365 Copilot APIs, developers can securely access Microsoft 365 data and capabilities to create custom agents or embed Microsoft 365 Copilot Chat into apps, all while respecting organization-wide permissions. The Microsoft 365 Agents Toolkit and Software Development Kit (SDK) make it easier to build, test, and evolve agents over time. Developers can swap models or orchestrators without starting from scratch, use SDK templates to jumpstart projects, and deploy to Azure with smart defaults—all now generally available.

Copilot Studio enhancements

New agent publishing channels: SharePoint and WhatsApp

Copilot Studio has exciting updates to its available channels, including that publishing agents to Copilot is now generally available. In addition to this highly anticipated update, we’re adding two more channels: SharePoint and WhatsApp. These key channels make it easier than ever to bring custom agents to the places where your users already work and communicate. This helps you extend the reach and value of your agents, from serving your teams inside Copilot and SharePoint to engaging customers around the world.

The SharePoint channel, now generally available, lets makers deploy custom agents directly to a SharePoint site with a single click. With authentication and permissions handled automatically, anyone with access to the site can immediately start using the agent. This extends the full capabilities of custom agents into one of the most widely used collaboration hubs in the world.

Starting in early July 2025, makers will also be able to publish Copilot Studio agents to WhatsApp. This will allow organizations to provide conversational support and engage global users directly within a familiar, mobile-first platform—no separate website or app required.
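The "swap models or orchestrators without starting from scratch" idea boils down to coding the agent against a small interface and injecting the backend. Here is a minimal sketch of that pattern; the `ChatModel` protocol and the two toy models are hypothetical illustrations, not the Microsoft 365 Agents SDK surface.

```python
# Sketch of swapping a model behind an interface, assuming nothing about
# the actual Agents SDK: ChatModel, EchoModel, and UppercaseModel are
# invented names used only to illustrate the pattern.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in for one model backend."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

class UppercaseModel:
    """Stand-in for a different model backend."""
    def complete(self, prompt: str) -> str:
        return prompt.upper()

class SimpleAgent:
    def __init__(self, model: ChatModel):
        self.model = model  # backend is injected, so it can be swapped

    def respond(self, user_input: str) -> str:
        return self.model.complete(user_input)

agent = SimpleAgent(EchoModel())
print(agent.respond("hello"))   # echo: hello
agent.model = UppercaseModel()  # swap the model; the agent code is untouched
print(agent.respond("hello"))   # HELLO
```

Because the agent only depends on the `complete` contract, replacing the backend (or the orchestrator, by the same technique) requires no changes to the agent itself.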
Additional maker controls for knowledge

Now in public preview, new controls in the Generative AI agent settings give makers more ways to shape how agents respond, reason, and interact with users. In addition to toggles for generative orchestration and deep reasoning, you’ll see multiple categories to further ground and tune your agents.

First, in response to maker feedback, we’re pleased to announce that you can now upload multiple related files into a file collection and use the collection as a single knowledge source for an agent. You can also include natural language instructions to help your agent find the most relevant document in the collection to ground each response.

In the Responses section of the Generative AI tab, you can now choose your agent’s primary response model, provide response instructions, adjust response length, and turn on advanced options like code interpreter (see below) and Tenant graph grounding with semantic search. Moderation settings control how flagged responses—that is, generated responses detected as possibly containing harmful content—are handled. These controls allow you