Google Stitch | AI canvas turns prompts into UI design workflows
Google has introduced a redesigned version of Stitch, an AI-native software design canvas built to turn natural language into high-fidelity UI designs. Announced on March 18, 2026, the update adds an infinite canvas, a new design agent, DESIGN.md support, real-time prototyping, voice-based critique, and exports through MCP, SDK, Skills, AI Studio, and Antigravity.
Google Stitch brings vibe design into AI-powered UI creation
Google is positioning Stitch as a tool for what it calls “vibe design,” where users begin with intent, business goals, emotional direction, or visual references instead of starting from a traditional wireframe. The goal is to let teams explore UI directions quickly while keeping enough structure for design review and development handoff.
For designers, the important shift is the canvas itself. Stitch is no longer only about generating screens from prompts; it is becoming a broader design workspace where images, text, code, design rules, prototypes, and agent feedback can work together during early product exploration.
How Stitch changes UI design workflows
The redesigned Stitch interface introduces an AI-native infinite canvas, giving users room to move from early ideation to working prototypes. Designers can bring different forms of context into the canvas, including images, text, and code, then use AI to explore variations and refine ideas without leaving the workspace.
Google is also adding a new design agent that can reason across a project's evolution. An accompanying Agent Manager tracks progress and manages multiple ideas in parallel, which is useful when teams need to compare directions, test alternatives, and stay organized during rapid UI exploration.
New workflow options for UI designers and product teams
DESIGN.md is one of the most relevant additions for structured workflows. Google describes it as an agent-friendly markdown file that can import or export design rules across Stitch and other design or coding tools, helping teams reuse design-system rules instead of rebuilding design direction for every new project.
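Google has not published a DESIGN.md specification alongside this announcement, so the sketch below is only an illustration of what such an agent-friendly rules file could contain. The file name is the only detail taken from the source; every section heading and value here is an assumption.

```markdown
# DESIGN.md — illustrative sketch (structure is assumed, not Google's spec)

## Brand
- Tone: calm, confident, minimal
- Voice: short sentences, no jargon

## Color
- Primary: #1A73E8
- Surface: #FFFFFF
- Error: #D93025

## Typography
- Headings: Google Sans, 600 weight
- Body: Roboto, 16px base, 1.5 line height

## Components
- Buttons: 8px corner radius, filled primary style for main actions
- Forms: labels above inputs, inline validation messages

## Accessibility
- Minimum contrast ratio: 4.5:1
- All interactive elements keyboard-reachable
```

The appeal of a plain-markdown format is that the same file can be checked into a repository and read by coding agents as well as by Stitch, which is what would make the import/export behavior Google describes possible.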
Stitch also speeds up prototyping by turning static designs into interactive app flows. Designers can connect screens, preview journeys with a Play button, and let Stitch generate logical next screens based on interactions, which can accelerate early user-flow validation.
Voice capabilities add another layer to the workflow. Users can speak directly to the canvas to ask for real-time design critique, request alternate menu options, or explore color palettes, or they can let the agent interview them to create a new landing page, all without leaving the creative flow.
Availability and developer handoff
Google says Stitch can connect with broader team workflows through its MCP server, SDK, Skills, and exports to developer tools such as AI Studio and Antigravity. This makes the update relevant not only for interface exploration, but also for handoff between designers, AI agents, and developers.
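Google does not detail the Stitch MCP server's command or tool surface in this announcement, so the sketch below shows only generic client wiring using the open-source @modelcontextprotocol/sdk for TypeScript. The server executable name is a placeholder, and the code simply lists whatever tools the real server exposes rather than calling any invented ones.

```typescript
// Minimal sketch of connecting an MCP client to a Stitch MCP server.
// Assumption: the server command ("stitch-mcp") is hypothetical; only the
// @modelcontextprotocol/sdk client API shown here is real.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch the (hypothetical) Stitch MCP server as a subprocess over stdio.
  const transport = new StdioClientTransport({
    command: "stitch-mcp", // placeholder executable name
    args: [],
  });

  const client = new Client(
    { name: "design-handoff", version: "0.1.0" },
    { capabilities: {} }
  );
  await client.connect(transport);

  // Discover whatever tools the server actually exposes before calling any.
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  await client.close();
}

main().catch(console.error);
```

In practice an agent host such as AI Studio or Antigravity would own this connection rather than hand-written client code; the point is that Stitch designs become addressable through a standard protocol instead of manual export.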
For production teams, Stitch is best evaluated as an ideation and prototyping layer. Designers should still review visual hierarchy, component consistency, accessibility, responsive behavior, design-system alignment, and export quality before using AI-generated UI as production-ready material.
Sources and Recommended Links
- Introducing “vibe design” with Stitch | Google Blog (Official)
- Stitch | Google Labs (Official)
- DESIGN.md | Google Stitch (Official)