Google's Stitch and AI-Driven Development

Source: https://aibusiness.com/generative-ai/google-s-stitch-and-ai-driven-development
Summary:
Google launches Stitch, an AI design platform that lets developers generate interfaces with voice and text
Google recently released Stitch, a new design platform powered by its Gemini AI model. The platform aims to use generative AI to overhaul how applications and web pages are designed and developed, letting developers work in a more natural, intuitive way.
Core capability: design driven by language and images
At the heart of Stitch is an "AI-native canvas." Users do not need to master professional design tools or write complex code; they can generate user interface (UI) designs simply by typing text descriptions, uploading reference images, or even speaking directly to the canvas. A built-in design agent supports the whole process, offering design suggestions, generating new landing pages, and making changes in real time in response to user instructions.
Essentially a "coding agent," but with stronger multimodal input
Bradley Shimmin, an analyst at Futurum Group, notes that Stitch is ultimately still a "coding agent": under the hood it generates TypeScript for a user's application, or HTML and CSS for a web page. Where Google stands out, however, is in how well it integrates multimodal information: images, audio, and text. That lets designers quickly turn sketches, color palettes, and other visual inspiration into concrete interface layouts, significantly speeding up the design process.
Industry trend: from human-led work to human-AI collaboration
Stitch is the latest expression of Google's "vibe" concept and reflects a broader shift in software development: the center of gravity is moving from humans handling every detail toward outsourcing part of the work to an AI co-worker. Over the past year and a half, coding has become the key battleground for this human-plus-AI collaboration. Beyond Google, Anthropic has released the code-focused Claude Code, and OpenAI's recently released GPT-5.4 series likewise emphasizes effectiveness in coding workflows.
Expert caution: enterprises need constraints and standards
Despite the promise, Shimmin also warns of the risks. Without appropriate controls, constraints, and contextual guidance, relying entirely on AI for design and development can produce unpredictable results. Enterprises adopting such platforms should therefore put deterministic guiding elements in place, such as a unified corporate design standard, concrete requirements, or databases that apply to the design, to keep output on track and reduce risk.
Original English article:
The platform integrates an AI-native canvas that enables users to create UI designs using text, images and voice commands.
As generative AI changes the process of coding and design, enabling enterprise developers to use natural language to create and code, Google introduced a revamped platform that uses AI to aid in the design of applications and web pages.
Google, on March 18, introduced Vibe Design with Stitch. Stitch was originally introduced in May 2025 as a Gemini-powered UI design and code-generation tool.
As a redesigned platform, Stitch includes an AI-native canvas that lets users combine text prompts, images and code to generate UI designs. A design agent can assist with the process from start to finish, Google said. There is also a new agent manager that tracks design progress. The platform includes an agent-friendly markdown file called Design.md that can be used to export or import design rules to or from other design and coding tools. Users can also vibe design with their voice by speaking directly to the canvas. The design agent provides users with design critiques, creates new landing pages and makes real-time updates.
Vibe Design with Stitch is the latest iteration of the vibe concept, which enables developers to describe their coding vision to an AI agent rather than manually write code. It is also an example of how different jobs and tasks are shifting from having a human figure everything out to outsourcing some of that work to an AI co-worker.
Over the last 12 to 18 months, a key target of the human-plus-AI co-worker concept has been coding. Anthropic focused on the coding domain with its Claude Code tool and agent, which automates tasks such as navigation, debugging and code generation. OpenAI has also been focusing on coding, with the vendor's most recent releases, earlier this week, being GPT-5.4 mini and GPT-5.4 nano, which it said are effective in coding workflows.
While Stitch is an AI-native software design canvas, it is ultimately another coding agent, according to Futurum Group analyst Bradley Shimmin. Under the hood, he added, Stitch generates TypeScript for a user's app, or HTML and CSS for a web page design.
"That's what coding agents do," Shimmin said.
However, Google does a good job of accommodating multimodal information such as images, audio and text, Shimmin continued. This approach enables designers to upload ideas, sketches or images as a color palette to lay out how they envision their interface, speeding up the design process. For Shimmin, this is another example of the diminishing barrier to entry for software tools.
"You don't have to learn the actual app," he said. "You don't need to spend a year figuring out how to master Adobe Premiere." He added that users can write up what they want in natural language. "It's intent-driven design, just like intent-driven development."
However, there are risks when using AI to drive your design or development, Shimmin said. Therefore, enterprises need deterministic elements that guide their use of platforms like Stitch, whether that is a standard for corporate design patterns or requirements, or a database or datasets that would apply to the design.
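One concrete form such deterministic guidance could take is a machine-readable set of corporate design tokens that agent-generated output is checked against. The sketch below illustrates the idea under invented assumptions: the token names, palette values and validation rule are hypothetical, not part of any real Stitch workflow.

```typescript
// Hypothetical design-token constraint; palette and names are invented.
// An enterprise could validate agent-generated styles against approved tokens.
const approvedPalette: Record<string, string> = {
  "brand-primary": "#1a73e8",
  "brand-surface": "#ffffff",
  "brand-text": "#202124",
};

// Returns the hex colors in a generated CSS snippet that are NOT in the palette.
function findUnapprovedColors(css: string): string[] {
  const used = css.match(/#[0-9a-fA-F]{6}/g) ?? [];
  const allowed = new Set(
    Object.values(approvedPalette).map((c) => c.toLowerCase())
  );
  const unique = Array.from(new Set(used.map((c) => c.toLowerCase())));
  return unique.filter((c) => !allowed.has(c));
}

// Example: an agent proposes a style containing one off-brand color.
const generatedCss = ".cta { background: #1a73e8; color: #ff00aa; }";
console.log(findUnapprovedColors(generatedCss)); // → ["#ff00aa"]
```

A gate like this makes the constraint deterministic: whatever the agent produces, anything outside the approved palette is flagged before it ships.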
"Without those kinds of controls and constraints, and context, you're taking a bigger risk than you probably need to," Shimmin said.