Overview
watsonx Orchestrate is IBM's multi-agent orchestration platform. Agent Builder is its core experience: a no-code interface that lets business users build AI agents, connect them to tools and workflows, and deploy them across HR, sales, procurement, and customer care.
The platform bridges two worlds: a developer-focused ADK for code-first builders, and the visual Agent Builder for everyone else. My background as a former software developer let me translate complex agentic architecture into UI patterns non-technical users could master in minutes.
We shipped 0-to-1 in 6 months, launching at IBM THINK 2025 in the CEO and GM keynotes.
>1000%
Agent deployment growth since launch
8000+
Monthly active users building agents
<4 mins
Average agent creation time-to-value
$150M
Recognized revenue in Q3 2025
AI Agent Builder launch announcement at IBM THINK 2025
What was broken?
Three products. One Frankenstein.
Automation + Assistant + Orchestrate duct-taped together
60% never deployed
Users abandoned, no guidance, total overwhelm
$300M retention risk
5 major clients wanted agents, refused to migrate
"No-code" requiring developers
Pro-code disguised as simplicity
Weeks to configure basics
Every domain needed months of custom implementation
What users knew (Watson Assistant) vs. what we were building (Agent Builder):
Linear, deterministic flows vs. non-linear graphs with state
One assistant at a time vs. multi-agent orchestration
"If user says X, respond Y" vs. "Agent decides based on context"
Build → Test → Deploy (sequential) vs. configure pieces that interact dynamically
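For readers new to that shift, here is a minimal sketch, assuming a LangGraph-style StateGraph, of what "non-linear graphs with state" means in practice. The node names, state fields, and routing logic are illustrative only; this is not the Orchestrate implementation.

```python
# Minimal, illustrative sketch of "non-linear graphs with state" in LangGraph terms.
# Node names, state fields, and routing are hypothetical, not the Orchestrate implementation.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    question: str
    context: str
    answer: str

def retrieve(state: AgentState) -> AgentState:
    # Ground the question in knowledge (stubbed here).
    return {**state, "context": f"passages relevant to: {state['question']}"}

def respond(state: AgentState) -> AgentState:
    # Produce an answer from question + context (stubbed here).
    return {**state, "answer": f"Answer grounded in: {state['context']}"}

def route(state: AgentState) -> str:
    # The agent decides based on state, not a fixed "if user says X, respond Y" script.
    return "respond" if state["context"] else "retrieve"

graph = StateGraph(AgentState)
graph.add_node("retrieve", retrieve)
graph.add_node("respond", respond)
graph.set_conditional_entry_point(route)  # non-linear: the entry node depends on state
graph.add_edge("retrieve", "respond")
graph.add_edge("respond", END)

app = graph.compile()
print(app.invoke({"question": "How do I file an expense?", "context": "", "answer": ""}))
```

Nothing in that graph is a fixed script; the agent's path through the nodes depends on state at runtime, which is exactly what Watson Assistant users had never configured before.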
The Exploration
Three approaches. Two rejections. One insight.
1
Node-Based Canvas: Rejected ✗
I explored a visual canvas matching LangGraph's architecture: nodes, edges, drag-and-drop. Competitors were doing it. It felt architecturally honest.
I rejected it after user tests. Users froze. The empty canvas problem paralyzed them. HR managers didn't know where to start.
I saw merit in visualizing agent relationships, but shelved it for post-MVP. The 2026 Canvas Builder roadmap revisits this.
2
Rigid Stepper Wizard: Rejected ✗
I swung the opposite direction. Step 1 → Step 2 → Step 3 → … → Done. Maximum guidance.
I rejected it too. Too rigid. Users couldn't jump back to test. Couldn't iterate in real time. Worst of all, it lied about how agents actually work: they're not linear.
3
Tabbed Builder: Chosen ✓
Four tabs: Profile, Knowledge, Toolset, Behavior.
Free navigation. Scroll-based progression. Configure pieces that interact dynamically, matching how agents actually work.
I killed the cool thing to save the outcome.
The Transformation
From seven entry points to one. From weeks to configure to minutes to deploy.
New watsonx Orchestrate homepage: a familiar chat interface that also onboards new users and guides them toward building
A true no-code, intuitively guided AI agent builder experience that we pioneered
Design Principles
The Solution
One builder experience. Agentic platform from the ground up
I partnered with product leadership to reimagine watsonx Orchestrate from first principles: building a truly no-code agentic platform on LangGraph. Instead of seven entry points, we created one. Instead of weeks to configure, we designed for <10 minutes to deploy. My role extended beyond UI: I drove alignment across product, engineering, and business while personally designing the core experiences.
Four principles, one screen.
Below, see how each of the four design principles came to life, all in service of one North Star: <10 minutes to a productive agent.
Guided without Force
The 4-tab structure guides users naturally: Profile, Knowledge, Toolset, Behavior. Instead of rigid steps, users scroll through tabs, filling out sections as they go. Before they realize it, they've built a complete agent. No manual. No training videos. Just intuitive progression that aligns with the underlying agent architecture.
Design Decision: Intuitive Flow, Zero Training
Auto-scrolling page anchors instead of forced step-by-step wizards. Builders maintain control while getting clear direction, without being exposed to the underlying technical complexity.
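To make the tab-to-architecture mapping concrete, here is a hypothetical sketch of how the four tabs could correspond to a single agent definition. The field names are illustrative assumptions, not the actual Orchestrate ADK schema.

```python
# Hypothetical sketch of how the four builder tabs could map onto one agent definition.
# Field names are illustrative assumptions, not the actual Orchestrate ADK schema.
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    # Profile tab: who the agent is and what it does.
    name: str
    description: str
    # Knowledge tab: sources the agent is grounded in.
    knowledge_sources: list[str] = field(default_factory=list)
    # Toolset tab: tools and external agents it can invoke.
    tools: list[str] = field(default_factory=list)
    # Behavior tab: instructions and guidelines that constrain responses.
    instructions: str = ""
    guidelines: list[str] = field(default_factory=list)

agent = AgentSpec(
    name="Cat Facts",
    description="Answers questions about cats from an uploaded fact sheet.",
    knowledge_sources=["cat_facts.pdf"],
    tools=["web_search"],
    instructions="Answer only from the knowledge sources and stay on topic.",
    guidelines=["Decline questions unrelated to cats."],
)
print(agent)
```

Each tab fills one slice of the same definition, which is why scrolling through them in any order still produces a complete, deployable agent.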
See What You're Building
"Seeing is believing" drove a core layout decision: I wanted the preview chat to occupy 50% of the screen, equal to the builder itself. Every change instantly reflects in the preview. Builders test as they build, eliminating surprises at deployment.
Design Decision: Real-Time Preview
Live preview on the right, builder on the left, not hidden in a separate tab or modal. Constant feedback loop builds confidence.
Trust Through Transparency
Enterprise users need to understand how agents make decisions. I wanted the interface to expose reasoning traces, data sources, decision paths, and guidelines enforcement, making AI logic comprehensible to non-technical users. No black boxes.
Guidelines give builders full control over agent behavior, preventing off-topic responses and enforcing compliance rules. Reasoning traces show exactly what the agent did: which guidelines it followed, which tools it invoked, and what data it used. End users see simplified responses; builders see complete execution details.
Design Decision: Trust Through Transparency & Control
Detailed reasoning traces + guidelines system. Expandable reasoning steps in preview chat showing tool invocations, input/output, and guideline enforcement. Critical for governance, compliance, and builder confidence.
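As an illustration of what "complete execution details" could contain, here is a hypothetical trace structure; the field names and values are assumptions for illustration, not the shipped trace schema.

```python
# Hypothetical shape of a reasoning trace surfaced in the preview chat.
# Field names and values are illustrative assumptions, not the shipped trace schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ToolInvocation:
    tool: str
    input: dict
    output: dict

@dataclass
class ReasoningStep:
    thought: str                                # what the agent decided to do and why
    guideline_applied: Optional[str] = None     # which guideline (if any) constrained it
    tool_call: Optional[ToolInvocation] = None  # tool used at this step, with input/output

@dataclass
class AgentTrace:
    steps: list[ReasoningStep] = field(default_factory=list)
    final_response: str = ""                    # the simplified answer the end user sees

trace = AgentTrace(
    steps=[
        ReasoningStep(
            thought="Question is about PTO policy; search the HR knowledge base.",
            guideline_applied="Only answer from approved HR documents.",
            tool_call=ToolInvocation(
                tool="knowledge_search",
                input={"query": "PTO carryover policy"},
                output={"passages": ["Employees may carry over up to 5 days..."]},
            ),
        ),
    ],
    final_response="You can carry over up to 5 unused PTO days into the next year.",
)
```

The builder's expandable view renders the steps; the end user only ever sees the final response.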
Open Yet Secure
Customers want agents grounded in their company data while connecting to external systems. Knowledge integration is a core tabโupload and preview in seconds. I built import patterns to bridge UI and code, letting external agents and tools integrate seamlessly. Privacy-first, yet fundamentally collaborative and open for future scale.
Design Decision: Open Architecture with Private Context
Knowledge as core tab, not advanced feature. Import patterns supporting external agents (Langflow, A2A, MCP). Enterprise privacy without sacrificing extensibility.
See It In Action
Agent Builder Demo: Watch me build a "Cat Facts" agent in under 4 minutes, demonstrating guided build, knowledge grounding, real-time testing, and guardrails in action.
The Impact
We shipped in July 2025. The numbers told the story.
Average build time is 3m 49s, 62% under our goal (<10 min)
Improved NPS by 71 points, from -28.7 in 2023 to +42.6 in 2025
What the data showed
11K unique users over 6 months
Daily active users steady at 200-400
30+ enterprise customers deployed
What users told us
"Easy to use but lacking some features"
Model selection still confusing
Deployment failures surfaced too late
Wanted more 3rd-party integrations
Business & Product Impact
$150M
recognized revenue in Q3 2025
$300M-$500M
at-risk accounts retained
30+
enterprise customers deployed
8K
monthly active users building agents
<4 min
average agent creation time-to-value
75%
reduction in visual/UX bugs
40%
faster dev cycles
Customer Outcomes
The platform now powers enterprise AI across Fortune 500 companies, from HR automation to procurement to customer care. Client feedback consistently cited the intuitive experience, flexibility, and reliability as key value drivers.
IBM HR (AskHR)
94% of 10M+ annual inquiries resolved, >1M HR transactions processed
Dun & Bradstreet
360° supplier risk assessments; estimated 10-20% reduction in time spent on procurement tasks
Lockheed Martin
Agentic AI ecosystem for 10,000 engineers; replaced 46 tools with one platform
Wimbledon
16M fan interactions with the 'Match Chat' agent; processed 2.7M data points
These insights drove our next improvements.
Principle-Guided Improvements
Transparency: Model Selection
Users struggled to understand model differences: cost, deprecation, and performance impact were hidden behind tooltips. I designed a detailed modal with plain-language descriptions, pricing, and status indicators.
Trust: Deployment Summary
Users deployed blind, with no way to verify agents would work. I designed a pre-deployment summary showing configuration details, connection status, and errors. Users fix issues before deploying, ensuring success.
Open and Scalable: Dynamic Import Pattern
Dev and runtime teams wanted external connections (A2A, Langflow, MCP, custom APIs), but each had different credentials and configuration. Adding a separate tab for each would have created UI chaos.
I designed categorized selection with dynamic forms that load the right fields per connection type.
Impact: Scalable pattern that accommodates future integrations without UI bloat.
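A minimal sketch of the idea: one registry of form schemas keyed by connection type, so a new integration adds an entry rather than a new tab. The connection types and field names are assumptions for illustration, not the shipped configuration.

```python
# Minimal sketch of the dynamic import pattern: one registry of form schemas keyed by
# connection type, so a new integration adds an entry rather than a new tab.
# Connection types and field names are illustrative assumptions, not the shipped config.
CONNECTION_FORMS: dict[str, list[str]] = {
    "a2a": ["agent_url", "auth_token"],
    "mcp": ["server_url", "transport", "api_key"],
    "langflow": ["flow_url", "api_key"],
    "custom_api": ["base_url", "openapi_spec", "auth_scheme"],
}

def form_fields(connection_type: str) -> list[str]:
    """Return the fields the import form should render for the selected connection type."""
    if connection_type not in CONNECTION_FORMS:
        raise ValueError(f"Unsupported connection type: {connection_type}")
    return CONNECTION_FORMS[connection_type]

# The UI renders only the fields relevant to the chosen category:
print(form_fields("mcp"))  # ['server_url', 'transport', 'api_key']
```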
Guided Without Force: IA Strategy
As UX owner, I pushed back on the Voice and Channels teams' requests to add features directly in the builder; it would have killed our <4 min goal.
I introduced "Manage" as a new top-level IA section for advanced configs. Kept builder focused on core creation flow.
Impact: Maintained the <4 min build time, removed clutter, and kept the IA scalable for future additions.
Governance: Agent Analytics
Post-deployment monitoring was a top customer request. We designed an analytics dashboard showing usage, success rates, and tool invocations at both the tenant and individual-agent level, with access to traces for maximum governance and control. Users can confidently monitor their agents after deployment and identify issues as soon as they occur.
What's Next
2026 Vision: Canvas Builder & AI-Assisted Agent Builder
I led both initiatives based on customer requests and competitive analysis, pitching them to leadership: a visual agentic workflow canvas for complex orchestration and relationship visualization, building on the earlier node-based exploration, and AI-assisted agent generation.
Currently in development.