aiux
Ship It · Lesson 11 of 11

Agentic Conversational UI - When AI Takes Actions

8 min read · Conversational UI for Designers · Updated Apr 2, 2026

An agentic chat doesn't just answer — it sends emails, runs code, or modifies files on the user's behalf. That changes the interface fundamentally: irreversibility, trust calibration, and consent all become design problems. This lesson covers the patterns that make agentic AI safe to use.

What Makes Agentic UI Different

A regular chatbot generates text. An agentic AI generates text AND takes actions - sending emails, running code, making API calls, modifying documents. This creates new UX challenges:

  • Irreversibility - A wrong text response can be regenerated. A wrong email sent to your boss can't be unsent.
  • Trust calibration - Users need to know when to trust the AI's judgment and when to verify.
  • Transparency - Users need to see what the AI is doing, not just what it says.
  • Control spectrum - Some users want full autonomy, others want approval on every action.

These aren't abstract concerns - they're the design challenges facing every AI coding assistant (Cursor, GitHub Copilot), AI email tool (Superhuman, Spark), and AI agent platform (ChatGPT with plugins, Claude with tools).

Destination AI vs Ambient AI

The first design decision is not what the agent does — it's where it lives. A destination AI is a place users go to get help. An ambient AI is present where the work is already happening, and surfaces only when it's useful. Most agentic products work better ambient than destination, because the context-switching tax of "go to AI → get help → come back" eats into the productivity gain the agent was supposed to provide.

Destination AI
A dedicated surface the user visits ("open ChatGPT", "go to the chat tab"). Good for exploratory tasks, brainstorming, and long sessions. Costly for repetitive actions embedded in daily workflows.
Ambient AI
The agent lives inside the tools the user is already in — Google Personal Intelligence across Gmail/Photos, Figma's Make embeds inline with designs, Vercel's apply-fix button inside the editor. Lower interaction cost, but demands stronger disclosure because users haven't "summoned" the AI.

The Permission Spectrum

Every agentic feature sits somewhere on a spectrum from "human decides" to "AI decides." Your job as a designer is to know where on the spectrum the feature lives, and whether the interface reflects that honestly. The most common failure is shipping something on the right end of the spectrum with an interface that implies the middle.

Human decides
The user makes the call; the AI proposes or ranks. Perplexity Model Council (3 responses, user picks), Google Photos "Ask" suggestions, Gemini Deep Think research summaries. Safest default, lowest velocity.
Shared decision
The AI drafts, the user reviews before execution. Uber Eats cart assistant, Vercel feature flags, Figma Claude-produced code that designers keep or reject. This is the middle of the spectrum — and in practice, the middle is nearly empty. Most products skip straight from "AI suggests" to "AI acts."
AI decides
The agent acts inside a defined scope without per-action approval. Cursor autonomous coding agents, GitHub Agentic Workflows, Codex-Spark 15x code generation. Needs strong boundaries, strong disclosure, and genuine reversibility to be safe.
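One way to keep a feature's position on the spectrum honest is to encode it explicitly, so the interface can't silently drift rightward as capabilities ship. The sketch below is illustrative, not an existing API — `AutonomyLevel`, `ActionPolicy`, and `requiresConfirmation` are assumed names:

```typescript
// Encode the permission spectrum as an explicit autonomy level per
// action type, instead of implying it through the UI.
type AutonomyLevel = "human_decides" | "shared_decision" | "ai_decides";

interface ActionPolicy {
  action: string;
  level: AutonomyLevel;
  reversible: boolean;
}

// Anything left of "ai_decides" requires a user-facing confirmation
// step; "ai_decides" only skips confirmation when an undo exists.
function requiresConfirmation(policy: ActionPolicy): boolean {
  if (policy.level === "human_decides") return true;
  if (policy.level === "shared_decision") return true;
  return !policy.reversible;
}

const sendEmail: ActionPolicy = { action: "send_email", level: "ai_decides", reversible: false };
const editDraft: ActionPolicy = { action: "edit_draft", level: "ai_decides", reversible: true };
```

Note that `sendEmail` still requires confirmation even at the "AI decides" level, because it is irreversible — which is exactly the honesty the spectrum demands.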

Three conditions for safe autonomy

When you move a feature toward the right end of the spectrum, the three conditions below are what make autonomy safe rather than reckless. If any one of them breaks, pull the feature back toward "shared decision" before ship.

Scope
Is the boundary of what the agent can touch explicit and visible to the user? Safe: "this agent can only modify files inside this repository." Unsafe: "this agent can read anything the logged-in user can read."
Stakes
Does the user understand the consequences of a wrong action at this point? Safe: a documentation update the agent can revert. Unsafe: sending an email, spending money, or modifying shared state without a preview.
Reversibility
Can the action be undone, and does the UI surface the undo? Safe: git revert, draft-that-never-sent, one-click rollback. Unsafe: data shared externally, message posted to a channel, payment processed.
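The three conditions can be treated as a single gate in a design review: if any one fails, the feature falls back toward "shared decision." A minimal sketch, with assumed names (`AutonomyCheck`, `safeForAutonomy`):

```typescript
// The three conditions for safe autonomy, as a checklist type.
interface AutonomyCheck {
  scopeExplicit: boolean;    // is the boundary visible to the user?
  stakesUnderstood: boolean; // are consequences clear at the point of action?
  reversible: boolean;       // does an undo exist AND is it surfaced in the UI?
}

// All three must hold; a single failure means review-before-execute.
function safeForAutonomy(check: AutonomyCheck): boolean {
  return check.scopeExplicit && check.stakesUnderstood && check.reversible;
}
```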

Designed boundaries vs patched boundaries

The most important design decision on any agentic feature is what the AI shouldn't do. If you find yourself adding confirmation dialogs after a review meeting flagged the risk, you're patching boundaries rather than designing them — and users can feel the difference.

Designed boundary
Built in from the start. The boundary is the feature. "The repository boundary is the permission — the agent can't reach outside it, but inside that boundary it acts without asking." Feels integral, not additive.
Patched boundary
Added after the capability shipped (or after a review surfaced a concern). Confirmation modals, warning banners, off-by-default toggles. Feels like a speed bump, not a feature — and users learn to click through them.

The Disclosure Layer Framework

When the agent is ambient and partially autonomous, a single "consent event" at the start is not enough. Disclosure has to happen at four moments — before, during, controls, and after — and the design job is to make sure each moment has a real UI surface, not a buried settings toggle.

Before
What will the AI do? What will it access? Named action + consent confirmation at the point of use, not in the onboarding flow. Notion AI Meeting Notes asks "Record this meeting?" each time, not once at account creation.
During
Persistent visible signal of active operation. Pulsing waveform while listening, real-time transcript, "AI is working" badge on the cursor. The user should never wonder whether the agent is active right now.
Controls
User visibility into — and adjustment of — what the agent can see and do. Consent toggles per data source, per mode, or per session. Cursor context retention visible from the sidebar, not buried under Settings > Advanced.
After
A structured record of what was used, what was produced, and how to undo. Notion's post-session panel shows the source transcript, the extracted conclusions, and the follow-up actions the agent took. This is where audit trail and reversibility meet disclosure.
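The four moments can double as a design checklist: a session record that leaves any of them empty is a disclosure gap. The shape below is a hypothetical sketch (field names are assumptions, loosely modeled on the Notion example above):

```typescript
// A per-session disclosure record covering the four moments.
interface DisclosureRecord {
  before: { actionNamed: string; consentGivenAt: Date | null }; // consent at point of use
  during: { activeSignal: "waveform" | "badge" | "transcript" | null }; // visible "AI is active" cue
  controls: { dataSources: Record<string, boolean> }; // per-source consent toggles
  after: { sourcesUsed: string[]; produced: string[]; undoAvailable: boolean };
}

// Returns the moments that lack a real UI surface in this session.
function missingMoments(r: DisclosureRecord): string[] {
  const gaps: string[] = [];
  if (!r.before.consentGivenAt) gaps.push("before");
  if (!r.during.activeSignal) gaps.push("during");
  if (Object.keys(r.controls.dataSources).length === 0) gaps.push("controls");
  if (r.after.sourcesUsed.length === 0 && r.after.produced.length === 0) gaps.push("after");
  return gaps;
}
```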
Anti-pattern: Consent Theater

A terms-of-service checkbox is not consent design. Consent needs to match the actual scope of the action, and it needs to extend to people other than the invoking user when their data or attention is affected. Meta's smart-glasses name tag feature was a textbook case: the wearer opted in; the people being identified didn't. If your AI's actions reach beyond the person in front of it, the disclosure design has to reach there too.

The Five Agentic Design Patterns

These patterns from the AIUX framework are essential for agentic conversational UIs:

1

Intent Preview

Before the AI acts, show what it plans to do and let the user approve. "I'm going to send this email to Sarah with the Q3 report attached. Should I proceed?" This is the most critical pattern for building trust.

2

Plan Summary

For multi-step tasks, show the full plan upfront. Users can approve, modify, or reject the plan before execution begins.

3

Agent Status Monitoring

While the AI works on a multi-step task, show real-time progress. Step indicators, current action, time estimates, and the ability to pause or cancel.

4

Escalation Pathways

Define when the AI should stop acting autonomously and ask for human input. High-stakes decisions, ambiguous requests, and unfamiliar territory should trigger escalation.

5

Trust Calibration

Gradually increase the AI's autonomy as users build trust. Start with "ask before every action" and let users unlock "act then notify" for routine tasks.
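One possible mechanic for this unlock: track consecutive approvals per action type and promote to "act then notify" past a threshold, resetting on any rejection. The threshold of 5 is an arbitrary assumption for illustration, not a standard:

```typescript
// Trust state for one action type (e.g. "archive_email").
interface TrustState {
  approvals: number;     // consecutive approved runs
  autoApproved: boolean; // has "act then notify" been unlocked?
}

// A rejection resets trust to zero; repeated approvals unlock autonomy.
function recordOutcome(s: TrustState, approved: boolean, threshold = 5): TrustState {
  if (!approved) return { approvals: 0, autoApproved: false };
  const approvals = s.approvals + 1;
  return { approvals, autoApproved: approvals >= threshold };
}
```

The asymmetry is deliberate: trust is earned slowly and lost instantly, mirroring how users actually calibrate.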

Designing the Action Confirmation Flow

The most common agentic pattern in conversational UI is the confirmation flow:

Intent Preview — action confirmation

This is a structured card within the conversation flow - not a modal dialog. The user can confirm with one click, edit details inline, or cancel and rephrase.
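The card's behavior can be modeled as a small state machine, so confirm, edit, and cancel are the only legal transitions and the executing state is terminal. State and event names below are illustrative:

```typescript
// States of the inline confirmation card.
type ConfirmState = "previewing" | "editing" | "executing" | "cancelled";
type ConfirmEvent = "confirm" | "edit" | "save_edit" | "cancel";

// Pure reducer: illegal events leave the state unchanged.
function confirmReducer(state: ConfirmState, event: ConfirmEvent): ConfirmState {
  switch (state) {
    case "previewing":
      if (event === "confirm") return "executing";
      if (event === "edit") return "editing";
      if (event === "cancel") return "cancelled";
      return state;
    case "editing":
      if (event === "save_edit") return "previewing";
      if (event === "cancel") return "cancelled";
      return state;
    default:
      return state; // "executing" and "cancelled" are terminal
  }
}
```

Making "executing" terminal in the reducer is the code-level version of the golden rule: once the user confirms, the only escape hatch is the undo path, not a mid-flight cancel of the confirmation card itself.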

Plan Summary — multi-step execution

As steps complete, checkboxes fill in and the current step highlights - giving users real-time visibility into agent progress.
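The step states can be derived rather than stored: given the total step count and how many have completed, each step is done, current, or pending. A minimal sketch (function name assumed):

```typescript
// Derive per-step display status from progress, instead of mutating
// each step's state as the agent works.
type StepStatus = "done" | "current" | "pending";

function stepStatuses(total: number, completed: number): StepStatus[] {
  return Array.from({ length: total }, (_, i) =>
    i < completed ? "done" : i === completed ? "current" : "pending"
  );
}
```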

The Action Audit Trail

Every action the AI takes should be logged and reviewable. The Action Audit Trail pattern shows users exactly what happened:

  • What action was taken
  • When it happened
  • What data was affected
  • How to undo it (if possible)

In a conversational UI, this can be a collapsible section under each AI action message: "View details" expands to show the full action log. For products with many automated actions, consider a dedicated activity feed or history panel.
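An audit entry only needs to carry those four facts; the sketch below is one assumed shape, with `undo` set to `null` when the action cannot be reversed so the UI can label it honestly:

```typescript
// One entry in the action audit trail.
interface AuditEntry {
  action: string;
  timestamp: Date;
  affectedData: string[];
  undo: (() => void) | null; // null = irreversible, and the UI should say so
}

// Collapsed one-line summary for the "View details" row.
function renderSummary(e: AuditEntry): string {
  const undoNote = e.undo ? "undo available" : "irreversible";
  return `${e.action} @ ${e.timestamp.toISOString()} (${undoNote})`;
}
```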

The golden rule of agentic conversational UI: the AI should never take an irreversible action without explicit user confirmation. For reversible actions (like drafting text), act first and let users undo. For irreversible actions (like sending an email), always preview and confirm.

Test your conversational UI: Use the free [AI UX Audit Tool](/audit) to score your interface against all 36 AIUX design patterns - including the agentic patterns covered in this lesson. Upload a screenshot and get instant feedback on which patterns are strong, weak, or missing.

Further reading
  • Who is designing the boundary for AI? · Medium

    The permission spectrum mapped against 12 real products, the three conditions for safe autonomy, and why patched boundaries keep failing.

  • AI learned to shut up. It forgot to say what it was doing. · Medium

    The four disclosure moments (Before / During / Controls / After) with Notion Meeting Notes as the worked example and Microsoft Copilot as the failure case study.

  • AI is finally learning to shut up · Medium

    Why ambient AI beats destination AI for most agentic tasks — the context-switching tax and the economics of interaction cost.

You finished Conversational UI.

Get new guides and daily AI UX patterns in your inbox. No spam, unsubscribe anytime.

