aiux
36 Figma Make Prompts

Figma Make Prompts for
AI Design Patterns

Copy-paste ready prompts. Customization tips included.

Categories

36 prompts

Contextual Assistance

Human-AI Collaboration

Design an AI assistance interface with contextual suggestions, similar to Gmail Smart Compose or GitHub Copilot. Show subtle, inline suggestion chips that appear as the user types, positioned near the cursor or input area. Include: a text input area or editor, ghost text preview for AI suggestions (low opacity), inline suggestion chips with accept/dismiss actions, and keyboard shortcut hints (Tab to accept, Esc to dismiss). Style: Clean, modern, minimal interruption to user flow. Use subtle shadows, smooth transitions, and non-intrusive colors. Platform: Web application, responsive layout.


Progressive Disclosure

Natural Interaction

Design an interface with progressive disclosure that reveals AI features gradually. Create 3 states:

1. **Initial view**: Show essential content only (summary, key action)
2. **Expanded view**: Reveal more details (AI insights, key points)
3. **Full view**: Show all options (actions, settings, advanced features)

Include clear expand/collapse triggers (chevrons, "Show more" buttons) and smooth transitions between states.


Human-in-the-Loop

Human-AI Collaboration

Design an interface where humans can review and approve AI suggestions before they're applied. Create a review card showing:

1. **AI Suggestion**: Display the AI-generated content/action with a confidence indicator
2. **Action Buttons**: Clear Approve/Reject/Modify options
3. **Context**: Brief explanation of why AI made this suggestion
4. **Override Option**: Allow users to edit or provide their own input

Show visual distinction between AI suggestions (blue/purple) and human-approved items (green).


Explainable AI (XAI)

Trustworthy & Reliable AI

Design an explainable AI interface that makes decision-making transparent. Create a decision explanation card showing:

1. **Decision Output**: The AI's conclusion or recommendation prominently displayed
2. **Confidence Score**: Visual indicator (progress bar/percentage) showing certainty level
3. **Key Factors**: Top 3-5 factors that influenced the decision with visual weights
4. **Data Sources**: Citations or references to where information came from
5. **Alternative Options**: Other options considered with brief explanations

Use visual hierarchy to show the most important factors first. Include an option to "See detailed explanation" for users who want deeper insights.


Conversational UI

Natural Interaction

Design a conversational AI interface that feels natural and engaging. Create a chat interface showing:

1. **Message Area**: Clear distinction between user and AI messages with appropriate spacing
2. **AI Personality**: Subtle visual elements that reflect brand personality (avatar, colors, tone)
3. **Context Indicators**: Show when AI is typing, thinking, or has understood the request
4. **Input Options**: Text input with suggested prompts or quick actions
5. **Conversation History**: Easy access to previous messages with smooth scrolling

Include visual cues for conversation state (listening, processing, responding). Add example starter prompts to guide users.


Adaptive Interfaces

Adaptive & Intelligent Systems

Design an adaptive interface that learns from user behavior. Create a dashboard or workspace showing:

1. **Personalized Layout**: Frequently used features prominently displayed
2. **Smart Widgets**: Modules that reorder based on usage patterns
3. **Adaptation Indicator**: Subtle visual cue showing the interface has learned (e.g., "✨ Customized for you")
4. **Quick Access**: Most-used actions in easy-to-reach locations
5. **Reset Option**: Clear way to revert to default layout or customize manually

Show before/after states to illustrate how the interface adapts over time. Include a settings panel for users to control adaptation preferences.


Multimodal Interaction

Natural Interaction

Design a multimodal interface that seamlessly combines voice, touch, and visual interactions. Create an interaction screen showing:

1. **Voice Input**: Microphone button with visual feedback (sound waves, listening state)
2. **Touch Controls**: Interactive elements that respond to taps, swipes, and gestures
3. **Text Input**: Keyboard option as an alternative to voice
4. **Visual Output**: Results displayed in scannable format (cards, lists, images)
5. **Mode Indicators**: Clear visual cues showing which input mode is active

Show how users can combine modes (e.g., "Show me [touch image] similar to this"). Include accessibility alternatives for each interaction mode.


Guided Learning

Adaptive & Intelligent Systems

Design a guided learning interface that helps users master complex features step by step. Create a tutorial flow showing:

1. **Progress Tracker**: Visual indicator showing current step and total steps (e.g., "Step 2 of 5")
2. **Highlighted Element**: Spotlight or highlight on the specific UI element being taught
3. **Instruction Card**: Clear, concise explanation with action to take
4. **Next/Skip Controls**: Easy navigation with "Next", "Back", and "Skip tutorial" options
5. **Contextual Help**: Tooltip or hint bubble pointing to relevant interface elements

Show adaptive difficulty with beginner vs. advanced paths. Include a "Try it yourself" interactive moment where users practice the concept.


Augmented Creation

Human-AI Collaboration

Design an AI-powered creation interface where AI assists without taking over. Create a content editor showing:

1. **Creation Canvas**: Main workspace for user content (text, design, code)
2. **AI Suggestions Panel**: Side panel with AI-generated alternatives or improvements
3. **Accept/Modify Controls**: Easy buttons to accept, edit, or dismiss AI suggestions
4. **Collaboration Indicator**: Visual distinction between human-created and AI-suggested content
5. **Inspiration Mode**: Toggle for AI to generate multiple creative options

Show the workflow: User creates → AI suggests improvements → User chooses → Final output. Include clear attribution showing what's AI-assisted vs. human-created.


Responsible AI Design

Trustworthy & Reliable AI

Design a responsible AI decision interface similar to LinkedIn's AI-powered recommendations or Microsoft's Responsible AI dashboard. Show an AI recommendation card with transparency layers. Include: main decision/recommendation display, expandable 'How this was decided' section showing key factors with visual weights, bias detection indicator (color-coded badge), data source attribution, user control panel with override and feedback buttons, and audit trail timeline. Style: Professional, trustworthy, high-contrast for accessibility. Use blues/greens for trust, clear typography, WCAG AAA compliant. Platform: Web application, responsive design.


Error Recovery & Graceful Degradation

Trustworthy & Reliable AI

Design an error recovery interface inspired by ChatGPT's 'at capacity' error, GitHub Copilot's offline state, or Grammarly's error handling. Show a friendly error state with clear recovery paths. Include: (1) Prominent but non-alarming error message with warm-colored icon (amber/yellow for capacity/service issues), (2) Plain-language explanation of what happened and why, (3) 'Your work is saved' indicator with green checkmark to reduce user anxiety, (4) 2-3 recovery action buttons clearly labeled (e.g., 'Try Again', 'Wait in Queue', 'Use Offline Mode'), (5) Optional: Queue position counter or estimated wait time, (6) Tip or note about premium/priority access if applicable. Style: Calm, transparent, solution-focused. Use amber/yellow for warnings, green for saved state indicators, black/dark buttons for primary actions. Avoid red unless it's a critical system failure. Platform: Modern web application, responsive design.


Collaborative AI

Human-AI Collaboration

Design a collaborative interface similar to GitHub Copilot or Figma AI, showing human-AI co-creation. Show split workspace with human and AI contributions. Include: main work area (document/canvas/code editor), AI suggestion panel positioned alongside (not overlaying), accept/reject/modify buttons for each suggestion, real-time collaboration indicators showing who (human/AI) is working, version history timeline with color-coded human vs AI edits, and attribution tags on AI-generated content. Style: Balanced, equal visual weight for human and AI. Use distinct but harmonious colors for human (blue) vs AI (purple) contributions. Platform: Web application, supports real-time updates.


Ambient Intelligence

Adaptive & Intelligent Systems

Design an ambient intelligence interface similar to Google Nest or Apple HomeKit, showing AI that works quietly in the background. Show a dashboard with subtle indicators and automatic adjustments. Include: ambient status display with minimal visual elements (small badges, soft glows), environmental sensors visualization (temperature, motion, light), automatic action history showing what AI changed without user input, contextual cards that appear only when relevant, quiet notification system (no intrusive alerts), and manual override controls that appear on hover. Style: Minimal, calm, unobtrusive. Use soft colors, subtle animations, lots of white space. Avoid bright alerts unless critical. Platform: Web/mobile app, works on tablets and smart displays.


Safe Exploration

Trustworthy & Reliable AI

Design a safe exploration interface similar to Figma's branching or Google Docs version history, allowing users to experiment without risk. Show a sandbox environment with safety indicators. Include: main workspace with clear 'Safe Mode' or 'Sandbox' indicator badge, preview area showing results of experimental actions, undo/redo controls prominently displayed, 'Save to Real' or 'Apply Changes' button (disabled by default), comparison view showing before/after or current vs experimental, and safety guardrails (warnings for risky actions, confirmation dialogs). Style: Playful yet safe. Use sandbox/lab imagery, clear boundaries between safe/live areas. Green for safe zone, amber for boundary warnings. Platform: Web application, responsive.


Predictive Anticipation

Adaptive & Intelligent Systems

Design a predictive interface similar to Gmail Smart Reply or Tesla Autopilot, showing AI anticipating user needs. Show proactive suggestions that appear before being asked. Include: main content area with user's current task, predictive suggestion cards appearing at relevant moments (subtle slide-in animation), confidence indicators for each prediction (percentage or visual bars), quick action buttons to accept/dismiss predictions, context panel showing why AI made this prediction, and learning feedback mechanism ('Was this helpful?'). Style: Smart, helpful, not pushy. Use soft blues/purples for AI predictions, smooth animations (300-400ms), ghost buttons for low-confidence suggestions. Platform: Web application, mobile-friendly.


Confidence Visualization

Trustworthy & Reliable AI

Design a confidence indicator interface similar to Grammarly's suggestion confidence or weather app precipitation percentages. Show AI recommendations with clear confidence levels. Include: main recommendation card or suggestion, visual confidence indicator (progress bar, percentage badge, or color-coded icon), tooltip explaining what confidence means, alternative suggestions with lower confidence displayed below, explanation of factors affecting confidence, and threshold indicator showing 'High confidence' (>80%), 'Medium' (50-80%), 'Low' (<50%). Style: Clear, data-driven, trustworthy. Use color gradients (green for high, amber for medium, red for low confidence), clean typography, data visualization elements. Platform: Web application, responsive.
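The threshold bands in this prompt can be sketched as a small helper for consistency across components; a minimal sketch assuming a 0-1 score and the High (>80%), Medium (50-80%), Low (<50%) bands above:

```python
def confidence_band(score: float) -> tuple:
    """Map a 0-1 confidence score to a label and color token.

    Bands follow the prompt: High > 80%, Medium 50-80%, Low < 50%.
    """
    if score > 0.80:
        return ("High confidence", "green")
    if score >= 0.50:
        return ("Medium confidence", "amber")
    return ("Low confidence", "red")
```

Keeping the mapping in one place means the badge, tooltip, and alternative-suggestion list all agree on where "medium" ends and "low" begins.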


Feedback Loops

Human-AI Collaboration

Design a feedback collection interface for AI systems that captures user sentiment on responses, suggestions, or code outputs. Draw inspiration from Claude Code Feedback (rating code suggestions) and ChatGPT Response Feedback (rating conversation responses). Include: AI output/response container, prominent feedback question ('How is the AI doing?'), binary feedback buttons (👍 Helpful / 👎 Not Helpful) with selected state styling, smooth animated confirmation message ('✓ Thank you for helping us improve!') that appears and disappears, feedback counter displaying total submissions ('X responses received • We're learning from your feedback'), subtle visual hierarchy with gray color palette (gray-900 for selected state, gray-100 for default, gray-50 background). Style: Minimal, professional, non-intrusive. Focus on clarity and quick interaction. Smooth transitions and micro-interactions. Platform: Web/mobile responsive design.


Graceful Handoff

Human-AI Collaboration

Design a human handoff interface similar to chatbot-to-agent transitions or automated support escalation. Show smooth transition from AI to human assistance. Include: AI conversation or task interface, clear indicator that handoff is needed ('Let me connect you with a specialist'), progress indicator showing handoff status (Preparing handoff → Finding specialist → Connected), context summary showing what information will be shared with human, estimated wait time display, option to continue with AI while waiting, and seamless transition screen when human takes over. Style: Smooth, reassuring, professional. Use calming animations, clear status updates, human imagery when appropriate. Platform: Web application, live chat interface.


Context Switching

Natural Interaction

Design a context-switching interface similar to Slack's workspace switcher or browser tab management with session restore. Show seamless transitions between different AI contexts or tasks. Include: context carousel or sidebar showing active contexts with thumbnails, quick-switch panel with keyboard shortcuts (Cmd+K style), each context card showing task name, last activity timestamp, and preview thumbnail, auto-save indicator showing context is preserved, visual transition animation when switching (slide/fade), and 'Resume where you left off' messaging. Style: Organized, efficient, minimal cognitive load. Use clear visual separation between contexts, smooth animations (200-300ms), consistent iconography. Platform: Web application, keyboard-friendly.


Intelligent Caching

Performance & Efficiency

Design a smart caching interface similar to Spotify's offline downloads or Google Maps offline areas. Show AI that pre-loads content intelligently based on usage patterns. Include: main content area with seamless loading (no spinners for cached content), cached content indicator (small offline icon or badge), settings panel to configure cache preferences (storage limit, what to cache), cache status dashboard showing storage used, what's cached, and cache hit rate, smart suggestions for what to cache based on usage ('Cache your frequently used items?'), and background sync indicator when updating cached content. Style: Subtle, behind-the-scenes, efficient. Use minimal UI, subtle badges, progress indicators only when necessary. Platform: Web/mobile application, works offline.
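The cache-status ideas above (hit rate, smart suggestions for what to cache) imply some usage bookkeeping behind the UI. A minimal sketch of that bookkeeping, where the class name, method names, and the top-n suggestion heuristic are all illustrative assumptions, not a real API:

```python
from collections import Counter

class UsageCache:
    """Track item accesses to drive a hit-rate display and cache suggestions."""

    def __init__(self, cached=None):
        self.cached = set(cached or [])  # item ids currently cached
        self.usage = Counter()           # access counts per item
        self.hits = 0
        self.misses = 0

    def access(self, item: str) -> bool:
        """Record an access; return True if it was served from cache."""
        self.usage[item] += 1
        if item in self.cached:
            self.hits += 1
            return True
        self.misses += 1
        return False

    def hit_rate(self) -> float:
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

    def suggest(self, n: int = 3) -> list:
        """Most frequently used items that are not yet cached."""
        return [i for i, _ in self.usage.most_common() if i not in self.cached][:n]
```

The `suggest` output would feed the "Cache your frequently used items?" prompt, and `hit_rate` the cache status dashboard.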


Progressive Enhancement

Performance & Efficiency

Design a progressive enhancement interface similar to image loading on Pinterest or video quality adaptation on YouTube. Show content that starts basic and enhances progressively. Include: initial basic content view (low-res placeholder, skeleton screens), enhancement indicator showing content is upgrading (subtle shimmer or progress), toggle to control enhancement level (Basic/Standard/Enhanced), bandwidth/capability indicator explaining why certain level is selected, settings to prefer speed vs quality, and graceful fallback messaging when enhancement isn't available ('Showing basic version - enhance available on WiFi'). Style: Smooth, adaptive, quality-conscious. Use skeleton screens, smooth transitions between quality levels, clear quality badges. Platform: Web/mobile application, adaptive.


Privacy-First Design

Privacy & Control

Design a privacy-first AI settings interface inspired by Apple's Privacy settings, Signal's privacy controls, and DuckDuckGo's privacy dashboard. Create a comprehensive privacy control panel showing: (1) Privacy mode toggle switches with clear on/off states and status indicators (enabled/disabled), (2) Privacy level badges showing 'High Privacy' (green), 'Medium Privacy' (amber), 'Low Privacy' (red) with visual impact, (3) Expandable sections for each privacy setting with detailed explanations of what data is used and why, (4) Visual data flow diagram showing the path: Device → Encryption → Cloud with icons and clear flow direction, (5) Trade-off warnings and explanations (e.g., 'Enabling on-device processing = faster responses but less personalization'), (6) Data categories panel clearly showing which types of data are stored, processed locally, or not collected (e.g., Conversations, Location, Device Info, Usage Patterns), (7) Action buttons for 'Export My Data', 'Delete All Data', and 'View Privacy Policy', (8) Progress indicators or metrics showing data savings or privacy score. Style: Clean, trustworthy, professional, transparent. Use green for privacy-positive actions, red for privacy risks, subtle animations for state changes. Typography: Clear hierarchy with readable labels. Platform: Web application, fully responsive for mobile and desktop.


Selective Memory

Privacy & Control

Design a memory management interface for an AI assistant that gives users explicit control over what the AI remembers. Create a settings screen or modal with these key elements:

**Memory Dashboard:**
- A searchable list/grid showing all stored memories with timestamps and context
- Each memory card displays: the information stored, when it was learned, how many times it's been referenced, and memory category
- Visual indicators for memory types: important (green), temporary (yellow), forgotten/ignored (gray)

**Memory Controls:**
- Individual memory actions: Edit, Categorize, Delete with confirmation
- Bulk actions: Select multiple memories to categorize or delete at once
- Quick filters: Show all/important/temporary memories
- "Clear All" option with a serious warning dialog

**Memory Categories:**
- Toggle switches or buttons to categorize each memory:
  - "Remember Always" (important): green checkmark icon
  - "Temporary" (auto-delete after X days): clock icon with countdown
  - "Forget This": trash icon with confirmation
- Visual badge system showing memory category at a glance

**Transparency Features:**
- "How This Affects AI" tooltip showing how specific memories influence responses
- Usage counter showing how often each memory has been referenced
- Auto-memory indicator showing which memories were automatically captured vs. user-added

**Empty States:**
- Helpful illustration when no memories exist
- Clear explanation of how memory collection works
- CTA to enable memory features if disabled

Use a privacy-focused design with clear iconography, gentle colors (greens for important, yellows for temporary, reds for delete), and obvious confirmation dialogs for destructive actions. Prioritize transparency and user control.


Universal Access Patterns

Accessibility & Inclusion

Design an accessible AI interface that ensures equitable access for all users regardless of ability, language, or expertise level. Create a comprehensive interface with these inclusive features:

**Multi-Modal Input Options:**
- Multiple ways to interact with the AI: text input, voice input (microphone icon), image upload, and keyboard shortcuts
- Clear visual indicators showing which input modes are active
- Easy toggle between input methods without losing context
- Large touch targets (minimum 44x44pt) for motor accessibility

**Accessibility Controls:**
- Prominent accessibility settings button in the header/navigation
- Settings panel with these options:
  - Text size controls (Small, Medium, Large, Extra Large) with live preview
  - High contrast mode toggle
  - Reduce motion toggle for users sensitive to animations
  - Screen reader optimization mode
  - Keyboard navigation mode with visible focus indicators
- Visual and audio feedback for all interactions

**Language & Localization:**
- Language selector with flag icons and language names in native script
- Support indicator showing "Available in 100+ languages"
- Right-to-left (RTL) layout support preview
- Translation quality indicator for AI responses

**Assistive Technology Integration:**
- Clear ARIA labels visible in a secondary view
- Skip navigation links for keyboard users
- Alt text indicators showing all images have descriptions
- Captions toggle for any audio/video content
- Semantic heading structure visualization (H1, H2, H3 hierarchy)

**Complexity Adjustment:**
- "Simplify Interface" toggle that removes advanced features
- Beginner/Intermediate/Advanced mode selector
- Tooltips and help text that can be toggled on/off
- Progressive disclosure of complex features

**Visual Design Standards:**
- WCAG AAA color contrast ratios (minimum 7:1 for text)
- Clear focus states with 3px blue outline
- No color-only indicators (always paired with icons or text)
- Resizable text without breaking layout (up to 200%)
- Generous spacing and padding for easy targeting

**Status & Feedback:**
- Clear loading states with descriptive text, not just spinners
- Error messages that explain what happened and how to fix it
- Success confirmations with both visual and text indicators
- Progress indicators for long-running tasks

Include an "Accessibility Score" badge showing compliance level (A, AA, AAA) and a "Test with Assistive Tech" preview mode. Use inclusive iconography and avoid cultural assumptions.


Crisis Detection & Escalation

Safety & Harm Prevention

Crisis Detection & Escalation Pattern

WHAT IT IS: A multi-layered safety system that identifies crisis signals (self-harm, suicidal ideation) across 4 detection layers and immediately escalates to professional resources, regardless of how the crisis is framed.

WHY IT MATTERS: Users in crisis may hide their situation using "research," "hypothetical," or "for a story" framing. A single detection layer (keywords only) misses context. Multi-layer detection catches direct keywords, contextual patterns, behavioral escalation, and manipulation bypass attempts.

REAL CASE: Zane Shamblin spent 4+ hours with ChatGPT expressing suicidal intent. The system continued engaging encouragingly instead of detecting the crisis and providing resources. This was preventable with proper escalation.

THE 4 DETECTION LAYERS:
1. Direct Keywords: "suicide," "kill myself," "end it all," "self harm"
2. Contextual Patterns: "nobody would miss me" + history of negative messages
3. Behavioral Indicators: Extended session length + repeated dark themes
4. Manipulation Detection: Crisis framed as "research," "story," "game," "hypothetical"

IMPLEMENTATION:
- All 4 layers must be able to trigger independently (multiple confirmations raise confidence)
- When a crisis is detected: stop normal conversation immediately
- Display resources prominently: 988, Crisis Text Line, emergency services
- Never explain the detection method (prevents manipulation learning)
- Track severity (low/medium/high/critical) based on layer confidence
- Always escalate to human support

DESIGN IMPLICATIONS: When a crisis is detected, interrupt the conversation naturally in the chat flow. Show resources prominently and compassionately. Don't feel punitive or accusatory. Allow users to access help without friction.
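One way the layer confidences described above could roll up into the low/medium/high/critical severity levels; the specific thresholds and the +0.1 agreement bonus are illustrative assumptions, not a validated scoring scheme:

```python
# Illustrative thresholds, highest first.
SEVERITY_THRESHOLDS = [(0.95, "critical"), (0.80, "high"), (0.65, "medium")]

def crisis_severity(signals: dict) -> str:
    """Combine per-layer confidences (0-1) into a severity level.

    Layers: keywords, context, behavior, manipulation. Any layer can
    trigger on its own; agreement across layers raises severity.
    """
    triggered = [v for v in signals.values() if v >= 0.5]
    if not triggered:
        return "none"
    # Strongest single layer, bumped slightly for each additional layer.
    score = max(triggered) + 0.1 * (len(triggered) - 1)
    for threshold, label in SEVERITY_THRESHOLDS:
        if score >= threshold:
            return label
    return "low"
```

In a real system this would run server-side alongside the escalation flow, never exposing the scores or thresholds to the user.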


Session Degradation Prevention

Safety & Harm Prevention

Session Degradation Prevention Pattern

WHAT IT IS: A safety system that prevents AI boundaries from eroding during long conversations. Instead of guardrails weakening over time, they strengthen. Session limits and mandatory breaks force reflection and prevent unhealthy dependency.

WHY IT MATTERS: Long conversations degrade AI safety boundaries: users maintain harmful conversations longer, the system becomes more agreeable, and guardrails weaken. ChatGPT maintained 4+ hour harmful conversations with progressive boundary erosion.

REAL CASE: A ChatGPT user engaged for 4+ hours on self-harm topics. With each exchange, boundaries weakened and the system became more accepting. No hard limits, no breaks, no reality checks = preventable escalation.

HOW IT WORKS:
1. Track session duration from the start
2. Strengthen checks as time increases (the opposite of normal degradation)
3. Soft limits: warn at 50% and 75% (yellow → orange)
4. Hard limit: force a break at 100% (red), non-negotiable
5. After the break: show a context summary; the user can resume
6. Shorter limits for sensitive topics (mental health 30 min, crisis 15 min)

IMPLEMENTATION:
- Visible timer shows elapsed + remaining time
- Progressive color warnings signal the approaching limit
- Mandatory breaks, not suggestions
- Save context for a safe return
- Reset boundaries after the break
- Server-side tracking (not client-side)

DESIGN IMPLICATIONS: The timer must be visible but not alarming in the normal state. The break screen should feel restorative, offering activities and resources. Clearly communicate why the break is happening.
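The soft/hard limit logic described above is simple enough to sketch directly. The 50%/75%/100% thresholds and the topic-specific 30/15-minute limits come from the pattern text; the 60-minute general limit and the function shape are illustrative assumptions:

```python
# Per-topic limits in minutes. The sensitive-topic values come from the
# pattern; the 60-minute general limit is an illustrative assumption.
TOPIC_LIMITS_MIN = {"general": 60, "mental_health": 30, "crisis": 15}

def session_check(elapsed_min: float, topic: str = "general") -> dict:
    """Return the timer state: soft warnings at 50% and 75%, hard break at 100%."""
    limit = TOPIC_LIMITS_MIN[topic]
    frac = elapsed_min / limit
    remaining = max(limit - elapsed_min, 0)
    if frac >= 1.0:
        return {"state": "break", "color": "red", "remaining_min": 0}
    if frac >= 0.75:
        return {"state": "warn", "color": "orange", "remaining_min": remaining}
    if frac >= 0.50:
        return {"state": "warn", "color": "yellow", "remaining_min": remaining}
    return {"state": "ok", "color": None, "remaining_min": remaining}
```

Per the pattern, `elapsed_min` would be tracked server-side, with the returned state driving the visible timer and break screen.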


Anti-Manipulation Safeguards

Safety & Harm Prevention

Anti-Manipulation Safeguards Pattern

WHAT IT IS: A system that detects harmful intent beyond surface framing. Users try to bypass safety using "research," "fiction," or "hypothetical" excuses. Real safety requires catching the actual intent underneath.

WHY IT MATTERS: Manipulation tactics are sophisticated. A 16-year-old convinced ChatGPT to provide harmful information by framing it as "research for a story." Without intent detection, AI systems enforce rules only on the surface text, not on what users actually want.

REAL CASE: Adam Raine (16) used fiction/research framing to bypass ChatGPT's safety guardrails and received harmful content. The system evaluated framing, not intent. Result: preventable harm.

HOW IT WORKS:
1. Listen beyond words: understand the actual request intent regardless of framing
2. Detect patterns: watch for gradual escalation and repeated bypass attempts
3. Apply rules consistently: "research," "hypothetical," and "roleplay" get the same response as a direct request
4. Respond firmly: the boundary is non-negotiable; offer alternatives, not explanations
5. Never reveal the method: don't explain HOW you detected the bypass (it teaches circumvention)

IMPLEMENTATION:
- Semantic analysis catches intent patterns, not just keywords
- Escalation tracking: first attempt vs. repeated manipulation attempts
- Consistent messaging: the same boundary response regardless of framing
- Non-explanatory: "I can't help with that" (not "because you tried X")
- Layered detection: multiple signals increase confidence before blocking

DESIGN IMPLICATIONS: Boundaries must feel firm but not hostile. Don't reveal detection methods. Offer genuine alternatives when possible. Show escalation visually (Level 1 → 4) but keep messages brief and respectful.
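The escalation-tracking and consistent-messaging points above can be sketched together: the level rises with repeated attempts (for the Level 1 → 4 visual), while the refusal text never changes. The class and method names are illustrative assumptions; the detection itself (semantic analysis) is out of scope here:

```python
class BypassTracker:
    """Track detected bypass attempts in a session and escalate Level 1 → 4.

    The refusal text is identical at every level, so the response
    never reveals how the attempt was detected.
    """
    REFUSAL = "I can't help with that."
    MAX_LEVEL = 4

    def __init__(self):
        self.attempts = 0

    def record_attempt(self):
        """Log one detected bypass attempt; return (level, message)."""
        self.attempts += 1
        level = min(self.attempts, self.MAX_LEVEL)
        return level, self.REFUSAL
```

The returned level drives only the visual escalation indicator; the message shown to the user stays the same, as the pattern requires.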


Vulnerable User Protection

Safety & Harm Prevention

Vulnerable User Protection Pattern

WHAT IT IS: A graduated protection system that identifies vulnerable users (minors, mental health crises, dependency patterns) and applies appropriate safeguards. Different users need different protections based on their specific vulnerabilities.

WHY IT MATTERS: AI systems can harm vulnerable users in three ways: enabling inappropriate content for minors, replacing human therapists, and creating unhealthy emotional dependency. Without graduated protections, systems treat all users the same and miss risk signals.

REAL CASE: Replika allowed romantic interactions with minors and created dependency patterns where adult users reported emotional attachment stronger than real relationships. The app provided no age-specific protections, no "I'm an AI, not a therapist" disclosures, and no monitoring for unhealthy attachment.

HOW IT WORKS:
1. Identify vulnerabilities: age signals, mental health keywords, usage patterns, isolation indicators
2. Apply graduated protections: minors get stricter limits than adults; crisis users get resource banners
3. Remind users regularly: this is an AI, not a friend/therapist/romantic partner (not just once)
4. Provide human resources proactively: don't wait for users to ask
5. Monitor and intervene: catch unhealthy attachment and offer alternatives

IMPLEMENTATION:
- Age verification: require email confirmation, not self-report
- Mental health signals: non-dismissible crisis resource banners
- Dependency detection: usage frequency, emotional language, relationship framing
- Clear disclosures: "I'm an AI," "I'm not a therapist," "I'm not your friend"
- Graduated protection levels: different rules for minors vs. adults vs. crisis states
- Regular reminders: periodic re-disclosure as the relationship naturally warms

DESIGN IMPLICATIONS: Protections must feel supportive, not restrictive. Be transparent about limitations and why protections exist. Show human resources first, before explaining what's wrong. Respect user autonomy while ensuring vulnerable populations aren't harmed.


Autonomy Spectrum

Human-AI Collaboration

Design a settings panel for controlling an AI agent's autonomy level. Show: (1) A labeled spectrum/slider with 4 levels: Suggest Only, Propose & Confirm, Act & Notify, Full Autonomy, (2) Per-task-type settings showing different autonomy levels for different domains (e.g., Email: Act & Notify, Calendar: Propose & Confirm, Finance: Suggest Only), (3) Current trust score based on agent performance history, (4) A 'Recent actions' preview showing what the agent would do at each level. Style: Professional, settings-panel aesthetic. Use color coding to indicate risk level at each autonomy tier.


Intent Preview

Human-AI Collaboration

Design an AI agent interface that shows a plan preview before taking action. Include: (1) A task description showing what the user asked, (2) A sequential step list showing 3-5 planned actions with plain-language descriptions, (3) Visual indicators marking reversible vs. irreversible actions (green vs. amber), (4) Edit controls for each step (modify, remove, reorder), (5) Approve/Reject buttons with a 'Always approve this type' checkbox, (6) An estimated time indicator. Style: Clean, trustworthy, minimal anxiety. Use clear visual hierarchy to draw attention to irreversible actions.


Plan Summary

Trustworthy & Reliable AI

Design an AI agent plan summary interface for a research task. Include: (1) A goal interpretation header restating the user's request, (2) A strategy explanation section with the agent's chosen approach and why, (3) A subtask checklist with progress indicators showing completed, in-progress, and remaining steps, (4) An editable assumptions section where users can correct the agent's assumptions, (5) Resource and time estimates, (6) A 'Save as template' option. Style: Structured, professional, easy to scan. Use a clear visual hierarchy with indented subtasks, progress bars, and status icons.


Action Audit Trail

Trustworthy & Reliable AI

Design a timeline view showing an AI agent's completed actions. Include: (1) Grouped entries by task/goal with expandable detail, (2) Timestamps and duration for each action, (3) Color-coded reversibility badges (green: reversible, amber: partial, red: irreversible), (4) Inline undo buttons for reversible actions, (5) Before/after diff view for document modifications, (6) Filter controls by action type, time range, and status. Style: Clean, log-style with high information density but clear hierarchy. Prioritize scannability.


Escalation Pathways

Human-AI Collaboration

Design an AI agent escalation card that appears when the agent needs human input. Include: (1) Context summary showing what the agent was doing, (2) The specific question or decision needed from the user, (3) The agent's recommended action with confidence level, (4) 2-3 action buttons (Approve recommendation, Choose alternative, Provide instruction), (5) A 'Don't ask again for this type' toggle, (6) A visual indicator showing how this pause fits in the overall task progress. Style: Non-alarming, conversational. The card should feel like a helpful colleague asking a question, not an error state.


Trust Calibration

Trustworthy & Reliable AI

Design a trust calibration dashboard for an AI agent. Include: (1) Per-domain accuracy bars showing the agent's track record in different areas (e.g., Email 96%, Calendar 89%, Finance 72%), (2) A milestone achievement card celebrating a reliability milestone (e.g., '100 tasks completed, 97% accuracy'), (3) An autonomy upgrade prompt suggesting increased autonomy for well-performing domains, (4) A recent errors section showing what went wrong and how the agent adjusted, (5) An overall trust score with trend indicator. Style: Data-rich but approachable, dashboard aesthetic with progress bars, percentages, and subtle celebration elements.


Mixed-Initiative Control

Human-AI Collaboration

Design a collaborative document interface where a human and AI agent work simultaneously. Include: (1) Two distinct cursors or indicators showing human (blue) and agent (purple) active zones, (2) Parallel editing zones with subtle background tinting to show who is working where, (3) A handoff button that lets the user pass control of a section to the agent, (4) An inline conflict resolution panel when human and agent edits overlap, (5) A non-intrusive agent activity indicator showing what the agent is currently doing. Style: Minimal, focused on the content with subtle collaboration indicators. The editing experience should feel natural, not cluttered with collaboration UI.


Agent Status & Monitoring

Performance & Efficiency

Design a multi-task agent status monitoring widget with escalating attention levels. Include: (1) An ambient status badge showing the number of active agent tasks (small, corner-positioned), (2) An expandable panel showing 3-4 tasks at different states (running, paused, completed), (3) Per-task progress bars with estimated completion times, (4) An attention-level notification card for when the agent needs user input, (5) A completion summary view showing results after task finishes. Style: Layered from minimal (badge) to detailed (panel). The ambient state should be barely noticeable; the attention state should be prominent but not alarming.

