Escalation Pathways
What is Escalation Pathways?
Agents will encounter situations they can't handle: ambiguous instructions, conflicting information, high-stakes decisions they aren't authorized to make, or tasks that exceed their capabilities. The agent needs a structured way to escalate to the human without breaking the workflow, losing context, or creating anxiety. This differs from simple error recovery: the agent hasn't failed, it has recognized its own limitations. The pattern defines four escalation types: confidence-based (uncertainty crosses a threshold), permission-based (the action exceeds authorization limits), conflict-based (sources contradict each other), and capability-based (the task exceeds the agent's abilities). Each escalation preserves full context, includes a recommended action with a confidence level, and lets the agent resume from where it paused once the user responds.
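The four escalation types and the fields each escalation carries can be sketched as a small data structure. This is a minimal illustration, not an implementation from any specific framework; all names (`EscalationType`, `Escalation`, the field names) are hypothetical.

```python
# Hypothetical sketch of the escalation record described above;
# all names are illustrative, not taken from a real library.
from dataclasses import dataclass, field
from enum import Enum


class EscalationType(Enum):
    CONFIDENCE = "confidence"    # uncertainty crossed a threshold
    PERMISSION = "permission"    # action exceeds authorization limits
    CONFLICT = "conflict"        # contradictory information found
    CAPABILITY = "capability"    # task exceeds the agent's abilities


@dataclass
class Escalation:
    type: EscalationType
    question: str                # what the agent needs from the human
    recommended_action: str      # the agent's best suggestion, not just "what should I do?"
    confidence: float            # 0.0-1.0: why the agent paused
    context: dict = field(default_factory=dict)  # full workflow state, so the
                                                 # agent can resume where it paused
```

Keeping `context` and `recommended_action` on the record itself is what lets the agent continue from the pause point instead of restarting the workflow.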
Problem
Agents encounter situations they can't handle - ambiguity, conflicts, authorization limits, or capability gaps. Poor escalation design either interrupts users too frequently (escalation fatigue) or too rarely (the agent guesses wrong on high-stakes decisions).
Solution
Design structured escalation triggers with context preservation, recommended actions with confidence levels, and multiple response options. Batch non-urgent escalations, learn from repeated answers, and let users set escalation sensitivity.
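A structured trigger check, with user-adjustable sensitivity, might look like the sketch below. This is an assumption-laden illustration: the function name, parameters, and the idea of folding sensitivity into the confidence threshold are all hypothetical.

```python
# Illustrative trigger check; names and signature are hypothetical.
def should_escalate(confidence: float, sensitivity: float,
                    needs_permission: bool, has_conflict: bool,
                    within_capability: bool) -> bool:
    """Return True if any of the four trigger types fires.

    `sensitivity` is the user-set confidence threshold: a higher value
    means more interruptions (safer), a lower value means fewer
    (more autonomous).
    """
    if needs_permission or has_conflict or not within_capability:
        return True                   # hard triggers always escalate
    return confidence < sensitivity   # soft, confidence-based trigger
```

Permission, conflict, and capability triggers are treated as non-negotiable here; only the confidence trigger moves with the user's sensitivity setting.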
Guidelines & Considerations
Implementation Guidelines
Batch non-urgent escalations. Don't interrupt for every minor question - collect 3-4 low-priority escalations and present them as a group.
Preserve context completely. When escalating, show the user exactly where in the workflow the agent paused, what it was trying to do, and what it's already completed.
Provide a recommended action with the escalation. Don't just ask 'what should I do?' - present 'I'd suggest X. Approve, or tell me otherwise.'
Learn from escalations. If a user answers the same escalation the same way 3 times, offer to automate that decision.
Allow users to set escalation sensitivity: more interruptions (safer) or fewer interruptions (more autonomous).
Include confidence levels with escalations so users understand why the agent paused.
Show how the escalation fits within the overall task progress so users maintain context.
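Two of the guidelines above, batching non-urgent escalations and automating a decision after the same answer repeats three times, can be sketched together. Everything here is illustrative: the class name, thresholds, and method names are assumptions, not a prescribed API.

```python
# Hypothetical manager combining batching and learn-from-repeats;
# all names and defaults are illustrative.
from collections import Counter, deque


class EscalationManager:
    def __init__(self, batch_size: int = 3, automate_after: int = 3):
        self.pending = deque()        # non-urgent escalations waiting to batch
        self.batch_size = batch_size
        self.answers = Counter()      # (question, answer) -> times seen
        self.automate_after = automate_after
        self.auto_rules = {}          # question -> learned answer

    def submit(self, question: str, urgent: bool = False):
        """Return a learned answer if automated, else queue or surface it."""
        if question in self.auto_rules:
            return self.auto_rules[question]   # decision was automated
        if not urgent:
            self.pending.append(question)      # hold for the next batch
        return None                            # None = still needs the human

    def ready_batch(self):
        """Return a group of low-priority escalations once enough collect."""
        if len(self.pending) >= self.batch_size:
            batch = list(self.pending)
            self.pending.clear()
            return batch
        return []

    def record_answer(self, question: str, answer: str):
        self.answers[(question, answer)] += 1
        if self.answers[(question, answer)] >= self.automate_after:
            self.auto_rules[question] = answer  # candidate for automation
```

In a real product the `auto_rules` step would be an offer to the user ("automate this decision?") rather than silent automation.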
Design Considerations
Escalation resolution time: how quickly users respond to escalations indicates urgency calibration
Escalation reduction rate: do escalations decrease over time as the agent learns preferences
False escalation rate: how often the agent escalates unnecessarily, creating user irritation
Missed escalation rate: how often the agent should have escalated but didn't, causing trust damage
Balancing escalation frequency with user fatigue - too many interruptions defeat the purpose of delegation
Escalation context must be preserved without requiring the user to re-read the entire workflow history
Different escalation types (confidence, permission, conflict, capability) require different UI treatments
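The rates listed above could be computed from an escalation log along these lines. The event fields (`necessary`, `resolution_seconds`) and the separate `missed_count` input are assumptions for illustration.

```python
# Illustrative metric computation; field names are hypothetical.
def escalation_metrics(escalations: list, missed_count: int) -> dict:
    """Summarize escalation quality from a log of past escalations.

    Each event is a dict with 'necessary' (bool, judged after the fact)
    and 'resolution_seconds'. `missed_count` is the number of cases where
    the agent should have escalated but did not.
    """
    total = len(escalations)
    unnecessary = sum(1 for e in escalations if not e["necessary"])
    attempts = total + missed_count
    return {
        "false_escalation_rate": unnecessary / total if total else 0.0,
        "missed_escalation_rate": missed_count / attempts if attempts else 0.0,
        "avg_resolution_seconds": (
            sum(e["resolution_seconds"] for e in escalations) / total
            if total else 0.0
        ),
    }
```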
Related Patterns
Graceful Handoff (Human-AI Collaboration)
Seamless transitions between AI automation and human control.
Confidence Visualization (Trustworthy & Reliable AI)
Display AI certainty levels through visual indicators, helping users understand prediction reliability and decide when to trust or verify outputs.
Autonomy Spectrum (Human-AI Collaboration)
Provide a spectrum of autonomy levels - from passive suggestions to full autonomy - that users can adjust per task type, enabling granular control over how independently an AI agent operates.
Trust Calibration (Trustworthy & Reliable AI)
Design a system that progressively builds appropriate trust through demonstrated competence - showing track records per domain, celebrating milestones, and adjusting oversight based on actual agent performance.
More in Human-AI Collaboration
Contextual Assistance
Offer timely, proactive help and suggestions based on user context, history, and needs.
Human-in-the-Loop
Balance automation with human oversight for critical decisions, ensuring AI augments human judgment.
Augmented Creation
Empower users to create content with AI as a collaborative partner.