Error Handling & Fallback Design
AI will fail. Responses come back wrong, APIs time out, and users ask for things the model can't do. This lesson walks through the four failure modes (misunderstanding, capability limits, system errors, and hallucination) and how to design for each one so the conversation stays productive.
The Four Failure Modes
"I Don't Understand"
- The AI can't interpret the user's request.
- Ask a specific clarifying question: "I'm not sure what you mean. Are you asking about X or Y?"
- Never show a generic "I didn't understand that" without a next step.
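A minimal sketch of the clarifying-question fallback, assuming a hypothetical `Interpretation` shape with a confidence score and candidate intents (none of this comes from a specific SDK):

```typescript
interface Interpretation {
  confidence: number;          // 0..1, how sure the model is about the intent
  candidateIntents: string[];  // plausible readings of the request
}

interface ChatMessage {
  role: "assistant";
  text: string;
}

function clarifyOrProceed(interp: Interpretation): ChatMessage | null {
  // Confident enough: let the normal response pipeline handle it.
  if (interp.confidence >= 0.7) return null;

  // Ambiguous between a few readings: ask a specific question.
  if (interp.candidateIntents.length >= 2) {
    const [x, y] = interp.candidateIntents;
    return {
      role: "assistant",
      text: `I'm not sure what you mean. Are you asking about ${x} or ${y}?`,
    };
  }

  // No usable reading at all: still give the user a concrete next step.
  return {
    role: "assistant",
    text: "I didn't quite follow that. Could you rephrase it, or give me an example of what you're looking for?",
  };
}
```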
"I Can't Do That"
- The request is outside the AI's capabilities.
- Be honest and specific: "I can't access your calendar, but I can help you draft the meeting invite."
- Always suggest an alternative that IS possible.
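One way to wire the "honest limit plus alternative" response might look like the following; the capability registry and its entries are placeholder assumptions for illustration:

```typescript
type Capability = "read_calendar" | "draft_text" | "web_search";

// Which capabilities this product actually has (hypothetical values).
const SUPPORTED: Record<Capability, boolean> = {
  read_calendar: false,
  draft_text: true,
  web_search: false,
};

// For each unsupported capability, a concrete alternative that IS possible.
const ALTERNATIVES: Partial<Record<Capability, string>> = {
  read_calendar:
    "I can't access your calendar, but I can help you draft the meeting invite.",
  web_search:
    "I can't browse the web, but if you paste the page content I can work from that.",
};

function refuseWithAlternative(needed: Capability): string | null {
  if (SUPPORTED[needed]) return null; // no refusal needed
  // Pair the honest limit with a suggestion the user can act on right away.
  return (
    ALTERNATIVES[needed] ??
    "I can't do that directly. Share the details here and I'll work from them."
  );
}
```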
"Something Went Wrong"
- API error, timeout, or rate limit.
- Show a retry button with the original message pre-filled.
- If the error persists, suggest "Try again in a moment" with a countdown.
- Never lose the user's message - save it in the input field.
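A sketch of a send pipeline that never loses the user's text: the draft is restored on failure and the error notice carries retry information. `callModel`, the error shape, and the 429 handling are illustrative assumptions:

```typescript
interface SendResult {
  ok: boolean;
  assistantText?: string;
  errorNotice?: string;        // rendered as a message with a retry button
  restoredDraft?: string;      // put back into the input field on failure
  retryAfterSeconds?: number;  // drives the "try again in a moment" countdown
}

async function sendMessage(
  draft: string,
  callModel: (text: string) => Promise<string>,
): Promise<SendResult> {
  try {
    const assistantText = await callModel(draft);
    return { ok: true, assistantText };
  } catch (err: unknown) {
    const status =
      typeof err === "object" && err !== null
        ? (err as { status?: number }).status
        : undefined;
    const rateLimited = status === 429;
    return {
      ok: false,
      // The original message is preserved so the retry button can resend it.
      restoredDraft: draft,
      retryAfterSeconds: rateLimited ? 30 : undefined,
      errorNotice: rateLimited
        ? "We're getting a lot of requests. Try again in a moment."
        : "Something went wrong - click retry to resend your message.",
    };
  }
}
```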
"The Response Is Wrong"
- AI hallucination or incorrect answer.
- Make it easy to regenerate: a "try again" button on every AI message.
- Add feedback buttons (thumbs up/down) so users can flag bad responses.
- If your product supports it, offer "edit and resend" on user messages.
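A rough sketch of per-message recovery actions; the message shape and handler names are assumptions for illustration, not any framework's API:

```typescript
interface AssistantMessage {
  id: string;
  promptId: string; // the user message this was generated from
  text: string;
  feedback?: "up" | "down";
}

type Regenerate = (promptId: string) => Promise<string>;

// "Try again" keeps the original prompt and swaps in a fresh completion.
async function regenerateMessage(
  msg: AssistantMessage,
  regenerate: Regenerate,
): Promise<AssistantMessage> {
  const text = await regenerate(msg.promptId);
  return { ...msg, text, feedback: undefined };
}

// Thumbs up/down just records the flag; what you do with it (logging,
// quality review, support escalation) is a product decision.
function recordFeedback(
  msg: AssistantMessage,
  feedback: "up" | "down",
): AssistantMessage {
  return { ...msg, feedback };
}
```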
Designing the Retry Pattern
The key UX principle: errors should feel like a natural part of the conversation, not a system crash. Style them as messages, not modal dialogs.
Never silently swallow errors. A message that disappears into a void with no response is the worst user experience. Even "Something went wrong - click to retry" is infinitely better than silence.
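One way to guarantee this is to make errors ordinary transcript entries rather than modal dialogs; the type names below are illustrative:

```typescript
type TranscriptEntry =
  | { kind: "user"; text: string }
  | { kind: "assistant"; text: string }
  // Errors are styled like messages and carry their own retry action.
  | { kind: "error"; text: string; retryText: string };

function renderEntry(entry: TranscriptEntry): string {
  switch (entry.kind) {
    case "user":
      return `You: ${entry.text}`;
    case "assistant":
      return `AI: ${entry.text}`;
    case "error":
      // The error lives inside the conversation flow with a visible retry,
      // so the thread never ends in silence.
      return `⚠ ${entry.text} [Retry "${entry.retryText}"]`;
  }
}
```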
When the AI doesn't know what it doesn't know
The four failure modes above assume the AI knows it failed. A harder class of error is when the AI gives a confident-sounding answer that happens to be wrong. These are the worst errors for conversational UI because users have no visible signal to distrust them — and the same interface used for correct answers is used for the confident-wrong ones.
Three specific sub-patterns are worth naming:
- Competitor blending: when the AI can't read your product's content, it fills the gap with a rival's methodology, presented as if it were yours, with no disclosure that content was substituted.
- Narrative recycling: the AI surfaces outdated content (old blog posts, stale LinkedIn posts, prior-year pricing) as current. The timestamp is usually available but doesn't make it into the response.
- Confident falsification: the AI generates plausible-looking but incorrect details (fake API endpoints, fabricated citations, invented features) with the same tone and confidence as accurate ones.
Claude's approach to these failures is the design target: state access limits explicitly ("I can't read that page — paste the content and I'll help"), cite sources when available, and prefer a shorter honest answer over a longer fabricated one. In conversational UX: surface uncertainty as a first-class message element, not a disclaimer. A visible "I'm inferring this from..." line is worth more than 400 confidently wrong words.
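A sketch of what "uncertainty as a first-class message element" could look like in the message model; the field names are assumptions:

```typescript
interface GroundedMessage {
  text: string;
  sources: string[];      // citations, when the answer is actually grounded
  inferredFrom?: string;  // what the answer was inferred from when it is not
}

function renderWithProvenance(msg: GroundedMessage): string {
  const lines = [msg.text];
  if (msg.sources.length > 0) {
    lines.push(`Sources: ${msg.sources.join(", ")}`);
  } else if (msg.inferredFrom) {
    // A visible inference note beats 400 confidently wrong words.
    lines.push(
      `I'm inferring this from ${msg.inferredFrom} - please verify before relying on it.`,
    );
  }
  return lines.join("\n");
}
```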
Related AIUX patterns: The [Error Recovery](/patterns/error-recovery) pattern covers graceful failure strategies in depth - including undo, retry, and fallback mechanisms with examples from ChatGPT, GitHub Copilot, and more. For situations where the AI should hand off to a human, see [Graceful Handoff](/patterns/graceful-handoff) and [Escalation Pathways](/patterns/escalation-pathways).