Putting It All Together - Architecture Checklist
A production-ready chat UI is seven components, three runtime concerns, and a short list of decisions you make before writing any code. This lesson lays out the component tree, the architecture trade-offs, and the checklist that separates shipped prototypes from polished products.
Component Architecture
Organize your chat UI into these components:
```tsx
<ChatContainer>
  <ChatHeader />          // Title, avatar, actions
  <MessageList>           // Scrollable message area
    <ChatMessage />       // Individual message bubble
    <TypingIndicator />   // Three dots or streaming cursor
  </MessageList>
  <SuggestedPrompts />    // Contextual prompt chips
  <ChatInput />           // Text field + send button
</ChatContainer>
```
Keep state in ChatContainer. Messages, loading state, and suggestions flow down as props. User actions (send, retry, select prompt) flow up as callbacks.
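The state-down, callbacks-up flow can be sketched as a plain reducer that ChatContainer might own. The types and action names below are illustrative assumptions, not a specific library's API:

```typescript
type Role = "user" | "assistant";

interface Message {
  id: string;
  role: Role;
  content: string;
  status: "sending" | "sent" | "error";
}

interface ChatState {
  messages: Message[];
  isStreaming: boolean;
}

type ChatAction =
  | { type: "send"; message: Message }
  | { type: "streamStart" }
  | { type: "streamEnd" }
  | { type: "fail"; id: string };

// Pure reducer: ChatContainer owns this state; children
// receive slices of it as props and dispatch actions back up.
function chatReducer(state: ChatState, action: ChatAction): ChatState {
  switch (action.type) {
    case "send":
      return { ...state, messages: [...state.messages, action.message] };
    case "streamStart":
      return { ...state, isStreaming: true };
    case "streamEnd":
      return { ...state, isStreaming: false };
    case "fail":
      return {
        ...state,
        messages: state.messages.map((m) =>
          m.id === action.id ? { ...m, status: "error" } : m
        ),
      };
  }
}
```

Because the reducer is pure, retry and regenerate become new actions rather than ad-hoc setState calls scattered across children.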
Production Checklist
Core Functionality
- Message sending and receiving
- Streaming/typing indicator
- Auto-scroll (respecting user scroll position)
- Suggested prompts (empty state + contextual)
- Error handling with retry
- Conversation history (if multi-session)
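The trickiest item above is auto-scroll: the list should stick to the bottom only when the user is already there, so scrolling up to reread is never hijacked by a new token. A minimal sketch of that rule, assuming standard DOM scroll metrics (the 80px threshold is an arbitrary choice, tune it to your layout):

```typescript
// Returns true when the viewport is close enough to the bottom
// that appending a message should trigger scrollIntoView.
function shouldAutoScroll(
  scrollTop: number,     // element.scrollTop
  clientHeight: number,  // element.clientHeight
  scrollHeight: number,  // element.scrollHeight
  thresholdPx = 80
): boolean {
  const distanceFromBottom = scrollHeight - (scrollTop + clientHeight);
  return distanceFromBottom <= thresholdPx;
}
```

Call this before appending a message, then scroll only if it returned true; checking after the append would always report "not at bottom".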
Polish
- Keyboard shortcuts (Enter to send, Shift+Enter for newline)
- Message copy button
- Regenerate response button
- Mobile responsive layout
- Dark mode support
- Loading skeleton on initial load
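The Enter/Shift+Enter shortcut is easy to get subtly wrong: IME users (e.g. Japanese or Chinese input) press Enter to confirm a composition, not to send. A small sketch of the decision, assuming a KeyboardEvent-like shape:

```typescript
// True when the keypress should submit the message.
// Shift+Enter falls through to the textarea's default newline.
function shouldSend(e: {
  key: string;
  shiftKey: boolean;
  isComposing?: boolean; // KeyboardEvent.isComposing
}): boolean {
  if (e.isComposing) return false; // mid-IME composition: never send
  return e.key === "Enter" && !e.shiftKey;
}
```

In the handler, call `event.preventDefault()` only when this returns true, so Shift+Enter keeps inserting newlines.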
Accessibility
- ARIA live regions for new messages
- Screen reader labels for message roles
- Keyboard navigation for all interactive elements
- WCAG AA contrast on message bubbles
- Reduced motion support
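For the first two items, a sketch of the attributes involved. Per WAI-ARIA, `role="log"` carries implicit polite live-region semantics, which suits a chat transcript; the label helper is a hypothetical convenience, not a required API:

```typescript
// Props to spread onto the MessageList wrapper element.
// "polite" queues announcements so streaming tokens don't
// interrupt whatever the screen reader is currently saying.
const liveRegionProps = {
  role: "log",
  "aria-live": "polite",
  "aria-relevant": "additions",
  "aria-label": "Chat messages",
} as const;

// Label each bubble with its speaker so role is audible,
// not just conveyed by bubble color or alignment.
function messageAriaLabel(role: "user" | "assistant", content: string): string {
  const speaker = role === "user" ? "You said" : "Assistant said";
  return `${speaker}: ${content}`;
}
```

When streaming token-by-token, announce the completed message once rather than every chunk, or the reader will stutter through partial words.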
Performance
- Virtualized message list for long conversations (react-window)
- Debounced input for "user is typing" indicators
- Lazy load conversation history
- Optimistic UI for sent messages
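Optimistic UI for sent messages means appending the bubble immediately and reconciling once the (assumed) API call settles. A minimal sketch with illustrative names:

```typescript
interface SentMessage {
  id: string;
  content: string;
  status: "sending" | "sent" | "error";
}

// Append immediately so the UI never waits on the network.
function applyOptimistic(list: SentMessage[], msg: Omit<SentMessage, "status">): SentMessage[] {
  return [...list, { ...msg, status: "sending" }];
}

// Flip the status when the request resolves; "error" states
// pair with the retry affordance from the core checklist.
function reconcile(list: SentMessage[], id: string, ok: boolean): SentMessage[] {
  return list.map((m) =>
    m.id === id ? { ...m, status: ok ? "sent" : "error" } : m
  );
}
```

Keeping the pending message in the list (rather than rolling it back on failure) preserves the user's text for one-tap retry.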
What to Build Next
Once your core chat UI is solid, these AIUX design patterns will take it to the next level. Each one is a documented pattern with real-world examples, code demos, and implementation guidelines:
[Multimodal Interaction](/patterns/multimodal-interaction) - Let users drag images, PDFs, or code files into the chat. Design for voice + text + visual input simultaneously.
[Graceful Handoff](/patterns/graceful-handoff) - When the AI can't help, transfer to a human agent with full conversation context preserved. Critical for customer support.
[Progressive Disclosure](/patterns/progressive-disclosure) - Start with a simple chat, then reveal advanced features (system prompts, temperature controls, model selection) as users become power users.
[Context Switching](/patterns/context-switching) - Help users manage multiple conversations and switch between them without losing context.
[Confidence Visualization](/patterns/confidence-visualization) - Show users how confident the AI is in its response. Especially important for high-stakes domains like healthcare or finance.
[Feedback Loops](/patterns/feedback-loops) - Add thumbs up/down, regenerate, and edit mechanisms that help users correct the AI and improve responses over time.
Want to see how your conversational UI stacks up? Use the free [AI UX Audit Tool](/audit) to score your interface against all 36 patterns.
Pre-Launch Questions
Before you ship, walk through these two checklists. The first covers boundary design (what the AI can and can't do). The second covers disclosure design (what the user knows about the AI). Most AI products that fail post-launch fail one of these, not the core capability.
Boundary design
- Where on the permission spectrum does this sit — human decides, shared decision, or AI decides? Map it in the actual interface, not the spec doc.
- Is the boundary designed or patched? If the "Cancel" button was added after a review raised concerns, it's patched.
- Who else does this affect? If the AI acts on behalf of one user but touches another user's data, consent design extends beyond the invoking user.
- What happens at 15x? If the AI gets faster or runs more often, does the human review step still work, or does it become rubber-stamping?
Disclosure design
- Does the user know the AI is present? Not in the terms of service — at the moment of use, every time.
- Does the user know what context the AI accesses? What data is being read, why, and for how long?
- Does the user know what the AI is authorized to do? What actions can it take, on whose behalf, within what boundaries?
Ship the simplest version first: message bubbles, input, send, streaming response, and four suggested prompts. Validate every feature after that by watching real users interact with your chat UI.
- Who is designing the boundary for AI? (Medium). The permission spectrum in detail, the three conditions for safe autonomy (scope / stakes / reversibility), and why designed boundaries outperform patched ones.
- AI learned to shut up. It forgot to say what it was doing. (Medium). The Disclosure Layer Framework (Before / During / Controls / After), with the Notion Meeting Notes worked example that the disclosure checklist above is derived from.