aiux
Building · Lesson 4 of 11

Typing Indicators & Streaming Responses

4 min read · Conversational UI for Designers · Updated Apr 2, 2026

Typing indicators and streaming responses solve the same problem: never make a user stare at a blank screen while the AI works. Indicators are right for waits under three seconds; streaming is the modern default everywhere else. This lesson covers both, with code.

The Three Response Patterns

1. Typing Indicator (Three Bouncing Dots)

Use for short waits (under 3 seconds). It shows "the AI is thinking" without committing to a response format. This is what Slack and iMessage use. Build it with three small circles animated by Tailwind's animate-bounce utility, each with a staggered animation delay.
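A minimal sketch of that markup, assuming Tailwind for the animate-bounce utility. It's rendered here as a plain HTML string so the class/delay structure is easy to see; in a React component you would return the equivalent JSX:

```typescript
// Sketch: three dots sharing one animation, offset by staggered delays.
// Class names assume Tailwind; the delays are illustrative values.
function typingIndicatorHTML(): string {
  const delays = ['0ms', '150ms', '300ms']; // stagger so the dots bounce in sequence
  const dots = delays
    .map(
      (d) =>
        `<span class="inline-block h-2 w-2 rounded-full bg-gray-400 animate-bounce" style="animation-delay:${d}"></span>`,
    )
    .join('');
  return `<div class="flex items-center gap-1 p-2" aria-label="Assistant is typing">${dots}</div>`;
}
```

The aria-label matters: the animation itself is invisible to screen readers, so the container should say what it means.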

2. Streaming Text

The modern standard for AI chat. Tokens appear as they're generated, giving users something to read immediately. ChatGPT and Claude both use this. You consume a Server-Sent Events (SSE) stream and append tokens to the message.

3. Status Phases

For complex tasks, show distinct phases: "Searching..." then "Reading 3 documents..." then "Writing response..." This is what Perplexity and research-focused AIs use. Each phase reassures the user that progress is happening.
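One way to model those phases is a small tracker the UI reads the latest label from. This is an illustrative sketch, not a prescribed API; the class and method names are made up for the example:

```typescript
// Hypothetical phase tracker: each advance() replaces the visible status line
// ("Searching..." -> "Reading 3 documents..." -> "Writing response...").
type Phase = { label: string; startedAt: number };

class PhaseTracker {
  private phases: Phase[] = [];

  advance(label: string): void {
    this.phases.push({ label, startedAt: Date.now() });
  }

  // The status line the UI renders right now.
  get current(): string | null {
    return this.phases.length ? this.phases[this.phases.length - 1].label : null;
  }

  // Completed phases can be shown as a collapsed "steps taken" list.
  get history(): string[] {
    return this.phases.map((p) => p.label);
  }
}
```

Keeping the history around lets you render the finished phases as a collapsed step list once the answer arrives, the way research-oriented UIs do.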


Implementing Streaming

Most AI APIs (OpenAI, Anthropic, Google) support streaming via Server-Sent Events. Here's the pattern:

Stream response handler
async function streamResponse(
  userMessage: string,
  onToken: (token: string) => void,
) {
  const response = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: userMessage }),
  });

  if (!response.ok || !response.body) {
    throw new Error(`Chat request failed: ${response.status}`);
  }

  const reader = response.body.getReader();
  const decoder = new TextDecoder();

  // Read chunks until the stream closes, handing each decoded piece to the UI.
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // { stream: true } keeps multi-byte characters split across chunks intact.
    onToken(decoder.decode(value, { stream: true }));
  }
}
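The handler above treats each chunk as plain text. If your endpoint emits actual SSE frames (`data: ...` lines), you need a small parser on top of the reader loop. This sketch assumes an OpenAI-style `[DONE]` sentinel, which your provider may or may not use; adjust to your API's real framing:

```typescript
// Hedged sketch: pull token payloads out of a chunk of SSE-formatted text.
// Real SSE events can also span chunks; a production parser buffers partial lines.
function parseSSEChunk(chunk: string): { tokens: string[]; done: boolean } {
  const tokens: string[] = [];
  let done = false;
  for (const line of chunk.split('\n')) {
    const trimmed = line.trim();
    // Ignore blank separators and ": comment" keep-alive lines.
    if (!trimmed.startsWith('data:')) continue;
    const payload = trimmed.slice('data:'.length).trim();
    if (payload === '[DONE]') {
      done = true;
      break;
    }
    tokens.push(payload);
  }
  return { tokens, done };
}
```

You would call this inside the reader loop on each decoded chunk and feed the extracted tokens to onToken.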

In your component, create a "streaming" message and append tokens to it as they arrive. The message bubble renders streamingContent with a blinking cursor at the end.
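One way to model that state, sketched framework-agnostically so the same logic works under React's useState/useReducer or anything else. appendToken and finishStreaming are illustrative names, not a real API:

```typescript
// Hypothetical sketch of the streaming-message state.
type Message = { role: 'user' | 'assistant'; content: string; streaming: boolean };

function appendToken(messages: Message[], token: string): Message[] {
  const last = messages[messages.length - 1];
  if (last && last.role === 'assistant' && last.streaming) {
    // Append to the in-flight assistant message (immutable update, React-friendly).
    return [...messages.slice(0, -1), { ...last, content: last.content + token }];
  }
  // First token: create the streaming assistant message.
  return [...messages, { role: 'assistant', content: token, streaming: true }];
}

function finishStreaming(messages: Message[]): Message[] {
  // Clear the flag when the stream ends so the cursor stops rendering.
  return messages.map((m) => (m.streaming ? { ...m, streaming: false } : m));
}
```

The streaming flag is also what the bubble checks to decide whether to draw the cursor.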

Add a subtle blinking cursor (▊) at the end of streaming text. It signals "more is coming" and prevents users from thinking the response is done mid-sentence.

