Trust Calibration
What is Trust Calibration?
Users either over-trust or under-trust AI agents. Over-trust leads to passive reliance on inaccurate outputs: users stop checking, and mistakes compound. Under-trust means users micromanage every action, defeating the purpose of delegation. Trust calibration is the design challenge of aligning a user's perception of the agent's reliability with its actual performance over time. Unlike a one-time confidence score, this is a relationship that evolves - the agent earns more or less trust based on its track record with that specific user. The pattern starts agents in a supervised mode with high visibility, shows per-domain track records, proactively repairs trust after mistakes, and offers autonomy upgrades only when they are earned. Trust builds slowly and breaks quickly, and the design must account for this asymmetry.
Example: Tesla Autopilot - Progressive Trust Building
Tesla Autopilot calibrates trust through hands-on-wheel monitoring requirements. As the system demonstrates competence in specific conditions, such as highway driving, its operating envelope expands to more scenarios. A disengagement resets the trust indicators.