This week brings tools for comparing AI responses, enhanced agent workflows for developers, and new research showing increased design hiring in the AI era.
Today in AI Products
| Feb 10 |
Model Council feature compares responses across multiple AI models
Perplexity launched Model Council, a new feature that lets users compare answers from three different AI models side by side. This addresses the growing challenge of knowing which model gives better responses for different types of queries, and it aims to improve trust and transparency by showing users multiple perspectives on the same question. Source →
Designer's Takeaway: Consider how comparison interfaces can reduce AI uncertainty for users. Design side-by-side layouts that make differences clear without overwhelming users with too many choices.
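To make that concrete, here is a minimal React/TypeScript sketch of a side-by-side comparison layout. The component, its props, and the column cap are hypothetical illustrations of the pattern, not Perplexity's implementation.

```tsx
import React from "react";

// Hypothetical shape for one model's answer (not Perplexity's API).
interface ModelResponse {
  model: string; // e.g. "Model A"
  answer: string;
}

// Cap the columns so the comparison stays scannable instead of overwhelming.
const MAX_COLUMNS = 3;

export function ModelComparison({ responses }: { responses: ModelResponse[] }) {
  const visible = responses.slice(0, MAX_COLUMNS);
  return (
    <div
      style={{
        display: "grid",
        gridTemplateColumns: `repeat(${visible.length}, 1fr)`,
        gap: "1rem",
      }}
    >
      {visible.map((r) => (
        // One labeled region per model keeps the sources distinguishable
        // for both sighted users and screen readers.
        <section key={r.model} aria-label={`Answer from ${r.model}`}>
          <h3>{r.model}</h3>
          <p>{r.answer}</p>
        </section>
      ))}
    </div>
  );
}
```

The hard cap is the design decision that matters here: a few parallel answers invite comparison, while an unbounded list turns transparency back into noise.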
Pattern: Confidence Visualization
| Feb 10 |
New research shows increased demand for designers in the AI era
Figma released a study revealing that AI is actually driving increased design hiring rather than replacing designers. The research indicates that companies need more designers to navigate AI integration and create human-centered experiences. Hiring managers are prioritizing skills that complement AI capabilities rather than compete with them. Source →
Designer's Takeaway: Focus on developing AI-augmented design skills rather than fearing replacement. Position yourself as the bridge between AI capabilities and human-centered design outcomes.
Pattern: Augmented Creation
| Feb 10 |
New CLI logging tools optimize workflows for AI agents
Vercel rebuilt its logs CLI command with enhanced querying capabilities designed specifically for agent workflows. The tool now supports querying historical logs and filtering by criteria such as project, deployment ID, and request ID, and it uses git context by default to automatically scope logs to the current repository when debugging. Source →
Designer's Takeaway: Notice how developer tools are evolving to support AI agent workflows. Consider how your design systems and documentation might need similar contextual intelligence to help AI tools work effectively.
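Here is a small Node/TypeScript sketch of that kind of contextual intelligence: derive the current repository from git metadata and use it as the default scope for a query. The LogQuery shape and scopedQuery helper are invented for illustration; this is not Vercel's actual CLI or API.

```ts
import { execSync } from "node:child_process";

// Derive "owner/repo" from the git remote of the current working directory.
function currentRepo(): string | null {
  try {
    // e.g. "git@github.com:acme/storefront.git" -> "acme/storefront"
    const remote = execSync("git remote get-url origin", { encoding: "utf8" }).trim();
    const match = remote.match(/[:/]([^/:]+\/[^/]+?)(?:\.git)?$/);
    return match ? match[1] : null;
  } catch {
    return null; // not inside a git repository
  }
}

// Hypothetical filter shape mirroring the criteria mentioned above.
interface LogQuery {
  project?: string;
  deploymentId?: string;
  requestId?: string;
}

// Smart default: scope to the repo the user is standing in,
// unless an explicit project was supplied.
function scopedQuery(overrides: LogQuery = {}): LogQuery {
  return { ...overrides, project: overrides.project ?? currentRepo() ?? undefined };
}

console.log(scopedQuery({ requestId: "req_123" }));
```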
Pattern: Contextual Assistance
| Feb 9 |
Self-driving leaderboard handles 45,000+ AI coding skills
Skills.sh built a fully automated leaderboard system for AI coding agents across Claude Code, Cursor, Codex, and 35+ other tools. The system has tracked over 45,000 unique skills since launch without manual review, using automation to handle scale, messy data, and bad actors. Anyone can publish skills to their GitHub repo and see adoption metrics. Source →
Designer's Takeaway: Apply this automation-first approach to your own design systems. Consider how user-generated content and community contributions can scale through smart defaults and automated quality controls.
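As a compact TypeScript sketch of that automation-first approach, the example below gates community submissions with automated checks and ranks them by an adoption signal, with no manual curation step. The field names and thresholds are invented for illustration and are not Skills.sh internals.

```ts
// Hypothetical submission record (not Skills.sh's actual schema).
interface Submission {
  repoUrl: string;
  title: string;
  installs: number;    // adoption signal
  reportCount: number; // abuse signal
}

// Automated quality gate: every check must pass before a skill is listed.
function isPublishable(s: Submission): boolean {
  return (
    s.repoUrl.startsWith("https://github.com/") && // verifiable source
    s.title.trim().length >= 3 &&                  // minimal metadata quality
    s.reportCount < 5                              // automated bad-actor gate
  );
}

// Rank by adoption so the leaderboard needs no human curation.
function leaderboard(subs: Submission[]): Submission[] {
  return subs.filter(isPublishable).sort((a, b) => b.installs - a.installs);
}
```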
Pattern: Collaborative AI
| Feb 10 |
Agents can now access runtime logs through MCP server integration
Vercel's new MCP (Model Context Protocol) server gives AI agents direct access to runtime logs for debugging and monitoring. Agents can retrieve logs for specific projects or deployments, inspect function output, search for errors, and investigate runtime issues, enabling more autonomous debugging workflows. Source →
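For a feel of the mechanism, here is a minimal MCP server in TypeScript built on the @modelcontextprotocol/sdk package, exposing a single log-retrieval tool to agents. The get_logs tool, its parameters, and the fetchLogs stub are assumptions for illustration; this is not Vercel's actual server.

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Stand-in for a call to a real logging backend.
async function fetchLogs(deploymentId: string, query?: string): Promise<string> {
  return `logs for ${deploymentId}${query ? ` matching "${query}"` : ""}`;
}

const server = new McpServer({ name: "runtime-logs", version: "0.1.0" });

// A tool the agent can call to pull logs for a deployment, optionally filtered.
server.tool(
  "get_logs",
  { deploymentId: z.string(), query: z.string().optional() },
  async ({ deploymentId, query }) => ({
    content: [{ type: "text", text: await fetchLogs(deploymentId, query) }],
  })
);

// Serve over stdio so any MCP-capable agent can connect.
await server.connect(new StdioServerTransport());
```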
Designer's Takeaway: Consider how AI agents might interact with your design systems and documentation. Design APIs and interfaces that can serve both human users and AI agents effectively.
Pattern: Human-in-the-Loop
Today's Takeaway
AI interfaces are becoming more transparent and collaborative
This week's updates show a clear trend toward making AI systems more transparent through comparison tools and more collaborative through enhanced agent-to-system communication. Rather than replacing human expertise, these tools are designed to augment human capabilities and provide clearer insights into AI decision-making.
Want to learn more about the patterns mentioned today?
Explore All 28 Patterns → | 🔍 Try the Audit Tool → | 📰 Read Past Editions → | ✏️ Read on Medium → | ⭐ Star on GitHub →