What Works (and Doesn’t) When Building AI Products
- Jessica Hall

In our session at the Global Agility + Innovation Summit, a data scientist (that’s Pri) and a product manager (that’s me) broke down what actually changes when you build AI products, what still holds true, and how to make smarter decisions along the way.
What Needs to Change When Building AI Products
1. Monitoring and Continuous Evaluation Are Critical
Unlike most traditional software, AI features need ongoing evaluation after launch because model behavior can shift even when the code doesn't.
Invest early in telemetry, monitoring, and feedback loops to track performance, model drift, and degradation (see the sketch after this list).
V1 Trap: Teams often overestimate early performance and skip monitoring. Don’t.
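To make the monitoring point concrete, here is a minimal sketch of one common drift check: comparing a live window of model scores against a reference window with the Population Stability Index (PSI). The function name, threshold, and synthetic data are illustrative assumptions, not the tooling from our talk.

```python
# A minimal drift-check sketch: compare live model scores against a reference
# window using the Population Stability Index (PSI). Illustrative only.
import numpy as np

def population_stability_index(reference, live, bins=10):
    """Rough PSI between two score samples; higher means more drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    live_counts, _ = np.histogram(live, bins=edges)
    # Convert to proportions, with a small floor to avoid divide-by-zero.
    ref_pct = np.clip(ref_counts / max(ref_counts.sum(), 1), 1e-6, None)
    live_pct = np.clip(live_counts / max(live_counts.sum(), 1), 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

# Stand-in data: scores captured at launch vs. scores from this week's traffic.
reference_scores = np.random.beta(2, 5, size=5_000)
live_scores = np.random.beta(2, 3, size=5_000)
psi = population_stability_index(reference_scores, live_scores)

# A common rule of thumb: PSI above 0.2 is worth an alert and a closer look.
if psi > 0.2:
    print(f"Possible drift detected (PSI={psi:.3f}); review the model.")
else:
    print(f"Distributions look stable (PSI={psi:.3f}).")
```

The specific metric matters less than the habit: pick a signal, log it from day one, and alert on it.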
2. Probabilistic Nature Requires a New Mindset
AI outputs are not deterministic—they're influenced by context and often have multiple valid answers.
Teams must grow comfortable with uncertainty and open-ended results.
Build fallback logic, success ranges, and user control into your features.
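As one way to picture fallback logic and user control, here is a minimal Python sketch. The Suggestion type, confidence floor, and response shape are illustrative assumptions, not a prescribed design.

```python
# A minimal sketch of fallback logic for a probabilistic feature: auto-suggest
# only above a confidence floor, otherwise fall back to deterministic behavior.
from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str
    confidence: float  # 0.0 to 1.0, as reported by the model

CONFIDENCE_FLOOR = 0.7  # the "success range": below this, don't auto-apply

def respond(suggestion: Suggestion, rule_based_default: str) -> dict:
    """Decide how to present an AI output while keeping the user in control."""
    if suggestion.confidence >= CONFIDENCE_FLOOR:
        # Confident enough to show, but the user can still accept or edit it.
        return {"mode": "suggest", "text": suggestion.text, "editable": True}
    # Low confidence: fall back to deterministic behavior instead of guessing.
    return {"mode": "fallback", "text": rule_based_default, "editable": True}

print(respond(Suggestion("Reply: 'Sounds good, see you at 3.'", 0.85), "No suggestion"))
print(respond(Suggestion("Reply: 'Purple monkey dishwasher.'", 0.31), "No suggestion"))
```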
3. ML Teams Don’t Fit into 2-Week Sprints
ML work often requires exploration, data gathering, and experimentation—things that don’t map cleanly to agile sprint cycles.
Bring ML teams in early for strategy and research—not just late-stage implementation.
Long-tail iteration is a feature, not a bug.
4. MVPs Don’t Work the Same Way
Traditional MVPs fall short for AI features; you can't always fake intelligence.
Instead, prototype with Wizard-of-Oz tests, rules-based logic, or small models to validate before scaling.
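For example, a rules-based stand-in can answer the "is this even useful?" question before any model exists. The keyword rules and ticket-routing scenario below are made up for illustration; a Wizard-of-Oz test would replace the function body with a human.

```python
# A minimal rules-based stand-in used to validate a use case before training
# a model. The keywords and queues are invented for illustration.
RULES = {
    "refund": "billing",
    "charge": "billing",
    "password": "account",
    "login": "account",
    "crash": "bug_report",
}

def route_ticket(text: str) -> str:
    """Pretend-'AI' ticket router: first keyword match wins, else a default."""
    lowered = text.lower()
    for keyword, queue in RULES.items():
        if keyword in lowered:
            return queue
    return "general"  # fallback queue when no rule applies

print(route_ticket("I was double charged for my subscription"))  # billing
print(route_ticket("The app crashes when I open settings"))      # bug_report
```

If users find the stand-in's behavior valuable, that's a signal the real model is worth building; if not, no amount of ML will rescue the concept.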
What Still Works When Building AI Products
1. You Still Need a Real Problem
AI doesn’t make a weak idea better.
Validated, specific user pain points are non-negotiable. If it doesn’t pass the “so what?” test, don’t build it.
2. Prototyping and Feedback Loops Matter
As with any other product, user feedback is gold.
Test early and often, even before committing to building an actual model.
3. Cross-Functional Collaboration Is Key
Success still depends on tight alignment between product, engineering, design, and now ML.
ML should be a thought partner, not a service team.
Red flag: “We’ll bring them in later.”
What You Should Be Doing
Start with Strategy
What’s the business goal?
How does AI help achieve it?
Economic reality = fewer, smarter bets.
Checklist:
Clear business and customer value
Focused use case
Data feasibility
Risk analysis
Define Use Cases Before Writing Code
Use the Objective → Use Case → Experience → AI Feasibility framework.
Validate with data, not just instinct.
Checklist:
Clear input/output
Early data signals and KPIs
What “good” looks like
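One lightweight way to pin down "what good looks like" is to write it as a tiny eval set before any model exists. The summarization scenario, cases, and pass threshold below are illustrative assumptions, not a standard recipe.

```python
# A minimal sketch of writing down "what good looks like" as a tiny eval set
# for a hypothetical summarization feature. Cases and threshold are invented.
EVAL_CASES = [
    {"input": "Meeting moved from 2pm to 4pm on Friday.",
     "must_include": ["4pm", "Friday"]},
    {"input": "Invoice #1043 is overdue by 12 days.",
     "must_include": ["overdue"]},
]

def passes(output: str, case: dict) -> bool:
    """A case passes if the output keeps every must-include fact."""
    return all(term.lower() in output.lower() for term in case["must_include"])

def score(generate) -> float:
    """Run a candidate system (model, rules, or a human) over the eval set."""
    results = [passes(generate(c["input"]), c) for c in EVAL_CASES]
    return sum(results) / len(results)

# "Good" for v1 might mean: at least 90% of cases keep their key facts.
baseline = score(lambda text: text[:20])  # naive truncation baseline
print(f"Baseline pass rate: {baseline:.0%}")
```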
Engage ML Teams Thoughtfully
Involve data scientists from ideation to implementation.
Avoid the “handoff” model—treat ML as part of the core team.
Checklist:
Clear roles and collaboration points
Feedback from ML team early and often
Build a Feedback Loop
Treat AI products like operational systems, not just code deployments.
Monitor and iterate continuously post-launch.
Checklist:
Tooling for feedback and performance monitoring
Plan for long-term support and improvement
Use Existing Tools
Apply information architecture (IA) tools, such as card sorts and metadata taxonomies, to support structure.
Invest in data quality and governance early.
Checklist:
Use proto-ontologies and personas
Agile, test-driven development
Resources
Introduction to ML and AI - MFML Part 1 (Cassie Kozyrkov)
7 Reasons Why Most AI Projects Never Make It to Production (Jan Van Looy)
Your AI Product Needs Evals (Hamel Husain)
LLM Evaluation: Everything You Need To Run, Benchmark LLM Evals (Aparna Dhinakaran and Ilya Reznik on Arize)
All about LLM Evals (Christmas Carol on Medium)
The definitive guide to AI / ML monitoring (Mona Labs)