AI Urgency: Adapting to Change in B2B Tech
- Oct 2, 2025
- 4 min read
Updated: Jan 20
AI urgency and economic uncertainty are pushing teams and leaders to adapt quickly. This talk for Agile New England explores how AI is reshaping not just our tools but also our teams, workflows, and what it means to lead. Thank you for having me!
What’s Changing in the Landscape
The macro environment is shifting. High interest rates, layoffs, and restructuring are putting pressure on teams to do more with less. Inside organizations, AI is becoming a teammate, not just a tool.
Roles are evolving, and what’s valuable is changing. It’s not just about building; it’s about defining, validating, and collaborating with machines.
The “ICEO” (AI-powered CEO) is emerging. Leaders are becoming more hands-on, often diving into the technical details to ensure AI work gets done right.
What Makes a Good AI Use Case?
A strong use case delivers real value to both the business and the customer. It uses unique data, starts with a measurable baseline, and solves problems that can only be addressed with AI.
If traditional software can do the job, or if the task requires absolute precision, it’s probably not a good fit.
The Importance of Unique Data
Unique data is what separates a durable AI application from a commodity one. Proprietary datasets enable tailored solutions that generic models and traditional methods can't provide, and they set the stage for genuine innovation and efficiency.
Measuring Success
To know whether your AI use case is working, start with a clear, measurable baseline: record your current numbers before AI enters the workflow, so you can quantify its impact on your operations later.
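Capturing a baseline can be as simple as a script that records today's numbers before any AI is introduced, then computes the lift afterward. A minimal sketch in Python, using hypothetical metrics (resolution time, error rate) as stand-ins for whatever your team actually measures:

```python
# Minimal sketch of capturing a pre-AI baseline (metrics are illustrative).
from statistics import mean

def capture_metrics(resolution_hours: list[float], errors: int, total: int) -> dict:
    """Record the numbers you will later compare AI-assisted work against."""
    return {
        "avg_resolution_hours": mean(resolution_hours),
        "error_rate": errors / total,
        "sample_size": total,
    }

def lift(baseline: dict, current: dict, metric: str) -> float:
    """Relative improvement over baseline (positive = better, for lower-is-better metrics)."""
    return (baseline[metric] - current[metric]) / baseline[metric]

baseline = capture_metrics([4.0, 6.0, 5.0, 9.0], errors=12, total=100)
after_ai = capture_metrics([2.0, 3.0, 2.5, 4.5], errors=9, total=100)
print(f"Resolution time improved {lift(baseline, after_ai, 'avg_resolution_hours'):.0%}")
```

The point is not the arithmetic; it's that "impact" is only a claim you can defend if the before-numbers were written down first.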
How We Build at OpsCanvas
At OpsCanvas, we use AI to eliminate cloud waste, improve visibility, and orchestrate smarter infrastructure decisions. Here’s what we’ve learned:
AI agents are powerful but need scoped, precise prompts.
Productivity can jump 2–3x, but validation still takes time.
Small, clear requests work better than broad prompts.
Human-in-the-loop review is essential.
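In practice, "human-in-the-loop" can start as a lightweight gate that holds AI output until a reviewer signs off. A minimal sketch of that pattern; all names here are illustrative, not OpsCanvas internals:

```python
# Minimal human-in-the-loop gate: agent output is held until a reviewer approves it.
# Class and field names are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Suggestion:
    prompt: str            # the scoped, precise request sent to the agent
    output: str            # what the agent produced
    approved: bool = False
    reviewer: str = ""

class ReviewQueue:
    def __init__(self) -> None:
        self.pending: list[Suggestion] = []
        self.shipped: list[Suggestion] = []

    def submit(self, prompt: str, output: str) -> Suggestion:
        s = Suggestion(prompt, output)
        self.pending.append(s)
        return s

    def approve(self, s: Suggestion, reviewer: str) -> None:
        s.approved, s.reviewer = True, reviewer
        self.pending.remove(s)
        self.shipped.append(s)  # only reviewed output reaches production

queue = ReviewQueue()
s = queue.submit("Resize idle staging instances", "plan: 3 resources to change")
queue.approve(s, reviewer="jess")
```

The design choice is that nothing moves from `pending` to `shipped` without a named reviewer attached, which keeps accountability even when the agent did most of the work.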
The Role of Human Oversight
Even with advanced AI, human oversight remains critical. It ensures that the technology aligns with business goals and maintains ethical standards.
The New Must-Have for a PM: Evals
Product managers now need to be skilled in evaluating AI performance. This means measuring outputs for bias, accuracy, and value. Monitoring and feedback loops are essential to ensure AI remains consistent, safe, and useful over time.
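An eval can start as nothing more than a small golden set plus a scoring function. A minimal sketch, where `model_answer` is a hypothetical stand-in for your real model call:

```python
# Minimal eval harness: score model outputs against a golden set.
# `model_answer` is a hypothetical stand-in for a real model call.

def model_answer(question: str) -> str:
    canned = {"What is 2+2?": "4", "Capital of France?": "Paris"}
    return canned.get(question, "I don't know")

GOLDEN_SET = [
    {"question": "What is 2+2?", "expected": "4"},
    {"question": "Capital of France?", "expected": "Paris"},
    {"question": "Largest ocean?", "expected": "Pacific"},
]

def run_eval(answer_fn, golden) -> dict:
    """Exact-match accuracy; real evals add rubric or LLM-as-judge scoring."""
    results = [answer_fn(case["question"]) == case["expected"] for case in golden]
    return {"accuracy": sum(results) / len(results), "failures": results.count(False)}

report = run_eval(model_answer, GOLDEN_SET)  # accuracy is 2/3 here
```

Run on every model or prompt change, this becomes the feedback loop that keeps AI consistent, safe, and useful over time.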
Building a Culture of Continuous Improvement
Establishing a culture of continuous improvement is vital. Regular evaluations help teams adapt and refine their AI strategies, ensuring they stay relevant and effective.
The AI Builder Playbook
Start with Strategy: Define value clearly for both the business and the customer.
Validate the Use Case: Ensure it meets the criteria for a strong AI application.
Data and Baseline: Make sure you have the data, a measurable baseline, and a clear picture of what success looks like.
Prototype and Test: Use scrappy methods like mockups or light models to test quickly and learn fast.
Keep Humans in the Loop: Design for learning, not perfection.
Embracing Flexibility
Flexibility is key in the AI landscape. As technology evolves, so should your strategies and approaches.
Key Takeaway: The Future of Teams
AI won’t replace your team. But teams that use AI well will replace those that don’t. Embracing AI as a collaborative partner can unlock new levels of performance and innovation.
If you want to explore how to integrate AI into your processes, feel free to email me at jess@hallwaystudio.com or book a call if you want me to talk to your team.
Slides
Worksheet
Citations
The Future of Jobs Report 2025 (World Economic Forum)
How We Restructured Airtable’s Entire Org for AI | Howie Liu (Lenny's Podcast)
LinkedIn posts on faster time to exploit
AI Assisted Coding Security Risks from Bay Tech Consulting
AI Fluency (Anthropic)
AI for Everyone (Coursera)
AI Transformation Playbook (Andrew Ng)
Working Backwards: Insights, Stories, and Secrets from Inside Amazon (Colin Bryar and Bill Carr)
The Flywheel Effect (Jim Collins)
Resources
Introduction to ML and AI - MFML Part 1 (Cassie Kozyrkov)
Advice for Finding AI Use Cases (Cassie Kozyrkov)
7 Reasons Why Most AI Projects Never Make It to Production (Jan Van Looy)
Your AI Product Needs Evals (Hamel Husain)
LLM Evaluation: Everything You Need to Run, Benchmark LLM Evals (Aparna Dhinakaran and Ilya Reznik on Arize)
All About LLM Evals (Christmas Carol on Medium)
The Definitive Guide to AI / ML Monitoring (Mona Labs)
Tech at Work: What GenAI Means for Companies Right Now (HBR IdeaCast featuring Ethan Mollick)