RAG Office Hours: Preview and Next Cohort
Our next cohort is coming up. Enroll today and get access to live office hours, hands-on reviews, and practical guidance you can use the same week.
This comprehensive FAQ is compiled from all office hours sessions across multiple cohorts.
Quick Navigation
Use your browser's search (Ctrl+F) to find specific terms or questions, or browse through the questions below.
I find this to be a pretty interesting topic because I personally believe that coding agents are operating at the frontier of agentic RAG systems.
The world of autonomous coding agents is rapidly evolving, with fundamental disagreements emerging about the best approaches to building reliable, high-performance systems. This Lightning Series brings together the minds behind some of the most successful coding agents—from SWE-Bench champions to billion-dollar products—to debate the core architectural decisions shaping the future of AI-powered development.
If you just want to sign up, you'll have to visit every single tab, open the links, and sign up for each one.
RAG in the Age of Agents: SWE-Bench as a Case Study from Colin Flaherty of Augment Code
Lessons on Retrieval for Autonomous Coding Agents from Nik Pash of Cline
Why Devin Does Not Use Multi-Agents from Walden Yan of Cognition AI
These are all just notes from a 30-minute conversation I had with somebody. A fun little exercise, as you will see.
When people ask me for a hot take, here's mine: more agent and AI tools should price on outcomes and try hard to figure out what that means. This aligns with my broader thoughts on pricing AI tools as headcount alternatives.
The question hit me personally as a small investor in Lovable and a consultant focused on value-based pricing: why am I not building my consulting business, my courses, and my job board on Lovable instead of spreading them across Stripe, Maven, Circle, Kit, and Podia? It's because I could only possibly pay $100/month, and for that price, they could not possibly offer me the features I need.
I hosted a Lightning Lesson with Skylar Payne, an experienced AI practitioner who's worked at companies like Google and LinkedIn over the past decade. Skylar shared valuable insights on common RAG (Retrieval-Augmented Generation) anti-patterns he's observed across multiple client engagements, providing practical advice for improving AI systems through better data handling, retrieval, and evaluation practices.
tl;dr: You should build a system that lets you discover value before you commit resources.
Key Takeaways
Before asking what to build, start with a simple chatbot to discover what users are interested in. There's no need to reach for a complex agent or workflow before we see real user demand.
Leverage tools like Kura to understand broad user behavior patterns. The sooner we start collecting real user data, the better.
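The cheapest way to act on both takeaways is to start logging conversations from day one. Below is a minimal sketch, assuming chat turns are appended to a local JSONL file; the `log_turn` helper and file path are hypothetical, and the resulting records are the kind of raw conversation data a tool like Kura can later cluster into behavior patterns.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical location for raw conversation logs.
LOG_PATH = Path("chat_logs.jsonl")

def log_turn(conversation_id: str, role: str, content: str) -> None:
    """Append a single chat turn as one JSON line for later analysis."""
    record = {
        "conversation_id": conversation_id,
        "role": role,  # "user" or "assistant"
        "content": content,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record one user/assistant exchange.
log_turn("conv-001", "user", "How do I export my invoices?")
log_turn("conv-001", "assistant", "You can export invoices from Settings > Billing.")
```

One JSON object per line keeps the format append-only and trivially streamable, so you can start collecting data before you know what analysis you'll run on it.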
This week, I had conversations with several VPs of AI at large enterprises, and I kept hearing the same story: XX teams experimenting with AI; a CEO demanding results before the next board meeting, sales conference, or quarterly review; and no clear path from pilot to production.
These conversations happen because I built improvingrag.com, a resource that helps teams build better RAG systems, which has led me into many conversations with researchers, engineers, and executives. But the questions aren't about RAG techniques. They're about strategy: "How do we go from experiments to production?" "How do we know what to invest in?" "How do we show ROI?"
I hosted a lightning lesson featuring Ben from Raindrop and Sid from Oleve to discuss AI monitoring, production testing, and data analysis frameworks. This session explored how to effectively identify issues in AI systems, implement structured monitoring, and develop frameworks for improving AI products based on real user data.
Today I spoke to an executive about SaaS products, and they told me something that shifted my perspective entirely: AI agents need to be compared to budgets that companies draw from headcount, not tooling.
This is one of those insights that seems obvious in retrospect, but completely changes how you think about positioning AI tools in today's market—especially in this era of widespread tech layoffs and economic uncertainty.
The world of RAG evaluation feels needlessly complex. Everyone's building frameworks, creating metrics, and generating dashboards that make you feel like you need a PhD just to know if your system is working.
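To make that concrete, here's the level of evaluation that's usually enough to start with: plain recall@k computed over a handful of hand-labeled queries. This is an illustrative sketch, not taken from any framework; the function name and document IDs are made up.

```python
def recall_at_k(retrieved_ids: list[str], relevant_ids: set[str], k: int = 5) -> float:
    """Fraction of the relevant documents that appear in the top-k retrieved results."""
    if not relevant_ids:
        return 0.0
    hits = relevant_ids.intersection(retrieved_ids[:k])
    return len(hits) / len(relevant_ids)

# Example: the retriever found 2 of the 3 relevant documents in its top 5.
print(recall_at_k(["d1", "d9", "d3", "d4", "d7"], {"d1", "d3", "d8"}))  # ~0.67
```

A single number like this, tracked over a few dozen labeled queries, tells you whether retrieval is improving long before you need a dashboard.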
I'll be hosting industry experts to share practical techniques for enhancing your Retrieval Augmented Generation (RAG) systems.