Sunsetting 567 Labs and Open Sourcing the Course Content
I am sunsetting 567 Labs. The courses close today, February 2, 2026.
I write about applied AI, open source, personal work, and building with LLMs.
If you want hands-on help, I recommend reaching out to my friend Nila: nila.is. Please mention you came from me.
- **Only 6 Evals**: that's all you need.
- **Predictions** explores where RAG is heading.
- **Improving RAG** is the systematic process I use.
- **Levels of Complexity** breaks down implementation layers.
- **Systematically Improving** goes beyond error tracking.

- **The Flywheel** creates self-reinforcing improvement cycles.
- **Terrible RAG** catalogs what not to do.
- **Low-Hanging Fruit** covers quick wins that often outperform complex changes.
- **What Is RAG** covers the fundamentals.
- **More Than Embeddings** shows how query understanding changes everything.

- **Decomposition** breaks complex queries into simpler parts.
- **Authority** explains why learning-to-rank beats pure embedding search.
- **Search Metrics** measures quality.
- **Anti-Patterns** documents failures I've seen across industries.

- **Common Errors** catalogs mistakes in probabilistic systems.
- **Fine-Tuning Foot Guns** covers what not to do.
- **Enterprise Embeddings** shows how Glean builds custom models.
- **Data Illiteracy** explains common mistakes.
- **Hard Truths** draws on watching companies burn millions.

- **Effective Communication** moves beyond vague updates.
- **Leading Teams** connects work to business value.
- **AI Standups** explains why AI is applied research, not engineering.
- **Hiring MLEs** explains when to hire (spoiler: later than you think).
- **LLM Observability** uses OpenTelemetry, not fancy tools.
- **Data Flywheel** builds self-improving systems.

- **Beyond Chunks** shows how faceted search gives agents peripheral vision.
- **Rapid Prototyping** validates ideas with folder tests.
- **Slash vs Subagents** solves context pollution: same capability, dramatically different economics.
- **Compaction Experiments** explores using compaction as momentum.
- **Agent Frameworks** helps choose autonomy levels.

- **Grep Beats Embeddings** (from Augment) shows simple tools win.
- **No Multi-Agents** explains the telephone game effect.
- **Stopped Using RAG** (from Cline) makes the case for direct exploration.
- **Rethinking RAG** (from Sourcegraph) inverts chat-era assumptions.

- **AI MVPs** focuses on what 80% means.
- **Pricing Agents** compares to headcount, not tooling budgets.
- **Revenue Sharing** explores outcome-based pricing.

- **Advice** for those starting out.
- **Losing My Hands** forced reinvention.
- **Learning to Learn** spans pottery, weightlifting, jiu jitsu, and Rocket League.
- **Things** is a running list of what I'm using.
Some links may include affiliate attribution. Recommendations are based on personal use.
Over the past two decades I went from sharing a bed with my parents in the unfinished basement they rented from a Canadian family to doing quite well for myself. This is all the stuff I use, plan to use, and what's on my upgrade roadmap. Each item includes why it works for me.
This past spring, two senior engineers at different companies received the same challenge from their CEOs: "We need to move faster. Use AI to get there."
I hosted a series of conversations with the teams behind the most successful coding agents in the industry—Cognition (Devin), Sourcegraph (Amp), Cline, and Augment. Coding agents are the most economically viable agents today—they're generating real revenue, being used daily by professional developers, and solving actual business problems at scale.
This makes them incredibly important to study. While other agent applications remain largely experimental, coding agents have crossed the chasm into production use. The patterns and principles these teams discovered aren't just theoretical—they're battle-tested insights from systems processing millions of real-world tasks.
This series captures those hard-won lessons, revealing what works and what doesn't when building agents that actually deliver economic value.
Related Series
**Context Engineering Series**: Technical implementation patterns for agentic RAG systems, including tool response design, context management, and system architecture. This Speaker Series provides strategic insights, while Context Engineering offers implementation details.
**[RAG Master Series](./rag-series-index.md)**: Comprehensive guide to retrieval-augmented generation systems. Many coding agent insights (like why simple approaches beat complex ones) apply directly to RAG system design and optimization.
Retrieval-Augmented Generation (RAG) has become the foundation of modern AI applications that need to access and reason about external knowledge. This comprehensive series distills years of consulting experience helping companies build, improve, and scale RAG systems in production.
RAG systems are fundamentally different from other AI applications: they combine the complexity of information retrieval with the unpredictability of language generation. This series provides a systematic approach to mastering both aspects, from basic implementations to enterprise-grade systems serving millions of users.
This guide covers everything from fundamental concepts to advanced optimization techniques, anti-patterns to avoid, and real-world case studies from successful deployments across industries.
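To make that combination of retrieval and generation concrete, here is a minimal sketch of the retrieve-then-generate loop. `embed` and `llm` are stand-ins for whatever embedding model and LLM client you use; this illustrates the pattern, not any specific system covered in the series.

```python
import numpy as np

def retrieve(query_vec, doc_vecs, docs, k=3):
    # Retrieval half: rank documents by cosine similarity
    # (doc_vecs assumed row-normalized).
    sims = doc_vecs @ (query_vec / np.linalg.norm(query_vec))
    return [docs[i] for i in np.argsort(-sims)[:k]]

def answer(query, embed, llm, docs, doc_vecs):
    # Generation half: ground the LLM on what retrieval returned.
    context = "\n\n".join(retrieve(embed(query), doc_vecs, docs))
    return llm(
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
```

Everything in the series builds on variations of this loop: better retrieval, better grounding, and better measurement of both.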
I hosted a session featuring Chris Lovejoy, Head of Clinical AI at Anterior, who shared valuable insights from his experience building AI agents for specialized industries. Chris brings a unique perspective as a former medical doctor who transitioned to AI, working across healthcare, education, recruiting, and retail sectors.
I hosted a special session with Anton from ChromaDB to discuss their latest technical research on text chunking for RAG applications. This session covers the fundamentals of chunking strategies, evaluation methods, and practical tips for improving retrieval performance in your AI systems.
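For reference, the baseline that most chunking comparisons start from is fixed-size chunks with overlap. A minimal sketch (my illustration, not Chroma's code):

```python
def chunk_text(text, chunk_size=800, overlap=100):
    """Baseline fixed-size chunker with character-based overlap.

    A common starting point when evaluating chunking strategies;
    the overlap preserves context across chunk boundaries.
    """
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap
    return chunks
```

The session explores when this baseline is good enough and when semantic or structure-aware strategies measurably improve retrieval.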
I hosted Colin Flaherty, previously a founding engineer at Augment and co-author of Meta's Cicero AI, to discuss autonomous coding agents and retrieval systems. This session explores how agentic approaches are transforming traditional RAG systems, what we can learn from state-of-the-art coding agents, and how these insights might apply to other domains.
I hosted a session with Walden Yan, co-founder and CPO of Cognition, to explore why multi-agent systems might not be the optimal approach for coding contexts. We discussed the theory of context engineering, the challenges of context passing between agents, and how single agents with proper context management can often outperform multi-agent setups.
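To make the single-agent idea concrete, here is a minimal sketch, with `llm` and `tools` as hypothetical callables; this is my illustration of the shape of the argument, not Cognition's code: one continuous context, compacted rather than split across agents.

```python
def run_single_agent(task, llm, tools, max_context=8000):
    history = [f"Task: {task}"]
    while True:
        context = "\n".join(history)
        if len(context) > max_context:
            # Compaction: summarize older turns so decisions stay
            # grounded in one continuous context, instead of being
            # re-explained to a fresh subagent (the telephone game).
            summary = llm("Summarize the progress so far:\n" + context)
            history = [f"Task: {task}", f"Progress summary: {summary}"]
            context = "\n".join(history)
        action = llm(context + "\nNext action (or DONE):")
        if action.strip() == "DONE":
            return history
        history.append(f"Action: {action}")
        history.append(f"Result: {tools(action)}")
```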
I hosted a session with Kelly Hong from Chroma, who presented her research on generative benchmarking for retrieval systems. She explained how to create custom evaluation sets from your own data to better test embedding models and retrieval pipelines, addressing the limitations of standard benchmarks like MTEB.
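The core mechanic, as I understand it: generate a query from each chunk of your own data, then check whether retrieval finds the chunk it came from. A minimal sketch with hypothetical `llm` and `retrieve` callables; Chroma's actual pipeline adds filtering and alignment steps not shown here.

```python
def build_eval_set(chunks, llm):
    # Each chunk yields one synthetic query; the chunk it was
    # generated from is treated as the relevant result.
    return [(llm(f"Write a question answered by this passage:\n{c}"), i)
            for i, c in enumerate(chunks)]

def recall_at_k(eval_set, retrieve, k=5):
    # Fraction of synthetic queries whose source chunk appears
    # among the top-k retrieved chunk indices.
    hits = sum(gold in retrieve(query, k) for query, gold in eval_set)
    return hits / len(eval_set)
```

Because the evaluation set is built from your own corpus, it measures the retrieval quality you actually care about, rather than performance on a generic benchmark like MTEB.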