Writing and Mumblings

Why Google Search Sucks for AI (Will Bryk, Exa)

I hosted a session with Will Bryk from Exa, who shared insights on the evolution of search technology, how AI is changing search requirements, and the technical challenges of building a semantic search engine. The session explores how traditional search engines like Google differ from next-generation semantic search systems designed for AI applications rather than human users.

Rethinking RAG Architecture for the Age of Agents - Beyang Liu (Sourcegraph)

I hosted a session with Beyang Liu, CTO of Sourcegraph, to explore how the evolution of AI models has fundamentally changed how we should approach building agent systems. The discussion revealed why many best practices from the chat-LLM era are becoming obsolete, and why effective agents require rethinking context management, tool design, and model selection from first principles.

How Should We Choose Agent Frameworks and Form Factors?

This is part of the Context Engineering Series. I'm focusing on agent frameworks because understanding form factors and complexity levels is essential before building any agentic system.

What Do We Actually Mean When We Say We Want to Build Agents?

Field note from a conversation with my friend Nila, who helps companies navigate AI implementation decisions: nila.is. Nila focuses on implementations and workflows; I focus on writing down strategy and execution patterns on this blog.

When companies say they want to build agents, I focus on practical outcomes. What specific functionality do you need? What business value are you trying to create?

How Do We Prototype Agents Rapidly?

This is part of the Context Engineering Series. I'm focusing on rapid prototyping because testing agent viability quickly is essential for good context engineering decisions.

If your boss is asking you to "explore agents," start here. This methodology will give you evidence in days, not quarters.

Most teams waste months building agent frameworks before they know if their idea actually works. There's a faster way: use Claude Code as your testing harness to validate agent concepts without writing orchestration code.
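As a rough illustration of that harness idea (not code from the post itself), here is a minimal sketch that drives Claude Code non-interactively from Python. It assumes the `claude` CLI is installed and supports print mode (`claude -p`); the triage task, file paths, and output file below are placeholders.

```python
# Minimal sketch: run one agent trial through Claude Code instead of building
# orchestration code. Assumes the `claude` CLI is on PATH and supports `-p`
# (print mode). The task prompt and file names are hypothetical examples.
import subprocess

TASK = (
    "Read the support tickets in ./tickets/, classify each as bug, billing, "
    "or feature request, and write the results to triage.csv."
)

def run_prototype(task: str) -> str:
    """Run a single non-interactive Claude Code session and return its output."""
    result = subprocess.run(
        ["claude", "-p", task],  # print mode: execute the task, then exit
        capture_output=True,
        text=True,
        timeout=600,
    )
    return result.stdout

if __name__ == "__main__":
    print(run_prototype(TASK))
    # Inspect triage.csv by hand (or diff it against a labeled sample) to judge
    # whether the agent concept is viable before investing in a framework.
```

The point is the shape of the loop: hand the agent a real task, inspect the artifact it produces, and only then decide whether a dedicated framework is worth building.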

Context Engineering Series for Agentic RAG Systems

I've been helping companies build agentic RAG systems and studying coding agents from Cognition, Claude Code, Cursor, and others. These coding agents are likely creating a trillion-dollar industry—making them the most economically viable agents to date.

This series shares what I've learned from these teams and conversations with professional developers using these systems daily, exploring what we can apply to other industries.

If you want hands-on help, I recommend reaching out to my friend Nila: nila.is. Please mention you came from me.

Related Series

Coding Agents Speaker Series: Deep insights from the teams behind leading coding agents, including Cognition (Devin), Sourcegraph (Amp), Cline, and Augment. While this Context Engineering series focuses on technical implementation patterns, the Speaker Series reveals strategic insights and architectural decisions.

RAG Master Series: Comprehensive guide to building and scaling retrieval-augmented generation systems. Context Engineering principles directly enhance RAG implementations—structured tool responses and faceted search are foundational RAG optimization techniques.