Tobi Lütke, Shopify's founder, built and published a set of practices and tooling for getting dramatically better results from AI coding agents. The observation that's spreading: you don't need a new model to get better outputs. You need to set up your environment correctly.
What the Tool Actually Is
Without going into implementation details: it's a structured approach to feeding your coding agent the right context, in the right format, at the right time.
The insight behind it is simple but overlooked: agents don't fail because they're not smart enough. They fail because they don't have the context they need to make good decisions. The "tool" is essentially a system for solving that problem.
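To make the idea concrete, here is a minimal sketch of deliberate context assembly. Everything in it is my own illustration — the file names (`CONVENTIONS.md`) and structure are assumptions, not details of Shopify's actual tooling:

```python
from pathlib import Path

def assemble_context(repo_root: str, task: str, relevant_paths: list[str]) -> str:
    """Build one structured prompt: project conventions first, then only
    the files the task touches, then the task itself. (Illustrative only.)"""
    root = Path(repo_root)
    sections = []

    # 1. Project-wide conventions, if the repo defines them (name is hypothetical).
    conventions = root / "CONVENTIONS.md"
    if conventions.exists():
        sections.append("## Project conventions\n" + conventions.read_text())

    # 2. Only the files relevant to this task -- not the whole repo.
    for rel in relevant_paths:
        p = root / rel
        if p.exists():
            sections.append(f"## File: {rel}\n" + p.read_text())

    # 3. The task comes last, so it is read with the context already in mind.
    sections.append("## Task\n" + task)
    return "\n\n".join(sections)
```

The point isn't this particular function — it's that context is selected and ordered on purpose rather than pasted in ad hoc.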
Why It's Surprising This Came from Shopify
Anthropic, OpenAI, and the other major AI labs are all trying to improve agent capabilities by building better models and better scaffolding. The CEO of an e-commerce company isn't an obvious source of state-of-the-art agent tooling.
But it makes sense when you think about it. Shopify has thousands of engineers and a massive codebase. If your agents are wasting time on bad context or producing inconsistent output, the cost adds up fast. Lütke's team has clearly invested in solving this problem at scale.
The Takeaway for Solo Founders
You don't need a team of ML engineers to benefit from this. The core principles — structured context, clear constraints, well-defined scope — apply to any coding agent workflow.
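Those three principles can be sketched as a task spec — a hypothetical structure of my own, not anything from Shopify's published tooling, just one way to make each principle explicit before handing work to an agent:

```python
from dataclasses import dataclass, field

@dataclass
class AgentTask:
    goal: str  # well-defined scope: one concrete outcome
    context: list[str] = field(default_factory=list)      # structured context
    constraints: list[str] = field(default_factory=list)  # clear constraints

    def to_prompt(self) -> str:
        """Render the spec as a prompt the agent can't easily misread."""
        lines = [f"Goal: {self.goal}", "", "Context:"]
        lines += [f"- {c}" for c in self.context]
        lines += ["", "Constraints (do not violate):"]
        lines += [f"- {c}" for c in self.constraints]
        return "\n".join(lines)
```

Writing the spec forces you to do the thinking the agent can't do for you: if you can't fill in the constraints, the agent will invent its own.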
If you're using Claude Code (or any other coding agent) and you're frustrated by inconsistent output, the problem is almost never the model. It's the setup. Fix the setup.
This is what separates people who get 3x productivity from coding agents from those who get 10-20x: not the model they're using, but the environment they're running it in.