AI and documentation
Like many devs, I used to say “the code is the documentation,” but the rise of AI coding agents like Cursor and Copilot has really changed my perspective.
In the past, documentation and comments were static information in a codebase that was always evolving. The expressiveness of the language, type system and tests, combined with team processes that encourage knowledge sharing, let us capture most constraints directly in the code, without needing much extra documentation. By comparison, documentation was often a separate set of documents, easy to forget about, sometimes unknown, and often out of date when you finally needed it. In short, documentation used to be a cost: sometimes necessary, but rarely a tool.
So what do AI agents change?
Quite simply, documentation is no longer “dead”: agents can actually use it to generate code that fits your codebase standards. And when the docs are out of date, it gets noticed quickly because the generated code stops working as expected.
In other words, the fact that AI agents can use our documentation means we get more direct feedback on its quality, and we can iterate on it much faster.
Example: Cursor’s “rules”
Cursor’s rules are a great example of this new approach, integrating documentation directly into the tool and the development process. For those who don’t have this kind of feature built into their IDE:
- You can use Markdown files directly in your codebase
- If your documentation needs to live outside the code (Notion, Confluence, etc.), you can use an MCP to bring it back into your favorite tool
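As a sketch of what this looks like in practice, here is a minimal Cursor rule file. Cursor reads `.mdc` files from a `.cursor/rules/` directory, with frontmatter controlling when the rule applies; the exact field names and the conventions described in the body (like the `Result` helper) are illustrative assumptions, so check Cursor’s current docs for the real format:

```markdown
---
description: API error-handling conventions
globs: src/api/**/*.ts
alwaysApply: false
---

# Error handling

- Wrap calls to external services in our `Result` helper; never throw
  raw exceptions from route handlers.
- Log errors with the request ID so traces can be correlated.
```

Because the agent reads this file on every matching request, an outdated rule shows up immediately as generated code that violates your actual conventions, which is exactly the feedback loop described above.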
Ultimately, this shift in the role of documentation is really about the DX of our AI agents: how can we make their job easier, so they can make ours easier in return?
Further Reading
Here are two reads I enjoyed that dig deeper into this topic: