I posted flutterdocs_mcp to pub.dev. It is a development tool that wraps the offline Flutter/Dart API documentation in an MCP server that agents can search and navigate. There is also a complementary agent skill (see Best Practices in the pub.dev Example tab).
The offline documentation itself is preprocessed and stored in a sqlite3 database file, as detailed in the README. This makes it easy and fast for agents to perform a full-text search across all libraries and eliminates fetch and conversion delays. The preprocessing also makes the documentation easier for agents to navigate and consume.
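To illustrate the idea, here is a minimal sketch of full-text search over preprocessed documentation using SQLite's FTS5 extension. The schema (`library`, `symbol`, `body`) and the sample rows are hypothetical, not the actual flutterdocs_mcp database layout; it's shown in Python since its standard library bundles sqlite3.

```python
import sqlite3

# Hypothetical schema for illustration only -- not the real
# flutterdocs_mcp layout. FTS5 indexes every column for matching.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(library, symbol, body)")
conn.executemany(
    "INSERT INTO docs VALUES (?, ?, ?)",
    [
        ("widgets", "StatefulWidget", "A widget that has mutable state."),
        ("widgets", "StatelessWidget", "A widget that does not require mutable state."),
        ("material", "Scaffold", "Implements the basic Material Design visual layout."),
    ],
)

def search(query: str) -> list[tuple[str, str]]:
    """Full-text search across all libraries, best matches first."""
    rows = conn.execute(
        "SELECT library, symbol FROM docs WHERE docs MATCH ? ORDER BY rank",
        (query,),
    )
    return rows.fetchall()

print(search("mutable state"))
```

Because the index is built once, ahead of time, a query like this returns immediately instead of paying per-request fetch and HTML-to-text conversion costs.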
The only agent host I have used it with is GitHub Copilot in VS Code, albeit with a variety of models. But MCP is a standard, so I would expect similar results with other agent hosts (Claude, Codex, etc.). I have also used it with MCP Inspector, but that's purely an MCP server testing tool.
With LLMs being released and updated at a rapid rate, it's an open question how much having the most up-to-date documentation improves the performance of AI assistants. I am interested in doing some quantifiable A/B testing, versus the ad hoc testing I've done to date, and would welcome ideas (or first-hand experiences) on how best to accomplish this.
Cheers, Steve