For any of you interested in the Agent2Agent (A2A) protocol, I have published a package that implements an SDK containing the client, CLI client, and server components.
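If you just want to see what's on the wire, A2A is JSON-RPC 2.0 over HTTP, so you can poke an agent without the package at all. A rough hand-rolled sketch (the endpoint, port, and IDs here are made up for illustration; the package's client wraps all of this for you):

```dart
// Hand-rolled A2A request over plain HTTP, no SDK involved.
// The endpoint, port and IDs are made up for illustration.
import 'dart:convert';

import 'package:http/http.dart' as http;

Future<void> main() async {
  // A2A is JSON-RPC 2.0; 'message/send' delivers a user message to the agent.
  final request = {
    'jsonrpc': '2.0',
    'id': 1,
    'method': 'message/send',
    'params': {
      'message': {
        'role': 'user',
        'messageId': 'msg-1',
        'parts': [
          {'kind': 'text', 'text': 'Hello agent'},
        ],
      },
    },
  };

  final response = await http.post(
    Uri.parse('http://localhost:4000/'),
    headers: {'Content-Type': 'application/json'},
    body: jsonEncode(request),
  );
  print(jsonDecode(response.body));
}
```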
That is super interesting! Have you spent any time building something beyond hello world? Maybe an architecture agent giving feedback to a coding agent in real time?
I’m going to port a couple of samples from the A2A samples directory to the Dart samples directory that are a bit more representative, with the aim of seeing what implementers of the executor method want. Hand coding every external API call and task update event is going to become tedious, so the SDK needs to be updated with utility functions that help with this; something like the sketch below.
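To give a flavour of what I mean, here's a hypothetical helper. None of these types exist in the a2a package today; it's just the shape I'm after:

```dart
// Hypothetical sketch of the kind of utility I mean; nothing here is in
// the a2a package today. The idea: wrap an executor body so the routine
// 'working' / 'completed' / 'failed' task status events are emitted for
// you instead of being hand coded in every executor.

/// Assumed interface over whatever publishes A2A task status update events.
abstract class TaskUpdater {
  Future<void> working();
  Future<void> complete(String result);
  Future<void> fail(String error);
}

/// Runs [body], bracketing it with the usual status updates.
Future<void> runWithStatusUpdates(
  TaskUpdater updater,
  Future<String> Function() body,
) async {
  await updater.working();
  try {
    await updater.complete(await body());
  } catch (e) {
    await updater.fail('$e');
  }
}
```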
As I’m doing this I’ll be thinking of agents I would like to see implemented, along with, of course, any suggestions.
Please raise issues on the samples repo outlining the details of any agents you would like to see implemented.
Great, just the kind of thing I’m looking for: something that can do the heavy lifting of communicating with LLMs/external agents.
I don’t want to simply re-implement the existing samples in Dart; I’m looking to greatly simplify the construction of executors by using packages like this.
I’ll get back to you on how I get on, thanks for the offer.
Finally got round to using dartantic_ai to create an agent. I’ve run into a slight problem (or misunderstanding on my part) with Ollama; I’ve raised this issue to explain it.
For context, the agent sends the same prompt to two LLMs and shows both responses to the user so they can judge which LLM is better. I’m using my local Ollama setup to do this.
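For anyone wanting to reproduce the setup without dartantic_ai, the comparison boils down to something like this against Ollama's REST API (the model names are just examples; any two models you have pulled will do):

```dart
// Send one prompt to two local Ollama models and print both answers.
import 'dart:convert';

import 'package:http/http.dart' as http;

Future<String> ask(String model, String prompt) async {
  // Ollama's generate endpoint; stream: false returns a single JSON object.
  final response = await http.post(
    Uri.parse('http://localhost:11434/api/generate'),
    headers: {'Content-Type': 'application/json'},
    body: jsonEncode({'model': model, 'prompt': prompt, 'stream': false}),
  );
  return (jsonDecode(response.body) as Map<String, dynamic>)['response']
      as String;
}

Future<void> main() async {
  const prompt = 'Explain the A2A protocol in one sentence.';
  // Same prompt to both models; the user judges which answer is better.
  final answers =
      await Future.wait([ask('llama3.2', prompt), ask('gemma2', prompt)]);
  print('llama3.2: ${answers[0]}\n');
  print('gemma2: ${answers[1]}');
}
```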
OK, there are now two sample agents in the samples repo. I’ll add more to this as we go.
Both use dartantic_ai for model interactions, which works well.
Next up is an A2A to MCP bridge to allow A2A agents to interact with MCP servers. I’ve found one written in Python; there’s no reason Dart can’t have its own version to interact with the Dart MCP server.
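The discovery half of the bridge is conceptually simple: read the agent card an A2A agent serves at its well-known path and surface each declared skill as an MCP tool. A rough sketch with hypothetical types (the real implementation will differ and would use proper MCP SDK types):

```dart
// Map an A2A agent's advertised skills to MCP tool definitions.
// McpTool is a stand-in type, not from any real MCP package.
import 'dart:convert';

import 'package:http/http.dart' as http;

/// Minimal stand-in for an MCP tool definition.
class McpTool {
  McpTool(this.name, this.description);

  final String name;
  final String description;
}

/// Fetch the agent card served at the A2A well-known path and surface
/// each declared skill as a tool the MCP side can advertise.
Future<List<McpTool>> toolsFromAgentCard(Uri agentBase) async {
  final body =
      (await http.get(agentBase.resolve('/.well-known/agent.json'))).body;
  final card = jsonDecode(body) as Map<String, dynamic>;
  return [
    for (final skill in (card['skills'] as List<dynamic>? ?? []))
      McpTool(
        skill['id'] as String,
        skill['description'] as String? ?? '',
      ),
  ];
}
```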
BTW, the ContentEditor sample surprised me slightly: it does proofread/polish your input content, but it will also generate code if it thinks it can. See the README for details. I wasn’t expecting this as it’s not prompted to do so specifically; perhaps LLMs just like generating code.
This is very cool. I love the abstraction and discovery layer in A2A that allows an orchestrator to dynamically configure a set of prompt-specific agents and coordinate their communication on the fly. Very cool.
The MCP Bridge is now complete and in the a2a package at version 2.4.0.
It supports a decent level of integration between MCP-aware AI assistants and A2A agents.
Attached is a screenshot showing the Gemini CLI client using the Movie Agent A2A sample to answer questions about movies. See the package docs for more details.
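If you want to try it yourself, Gemini CLI picks up MCP servers from its .gemini/settings.json file; a registration for the bridge would look something like this (the command and args below are illustrative only, see the package docs for the actual invocation):

```json
{
  "mcpServers": {
    "a2a-mcp-bridge": {
      "command": "dart",
      "args": ["run", "a2a:mcp_bridge"]
    }
  }
}
```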
I’ve added an MQTT Gateway Agent and a corresponding MQTT Gateway Bridge MCP server to the samples repo.
The implementation is deliberately simple at the moment, using MQTT V3.1.1 on the standard unencrypted broker interface.
This opens up agentic AI to the world of MQTT, allowing simple topic-based text interchange with MQTT devices. These range from server-based MQTT implementations down to highly constrained IoT devices, letting a wide range of platforms and devices integrate with AI tooling functionality.
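On the device side the interchange is plain MQTT publish/subscribe. A sketch using the mqtt_client package against a local broker on the standard port (the topic names here are illustrative, not the gateway's actual topics):

```dart
// A device-side sketch using the mqtt_client package: MQTT V3.1.1 on the
// standard unencrypted port (1883). Topic names are illustrative only.
import 'package:mqtt_client/mqtt_client.dart';
import 'package:mqtt_client/mqtt_server_client.dart';

Future<void> main() async {
  final client = MqttServerClient('localhost', 'gateway-demo-device');
  client.setProtocolV311(); // match the gateway's protocol level
  await client.connect();

  // Listen for the agent's replies on the gateway's response topic.
  client.subscribe('agents/replies', MqttQos.atLeastOnce);
  client.updates!.listen((messages) {
    final publish = messages[0].payload as MqttPublishMessage;
    print(MqttPublishPayload.bytesToStringAsString(publish.payload.message));
  });

  // Publish a plain-text question for the gateway to forward to the agent.
  final builder = MqttClientPayloadBuilder()
    ..addString('What is the temperature in the greenhouse?');
  client.publishMessage(
      'agents/requests', MqttQos.atLeastOnce, builder.payload!);
}
```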