feat(a2a_client): expose extension metadata in a2a_send_message (#398)
prashant1rana wants to merge 1 commit into strands-agents:main
Conversation
Hi, can I ask what the use case is? The way you are adding the metadata right now, it's another parameter for the LLM to fill in. Is that the desired behavior? Would it be better to get it through tool_context?
I was considering an approach like this on the client side to have greater control over the metadata.
If no metadata is provided, I think we could auto-detect it as a fallback.
My use case involves passing a portion of the metadata to the A2A sub-agents, which can be solved directly using tool_context.
So, I have done a bit more research (see below). The concern now is that once we add this tool, it can break anyone who uses it: the LLM can hallucinate metadata, and it will be sent to the A2A servers, with unintended consequences. I think we should move to the tool context method only, unless there is an actual need to generate metadata from the LLM, and even then I'd consider it risky for our core SDK, because it might impact other developers who already use the tool.
I think we are aligned in the sense that we want devs to have greater control over metadata, but this approach gives the control to the LLM by default, and that's risky.

Agent investigation: A2A metadata hallucination risk analysis

What the PR does: Adds an optional metadata parameter to a2a_send_message.

What A2A metadata actually is: In the A2A protocol spec, metadata on MessageSendParams is an optional free-form JSON object for extension data.

Your core question: if the LLM hallucinates metadata, would it break stuff? Short answer: it won't break the protocol, but it's a real concern. Here's why:
Bottom line: Hallucinated metadata won't break the wire protocol (it's just JSON in an optional field), but it can cause incorrect behavior on servers that actually consume that metadata. The safer pattern is what you suggested: source metadata from tool_context.
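To make that concrete, here is a rough sketch of where the field sits on the wire. The message/part field names (`role`, `parts`, `messageId`) are assumptions based on the A2A spec, not taken from this PR:

```python
import json

# Hypothetical message/send params; structure assumed from the A2A spec.
params = {
    "message": {
        "role": "user",
        "parts": [{"kind": "text", "text": "hello"}],
        "messageId": "123",
    },
    # Optional free-form object: servers that ignore it are unaffected,
    # but servers that consume it act on whatever appears here, which is
    # why LLM-fabricated values are a semantic risk, not a protocol one.
    "metadata": {"tenant": "acme"},
}
print(json.dumps(params, indent=2))
```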
mkmeral
left a comment
We should not allow the LLM to send metadata by default, as it can impact other developers. I am leaving the review open for now to make sure we do not unintentionally merge this.
Thanks for calling this out; hallucinated metadata could cause real issues. Let me revise the PR.
src/strands_tools/a2a_client.py
Outdated
```python
# Extract metadata from tool_context only
metadata = None
if tool_context is not None:
    metadata = tool_context.get("invocation_state", {}).get("metadata")
```
nit: I'd call it a2a_tool_metadata (a2a_metadata, or something similar) to make it clearer, since invocation state is passed to all tools. A generic name could cause conflicts.
Makes sense, I will update metadata -> a2a_tool_metadata.
Just following up here, I think there was a misunderstanding, sorry about that. I meant replace
```python
metadata = tool_context.get("invocation_state", {}).get("metadata")
```

with

```python
metadata = tool_context.get("invocation_state", {}).get("a2a_tool_metadata")
```
The main reason is that invocation state is a shared place and metadata is too generic. So the usage would actually be

```python
agent("Use A2A to solve my problem", metadata=my_a2a_metadata)
# the problem here is "metadata" is too generic and can cause collisions,
# especially if other tools want to use the same logic/key
```

vs

```python
agent("Use A2A to solve my problem", a2a_tool_metadata=my_a2a_metadata)
```
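To illustrate the collision concern, a minimal sketch with plain dicts standing in for Strands' shared invocation state (the dict shape is an assumption, as are both tool functions):

```python
# invocation_state is shared by every tool in an agent invocation, so a
# generic key like "metadata" has no clear owner.
invocation_state = {
    "metadata": {"owner": "ambiguous"},        # generic: any tool may read it
    "a2a_tool_metadata": {"trace_id": "abc"},  # namespaced: clearly the A2A tool's
}

def a2a_tool(state):
    # With a namespaced key, the A2A tool only ever sees its own metadata.
    return state.get("a2a_tool_metadata")

def some_other_tool(state):
    # A tool reading the generic key silently picks up values that may
    # have been intended for a different tool.
    return state.get("metadata")

print(a2a_tool(invocation_state))         # {'trace_id': 'abc'}
print(some_other_tool(invocation_state))  # {'owner': 'ambiguous'}
```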
src/strands_tools/a2a_client.py
Outdated
```
target_agent_url: The exact URL of the target A2A agent
    (user-provided URL or from a2a_list_discovered_agents)
message_id: Optional message ID for tracking (generates UUID if not provided)
tool_context: Tool execution context (automatically injected by Strands
```
small nit: I wouldn't add it to the docstring. When a tool spec is not explicitly provided, Strands defaults to using the docstring as the tool description, so you'd be injecting "tool context" as a term the model sees. Not the end of the world, but it might be an unnecessary distraction
(and worst case, it can cause hallucinations if the app dev also has the concept of tool context available to the model).
We should still document the a2a metadata that's passed through invocation state though.
We should still document the a2a metadata that's passed through invocation state though
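One way to do that, sketched here as a hypothetical docstring (the wording and signature are mine, not from the PR), documents the invocation-state metadata for app devs without naming tool_context as a parameter; whether this note belongs in the docstring or in separate docs is a judgment call, given the docstring can itself become the model-facing description:

```python
def a2a_send_message(message, target_agent_url, message_id=None):
    """Send a message to a remote A2A agent.

    Args:
        message: Text to deliver to the target agent.
        target_agent_url: The exact URL of the target A2A agent.
        message_id: Optional message ID for tracking.

    Developer note: request metadata is read from invocation state under
    the "a2a_tool_metadata" key, e.g.
    agent("...", a2a_tool_metadata={...}), and attached to the outgoing
    request's optional metadata field.
    """
    ...
```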
Agreed — even if tool_context is not exposed as an argument, the tool description alone could lead to hallucinations by the Agent
…llucination

Remove metadata as an LLM-facing parameter and source it exclusively from tool_context to prevent the LLM from hallucinating metadata values that could cause issues on remote A2A servers.

- Add @tool(context=True) to exclude tool_context from the LLM schema
- Remove the metadata parameter from the a2a_send_message signature
- Extract metadata from the tool_context.invocation_state dict
- Simplify extraction logic using dict.get()
- Update tests to mock invocation_state as a dict
- All 36 tests passing
Summary
Pass A2A request metadata to sub-agents via tool_context instead of exposing it as an LLM parameter.
Motivation
The A2A protocol's MessageSendParams.metadata field enables passing metadata between agents.
The original approach exposed this as an LLM-facing parameter, allowing the model to fabricate values and send them to remote servers.
While hallucinated metadata won't break the protocol (it's optional JSON), it creates semantic risks when remote servers interpret fabricated values.
Solution
Use @tool(context=True) to hide tool_context from the LLM schema and extract metadata transparently.
How it works: the app dev passes a2a_tool_metadata when invoking the agent, it flows into invocation_state, and the tool reads it from the injected context.
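A minimal sketch of that extraction flow, assuming (as the diff suggests) that the injected context is dict-like with an `invocation_state` key; this stands in for the real Strands injection machinery rather than reproducing it:

```python
def extract_a2a_metadata(tool_context):
    """Pull developer-supplied metadata out of the injected context.

    Returns None when no context or no a2a_tool_metadata key is present,
    so the outgoing A2A request simply omits the optional field.
    """
    if tool_context is None:
        return None
    return tool_context.get("invocation_state", {}).get("a2a_tool_metadata")

# The LLM never sees this value; the app dev supplies it at invocation
# time, e.g. agent("...", a2a_tool_metadata={"tenant": "acme"}).
ctx = {"invocation_state": {"a2a_tool_metadata": {"tenant": "acme"}}}
print(extract_a2a_metadata(ctx))   # {'tenant': 'acme'}
print(extract_a2a_metadata(None))  # None
```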
Testing