Replies: 1 comment · 1 reply

Reply:
I don’t have an environment to test Mistral, so I can’t confirm it, but it seems possible by creating the appropriate …
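If the truncated reply above is pointing at a custom request pipeline policy, the idea could be sketched roughly as below. This is an untested sketch, not a confirmed fix: it assumes the Azure.AI.OpenAI v2 client stack built on System.ClientModel, and the `ModelQueryPolicy` class name and `"Mistral-3B"` value are hypothetical.

```csharp
using System.ClientModel.Primitives;
using Azure.AI.OpenAI;

// Hypothetical policy that appends a "model" query parameter to every
// outgoing request, so an Azure OpenAI-style client can address a
// non-OpenAI (e.g. Mistral) deployment.
public sealed class ModelQueryPolicy : PipelinePolicy
{
    private readonly string _model;
    public ModelQueryPolicy(string model) => _model = model;

    public override void Process(PipelineMessage message,
        IReadOnlyList<PipelinePolicy> pipeline, int currentIndex)
    {
        AddModelQuery(message);
        ProcessNext(message, pipeline, currentIndex);
    }

    public override async ValueTask ProcessAsync(PipelineMessage message,
        IReadOnlyList<PipelinePolicy> pipeline, int currentIndex)
    {
        AddModelQuery(message);
        await ProcessNextAsync(message, pipeline, currentIndex).ConfigureAwait(false);
    }

    private void AddModelQuery(PipelineMessage message)
    {
        var uri = message.Request.Uri;
        if (uri is not null && !uri.Query.Contains("model="))
        {
            // Uri.Query already includes the leading "?" when non-empty.
            var separator = string.IsNullOrEmpty(uri.Query) ? "?" : "&";
            message.Request.Uri = new Uri(uri + separator + "model=" + _model);
        }
    }
}

// Registration (deployment name is illustrative):
// var clientOptions = new AzureOpenAIClientOptions();
// clientOptions.AddPolicy(new ModelQueryPolicy("Mistral-3B"), PipelinePosition.PerCall);
```

Whether the Foundry endpoint actually honors a `model` query parameter for Mistral deployments would need to be verified against the target URI shown in the portal.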
Question: How do we invoke non-OpenAI (Mistral) deployments with Microsoft Agent Framework on Azure AI Foundry?
Hi team,
We’ve integrated the Microsoft Agent Framework with Azure AI Foundry, and OpenAI-based deployments (GPT‑4o, GPT‑4o Mini) are working perfectly. However, when we switch to our Mistral deployment the request fails with a 404 `DeploymentNotFound` error.

Context

- `AgentFactory` → Microsoft Agent Framework (`AIAgent`).
- `AzureOpenAIOptions` captures each deployment (endpoint, API key, display name, and an optional `Model` value for non-OpenAI providers).
- `AzureOpenAIClientProvider` creates an `AzureOpenAIClient` per model at startup, so we reuse the heavy client and fail fast on bad keys/endpoints.
- For Mistral we also attempted to set `ChatClientAgentOptions.ChatOptions = new ChatOptions { ModelId = modelConfig.Model }`, but the same 404 occurs.

Error Details
Azure portal shows the deployment’s target URI and explicitly states:
What we’ve tried
- Verified the deployment is healthy (`Provisioning state: Succeeded`) and accessible via the Azure AI Studio playground.
- Set `ChatOptions.ModelId = "Mistral-3B"` when creating the agent.
- Called the chat client directly (`await chatClient.CompleteAsync(...)`) with the same result.

Question

- How do we set the `model` query parameter (or another property) so the generated request includes `model=Mistral-3B`?
- Is any special handling required for deployments served from `*.services.ai.azure.com`?

Any guidance or sample code would be greatly appreciated. Thank you!
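For reference, the per-model wiring and the direct probe described above look roughly like the sketch below. This is not our exact code: the `ModelConfig` record is illustrative, `AsIChatClient` comes from the Microsoft.Extensions.AI.OpenAI package, and `CompleteAsync` is the preview-era method name (renamed `GetResponseAsync` in later Microsoft.Extensions.AI releases).

```csharp
using System.ClientModel;
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI;

// Illustrative config shape; our real AzureOpenAIOptions carries the same fields.
public sealed record ModelConfig(string Endpoint, string ApiKey,
                                 string DeploymentName, string? Model);

public static class MistralProbe
{
    public static async Task<string> RunAsync(ModelConfig cfg)
    {
        // One heavy AzureOpenAIClient per model, created at startup.
        var azureClient = new AzureOpenAIClient(
            new Uri(cfg.Endpoint),
            new ApiKeyCredential(cfg.ApiKey));

        // Works for GPT-4o / GPT-4o Mini; 404s (DeploymentNotFound) for Mistral.
        IChatClient chatClient = azureClient
            .GetChatClient(cfg.DeploymentName)
            .AsIChatClient();

        // Setting ModelId here did not change the outcome for us.
        var response = await chatClient.CompleteAsync(
            [new ChatMessage(ChatRole.User, "ping")],
            new ChatOptions { ModelId = cfg.Model });

        return response.ToString();
    }
}
```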