There are cases where you might want to change the model (or even the prompt) used for a given request based on the current context. Adding a new PromptSelector plugin would let a wave evaluate the current context and select the appropriate prompt and model to use. Here are a few scenarios this would enable:
Choosing an alternate model for longer prompts. Use a low-cost model for short tasks, but switch to GPT-4 when you need the additional tokens.
Choosing an alternate model based on query complexity. You can use a classifier to select the model based on the complexity of the current query.
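The first scenario could be as simple as a token-count threshold. A minimal sketch, assuming a rough four-characters-per-token heuristic (the model names here are illustrative, not part of the proposal):

```python
def select_model(prompt: str, threshold: int = 3000) -> str:
    """Pick a model name based on an approximate token count.

    Hypothetical helper: estimates tokens as len(prompt) / 4 and
    falls back to a larger-context model past the threshold.
    """
    approx_tokens = len(prompt) // 4
    return "gpt-4" if approx_tokens > threshold else "gpt-3.5-turbo"
```

A real implementation would likely use the tokenizer for the configured model rather than a character heuristic, but the selection logic would look the same.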
There would be two default implementations provided: the DefaultPromptSelector would simply select the configured prompt every time, and the FeedbackPromptSelector would choose an alternate prompt when giving the model feedback.
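A sketch of what the plugin interface and the two default implementations might look like. The interface shape, the `Selection` container, and the `is_feedback` context key are all assumptions for illustration:

```python
from dataclasses import dataclass


@dataclass
class Selection:
    """The prompt/model pair chosen for a request."""
    prompt: str
    model: str


class PromptSelector:
    """Hypothetical plugin interface: inspect the current context
    and return the prompt and model to use for this request."""

    def select(self, context: dict) -> Selection:
        raise NotImplementedError


class DefaultPromptSelector(PromptSelector):
    """Always returns the configured prompt and model."""

    def __init__(self, prompt: str, model: str):
        self.prompt = prompt
        self.model = model

    def select(self, context: dict) -> Selection:
        return Selection(self.prompt, self.model)


class FeedbackPromptSelector(DefaultPromptSelector):
    """Switches to an alternate prompt when the context indicates
    the model is being given feedback."""

    def __init__(self, prompt: str, model: str, feedback_prompt: str):
        super().__init__(prompt, model)
        self.feedback_prompt = feedback_prompt

    def select(self, context: dict) -> Selection:
        if context.get("is_feedback"):
            return Selection(self.feedback_prompt, self.model)
        return Selection(self.prompt, self.model)
```

Keeping model selection behind the same interface means the complexity-classifier scenario is just another `PromptSelector` subclass; the wave never needs to know which strategy is active.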