feat(rig-1197): handle llama.cpp tool call (#1408)#1409

Merged
gold-silver-copper merged 11 commits into 0xPlaygrounds:main from inqode-lars:main
Apr 7, 2026
Conversation

@inqode-lars (Contributor) commented Feb 19, 2026

handle llama.cpp tool call

closes #1408

Signed-off-by: Lars Weber <lars@inqode.solutions>
@joshua-mo-143 (Contributor) left a comment:
See comments. The rest looks OK to me.

Comment on lines +413 to +424
/// llama.cpp returns tool call `arguments` as a JSON-encoded *string*
/// (e.g. `"{\"city\": \"Berlin\"}"`), while OpenAI returns a JSON object.
/// Accept both shapes by parsing the string form into a `Value`. See #1408.
fn deserialize_arguments<'de, D>(deserializer: D) -> Result<Value, D::Error>
where
    D: Deserializer<'de>,
{
    let value = Value::deserialize(deserializer)?;

    match value {
        // An empty string is not valid JSON; standardize it to `{}`.
        Value::String(s) if s.trim().is_empty() => Ok(Value::Object(serde_json::Map::new())),
        Value::String(s) => serde_json::from_str(&s).map_err(serde::de::Error::custom),
        other => Ok(other),
    }
}

Contributor
Please add a docstring to explain why this is required (it's not immediately clear to anyone who reads this in the future without referring back to this PR).

Contributor Author

Good idea. I've added a comment for that.

pub name: String,
#[serde(with = "json_utils::stringified_json")]
#[serde(
serialize_with = "json_utils::stringified_json::serialize",
Contributor

Is this correct? It seems to have been perfectly fine before, to my knowledge.

Contributor Author

For OpenAI it's fine, but with llama.cpp there is a problem. See #1408.
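A minimal illustration of the difference (hypothetical payloads, not taken from the PR): OpenAI sends `arguments` as a JSON object,

```json
{"name": "get_weather", "arguments": {"city": "Berlin"}}
```

while llama.cpp sends it as a JSON-encoded string, which a plain `Value` field would keep as a string instead of parsing into an object:

```json
{"name": "get_weather", "arguments": "{\"city\": \"Berlin\"}"}
```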

@gold-silver-copper (Contributor)

This was a good PR overall; I only made some minor changes and tacked on some extra bug fixes.
I created a shared deserialization helper between Hugging Face and OpenAI (for llama.cpp), since the code was the same.
I standardized all providers to convert `""` args to `{}`; this follows the behavior of other providers and is in line with other inference abstractions, such as vllm-project/vllm#19419.
I also added a few extra tests.

Finally, I tested this PR against a local llama.cpp instance, and it was successful.
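The `"" → {}` standardization mentioned above can be sketched as a small, dependency-free helper (an assumed shape for illustration; the real helper in this PR works on serde_json's `Value` inside the deserializer):

```rust
// llama.cpp may send tool-call arguments as "" when a tool takes no
// parameters. An empty string is not valid JSON, so providers standardize
// it to the empty object "{}" before parsing.
fn normalize_raw_arguments(raw: &str) -> String {
    if raw.trim().is_empty() {
        // Treat an empty (or whitespace-only) payload as "no arguments".
        "{}".to_string()
    } else {
        // Pass non-empty payloads through unchanged.
        raw.to_string()
    }
}

fn main() {
    assert_eq!(normalize_raw_arguments(""), "{}");
    assert_eq!(normalize_raw_arguments("{\"city\":\"Berlin\"}"), "{\"city\":\"Berlin\"}");
}
```

This keeps the downstream `serde_json::from_str` call infallible for the empty case instead of surfacing a parse error to the caller.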

@gold-silver-copper gold-silver-copper added this pull request to the merge queue Apr 7, 2026
Merged via the queue into 0xPlaygrounds:main with commit 54bbf79 Apr 7, 2026
6 checks passed

Development

Successfully merging this pull request may close these issues.

bug/feat: tool call from llama.cpp fails
